[cctalk] Re: Experience using an Altair 8800 ("Personal computer" from 70s)

2024-05-24 Thread Paul Koning via cctalk



> On May 24, 2024, at 1:26 PM, Chuck Guzis  wrote:
> 
> On 5/24/24 09:52, Paul Koning wrote:
> 
>> 
>> I once ran into a pre-WW2 data sheet (or ad?) for a transistor, indeed an 
>> FET that used selenium as the semiconducting material.  Most likely that was 
>> the Lilienfeld device.
> 
> Could also have been a device from Oskar Heil in the 1930s.

No idea.  I vaguely remember that it was French.  It was in a pile of papers in 
my father's office -- long since lost, unfortunately.

> What really made the difference in the case of transistors of any
> stripe, was the adoption of zone refining: (1951) William Gardner Pfann.
> Pfann knew Shockley and devised one of the early point-contact
> transistors, from a 1N26 diode. Zone-refining removed one of the
> bugaboos that plagued early semiconductor research--that of getting
> extremely pure material.
> 
> Pfann was a quiet, shy individual which perhaps explains why he doesn't
> get the historical applause.
> 
> Something akin to the Tesla-Steinmetz treatment.

I also remember the name Czochralski -- creator of the process that produces 
single crystals from which the wafers are sliced.

paul



[cctalk] Re: Experience using an Altair 8800 ("Personal computer" from 70s)

2024-05-24 Thread Paul Koning via cctalk



> On May 24, 2024, at 12:45 PM, Chuck Guzis via cctalk  
> wrote:
> 
> ...
> Just pointing out that "firsts" are very difficult.  Even though, for
> years, Shockley et al were trumpeted as the "inventors of the
> transistor", it's noteworthy that their patent application was carefully
> worded to avoid claims from work decades earlier by Julius Lilienfeld.
> In an interesting twist of history, it's the Lilienfeld model of a MOS
> transistor that prevails in our current technology, not the Shockley
> junction device.

I once ran into a pre-WW2 data sheet (or ad?) for a transistor, indeed an FET 
that used selenium as the semiconducting material.  Most likely that was the 
Lilienfeld device.

Apparently they didn't work well, not surprising given the use of selenium, 
which is a very marginal semiconductor.  Speaking of which: some early 
computers tried to use selenium diodes as circuit elements (for gates), with 
rather limited success.  The MC ARRA is an example.

paul



[cctalk] Re: Experience using an Altair 8800 ("Personal computer" from 70s)

2024-05-24 Thread Paul Koning via cctalk



> On May 24, 2024, at 10:40 AM, Sellam Abraham via cctalk 
>  wrote:
> 
> ...
> But it doesn't meet the other criteria Dave laid out. Most people these
> days have never heard of the Micral, but even normies might've heard of the
> Altair 8800 because of the very notoriety it has today because of its
> significance back then.

This is a familiar pattern in discovery and invention.  In many cases, X was 
first invented by A and then some time later by B.  Or "discovered" instead of 
"invented".  And often the reason A is not generally identified as the first to 
do X is that the way A did it didn't lead to something that was widely used.

For example:
Vikings were the first Europeans to discover America, but their voyages didn't 
start a major movement so Columbus usually gets the credit.

FM radio was invented by Hanso Idzerda, but his approach was a bit odd and the 
economic reasons for it disappeared some years later, so Edwin Armstrong gets 
the credit and Idzerda is pretty much forgotten.  In this case, the bias is so 
strong that attempts to revise Wikipedia to correct the history get rejected.  
:-(

paul



[cctalk] Re: C. Gordon Bell, Creator of a Personal Computer Prototype, Dies at 89

2024-05-23 Thread Paul Koning via cctalk
I have a vague memory of visiting the Computer Museum when it was still at DEC, 
in the Marlboro building (MRO-n).  About the only item I recall is a Goodyear 
STARAN computer (or piece of one).  I found it rather surprising to see a 
computer made by a tire company.  I learned years later that the STARAN is a 
very unusual architecture, sometimes called a one-bit machine.  More precisely, 
I think it's a derivative of William Shooman's "Orthogonal Computer" vector 
computer architecture, which was for a while sold by Sanders Associates where 
he worked.  

paul

> On May 23, 2024, at 5:00 PM, Kevin Anderson via cctalk 
>  wrote:
> 
> I had the good fortune of visiting The Computer Museum in Boston in the 
> summer of 1984.  Reading the museum's Wikipedia article, it appears I was 
> there while they were still freshly setting up their Museum Wharf location, 
> yet hadn't officially opened yet.  Unfortunately I only had an hour (or 
> little more) to visit before I had to return to where my wife was at a 
> different location (which I vaguely recall was at an aquarium somewhere 
> nearby?).  The clerk at the front entrance was really surprised that I was 
> leaving so soon...which in hindsight I wish now had not been so short.
> 
> Kevin Anderson
> Dubuque, Iowa



[cctalk] Re: C. Gordon Bell, Creator of a Personal Computer Prototype, Dies at 89

2024-05-22 Thread Paul Koning via cctalk



> On May 22, 2024, at 3:29 PM, Gavin Scott via cctalk  
> wrote:
> 
> On Wed, May 22, 2024 at 2:25 PM John Herron via cctalk
>  wrote:
> 
>> Out of curiosity is the book the size of a floppy disk or some computer
>> item at the time? (Any significance or just him being unique?).
> 
> Here's an Amazon listing showing what it looked like. Ordinary book
> size if not shape.
> 
> https://www.amazon.com/Computer-Structures-Readings-Examples-McGraw-Hill/dp/0070043574/

It's about as high as a typical hardcover textbook, just unusually wide.  I 
don't know of any reason other than "it's different".

As I mentioned, it is not unprecedented; I have a book about book design which 
talks at some length about choosing the page proportions, and it mentions 
square pages as one of the recognized choices.  I think it says that it isn't 
very common, but I don't remember what else it says, for example any particular 
reason why one might choose this format (or reasons to avoid it).

paul



[cctalk] Re: C. Gordon Bell, Creator of a Personal Computer Prototype, Dies at 89

2024-05-22 Thread Paul Koning via cctalk



> On May 22, 2024, at 1:19 PM, Bill Degnan via cctalk  
> wrote:
> 
> It's a slog, but if you can make it through Gordon Bell's book, "Computer
> Structures Readings and Examples" you realize Gordon is a "father of
> vintage computing", in addition to his involvement with the first computer
> museum in Boston.  He knew better than anyone the historical significance
> of computing well before the term "vintage computer" existed.

I still have that book, though it's deep in some box.

Fun trivia item: it's the only book I can remember that is square.  Almost all 
books are "portrait" layout; a few are "landscape", but while square format is 
a known option shown in book design references, it is almost unheard of.

The only other book I can think of that's nearly (but not quite) square is the 
lovely "Powers of ten".

paul




[cctalk] Re: C. Gordon Bell, Creator of a Personal Computer Prototype, Dies at 89

2024-05-22 Thread Paul Koning via cctalk



> On May 22, 2024, at 11:10 AM, Don R via cctalk  wrote:
> 
> Control-G
> 
> In one of the comments I found this interesting tidbit:
> 
> Working at DEC for many years, I learned a lot from Mr. Bell.  One of my 
> favorite sayings was him calling himself "the industry standard dummy."  Which 
> simply meant that he approached all new products without pre-conceived 
> notions of "how" it should work.  He found so many bugs and interface errors 
> that way, and taught everyone to do the same.  On old computer keyboards one 
> used to be able to make a bell ring by typing CTRL-G.  That industry 
> standard was set for G. Bell.

That's a nice story but it doesn't seem all that likely.  A "bell" code long 
predates ASCII (where it is indeed Control-G); it showed up decades earlier in 
the 5-bit "Baudot" (a.k.a. "Murray") code of the Teletype machines.

paul



[cctalk] Re: interlace [was: NTSC TV demodulator ]

2024-05-20 Thread Paul Koning via cctalk



> On May 20, 2024, at 3:40 PM, Adrian Godwin via cctalk  
> wrote:
> 
> I remember the VT100 interlace setting. Yes, it changed the signal
> generated. I don't know if it also changed the characteristics of the
> monitor but I would think not.

The Pro also has such a thing in its video card.  It doesn't touch the monitor 
as far as I can tell.  The details may be in the video gate array spec (I have 
that on paper, buried somewhere); my guess would be that it changes the 
vertical sync frequency from horizontal rate / 262 to horizontal rate / 262.5.

> It gave slightly higher resolution (the expectation would be double but the
> tube didn't have focus that good) at the cost of a horrible juddering
> display. I don't remember it being there on the later VT220.

Yes, that's just how I remember it on the Pro.

The Pro has another display feature: you can set it to 625 lines at 25 Hz (or 
half that at 50 Hz).  The US monitor handles that, somewhat to my surprise, but 
it's not a pleasant experience.

paul



[cctalk] Re: interlace [was: NTSC TV demodulator ]

2024-05-20 Thread Paul Koning via cctalk



> On May 20, 2024, at 1:37 PM, Wayne S via cctalk  wrote:
> 
> Young , hah. No i’m old 70.
> The pc monitors, not Tv, always had a setup menu. Even the Vt100 series let 
> you choose interlace if you needed. 

VT100?  I don't think so.  And yes, it has a setup menu, but that's setup of 
the terminal functionality, not the monitor part.

The earliest monitors could only handle one format.  A major innovation was 
"multisync" where the monitor would determine the horizontal and vertical sweep 
rate and line count, and display things the right way.  The first PC I owned 
had one of those, and as far as I can remember it had nothing that one would 
call a "setup menu".

The reason interlace matters is not the very slight slope of the scan line in 
analog monitors, but rather the fact that alternate frames are offset by half 
the line spacing of the basic frame, so each frame sweeps out the gaps in 
between the lines scanned by the preceding frame.  It matters to get that 
right, otherwise you're not correctly displaying consecutive rows of pixels.  
In particular, when doing scan conversion (from analog format to a digital X/Y 
pixel raster) you have to offset Y by one every other frame if interlace is 
used, but not if it isn't.
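
To make that concrete, here's a minimal C sketch of the row bookkeeping (my 
illustration, not any particular scan converter's code):

#include <stdio.h>

/* Map a scan line within the current field to a row in the output
   raster.  "field" alternates 0/1 on successive vertical syncs. */
static int raster_row(int line_in_field, int field, int interlaced)
{
    if (!interlaced)
        return line_in_field;        /* every field hits the same rows */
    /* Interlaced: one field carries the even rows, the other the odd
       rows, so consecutive raster rows come from alternating fields. */
    return 2 * line_in_field + field;
}

int main(void)
{
    for (int line = 0; line < 3; line++)
        printf("line %d: field 0 -> row %d, field 1 -> row %d\n",
               line, raster_row(line, 0, 1), raster_row(line, 1, 1));
    return 0;
}

Get the field parity wrong and every adjacent pair of rows is swapped -- 
exactly the "not correctly displaying consecutive rows" failure.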

paul




[cctalk] Re: interlace [was: NTSC TV demodulator ]

2024-05-20 Thread Paul Koning via cctalk
I think you have that backwards.

TVs use interlace.  Older PC displays may do so, or not; typically the 480 line 
format was not interlaced but there might be high resolution modes that were.  
The reason was to deal with bandwidth limitations.

Flat panel displays normally support a pile of input formats, though only the 
"native" format (the actual line count matching the display hardware) is 
directly handled, all the others involve reformatting to the native format.  
That reformatting generally results in some loss of display quality, how much 
depends on how well the relevant hardware is designed.  And interlaced formats 
are often supported not just for the VGA input (if there is one) but also for 
DVI/HDMI inputs.  To get the accurate answer you have to check the 
specification sheet.

paul

> On May 20, 2024, at 12:13 PM, CAREY SCHUG via cctalk  
> wrote:
> 
> This may have been covered before, VERY early in this thread.
> 
> I think I tried a game on a flatscreen, and had issues.  I don't know if it 
> applies to the radio shack Color Computer, the interest of the original 
> poster.
> 
> many games and entry pcs with old style tv analog format, don't interlace, 
> and tube TVs nearly all (except maybe a few late model high end ones?) are 
> fine with that, but I seem to recall that most or all digital/flat screen  
> can't deal with non-interlace.
> 
> --Carey



[cctalk] Re: Thirties techies and computing history

2024-05-20 Thread Paul Koning via cctalk



> On May 20, 2024, at 9:33 AM, Nico de Jong via cctalk  
> wrote:
> 
> 
> Den 2024-05-20 kl. 15:26 skrev Paul Koning via cctalk:
>> 
>> ...
>> I just flipped through it briefly, and spotted what was the Electrologica 
>> headquarters (page 143).  And a few pages later there is a bit of history 
>> that explains the French origin of the PR8000 (or P8000), which was where I 
>> learned assembly language programming.  Quite a neat machine but very little 
>> documentation of it still exists.
>> 
>>  paul
> 
> I have quite a lot of documentation for the P85x CPU's and other stuff. Let 
> me know what you need.
> 
> /Nico

The P85x are 16-bit machines, right?  The PR8000 is 24 bits.  The only 
documentation I have seen is what I supplied to Bitsavers.

paul

[cctalk] Re: Thirties techies and computing history

2024-05-20 Thread Paul Koning via cctalk



> On May 20, 2024, at 6:08 AM, Nico de Jong via cctalk  
> wrote:
> 
> ...
> I used to work on the P6000 series, and they had a very interesting 
> architecture. For those who want to know a bit more about Philips' history, I 
> can recommend an e-book written by one of the guys in Sweden, where the P6000 
> series was developed. The P6000 was based on the P800, but extended into a 
> system appropriate for bookings, airline reservations, banking etc.
> 
> (Link below).
> 
> The author is Mats Danielson. By the way, the James Bond film "For your eyes 
> only" shows a lot of Philips hardware. The "atomic comb" is a PTS 6272 
> keyboard with (I think) a display bolted to the back of it. Hilarious, just 
> like the book.
> 
> /Nico
> 
> ---
> Read my new history book (free e-book)
> 
> https://www.researchgate.net/publication/37427_The_Rise_and_Fall_of_Philips_Data_Systems

Nice!

I just flipped through it briefly, and spotted what was the Electrologica 
headquarters (page 143).  And a few pages later there is a bit of history that 
explains the French origin of the PR8000 (or P8000), which was where I learned 
assembly language programming.  Quite a neat machine but very little 
documentation of it still exists.

paul



[cctalk] Re: Thirties techies and computing history

2024-05-19 Thread Paul Koning via cctalk



> On May 19, 2024, at 11:14 AM, Tarek Hoteit via cctalk  
> wrote:
> 
> A friend of a friend had a birthday gathering. Everyone there was in their 
> thirties, except for myself, my wife, and our friend. Anyway, I met a Google 
> engineer, a Microsoft data scientist, an Amazon AWS recruiter (I think she 
> was a recruiter), and a few others in tech who are friends with the party 
> host. I had several conversations about computer origins, the early days of 
> computing, its importance in what we have today, and so on. What I found 
> disappointing and saddening at the same time is their utmost ignorance about 
> computing history or even early computers. ...

I don't find this very surprising.  It's just a special case of the fact that 
few young people know much about history.  And their ignorance of the 
country's history is a far more serious matter than their ignorance of the 
history of computing.

That said, I've run into a number of young people who definitely are 
interested.  On the PLATO system at Cyber1.org there are a number of them.  One 
is a CS professor who teaches a course about computer games, and has brought 
the PLATO multi-user games that date back to the 1970s into that class.  
Another is a small business tech owner/engineer who has made himself into one 
of the world's top experts on the PLATO plasma terminals -- which are older 
than he is.

paul

[cctalk] Re: Random items on Pascal #3

2024-05-16 Thread Paul Koning via cctalk



> On May 16, 2024, at 1:50 PM, Kevin Jordan  wrote:
> 
> Regarding NOS/VE and the notion that its command language was horribly 
> awkward ... the command language was strongly influenced by Multics and some 
> thinking in the Computer Science world about user-friendliness in command 
> languages being linked to predictability.  ...

It's taken a while for people to learn that languages need to be designed to 
match the environment where they are used.  Languages meant for rapid 
interaction can't be as verbose as regular programming languages.  The Unix 
shell takes that notion to extremes (as does the ITS command handler, from what 
little I know of it -- and its ancestor the PDP-10 interactive debugger).

One of my favorite examples of an interesting command language is the one on 
Burroughs mainframes, called WFL (work flow language).  It looks vaguely like 
ALGOL, for the very good reason that it is compiled into executable code that 
is run to perform the various operations you ask for, by invoking the various 
applications as "forks" and executing flow control like "if" and looping 
statements.  For ALGOL programmers, which was most of us on that system, it was 
a very comfortable setup.  Oh yes, that was a batch system, so WFL would be on 
card decks, not banged into a terminal interactively.

paul




[cctalk] Re: Papertape-Reader Decitek 442A9: need manual/schematics

2024-05-16 Thread Paul Koning via cctalk



> On May 16, 2024, at 11:22 AM, Martin Bishop via cctalk 
>  wrote:
> 
> It looks as though Decitek remain in business 
> http://www.decitek.com/index.html
> 
> Scan of a series 700 reader manual on bitsavers 
> http://www.bitsavers.org/pdf/decitek/
> 
> On an optical reader, I would not reckon the capstan running at power on as 
> unusual - a pinch roller which engages for drive and a tape clamp engaging 
> for stop motion are both common features.  For simple single byte read 
> operations, probably the paradigm used when the unit was built, it is not 
> uncommon for the sprocket hole to stop feed and energise clamp. 

There are two basic design schemes for optical tape readers: a sprocket wheel 
that engages the tape and does the start-stop motion, typically with a stepper 
motor; and a transport roller with pinch roller and brake, typically solenoid 
driven.  The DEC PC11 is an example of the former.  The latter are more likely 
to show up in high speed readers because the tape motion is continuous, which 
is easier on the tape as long as it doesn't need to stop.

I'm still amazed at the Electrologica X8 tape reader, rated at over 1000 
characters per second, and able to stop and restart without skipping a byte.

paul



[cctalk] Re: Random items on Pascal #3

2024-05-16 Thread Paul Koning via cctalk



> On May 16, 2024, at 11:08 AM, Gary Grebus via cctalk  
> wrote:
> 
> We were a beta test site for NOS/VE and the hardware (Cyber 180?).  CDC sent 
> the machine and a software support engineer to help us do something with it.  
> My one recollection was that the command language was horribly awkward, but I 
> didn't spend much time on the system.
> 
> I know there are some manuals for NOS/VE on bitsavers, but I wonder if any of 
> the software still exists?

I don't know if it does.  The other issue is that there isn't, as far as I know, 
an emulator that supports the 64 bit mode it needs.  There is of course a (very 
solid) emulator for the classic 60 bit architecture, DtCyber.

paul



[cctalk] Re: Random items on Pascal #3

2024-05-10 Thread Paul Koning via cctalk



> On May 10, 2024, at 11:16 AM, Sellam Abraham via cctalk 
>  wrote:
> 
> On Fri, May 10, 2024, 7:53 AM Chuck Guzis via cctalk 
> wrote:
> 
>> There's a third class that I haven't (yet) mentioned.  Design a machine
>> to solve a particular problem or class of problems.  Saxpy was such a
>> machine; we have bitcoin ASICs and our latest AI ventures.
>> 
>> What was the CM-1 programmed in?
>> 
>> --Chuck
>> 
> 
> Of course, there's the Manchester Baby.
> 
> Sellam

And SAGE.

paul



[cctalk] Re: Random items on Pascal #3

2024-05-10 Thread Paul Koning via cctalk



> On May 9, 2024, at 8:58 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 5/9/24 16:30, Michael Thompson wrote:
>> I have a source code tape for Pascal on a CDC 6600 from CDC in France.
>> I am not sure which version it is.
> 
> Broadly speaking, there were only three major CDC versions; the 1972
> original, the 1975 rewrite, and the (I think) 1980s version.  There were
> intermediate versions, of course.
> 
> I think that the 1975 version was widely used as a reference for many
> other implementations.
> 
> But now comes the question, "Does one design a machine to a language or
> a language to a machine?"  If you take the former course, you have the
> problem of not being able to implement features that the language
> designers didn't imagine.   In the latter case, you wind up with a
> language that isn't easily made portable.

Or neither.  "Machine to a language" can be seen in the Burroughs mainframes 
(ALGOL-60), in the IBM 360 (Fortran and COBOL) and perhaps some others.  But a 
lot of machines are not built to a particular language, certainly that's the 
case for most if not all modern machines.   Some machines might have features 
to optimize certain languages while still being quite general; the "display" 
support in the Electrologica X8 is an example, but it is just as happy running 
Fortran or LISP.

As for "language to the machine" that's pretty much unheard of.  While there 
certainly are languages that only were seen on one or a few machines or 
architectures -- SYMPL, CYBIL, BLISS, TUTOR -- it isn't because that was the 
intent of those languages.  I suppose you could pose ESPOL as an example of a 
language for a machine, though I suspect it could have been generalized, as C 
was, if there had been a desire to do so.

paul



[cctalk] Re: Random items on Pascal #3

2024-05-09 Thread Paul Koning via cctalk



> On May 9, 2024, at 7:55 PM, Fred Cisin via cctalk  
> wrote:
> 
>>> ...
>>> I've written code in Pascal, as well as Modula-2.  Never liked
>>> it--seemed to be a bit awkward for the low-level stuff that I was doing.
> 
> On Thu, 9 May 2024, Paul Koning via cctalk wrote:
>> Not surprising, since that's not what it is all about.  Both, like their 
>> predecessor ALGOL-60 as well as successors like Ada, are strongly typed 
>> languages where doing unsafe stuff is made very hard.  Contrast that with C, 
>> which sets out to make it easy to do unsafe things and partly for that 
>> reason has a feeble type system.  So doing low level stuff like device 
>> drivers is difficult, unless you create extensions to break out of the type 
>> system.  An example of how to do that is the Burroughs extension of ALGOL 
>> called ESPOL, which is what they used to write the OS.  Actually, Burroughs 
>> did a number of extended versions for different purposes; there's also 
>> DCALGOL (Data comm ALGOL) intended for writing communications software.  Why 
>> that's separate from ESPOL I don't really know; I only ever got to do 
>> regular ALGOL programming on Burroughs mainframes.  One reason for that: 
>> those systems depend on the compilers for their security; if ordinary users 
>> got access to ESPOL they could write dangerous code, but in ALGOL they 
>> cannot.
> 
> One of the things that _I_ love about C is that it is easy to get it out of 
> the way when you want to do something lower level.
> 
> Rather than feeble type system, it could have had a requirement to explicitly 
> "cast" anything being used as a "wrong" type.
> 
> One of Alan Holub's books about C is titled
> "Enough rope to shoot yourself in the foot"

True, and Stroustrup added that "and C++ is a cannon that blows off your entire 
leg".

> Each language has its own specialty.  And you need to find the one that fits 
> you best.
> 
> It used to be (and likely still is), that every computer science grad student 
> created a new language.  A requirement (usually UNSPOKEN) was that the 
> compiler be able to compile itself.  That the language compiler is written 
> (actually normally RE-written) in that language and compiled by that 
> compiler.  That certainly seems to bias things towards languages that are 
> well suited for writing compilers!  If you were to create a language that was 
> specialized for something completely different, and poorly suited for 
> writing compilers, then it would not be respected.

If you don't mind the total lack of protection, FORTH is very nice: it lets 
you do low level things even more easily than C, and it is also very small.  
And the implementation is by definition entirely extensible.  A large FORTH 
program I wrote in the 1980s, on PDP-11 FORTH, starts out by redefining the 
language as a 32-bit version.

I still remember a classmate of mine, who told me when we were both at DEC that 
he had written an expression parser in COBOL.  I think he also tried to do one 
in RPG but found it was too hard.

paul



[cctalk] Re: DOS p-System Pascal: (Was: Saga of CP/M)

2024-05-09 Thread Paul Koning via cctalk



> On May 9, 2024, at 7:05 PM, Will Cooke via cctalk  
> wrote:
> 
> 
> 
>> On 05/09/2024 5:46 PM CDT ben via cctalk  wrote:
>> 
> 
>> Did any one make a REAL TIME OS the 386?
> 
> There were / are quite a few.
> https://en.wikipedia.org/wiki/Comparison_of_real-time_operating_systems
> 
> The 386ex was specifically intended for embedded systems.
> 
> The first one that came to mind, and caused me to find that list, was RTEMS 
> that I think was originally written on/for the 386.
> https://en.wikipedia.org/wiki/RTEMS

RTEMS is still around.

I still have a uc/OS book, though I haven't used that RTOS.

paul



[cctalk] Re: Random items on Pascal #3

2024-05-09 Thread Paul Koning via cctalk



> On May 9, 2024, at 6:43 PM, Chuck Guzis via cctalk  
> wrote:
> 
> ...
> I've written code in Pascal, as well as Modula-2.  Never liked
> it--seemed to be a bit awkward for the low-level stuff that I was doing.

Not surprising, since that's not what it is all about.  Both, like their 
predecessor ALGOL-60 as well as successors like Ada, are strongly typed 
languages where doing unsafe stuff is made very hard.  Contrast that with C, 
which sets out to make it easy to do unsafe things and partly for that reason 
has a feeble type system.  So doing low level stuff like device drivers is 
difficult, unless you create extensions to break out of the type system.  An 
example of how to do that is the Burroughs extension of ALGOL called ESPOL, 
which is what they used to write the OS.  Actually, Burroughs did a number of 
extended versions for different purposes; there's also DCALGOL (Data comm 
ALGOL) intended for writing communications software.  Why that's separate from 
ESPOL I don't really know; I only ever got to do regular ALGOL programming on 
Burroughs mainframes.  One reason for that: those systems depend on the 
compilers for their security; if ordinary users got access to ESPOL they could 
write dangerous code, but in ALGOL they cannot.

paul



[cctalk] Re: FWIW CD & DVD demagnitizitation [was: Double Density 3.5" Floppy Disks]

2024-05-09 Thread Paul Koning via cctalk



> On May 9, 2024, at 9:28 AM, Alexander Schreiber via cctalk 
>  wrote:
> 
> On Wed, May 08, 2024 at 07:09:58PM -0700, Fred Cisin via cctalk wrote:
>>> More here:
>>> https://www.enjoythemusic.com/magazine/equipment/0114/audiophile_ac_outlets.htm
>>> If I knew that this stuff wasn't real, I'd figure that it was an April
>>> Fool's prank.
>> 
>> On Wed, 8 May 2024, Sellam Abraham via cctalk wrote:
>>> Why stop there?  A truly dedicated audiophile would run new pure silver
>>> electrical wire through the walls directly to the breaker box.
>>> Then you gotta upgrade to the breaker box that was disinfected from
>>> transient spirits through an exorcism, and then special 24K solid
>>> gold-contact breakers in inert nylon housings.
>> 
>> But, how much good will that do, if you don't also upgrade the drop from the
>> pole?
>> 
>> . . . and, do you know whether the electrons that you are receiving are from
>> nuclear, hydro, solar, wind, or fossil?
> 
> German snake oil wizards to the rescue! The "Atomstromfilter" (nuclear
> power filter) joke product has been making the rounds in Germany for
> at _least_ 20+y now: https://traumshop.net/produkt/atomstromfilter/
> 
> It claims to filter power generated by nuclear power plants out of
> your power flow at the wall socket ;-)

That's a wonderful joke site.  They don't make it really obvious, but products 
like a "dark LED" and "frozen hot water" are a hint.

paul




[cctalk] Re: APL (Was: BASIC

2024-05-08 Thread Paul Koning via cctalk



> On May 8, 2024, at 10:25 AM, Harald Arnesen via cctalk 
>  wrote:
> 
> Paul Koning via cctalk [07/05/2024 19.31]:
> 
>> (Then again, I had a classmate who was taking a double major: math and music 
>> composition...)
> 
> Mathemathics and music is not a rare combination - see Tom Lehrer, for 
> instance.
> -- 
> Hilsen Harald

My wife (a voice major) pointed out that instrumental music majors tend to be 
good at math; voice majors not so much.

paul



[cctalk] Re: saving old technology [was: Recordak Magnaprint Microfiche Printer Free]

2024-05-08 Thread Paul Koning via cctalk
Sure, but classic Diesel engines are purely mechanical.

paul

> On May 8, 2024, at 9:38 AM, Michael Thompson  
> wrote:
> 
> Most modern Diesel engines use a common-rail electronically controlled 
> injection system.
> 
>> On May 8, 2024, at 8:58 AM, Paul Koning via cctalk  
>> wrote:
>> 
>> 
>> 
>>> On May 8, 2024, at 7:56 AM, CAREY SCHUG via cctalk  
>>> wrote:
>>> 
>>> At a local linux meeting, the leader was disparaging any resurrection of 
>>> old technology
>>> 
>>> Anybody else reminded of the science fiction story where ethereal life 
>>> forms arrive from a distant star system after receiving our first radio 
>>> transmissions.  life that eats radio and electricity, starting with the 
>>> frequencies of our first transmissions, but then mutating(?) to all radio, 
>>> then electricity even in wires, and wiping out all communications, 
>>> vehicles, etc.  There is a desperate project to resurrect steam engines (to 
>>> build other steam engines) and breed horses.  All those steam train museums 
>>> turn out to be what saves humanity.  just now I realized..shouldn't they 
>>> also consume all the light too?  But I guess they can't go beyond 
>>> microwaves.
>> 
>> "The Waveries" by Fredric Brown, 1945.  Never mind the bit about light; the 
>> author missed the fact that Diesel engines don't need electricity, and also 
>> the fact that thunder can't happen without lightning.
>> 
>>   paul
>> 



[cctalk] Re: saving old technology [was: Recordak Magnaprint Microfiche Printer Free]

2024-05-08 Thread Paul Koning via cctalk



> On May 8, 2024, at 7:56 AM, CAREY SCHUG via cctalk  
> wrote:
> 
> At a local linux meeting, the leader was disparaging any resurrection of old 
> technology
> 
> Anybody else reminded of the science fiction story where ethereal life forms 
> arrive from a distant star system after receiving our first radio 
> transmissions.  ...

An entirely different story but also much about preserving technology is 
"Lucifer's Hammer" by Niven & Pournelle.  It's the reason I have the two-volume 
set of "The Way Things Work" on my bookshelf.

paul



[cctalk] Re: saving old technology [was: Recordak Magnaprint Microfiche Printer Free]

2024-05-08 Thread Paul Koning via cctalk



> On May 8, 2024, at 7:56 AM, CAREY SCHUG via cctalk  
> wrote:
> 
> At a local linux meeting, the leader was disparaging any resurrection of old 
> technology
> 
> Anybody else reminded of the science fiction story where ethereal life forms 
> arrive from a distant star system after receiving our first radio 
> transmissions.  life that eats radio and electricity, starting with the 
> frequencies of our first transmissions, but then mutating(?) to all radio, 
> then electricity even in wires, and wiping out all communications, vehicles, 
> etc.  There is a desperate project to resurrect steam engines (to build other 
> steam engines) and breed horses.  All those steam train museums turn out to 
> be what saves humanity.  just now I realized..shouldn't they also consume all 
> the light too?  But I guess they can't go beyond microwaves.

"The Waveries" by Fredric Brown, 1945.  Never mind the bit about light; the 
author missed the fact that Diesel engines don't need electricity, and also the 
fact that thunder can't happen without lightning.

paul



[cctalk] Re: FWIW CD & DVD demagnitizitation [was: Double Density 3.5" Floppy Disks]

2024-05-07 Thread Paul Koning via cctalk



> On May 7, 2024, at 1:15 PM, CAREY SCHUG via cctalk  
> wrote:
> 
> my ears would never be good enough to notice any difference
> 
> For what it's worth:
> 
> First, in general, there are so many apparent reviews of so many products, it 
> is hard to believe they are all scams.  How can there be enough fools to buy 
> enough of those products to have that many different ones?  I mean, it takes 
> a lot of work to develop a product; if you only sell 5, it is not worth it.  
> If you take money and don't send anything, that would show up in a google 
> search.
> 
> also, what some hinted at is the issue is even a very slight amount of 
> magnetism, spinning very fast, could affect the signal in the playback head

A CD or DVD "demagnetizer" is by definition a scam and an utter fraud.  Those 
media are non-magnetic and in any case magnetism plays no role whatsoever in 
reading them.

Also, keep in mind that something may look like a review but it's actually a 
press release, perhaps slightly warmed over.

> Do CDs and DVDs have parity and or checksums?  If you grab a CD twice, will 
> both results be identical bit for bit?  

They go way beyond checksums, using sophisticated ECC schemes.  There's a good 
reason why a scratched disk can, in most cases, be read without trouble.  I 
remember reading years ago that someone showed off the CD ECC scheme by 
drilling a hole into a CD (2-3 mm or so) -- it read just fine anyway.

So yes, unless the disk is damaged beyond the power of the ECC, it will read 
correctly every time.  And even if it does exceed the ECC, it will in most 
cases read the same, though some bits of data will be unrecoverable.  You'd 
have to go WAY beyond the ECC limits to reach the point of undetected data 
errors, i.e., a misreading of the data that the ECC doesn't catch (let alone 
correct).

That property holds for all codes, in fact.  Every one of them has a set of 
error patterns it will detect, a set it will correct, and a set it will miss.  
The design challenge for codes is (a) understand the likely error patterns it 
will be confronted with in the wild, (b) understand the required probabilities 
for (1) uncorrectable and (2) undetected errors, and (c) to create a good code 
that delivers on these requirements efficiently.  "Efficiency" is defined by 
coding overhead as well as implementation cost.
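
To make those three sets concrete, here's a toy C sketch of mine using the 
classic Hamming (7,4) code -- far simpler than the cross-interleaved 
Reed-Solomon coding a CD actually uses, but it shows all three behaviors:

#include <stdio.h>
#include <stdint.h>

/* Encode 4 data bits into a Hamming (7,4) codeword.  Bit positions
   are numbered 1..7; positions 1, 2, and 4 hold parity. */
static uint8_t ham_encode(uint8_t d)    /* d1..d4 in bits 0..3 */
{
    uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;          /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;          /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;          /* covers positions 4,5,6,7 */
    /* positions 1..7 = p1 p2 d1 p3 d2 d3 d4, kept in bits 0..6 */
    return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
           (d2 << 4) | (d3 << 5) | (d4 << 6);
}

/* The syndrome is 0 for a valid codeword; for a single bit error it
   is the 1-based position of the flipped bit. */
static int ham_syndrome(uint8_t cw)
{
    int s = 0;
    for (int pos = 1; pos <= 7; pos++)
        if ((cw >> (pos - 1)) & 1)
            s ^= pos;
    return s;
}

int main(void)
{
    uint8_t cw = ham_encode(0xB);       /* some 4-bit payload */
    printf("no error: %d\n", ham_syndrome(cw));          /* 0             */
    printf("1 flip:   %d\n", ham_syndrome(cw ^ 0x08));   /* 4: corrected  */
    printf("2 flips:  %d\n", ham_syndrome(cw ^ 0x09));   /* 5: wrong spot */
    printf("3 flips:  %d\n", ham_syndrome(cw ^ 0x07));   /* 0: missed!    */
    return 0;
}

One flipped bit is corrected; two flipped bits give a nonzero syndrome that 
points at the wrong position, so blind correction makes things worse; and a 
well-chosen three-bit pattern lands on another valid codeword and sails 
through undetected.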
> 
> https://www.gcaudio.com/tips-tricks/cd-dvd-demagnetization/
> 
> https://forum.audiogon.com/discussions/if-you-have-a-cd-player-you-need-to-do-this-periodically
> 
> At first, this SEEMS even more ludicrous, demagnetizing vinyl LPs, but the 
> pickup heads are analogue magnetic, so maybe more reasonable
> .
> https://www.canadianhifi.com/shop/analog/accessories/furutech-demag-a-lp-cd-cable-demagnetizer/

What an amazing pile of bunk.

Some years ago I was joking about the possibility that someone would sell gold 
plated fiber optic cables to suckers like that.  Imagine my surprise when, 
somewhat later, I spotted Monster fiber optic cables with gold-plated 
connectors.

paul



[cctalk] Re: APL (Was: BASIC

2024-05-07 Thread Paul Koning via cctalk



> On May 7, 2024, at 1:20 PM, Sellam Abraham via cctalk  
> wrote:
> ...
>  Thus proving to
> be complete horseshit all the educators that said if you want to get into a
> computer career you must be good at math.

Indeed.

One of the most amazing programmers I ever worked with was a graduate of the 
Berklee School of Music.  And two other quite competent computer people I know 
had Conservatory of Music degrees in piano performance.

(Then again, I had a classmate who was taking a double major: math and music 
composition...)

paul



[cctalk] Re: BASIC

2024-05-03 Thread Paul Koning via cctalk



> On May 3, 2024, at 6:22 PM, Sytse van Slooten via cctalk 
>  wrote:
> 
> And since nobody else seems to, allow me to recall:
> 
> - MINC BASIC, with all its extensions for I/O and real time events.
> 
> - MUBAS, the multi-user basic for RT-11.
> 
> And playing around with BASIC is just so much easier and more fun than 
> anything else you can do with old hardware or emulations thereof. Run a C 
> program? Sure, marvel in how much slower it is than on your desktop, phone, 
> or the MCU in your microwave. None of those will have BASIC though, and 
> certainly not the MINC extensions with the blinkenlights. And isn't that what 
> all the joy is about?

That's one reason I like FORTH: it's just as compact, perhaps more so, fast, 
and much more flexible and extensible than BASIC.

I didn't know MINC BASIC; I should compare it with the "LABBASIC" I created in 
college.  The one line description is identical; mine ran on an 11/20 with 
AD01, AD11, KW11-P and DR11-A.  The programmable clock enabled things like 
taking a vector of samples spaced at a tightly controlled time interval, or 
running bits of BASIC code from timer (or DR11) interrupts. 

paul



[cctalk] Re: BASIC

2024-05-03 Thread Paul Koning via cctalk



> On May 3, 2024, at 5:31 PM, Sean Conner via cctalk  
> wrote:
> 
> It was thus said that the Great Steve Lewis via cctalk once stated:
>> Great discussions about BASIC.   I talked about the IBM 5110 flavor of
>> BASIC last year (such as its FORM keyboard for quickly making structured
>> input forms), and recently "re-learned" that it defaults to running with
>> double-precision.  But if you use "RUNS" instead of "RUN" then the same
>> code is run using single-precision (but I haven't verified yet if that
>> translates into an actual runtime speed difference).  I think most of the
>> "street BASICs" used single precision (if they supported floats at all).
>> But speaking of Microsoft BASIC, I think Monte Davidoff is still around
>> and deserves a lot of credit for doing the floating point library in the
>> initial Microsoft BASIC (but it's a bit sad that history has lost the names
>> of individual contributors
> 
>  I think most of the "street BASICs" were written before IEEE-754 (floating
> point standard) was ratified (1985 if I recall).  Microsoft's floating point
> [1] was five bytes long---four bytes for the mantissa, and one byte for the
> exponent, biased by 129.  I did some tests a month ago whereby I tested the
> speed of the Microsoft floating point math on the 6809 (using Color Computer
> BASIC) vs. the Motorola 6839 (floating point ROM implementing IEEE-754), and
> the Microsoft version was faster [2].

BASIC-PLUS (part of RSTS) had a weird floating point history.  The original 
version, through RSTS V3, used 3-word floating point: two words mantissa, one 
word exponent.  Then, presumably to match the 11/45 FPU, in version 4A they 
switched to your choice of 2 or 4 word float, what later in the VAX era came to 
be called "F" and "D" float.

One curious thing about floating point formats of earlier computers is that 
they came with wrinkles seen in neither IEEE nor DEC float.  As I recall, 
the 360 is really hex float, not binary, with an exponent that gives a power of 
16.  CDC 6600 series mainframes used a floating point format where the mantissa 
is an integer, not a fraction, and negation is done by complementing the entire 
word.

The Electrologica X8 is yet another variation, which apparently came from an 
academic paper of the era: it treats the mantissa as an integer too, like the 
CDC 6600, but with a different normalization rule.  The 6600 does it like most 
others: shift left until all leading zeroes have been eliminated.  (It doesn't 
have a "hidden bit" as DEC did.)  But in the EL-X8, the normalization rule is 
to make the exponent as close to zero as possible without losing bits.  So an 
integer value is normalized to the actual integer with exponent zero.  And 
since there is no "excess n" bias on the exponent, the encoding of an integer 
and of the identical normalized floating point value are in fact the same.
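
For the curious, here's a quick C sketch decoding the 5-byte Microsoft format 
Sean described.  The sign living in the mantissa's top bit and the hidden 
leading 1 are my assumptions from the usual accounts of the 6502/Z80 dialects; 
details varied between BASICs:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode a 5-byte Microsoft BASIC float: one exponent byte (biased
   by 129) followed by four mantissa bytes, as described above. */
static double msbasic_decode(const uint8_t b[5])
{
    if (b[0] == 0)                   /* exponent 0 means the value 0 */
        return 0.0;
    int sign = (b[1] & 0x80) ? -1 : 1;
    /* Put back the implied leading 1 in place of the sign bit. */
    uint32_t m = ((uint32_t)(b[1] | 0x80) << 24) | ((uint32_t)b[2] << 16) |
                 ((uint32_t)b[3] << 8) | (uint32_t)b[4];
    /* m is a 1.31 fixed point fraction, hence the extra -31. */
    return sign * ldexp((double)m, b[0] - 129 - 31);
}

int main(void)
{
    const uint8_t one[5]  = { 0x81, 0x00, 0x00, 0x00, 0x00 };   /*  1.0 */
    const uint8_t mten[5] = { 0x84, 0xA0, 0x00, 0x00, 0x00 };   /* -10.0 */
    printf("%g %g\n", msbasic_decode(one), msbasic_decode(mten));
    return 0;
}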

paul




[cctalk] Re: 5,34 Petaflop System Cheyenne

2024-05-03 Thread Paul Koning via cctalk



> On May 3, 2024, at 3:27 PM, Gavin Scott via cctalk  
> wrote:
> 
> On Fri, May 3, 2024 at 1:30 PM W2HX via cctalk  wrote:
> 
>> Someone seems to want it. Bidding is at $250,000 and counting. I guess 
>> someone didn’t get the memo about getting just a few nvidia cards!
> 
> If you go to Amazon today and buy just the CPUs and RAM, that will
> cost you around 21 million dollars. So I imagine those used parts are
> still worth substantially more than the current high bid. The close of
> the auction should be interesting.

Or maybe it's just metals recycling.  If there's more than 120 ounces of gold 
in all those racks, that's $250k right there. 

paul



[cctalk] Re: APL (Was: BASIC

2024-05-02 Thread Paul Koning via cctalk


> On May 2, 2024, at 8:45 PM, Paul Koning  wrote:
> 
> Yes, it sure is.  I was mistaken about it being the first issue.  Instead, 
> the RSA article appears in Vol. 1 No. 3 (4Q80).  Too bad the article itself 
> isn't included in the scanned material.

Ah, but it does show up elsewhere: 
http://ai.eecs.umich.edu/people/conway/VLSI/ClassicDesigns/RSA/RSA.L4Q80.pdf 


> 
>   paul
> 
>> On May 2, 2024, at 8:39 PM, Lee Courtney > > wrote:
>> 
>> Paul,
>> 
>> Is this the Lambda/VLSI Design magazine you refer to:
>> 
>> Lynn Conway's VLSI Archive: Main Links (umich.edu) 
>> 
>> 
>> ?
>> 
>> Thanks!
>> 
>> Lee
>> 
>> On Thu, May 2, 2024 at 1:00 PM Paul Koning > > wrote:
>> 
>> 
>> > On May 2, 2024, at 3:50 PM, Lee Courtney via cctalk > > > wrote:
>> > 
>> > The first "professional software" I wrote (almost) out of University in
>> > 1979 was a package to emulate the mainframe APL\Plus file primitives on a
>> > CP/M APL variant. Used to facilitate porting of mainframe APL applications
>> > to microcomputers.
>> > 
>> > I'm still an APL adherent since the late 1960s, but it was probably too
>> > heavy-weight, with obstacles noted elsewhere (character-set, radical
>> > programming paradigm), to be successful in the early days of
>> > microcomputing. Although the MCM-70 was an amazing feat of technology.
>> > 
>> > Too bad because the language itself lends itself to learning by anyone with
>> > an understanding of high school algebra.
>> 
>> The one professional application APL I heard of was in a talk by Ron Rivest, 
>> at DEC around 1982 or so.  He described a custom chip he had built, a bignum 
>> ALU (512 bits) to do RSA acceleration.  The chip included a chunk of 
>> microcode, and he mentioned that the microcode store layout was done by an 
>> APL program about 500 lines long.  That raised some eyebrows...
>> 
>> Unless I lost it I still have the article somewhere: it's the cover story on 
>> the inaugural issue of "Lambda" which later became "VLSI Design", a 
>> technical journal about chip design.
>> 
>> My own exposure to APL started around 1998, when I decided to try to use it 
>> for writing cryptanalysis software.  That was for a course in cryptanalysis 
>> taught by Alex Biryukov at Technion and offered to remote students.  The 
>> particular exercise was solving an ADFGVX cipher (see "The Code Breakers", 
>> the unabridged hardcover, not the useless paperback).  It worked too, and it 
>> took less than 100 lines.
>> 
>> paul
>> 
>> 
>> 
>> 
>> -- 
>> Lee Courtney
>> +1-650-704-3934 cell
> 



[cctalk] Re: APL (Was: BASIC

2024-05-02 Thread Paul Koning via cctalk
Yes, it sure is.  I was mistaken about it being the first issue.  Instead, the 
RSA article appears in Vol. 1 No. 3 (4Q80).  Too bad the article itself isn't 
included in the scanned material.

paul

> On May 2, 2024, at 8:39 PM, Lee Courtney  wrote:
> 
> Paul,
> 
> Is this the Lambda/VLSI Design magazine you refer to:
> 
> Lynn Conway's VLSI Archive: Main Links (umich.edu) 
> 
> 
> ?
> 
> Thanks!
> 
> Lee
> 
> On Thu, May 2, 2024 at 1:00 PM Paul Koning  > wrote:
> 
> 
> > On May 2, 2024, at 3:50 PM, Lee Courtney via cctalk  > > wrote:
> > 
> > The first "professional software" I wrote (almost) out of University in
> > 1979 was a package to emulate the mainframe APL\Plus file primitives on a
> > CP/M APL variant. Used to facilitate porting of mainframe APL applications
> > to microcomputers.
> > 
> > I'm still an APL adherent since the late 1960s, but it was probably too
> > heavy-weight, with obstacles noted elsewhere (character-set, radical
> > programming paradigm), to be successful in the early days of
> > microcomputing. Although the MCM-70 was an amazing feat of technology.
> > 
> > Too bad because the language itself lends itself to learning by anyone with
> > an understanding of high school algebra.
> 
> The one professional application APL I heard of was in a talk by Ron Rivest, 
> at DEC around 1982 or so.  He described a custom chip he had built, a bignum 
> ALU (512 bits) to do RSA acceleration.  The chip included a chunk of 
> microcode, and he mentioned that the microcode store layout was done by an 
> APL program about 500 lines long.  That raised some eyebrows...
> 
> Unless I lost it I still have the article somewhere: it's the cover story on 
> the inaugural issue of "Lambda" which later became "VLSI Design", a technical 
> journal about chip design.
> 
> My own exposure to APL started around 1998, when I decided to try to use it 
> for writing cryptanalysis software.  That was for a course in cryptanalysis 
> taught by Alex Biryukov at Technion and offered to remote students.  The 
> particular exercise was solving an ADVFX cipher (see "The Code Breakers", the 
> unabridged hardcover, not the useless paperback).  It worked too, and it took 
> less than 100 lines.
> 
> paul
> 
> 
> 
> 
> -- 
> Lee Courtney
> +1-650-704-3934 cell



[cctalk] Re: BASIC

2024-05-02 Thread Paul Koning via cctalk



> On May 2, 2024, at 4:23 PM, Gordon Henderson via cctalk 
>  wrote:
> 
> ...
> I'm told Lua is the new Basic or Python is the new Basic, but the best thing 
> for me about Basic on the old micros was being able to turn the computer on 
> and type Basic into it immediately.  And to that end, I decided to 
> re-target my C Basic to a bare metal framework for the Raspberry Pi I'd 
> been working on - boots to Basic in... well, it's not as quick as an Apple II 
> or BBC Micro, but under 2 seconds. It's a bit faster on a Pi Zero as there's 
> no USB to initialise... 

That's why I've been playing with FORTH on my Raspberry Pico microcontrollers 
(Travis Bemann's "Zeptoforth" dialect, to be precise).  It's nice and compact, 
and it boots in milliseconds.  Multicore, multitasking, lots of library 
modules... nice.

paul




[cctalk] Re: APL (Was: BASIC

2024-05-02 Thread Paul Koning via cctalk



> On May 2, 2024, at 3:50 PM, Lee Courtney via cctalk  
> wrote:
> 
> The first "professional software" I wrote (almost) out of University in
> 1979 was a package to emulate the mainframe APL\Plus file primitives on a
> CP/M APL variant. Used to facilitate porting of mainframe APL applications
> to microcomputers.
> 
> I'm still an APL adherent since the late 1960s, but it was probably too
> heavy-weight, with obstacles noted elsewhere (character-set, radical
> programming paradigm), to be successful in the early days of
> microcomputing. Although the MCM-70 was an amazing feat of technology.
> 
> Too bad because the language itself lends itself to learning by anyone with
> an understanding of high school algebra.

The one professional application APL I heard of was in a talk by Ron Rivest, at 
DEC around 1982 or so.  He described a custom chip he had built, a bignum ALU 
(512 bits) to do RSA acceleration.  The chip included a chunk of microcode, and 
he mentioned that the microcode store layout was done by an APL program about 
500 lines long.  That raised some eyebrows...

Unless I lost it I still have the article somewhere: it's the cover story on 
the inaugural issue of "Lambda" which later became "VLSI Design", a technical 
journal about chip design.

My own exposure to APL started around 1998, when I decided to try to use it for 
writing cryptanalysis software.  That was for a course in cryptanalysis taught 
by Alex Biryukov at Technion and offered to remote students.  The particular 
exercise was solving an ADFGVX cipher (see "The Code Breakers", the unabridged 
hardcover, not the useless paperback).  It worked too, and it took less than 
100 lines.

paul




[cctalk] Re: BASIC

2024-05-02 Thread Paul Koning via cctalk



> On May 2, 2024, at 2:30 PM, Mike Katz via cctalk  
> wrote:
> 
> Microsoft loves to take languages developed by others and transmogrify them 
> into the "Microsoft Universe".
> 
> Quick Basic, Visual Java, Visual Basic, Visual C# (barely resembles C) and 
> the worst offender of all Visual C++ .NET.
> 
> Your post reminded me that Postscript is an actual programming language as 
> well.

It sure is.  My favorite fractal curve, the "Tree of Pythagoras" has been my 
sample graphics exercise for any number of systems with graphics I/O.  The 
PostScript version I have is only a few dozen lines long, a simple recursive 
program.  A trickier version is my original one, in FORTRAN II (no recursion).
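
For anyone who wants to play with it, here's a quick C reconstruction of the 
idea, emitting PostScript -- my sketch, not either of the versions I mentioned:

#include <stdio.h>

/* Tree of Pythagoras: each square is given by its base edge a->b; a
   45-degree right triangle sits on the top edge and carries two
   smaller squares on its legs.  Emits one PostScript path per square. */
typedef struct { double x, y; } pt;

static pt add(pt a, pt b)     { return (pt){ a.x + b.x, a.y + b.y }; }
static pt sub(pt a, pt b)     { return (pt){ a.x - b.x, a.y - b.y }; }
static pt mul(pt a, double k) { return (pt){ a.x * k, a.y * k }; }
static pt rot90(pt a)         { return (pt){ -a.y, a.x }; }

static void tree(pt a, pt b, int depth)
{
    pt up = rot90(sub(b, a));                /* base edge, rotated upward */
    pt d = add(a, up), c = add(b, up);       /* the square's top corners  */
    printf("%g %g moveto %g %g lineto %g %g lineto %g %g lineto "
           "closepath stroke\n", a.x, a.y, b.x, b.y, c.x, c.y, d.x, d.y);
    if (depth == 0)
        return;
    /* Apex of the right isosceles triangle erected on the top edge. */
    pt apex = add(mul(add(d, c), 0.5), mul(rot90(sub(c, d)), 0.5));
    tree(d, apex, depth - 1);                /* left branch  */
    tree(apex, c, depth - 1);                /* right branch */
}

int main(void)
{
    printf("%%!PS\n0.3 setlinewidth\n");
    tree((pt){ 266, 60 }, (pt){ 346, 60 }, 8);   /* 511 squares */
    printf("showpage\n");
    return 0;
}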

paul




[cctalk] Re: BASIC

2024-05-02 Thread Paul Koning via cctalk



> On May 1, 2024, at 6:44 PM, Wayne S via cctalk  wrote:
> 
> IMHO, “C” nomenclature really screwed up the equality vs assignment 
> statements.  The == made it difficult to understand especially if you came 
> from a language that didn’t have it. Basically all languages before “C”.

Well, sort of.  Some languages confused the two by using the same token -- 
BASIC is a notorious example.  ALGOL, FORTRAN, C, APL, POP-2 all solve the 
problem by using two different tokens; the only question is which of the two 
functions is marked by the "=" token.  In ALGOL, APL, and I think POP-2 it's 
equality; in FORTRAN and C it's assignment.  Either works but you have to 
remember which it is; if you use languages of each kind then you may get 
confused at times.  :-(
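
The classic trap, for anyone who hasn't been bitten yet:

#include <stdio.h>

int main(void)
{
    int x = 5;
    /* Meant "if (x == 0)".  As written this assigns 0 to x and then
       tests the assigned value, so the branch is never taken and x is
       quietly clobbered.  (Most modern compilers at least warn.) */
    if (x = 0)
        printf("x is zero\n");
    printf("x is now %d\n", x);      /* prints 0, not 5 */
    return 0;
}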

paul



[cctalk] Re: APL (Was: BASIC

2024-05-02 Thread Paul Koning via cctalk



> On May 2, 2024, at 6:55 AM, Liam Proven via cctalk  
> wrote:
> 
> On Thu, 2 May 2024 at 00:51, Fred Cisin via cctalk
>  wrote:
>> 
>> What would our world be like if the first home computers were to have had
>> APL, instead of BASIC?
> 
> To be perfectly honest I think the home computer boom wouldn't have
> happened, and it would have crashed and burned in the 1970s, with the
> result that microcomputers remained firmly under corporate control.
> 
> I have been watching the APL world with interest since I discovered it
> at university, and I still don't understand a word of it.
> 
> I've been watching Lisp for just 15 years or so and I find it unreadable too.
> 
> I think there are widely different levels of mental flexibility among
> smart humans and one person's "this just requires a small effort but
> you get so much in return!" is someone else's eternally impossible,
> unclimbable mountain.

That sounds right to me.

> After some 40 years in computers now, I still like BASIC best, with
> Fortran and Pascal very distant runners-up and everything else from C
> to Python is basically somewhere between Minoan Linear A and Linear B
> to me.

Well, Linear B isn't that hard, it's just Greek.  :-)

My guess is that the languages you use routinely are the ones that work best, 
and which languages those are depends on where you work and on what projects.  
For example, I don't *like* C (I call it a "feebly typed language"), nor C++, 
but my job uses these two plus Python.

Now Python is actually my favorite (though recently I've done a bunch of work 
in FORTH).  I like to mention that, in 50 years or so, I have only encountered 
two programming languages where I went from "no knowledge" to "wrote and 
debugged a substantial program" in only one week -- Pascal (in graduate school) 
and Python (one job ago).

paul



[cctalk] Re: BASIC

2024-05-02 Thread Paul Koning via cctalk



> On May 1, 2024, at 6:26 PM, Mike Katz via cctalk  
> wrote:
> 
> The Beginners All-purpose Symbolic Instruction Code (BASIC)
> 
> Developed by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. 
>  This ran on the Dartmouth Time Sharing System (DTSS) which was an early time 
> sharing system running on Honeywell and GE Main Frames with Datanet systems 
> running the terminal interfaces.
> 
> This system was intended to be an online code/run/debug cycle system rather 
> than a batch processing system like most Cobol and Fortran compilers were.
> 
> BASIC was actually their third language attempt to simplify the syntax of 
> languages like Fortran and Algol.
> 
> There are literally 100's of dialects of BASIC, both as compilers (as was the 
> original) and interpreters and even pseudo compilers.
> 
> Like many of us older members of this thread, some form of BASIC was our 
> "computer milk language" (our first computer language).
> 
> Some early microcomputers even wrote their operating systems in some form of 
> BASIC.
> 
> I learned basic in September of 1972 on a 4K PDP-8/L running EduSystem 10 
> Basic with time also spent at the Kiewit Computation Center at Dartmouth (as 
> a 12 year old) running Dartmouth Basic.
> 
> Let's hear your earliest introduction to BASIC.

BASIC was my fourth language, after ALGOL-60, FORTRAN-II, and Philips PR8000 
assembler.  The first version I met was BASIC-PLUS, on RSTS-11.  That's a 
compiler (to threaded code, like P-code, not to machine code).  Soon after that 
I worked on RT-11 BASIC, which is an interpreter, and modified it to be a lab 
machine control system with interrupts and analog and digital I/O.

Someone commented on "what if the first PCs had run APL".  Shortly after 
reading the famous "Tablet" paper (Stephen Wolfram and his students at U of 
Illinois) I played a bit with that notion: a tablet computer supporting APL so 
you could program quickly because it requires so few characters per unit of 
work.  The crucial miss in that concept is that PCs are not sold (primarily) to 
programmers but to application users, and for that an APL-focused machine is no 
advantage.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-29 Thread Paul Koning via cctalk



> On Apr 29, 2024, at 1:59 AM, Steve Lewis via cctalk  
> wrote:
> 
> After learning more about the PALM processor in the IBM 5100, it has a
> similarity to the 6502 in that the first 128 bytes of RAM is a "register
> file."  All its registers (R0 to R15, across 4 interrupt "layers") occupy
> those first addresses.  In addition, they are physically on the processor
> itself (not in actual RAM).  

That sort of thing goes way back.  There is of course the PDP-6 and PDP-10 
where the 16 registers alias to the low 16 memory locations.  And the notion of 
registers per interrupt level also appears in the Philips PR-8000, a 24 bit 
minicomputer from the 1960s aimed (among other things) at industrial control 
applications.  That sort of architecture makes interrupt handling very 
efficient since it eliminates state saving.  Unfortunately there's very little 
documentation of that machine anywhere; the little I found is on Bitsavers.

paul




[cctalk] Re: PCs in home vs businesses (70s/80s)

2024-04-27 Thread Paul Koning via cctalk



> On Apr 27, 2024, at 1:15 PM, Tarek Hoteit via cctalk  
> wrote:
> 
> I came across this paragraph from the July 1981 Popular Science magazine 
> edition in the article titled “Compute power - pro models at almost home-unit 
> prices.” 
> 
> “ ‘Personal-computer buffs may buy a machine, bring it home, and then spend 
> the rest of their time looking for things it can do’, said …. ‘In business, 
> it’s the other way around. Here you know the job, you have to find a machine 
> that will do it. More precisely, you have to find software that will do the 
> job. Finding a computer to use the software you’ve selected becomes 
> secondary.”. 
> 
> Do you guys* think that software drove hardware sales rather than the other 
> way around for businesses in the early days? I recall that computer hardware 
> salespeople would be knocking on businesses office doors rather than software 
> salesmen.  Just seeking your opinion now that we are far ahead from 1981. 

Not PCs, but the first systems I worked on for DEC were turnkey PDP-11 based 
systems for newspaper production.  Clearly the customer wanted to publish 
newspapers, and the hardware involved wasn't what drove the decision.  A lot of 
our competitors were specialized companies concentrating on that particular 
business, not computer makers.  For example, arguably the top company at the 
time (Atex, if I remember right) also used PDP-11s.  That was around 1978.

Also about that time, I worked with some people running a computer store in the 
LA area ("Rainbow Computing") on a proposal for a business application.  That 
was a work scheduling and routing system for hospitals, and there too what 
mattered was the application that solved the business problem, not the 
hardware on which it would run.

paul



[cctalk] Re: CDC and IBM Schoonschip

2024-04-26 Thread Paul Koning via cctalk



> On Apr 25, 2024, at 4:27 PM, Paul Koning via cctalk  
> wrote:
> 
> Looking at the webpage for the CDC version, I noticed the comment about SB0 
> B0 vs. NO and the "lore" about the divide unit.  That issue is reported in 
> Thornton's book.  It wouldn't surprise me if it were a real issue on the 
> "preproduction serial number 3" system where that code was first created.  
> 
> It clearly was fixed soon after, though.  In the 6600 block diagrams manual 
> where the flow of the execution machinery is documented in detail, it's quite 
> clear that NO does not do any functional unit reservations (but SB0 does).  
> So at least starting with serial number 8, and possibly on earlier machines 
> after suitable FCOs were applied, NO is indeed the preferred pass 
> instruction.  On the other hand, 30 bit pass has by convention been done with 
> SB0 B0+0, suggesting it's faster to do that than to do two NO instructions.  
> I guess that's right in the absence of increment unit conflicts: 3 minor 
> cycles for the SB0 vs. 4 for the pair of NO instructions.
> 
>   paul
> 

Thinking about the NO instruction: it sure seems odd that it takes 3 cycles 
given that it doesn't reserve anything.  You'd think it would just take up an 
issue cycle but nothing else, i.e., 1 minor cycle total.  If that were the case 
then two NO instructions would be the correct way to do 30 bit padding.

Curious.  Something to look at some day.

paul



[cctalk] Re: CDC and IBM Schoonschip

2024-04-25 Thread Paul Koning via cctalk
Looking at the webpage for the CDC version, I noticed the comment about SB0 B0 
vs. NO and the "lore" about the divide unit.  That issue is reported in 
Thornton's book.  It wouldn't surprise me if it were a real issue on the 
"preproduction serial number 3" system where that code was first created.  

It clearly was fixed soon after, though.  In the 6600 block diagrams manual 
where the flow of the execution machinery is documented in detail, it's quite 
clear that NO does not do any functional unit reservations (but SB0 does).  So 
at least starting with serial number 8, and possibly on earlier machines after 
suitable FCOs were applied, NO is indeed the preferred pass instruction.  On 
the other hand, 30 bit pass has by convention been done with SB0 B0+0, 
suggesting it's faster to do that than to do two NO instructions.  I guess 
that's right in the absence of increment unit conflicts: 3 minor cycles for the 
SB0 vs. 4 for the pair of NO instructions.

paul



[cctalk] Re: CDC and IBM Schoonschip

2024-04-25 Thread Paul Koning via cctalk


> On Apr 25, 2024, at 9:43 AM, James Liu via cctalk  
> wrote:
> 
> Hi,
> 
> As some of you may recall, a few years ago I asked for assistance
> reading a 9 track tape containing IBM S/360 source for Martinus
> Veltman's computer algebra program, Schoonschip
> (https://en.wikipedia.org/wiki/Schoonschip).  With Chuck's assistance,
> we recovered all the code from the tape.  After working with the
> principals involved, I am pleased to announce that the source code is
> now publicly available at:
>https://vsys.physics.lsa.umich.edu/

Neat.  I vaguely remember that program name from long ago, though I haven't 
used it.  (Part of why it's familiar is that it's a nice Dutch word hard to 
pronounce for most others... :-) )

I wonder how difficult it would be to port to a present-day compiler like 
gfortran.

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 8:14 PM, Bill Gunshannon  
> wrote:
> 
> On 4/22/2024 2:30 PM, Paul Koning wrote:
>>> ...
>> Of course the VAX started out as a modified PDP-11; the name makes that 
>> clear.  And I saw an early document of what became the VAX 11/780, labeled 
>> PDP-11/85.  Perhaps that was obfuscation.
> 
> I have never seen anything but the vaguest similarity to the PDP-11 in
> the VAX.  I know it was called a VAX-11 early on but I never understood
> why.

Hm.  I thought it was pretty obvious.  The addressing modes are similar but a 
superset, it has similar registers, just twice as many and twice as big.  The 
instructions are similar but extended.  And the notation used to describe the 
instruction set was used earlier on the PDP-11.  For me as a PDP-11 assembly 
language programmer the kinship was obvious and the naming made perfect sense.

>> Anyway, I would think such a small microprocessor could emulate a PDP-11 
>> just fine, and probably fast enough.  The issue isn't so much the 
>> instruction set emulation but rather the electrical interface.  That's what 
>> would be needed to be a drop-in replacement.  Ignoring the voltage levels, 
>> there's the matter of implementing whatever the bus protocols are.
>> Possibly an RP2040 (the engine in the Raspberry Pico) would serve for this, 
>> with the PIO engines providing help with the low level signaling.  Sounds 
>> like a fun exercise for the student.
> 
> I wasn't thinking just the PDP-11.  I was thinking about the ability
> to replace failing CPU's of other flavors once production come to an
> end.  I suspect that is far enough in the future that I won't have to
> worry about it, but it sounded like an interesting project.
> 
> bill

It certainly would be.  And if you needed to replace a failed F-11 or single 
chip PDP-8, it might be useful now.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 7:03 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 14:34, dwight via cctalk wrote:
> 
>> For those that don't know what a UV(UX)201 was, it was most commonly used 
>> for audio amplification in early battery powered radios. These used a lot of 
>> filament current, not like later miniature tubes.
>> They had a UV(UX)200 tube for RF detections that worked better as a grid 
>> leak detector, I think because of less cutoff voltage needed as a detector.
>> The A series used a better getter and lower current filament ( one or both? 
>> ) but still used a lot of filament current.
> 
> I've long considered it to be an interesting coincidence that the
> filament voltage of the UV201 was 5V, just like much later TTL logic.

What about the coincidence that a lot of today's logic runs on 3.3 volts, just 
about the same as the first generation of IC logic (RTL)?

> Folks don't recall that RCA was formed to get around a patent issue on
> the basic idea of a triode.

Interesting!

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk
They sure do now, but not back in 1964.  :-)

paul

> On Apr 22, 2024, at 5:13 PM, Mike Katz via cctalk  
> wrote:
> 
> Compilers do that with what is called loop rotation optimization.
> 
> On 4/22/2024 3:59 PM, Chuck Guzis via cctalk wrote:
>> On 4/22/24 13:53, Paul Koning via cctalk wrote:
>>> In COMPASS:
>>> 
>>> MORE SA1 A1+B2   (B2 = 2)
>>> SA2 A2+B2
>>> BX6 X1
>>> LX7 X2
>>> SB3 B3-2
>>> SA6 A6+B2
>>> SA7 A7+B2
>>> PL  b3,MORE
>> My recollection is that putting the stores at the top of the loop and
>> the loads at the bottom managed to save a few cycles.  Of course, you
>> have to prime the loop...
>> 
>> --Chuck
>> 
> 



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:59 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 13:53, Paul Koning via cctalk wrote:
>> In COMPASS:
>> 
>> MORE SA1 A1+B2   (B2 = 2)
>>  SA2 A2+B2
>>  BX6 X1
>>  LX7 X2
>>  SB3 B3-2
>>  SA6 A6+B2
>>  SA7 A7+B2
>>  PL  b3,MORE
> 
> My recollection is that putting the stores at the top of the loop and
> the loads at the bottom managed to save a few cycles.  Of course, you
> have to prime the loop...
> 
> --Chuck

Might well be, I don't remember.  Or moving the SB3 (the loop counter) to be 
right after the loads is probably helpful.  The full answer depends on 
understanding the timing, both of the instructions and of the memory references 
that those instructions set in motion. 
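
What Chuck describes is essentially loop rotation (software pipelining by one 
stage).  A sketch of the shape in C, assuming a simple word copy:

    /* Sketch of a rotated copy loop: the store for iteration i sits at
       the top, the load for iteration i+1 at the bottom, so the load is
       in flight across the branch.  The loop is primed with one load
       and drained with one store. */
    void copy_rotated(long *dst, const long *src, int n)   /* n >= 1 */
    {
        long x = *src++;        /* prime: first load before the loop */
        while (--n > 0) {
            *dst++ = x;         /* store at the top of the loop */
            x = *src++;         /* next load at the bottom */
        }
        *dst = x;               /* drain: final store */
    }

On the 6600 the same shape gives each load the longest possible time to 
complete before its value is consumed.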

I never had my hands on a 6600, only a 6400, which is a single-unit machine.  So 
I had to do some thinking to understand why someone would do a register 
transfer with L (shift operation) rather than B (boolean operation) when I 
first saw that in my code reading.  The answer is that both instructions take 
300 ns, but they are in different functional units on the 6600 so they can 
start 100 ns apart.

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 3:31 PM, ben via cctalk  wrote:
> 
> 
> > One other factor is that RISC machines rely on simple operations carefully
> > arranged by optimizing compilers (or, in some cases, skillful programmers).
> > A multi-step operation can be encoded in a sequence of RISC operations run
> > through an optimizing scheduler more effectively than the equivalent
> > sequence of steps inside the micro-engine of a CISC processor.
> 
> Let's call them LOAD/STORE architectures.
> 
> Classic CPU designs like the PDP-1 might be better called RISC.

Um, no.  Machines like the PDP-1, and many others of that era, have 
instructions where typically one operand is a register and the other is a 
memory location -- so they are not load/store machines.  That means arithmetic 
operations necessarily include a memory reference, implying a memory wait.

A key part of RISC is arithmetic on registers only, and enough registers so you 
can schedule the loads and stores to run concurrently with other arithmetic 
operations.  The CDC 6600 is the pioneering example.  A very simple scenario 
would be a memory move loop, where you'd issue two loads to two different 
registers, then two register-register move operations that use different 
functional units, followed by two store operations from two different 
registers.  (The move operations are needed because the 6600 does loads into 
one set of registers and stores from a different set.)  Keeping two memory 
operations in 
flight concurrently made quite a difference.

In COMPASS:

MORE  SA1 A1+B2   (B2 = 2)
      SA2 A2+B2
      BX6 X1
      LX7 X2
      SB3 B3-2
      SA6 A6+B2
      SA7 A7+B2
      PL  B3,MORE
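
For those who don't read COMPASS, a rough C equivalent of that loop (a sketch 
only; it flattens the 6600's split between the load registers X1/X2 and the 
store registers X6/X7):

    /* Two-way unrolled copy, keeping two loads and two stores in
       flight per pass; n is assumed to be even and positive. */
    void copy2(long *dst, const long *src, int n)
    {
        do {
            long a = src[0];    /* SA1: first load */
            long b = src[1];    /* SA2: second load, overlapped */
            dst[0] = a;         /* BX6 X1 + SA6: first store */
            dst[1] = b;         /* LX7 X2 + SA7: second store */
            src += 2; dst += 2;
            n -= 2;             /* SB3 B3-2 */
        } while (n > 0);        /* PL B3,MORE */
    }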

paul

[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:21 PM, Mike Katz via cctalk  
> wrote:
> 
> Once CPUs became faster than memory the faster the memory the faster the CPU 
> could run.
> 
> That is where CACHE came in.  Expensive small high speed ram chips would be 
> able to feed the CPU faster except in case of a cache miss and then the cache 
> had to reload from slow memory.  That is why multiple cache buffers were 
> implemented so one could be filling (predictively) while another buffer was 
> being used.

An early cache, though not called that, is the track buffer in the ARMAC, a 
1955 or so research computer built at CWI (then called MC) in Amsterdam.  Its 
main memory was a drum, like its predecessors, but it would keep the most 
recently accessed track in memory (core?) for fast access.  That was handled in 
hardware if I remember right, so it's exactly like a one-entry cache with a 
line size of whatever the track length is (32 words?).
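
A one-entry cache like that is about the simplest caching scheme possible.  A 
minimal sketch in C (device details assumed, not taken from ARMAC documents):

    #include <stdint.h>

    /* One-entry track cache: a single tag and a single line whose size
       is one drum track.  A miss costs (on average) a drum revolution;
       a hit runs at buffer speed. */
    enum { TRACK_WORDS = 32 };              /* assumed track length */

    static uint32_t buffer[TRACK_WORDS];
    static int      buffered_track = -1;    /* the tag; -1 = empty */

    extern void drum_read_track(int track, uint32_t *dst);  /* assumed */

    uint32_t read_word(int track, int offset)
    {
        if (track != buffered_track) {      /* miss */
            drum_read_track(track, buffer);
            buffered_track = track;
        }
        return buffer[offset];              /* hit */
    }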

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:24 PM, Mike Katz via cctalk  
> wrote:
> 
>> Again, not impossible, but very likely not feasible. 
> 
> Well not possible with the hardware available at the time.
> 
> If one cycle per minute or less is acceptable then I guess it was possible.
> 
> That is why we used in circuit emulators to do cycle accurate counting on 
> more complex machines.  These machines were clunky and unreliable but they 
> worked for the most part.

Well, the SB-1 is a multi-core pipelined machine with multiple caches and all 
sorts of other complications.  And the company certainly had a cycle accurate 
simulator.  They were reluctant to let it out to customers, but we leaned hard 
enough.  It was slow, indeed.  Certainly not a cycle per minute; I'm pretty 
sure it was a whole lot more than a cycle per second.  Given that the code I 
was debugging was only a few hundred instructions long, it was quite acceptable.

Speaking of slow emulation: a CDC 6600 gate level model, in VHDL, is indeed 
slow.  Now we're talking about a cycle per second.  I'm thinking I could 
translate the VHDL to C and have it go faster (by omitting irrelevant detail 
that VHDL cares about but the simulation doesn't).
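
The flavor of such a translation might be something like this (purely an 
assumption about the approach, not the actual model):

    /* Sketch of gate-level simulation in plain C: signals are ints,
       each gate is one statement evaluated once per clock, and there
       is no event queue, sensitivity list, or delta-cycle machinery
       to maintain -- which is where the speedup would come from. */
    typedef struct { int q, d; } dff;   /* flip-flop: current, next */

    static int a, b, c;                 /* combinational signals */
    static dff r1;

    static void eval_cycle(void)
    {
        /* combinational logic, evaluated in dependency order */
        c = a & ~b;

        /* clock edge: every flip-flop captures its input */
        r1.q = r1.d;
        r1.d = c;
    }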

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 4:21 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 13:02, Wayne S wrote:
>> I read somewhere that the cable lengths were expressly engineered to provide 
>> that signals arrived to chips at nearly the same time so as to reduce chip 
>> “wait” times and provide more speed. 
> 
> That certainly was true for the 6600.  My unit manager, fresh out of
> UofMinn, had his first job with CDC, measuring wire loops on the first
> 6600 to which Seymour had attached tags that said "tune".

Not so much "to arrive at the same time" but rather "to arrive at the correct 
time".  And not so much to reduce chip wait times, because for the most part 
that machine doesn't wait for things.  Instead, it relies on predictable 
timing, so that an action set in motion is known to deliver its result at a 
specific later time, and when that signal arrives there will be some element 
accepting it right then.

A nice example is the exchange jump instruction processing, which fires off a 
bunch of memory read/restore operations and sends off current register values 
across the various memory buses.  The memory read completes and sends off the 
result, then 100 ns or so later the register value shows up and is inserted 
into the write data path of the memory to complete the core memory full cycle.  
(So it isn't a read/restore actually, but rather a "read/replace".)

Another example is the PPU "barrel" which books like Thornton's show as a 
passive thing except at the "slot" where the arithmetic lives.  In reality, 
about 6 positions before the slot the current memory address (PC or current 
operand) is handed off to the memory so that just before that PP rotates to the 
slot the read data will be available to it.  And then the output of the slot 
becomes the restore, or update, data for the write part of the memory full 
cycle.

> But then, take a gander at a modern motherboard and the lengths (sic) to
> which the designers have routed the traces so that timing works.

Indeed, and with multi-Gb/s interfaces this stuff really matters.  Enough so 
that high end processors document the wire lengths inside the package, so that 
"match interconnect lengths" doesn't mean "match etch lengths" but rather 
"match etch plus in-package lengths".

The mind boggles at the high end FPGAs with dozens of interfaces running at 
data rates up to 58 Gb/s.

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 3:45 PM, Mike Katz  wrote:
> 
> Cycle accurate emulation becomes impossible in the following circumstances:
>   • Branch prediction and pipelining can cause out of order execution and 
> the execution path becomes data dependent. ...

I disagree.  Clearly a logic model will do cycle accurate simulation.  So an 
abstraction of that which still preserves the details of out of order 
execution, data dependency, etc., will also be cycle accurate.

It certainly is true that modern high performance processors with all those 
complexities are hard to simulate, but not impossible.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 2:34 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/22/24 11:09, Bill Gunshannon via cctalk wrote:
> 
>> 
>> Following along this line of thought but also in regards all our
>> other small CPUs
>> 
>> Would it not be possible to use something like a Blue Pill to make
>> a small board (small enough to actually fit in the CPU socket) that
>> emulated these old CPUs?  Definitely enough horse power just wondered
>> if there was enough room for the microcode.
> 
> Blue pills are so yesterday!  There are far more small-footprint MCUs
> out there.   More RAM than any Z80 ever had as well as lots of flash for
> the code as well as pipelined 32-bit execution at eye-watering (relative
> to the Z80) speeds.
> 
> Could it emulate a Z80?  I don't see any insurmountable obstacles to
> that.  Could it be cycle- and timing- accurate?   That's a harder one to
> predict, but probably.

Probably not.  Cycle accurate simulation is very hard.  It's only rarely been 
done for any CPU, and if done it tends to be incredibly slow.  I remember once 
using a MIPS cycle-accurate simulator (for the SB-1, the core inside the 
SB-1250, later called BCM1250).  It was needed because the L2 cache flush 
code could not be debugged any other way, but it was very slow indeed.  Almost 
as bad as running the CPU logic model in a Verilog or VHDL simulator.  I don't 
remember the numbers but it probably was only a few thousand instructions per 
second.

Then again, for the notion of a drop-in replacement for the original chip, you 
don't need a cycle accurate simulator, just one with compatible pin signalling. 
 That's not nearly so hard -- though still harder than a SIMH style ISA 
simulation.

paul




[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 22, 2024, at 2:09 PM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> 
> Following along this line of thought but also in regards all our
> other small CPUs
> 
> Would it not be possible to use something like a Blue Pill to make
> a small board (small enough to actually fit in the CPU socket) that
> emulated these old CPUs?  Definitely enough horse power just wondered
> if there was enough room for the microcode.

Microcode?

> It would bring an even more interesting concept to the table.  The
> ability to add modifications to some of these chips to see just where
> they might have gone.  While I don't mind the VAX, I always wondered
> what the PDP-11 could have been if it had been developed instead.  :-)
> 
> bill

Of course the VAX started out as a modified PDP-11; the name makes that clear.  
And I saw an early document of what became the VAX 11/780, labeled PDP-11/85.  
Perhaps that was obfuscation.

Anyway, I would think such a small microprocessor could emulate a PDP-11 just 
fine, and probably fast enough.  The issue isn't so much the instruction set 
emulation but rather the electrical interface.  That's what would be needed to 
be a drop-in replacement.  Ignoring the voltage levels, there's the matter of 
implementing whatever the bus protocols are.  

Possibly an RP2040 (the engine in the Raspberry Pico) would serve for this, 
with the PIO engines providing help with the low level signaling.  Sounds like 
a fun exercise for the student. 

paul



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-22 Thread Paul Koning via cctalk



> On Apr 21, 2024, at 9:17 PM, Will Cooke via cctalk  
> wrote:
> 
> 
> 
>> On 04/21/2024 7:06 PM CDT Peter Coghlan via cctalk  
>> wrote:
>> 
>> 
>> 
>> Why is that? Did the Z80 take more cycles to implement its more complex
>> instructions? Is this an early example of RISC vs CISC?
>> 
>> Regards,
>> Peter Coghlan
> 
> I'm certainly no authority, but I have programmed both processors in assembly 
> and studied them somewhat.  It took many years for me to believe that the 
> 6502 was "faster" than the Z80, but now I'm (mostly) a believer.  So here is 
> my take.
> 
> First, yes, the Z80 takes roughly 4 times as many clock cycles per 
> instruction.  Where the 6502 can complete a simple instruction in a single 
> clock, the Z80 takes a minimum of four.
> 
> The 02 certainly has a simpler architecture, but calling it a RISC machine 
> would probably make the RISC believers cringe.  It is simple, but it doesn't 
> follow the pattern of lots of registers (well, maybe) and a load/store 
> architecture.  But that may be its strongest point.  The zero page 
> instructions effectively make the first 256 bytes of RAM into a large (128 or 
> 256) register file.
> ...

Cycles per instruction is one aspect of RISC vs. CISC but there are more, and 
cycles per instruction may not be the most significant one.

Given enough silicon and enough brainpower thrown at the problem, CISC machines 
can be made to run very fast.  Consider modern x86 machines for example.  But 
the key point is "given enough silicon...".  

I think the significance of RISC isn't so much in cycles per instruction but 
rather in simplicity of implementation (for a given level of performance).  
It's not just single cycle instructions.  In RISC architectures it is often 
easier to achieve pipelining and parallelism.  Consider what's arguably the 
first example, the CDC 6600 with its parallelism, and its sibling the 7600 
which made the rather obvious addition of pipelining.

Simplicity of implementation means either lower cost for a given level of 
performance, or higher achievable performance for a given level of technology, 
or lower power per unit of performance, or easier power management 
optimization, or any combination of the above.  Consider ARM machines vs. x86.  
It's not so much that ARM machines go faster but that they do so on a fraction 
of the power, and that they require only a small amount of silicon to do so.

One other factor is that RISC machines rely on simple operations carefully 
arranged by optimizing compilers (or, in some cases, skillful programmers).  A 
multi-step operation can be encoded in a sequence of RISC operations run 
through an optimizing scheduler more effectively than the equivalent sequence 
of steps inside the micro-engine of a CISC processor.

paul




[cctalk] Re: Last Buy notification for Z80 (Z84C00 Product line)

2024-04-21 Thread Paul Koning via cctalk



> On Apr 21, 2024, at 3:11 PM, ben via cctalk  wrote:
> 
> On 2024-04-21 8:45 a.m., Mike Katz wrote:
> 
>> As for the RP2040 being cheap crap, I beg to differ with you.  It is a solid 
>> chip, produced in 10s of millions at least.  And, I would bet, a better 
>> quality chip than your Z-80, if due only to improved IC manufacturing 
>> technologies.
> The pi looks like parts were picked for lowest cost, biggest profit, like most 
> products today. RISC chips have been around for 40 years, and yet versions 
> change like hotcakes every year.

Raspberry Pi and Raspberry Pico are entirely different and unrelated devices.

I've been using the original Pico and also the Pico-W (with WiFi for one more 
dollar) for a bunch of projects; they work well and the price is hard to beat.  
The PIO (programmable state machine) subsystem is particularly impressive; for 
example, I used it to implement a DDCMP sync line interface.

paul




[cctalk] Re: Drum memory on pdp11's? Wikipedia thinks so....

2024-04-17 Thread Paul Koning via cctalk



> On Apr 15, 2024, at 10:05 AM, Christopher Zach via cctalk 
>  wrote:
> 
>> If you want word-addressable, the RF11 will do that.  Not the RC11, it has 
>> 32 word sectors.
> 
> Oh yeah, the pdp11 world had a DF32 like thing with the RF11. Totally forgot 
> about that one.
> 
> C

The RC11 is the controller; the drive was called RS64.  It may be basically a 
double-capacity derivative of the DF32.

RC11 is quite an obscure device, because it was only around very briefly at the 
start of the PDP11 era.  We had one on the physics lab computer at my college; 
I think they got it in 1972, running DOS.  I wanted to run RT BASIC on it so I 
worked on getting RT11 V2 to run on it, which required writing a device driver 
and boot driver.  Fortunately, Anton Chernoff was working at the college that 
year.  I asked him why he didn't write an RC11 driver when he was creating RT11 
V2 at DEC -- his answer "because we couldn't find one".

The 32 word blocksize was a bit of a nuisance because of the RT11 requirement 
that partial-block writes have to be zero-filled to the next 512 byte boundary. 
 On disks like the RK05, the controller handles that, but on the RC11 the 
driver had to do any filling from the sector boundary to the 512 byte block 
boundary.
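
In outline, the driver's extra work looks something like this (a sketch with 
assumed names, not the actual RT11 code):

    #include <string.h>

    /* RT11 wants zero fill to the next 512-byte block boundary, but the
       RC11 sector is only 32 words (64 bytes), so the driver writes the
       zero sectors itself: 8 sectors per block. */
    enum { SECTOR = 64, BLOCK = 512 };

    static const char zeros[SECTOR];

    extern void rc11_write_sector(unsigned sector, const void *buf); /* assumed */

    void write_with_fill(unsigned sector, const char *data, unsigned len)
    {
        /* write the data, one sector at a time, padding the last one */
        for (; len >= SECTOR; len -= SECTOR, data += SECTOR)
            rc11_write_sector(sector++, data);
        if (len > 0) {
            char last[SECTOR] = { 0 };
            memcpy(last, data, len);
            rc11_write_sector(sector++, last);
        }
        /* zero-fill the remaining sectors up to the 512-byte boundary */
        while (sector % (BLOCK / SECTOR) != 0)
            rc11_write_sector(sector++, zeros);
    }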

The other thing about the RC11 is that it was so small that few people wanted 
to deal with it.  RSTS only ever supported it as a non-file-structured swap 
disk, not as a file system.  It was handled as a weird appendage to the system 
disk (boot disk) file structure.

paul



[cctalk] Re: Drum memory on pdp11's? Wikipedia thinks so....

2024-04-16 Thread Paul Koning via cctalk



> On Apr 16, 2024, at 10:15 AM, William Donzelli via cctalk 
>  wrote:
> 
>> I'll bet the source was talking about large contemporary storage units that
>> looked like drums or may have been called "drums" but were not actual 50's
>> drum memory with tubes and such.  There was no rotating drum storage, the
>> media rotates in the PDP era.
>> 
>> Take a look at any pdp 11 peripheral handbook, there would be drum memory
>> there if it was an official product.
> 
> Key words being "official product".
> 
> Digital CSS department - Computer Special Systems, where all that
> weird stuff that was DEC engineered and built came from. Call it "low
> run semi custom".

For that matter, there are a lot of DEC products not seen in any Handbook.  If 
you want to see everything that was produced by DEC, check the Option/Module 
list.  

To pick an example, the typesetting products were certainly official DEC 
products, not CSS, though admittedly low volume.  But you won't find the PA611, 
or the VT61/t, or the VT71 or VT20, in any peripheral or other "handbook".

paul



[cctalk] Re: Drum memory on pdp11's? Wikipedia thinks so....

2024-04-15 Thread Paul Koning via cctalk



> On Apr 15, 2024, at 1:15 PM, Tom Uban via cctalk  
> wrote:
> 
> I recall around 1980, the "A" machine at Purdue University Electrical 
> Engineering, a PDP-11/70 running Version 7 Unix had a RS04 drum drive used 
> for swap. It was getting long in the tooth and when a power failure occurred, 
> someone would have to get a wrench to help spin it up as the head lubricant 
> was no longer as good as it once was...

RS04 is a fixed head disk, with properties similar to a drum, but shape-wise it's a 
flat platter with heads on one side.

The "lubricant" bit reminds me of our college RF11 swap disk (for RSTS/E). When 
we got the RP04 it was no longer so interesting so it was not configured in, 
but it was still powered up.

One day I noticed that the "timing track error" light was lit, and since the 
system was under contract we called DEC field service to repair it.  The local 
tech checked things out and realized the drive was not spinning even though it 
had spindle power.  Oops.  He took the drive apart and discovered that the 
heads had landed, and melted so they were essentially hot-melt-glued to the 
platter.  Those drives use rather low powered motors, intentionally, so with 
the heads stuck it would not spin up.

He ended up replacing all the heads and the platter, not sure about the motor.  
Got an alignment disk and timing track writer from Maynard, and had to figure 
out how to use these -- the documentation was sparse, to put it politely.  In 
the end, everything worked.  Oh by the way, I don't think that field repair of 
a drive like that was a standard procedure, but for Jim Newport this wasn't a 
big deal.

paul



[cctalk] Re: Drum memory on pdp11's? Wikipedia thinks so....

2024-04-15 Thread Paul Koning via cctalk



> On Apr 13, 2024, at 5:26 PM, Christopher Zach via cctalk 
>  wrote:
> 
> Was reading the Wikipedia article on Drum memories:
> 
> https://en.wikipedia.org/wiki/Drum_memory#External_links

I noticed the question was asked (but not answered): what is the largest 
storage capacity found in drums?  I commented that it's at least 512k 27-bit 
words, which is the size of the Electrologica X8 drum, circa 1970.

That reminded me the TU Eindhoven replaced their X8 by a Burroughs 6700 system, 
which also had a drum.  I remember a cube about a meter wide.  But I don't 
remember its capacity and can't find anything on Bitsavers that even mentions 
drums on any Burroughs mainframe.  Does anyone know?

paul



[cctalk] Re: Drum memory on pdp11's? Wikipedia thinks so....

2024-04-15 Thread Paul Koning via cctalk



> On Apr 13, 2024, at 5:26 PM, Christopher Zach via cctalk 
>  wrote:
> 
> Was reading the Wikipedia article on Drum memories:
> 
> https://en.wikipedia.org/wiki/Drum_memory#External_links
> 
> And came across this tidbit.
> 
> As late as 1980, PDP-11/45 machines using magnetic core main memory and drums 
> for swapping were still in use at many of the original UNIX sites.
> 
> Any thoughts on what they are talking about? I could see running the 
> RS03/RS04 on a 11/45 with the dual Unibus configured so the RS03's talk to 
> memory directly instead of the Unibus, but that's not quite the same as true 
> drum memory.
> 
> Closest thing I remember was the DF32 on a pdp8 which could be addressed by 
> word as opposed to track/sector.
> 
> Thoughts?
> C

I don't know of any drums on PDP-11 systems.  RS03/04 are of course fixed head 
disks, as are the earlier RF11 and RC11/RS64.  All these are functional analogs 
of drums in that they have no seek time.  Are drums usually word addressable?  
That doesn't seem necessary, not unless you use them as main memory.  Even the 
early ARMAC (1956-ish) which uses a drum for main memory doesn't really need it 
to be word-addressable because it has a one-track buffer memory (think of it as 
an early cache).  If you want word-addressable, the RF11 will do that.  Not the 
RC11, it has 32 word sectors.

paul



[cctalk] Re: Odd IBM mass storage systems

2024-04-14 Thread Paul Koning via cctalk



> On Apr 14, 2024, at 2:50 PM, Van Snyder via cctalk  
> wrote:
> 
> On Sun, 2024-04-14 at 13:15 -0400, Paul Koning via cctalk wrote:
>> The printer I was describing sounds a lot like the Versatec ones you
>> mentioned, including the funny paper and smelly toner.  But it was
>> actually made by Varian, and the driver tells me it had 1408 pixels
>> across the width of the paper, so at 11 inches wide that would make
>> it 128 PPI.  I wonder if I still have a sample page or two from that
>> printer.
> 
> American Geophysical had a fleet of trucks fitted with hydraulic
> "thumpers." They would go out to a potential oil or gas field, lay out
> a few thousand feet of cables with geophones on them, and drive around
> thumping the ground. Within the truck, they had Varian V70 computers
> with microcode to do Fast Fourier Transforms. 

I remember a Varian computer sitting in a corner of a lab at U of Illinois 
(computer science department).  It looks similar to the ones shown in Bitsavers 
but not quite the same -- it had a front panel that had mostly brown coloring, 
and the panel was totally flat.  It used membrane pushbuttons for operation, 
with the button positions marked by circles on the flat plastic front panel.

Does that ring any bells?  I remember being told it had user programmable 
microcode, but I never used it, in fact I never heard of anyone using it.

paul




[cctalk] Re: Odd IBM mass storage systems

2024-04-14 Thread Paul Koning via cctalk



> On Apr 13, 2024, at 5:48 PM, Jon Elson via cctalk  
> wrote:
> 
> On 4/12/24 20:21, Paul Koning via cctalk wrote:
>> 
>>> On Apr 12, 2024, at 7:48 PM, Van Snyder via cctalk  
>>> wrote:
>>> 
>>> ... The other was to print on its "whippet"
>>> printer, a very fast electrostatic printer that put soot onto a thermal
>>> paper that was then heated to "fix" it. There was a huge variac under
>>> the printer to adjust the heater. The perfect setting was between two
>>> windings. Too cold and the soot fell off. Too hot and it was melted and
>>> smeared into an almost illegible mess. But it was very fast -- and only
>>> 80 columns wide. It was about the size of a KSR-33.
>> Different beast, but it reminds me of an electrostatic plotter we had on the 
>> U of Illinois PLATO system.  That one was by Versatec, either 11 or 17 
>> inches wide (I forgot), 300 dpi, pretty sure it used wet toner.  It also 
>> used a chain drive for the paper feed, which had enough backlash that 
>> starting and stopping would produce visible irregularities in the output.  
>> So I wrote a driver for it that did overlapped I/O to avoid that problem.  
>> (File I/O directly from a PPU program, lots of fun!)
>> 
>> With that, it did an awesome job printing musical scores.
> 
> Yes, there were a number of Versatec models for different paper sizes and 
> pixel density.  I worked with a bunch of 1200A units, they could run either 
> roll or fanfold paper at 11" width.  The paper was clay coated and felt like 
> a dirty chalkboard.  The toner was quite smelly, some kind of paraffin oil 
> with carbon particles suspended in it.  There was a blower to evaporate the 
> toner solvent.  The 1200A had 200 pixels/inch, so you got 2112 pixels across 
> the page, IIRC.  It applied 800 V to the writing electrodes, and something 
> like 400 V to the segmented backplate that was on the opposite side of the 
> paper.  It could print text at about 1200 LPM, which was pretty fantastic for 
> the time.
> 
> But, I am glad to not have to deal with these things anymore!

Indeed.

Meanwhile, it turns out my memory was faulty.  The printer I was describing 
sounds a lot like the Versatec ones you mentioned, including the funny paper 
and smelly toner.  But it was actually made by Varian, and the driver tells me 
it had 1408 pixels across the width of the paper, so at 11 inches wide that 
would make it 128 PPI.  I wonder if I still have a sample page or two from that 
printer.

paul



[cctalk] Re: Other input devices.

2024-04-13 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 9:55 PM, ben via cctalk  wrote:
> 
> Did any one ever use a keyboard to magtape as input device?

My wife did, sort of: for a while she worked with IBM MT/ST word processors.  
Those were very early word processing systems that used a custom magnetic tape 
cartridge for storage and a Selectric typewriter for I/O.

paul



[cctalk] Re: Odd IBM mass storage systems

2024-04-13 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 9:49 PM, ben via cctalk  wrote:
> 
> On 2024-04-12 7:23 p.m., Paul Koning via cctalk wrote:
>>> On Apr 12, 2024, at 5:54 PM, CAREY SCHUG via cctalk  
>>> wrote:
>>> 
>>> ...
>>> my favorite terminal 3190 that was neon gas, so monochrome, but could take 
>>> 5 addresses, and flip between 62 lines of 160 characters (always there), to 
>>> 4 terminals of 62x80 any two visible at a time, or 4 terminals of 31x160 
>>> characters, any 2 visible at a time, or 4 terminals of 31x80 all visible at 
>>> once.  when given a choice, my new boss was surprised that I chose that 
>>> instead of the color 3279 with graphics that everybody else wanted.  Great 
>>> for running virtual systems...
>> Sounds like the plasma panel displays that were invented for the PLATO 
>> system, by Don Bitzer and a few others, at the U of Illinois.  Inherent 
>> memory: if you lit a pixel it would stay lit, to turn it off you'd feed it a 
>> pulse of the opposite polarity.  So it was a great way to do 512x512 bitmap 
>> graphics with very modest complexity, no refresh memory needed.
>>  paul
> 
> But too slow I suspect to run a game like spacewar.

PLATO was the system where a whole lot of early games first appeared, 
especially multi-player games.  Among them were any number of variations of 
"Star Trek" inspired ones.  While you couldn't refresh a screen full of space 
ships in motion as fast as you can on a dedicated graphics engine, it was 
certainly acceptable for the players.  And a simpler two-ship game like the 
original spacewar would work even better, because you'd only need a couple of 
operations per refresh -- on the classic terminal, 12 output words at 60 per 
second, so 200 ms per refresh.  Not quite "full motion" but close.

paul



[cctalk] Re: Odd IBM mass storage systems

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 5:54 PM, CAREY SCHUG via cctalk  
> wrote:
> 
> ...
> my favorite terminal 3190 that was neon gas, so monochrome, but could take 5 
> addresses, and flip between 62 lines of 160 characters (always there), to 4 
> terminals of 62x80 any two visible at a time, or 4 terminals of 31x160 
> characters, any 2 visible at a time, or 4 terminals of 31x80 all visible at 
> once.  when given a choice, my new boss was surprised that I chose that 
> instead of the color 3279 with graphics that everybody else wanted.  Great 
> for running virtual systems...

Sounds like the plasma panel displays that were invented for the PLATO system, 
by Don Bitzer and a few others, at the U of Illinois.  Inherent memory: if you 
lit a pixel it would stay lit, to turn it off you'd feed it a pulse of the 
opposite polarity.  So it was a great way to do 512x512 bitmap graphics with 
very modest complexity, no refresh memory needed.

paul



[cctalk] Re: Odd IBM mass storage systems

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 7:48 PM, Van Snyder via cctalk  
> wrote:
> 
> ... The other was to print on its "whippet"
> printer, a very fast electrostatic printer that put soot onto a thermal
> paper that was then heated to "fix" it. There was a huge variac under
> the printer to adjust the heater. The perfect setting was between two
> windings. Too cold and the soot fell off. Too hot and it was melted and
> smeared into an almost illegible mess. But it was very fast -- and only
> 80 columns wide. It was about the size of a KSR-33.

Different beast, but it reminds me of an electrostatic plotter we had on the U 
of Illinois PLATO system.  That one was by Versatec, either 11 or 17 inches 
wide (I forgot), 300 dpi, pretty sure it used wet toner.  It also used a chain 
drive for the paper feed, which had enough backlash that starting and stopping 
would produce visible irregularities in the output.  So I wrote a driver for it 
that did overlapped I/O to avoid that problem.  (File I/O directly from a PPU 
program, lots of fun!)

With that, it did an awesome job printing musical scores.

paul



[cctalk] Re: Odd IBM mass storage systems

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 3:25 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 4/12/24 12:04, Paul Koning via cctalk wrote:
> 
>> I remember a concept for a very fast magnetic storage system that didn't 
>> become a product, as far as I know.  The scheme was to build a large array 
>> of heads, using IC-manufacturing type techniques, and mount that array in 
>> contact or near-contact with a flat rectangular magnetic plate.  The plate 
>> (or the heads) could move a small amount in one direction.  The idea was 
>> "head per sector", with the mechanical motion scanning the sector across the 
>> head.  Given something like piezo-electric actuators it would have been 
>> quite fast.
>> 
>> There's a neat document in the CWI archives, a course on computer design 
>> from early 1948.  It has a section about memories, well before core memory 
>> was invented.  The schemes it describes are quite curious, including 
>> photographic memories, selectrons, and various other schemes.  Also drum 
>> memories, including the rather mythical notion of a drum spinning at 60,000 
>> rpm.
> 
> That UNIVAC nickel-plated sewer pipe in a box, the Fastrand II used a
> series of solenoids and lever arms for head positioning.  I vaguely
> remember a FJCC article describing it.
> 
> But fast?  Not so much, at least for drum storage of that era.  I
> believe there were also microphones incorporated into it, called "ping
> detectors"

Yes, the Univac Model I acoustic delay line memories.  The document I mentioned 
is MC report CR-3, by A. van Wijngaarden, 1948, which is online in their 
archive but only as a not particularly clear scanned document in Dutch.  It 
describes six memory technologies: photographic film, fluorescence, electric 
resistance (including the notion of a neon cross-bar, which is another way of 
describing Bitzer's inherent-memory plasma display panel but more than a decade 
earlier), acoustic waves (such as the Univac memory), magnetic tape, wire, or 
drum storage, and electrostatic charges (the Selectron is described in detail).

Not all that fast -- well, it depends on what you're comparing with.  Given tube 
logic with cycle times measured in microseconds, quite possibly serial rather 
than parallel organization, those acoustic or drum memory systems weren't all 
that terrible.  

paul



[cctalk] Re: IBM 360

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 2:10 PM, Tom Gardner via cctalk  
> wrote:
> 
> Data Cell - Tape, Card or Disk?
> 
> I'm pretty sure the developers thought of the  media of the IBM 2321 as tape
> rather than cards, although the strips (of tape) were addressed as disk
> drives (DASD) not tape. 

Actually, they look like a disk.  See the 2841 manual on Bitsavers (that's the 
controller which drives 2311 disks as well as 2321 data cell devices).  It says 
that each strip has 100 tracks, read/written by a movable heads unit that has 
20 heads on it, so 5 positions.  And it shows the layout of each track, which 
is the conventional count/key/data layout of 360 disk drives like the 2311.  
Yes, variable length "sectors", you'd specify in the JCL what you wanted for 
blocksize of that particular file.  If I remember right, the block length could 
vary from one block to the next, which is pretty wild.  (Contrast with the 
EL-X8 disk drive, which also has variable length sectors, but more limited: a 
choice of one of 5 possible sizes, chosen on a per-track basis when you first 
write or format that track.)  Apart from those, I only ever remember seeing 
fixed size sectors, though the actual lengths might be strange -- like the CDC 
mainframe disks with sectors of 322 12-bit words.

paul




[cctalk] Re: Odd IBM mass storage systems

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 2:28 PM, Paul Berger via cctalk  
> wrote:
> 
> 
> On 2024-04-12 2:45 p.m., Christian Kennedy via cctalk wrote:
>> 
>> On 4/12/24 10:28, Chuck Guzis via cctalk wrote:
>>> Isn't that the IBM 2321 Data Cell drive?
>> 
>> Same idea, but I recall the cabinets being lower to the floor and the media 
>> being more rigid than the 2321 noodles.  Then again, it's been the better 
>> part of 50 years, and it could well have been a 2321.
>> 
>> Memory rot sucks.
>> 
>>> Having one's files "photostored" at LLL was a chancy proposition.  There
>>> were bootleg programs to access every file for a user, just to keep them
>>> from being consigned to the photostore.
>> 
>> It was chancy at LBL as well.  The mechanical handling of the 1360 
>> photostore cells was something that would have defied the imagination of 
>> Rube Goldberg, and chips routinely ended up in places where they didn't 
>> belong (although they did make pretty cool bookmarks for my teenage self).
>> 
> The problem with a lot of these old machines was they relied on a lot of 
> electro-mechanical  devices that would today be replaced by electronics and a 
> few simple actuators.  These mechanical devices need to be adjusted and 
> maintained  and have lots of parts to wear out.  While I only started with 
> IBM in 1979 I still got to work on machines that would now be considered 
> electro-mechanical nightmares.

Some of the earliest magnetic storage was mechanically simple: magnetic drums.  
Nothing moving apart from the spinning media, and quite fast.  Fixed head 
("head per track") disk drives are a variation on that theme, DEC had some that 
were successful for a while.

I remember a concept for a very fast magnetic storage system that didn't become 
a product, as far as I know.  The scheme was to build a large array of heads, 
using IC-manufacturing type techniques, and mount that array in contact or 
near-contact with a flat rectangular magnetic plate.  The plate (or the heads) 
could move a small amount in one direction.  The idea was "head per sector", 
with the mechanical motion scanning the sector across the head.  Given 
something like piezo-electric actuators it would have been quite fast.

There's a neat document in the CWI archives, a course on computer design from 
early 1948.  It has a section about memories, well before core memory was 
invented.  The schemes it describes are quite curious, including photographic 
memories, selectrons, and various other schemes.  Also drum memories, including 
the rather mythical notion of a drum spinning at 60,000 rpm.

paul




[cctalk] Re: IBM 360

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 9:48 AM, Liam Proven via cctalk  
> wrote:
> 
> On Fri, 12 Apr 2024 at 13:31, Paul Koning  wrote:
> 
>> Yes.  See also https://en.wikipedia.org/wiki/IBM_2321_Data_Cell .  By the 
>> standards of the time it was an unusually high capacity storage device, way 
>> faster than a room full of tapes and much larger than the 2311 disk drive.
> 
> Fascinating. Thank you. It sounds truly awful. A device that
> effectively tries to push strips of tape into receptacles?

I suppose.  Or magnetic cards.  There were other devices that used magnetic 
cards, like the Olivetti Programma -- world's first programmable calculator.  
For that matter, magnetic cards are still around, they are called credit cards. 
 :-)

paul




[cctalk] Re: IBM 360

2024-04-12 Thread Paul Koning via cctalk



> On Apr 12, 2024, at 5:47 AM, Liam Proven via cctalk  
> wrote:
> 
> On Thu, 11 Apr 2024 at 19:32, Van Snyder via cctalk
>  wrote:
> 
>> 
>> An IBM salesman convinced them to try out a 360/30 with a Data Cell.
> 
> No idea what a "data cell" is.
> 
> I found this:
> 
> https://www.pcmag.com/encyclopedia/term/data-cell
> 
> At the Eastercon last week, I met a chap who learned to code on an IBM
> 1620. He told me of a terrible, terrible storage device which used a
> robot to load strips of magtape. Cheap but a horrendous failure rate.
> Is it this thing?

Yes.  See also https://en.wikipedia.org/wiki/IBM_2321_Data_Cell .  By the 
standards of the time it was an unusually high capacity storage device, way 
faster than a room full of tapes and much larger than the 2311 disk drive.

paul



[cctalk] Re: IBM 360

2024-04-11 Thread Paul Koning via cctalk



> On Apr 11, 2024, at 2:42 AM, Joseph S. Barrera III via cctalk 
>  wrote:
> 
> On Wed, Apr 10, 2024 at 6:36 AM Murray McCullough via cctalk <
> cctalk@classiccmp.org> wrote:
> 
>> I don’t think I truly realized the seminal work done at IBM then
>> (60's&70's).

One interesting historic tidbit is the Dutch connection on the IBM/360.  One of 
the lead designers on the 360 was Gerrit Blaauw, a Dutch computer engineer who 
learned his craft at Harvard (with Aiken), and refined it at the MC in 
Amsterdam (now CWI) leading the design of several one-off research computers.  
Among other things, he taught the other designers that logic needs to be 
clocked to be reliable. :-)

After that, he left for IBM and worked on several computer designs, culminating 
in the 360.  Wikipedia says that the choice of 8 bit characters rather than the 
then-current 6 bits came from him.  

paul




[cctalk] Re: IBM 360

2024-04-10 Thread Paul Koning via cctalk



> On Apr 10, 2024, at 5:01 PM, Van Snyder via cctalk  
> wrote:
> 
> ...
> I think the 360/67 replaced "Halt and Catch Fire" with "Rewind and
> Break Tape."

I always wondered if that wasn't a standard property of IBM tape drives of that 
era.  The ones I remember from our 360/44 had capstans that turned 
continuously, one to each side of the head.  The tape was shoved against the 
capstan to start tape motion, and against a rubber brake block to stop it.  
That was wild enough, but the other crazy aspect is that the vacuum columns 
were arranged so the oxide was facing outward, i.e., rubbing against the side 
walls of the vacuum column.

I never did wear out a tape, but then again, I never used a tape more than a 
half dozen times on that system.

paul




[cctalk] Re: IBM 360 and 1400 series emulation

2024-04-10 Thread Paul Koning via cctalk
I don't know if it was an option.  If so, presumably it was included if you 
elected the emulator option, since both are intended for running OS/360.

paul

> On Apr 10, 2024, at 1:00 PM, CAREY SCHUG via cctalk  
> wrote:
> 
> I thought you could get regular channels as an optional feature?
> 
> --Carey
> 
>> On 04/10/2024 11:47 AM CDT Paul Koning via cctalk  
>> wrote:
>> 
>> 
>>> On Apr 10, 2024, at 11:25 AM, Jon Elson via cctalk  
>>> wrote:
>>> 
> ...
>>>> 
>>> ...  The model 44 had no channels, there was only direct I/O (a set of 
>>> 32-bit parallel input and output registers) and a pair of cartridge hard 
>>> drives inside the CPU cabinet.  Think DEC RK05s.
>> 
>> No channels?  That doesn't sound right.  The 360/44 I used certainly had an 
>> RK05-like drive in the CPU cabinet (I only remember one, though).  I'm 
>> fairly sure it was a 16-sector pack, so more like an RK08.  But the system 
>> ran both OS/360 and TSO, and had three 2311 disk drives, three tape drives 
>> (with an amazingly ugly mechanical design), a card reader/punch, and a line 
>> printer.  Also some sort of terminal mux, but I never used the timesharing 
>> feature so I don't know what that involved.
>> 
>> It certainly had enough of a channel-like I/O system that the emulator 
>> program loader could be implemented in a card reader channel program no 
>> different from that of other 360s.  I remember quite well deciphering it 
>> using the CCW documentation on the "green card".
>> 
>> Yes, the emulation of SS instructions was via traps, but specifically by a 
>> trap into emulator mode in a separate chunk of memory not visible to the 
>> main OS.
>> 
>> I never saw the cartridge drive in use by anyone.
>> 
>>  paul



[cctalk] Re: IBM 360 and 1400 series emulation

2024-04-10 Thread Paul Koning via cctalk



> On Apr 10, 2024, at 11:25 AM, Jon Elson via cctalk  
> wrote:
> 
> On 4/10/24 07:18, CAREY SCHUG via cctalk wrote:
>> Nearly all the 360s were microcoded, so adding a bit more microcode let them 
>> emulate 1400/7000 series computers as a standard optional feature. (well the 
>> model 44 emulated the 1620, and probably the 95/195 could not emulate 
>> anything since they were hard wired).
>> 
> The model 44 was not microcoded.  It had faster floating point than a model 
> /50 but no decimal or string instructions.  Emulation of these was done 
> through trap handlers.  I would assume any other machine emulators were done 
> by something like an emulation wrapper program - like Virtualbox or VMware.  
> The model 44 had no channels, there was only direct I/O (a set of 32-bit 
> parallel input and output registers) and a pair of cartridge hard drives 
> inside the CPU cabinet.  Think DEC RK05s.

No channels?  That doesn't sound right.  The 360/44 I used certainly had an 
RK05-like drive in the CPU cabinet (I only remember one, though).  I'm fairly 
sure it was a 16-sector pack, so more like an RK08.  But the system ran both 
OS/360 and TSO, and had three 2311 disk drives, three tape drives (with an 
amazingly ugly mechanical design), a card reader/punch, and a line printer.  
Also some sort of terminal mux, but I never used the timesharing feature so I 
don't know what that involved.

It certainly had enough of a channel-like I/O system that the emulator program 
loader could be implemented in a card reader channel program no different from 
that of other 360s.  I remember quite well deciphering it using the CCW 
documentation on the "green card".

Yes, the emulation of SS instructions was via traps, but specifically by a trap 
into emulator mode in a separate chunk of memory not visible to the main OS.

I never saw the cartridge drive in use by anyone.

paul



[cctalk] Re: IBM 360 and 1400 series emulation

2024-04-10 Thread Paul Koning via cctalk



> On Apr 10, 2024, at 8:18 AM, CAREY SCHUG via cctalk  
> wrote:
> 
> Nearly all the 360s were microcoded, so adding a bit more microcode let them 
> emulate 1400/7000 series computers as a standard optional feature. (well the 
> model 44 emulated the 1620, ...

Um, what?

In college I used a 360/44, which ran OS/360 (PCP 19.6, all that could fit in 
128 kB of memory), which was made possible by the fact that it had the 
"emulator" option.  But that wasn't a 1620 emulation; instead, it added the SS 
instructions of the standard 360 instruction set back in, those were omitted 
from the base model 44.  Without SS instructions, OS/360 could not run, which 
is why the model 44 had an operating system specifically for that machine 
(PS/44 ?  I'm not sure, I never used it).

The emulator had a separate chunk of memory and a separate IPL button; 
unimplemented instructions would trap to that memory for the emulator to handle 
-- very much like how subset VAX systems like MicroVAX would emulate the 
missing instructions.

The emulator binary came in a card deck, a standard BPS binary deck preceded by 
a single-card loader that was an amazingly clever self-modifying channel 
program.  The entire logic to interpret the fields of the binary cards and load 
the entire deck to the right places was implemented in that one-card channel 
program.

I read the relevant documentation back then and decoded the loader, but I have 
never seen any of it since; even just a bare mention of the emulator feature is 
nearly non-existent.

paul




[cctalk] Re: oscilloscopes

2024-04-07 Thread Paul Koning via cctalk



> On Apr 6, 2024, at 11:40 AM, Phil Budne via cctalk  
> wrote:
> 
> Paul Koning wrote:
> 
>> Yes, and some emulations have done this, such as Phil Budne's famous work in 
>> SIMH.
> 
> Famous??  I'm famous???!!!
> 
> To be fair, I started with Douglas W. Jones' PDP8 Emulator.
> 
> Which reminds me of:
> 
>If I have seen farther than others, it is because I was standing on the 
> shoulders of giants. -- Isaac Newton
> 
>In the sciences, we are now uniquely privileged to sit side by side with 
> the giants on whose shoulders we stand. -- Gerald Holton
> 
>If I have not seen as far as others, it is because giants were standing on 
> my shoulders. -- Hal Abelson
> 
>In computer science, we stand on each other's feet. -- Brian K. Reid

Well said, indeed!

> It was certainly an awakening to the inherent parallelism of "analog"
> natural processes...  I wrote and tuned the code twenty years ago, but
> haven't looked at whether better results might be possible by wasting
> the capabilities of current systems (SIMD libraries and/or multiple
> cores).  I felt like I only was able to give a slim impression, and
> not an immersive experience. I've also wondered what could be done
> with 4K HDR displays: making points round(!) and simulating the
> "bloom" and intensity of repeatedly or highly intensified spots.

I did these things, in an emulation of the CDC 6600 console (DD60).  It paints 
the "dot" on the screen using a 2D Gaussian distribution around the nominal 
center, with its parameters adjusted by the "focus" and "intensity" controls 
just like on the original.  And each visit of the spot is summed into the 
current screen data using saturated arithmetic.  So you get intensification at 
no extra charge, and if a spot is drawn many times it also gets a bit blurrier 
due to the skirts of the Gaussian distribution becoming more visible.  At one 
point I had the spot as an RGB value with a touch of blue in it, so the "bloom" 
would be bluer than normal lines.  I took that out because I don't remember 
what the real screen actually does.  But clearly a color shift like that could 
be simulated.
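
The core of that spot-painting step is only a few lines.  A minimal sketch in 
C (parameter values are assumptions, not the emulator's actual numbers):

    #include <math.h>
    #include <stdint.h>

    enum { W = 512, H = 512 };
    static uint8_t screen[H][W];        /* 8-bit intensity buffer */

    /* Paint one visit of the beam at (cx, cy): a 2D Gaussian summed
       into the screen with saturation.  "sigma" stands in for the
       focus control, "gain" for the intensity control. */
    void paint_spot(double cx, double cy, double sigma, double gain)
    {
        int r = (int)(3.0 * sigma) + 1;     /* ignore the far skirts */
        for (int dy = -r; dy <= r; dy++)
            for (int dx = -r; dx <= r; dx++) {
                int px = (int)cx + dx, py = (int)cy + dy;
                if (px < 0 || px >= W || py < 0 || py >= H)
                    continue;
                double d2 = (double)(dx * dx + dy * dy);
                int add = (int)(gain * exp(-d2 / (2.0 * sigma * sigma)));
                int v = screen[py][px] + add;
                screen[py][px] = (uint8_t)(v > 255 ? 255 : v);  /* saturate */
            }
    }

Repeated visits saturate toward full white while the skirts keep accumulating, 
which is exactly the brighten-and-blur behavior described above.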

paul



[cctalk] Re: oscilloscopes

2024-04-04 Thread Paul Koning via cctalk



> On Apr 4, 2024, at 3:12 PM, Brent Hilpert via cctalk  
> wrote:
> 
>> On 2024 Apr 4, at 7:22 AM, Adrian Godwin via cctalk  
>> wrote:
>> 
>> This 'scope clock also uses circle generators rather than vectors to
>> produce well-formed characters. It mentions a Teensy controller so I don't
>> think it's the original made in this way - the first I heard of was too
>> long ago for that. But I don't know if it's an update or a separate design.
> 
>> https://scopeclock.com/ 
> Technically, the scopeClock is generating neither curves nor vectors, it's 
> generating pixels in an XY display - it's just that they’re of fine enough 
> resolution and fast enough that they’re seen as a smooth-enough curve on the 
> CRT.
> 
> The MIT/Electronics-magazine and Wyle techniques are using analog electronics 
> to generate portions of sine waves for selected phase periods and phase 
> relation such that when applied to the XY cartesian display you get 
> continuous portions (chords) of circles. Some digital logic gates the analog 
> sine generators appropriately to produce the chords and line segments, with 
> offsets, in a sequence to form characters.
> 
> The scopeClock, in contrast, is using DACs in the microcontroller to generate 
> (discrete approximations of) sine wave segments - which is to say it’s 
> relying on the abilities of inexpensive current-day high-speed digital 
> electronics.

That is similar to what the CDC 6612 controller for the DD60 console display 
does.  So given that you're sending the resulting step waveform through a 
deflection circuit with finite bandwidth, you do in fact end up with a 
continuous vector with rounded features.  How nicely rounded depends on the 
bandwidth and the number and size of the steps.  For example, the DD60 display 
doesn't look all that elegant, but it is definitely well rounded, simply 
because the step clock is 10 MHz and the deflection chain bandwidth isn't a 
whole lot more than that.  So the fact that you're dealing with what originally 
was a step waveform with just 7 positions for X and Y isn't at all obvious in 
the final image.

paul




[cctalk] Re: oscilloscopes

2024-04-04 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 6:32 PM, Rick Bensene  wrote:
> 
> I wrote:
> 
>>> The digits are among the nicest looking digits that I've ever seen 
>>> on a CRT display, including those on the CDC scopes as well as IBM 
>>> console displays.
> 
> To which Paul responded:
> 
>> I have, somewhere, a copy of a paper that describes analog circuits for 
>> generating waveforms for digits along the lines you describe.  
>> Might have been from MIT, in the 1950s, but right now I can't find it.
> 
>> Found it (on paper): "Generating characters" by Kenneth Perry and 
>> Everett Aho, Electronics, Jan 3, 1958, pp. 72-75.
> 
>> Bitsavers has it in the MIT/LincolnLaboratory section: 
>> https://bitsavers.org/pdf/mit/lincolnLaboratory/Perry_and_Aho_-_Generating_Characters_-_Electronics_19580103.pdf
> 
> Very interesting.   Here's a link to the patent for the display system on the 
> Wyle Labs calculator:
> 
> https://patentimages.storage.googleapis.com/17/51/58/89c19cee6c60e2/US3305843.pdf
> 
> The concepts are very similar to the paper written up in ELECTRONICS magazine 
> in early 1958 that you found.  Your memory is incredible to have been able to 
> have this pop into your mind when you read my description of the way the 
> calculator generates its display.
> 
> Thank you for looking up this article!   It'll provide some nice background 
> for the concepts of generating characters this way when I finally get to 
> documenting the Wyle WS-01/WS-02 calculators in an Old Calculator Museum 
> exhibit.
> 
> I wonder if the inventor of the display system for the calculator (in fact, 
> the inventor of the entire Wyle Labs calculator architecture) had read this 
> article at some point prior?  
> 
> I scanned through the patent for the calculator display system looking for 
> any reference to the article or any document from MIT relating, and I 
> couldn't find anything.   

I didn't see any either, and the patent examiners didn't cite any.  Then again, 
it's amazing how often patent examiners miss relevant prior art.  One example I 
like to mention is Edwin Armstrong's patent for FM radio, which doesn't cite an 
earlier US patent, 1,648,402 from 1927, filed 12 years before Armstrong's.  Or 
the prior art centuries preceding US 6469...

On the other hand, while the concept is similar the details are rather 
different, and the Wyle design is clearly a whole lot simpler.

> The inventor is still alive, and I have talked to him on the telephone a 
> couple of times.   For his advanced age, he is still quite sharp, and 
> remembers a lot of the challenges involved with trying to make a solid-state 
> electronic calculator that would fit on a (large) desktop using early 1960's 
> technology.

It would be neat to ask him about that MIT article.

paul



[cctalk] Re: oscilloscopes

2024-04-03 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 2:20 PM, Paul Koning via cctalk  
> wrote:
> 
> 
> 
>> On Apr 3, 2024, at 1:49 PM, Rick Bensene via cctalk  
>> wrote:
>> 
>> ...
>> Even with only having to render the digits zero through nine and a decimal 
>> point (the calculator didn't support negative numbers; they were represented 
>> using tens complement form), the display generator also used a batch of 
>> diode-transistor gates to generate the digits.The interesting thing 
>> about it is that instead of generating strokes to create the digits, the 
>> machine uses sine/cosine waveforms that are gated by the character 
>> generation logic to draw the digits on the screen.   The position of the 
>> digits, like the CDC scopes, is derived by precision resistor DACs, and then 
>> a mixer takes over as the character is drawn using gated segments of the 
>> sine and cosine waveforms mixed together with the position voltage.   The 
>> result is really beautifully rendered digits that look almost like they are 
>> drawn by a draftsperson who is extremely consistent in the drawing of each 
>> digit.  The CRT has yellow-orange phosphor with a moderate persistence, so 
>> when the digits change, they look like they quickly morph from one digit to 
>> the next.  
>> 
>> The digits are among the nicest looking digits that I've ever seen on a CRT 
>> display, including those on the CDC scopes as well as IBM console displays.
> 
> I have, somewhere, a copy of a paper that describes analog circuits for 
> generating waveforms for digits along the lines you describe.  Might have 
> been from MIT, in the 1950s, but right now I can't find it.

Found it (on paper): "Generating characters" by Kenneth Perry and Everett Aho, 
Electronics, Jan 3, 1958, pp. 72-75.

Bitsavers has it in the MIT/LincolnLaboratory section: 
https://bitsavers.org/pdf/mit/lincolnLaboratory/Perry_and_Aho_-_Generating_Characters_-_Electronics_19580103.pdf

paul



[cctalk] Re: oscilloscopes

2024-04-03 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 1:49 PM, Rick Bensene via cctalk  
> wrote:
> 
> ...
> Even with only having to render the digits zero through nine and a decimal 
> point (the calculator didn't support negative numbers; they were represented 
> using tens complement form), the display generator also used a batch of 
> diode-transistor gates to generate the digits.The interesting thing about 
> it is that instead of generating strokes to create the digits, the machine 
> uses sine/cosine waveforms that are gated by the character generation logic 
> to draw the digits on the screen.   The position of the digits, like the CDC 
> scopes, is derived by precision resistor DACs, and then a mixer takes over as 
> the character is drawn using gated segments of the sine and cosine waveforms 
> mixed together with the position voltage.   The result is really beautifully 
> rendered digits that look almost like they are drawn by a draftsperson who is 
> extremely consistent in the drawing of each digit.  The CRT has yellow-orange 
> phosphor with a moderate persistence, so when the digits change, they look 
> like they quickly morph from one digit to the next.  
> 
> The digits are among the nicest looking digits that I've ever seen on a CRT 
> display, including those on the CDC scopes as well as IBM console displays.

I have, somewhere, a copy of a paper that describes analog circuits for 
generating waveforms for digits along the lines you describe.  Might have been 
from MIT, in the 1950s, but right now I can't find it.

The CDC console waveforms start out as step function waveforms, with delta x 
and/or y of +/- 1 or 2 units, at 100 ns intervals.  Given the bandwidths of the 
circuits involved they get rounded off in the generation, and a whole lot more 
in the DD60 deflection amplifier signal chain.  I've tried to create a SPICE 
model of that signal path to try to reproduce what we know actually showed up 
on the display screen, but haven't had much luck.  Too much of the circuit 
involves parts with unknown properties, starting with the transistors, on to 
the wirewound resistors that apparently show up in various places, and ending 
with the deflection plates of the CRTs themselves.  Still, a crude IIR filter 
mimicking some of the more obvious contributions does produce acceptable 
character shapes in my DD60 emulation software.
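
To give the flavor: the filter is just a one-pole IIR, something like this 
Python sketch (the coefficient is made up for illustration; a real value would 
be fitted to the measured bandwidth):

    import numpy as np

    def one_pole(x, alpha=0.35):
        # y[n] = y[n-1] + alpha * (x[n] - y[n-1]) -- a crude stand-in for
        # the finite-bandwidth deflection chain.  alpha is illustrative.
        y = np.empty(len(x))
        acc = float(x[0])
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        return y

    # X deflection of one stroke: steps of +/- 1 or 2 units per 100 ns tick
    x_steps = np.repeat([0.0, 1.0, 2.0, 2.0, 1.0, 0.0], 4)
    x_smooth = one_pole(x_steps)   # staircase in, rounded stroke out

Running both the X and Y step sequences through the same filter rounds the 
staircase into continuous-looking strokes, much as the real deflection chain 
does.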

Speaking of nice looking numeric displays: probably the best ever are the 
projection displays made by IEE, in the 1960s I think.  
https://www.antiqueradios.com/forums//viewtopic.php?f=12&t=341355 shows a 
sample.  A few computers from that era used them for the console; the CDC 1604 
seems to be an example.

paul




[cctalk] Re: oscilloscopes

2024-04-03 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 12:28 PM, Martin Bishop via cctalk  
> wrote:
> 
> Ignore my last - incontinence or is it incompetence
> 
> A fairly ordinary GPU, in a PC, could almost certainly provide an XY display 
> with Z fade (long persistance phosphor).  I use them for waterfall displays 
> and they keep up - the data does of course arrive by E'net.

Yes, and some emulations have done this, such as Phil Budne's famous work in 
SIMH.  I adopted some of those ideas for DtCyber (CDC 6000 emulation) and it 
works well, though the timing is marginal given the graphics subsystem I use.  
That is wxWidgets -- chances are SDL would do better, and some day I will try 
that.

> Equally, FPGAs / SoCs can implement frame buffers, e.g. to output waterfall 
> displays.  The fading memory would have to be in DRAM; FPGA memory is fast 
> but small -- 3 ns access time but only 240 KiB .. 2.18 MiB (Zynq 10 .. 45, 
> the '45 is a corporate purchase).  A ping-pong buffer arrangement could 
> implement fading - computed in either processor (vector instructions) or 
> logic (raw muscle).  The DAC input lines could supply the data.

Agreed, and that would be an elegant way to emulate a CDC DD60.  Or a GT40.  
You'd presumably want to use at least an HD level display signal (1920 by 
1080), if not double that, to make it look right; less than that would give 
pixel artefacts that make it not look as vector-like as you would want.
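
The fading part of such a scheme is cheap wherever it runs -- GPU, CPU vector 
unit, or FPGA logic.  A minimal frame-buffer sketch in Python/NumPy, with a 
made-up decay constant:

    import numpy as np

    DECAY = 0.85   # per-refresh persistence; closer to 1.0 = longer phosphor

    def next_frame(prev, strokes):
        # Ping-pong scheme: scale down the previous buffer (the fade), then
        # draw this refresh's points at full beam intensity on top.
        frame = prev.astype(np.float32) * DECAY
        for x, y, z in strokes:          # z = beam intensity, 0..255
            frame[y, x] = min(255.0, frame[y, x] + z)
        return frame.astype(np.uint8)

    frame = np.zeros((1080, 1920), dtype=np.uint8)
    frame = next_frame(frame, [(960, 540, 255)])   # one bright, fading dot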

paul

[cctalk] Re: oscilloscopes

2024-04-03 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 11:21 AM, Mike Katz via cctalk  
> wrote:
> 
> I'm surprised some digital scope manufacturer hasn't implemented X-Y-Z 
> control as an option.   Driving X-Y was fairly common for certain types of 
> signals.  And many also used the Z input.

Oh, they offer X/Y display, but sampled, just as in normal operation.  Some of 
the applications we're talking about here don't appreciate the digitization 
artefacts.

paul




[cctalk] Re: oscilloscopes

2024-04-03 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 11:16 AM, Paul Koning via cctalk  
> wrote:
> 
> 
> 
>> On Apr 3, 2024, at 11:01 AM, Guy Fedorkow via cctalk  
>> wrote:
>> 
>> Vintage computer enthusiasts might want to keep track of where to find 
>> CRT-based analog oscilloscopes, for use as output devices.
>> The early MIT and Lincoln Labs computers used D/A converters to steer and 
>> activate the beam on analog scopes to draw vector images.
>> Working on Whirlwind simulation, we've been able to get this technique to 
>> work with "real" oscilloscopes, e.g., Tek 475, but we have not yet found a 
>> single DSO that has X/Y _and_ Z inputs (let alone the required phosphor 
>> fade).
> 
> So did a whole range of DEC computers, of course.  And the famous CDC 
> mainframe console (DD60) though it did vectors only for text (graphics was 
> dot-mode only since it wasn't a major use case for that device).

The DD60 and its associated controller in the mainframe (6612 or 6602) made an 
interesting beast.  The interface between controller and display is a hybrid, 
with the positioning information delivered as 9 bits each of X and Y, but the 
character vectors are generated in the controller and sent to the display as 
analog waveforms, X and Y on differential pairs.

Another oddity is the character waveform generation: that uses two pairs of D/A 
converters, and the converters are essentially base one (unary) -- 6 equally 
weighted inputs to produce output values 0..6.  And since ROMs were hard to 
come by in 1964, at least ones with 100 ns cycle time, the digital inputs for 
the waveform generators are an amazingly large pile of gates.
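
In other words, something like this toy model (mine, not the 6612 circuit):

    def unary_dac(inputs):
        # "Base one": six equally weighted inputs, so the output level is
        # simply the count of asserted inputs -- values 0..6.  A binary
        # DAC would instead weight the inputs 1, 2, 4, ...
        assert len(inputs) == 6
        return sum(1 for bit in inputs if bit)

    assert unary_dac([1, 0, 1, 1, 0, 0]) == 3   # any three inputs high -> 3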

paul



[cctalk] Re: oscilloscopes

2024-04-03 Thread Paul Koning via cctalk



> On Apr 3, 2024, at 11:01 AM, Guy Fedorkow via cctalk  
> wrote:
> 
> Vintage computer enthusiasts might want to keep track of where to find 
> CRT-based analog oscilloscopes, for use as output devices.
> The early MIT and Lincoln Labs computers used D/A converters to steer and 
> activate the beam on analog scopes to draw vector images.
> Working on Whirlwind simulation, we've been able to get this technique to 
> work with "real" oscilloscopes, e.g., Tek 475, but we have not yet found a 
> single DSO that has X/Y _and_ Z inputs (let alone the required phosphor fade).

So did a whole range of DEC computers, of course.  And the famous CDC mainframe 
console (DD60) though it did vectors only for text (graphics was dot-mode only 
since it wasn't a major use case for that device).

I once built a graphics display setup for an 11/20 lab machine (in college) 
using DEC D/A modules (AA-01?) with an RC-11 disk serving as the refresh 
memory, DMA direct to the D/A data register.

paul




[cctalk] Re: EMP was: oscilloscopes

2024-04-02 Thread Paul Koning via cctalk



> On Apr 2, 2024, at 3:17 PM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> 
> On 4/2/2024 11:01 AM, Jon Elson via cctalk wrote:
>> On 4/2/24 00:03, Just Kant via cctalk wrote:
>>> According to certain individuals on this list, going back a few years, 
>>> electronics/computers can be damaged due to an electrical storm, presumably 
>>> very intense activity, even while off. Go look through the archives.
>>> 
>> I have had two incidents where nearby lightning strikes blew out components 
>> on gear I had.  Many years ago, I had two computers connected by a parallel 
>> port cable, and chips on both ends were popped by a strike that might have 
>> hit power lines about two blocks away.
>> About a decade ago, we had a lightning strike that hit trees half a block 
>> away.  It took out an ethernet port on one computer, and blew out a bunch of 
>> stuff on a burglar alarm I had built.  Both involved long wire runs.
> 
> I have had lots of stuff taken out by lightning.  Even when it
> wasn't particularly close but hit a power line as much as a mile
> away.

I used to have problems until I put substantial panel-mounted surge protectors 
both at the main power entry and at the panels of the barns.  Since then, all 
good.  Phone and cable are also protected, and all those protectors are mounted 
on a sheet of copper connected directly to the main panel shell and the main 
ground.

paul




[cctalk] Re: EMP was: oscilloscopes

2024-04-02 Thread Paul Koning via cctalk
Absolutely, by all means go right ahead.

As you pointed out, the NEC absolutely requires bonding all ground rods.  And 
Roger Block spells out in quite some detail why this is important in his books.

Come to think of it, apart from bonding electrical system grounds, I think 
there's also a requirement for bonding other metal objects that are anywhere 
nearby, like other utilities.  I'm not sure about the details; they should be 
in the NEC or in building codes.

paul

> On Apr 2, 2024, at 1:30 PM, Jay Jaeger via cctalk  
> wrote:
> 
> Paul, would you mind if I shared this on the Facebook EndFed Halfwave Antenna 
> group?  Time and time again I see folks talk about putting in ground rods in 
> for antennas and NOT bonding them to the electrical service ground rod.In 
> most (if not all) locations in the US, this kind of bonding is now a required 
> part of the electrical code, and newly constructed houses (or ones that have 
> their panels replaced) will typically have a "service ground' bus bar 
> installed near the electrical panel.
> 
> (The ARRL book is a pretty good resource on this topic, too, but real life 
> experience may convince some to think twice about what they are doing.)
> 
> https://www.arrl.org/grounding-and-bonding-for-the-amateur
> 
> JRJ
> 
> On 4/2/2024 10:13 AM, Paul Koning via cctalk wrote:
>> 
>>> On Apr 2, 2024, at 11:01 AM, Jon Elson via cctalk  
>>> wrote:
>>> 
>>> On 4/2/24 00:03, Just Kant via cctalk wrote:
>>>> According to certain individuals on this list, going back a few years, 
>>>> electronics/computers can be damaged due to an electrical storm, 
>>>> presumably very intense activity, even while off. Go look through the 
>>>> archives.
>>>> 
>>> I have had two incidents where nearby lightning strikes blew out components 
>>> on gear I had.  Many years ago, I had two computers connected by a parallel 
>>> port cable, and chips on both ends were popped by a strike that might have 
>>> hit power lines about two blocks away.
>>> 
>>> About a decade ago, we had a lightning strike that hit trees half a block 
>>> away.  It took out an ethernet port on one computer, and blew out a bunch 
>>> of stuff on a burglar alarm I had built.  Both involved long wire runs.
>> Some years ago we had a lightning strike on the driveway next to the house.  
>> It took out every single device directly or indirectly connected to the 
>> cable TV (also Internet) connection.  The reason was something I knew about 
>> but which I did not sufficiently understand: the cable TV connection came 
>> into the house at the opposite end from power and telephone, and was 
>> grounded there.
>> 
>> A lightning strike will set up a voltage gradient in the soil near the 
>> strike, so the "ground" seen by power and phone was at a very different 
>> voltage than the "ground" seen by the ground rod "protecting" the cable TV 
>> entry.  The resulting current actually evaporated the cable TV surge 
>> protector innards, and took out TV, printer, cable modem, Ethernet switch, 
>> PC, and a bunch of other things.
>> 
>> Lesson learned: I rerouted the cable TV to go first to the power entry 
>> point, and attached its protector to the same copper ground sheet that the 
>> other two protectors sit on.
>> 
>> A great reference for all this is the handbook "The grounds for lightning 
>> protection" by Polyphase Co., a maker of professional lightning protection 
>> devices.  I haven't done everything they call for -- for example, our house 
>> doesn't have a perimeter ground.  But it does now have single point 
>> grounding, and as a result we've had no trouble even though there have been 
>> plenty of lightning strikes in the neighborhood.
>> 
>>  paul
>> 



[cctalk] Re: EMP was: oscilloscopes

2024-04-02 Thread Paul Koning via cctalk



> On Apr 2, 2024, at 11:54 AM, steve shumaker via cctalk 
>  wrote:
> 
> Company (Polyphaser) web page doesn't seem to list that handbook as an 
> available lit product.  Can you suggest a source?
> 
> Steve
> 
> 
> On 4/2/24 8:13 AM, Paul Koning via cctalk wrote:
>> 
>> ...
>> A great reference for all this is the handbook "The grounds for lightning 
>> protection" by Polyphase Co., a maker of professional lightning protection 
>> devices.  I haven't done everything they call for -- for example, our house 
>> doesn't have a perimeter ground.  But it does now have single point 
>> grounding, and as a result we've had no trouble even though there have been 
>> plenty of lightning strikes in the neighborhood.
>> 
>>  paul

I found a somewhat different book from Polyphaser: 
https://w5nor.org/presentations/PolyphaserGuide.pdf which cites the other one 
in the references section at the back.  The material looks rather similar to 
the older book; I recognized a number of illustrations.  It seems to have more 
content, and more recent material.

The exact title is "The Grounds for Lightning and EMP Protection" by Roger 
Block, and searching for that title yields some sources, including a used copy 
on eBay and an online copy on Scribd.

paul



[cctalk] Re: EMP was: oscilloscopes

2024-04-02 Thread Paul Koning via cctalk



> On Apr 2, 2024, at 11:01 AM, Jon Elson via cctalk  
> wrote:
> 
> On 4/2/24 00:03, Just Kant via cctalk wrote:
>> According to certain individuals on this list, going back a few years, 
>> electronics/computers can be damaged due to an electrical storm, presumably 
>> very intense activity, even while off. Go look through the archives.
>> 
> I have had two incidents where nearby lightning strikes blew out components 
> on gear I had.  Many years ago, I had two computers connected by a parallel 
> port cable, and chips on both ends were popped by a strike that might have 
> hit power lines about two blocks away.
> 
> About a decade ago, we had a lightning strike that hit trees half a block 
> away.  It took out an ethernet port on one computer, and blew out a bunch of 
> stuff on a burglar alarm I had built.  Both involved long wire runs.

Some years ago we had a lightning strike on the driveway next to the house.  It 
took out every single device directly or indirectly connected to the cable TV 
(also Internet) connection.  The reason was something I knew about but which I 
did not sufficiently understand: the cable TV connection came into the house at 
the opposite end from power and telephone, and was grounded there.  

A lightning strike will set up a voltage gradient in the soil near the strike, 
so the "ground" seen by power and phone was at a very different voltage than 
the "ground" seen by the ground rod "protecting" the cable TV entry.  The 
resulting current actually evaporated the cable TV surge protector innards, and 
took out TV, printer, cable modem, Ethernet switch, PC, and a bunch of other 
things.

Lesson learned: I rerouted the cable TV to go first to the power entry point, 
and attached its protector to the same copper ground sheet that the other two 
protectors sit on.

A great reference for all this is the handbook "The grounds for lightning 
protection" by Polyphase Co., a maker of professional lightning protection 
devices.  I haven't done everything they call for -- for example, our house 
doesn't have a perimeter ground.  But it does now have single point grounding, 
and as a result we've had no trouble even though there have been plenty of 
lightning strikes in the neighborhood.

paul



[cctalk] Re: oscilloscopes

2024-04-01 Thread Paul Koning via cctalk



> On Apr 1, 2024, at 8:14 PM, Brent Hilpert via cctalk  
> wrote:
> 
> On 2024 Apr 1, at 3:33 PM, Just Kant via cctalk  wrote:
>> 
>> I have more than I need. All the working ones are HP w/color CRTs, and as 
>> far as older, verifiably vintage tools (right down to the 680x0 processor in 
>> either) I have to admit I favor them as a brand. Call me an oddball, weird 
>> egg, badges I wear with pride.
>> 
>> But who could resist the allure of the newer ultra portable, even handheld 
>> units (some with bandwidth or sampling rates to 50 MHz). I'm a big cheapo. 
>> But there's no real reason to agonize over a $65 - $200 or thereabouts 
>> acquisition. It's a bit tiring to wade through the piles of availability. I 
>> favor a desktop unit, larger screen (but not always, careful). But most of 
>> those need wall current I think? The convenience of a handheld battery 
>> powered unit obviously has its benefits.
>> 
>> I will always love and dote upon my color CRT based HPs. But the damned 
>> things are so heavy, so unwieldy. Judy-Jude knocked my 54111D over; it hit 
>> the paved floor, shook the house. And still works! Built to withstand an 
>> atomic bombardment.
> 
> 
> 
> Pardon the plug for my own web page, but given the topic of scopes and DSOs, 
> for any interested in some minor reading on the origins of the DSO and 
> geeking out on sophisticated and little-known HP equipment from their heyday:
> 
>   http://madrona.ca/e/HP5480A/index.html 
> 
> 
> Or TLDR: digital capture of analog signals to the low KHz in the late 1960s 
> using core memory & TTL, or, “a DSO before the DSO”.
> As for portability, it’s possible for one person to manhandle it around but 
> it comfortably needs 2 people to carry.

The same could be said for the Tektronix scope I have, a DSA602.  It just fits 
in an H960 rack, and weighs perhaps 50 pounds.  I can lift it -- if I'm careful.

That is my main oscilloscope, but once in a while I grab the Tek 7603 (thanks, 
Fair Radio!).  Analog, two channel, 100 MHz bandwidth on a good day.  But if I 
think I'm looking at aliased signals, the 7603 will tell me because it doesn't 
have any.

I once had a 535.  Repairing the HV supply was interesting; not all that easy 
to find the rectifier tubes.

Speaking of A/D specs, that old HP device reminds me of a digital voltmeter in 
my father's university lab, I think also by HP: it had a successive 
approximation A/D constructed out of relays.  It would typically sample every 
other second or so, and make a "krrrt" sound while all those relays were 
flipping.  The display was some number of columns of 10 light bulbs showing 
digits 0-9.
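
Successive approximation is the same trick whether built from relays or 
silicon.  A minimal binary sketch in Python (the relay instrument presumably 
worked digit by digit in decimal, but the principle is identical):

    def sar_adc(vin, nbits=10, vref=10.0):
        # Try each bit from MSB to LSB; keep it if the trial DAC voltage
        # does not exceed the input.  Each trial is one relay "krrrt".
        code = 0
        for bit in range(nbits - 1, -1, -1):
            trial = code | (1 << bit)
            if trial * vref / (1 << nbits) <= vin:   # comparator decision
                code = trial
        return code

    print(sar_adc(3.1416))   # -> 321, the largest code not exceeding the input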

paul

[cctalk] Re: oscilloscopes

2024-04-01 Thread Paul Koning via cctalk



> On Apr 1, 2024, at 8:09 PM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> On 4/1/2024 7:12 PM, Rick Bensene via cctalk wrote:
>>> And still works! Built to withstand an atomic bombardment.
>> Except for the EMP.   It'll theoretically render such devices nice looking, 
>> well-built scrap.
>> The old completely vacuum-tube-based, discrete component oscilloscope from 
>> back in the day  may actually survive such an event if it's outside the 
>> blast radius but still reasonably sheltered; and you are also outside lethal 
>> fallout zones, or can shelter and survive in radioactivity-safe places for a 
>> long time.
>> Stock up on quality-made (e.g., Tektronix, Hewlett Packard) tube and 
>> cold-cathode-based test equipment (VTVM, oscilloscope, etc.) as well as 
>> quality radios and transceivers.   Hopefully they will continue to serve as 
>> interesting artifacts of a time gone by, but if something were to go 
>> sideways in our world, they could potentially come in very handy.
> 
> You do realize that in the event of such an occurrence there
> would be nothing left to use them on.  :-)

Not many computers, but my 51J-3 would be just fine.

paul



[cctalk] Re: Amoeba OS

2024-03-29 Thread Paul Koning via cctalk



> On Mar 28, 2024, at 9:44 PM, ben via cctalk  wrote:
> 
> On 2024-03-28 5:50 p.m., Fred Cisin via cctalk wrote:
> 
>> OTOH, spammer mailing lists, and leaked personal and trade secrets seem to 
>> last forever.
> 
> You forgot Mickey Mouse.

Not any more; Steamboat Willie is in the public domain now.

paul




[cctalk] Re: Amoeba OS

2024-03-28 Thread Paul Koning via cctalk



> On Mar 28, 2024, at 1:02 PM, Alessandro Mazzini via cctalk 
>  wrote:
> 
> Sorry if I intrude... is it no longer possible to obtain hobbyist licenses 
> for VMS?

You can still get one for OpenVMS/x86.

paul




[cctalk] Re: Cleanup time again

2024-03-26 Thread Paul Koning via cctalk



> On Mar 26, 2024, at 2:59 PM, steve shumaker via cctalk 
>  wrote:
> 
> and,  if you inquire in the right places, there is law enforcement focused 
> forensic analysis software specifically designed to acquire RAID volumes and 
> rebuild the data.
> 
> Steve

Yes, though from the one time I encountered that use case I have my doubts 
about it.  I was asked to help with such a forensic analysis case, and the 
person I worked with started by asking me about the "BIOS settings" on our SAN 
array, and whether the setting was "left to right" or "right to left".  For 
some reason, that person could not cope with answers like "we don't have a 
BIOS" and "neither left-to-right nor right-to-left".  Once I hit that road 
block I decided not even to bother mentioning that our SAN device included page 
based virtualization.  Never did hear anything further.  :-)

paul



[cctalk] Re: Cleanup time again

2024-03-26 Thread Paul Koning via cctalk



> On Mar 26, 2024, at 10:08 AM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> 
> 
> On 3/26/2024 9:15 AM, Paul Koning wrote:
>>> On Mar 26, 2024, at 8:57 AM, Bill Gunshannon via cctalk 
>>>  wrote:
>>> ...
>> Do you have just part of the RAID set, or enough disks to make a complete 
>> one?  
> 
> Don't know, but doubt it.  Some of the disks have probably been used
> for other purposes since the VAXen went away more than 20 years ago.
> 
>> If the latter then it's a matter of reverse engineering the RAID layout,  
>> which is likely to be doable.
> 
> While possible, I think hardly likely.  I don't even remember what the
> appliance was.  Something DECish.

Chances are those were classic RAID systems, with fixed layouts across much of 
the RAID set (not "mapped RAID"), exposing what looks like a regular device LUN 
(no page-based virtualization).  If so, there is only a limited set of 
possibilities, basically a question of stripe sizes, drive count, and drive 
order.  Given a guess (or better) of what's on it, such as what file system 
type, the right layout would be clear from the fact that it produces valid 
content.

It would be a pain to try this with modern complex SAN devices, but with those 
of 30+ years ago it's not quite so bad.
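
A sketch of that search in Python -- the signature test is a placeholder; in 
practice you would check for whatever you expect on the volume (an ODS-2 home 
block for VMS, a BSD superblock, etc.):

    import itertools

    STRIPES = (8192, 16384, 32768, 65536)   # candidate stripe sizes, in bytes

    def looks_valid(image):
        # Placeholder: substitute a real test for the expected file
        # system's on-disk signature at its known offset.
        return image[:4] != b"\x00\x00\x00\x00"

    def candidate_layouts(disks):
        # disks: list of equal-length disk images (bytes).  Try every
        # drive order and stripe size; plain striping (RAID-0) shown --
        # RAID-5 would also rotate and drop the parity stripes.
        for order in itertools.permutations(range(len(disks))):
            for stripe in STRIPES:
                start = b"".join(disks[d][:stripe] for d in order)
                if looks_valid(start):
                    yield order, stripe

For a handful of drives the search space is tiny; with many drives you would 
first narrow down each drive's likely position from its content.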

paul



[cctalk] Re: Cleanup time again

2024-03-26 Thread Paul Koning via cctalk



> On Mar 26, 2024, at 8:57 AM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> 
> On 3/25/2024 9:51 PM, Henry Bent wrote:
>> On Mon, 25 Mar 2024 at 20:14, Bill Gunshannon via cctalk 
>> wrote:
>>Oops.  I guess the fingers work as good as the memory.  Sorry
>>about that.  I've got about 20 of them.  I know they haven't
>>been used since they were taken out of the VAX Cluster I ran
>>at the University.  Nothing I have used the SB boxes with since
>>then would know what to do with 9GB of disk space.  :-)
>>But, if needed I could probably test them on a PC I have with
>>an Adaptec SCSI in it.  It's intended for Ersatz-11 but I expect
>>DOS could use a disk that big.  Too bad there's no way to read
>>them.  Might be some interesting stuff left behind by the VAX.
>> Why is there no way to read them?  If you have a PC with a SCSI card you can 
>> easily boot into the Linux or BSD distro of your choice and make a dd (or 
>> ddrescue) image of the entire drive, which could then be accessed by 
>> whatever means.
> 
> 
> These disks were part of a really large RAID array in a SAN connected to
> the VAX cluster.  There is no way of reconstructing it and so no way to
> extract usable information.
> 
> bill

Do you have just part of the RAID set, or enough disks to make a complete one?  
If the latter then it's a matter of reverse engineering the RAID layout, which 
is likely to be doable.

paul



[cctalk] Re: DEC Processor Books

2024-03-24 Thread Paul Koning via cctalk



> On Mar 24, 2024, at 12:31 PM, Adrian Godwin via cctalk 
>  wrote:
> 
> It's well known that a necktie restricts the supply of blood to the brain.

One of my DEC coworkers called it a "cranial tourniquet".

paul



[cctalk] Re: Cleanup time again

2024-03-23 Thread Paul Koning via cctalk



> On Mar 23, 2024, at 11:16 AM, Bill Gunshannon via cctalk 
>  wrote:
> 
> 
> 
> Here's something operators of older systems might find useful.
> 
> Allied Telesis CentreCOM 210TS Twisted Pair Transceiver
>   IEEE 802.3 10BASE-T (MAU)
> 
> I have 14 used and another 14 still in the box, never been opened.
> 
> bill

Nice.

FWIW, 10BaseT transceivers are still made, for example 
https://www.l-com.com/ethernet-converters-l-com-10baset-to-t-aui-ethernet-transceiver
 and 
https://www.omnitron-systems.com/product-families/flexpoint-unmanaged-media-converters/flexpoint-ethernet-copper-to-fiber-media-converters/flexpoint-10aui-t

paul


