4.2BSD TU58 distribution tape for VAX-11/750?

2016-04-23 Thread Josh Dersch

Hey all --

I'm researching what I need to have on hand to get 4.2BSD installed and 
running on my 11/750.  I'm pretty close to having mass storage working: 
I have a SCSI TMSCP tape controller that should do the job in 
conjunction with a SCSI 9-track drive, and the VAX itself seems to be 
happy.  What I don't have is a copy of the TU58 cassette that would have 
been provided with the 4.2BSD distribution (at least, according to the 
installation documents).  This contains utilities for formatting the 
disk and copying the root filesystem (from a *real* tape drive) to the 
root partition, so they're pretty essential for bringing a machine up 
from scratch.


If I had a SCSI *disk* controller, I could cheat and do the installation 
on SIMH (which avoids using the TU58 by cheating in a different way) and 
DD the whole thing over, but I'm not so blessed.


I can't seem to track down a copy of this TU58 on the 'net -- anyone 
have one squirreled away somewhere, or know where I should be looking?


Thanks,
Josh


Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread Eric Smith
On Sat, Apr 23, 2016 at 8:18 PM, dwight  wrote:
> You'd need to decide, LOAD is switch up or
> LOAD is switch down.

Which still doesn't explain how a toggle (not momentary) can be said
to have NC and NO pins. But at this point I'm flogging a dead horse.

> Even if a single wire, it needs to be debounced.

Yes. Elf2K uses both contacts with an S-R FF. My other system is VHDL
in an FPGA and has a counter-based debouncer.
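The counter-based approach Eric mentions can be sketched in software as well. This is purely an illustrative C sketch, not code from either the Elf2K or his FPGA design; the type and function names and the sample-count figure are my own assumptions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Counter-based debouncer: the raw switch level must be stable for
 * DEBOUNCE_TICKS consecutive samples before the reported state
 * changes.  Call debounce_sample() at a fixed rate (e.g. 1 kHz). */
#define DEBOUNCE_TICKS 20  /* ~20 ms at a 1 kHz sample rate (assumed) */

typedef struct {
    bool stable;      /* last debounced state reported */
    bool candidate;   /* raw level we are currently counting toward */
    uint8_t count;    /* consecutive samples seen at 'candidate' */
} debouncer_t;

bool debounce_sample(debouncer_t *d, bool raw)
{
    if (raw != d->candidate) {
        d->candidate = raw;  /* level changed: restart the count */
        d->count = 0;
    } else if (d->count < DEBOUNCE_TICKS) {
        if (++d->count == DEBOUNCE_TICKS)
            d->stable = d->candidate;  /* held long enough: accept it */
    }
    return d->stable;
}
```

Bouncy transitions reset the counter each time the level flips, so only a level held for the full window propagates to the output.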


Re: Tadpole Sparcbook Hard Drive

2016-04-23 Thread Ben Sinclair
I have nothing unfortunately! It's that caddy and cable that's the hard
part.

On Sat, Apr 23, 2016 at 9:28 PM, Ian Finder  wrote:

> What's your scenario? Do you have the caddy and cable? If so, a SCSI2SD is
> a good bet.
>
> Sent from Outlook for iPhone
>
> 
> From: cctalk  on behalf of Ben Sinclair <
> b...@bensinclair.com>
> Sent: Saturday, April 23, 2016 6:16:10 PM
> To: General Discussion: On-Topic and Off-Topic Posts
> Subject: Tadpole Sparcbook Hard Drive
>
> This is a long shot, but does anyone have a Tadpole Sparcbook 3TX hard
> drive?
>
> Their existence may be just a myth.
>
> --
> Ben Sinclair
> b...@bensinclair.com
>



-- 
Ben Sinclair
b...@bensinclair.com


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread dwight
I recall going to Mike Quinn's and seeing barrels of RTL.
I wish now that I'd bought a bunch of them.
Most DTL parts can be replaced by TTL equivalents, except a few with
different pinouts and the NAND with the diode expander pin.
My oldest equipment has a mix of DTL and TTL.
Dwight




Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread dwight
You'd need to decide, LOAD is switch up or
LOAD is switch down.
Even if a single wire, it needs to be debounced.
Dwight



From: cctalk  on behalf of Eric Smith 

If it was momentary, it would have to be momentary-off-momentary, so
it still wouldn't really make sense (IMNSHO) to refer to the contacts
as NC and NO.


Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread Eric Smith
On Sat, Apr 23, 2016 at 5:09 PM, dwight  wrote:
> If it were mine to make, it would be a spring return
> momentary SPDT. NC would be ground and NO would be +5V,
> as per the schematic.
> I'm not sure what the manual says about it.
> It is debounced with a jam latch, as per schematic.

That would be fine for the Elf2K, but I want to also use the same
switch panel for a different design that uses a single-ended input
only, so I need it to be a true toggle rather than momentary.

If it was momentary, it would have to be momentary-off-momentary, so
it still wouldn't really make sense (IMNSHO) to refer to the contacts
as NC and NO.


FA: 5.25" floppy mailers

2016-04-23 Thread Chuck Guzis
I just came across two unopened boxes (500 each) of 6"x0" 5.25" floppy
disk mailers.

Anyone want them?  You can have them for shipping, FOB 97405.

--Chuck


Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Chris Hanson
Overall I'm personally much more about using the system *as a whole* than using 
it *as it was*.

For example, I have a Mac IIci with maxed-out RAM, some large SCSI disks, 
Ethernet, and an accelerated NuBus video card, all possible at the time. 
(Though 128MB RAM and the 1GB disks would have bankrupted a small nation in 
1989.) I'm not going to plug it into a period monitor, though; I just acquired a 
multisync LCD with which to replace its current late-1990s CRT. Similarly, I 
have a BigMessOWires MacFloppyEmu for most of my real storage needs, which has 
virtually no latency and virtually infinite capacity, so I can spend my time 
with the system using it rather than spend it all booting it and launching 
applications.

Similarly, I pulled the 4GB drive from my SPARCstation 20[1] and put in a pair 
of 167GB 15.5K Cheetahs, so I could run Solaris 8 and NetBSD fast and never 
worry about space. I have an external 411 to put the 4GB drive in, so I can 
still run SunOS, I've put in a SunSwift 100Base-T card to make getting things 
to the system faster, and if I add more storage it'll be with a SCSI2SD or 
equivalent, again so I can use the system rather than wait to use it.

I've even looked a little at ProFile emulation for my Lisa 2/10. I don't even 
know if its current ProFile still works, since all ProFiles are old enough at 
this point that their formatting is decaying, and I've also moved a few times 
in the dozen years since I last booted it. (At least the MacFloppyEmu will also 
work with the Lisa, so I don't need its 3.5in drive to work, or to run Dart on 
my IIci to write 400KB floppies…) And again, to avoid spending all my time with 
it waiting, I've looked a tiny bit at how to replace the two 512KB RAM boards 
with a single board with 2MB, 8MB, or whatever it will take, using some more 
modern hardware.

I want to use the systems as a whole enough not to just live in emulation, but 
I only have a limited amount of time to spend with them, so replacing just a 
few subsystems in ways that make the use of the overall systems smoother seems 
like a reasonable compromise.

  -- Chris

[1] When I first got my SS20 home and booted it, it came up as 
ids-three.smcc.sun.com and said it was starting Cadence license servers. I 
assume it was an identity server for Sun Microcomputer, if anyone knows more 
I'd love to hear about it.


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Brent Hilpert

On 2016-Apr-23, at 4:15 PM, Jon Elson wrote:

> On 04/23/2016 05:46 PM, Chuck Guzis wrote:
>> On 04/23/2016 02:34 PM, Brent Hilpert wrote:
>> 
>>> I was surprised by the early date code on the 7490s when I ran across
>>> them in a piece of test equipment.
>> What was surprising to me is how quickly the industry standardized on
>> the TI 7400/5400 parts.   Early (ca 1967) Moto databooks had MTTL I,
>> MTTL II and MTTL III that were essentially sui generis.  By 1969, the
>> MC7400/5400 had pretty much taken over.  Things moved really quickly
>> back then.
>> 
>> 
> Lots of designers and system manufacturers were VERY leery of adopting 
> anything single-source.
> When a number of chip makers (Nat Semi, Motorola, Signetics, Fairchild) all 
> jumped onto making compatible 7400 parts, the industry had the confidence 
> that parts in the series would be available for a long time.  Back in the 
> late 60's, early 70's the industry was moving at a breakneck pace, and chip 
> families had very short lifetimes before their makers hopped onto the next 
> new thing. (Oh, yeah, you said the same thing in the last sentence!)

Fellow here did some research into the 181 history and came to a similar 
conclusion:
http://ygg-it.tripod.com/id1.html
He suggests that TI's being the first to contract for a second source is what 
sent the buyers (esp. military) to the 7400 series.



First Retail version of MS-DOS was Re: Ibm s-100 system?

2016-04-23 Thread Ali
I am not sure if 5.00 was the first retail version. I know for a fact that 
there is a 3.2 version released in the blue plexiglass Microsoft retail 
packaging. The 4.x versions are usually gray boxed, with some having OEM/new 
computer stickers.
-Ali

Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread Brent Hilpert
I think you two are talking different versions of the Elf: Eric is talking 
specifically about the Elf 2000 whose circuit design has been modified 
considerably from the original version Dwight appears to be referring to.

For my part, when I refurbished a period homebrew implementation (someone's 
high school project) I relabeled the switches to indicate all 4 of the 
functional states of the 2 switches rather than the somewhat inexplicable 
"LOAD" and "RUN".
http://www.cs.ubc.ca/~hilpert/e/cosmacElf/index.html


On 2016-Apr-23, at 4:09 PM, dwight wrote:

> If it were mine to make, it would be a spring return
> momentary SPDT. NC would be ground and NO would be +5V,
> as per the schematic.
> I'm not sure what the manual says about it.
> It is debounced with a jam latch, as per schematic.
> Tinker Dwight
> 
> 
> 
> From: cctalk  on behalf of Eric Smith 
> 
> 
> On Sat, Apr 23, 2016 at 11:28 AM, dwight  wrote:
>> I looked at the schematic pdf and it looks right.
>> The ground lead is the NC.
> 
> It's an SPDT toggle. Neither the NC nor the NO pin of the switch should be
> tied to ground; it's the common pin that's grounded. So for a toggle
> switch, which side is NC and which is NO?
> 
> Anyhow, apparently if I'd read it more carefully, it is explained in the 
> manual.



Re: Ibm s-100 system?

2016-04-23 Thread Chuck Guzis
On 04/23/2016 05:09 PM, Fred Cisin wrote:

> After Microsoft upgraded PC-DOS from 1.00 to 1.10, they then
> provided computer OEMs with a similar product numbered 1.25.  OEMs were
> expected to make their own personalized and customized IO.SYS, 
> MODE.COM (could also do stuff such as switching between 
> internal/external monitor, etc.), etc. OEMs did not even need to call
> it "MS-DOS"  (Zenith Z-DOS)

I have a recollection that Bill had a problem with licensing terms for
DOS--something about a minimum license fee for OEMs.  They cooked up a
way to take a copy of PC DOS 1.1 and replace the IBMBIO.SYS component
and get the thing to work on an 85/88 system.

--Chuck



Re: Ibm s-100 system?

2016-04-23 Thread Fred Cisin

On Sat, 23 Apr 2016, william degnan wrote:

I have a copy of a MS or IBM DOS for my CompuPro on 8" disk, I think it's
v. 1.25.


1.25 would be MS-DOS.  The PC-DOS equivalent was 1.10
("equivalent", NOT exactly the same (GWBASIC, MODE.COM differences, 
IO.SYS/IBMBIO.COM differences, FORMAT, DISKCOPY, DISKCOMP differences))


MS-DOS 1.25 could be configured by OEM for various disk formats.
At that time, MS-DOS was not available for direct retail sale.  "Only 
available from computer OEM".  OEMs could sell it retail to buyers of 
their computers, and theoretically ONLY to them.  In reality, few OEMs 
wanted to, nor bothered to, confirm "legitimacy" of their customers, 
and many OEM copies found their way to retail sales. ("gray market")  In 
fact NO version of MS-DOS was legally retail until 5.00.  MANY OEMs let 
out "OEM" MS-DOS for retail sale by third parties.

PC-DOS 1.10 was 160K and 320K ONLY.
Many OEM versions of MS-DOS 1.25 (such as Compaq) deliberately lockstepped 
with IBM, and did not offer any diverging capabilities.   For many of 
those OEMs, the main thing that they had to offer was a [cheaper] way to 
get a 5150, with possibly some clever developments.



I also have a copy of 1.25 on my CBM 256x, with 8086
co-processor.   Neither of these were BEFORE the original IBM PC, but hint
at a time when there was a "DOS" in development floating around before the
actual IBM system.  This DOS could have been demoed in Europe on an S-100.
lots of conjecture here.  Just speculating.
1.25 could NOT have been demoed in Europe, nor anywhere else until 1982, 
well after the 5150.


HOWEVER, it is entirely possible that that "IBM S-100" was merely a 
Seattle Computer Products S100, or even an SCP 8086 CPU board in a 
Cromemco S100 system, with a piece of tape changing the name.
(In order to "test the waters" about how an IBM 808x machine would be 
received?)



PC-DOS 1.00 came out in August 1981.
PC-DOS 1.10 came out in May 1982.
MS-DOS 1.25 came out in June 1982.

The existence of later versions AFTER the first one, does NOT "hint at a 
time when there was a DOS in development floating around before" the first 
one. Existence of a later version never hints at presence of something 
before the first one, although CONTENT might give some clues of what was 
being considered at various times.  It hints at further development, and 
the possibilities of spreading out into areas where IBM did not tread.



But, YES, the very first version WAS for an S100 machine made by Seattle 
Computer Products.  Written by Tim Paterson, it was called "QDOS" (Quick 
and Dirty OS), and then "86-DOS".  (and, for a while, "SB-86")  YES, it 
supported 8".


Then, in July 1981, SCP sold it to Microsoft, who handed off to IBM.

Microsoft's contract permitted them to sell it to other computer OEMs, but 
ONLY through other computer OEMs.  THAT restriction ended in 1991, 
commemorated with MS-DOS 5.00.


After Microsoft upgraded PC-DOS from 1.00 to 1.10, they then provided 
computer OEMs with a similar product numbered 1.25
OEMs were expected to make their own personalized and customized IO.SYS, 
MODE.COM (could also do stuff such as switching between internal/external 
monitor, etc.), etc.

OEMs did not even need to call it "MS-DOS"  (Zenith Z-DOS)


Trivia: there was no one point one; it was one point ten.
Internally, the minor version of the OS was stored as a two digit decimal 
number, obviously stored in binary.  Thus, "1.1" was stored as a 1 and 
0Ah.

MOV AH, 30h
INT 21h

returns the major version in AL, minor in AH.  "1.1" returns 0A01h, 1.25 
returns 1901h, 3.30 returns 1E03h, etc.
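Fred's version-word layout can be sanity-checked with a few lines of arithmetic. This is only an illustrative C decoder of the AX value returned by the DOS call (the INT 21h invocation itself needs real DOS; the function name is mine):

```c
/* Decode the AX value returned by DOS INT 21h, AH=30h:
 * major version in AL (low byte), two-digit decimal minor in AH
 * (high byte).  So AX = 0x1901 decodes as major 1, minor 25. */
void decode_dos_version(unsigned ax, int *major, int *minor)
{
    *major = ax & 0xFF;         /* AL: major version */
    *minor = (ax >> 8) & 0xFF;  /* AH: minor, e.g. 0x0A = 10 for "1.10" */
}
```

This is why "1.1" and "1.10" differ: the minor byte 0Ah decodes as 10, not 1.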


--
Grumpy Ol' Fred ci...@xenosoft.com


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Jon Elson

On 04/23/2016 05:46 PM, Chuck Guzis wrote:

On 04/23/2016 02:34 PM, Brent Hilpert wrote:


I was surprised by the early date code on the 7490s when I ran across
them in a piece of test equipment.

What was surprising to me is how quickly the industry standardized on
the TI 7400/5400 parts.   Early (ca 1967) Moto databooks had MTTL I,
MTTL II and MTTL III that were essentially sui generis.  By 1969, the
MC7400/5400 had pretty much taken over.  Things moved really quickly
back then.


Lots of designers and system manufacturers were VERY leery 
of adopting anything single-source.
When a number of chip makers (Nat Semi, Motorola, Signetics, 
Fairchild) all jumped onto making compatible 7400 parts, the 
industry had the confidence that parts in the series would 
be available for a long time.  Back in the late 60's, early 
70's the industry was moving at a breakneck pace, and chip 
families had very short lifetimes before their makers hopped 
onto the next new thing. (Oh, yeah, you said the same thing 
in the last sentence!)


Jon


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Jon Elson

On 04/23/2016 04:34 PM, Brent Hilpert wrote:



The interesting thing was that there seemed to be a distrust of LSI
chips early on.  I recall working on a project around 1973, where the
lead engineer preferred to design his own UART from SSI rather than use
one of the new UART chips.

Well, he may have been worried about availability, or 
possibly the part going obsolete.
Those are other issues that a designer might be concerned 
about, as well as reliability.


Jon


Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread dwight
If it were mine to make, it would be a spring return
momentary SPDT. NC would be ground and NO would be +5V,
as per the schematic.
I'm not sure what the manual says about it.
It is debounced with a jam latch, as per schematic.
Tinker Dwight



From: cctalk  on behalf of Eric Smith 


On Sat, Apr 23, 2016 at 11:28 AM, dwight  wrote:
> I looked at the schematic pdf and it looks right.
> The ground lead is the NC.

It's an SPDT toggle. Neither the NC nor the NO pin of the switch should be
tied to ground; it's the common pin that's grounded. So for a toggle
switch, which side is NC and which is NO?

Anyhow, apparently if I'd read it more carefully, it is explained in the manual.


Re: Ibm s-100 system?

2016-04-23 Thread william degnan
On Sat, Apr 23, 2016 at 5:56 PM, Jim Brain  wrote:

> On 4/22/2016 11:02 AM, Guy Sotomayor wrote:
>
>> I wrote a somewhat long post a while ago on why we’re still stuck with
>> various timing artifacts due to the original PC’s choice to use an NTSC
>> color burst crystal as the main crystal for the PC. TTFN - Guy
>>
> Link?
>
> --
> Jim Brain
> br...@jbrain.com
> www.jbrain.com
>
>

I know Jon (Glitch Works) has an S-100 card with this IBM square RAM. I
saw it at VCF East the other weekend.

Anyway, I seem to recall that an S-100 system was used to actually test and
port the original IBM DOS 1.0 for 5 1/4" for the new IBM system.

I have a copy of a MS or IBM DOS for my CompuPro on 8" disk, I think it's
v. 1.25.  I also have a copy of 1.25 on my CBM 256x, with 8086
co-processor.   Neither of these were BEFORE the original IBM PC, but hint
at a time when there was a "DOS" in development floating around before the
actual IBM system.  This DOS could have been demoed in Europe on an S-100.

Lots of conjecture here.  Just speculating.

-- 
@ BillDeg:
Web: vintagecomputer.net
Twitter: @billdeg 
Youtube: @billdeg 
Unauthorized Bio 


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Chuck Guzis
On 04/23/2016 02:34 PM, Brent Hilpert wrote:

> I was surprised by the early date code on the 7490s when I ran across
> them in a piece of test equipment.

What was surprising to me is how quickly the industry standardized on
the TI 7400/5400 parts.   Early (ca 1967) Moto databooks had MTTL I,
MTTL II and MTTL III that were essentially sui generis.  By 1969, the
MC7400/5400 had pretty much taken over.  Things moved really quickly
back then.

I recall the mW MRTL "experimenter's pack" with HEP part numbers.  IIRC,
about half the projects in the accompanying booklet used the RTL stuff
in analog, not digital applications.

--Chuck





Re: Ibm s-100 system?

2016-04-23 Thread Jim Brain

On 4/22/2016 11:02 AM, Guy Sotomayor wrote:
I wrote a somewhat long post a while ago on why we’re still stuck with 
various timing artifacts due to the original PC’s choice to use an 
NTSC color burst crystal as the main crystal for the PC. TTFN - Guy 

Link?

--
Jim Brain
br...@jbrain.com
www.jbrain.com



WTB: Sun Voyager Bag, Oberheim Synthesizer

2016-04-23 Thread ethan


Two things on the hunt list:

1. Sun Sparcstation Voyager bag (the bag to put it in)

2. Oberheim Matrix 6, 6R or 1000 synthesizers.



Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Brent Hilpert

On 2016-Apr-23, at 10:06 AM, Chuck Guzis wrote:

> On 04/23/2016 05:41 AM, Noel Chiappa wrote:
>>> From: Brent Hilpert
>> 
>>> I'd say the 74181 (1970) deserves a mention here. Simpler (no
>>> register component, ALU only) but it pretty much kicked off the
>>> start of IC-level bit slicing.
> 
> I recall reading about the 74181 introduction back in the day--it
> created great excitement and speculation about how far the industry was
> from a computer-on-a-chip.  I think I still have a couple of the things
> in my hellbox.

In 1972 or 1973 one of Radio Electronics or Popular Electronics had a 
construction article for the E&L Instruments Digi-Designer.
If you recall, the Digi-Designer was essentially a vehicle for E&L's new 
plug-in breadboard.  For those younger: yes, -those- plug-in breadboards, which 
are still the most prevalent hardware prototyping/educational technique today.

AIR, the 74181 was featured as an experiment to wire up on the Digi-Designer in 
that article.


> In the day, I'm not certain that TTL had the edge on integration,
> however.  It always seemed that DTL and RTL had the edge in complexity.
> Before the 181, I was playing around with the RTL 796 dual full adder
> and an 8-bit Fairchild DTL memory--IIRC the latter used a 7V clock.

I think TTL was quickly on par for density with DTL & RTL and overtook them by 
the late 60s.

I have 7490s (decade counter) from late-1966 and early-1967, and many TTL MSI 
functions were there by 1969.
The 7484 (16-bit memory) is listed in the TI 1969 TTL databook.
RTL was passé by then and DTL was heading that way. I don't think DTL got much 
more complex than the 8-bit memory you mention, at least in the main.

I was surprised by the early date code on the 7490s when I ran across them in a 
piece of test equipment.




> The interesting thing was that there seemed to be a distrust of LSI
> chips early on.  I recall working on a project around 1973, where the
> lead engineer preferred to design his own UART from SSI rather than use
> one of the new UART chips.
> 
> --Chuck
> 
> 
> 
> 



RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Peter Coghlan
>
>  First of all you might be able to run some SRAM diagnostics yourself, 
> either from the console (if it has a tool for this; at worst you could 
> poke at it manually with deposit/examine commands, but the complicated 
> flashbus access protocol will make it a tedious task unless there is a way 
> to script it) or from the OS (can't help how to do this from VMS; under 
> Linux you could mmap(2) /dev/mem at the right address and then poke at it 
> with a little program doing the right dance to get the flashbus access 
> protocol right), to see if it shows any symptoms of misbehaviour.
>

The equivalent in VMS is sys$crmpsc.  I can supply a sample program
(in macro32) which calls it to read the firmware in a VAXStation 2000.
I don't know the right addresses to use for an Alphastation 200 though.

(Congratulations on the progress made so far.  Maybe there is hope yet
for my two DEC 3000/600 machines which have similar symptoms.)

Regards,
Peter Coghlan.



RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Robert Jarratt


>  First of all you might be able to run some SRAM diagnostics yourself,
> either from the console (if it has a tool for this; at worst you could
> poke at it manually with deposit/examine commands, but the complicated
> flashbus access protocol will make it a tedious task unless there is a
> way to script it)

I have been trying to work out how to do that very thing. I don't think
there is quite enough info in the technical manual.

> or from the OS (can't help how to do this from VMS; under Linux you could
> mmap(2) /dev/mem at the right address and then poke at it with a little
> program doing the right dance to get the flashbus access protocol right),
> to see if it shows any symptoms of misbehaviour.


If necessary I should be able to install linux, but still, working out the
address doesn't seem trivial.

> 
>  To experiment with DROM you might be able to find a DIP-to-PLCC socket
> adapter, pinouts for ROMs are pretty standard I believe.

I have ordered an adapter so I can read the ROM. Would be good, as you
suggest, to read someone else's to compare...


> A ROM emulator
> might help too if you can get your hands on one, second-hand units are not
> exactly expensive nowadays as they went out of favour it would seem.
> 
>  Overall, hard to say which failure case would be better (or worse).  It
> looks to me like at this point you have several options to proceed with.
> 
>  Good luck with your investigation and recovery!  I wish I had one of
> these machines, they are sweet and they run Linux out of the box. ;)
> 
>   Maciej

Many thanks for all your help today!

Regards

Rob



RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Maciej W. Rozycki
On Sat, 23 Apr 2016, Robert Jarratt wrote:

> >  Second: it may also be that DROM itself is faulty, a bit may have
> > flipped for example; NB this is UV EPROM, so things happen.  Maybe
> > someone can share a known-good image.
> > 
> 
> When I put the DROM back the failure returned. So it is either a bad SRAM, a
> short/open somewhere, or the DROM code itself. I have a PROM programmer, but
> it is only for DIP packages, I don't have the facilities to read a PROM in
> this type of package. I will have another look round for any obvious
> problems on the board, but it looked OK to me when I checked. Getting the
> SROM diags out might help too, I will have a look at that.

 First of all you might be able to run some SRAM diagnostics yourself, 
either from the console (if it has a tool for this; at worst you could 
poke at it manually with deposit/examine commands, but the complicated 
flashbus access protocol will make it a tedious task unless there is a way 
to script it) or from the OS (can't help how to do this from VMS; under 
Linux you could mmap(2) /dev/mem at the right address and then poke at it 
with a little program doing the right dance to get the flashbus access 
protocol right), to see if it shows any symptoms of misbehaviour.
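The mmap-and-poke pattern described above can be sketched roughly as below. This is an illustrative sketch only: it maps an ordinary file rather than /dev/mem, because the SRAM's physical base address and the flashbus access dance are board-specific and not known here, and the helper name is mine. On the real machine you would open "/dev/mem" as root and pass the page-aligned physical address as the offset.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Map 'len' bytes of 'path' starting at page-aligned 'offset' and
 * return a volatile pointer for peeking/poking, or NULL on failure.
 * For real hardware, 'path' would be "/dev/mem" and 'offset' the
 * device's physical base address (both assumptions, board-specific). */
volatile unsigned char *map_region(const char *path, size_t len, off_t offset)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);
    close(fd);  /* a successful mapping survives closing the fd */
    return (p == MAP_FAILED) ? NULL : (volatile unsigned char *)p;
}
```

The volatile qualifier keeps the compiler from caching or reordering the accesses, which matters when each read/write is part of a device access protocol.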

 To experiment with DROM you might be able to find a DIP-to-PLCC socket 
adapter, pinouts for ROMs are pretty standard I believe.  A ROM emulator 
might help too if you can get your hands on one, second-hand units are not 
exactly expensive nowadays as they went out of favour it would seem.

 Overall, hard to say which failure case would be better (or worse).  It 
looks to me like at this point you have several options to proceed with.  

 Good luck with your investigation and recovery!  I wish I had one of 
these machines, they are sweet and they run Linux out of the box. ;)

  Maciej


Re: Keys - Non-Ace was RE: ACE Key codes (xx2247 etc.)

2016-04-23 Thread Dennis Boone
 > On a related note, a former DEC field engineer gave me this key (and
 > keychain). He thought it was a PDP-8 key at first, but it's not the
 > standard XX2247. It says KBM1100...any ideas what this might go to?

VAXen were used in GE EDACS repeater controllers, so perhaps one of
those systems?

De


Re: CDC 6600/Cyber 73 Memories - WAS: Harris H800 Computer

2016-04-23 Thread Camiel Vanderhoeven
Now that we're on the subject of 6600's and the like... I have a bit
of a puzzle. I have some CDC 7600 modules; these consist of 8 thin
PCB's, with metal shielding in between. On the back, there are 8 rows
of 16 pins, and on the front there are 8 rows of 6 recessed pins,
staggered (I believe for testing/debugging purposes). On the top and
bottom of the front is a tab with a screwhole in it, black like the
entire front of the modules, and as wide as the module itself. So far
so good; this matches all the photos of 7600 modules I've seen
online.

Now, I also have some modules that are the same form factor, same
number of pins in the back, but they're different. There are only 4
PCBs, no shielding between them except in the middle, 4 x 8 pins in
the front, and the tabs on the top and bottom of the front are small
(not as wide as the module itself) and silver-colored. The PCBs are
not connected with soldered-in wiring, but with gold-plated wires that
run between sockets in the PCBs. On the PCBs are loose transistors,
resistors, etc. like in the 7600 modules, but also some square Fujitsu
ECL ICs (100550 and the like), with date codes that indicate 1983,
which sounds a bit late for a 7600.

Any idea what these modules might have been used in?

Camiel



On Fri, Apr 22, 2016 at 2:50 AM, Paul Koning  wrote:
>
>> On Apr 21, 2016, at 7:33 PM, Chuck Guzis  wrote:
>>
>> ...
>>> Neat.  PLATO made extensive use of ECS, swapping per-terminal state
>>> and programs in and out of ECS for fast interactive service.  ECS was
>>> also where most I/O buffers went, with PPUs doing disk and terminal
>>> I/O from/to ECS rather than central memory.  A dual mainframe 6500
>>> system (4 "unified" processors total) did a decent job supporting 600
>>> concurrent logged-in terminals, out of a total of 1008 connected.
>>> That was around 1977 when I worked on that system at the U of
>>> Illinois.
>>
>> Was that UIUC?  I processed some CYBER tapes from there a couple of
>> years ago--there's an archivist there who uses us to retrieve contents
>> of various dusty items.
>
> Yup.  A couple of us helped put the PLATO copy running on the DtCyber 
> emulator together, see cyber1.org.
>
> paul
>
>


Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread Eric Smith
On Sat, Apr 23, 2016 at 11:28 AM, dwight  wrote:
> I looked at the schematic pdf and it looks right.
> The ground lead is the NC.

It's an SPDT toggle. Neither the NC nor the NO pin of the switch should be
tied to ground; it's the common pin that's grounded. So for a toggle
switch, which side is NC and which is NO?

Anyhow, apparently if I'd read it more carefully, it is explained in the manual.


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Jon Elson

On 04/23/2016 11:29 AM, Noel Chiappa wrote:

 > From: Jon Elson

 > The 11/45 and 11/70 are mostly the same processor. ...
 > the data paths boards and FPU are the same part numbers

'Yes' to the FPP (well, there are two versions, the FP11-B and FP11-C, but
they are both identical in the two machines).

'No' to the data paths, though: e.g. the M8100 in the /45 (the board with the
74S181's on it) is replaced by the M8130 in the /70. The two are _very_
similar, but I suspect not interchangeable (examining the prints shows minor
differences).

OK, didn't know that!

Jon


Re: High performance coprocessor boards of the 80s and 90s - was Re: SGI ONYX

2016-04-23 Thread Jules Richardson

On 04/20/2016 10:32 AM, Pete Turnbull wrote:

On 20/04/2016 16:00, Toby Thain wrote:

On 2016-04-20 10:27 AM, Pete Turnbull wrote:

It did indeed - I have one.  Also a couple of 6502 CoPros, a 65C102, a
32016 and a pair of Z80s, which were nice in their day.


Nice collection. I'd forgotten about the 32016! What software ran on
these respective processors?


There was a collection of "scientific" software for the 32016 - things like
Spice, some maths software, and assorted CAD stuff


I remember there being some connection with the ACW and Quickchip CAD, but 
IIRC the disks I had had been reformatted, and it wasn't obvious if any of 
the product actually ran on the 32016 side of the machine, or if it was 
just the same old 6502-side application which was available for regular BBC 
micros.



licensed for the 32016 ACW and the Master Scientific, which came later.
The Z80 CoPro ran CP/M - real licensed CP/M 2.2, not the bastardised
often-not-compatible "CPN" lookalike offered by Torch, and came with GEM
and various office software.  The ARM CoPro originally had little software
beyond TWIN (the Two Window Editor), assembler, BASIC, and utilities.  The
6502 variants - including the 65C102 that was used for the Master Turbo -
just ran whatever you'd otherwise have on the Beeb itself


On the back of that... the Cumana 68008 board ran OS-9, the 'bigger' of the 
two Torch 68000 boards could run System III Unix, and if I remember right 
the Casper 68000 board just came with a bunch of utilities and programming 
tools (although I think FLEX may have been available as an optional extra 
purchase). I'm not sure what the PEDL Z80 board came with. The Torch 8088 
board ran MSDOS, I believe.


cheers

Jules



Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Noel Chiappa
> AFAIK, the only non-FPP board in the CPU which is interchangeable
> between the two machines is the M8132 (instruction register decode &
> condition codes)

So it seems like there's an(other) error in the DEC documentation.

If one looks at 11/70 Maintenance Manual (EK-11070-MM-002), it says (pg. 1-3)
that the KB11-C (11/70 later CPU) contains an M8133 ROM and ROM Control
board, the same as the KB11-B (earlier CPU, pg. 1-4), _but_ ...

The KB11-C prints include the drawings for the M8123 (also used by the
KB11-D, the later /45 CPU). Other manuals confirm that the KB11-C uses the
M8123 (see, e.g., the KB11-A,D Maintenance Manual, EK-KB11A-MM-004, pg 1-1).

I _thought_ the KB11-D used two of the same boards as the KB11-C, but then,
when I went to check, to be sure I had the correct info (before sending out
my email intended to "just want to be accurate", sigh), I relied on the DEC
manual... :-(

Oh well, that's what I get for relying on DEC manuals! :-)

Noel


Tadpole Sparcbook Hard Drive

2016-04-23 Thread Ben Sinclair
This is a long shot, but does anyone have a Tadpole Sparcbook 3TX hard
drive?

Their existence may be just a myth.

-- 
Ben Sinclair
b...@bensinclair.com


Re: High performance coprocessor boards of the 80s and 90s - was Re: SGI ONYX

2016-04-23 Thread Jules Richardson

On 04/21/2016 09:51 AM, Jon Elson wrote:

On 04/21/2016 07:04 AM, Jules Richardson wrote:

On 04/20/2016 10:00 AM, Toby Thain wrote:

Nice collection. I'd forgotten about the 32016! What software ran on these
respective processors?


OS-wise the 32016 ran something called Panos, with Pandora as the
firmware - mostly written in Modula-2.  Acorn (working with Logica)
attempted a Xenix port, and some documentation references Xenix as being
available, but I don't think it was ever released; having to run all the
I/O across the Tube interface just proved to be too much of a bottleneck.


I'm pretty sure I ran both Genix and then Xenix on the Logical
Microcomputer Co. 32016 we bought.


Oh, I'm sure there were ports around which used that particular CPU. But 
within the context of Acorn's co-processor, I've never seen evidence that 
Xenix ever saw the light of day. I do have some internal company documents 
which suggest that they were having some terrible performance issues with 
the port though, so it was certainly attempted...


Jules



Re: High performance coprocessor boards of the 80s and 90s - was Re: SGI ONYX

2016-04-23 Thread Jules Richardson

On 04/22/2016 11:59 AM, Liam Proven wrote:

The only BBC copro that could run GEM, AFAIAA, was the BBC Master 512
with the Intel 80186.


And the '286 copro for the ABC3xx machines, I expect; the '186 which ended 
up in the M512 was essentially a cost-reduced version of that board (slower 
CPU and less RAM), and Acorn used some of the '286 boards in-house for M512 
development.


cheers

Jules



Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Jules Richardson

On 04/23/2016 10:37 AM, Noel Chiappa wrote:

 > From: Jules Richardson

 > I can't see the point in modern upgrades .. At the point where people
 > start adding emulated storage, USB interfaces, VGA display hardware
 > etc. it stops being a vintage system and starts being a modern version
 > which just happens to still have a few vintage parts.

I agree with you to some degree, but...

Some components are just hard/impossible to find now - like old original disk
drives (seen any RP0x's for sale recently?)


True. I think my personal view is that I'll consider modern replacements to 
things when it's impossible to use the originals - but not simply for 
reasons of speed, cost, convenience.



running the disks is both non-trivial (power/heat) and risks damaging
what are effectively museum pieces.


There I'd just say run them until they break and can't be fixed, and then 
they can become static museum exhibits. Slight caveat there though that 
every effort is made within the community as a whole to document the 
hardware before there are no operational examples left.



building a board that uses an SD memory
card to emulate an RP0x, that's within my grasp. And it takes a lot less room
and power, to boot.


To me it's not nearly as much fun, though... I want the sights and the 
sounds of the original hardware, warts and all.


As I mentioned in a reply to Tony though, I don't mind modern equivalents 
when there's no choice; my issue's really just with using those equivalents 
when in the possession of operational originals, and with adding 
functionality using modern components.



Also, the _systems_ were designed to have upgrades installed, and did, BITD -
many of which were not conceived when the machine first came out. E.g. our
11/45 at LCS wound up with 1MB MOS memory boards in it (much smaller and less
power-hungry than the original memory), and high-speed LANs, neither of which
were ever envisaged when the machine was built.


I've no problem with that at all, within a vintage context. I don't mind 
some ancient board being used in some even-more-ancient machine - but at 
the same time I wouldn't want to use a board that takes whatever memory 
modules people are sticking into PCs these days.


I couldn't come up with any kind of cut-off date for what I'm comfortable 
with, although I suppose a lot of it comes down to not using examples of 
anything that couldn't have been done during the system's typical 
operational lifetime.



I don't see that building, say, a UNIBUS USB interface now is really that
different from building a high-speed LAN board BITD.


I think there I'd be asking myself what the purpose of the USB interface 
was - and if a 'period' equivalent which achieved the same end result was 
feasible.


cheers

Jules



RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Maciej W. Rozycki
On Sat, 23 Apr 2016, Robert Jarratt wrote:

> Well! I took out the DROM and switched it on again. The machine bleeped at
> me, but then it gave me a console and I was able to boot VMS!

 I'm glad that it worked, this will certainly make further diagnostics 
easier, and you have a usable machine anyway.

> I will have to see if the NVRAM is now populated and whether it will
> continue to work with the DROM installed.

 First: to double-check, how did you know it was a NVRAM failure, did LEDs 
show DC (xxox xxoo)?

 Second: it may also be that DROM itself is faulty, a bit may have flipped 
for example; NB this is UV EPROM, so things happen.  Maybe someone can 
share a known-good image.

> Thanks!

 You are welcome!

  Maciej


Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread Mike Stein
From p.11:

"RUN and LOAD switches S1 and S2 in Fig. 5 control the operation of the 
computer. With both switches set to OFF, ~LOAD is +5V and RUN is at ground 
potential. This resets the 1802."

Note the tilde, suggesting ~LOAD is active low.

m

- Original Message - 
From: "dwight" 
To: "General Discussion: On-Topic and Off-Topic Posts" 
Sent: Saturday, April 23, 2016 1:28 PM
Subject: Re: COSMAC Elf switch panel using PCBs


I looked at the schematic pdf and it looks right.
The ground lead is the NC.
Can you tell us what page reference you think is wrong
or confusing?
Dwight


From: cctalk  on behalf of Eric Smith 

Sent: Friday, April 22, 2016 5:06 PM
To: General Discussion: On-Topic and Off-Topic Posts
Subject: COSMAC Elf switch panel using PCBs

I built a new Elf switch panel, but this time I used two printed
circuit boards for the switches and the bezel.

The bezel PCB has white soldermask with black silkscreen.  The next
revision will have black soldermask with white silkscreen, and the
legend font, weight, and positioning changed to more closely match the
original Elf photo in Popular Electronics.

https://www.flickr.com/photos/22368471@N04/sets/72157667455777465

The 20-pin header has the same pinout as Bob Armstrong used for the
Spare Time Gizmos Elf 2000, but I don't presently have an Elf 2000 to
test it with. For now the main intent is to use the panel for a new
version of my FPGA Elf.

I'm not sure whether I got the wiring of the LOAD switch correct; the
Elf 2000 documentation refers to normally closed and normally open
contacts of that switch, but for a toggle switch that doesn't make any
sense to me.  If anyone can tell me which pins of the Elf 2000
connector are grounded when the load switch is active vs inactive,
that would be appreciated.
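One common reason both the "normally closed" and "normally open" contacts of an SPDT toggle get wired up is hardware debouncing with an S-R latch, which only changes state when the moving contact first touches the opposite throw. As a hedged illustration of that general technique (not necessarily the actual Elf 2000 wiring), the latch logic can be simulated:

```python
# Sketch of S-R latch debouncing for an SPDT switch: the two throws
# drive S and R (active high here for clarity).  While the contact is
# in flight or bouncing open, both inputs are inactive and the latch
# holds its last state, so bounce never produces a spurious edge.
# Illustrative only -- not the Elf 2000 schematic.

def sr_latch_debounce(samples):
    """samples: iterable of (s, r) input pairs; yields latch output Q."""
    q = 0
    for s, r in samples:
        if s and not r:
            q = 1
        elif r and not s:
            q = 0
        # s == r == 0: contact in flight / bouncing -> hold state
        yield q

if __name__ == "__main__":
    # Flip from the R throw to the S throw, with open (0,0) bounce gaps.
    bouncy = [(0, 1), (0, 0), (0, 1), (0, 0), (1, 0), (0, 0), (1, 0)]
    print(list(sr_latch_debounce(bouncy)))  # [0, 0, 0, 0, 1, 1, 1]
```

Because the latch holds state while both contacts are open, a bouncing contact can only re-assert the level it already produced, never toggle the output back.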

The 20-pin header should have been right angle; since I only had a
vertical header on-hand, the ribbon cable had to be plugged in before
the switches were soldered in place, and the switches are not flush
with the switch PCB.

The toggle switches and push-button switch are C&K 7101SDV3BE and
8125SDV3BE, respectively, which have 0.42 inch actuator, 0.28 inch
threaded bushing with keyway, vertical PCB mount with V-bracket, gold
contacts, chrome actuator finish, and nickel bushing finish. These
particular C&K switch variants are not very common, so I'll probably
use different ones in the future, without the V-bracket.

I don't yet have enough of the red and white toggle caps, which are
C&K 896803000 and 896801000, respectively.  The red button for the
push-button switch is C&K 801803000.


Re: COSMAC Elf switch panel using PCBs

2016-04-23 Thread dwight
I looked at the schematic pdf and it looks right.
The ground lead is the NC.
Can you tell us what page reference you think is wrong
or confusing?
Dwight


From: cctalk  on behalf of Eric Smith 

Sent: Friday, April 22, 2016 5:06 PM
To: General Discussion: On-Topic and Off-Topic Posts
Subject: COSMAC Elf switch panel using PCBs

I built a new Elf switch panel, but this time I used two printed
circuit boards for the switches and the bezel.

The bezel PCB has white soldermask with black silkscreen.  The next
revision will have black soldermask with white silkscreen, and the
legend font, weight, and positioning changed to more closely match the
original Elf photo in Popular Electronics.

https://www.flickr.com/photos/22368471@N04/sets/72157667455777465

The 20-pin header has the same pinout as Bob Armstrong used for the
Spare Time Gizmos Elf 2000, but I don't presently have an Elf 2000 to
test it with. For now the main intent is to use the panel for a new
version of my FPGA Elf.

I'm not sure whether I got the wiring of the LOAD switch correct; the
Elf 2000 documentation refers to normally closed and normally open
contacts of that switch, but for a toggle switch that doesn't make any
sense to me.  If anyone can tell me which pins of the Elf 2000
connector are grounded when the load switch is active vs inactive,
that would be appreciated.

The 20-pin header should have been right angle; since I only had a
vertical header on-hand, the ribbon cable had to be plugged in before
the switches were soldered in place, and the switches are not flush
with the switch PCB.

The toggle switches and push-button switch are C&K 7101SDV3BE and
8125SDV3BE, respectively, which have 0.42 inch actuator, 0.28 inch
threaded bushing with keyway, vertical PCB mount with V-bracket, gold
contacts, chrome actuator finish, and nickel bushing finish. These
particular C&K switch variants are not very common, so I'll probably
use different ones in the future, without the V-bracket.

I don't yet have enough of the red and white toggle caps, which are
C&K 896803000 and 896801000, respectively.  The red button for the
push-button switch is C&K 801803000.


RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Maciej W. Rozycki
On Sat, 23 Apr 2016, Robert Jarratt wrote:

> >  But from the discussion referred I gather DROM outputs its diagnostics to
> > this port too and you might be able to learn what exactly about NVRAM it
> > complains.
> 
> 
> Ah OK, so you think the DROM console also outputs to the SROM diagnostics?

 Yes, it does -- see the dumps reported in the discussion I referred to. 

 The diagnostic port is just a primitive (bit-banged) serial port driven 
directly by the CPU.  Any software can poke/peek at it via three CPU pins 
(mapped to the SL_XMIT and SL_RCV internal processor registers) -- which 
is why it is available early on.  There is PALcode support for that port 
actually, saving the user of this interface from the need to calculate 
timings and to handle individual bits presumably.
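As a rough illustration of what such bit-banged serial code has to do in software (a sketch only -- the pin names, the real SL_XMIT/SL_RCV register layout, and the PALcode interface of the actual SROM port are not reproduced here), the asynchronous 8N1 framing looks like:

```python
# Hypothetical sketch of the bit-level work a bit-banged serial port has
# to do: frame each byte as one start bit (line low), eight data bits
# LSB first, and one stop bit (line high).  On real hardware each bit
# would be timed onto a single output pin; here we just build and check
# the bit sequence.

def frame_byte_8n1(byte):
    """Return the sequence of line levels for one 8N1 character."""
    bits = [0]                                   # start bit (line low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit (line high)
    return bits

def unframe_8n1(bits):
    """Recover the byte from a 10-bit 8N1 frame, checking the framing."""
    assert bits[0] == 0 and bits[9] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:9]))

if __name__ == "__main__":
    frame = frame_byte_8n1(ord("A"))    # 0x41
    print(frame)                        # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
    print(chr(unframe_8n1(frame)))      # A
```

The timing side -- holding each level for exactly one bit period -- is the part the PALcode support mentioned above would take care of.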

 The real serial port used for regular operation is wired far away from 
the CPU, behind PCI and the ISA bridge (southbridge), off a Super-I/O 
chip.  You need to have all the hardware almost fully initialised to be 
able to talk to that port, so it's of no use before SRM/ARC has taken 
over.  Also lots of logic has to function correctly for that port to be 
accessible, so it's not as usable in diagnostics.

> I'd need to build some kind of adapter to do that. I'll have to research
> this. Is there a standard part?

 DEC had it internally, obviously, but I don't think you can come across 
one.  I think wiring an EIA/TIA 232 driver/receiver chip, maybe with the 
aid of a small prototyping board, shouldn't be much hassle though -- 
there's surely schematics for that already available somewhere online.

>  Also you might be able to correct configuration, e.g. by poking at
> > NVRAM or elsewhere appropriately;
> 
> 
> Not really sure how to poke the NVRAM without the console, or is that what
> you are suggesting?

 As I say you can use the SROM mini-console for that, flip J1 to enable 
it.  The interface is obviously crude (mind that you're running code out 
of I$, not much space there to fit fancy stuff) and you need to know the 
internals of the machine, to get at the right addresses.  But the manual 
I already referred to seems to cover all you need.

> > notice that the manual also suggests
> > you might be able to bypass the DROM sequence and go to SRM/ARC
> > directly, which might help recovery too.
> 
> 
> I have definitely missed that bit. Which manual says this?

 Same manual as above:

"When the SROM code has completed its tasks, it normally loads the DROM 
code and turns control over to it.  The SROM checks to see if the DROM 
contains the proper header and that the checksum is correct.  If either 
check fails, the SROM code reads a location in the TOY NVRAM.  The 
location indicates which console firmware (the SRM or the ARC) should be 
loaded.

"When the console firmware is loaded, the header check and the checksum 
are checked.  If either is in error, the SROM code jumps to its 
mini-console routine.  With the appropriate adapter, you can attach a 
terminal to the CPU's serial port and use the mini-console.  Typically, 
this port is used in the manufacturing environment."

-- so it looks to me you could temporarily pull DROM from its socket to 
bypass this step.  You won't have the ability to load the console from a 
floppy then of course as this is handled by DROM code, and you may need to 
set the TOY NVRAM location for SRM vs ARC correctly (though IIRC the 
Avanti series have enough flash space to keep both consoles at once; many 
years ago I had access to a Melmac machine, which is very similar to 
yours, up to the same form factor).

> >  It is battery backed indeed and it sits right in the upper right hand
> > corner: ,
> > next to the other flashbus devices, as expected -- next to the left there
> is a
> > pair of flashROMs (and a pair of sockets for another two of the four
> > supported total), and a socketed DROM chip.  Being decoded as a DRAM
> > bank they are also close to DRAM sockets.  Jumpers and LEDs are elsewhere,
> > but obviously they have less strict PCB routing requirements.
> > 
> >  HTH,
> 
> Yes it does help! Thanks for that. At least I now know where the NVRAM
> actually is. I have found some parts on Ebay, but I have no idea how to deal
> with surface mount though, another research area...

 Somehow I doubt an SRAM chip has failed TBH, so replacing it might not 
help, there could be something else causing the problem.  It would be good 
to know what exactly has failed, and with DROM output on the diagnostic 
serial port and/or the SROM mini-console you might actually be able to 
figure out what's going on here (e.g. fill NVRAM with some bit patterns, 
see if they come back right, and if they stick across a power cycle).
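The kind of pattern test described can be sketched as follows. Here the NVRAM is simulated with a plain bytearray; on the real machine the reads and writes would instead be pokes/peeks through the SROM mini-console, and the 8 kB size is taken from the discussion above:

```python
# Minimal sketch of a write/read-back pattern test of the sort suggested
# above: fill the memory with each pattern, then verify every location.
# The "NVRAM" here is just a bytearray standing in for the real part.

PATTERNS = (0x00, 0xFF, 0xAA, 0x55)  # all-zeros, all-ones, alternating bits

def pattern_test(mem):
    """Return a list of (offset, pattern, read_back) mismatches."""
    errors = []
    for pat in PATTERNS:
        for off in range(len(mem)):
            mem[off] = pat
        for off in range(len(mem)):
            if mem[off] != pat:
                errors.append((off, pat, mem[off]))
    return errors

if __name__ == "__main__":
    nvram = bytearray(8 * 1024)     # simulated 8 kB NVRAM
    print(pattern_test(nvram))      # [] -- a healthy part reads back clean
```

A stuck or shorted bit would show up as a mismatch at a consistent offset for the patterns that exercise that bit; the power-cycle check mentioned above would then be a second read pass after power has been removed and restored.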

 Ah, and may I suggest wiping the dust in this area; I dare not think it 
might be causing a short or something.

  Maciej


RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Robert Jarratt
>  But from the discussion referred I gather DROM outputs its diagnostics to
> this port too and you might be able to learn what exactly about NVRAM it
> complains.  Also you might be able to correct configuration, e.g. by
poking at
> NVRAM or elsewhere appropriately; notice that the manual also suggests
> you might be able to bypass the DROM sequence and go to SRM/ARC
> directly, which might help recovery too.
> 

Having checked the manual I think you may be referring to the following
text:

" When the SROM code has completed its tasks, it normally loads the DROM
code and
turns control over to it. The SROM checks to see if the DROM contains the
proper header
and that the checksum is correct. If either check fails, the SROM code reads
a location in
the TOY NVRAM. The location indicates which console firmware (the SRM or the
ARC)
should be loaded.
When the console firmware is loaded, the header check and the checksum are
checked. If
either is in error, the SROM code jumps to its mini-console routine. With
the appropriate
adapter, you can attach a terminal to the CPU's serial port and use the
mini-console.
Typically, this port is used in the manufacturing environment."

To get this sequence to work needs two things though. First I have to create
a DROM error. The DROM seems to be fine. Perhaps I could remove the DROM
chip as it is socketed to provoke the error.

However, then it says it checks the TOY NVRAM for which firmware to load.
The battery was flat, so the TOY NVRAM won't have this info. Hopefully it
will default to one of them.

Still, if I had the mini-console adapter, that would probably really help.

Regards

Rob



Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Noel Chiappa
> From: Jon Elson

> The 11/45 and 11/70 are mostly the same processor. ...
> the data paths boards and FPU are the same part numbers

'Yes' to the FPP (well, there are two versions, the FP11-B and FP11-C, but
they are both identical in the two machines).

'No' to the data paths, though: e.g. the M8100 in the /45 (the board with the
74S181's on it) is replaced by the M8130 in the /70. The two are _very_
similar, but I suspect not interchangeable (examining the prints shows minor
differences).

AFAIK, the only non-FPP board in the CPU which is interchangeable between the
two machines is the M8132 (instruction register decode & condition codes) -
and only to the KB11-D /45 variant, not the -A.

{As always, just want to be accurate! :-}

Noel


RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Robert Jarratt


> >
> > I don't think the SROM diagnostics are going to help much because the
> > failure is in the DROM sequence, which comes after the SROM.
> 
>  But from the discussion referred I gather DROM outputs its diagnostics to
> this port too and you might be able to learn what exactly about NVRAM it
> complains.


Ah OK, so you think the DROM console also outputs to the SROM diagnostics?
I'd need to build some kind of adapter to do that. I'll have to research
this. Is there a standard part?


 Also you might be able to correct configuration, e.g. by poking at
> NVRAM or elsewhere appropriately;


Not really sure how to poke the NVRAM without the console, or is that what
you are suggesting?


> notice that the manual also suggests
> you might be able to bypass the DROM sequence and go to SRM/ARC
> directly, which might help recovery too.


I have definitely missed that bit. Which manual says this?


> 
>  It's up to you if you want to try this of course, I just thought it might
help as
> the existence of this SROM console might not be universally known.
> 
> > >  Finally the SROM console command reference is here:
> > >  > > alphaserver/technology/literature/srommini.pdf>.
> > > This manual doesn't specifically cover the Avanti, but I'd expect
> > > the user interface to be similar -- it's a low-level tool close to
> > > the CPU after
> > all.
> > >
> > >  NB on Avanti the 8kB NVRAM is separate from the TOY/NVR chip (which
> > > is a Benchmarq BQ4285, providing 114B of general storage only).
> > >
> >
> > I had already located the Benchmarq chip and found the spec to be
> > insufficient for the 8K NVRAM. The problem is, I don't know which chip
> > has the NVRAM, I have not been able to locate it and the manuals don't
tell
> me.
> > I hope it isn't one of the ASICs. I have posted a photo of the board
here:
> > http://bit.ly/1qHQnaB in case anyone can id the NVRAM.
> 
>  It is separately decoded on the flashbus (among flashROM, DROM,
> diagnostic LEDs and jumpers), so I doubt it's in an ASIC, that would be
too
> arcane.  I suspect they wanted to keep it separate from flashROM for
safety.
> 
> > If the NVRAM contents are maintained by the battery then there should
> > be a way to reset the NVRAM contents, but there does not seem to be a
> > way. I wonder if the NVRAM persists without power? The manual seems to
> > say that the TOY is battery backed, but makes no mention of the NVRAM
> > needing the battery.
> 
>  It is battery backed indeed and it sits right in the upper right hand
> corner: ,
> next to the other flashbus devices, as expected -- next to the left there
is a
> pair of flashROMs (and a pair of sockets for another two of the four
> supported total), and a socketed DROM chip.  Being decoded as a DRAM
> bank they are also close to DRAM sockets.  Jumpers and LEDs are elsewhere,
> but obviously they have less strict PCB routing requirements.
> 
>  HTH,

Yes it does help! Thanks for that. At least I now know where the NVRAM
actually is. I have found some parts on Ebay, but I have no idea how to deal
with surface mount though, another research area...

Regards

Rob



Re: Keys - Non-Ace was RE: ACE Key codes (xx2247 etc.)

2016-04-23 Thread Dennis Boone
 > What do you think of the Klom imitation of it?

Initial impressions of the Klom K-747 tubular key cutter

The Klom K-747 cutter is designed to cut Chicago ACE type tubular
keys, and the Fort equivalents.  It is available in at least four
key barrel sizes, 7.0mm, 7.3mm, 7.5mm and 7.8mm.  The "common" size
seems to be 7.8mm, which is the inside dimension of the key barrel.
(In US measurements of such tools given in inches, it seems more
common to specify the O.D.)  The 7.8mm size is the appropriate one
for cutting e.g. DEC XX2247 keys.

Comparing the Klom to the drawings and photos of the HPC device in
the HPC manual, the Klom has some differences: more labeling than
the HPC, less projection of the cutter shaft out the back, more
contact between the cutter knob and the depth knob at the bottom of
a cut, and a rotating key shaft.  In the absence of a Klom manual, the
HPC manual is useful in interacting with the Klom version in spite
of the differences between the devices.

The design concept is straightforward and should be fairly easy to use.
The unit comes with a T-style key gauge, but no manual or 2.5mm hex
key for making adjustments.  The Klom unit provides spring and bearing
detents to help hold the key to the proper pin position and the depth
knob at the selected setting.  With a key inserted for cutting, the
device is about 5 inches long, and just under 2 inches in diameter.
Overall, construction seems sturdy.  The finish is black paint which
seems to scratch fairly easily.

The Klom design does not allow cutting of left or right offset keys.

Both the rotational position knob and the cutter depth arrived in
need of adjustment.  Both operations are obvious.  The rotational
adjustment is trivial, since the shaft on which the key mounts for
cutting has detents.  All one must do is turn the key to the first
position, then loosen the set screw in the knob to align the "1" on
the knob with the true line.  The depth calibration is not as easy,
since one must adjust the distance the cutter shaft is slid into the
device by loosening the set screw in the knob, pushing it in a whisker,
and tightening the set screw.  Since the designated difference between
cut depths is 0.016", this is fiddly.

Chicago ACE numbers pins clockwise from the 1 o'clock position
(looking into the lock).  Fort numbers pins counterclockwise from
the 11 o'clock position.  There is also a difference in pin depth
numbering between the two manufacturers.  The Klom unit matches the
Chicago scheme.  The HPC manual describes these numbering schemes,
and the information there applies to the Klom as well.  You will need
to understand these differences, as well as which variant was used to
specify the bitting you will use, to cut a usable key.  Both brands
have some numbering painted on their knobs.  It seems that it would be
quite easy to paint full rotation and depth numbering for both Chicago
and Fort schemes on them, which would make using the devices easier
for novices, but neither does this.

There is a little bit of play in several places that could affect
accuracy: the depth knob rotates a large screw whose threads could be
tighter; the end cap that holds the key on the shaft can wobble enough
to shift the key side to side a wee bit.  The cutter shaft also has
more play than is probably necessary, but that won't affect depth.

Since this is a low-cost Chinese device one would expect a few issues,
and this device does present a few:

* Both sets of detents were a bit grouchy at first, as if there was
a bit of manufacturing debris inside, but seemed to settle down some
after a few minutes use.

* I managed to accidentally rotate the depth adjustment a couple of
times while making a cut.  The knob that turns the cutter meets the
depth adjustment knob at the bottom of the cut, and depending on how
hard you're pressing, friction between the two may be enough to cause
the problem.  Murphy is (as always) on hand to ensure that when the
knob moves, it goes toward a deeper setting, spoiling the key.  This is
a design issue that will have to be worked around by paying careful
attention during use.  It would be nice if the depth could be locked.
There are set screws to adjust the spring tension on the detents,
and I tightened them a little.  This helped some, but not enough.

* In my first attempts, I had some small variations in depth of cut
between different pins that were supposed to have the same value.
The above notes on play probably explain this.  Practice will help.
I haven't tested enough yet to opine on the impact in terms of marginal
or bad keys.

* It would be nice if the key gauge was labeled on both sides, since
when holding it in one hand and the key in the other, one uses it
face up for odd cut sizes and face down for evens.  (You could also
turn the key around, but it's easier to keep track of where you are
if you flip the gauge.)  I find the T-style gauge easier to use for
fine evaluation of depth, and the Southord style 

RE: VCF East pictures

2016-04-23 Thread Ali
> Simplest just to read the article (it's not very long):
> 
> http://www.swtpc.com/mholley/PopularElectronics/Feb1975/PE_Feb1975.htm
> 

Thanks Bill. That is pretty cool!

-Ali



Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Jon Elson

On 04/23/2016 07:41 AM, Noel Chiappa wrote:

 > From: Brent Hilpert

 > I'd say the 74181 (1970) deserves a mention here. Simpler (no register
 > component, ALU only) but it pretty much kicked off the start of
 > IC-level bit slicing.

Yes, it was used in quite a few machines. Among the PDP-11's alone, it is
found in the -11/45, /05, /40, /04 and /34, to name a few that I checked
quickly, and almost certainly others too (e.g. /70).

The 11/45 and 11/70 are mostly the same processor.  
Definitely, the data paths boards and FPU are the same part 
numbers.


Jon


Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Noel Chiappa
> From: Jules Richardson

> I can't see the point in modern upgrades .. At the point where people
> start adding emulated storage, USB interfaces, VGA display hardware
> etc. it stops being a vintage system and starts being a modern version
> which just happens to still have a few vintage parts.

I agree with you to some degree, but...

Some components are just hard/impossible to find now - like old original disk
drives (seen any RP0x's for sale recently?), or Able ENABLE's - and in any
case running the disks is both non-trivial (power/heat) and risks damaging
what are effectively museum pieces.

So one is left with the choice of modern replacements, or nothing. And I'm
not capable of building an RP0x, but building a board that uses an SD memory
card to emulate an RP0x, that's within my grasp. And it takes a lot less room
and power, to boot.

Also, the _systems_ were designed to have upgrades installed, and did, BITD -
many of which were not conceived when the machine first came out. E.g. our
11/45 at LCS wound up with 1MB MOS memory boards in it (much smaller and less
power-hungry than the original memory), and high-speed LANs, neither of which
were ever envisaged when the machine was built.

I don't see that building, say, a UNIBUS USB interface now is really that
different from building a high-speed LAN board BITD.


I do agree that if you replace stuff that _is_ still available and perfectly
functional (e.g. QBUS memory and processors), you might just as well run a
simulator. But there's a lot of stuff that's not in that category (above).

Noel


Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Jon Elson

On 04/22/2016 11:10 PM, Jon Elson wrote:

Yikes, too many typos, let me try over!
I built a 32-bit micro-engine for a project that was 
eventually going to be an IBM 360-like CPU.
I picked the 360, not because it was the greatest design, 
but it was VERY well laid-out and would be easy to write 
efficient microcode for.  I used the 2903 with 2910 
controller.  I was able to get it to run at 8 MHz, with 
3-address operations running at 6 MHz.


But, the project got bogged down, at a certain point, I 
realized HOW MUCH more work lay ahead of me to get a 
working system.  I had to add 2 more features to the 
micro-engine - a 256-way branch from the op-code, and some 
OR gates to OR in the register address fields.  Then, I 
had to build a system bus and memory interface.
(I was going to make the I/O architecture much more like a 
PDP-11 than the 360 channel architecture.)  Then, I had to 
design a general-purpose peripheral controller.  I had a 
VERY rough sketch for about a 20-chip micro-machine using 
(probably) 3X byte-wide EPROMs for instructions that 
would hopefully run at 4 MHz.  Then, I had to build a SCSI 
controller (I already had a SASI disk on my S-100 system), 
a serial mux and a tape controller.
Finally, I had to write at least a primitive OS and figure 
out how to come up with compilers for it.  Had I known 
that UNIX-360 existed, I might have tried to make some 
kind of port of that. But, obviously, YEARS of work would 
have been needed to make it usable.


See http://pico-systems.com/stories/1982.html for some 
pics and description of it.


Jon




RE: Accelerator boards - no future? Bad business?

2016-04-23 Thread tony duell

> Honestly, I can't see the point in modern upgrades except perhaps for
> temporary use in order to get data to/from original equipment. At the point
> where people start adding emulated storage, USB interfaces, VGA display
> hardware etc. it stops being a vintage system and starts being a modern
> version which just happens to still have a few vintage parts. May as well
> say screw it and just use an emulator for the whole thing...
> 
> Now upgrades within the realm of what would have been possible during a
> system's lifetime I can get on board with - using period components to
> implement things such as Ethernet interfaces, accelerators, extra memory 
> etc...

I'm with you on this, generally...

A concrete example. As many of you know I have a VAX 11/730 that I am restoring
(It's currently on hold as I may have a lead on a scrap RA80 from which I can get the 
brackets, etc., that I need to repair the R80 (as well as a spare HDA), so until
I know one way or the other I am not going to do metalbashing...). I do NOT want
to use any of the common TU58 emulators. It seems ridiculous to use something 
like an Rpi to boot an 11/730 CPU. If I can't get the real tape-based TU58 
running, then any emulator I make for it will use a CPU contemporary with the rest of
the machine (probably an 8085 as used in the real TU58). 

Similarly I want to keep FPGAs and the like (hacker-unfriendly, closed, devices)
away from my classics. I want proper documentation -- that's one reason
I run the classics in the first place. Not a closed-source compiler that does
$deity-knows-what to my design. I will not stick an FPGA-based board in my 
Unibus; there were no such things when the Unibus was 'current'.

However, how far do you go (I am asking, I am not sure of the answer). Is it
'OK' to use a modern machine running a terminal emulator in place of a 
real contemporary-to-the-machine terminal (FWIW, I do try to have at 
least the console as a 'real' terminal in the end but might well use a
terminal emulator when getting it all working). What about mass storage
units that just connect to a peripheral interface (I am thinking of things
like the HPIB-interfaced drives on HP9000/200 machines). Should you
not use modern machines and compilers to cross-develop software for
classic computers? Should you only use test gear that was contemporary
with the machine (so no DSOs when working on classics; I should not
use my (ancient) logic analysers, even less the LogicDart, on my 
PDP11s)?

-tony


Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Jules Richardson

On 04/22/2016 01:51 PM, Eric Christopherson wrote:

I like the new types of peripherals but it makes me a little uncomfortable
knowing that e.g. in the case of the uIEC-SD for Commodores, the clock
speed of the peripheral is 16 to 20 times that of the original host CPU.


Honestly, I can't see the point in modern upgrades except perhaps for 
temporary use in order to get data to/from original equipment. At the point 
where people start adding emulated storage, USB interfaces, VGA display 
hardware etc. it stops being a vintage system and starts being a modern 
version which just happens to still have a few vintage parts. May as well 
say screw it and just use an emulator for the whole thing...


Now upgrades within the realm of what would have been possible during a 
system's lifetime I can get on board with - using period components to 
implement things such as Ethernet interfaces, accelerators, extra memory etc...


J.





Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Maciej W. Rozycki
On Sat, 23 Apr 2016, Sean Conner wrote:

> > >   One major problem with adding a faster CPU to an SGI is the MIPS chip
> > > itself---code compiled for one MIPS CPU (say, the R3000) won't run on
> > > another MIPS CPU (say, the R4400) due to the differences in the pipeline.
> > > MIPS compilers were specific for a chip because such details were not 
> > > hidden
> > > in the CPU itself, but left to the compiler to deal with.
> > 
> > Having written a bunch of R3000 and R4000/4200/4300/4400/4600 assembly
> > code in the 1990s, my (possibly faulty) recollection disagrees with
> > you. There are differences in supervisor-mode programming, but I don't
> > recall any issues with running 32-bit user-mode R3000 code on any
> R4xxx. The programmer-visible pipeline behavior (e.g., branch delay
> > slots) were the same.
> 
>   Hmm ... I might have been misremembering.  I just checked the book I have
> on the MIPS, and yes, the supervisor stuff is different between the R2000,
> R3000, R4000 and R6000.  Also, the R2000, R3000 and R6000 have a five stage
> pipeline, and the R4000 has an eight stage pipeline.

 Pipeline restrictions were gradually relaxed by adding more and more 
interlocks as the architecture evolved.  So while user mode code compiled 
for a higher ISA might not necessarily work on an older one, even if it 
only used instructions defined in the older ISA, there was no issue the 
other way round: old code was forward compatible with newer hardware (or, 
depending on how you look at it, new hardware was backward compatible with 
older code).

 The timeline was roughly:

- MIPS II -- removed load delay slots -- for memory read instructions 
 targeting both general purpose and coprocessor registers,

- MIPS IV -- removed coprocessor transfer and condition code delay slots 
 -- for instructions used to move data between general purpose 
 and coprocessor registers as well as ones setting or reading
 coprocessor condition codes.

The original MIPS I ISA only had an interlock on multiply-divide unit 
(MDU) accumulator accesses, so all the other pipeline hazards had to be 
handled in software, by inserting the right number of instructions between 
the producer and the consumer of data; NOPs were used where no useful 
instructions could be scheduled.
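The load-delay discipline described above can be sketched with a toy model. The snippet below is not a real MIPS simulator, just an illustrative mini-machine (the instruction tuples and four-register file are my own simplification) in which a load's result becomes visible only after the following instruction has already read its operands, so the consumer must be separated from the `lw` by a NOP or a useful, independent instruction:

```python
# Toy model of a MIPS I-style load delay slot (hypothetical mini-machine,
# not a real MIPS simulator). A "lw" writes its destination register one
# instruction late, so the instruction immediately after it still reads
# the register's old value -- the hazard MIPS I software had to schedule
# around.

def run(program, memory):
    regs = {f"r{i}": 0 for i in range(4)}  # r0..r3, all start at 0
    pending = None  # (reg, value) scheduled by a lw, lands one slot later

    for op, *args in program:
        if op == "lw":            # lw rd, addr
            rd, addr = args
            scheduled = (rd, memory[addr])
        elif op == "add":         # add rd, rs, rt (reads current regs)
            rd, rs, rt = args
            regs[rd] = regs[rs] + regs[rt]
            scheduled = None
        elif op == "nop":
            scheduled = None
        else:
            raise ValueError(f"unknown op {op}")

        # The previous lw's result only now becomes visible, i.e. after
        # this instruction has already read its operands.
        if pending:
            regs[pending[0]] = pending[1]
        pending = scheduled

    if pending:  # drain a trailing lw
        regs[pending[0]] = pending[1]
    return regs


if __name__ == "__main__":
    mem = {0: 7}
    # Hazard: the add sits in the load delay slot and sees the stale r1.
    bad = run([("lw", "r1", 0), ("add", "r2", "r1", "r0")], mem)
    # Fixed: a NOP fills the delay slot, so the add sees the loaded value.
    good = run([("lw", "r1", 0), ("nop",), ("add", "r2", "r1", "r0")], mem)
    print(bad["r2"], good["r2"])   # stale result vs. correct result
```

Running it shows the unscheduled version computing with the stale register value (r2 stays 0) while the NOP-padded version sees the loaded 7, which is exactly why MIPS I compilers inserted NOPs where nothing useful could be scheduled.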

 Some operations continued to require a manual resolution of pipeline 
hazards even in the MIPS IV ISA, like moves to the MDU accumulator, as 
well as many privileged operations (TLB writes, mode switches, etc.).  
For these the SSNOP (superscalar NOP) instruction was introduced, which 
was guaranteed not to be nullified with superscalar pipelines.  The 
encoding was chosen such that it was backwards compatible, using one of 
the already existing ways to express an operation with no visible effects 
other than incrementing the PC, which given the design of the MIPS 
instruction set there has been always a plethora of.  Consequently SSNOP 
was executed as an ordinary NOP by older ISA implementations.
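The "no visible effect" encoding trick is easy to see in the bit fields. The sketch below extracts the standard MIPS R-type fields; the two words used are the conventional encodings NOP = `sll $0, $0, 0` (0x00000000) and SSNOP = `sll $0, $0, 1` (0x00000040), both shifts whose destination is the hard-wired zero register, differing only in shift amount:

```python
# Decode MIPS R-type instruction fields (standard field layout), to show
# why SSNOP (0x00000040) is harmless on older implementations: like NOP
# (0x00000000) it is an SLL whose destination is $0, the hard-wired zero
# register, so it changes no architectural state.

def decode_rtype(word):
    return {
        "opcode": (word >> 26) & 0x3F,  # 0 => SPECIAL (R-type)
        "rs":     (word >> 21) & 0x1F,
        "rt":     (word >> 16) & 0x1F,
        "rd":     (word >> 11) & 0x1F,  # destination register
        "shamt":  (word >> 6)  & 0x1F,  # shift amount
        "funct":  word & 0x3F,          # 0 => SLL
    }

NOP   = 0x00000000  # sll $0, $0, 0
SSNOP = 0x00000040  # sll $0, $0, 1

if __name__ == "__main__":
    for name, word in (("NOP", NOP), ("SSNOP", SSNOP)):
        print(name, decode_rtype(word))
```

Because both words decode as SLL with rd = $0, an older CPU simply executes SSNOP as an ordinary NOP, which is the backward-compatibility property described above; only superscalar pipelines give it the extra "never nullified" meaning.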

 NB despite the hardware interlocks it has always been preferable to avoid 
the pipeline stalls they trigger by scheduling the right minimum number 
of instructions between data producers and the respective consumers anyway, 
and compilers have had options to adapt here to specific processor 
implementations.  The addition of hardware interlocks made the life of 
compiler (and handcoded assembly) writers a little easier, as a missed 
optimisation no longer resulted in broken code.  More compact code could 
also be produced where there was no way to schedule useful instructions to 
satisfy pipeline hazards and NOPs would otherwise have had to be inserted.

 I won't dive into the details of the further evolution with modern MIPS 
ISAs here, for obvious reasons.

  Maciej


Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Jules Richardson

On 04/22/2016 01:03 PM, Swift Griggs wrote:


Remember all the accelerator boards for the Mac, Amiga, and even PCs in the
90's ?  I've often wished that I could get something similar on my older SGI
systems.


Well, I seem to remember that some of the desktop SGI machines could take a 
variety of CPUs. Often though they were designed for a certain performance 
point and if you wanted more you bought the next model up - and when I 
worked with them commercially, a lot of hardware was technically under 
lease; they'd bring out a new model and throw it our way, then take the old 
one away (where I was told it got sent to the crusher).


On the server side of things, they were generally pretty expandable - if 
you wanted higher performance, you just added more CPUs / disks / 
backplanes, rather than fitting faster versions of individual components.



So, here's the question. Is my dream likely to ever be possible enough that
a boutique shop could pull it off and not lose their shirt on the production
costs and R&D to do it?


I think you'd be wasting your time, even if it could be done... for a lot 
of tasks the CPU isn't the limiting factor anyway - disk speed, bus 
bandwidth etc. all play a part, too.


Then there's the "what's the point?" angle... I mean, why take a vintage 
machine that's dog-slow in comparison to modern hardware and try to make it 
slightly less dog-slow?


cheers

Jules



RE: AlphaStation 200 NVRAM Problem

2016-04-23 Thread Robert Jarratt
>  Information on using that is however scarce and scattered, you can find
> some here to start:  and then the
> pinout for the serial diagnostic port is included here:
>  199909/cd1/alpha/pcdsatia.pdf>
> (the boot sequence is also described here, so you'll know that an NVRAM
> failure is reported by DROM code, i.e. before the final SRM or ARC console
> takes over and is able to use the regular serial port).


Indeed, I don't get any output on the standard serial port. 

> 
>  That's a bit cryptic, but knowing that this is a low-level CPU interface
> you can gather the wiring from this document:
>  alphaserver/technology/literature/164lxtrm.pdf>.
> So BSROMCLK is Tx and SROMCDAT is Rx, but as noted here and in the
> discussion in the first reference you need an EIA/TIA 232 driver and
> receiver (there is power available on the diagnostic port, so you can use
> it for the circuit), and of course you need to cross the lines wiring them
> to your host.
> 

I don't think the SROM diagnostics are going to help much because the
failure is in the DROM sequence, which comes after the SROM.


>  Finally the SROM console command reference is here:
>  alphaserver/technology/literature/srommini.pdf>.
> This manual doesn't specifically cover the Avanti, but I'd expect the user
> interface to be similar -- it's a low-level tool close to the CPU after
> all.
> 
>  NB on Avanti the 8kB NVRAM is separate from the TOY/NVR chip (which is a
> Benchmarq BQ4285, providing 114B of general storage only).
> 

I had already located the Benchmarq chip and found the spec to be
insufficient for the 8K NVRAM. The problem is, I don't know which chip has
the NVRAM, I have not been able to locate it and the manuals don't tell me.
I hope it isn't one of the ASICs. I have posted a photo of the board here:
http://bit.ly/1qHQnaB in case anyone can identify the NVRAM.

If the NVRAM contents are maintained by the battery then there should be a
way to reset them, but there does not seem to be one. I wonder if the NVRAM
persists without power? The manual seems to say that the TOY is battery
backed, but makes no mention of the NVRAM needing the battery.

Regards

Rob



Re: bit slice chips (was Re: Harris H800 Computer)

2016-04-23 Thread Noel Chiappa
> From: Brent Hilpert

> I'd say the 74181 (1970) deserves a mention here. Simpler (no register
> component, ALU only) but it pretty much kicked off the start of
> IC-level bit slicing.

Yes, it was used in quite a few machines. Among the PDP-11's alone, it is
found in the -11/45, /05, /40, /04 and /34, to name a few that I checked
quickly, and almost certainly others too (e.g. /70).

Noel


Re: Accelerator boards - no future? Bad business?

2016-04-23 Thread Pete Turnbull

On 23/04/2016 06:16, Eric Smith wrote:

On Fri, Apr 22, 2016 at 9:29 PM, Sean Conner  wrote:

   One major problem with adding a faster CPU to an SGI is the MIPS chip
itself---code compiled for one MIPS CPU (say, the R3000) won't run on
another MIPS CPU (say, the R4400) due to the differences in the pipeline.
MIPS compilers were specific for a chip because such details were not hidden
in the CPU itself, but left to the compiler to deal with.


Having written a bunch of R3000 and R4000/4200/4300/4400/4600 assembly
code in the 1990s, my (possibly faulty) recollection disagrees with
you. There are differences in supervisor-mode programming, but I don't
recall any issues with running 32-bit user-mode R3000 code on any
R4xxx. The programmer-visible pipeline behavior (e.g., branch delay
slots) were the same.

That's only considering the CPU itself, which I used as an embedded
processor; I never used IRIX so I don't know whether IRIX on R4xxx
might have somehow prevented use of IRIX R3xxx binaries (e.g., by
different system call conventions or the like).


Nope, you're right.  I've got R3000 and R4000 Indigos, R4000, R4400, 
R4600 and R5000 Indys, R5K O2s and an R10K Origin 2000 running IRIX 5.3 
and 6.5, and the code written for the R3000 Indigos works fine on all 
the others - with one exception (COFF vs ELF).


The same isn't necessarily true the other way around, of course, as the 
later processors and later IRIX versions had things that didn't 
translate back.  For example, cc under IRIX 5.3, even with an R5000SC 
CPU, compiles 32-bit "-mips1" by default and the resulting code will run 
on any of the above and also on R2000 machines.  However, in later 
versions of IRIX the default became "-mips2" or higher, and of course on 
some machines/IRIX versions the default became "-n32" or "-64" and in 
some cases "-mips3" or "-mips4".  Nevertheless there was still a "-o32" 
option and I've often compiled software on my O2K running 6.5.22 using 
"-o32 -mips1" and run the resulting binary on an R3K Indigo as well as 
the Origin (only somewhat slower :-)).  About half of my /usr/local/bin 
is compiled that way.


Another change was that IRIX 5.3 was the last version to support COFF 
binaries, and the compiler itself would only produce ELF; later versions 
of IRIX couldn't load COFF and would only run ELF binaries (with the 
possible exception of 6.0.1, but I don't remember).  Under 5.3, you had to 
use as(1) directly to generate COFF output.


--
Pete


Re: Z80 /WAIT signal question

2016-04-23 Thread Eric Smith
On Fri, Apr 22, 2016 at 12:26 PM, Eric Smith  wrote:
> I thought at one point I saw Zilog or Mostek Z80 documentation
> that gave the specific details of every M cycle of every instruction,
> but I can't find such a thing at the moment.  :-(

Found it. Section 12.0 of the Mostek MK3880 Central Processing Unit
Technical Manual, as found in the Mostek Microcomputer Z80 Data Book,
publication number 79602, August 1978. Also in the Mostek 1982/83 Z80
Designer's Guide, June 1982.

I'm still looking for the Z80 DMA Technical Manual.  **NOT** the data
sheet, I've got multiple copies of that.


Re: Seeking immediate rescue of full-rack SGI ONYX near Northbrook, IL

2016-04-23 Thread Andrew M Hoerter


On 4/19/16 14:58, Swift Griggs wrote:


one of your software vendors.  Ugh.  Of course, watching SGI under Rick
Belluzo (I hated that guy) wasn't much easier.  "Ohhh, I'm ex-Microsoft so
let's make Windows NT workstations." Ugh, Puh!, Bleh  grrraaat idea,
guys.  I wish the board could be retroactively fired for that.


It so happened that I had a front-row seat for that debacle (ok, maybe 
fourth-row), having attended one of the SGI road show events at which 
the Visual Workstation line was introduced, by none other than Rick B. 
himself.  I worked with several people who were doing 3D modeling at the 
time.  None of us could figure out why SGI would dive into the low end 
of the market when it seemed as though 3D hardware was about to become 
commoditized in a big way.  Maybe they were gambling on becoming Nvidia 
or 3DFX, and lost.


That wasn't the only SGI-Microsoft collaboration which went bad though; 
Around that time, I also went to a technical presentation about the 
Fahrenheit graphics API which was going to replace OpenGL.  It was 
stillborn and went nowhere.  Although I think I've read that bits and 
pieces of Fahrenheit went into Vulkan, much later on.





Re: Ibm s-100 system?

2016-04-23 Thread william degnan
On Thu, Apr 21, 2016 at 7:52 PM, Guy Sotomayor  wrote:

> Nothing I ever heard of and I was in IBM Boca at the time and would have
> heard
> *something* about it.
>
> TTFN - Guy
>
>

Are you sure? The IBM S-100 system was demoed only in Europe; I assume it was
developed there too.  I know of the S-100 cards with IBM minicomputer memory
that also eventually appeared in early RAM cards for the 5150.  Here is an
example:

http://vintagecomputer.net/ibm/5155/ibm_256KB_memory_card_6407740.jpg

-- 
@ BillDeg:
Web: vintagecomputer.net
Twitter: @billdeg 
Youtube: @billdeg 
Unauthorized Bio 


Re: Ibm s-100 system?

2016-04-23 Thread Guy Sotomayor

> On Apr 22, 2016, at 5:18 AM, william degnan  wrote:
> 
> On Thu, Apr 21, 2016 at 7:52 PM, Guy Sotomayor  wrote:
> 
>> Nothing I ever heard of and I was in IBM Boca at the time and would have
>> heard
>> *something* about it.
>> 
>> TTFN - Guy
>> 
>> 
> 
> Are you sure, the IBM S-100 system was demoed only in Europe, I assume
> developed there too.  I know of the S-100 cards with IBM mincomputer memory
> that also eventually appeared in early RAM cards for the 5150.  Here is an
> example:
> 
> http://vintagecomputer.net/ibm/5155/ibm_256KB_memory_card_6407740.jpg
> 

Actually, that memory is what went into the mainframes at the time.  IBM was
then one of the largest (if not the largest) producers of semiconductor memory.
It only went out and bought vendor memory when its fabs couldn’t meet all of the
internal demand.

The original PC used vendor DRAM because the point of the PC was to use
readily available (outside of IBM) components.  I wrote a somewhat long post
a while ago on why we’re still stuck with various timing artifacts due to the 
original
PC’s choice to use an NTSC color burst crystal as the main crystal for the PC.

TTFN - Guy