[cctalk] SMD/ESDI emulator progress report

2024-03-14 Thread Guy Sotomayor via cctalk

Hi,

I just wanted to provide a bit of a progress report on the SMD/ESDI 
emulator project.


Now that I'm retired I have a bit more time to actually work on it.  
Previously I was just doing a bunch of research and writing notes on the 
design.  I now have a solid design and I'm starting on the implementation.


I'm going to list some of the design goals and then sketch out a few of 
the major issues and how they're being addressed.


Goals:

 * Emulate any drive geometry
 * Emulate a drive's performance characteristics
 * Work across different interface types
 * Fully built emulator cost below $500

Major Issues:

 * SMD/ESDI have head switch times < 10 microseconds (basically the
   time it takes for the read amplifiers to settle on a "real" drive).
   Solving this issue drives the majority of the design.
 * Address marks on a "real" drive are implemented by writing a DC
   signal on the track; the read circuitry detects that and
   generates the address mark signal.

When looking at the specifications for SMD and ESDI disks there isn't 
really a lot of difference in how the drives behave.  The interfaces 
differ in detail but in reality the differences are pretty minor.  So 
the current design should allow for 95+% code sharing between the SMD 
and ESDI emulators.


To solve the head switch performance, it is necessary to have an entire 
cylinder in some sort of RAM.  This allows for very fast head switch 
times (i.e. the selected head just addresses a particular portion of the 
RAM).  However, this means that loading a cylinder (which in some cases 
could be as much as 1MB) could take considerable time.  It will take 
even longer if some of the tracks in the cylinder are "dirty" due to 
their having been written to prior to the seek.


Since I want the emulator to be able to faithfully emulate drives in all 
respects, the limiting factor is the cylinder-to-cylinder seek time 
(i.e. moving from one cylinder to an adjacent one).  This is typically 
in the 4-8ms range.  So doing the math, one must move 1MB in 4ms (that 
turns out to be ~250MB/sec of bandwidth...using 32-bit transfers, this 
means over 60M transfers/sec).
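
To make the arithmetic explicit, here's a little C sketch of the same 
back-of-the-envelope calculation (the 1MB cylinder and 4ms seek figures 
are from above; nothing else is assumed):

    #include <stdio.h>

    int main(void)
    {
        /* Worst case from above: a 1MB cylinder moved during a 4ms
           adjacent-cylinder seek. */
        double cyl_bytes = 1024.0 * 1024.0;
        double seek_s    = 4e-3;

        double bytes_per_s = cyl_bytes / seek_s;   /* ~262MB/s */
        double xfers_per_s = bytes_per_s / 4.0;    /* 32-bit transfers */

        printf("%.0f MB/s, %.1f M transfers/s\n",
               bytes_per_s / 1e6, xfers_per_s / 1e6);
        return 0;
    }

which prints roughly 262 MB/s and 65.5M transfers/s, matching the 
~250MB/sec and 60M+ transfers/sec quoted above.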


The above implies that the cylinder RAM and the storage holding the 
cylinders of the image must be capable of at least 60M transfers/sec 
between them.  This is going to involve a complex FPGA that is able to 
have large internal RAMs and a direct connection to some sort of DRAM to 
hold the full image.  I've chosen to use a SOM (System-On-Module) 
version of the Xilinx Zynq 7020.  This has dual 32-bit ARM cores (plus a 
whole bunch of peripherals), 32-bit DDR3 memory interface, plus a fairly 
large FPGA with enough block RAM to contain the maximum cylinder.  The 
calculations I've done should allow a new cylinder to be loaded from 
DRAM into the cylinder RAM in 4-8ms (I think with a few tricks I can 
keep it in the lower range).


I've looked at quite a few Zynq SOMs (and have acquired quite a few for 
evaluation purposes).  I finally found one that's ~$200 (most of the 
others are in the $400-$1000+ range).  This SOM brings out most of the 
Zynq's I/Os (94 I/Os) in addition to having ethernet, USB, serial, SD 
card, etc. as well as 1GB of 32-bit DDR3 DRAM.  It also runs Linux which 
means that developing the SW is fairly straightforward.


The next issue was how to emulate address marks.  The emulated drive 
will have a bit clock which is necessary for clocking out the data when 
reading (or in when writing).  The bit clock is always running (just 
like a "real" drive when spinning).  That will drive a counter (which 
represents which bit is under the emulated "head"); that counter (along 
with the head number) will be used to address the block RAM.  The 
counter is always running, so as to emulate the spinning disk.  The 
address marks are emulated by having a series of comparators (one for 
each possible sector).  They compare the bit counter with the value 
associated with the comparator; if there's a match then that signals an 
address mark.  It's a bit more complicated because writing address marks 
(in the case of soft sectors) has to be dealt with.
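
For anyone who wants to see the comparator scheme concretely, here's a 
small C model of the always-running bit counter and the per-sector 
comparators.  In the FPGA the comparisons happen in parallel, of 
course; MAX_SECTORS and all the names here are mine, not from the 
actual RTL:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_SECTORS 64          /* hypothetical upper bound */

    /* Bit offsets (from index) where each sector's address mark lives;
       one value per comparator, loaded when the format is known. */
    static uint32_t am_offset[MAX_SECTORS];
    static unsigned n_sectors;

    /* One tick of the always-running bit clock.  The counter models the
       rotating disk; a match on any comparator raises the address mark. */
    static bool bit_clock_tick(uint32_t *bit_ctr, uint32_t bits_per_track)
    {
        *bit_ctr = (*bit_ctr + 1) % bits_per_track;
        for (unsigned i = 0; i < n_sectors; i++)
            if (*bit_ctr == am_offset[i])
                return true;        /* signal address mark */
        return false;
    }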


The emulator is composed of 4 major components:

1. Emulator application plus some utilities
   I'm currently writing all of this code...since I've been a SW
   engineer for 50+ years, this is all "production" quality code and is
   extremely well documented...still a ways to go.
2. Linux device driver which provides the interface between the
   emulator application and the emulator HW
   I haven't started on the driver yet but it should be fairly
   straightforward as it really only provides an interface to the
   emulator HW for the emulator application.
3. Emulator HW RTL
   I haven't started on this other than sketching some of the basic
   blocks.  It is mainly the cylinder RAM and the serdes (I *may* be
   able to finesse the serdes by having 32 bits on the AXI bus and 1
   bit on the interface side...a nice 

[cctalk] Re: Getting floppy images to/from real floppy disks.

2023-05-25 Thread Guy Sotomayor via cctalk



On 5/25/23 13:21, Paul Koning via cctalk wrote:



On May 25, 2023, at 3:30 PM, Chuck Guzis via cctalk wrote:

On 5/25/23 10:06, Guy Sotomayor via cctalk wrote:

The way SPARK works is that you have code and then can also provide
proofs for the code.  Proofs, as you might expect, are *hard* to write
and in many cases are *huge* relative to the actual code (at least if
you want a platinum level proof).

...and we still get gems like the Boeing 737MAX...

--Chuck

Yes.  The problem is the gap between informal understanding and formal 
description.  For many programmers, that gap occurs when the program source is 
created.  If the programs are subjected to formal proofs, the gap occurs when 
the formal specs are written.

So such things are largely a non-solution.  They may help a little if the gap 
to the formal spec is smaller.  If, as Guy is saying, the formal spec is larger 
than the code, then obviously that won't be the case.


In our particular case, we spend about 10x more time developing all of 
the "safety" collateral (requirement docs, architecture docs, design 
docs, etc.) than actually writing, debugging and testing the code.


Part of the problem is that most of the automotive safety standards were 
developed for fairly simple use cases (thousands to a few tens of 
thousands of lines of code).  In our particular case, we're looking at 
tens of millions of lines of code and we've discovered that a lot of the 
processes specified by the standards do not scale well to that level of 
code.  :-/



Languages other than C and C++ have advantages in that they detect, or avoid, a 
whole lot of bugs that C/C++ ignore, like bounds violations or memory leaks.  
So Ada can be helpful in that some bugs are harder or impossible to create, or 
more likely to be detected in testing.  But, in spite of having taken a very 
interesting week-long course on program proofs by pioneer E.W. Dijkstra, I 
don't actually believe in those things.
I don't either.  ;-)  Proofs are *hard* and take a special way of 
thinking about the problem.  For example, prove that a doubly linked 
list points only to elements allowed in the linked list (i.e. things 
that have actually been placed on the list) and that the forward and 
backward pointers actually point to the elements they're supposed 
to...and that's one of the simpler things that needs to be proved.  It 
gets *really* interesting when you try to prove that the scheduler is 
actually scheduling the way it's supposed to.  :-/


The 737MAX is a classic example of designers turning off their brains before 
doing their work.  It is obvious even to me (who have never created 
safety-sensitive software) that you don't attach systems with single points of 
failure such as non-replicated sensors to a control system whose specific 
purpose is to point the airplane nose DOWN.  If you do your work with your 
brain disabled you can't produce correct software, with or without formal 
proofs.
Yes, in self-driving cars we do "sensor fusion" which allows us to 
derive (and validate/replicate) data from various sensors.  For example, 
we use cameras, LIDAR, etc. to validate each other's data.  The point is 
to not have a "single point of failure".


--
TTFN - Guy



[cctalk] Re: ***SPAM*** Re: ***SPAM*** Re: Getting floppy images to/from real floppy disks.

2023-05-25 Thread Guy Sotomayor via cctalk



On 5/25/23 10:00, Chuck Guzis via cctalk wrote:

On 5/25/23 08:58, Guy Sotomayor via cctalk wrote:

Ada and SPARK (a stripped-down version of Ada) are used heavily in
embedded software that has to be "safety certified".  SPARK also allows
the code to be "proven" (as in you can write formal proofs to ensure
that the code does what you say it does).  Ask me how I know.  ;-)

I was aware of Ada's requirements in the defense- and aerospace-related
industries.  Is that where your experience lies?  Is SPARK the "magic
bullet" that's been sought for decades to write provably correct code?


I'm familiar with it from the higher end automotive perspective 
(self-driving cars).  Even when using C/C++ we have *lots* of standards 
that we have to adhere to (MISRA, CERT-C, ISO-26262, etc).


The way SPARK works is that you have code and then can also provide 
proofs for the code.  Proofs, as you might expect, are *hard* to write 
and in many cases are *huge* relative to the actual code (at least if 
you want a platinum level proof).


--
TTFN - Guy



[cctalk] Re: ***SPAM*** Re: Getting floppy images to/from real floppy disks.

2023-05-25 Thread Guy Sotomayor via cctalk



On 5/25/23 07:55, Chuck Guzis via cctalk wrote:

On 5/25/23 04:52, Tony Duell via cctalk wrote:

For the programming language, I stick with C, not C++, not Python and
plain old makefiles--that's what the support libraries are written in.
I don't use an IDE, lest I become reliant on one--a text editor will do.
I document the heck out of code.  Over the 50 or so years that I've been
cranking out gibberish, it's nice to go back to code that I wrote 30 or
40 years ago and still be able to read it.


That's basically what I do too.  It's too easy to get stuck with an 
unsupported environment.  A text editor and makefiles mean that I can 
(generally) port my code over to any new environment fairly easily.




I'm all too aware of the changing trends in the industry--and how
quickly they can change.  I remember when there was a push in embedded
coding not long ago to use Ada--where is that today?
Ada and SPARK (a stripped-down version of Ada) are used heavily in 
embedded software that has to be "safety certified".  SPARK also allows 
the code to be "proven" (as in you can write formal proofs to ensure 
that the code does what you say it does).  Ask me how I know.  ;-)


--
TTFN - Guy



Re: DEC OSF/1 for i386?

2022-04-29 Thread Guy Sotomayor via cctalk
I knew folks who worked on A/UX at Apple, but I don't have any details 
about its internals.


TTFN - Guy

On 4/29/22 11:40, Cameron Kaiser via cctalk wrote:

but I know at IBM we had 2 principal "ports" that we maintained (PPC


Did this have anything to do with Apple's alleged "A/UX for PowerPC" which was
supposedly OSF/1 based?


--
TTFN - Guy



Re: DEC OSF/1 for i386?

2022-04-29 Thread Guy Sotomayor via cctalk
I was at IBM when OSF (and subsequently OSF/1) was created and had a lot 
of discussions with OSF at that time.  At IBM I was working on the IBM 
Microkernel.  OSF/1 also used Mach (but a different source base) as the 
kernel.  The big effort was to keep the APIs and documentation 
"similar".  We had huge arguments about RPC, and I think that's the area 
where we didn't converge, which I think made the whole thing pointless 
since the IPC/RPC was one of the main points of Mach.  :-/


I don't know what DEC did in terms of their OSF/1 product, but I know at 
IBM we had 2 principal "ports" that we maintained (PPC & x86) as well as 
a few others (MIPS, StrongARM, and 68K being the other ones as I recall) 
that we "kept alive".


TTFN - Guy

On 4/29/22 07:45, Dennis Grevenstein via cctech wrote:

Hi,

just recently I found this archive:

https://vetusware.com/download/OSF1%20Source%20Code%201.10/?id=11574

this is a package of source code for DEC OSF/1 V 1.0. I knew that this is
supposed to run on DECstations (with MIPS), in fact I have a DS3100
running it myself.
However, one thing really puzzled me: This archive apparently includes
support for i386. There is even a kernel build log from 1990.
Now that was news to me. I never realized that this worked on i386.
Can anybody here tell any stories about this?

regards,
Dennis


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-20 Thread Guy Sotomayor via cctalk
I'm using Zynq SOMs (System on a Module) that will plug into a "base 
board" (with Hirose connectors).  It is the base board that will have 
the "personality" of the emulator.  The base board will be fairly simple 
(level shifters, a small bit of logic and the drive interface 
transceivers).  I think I have an early version in KiCAD...but it needs 
updating.


I'm trying to use as much as I can from the free IP libraries, so I'm 
keeping things as simple as possible from a logic design perspective.  
Since I already have everything (in multiples) except for the base 
board, the cost to me is time at this point (which I don't have a lot of 
at the moment).


I also didn't want to get into doing any design with BGAs (at least 
where I need to worry about it), hence the decision to go with SOMs.  
With those, the SOM has the Zynq FPGA, flash, DRAM, etc. (including the 
critical VRs and clocks).  All I need to provide is 3.3V.  ;-)


I should be able to dig up the docs.  Many are already on Bitsavers.  
Let me know what you can't find there.


TTFN - Guy

On 4/20/22 11:22, shad via cctech wrote:

Guy,
I agree that accessing data in block RAM (synchronous with fixed latency) is 
really easier than accessing it from RAM (asynchronous with variable latency).
Anyway I'm weighing the "cost" of the additional complexity, which in exchange 
would allow saving on Zynq cost.
In any case memory access is never sequential, but a sequence of bursts with 
shorter length (16 beats or less).
Considering this, starting or ending a sequential transfer is just a matter of 
generating addresses that are multiples of the burst length.  For this however 
you have to forget about Xilinx's free IP cores, and work directly with the 
AXI3 bus of the HP ports.

As I would have to invest a large amount of time and money, it would be nice 
to have somebody interested in buying a working and assembled kit at a moderate 
price gain, so as to repay part of the investment.
This however drives the design to bottom-end FPGAs, with a very limited amount 
of internal memory... whence the memory-sparing design.

About documentation: you mentioned several documents about SMD/ESDI standards 
and related details.
Would you mind sharing this collection?

Many thanks.

Andrea


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-19 Thread Guy Sotomayor via cctalk
It's not really fast enough, and you'll get into all sorts of 
complications once you start to think about trying to keep up with the 
simulated rotation.  For example, if someone starts a read halfway 
through a rotation (e.g. after the index pulse), now you have to have 
logic/code that can start/stop the transfer in random places.  The way 
that I have it designed, it's all sequential, so no random starts or 
lengths, and it's all done during a seek, when the data isn't being 
clocked out.


The Zynq-7020 (which is my low end design) has 4.9Mb of block RAM (in 
140 36Kb blocks).  In the cylinders I actually use 9 bits per byte as I 
need an escape in order to encode some other data.  ;-)  With that it 
can hold the 512KB needed with some to spare.  My high end design will 
use the Zynq UltraScale+ (ZU3CG), which has 7.6Mb of block RAM (in 216 
36Kb blocks).  If I go to the next higher version (ZU4CG) the block RAM 
goes down to 4.5Mb (in 128 36Kb blocks) but gains 13.5Mb of "UltraRAM" 
which should allow for any reasonable cylinder buffering.
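
The post doesn't spell out the 9-bit encoding, but the general idea 
(bit 8 as an escape flag marking non-data symbols such as address 
marks) might look like this in C; the names and the flag choice are 
mine, not the actual design:

    #include <stdint.h>

    #define SYM_ESCAPE 0x100u   /* bit 8 set => control symbol, not data */

    static inline uint16_t sym_data(uint8_t b)     { return b; }
    static inline uint16_t sym_ctrl(uint8_t code)  { return SYM_ESCAPE | code; }
    static inline int      sym_is_ctrl(uint16_t s) { return (s & SYM_ESCAPE) != 0; }
    static inline uint8_t  sym_payload(uint16_t s) { return (uint8_t)s; }

With 9 bits per byte, a 512KB cylinder costs 576KB (about 4.6Mb) of 
block RAM, which is consistent with fitting in the 7020's 4.9Mb "with 
some to spare".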


Of course, I'm just describing my design and design requirements.  First 
and foremost I wanted a simple HW & SW design that could provide 
accurate drive timings (i.e. I could faithfully reproduce the timings of 
any particular drive) so as to maximize compatibility with any 
controller (and I have some weird ones).


I've been poring over ANSI specs, controller specs and drive specs for 
SMD/ESDI for a few years now and have thought about a number of 
different ways to do this; what I've described is what I've come up with.


You may have different goals which may drive you to make different 
choices/decisions.


TTFN - Guy

On 4/19/22 11:49, shad via cctalk wrote:

Guy,
I understand that the cylinder command has no particular timing requirements, 
while the head command must be effective within microseconds.  My doubt is 
whether RAM access on the high performance port could be fast enough to 
satisfy the latter as well.
In case it couldn't, or wasn't assured, I think the best strategy could be to 
preload only a small block of data for each head, for a prompt start on the 
head command; enough to safely manage RAM access latency.
Each block would also work as a buffer for the data of subsequent RAM accesses, 
until the whole cylinder had been processed.
This strategy would remove the strict requirement on block RAM capacity for the 
Zynq, and given that the bigger models cost a lot, it would be a significant 
saving for anybody.
Furthermore, support for any hypothetical disk with bigger cylinders (not SMD) 
or for tape with very large blocks or "infinite" streams might not be feasible 
with the whole-cylinder design.  I would prefer to avoid any such limitation, 
so as to possibly reuse the same data transfer modules for any media.

Andrea


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-19 Thread Guy Sotomayor via cctalk
The problem is that you don't get the cylinder and head information in 
the same command (they are 2 different commands).  So when you're doing a 
seek, you don't know which track(s) to prioritize.  That is why during a 
seek command I will transfer the entire cylinder, so when the head 
command arrives it can be handled quickly.  That's the only way I could 
think of to ensure maximum compatibility with the controllers (i.e. I 
can provide identical timings to an actual drive...you never really know 
what assumptions a particular controller might have).
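
In pseudo-real C, the seek path looks something like this (a minimal 
sketch under my own naming; the real data movement is DMA between DRAM 
and block RAM, not memcpy):

    #include <stdint.h>
    #include <string.h>

    #define MAX_HEADS   16
    #define TRACK_BYTES (32 * 1024)

    extern uint8_t drive_image[];   /* whole image, resident in DRAM */
    static uint8_t cyl_ram[MAX_HEADS][TRACK_BYTES]; /* models block RAM */
    static unsigned cur_cyl;
    static int cyl_dirty;

    /* Seek: flush the old cylinder if written, pull the whole new one
       in, so a later head select needs no further data movement. */
    static void do_seek(unsigned new_cyl, unsigned heads)
    {
        size_t cyl_bytes = (size_t)heads * TRACK_BYTES;

        if (cyl_dirty) {
            memcpy(drive_image + (size_t)cur_cyl * cyl_bytes,
                   cyl_ram, cyl_bytes);
            cyl_dirty = 0;
        }
        memcpy(cyl_ram, drive_image + (size_t)new_cyl * cyl_bytes,
               cyl_bytes);
        cur_cyl = new_cyl;
    }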


TTFN - Guy

On 4/18/22 10:26, shad via cctalk wrote:

Guy,
I agree on keeping Linux out of the loop, to allow fast response to head 
location and selection.
However, I'm not convinced that a whole cylinder must be in block RAM to 
achieve this.  Given that RAM access is fast (on a Zynq with the PL working at 
200MHz and an HP port at 64 bits I'm running at around 1200MB/s peak), the 
logic can jump across the whole disk without software intervention; it's just a 
matter of being able to calculate the conversion from CHS to address and read 
with sufficient buffering.
Probably using the Xilinx IP cores could be a severe limit, as these are really 
full of bugs and inefficient implementations... but they are free, so you can't 
argue.

On the software side, given that you can also go slow, there's no need for very 
complex driver development; just a user-level UIO driver could make do.
About languages, I know VHDL very well, and it's a little higher level than 
Verilog, so development with implementation parameters is maybe a little 
easier.

About interfaces which don't have separate clock recovery: these need a sort 
of oversampling, but you don't need to store every sample, just the ones with 
a state change.  Leveraging the IOSERDES you can work at a multiple of the 
internal clock.

Please keep in consideration that the idea is to develop a single device that 
can work both as drive and as interface, so the implementation should be 
reversible.  Probably this is not very difficult to obtain, as the fast data 
paths for read and write are already in opposite directions.

Andrea


I have proceeded as far as full block diagrams (still have to write all
of the verilog) and basic SW architecture.  This is why I've had this
discussion.  I've thought about this *a lot* and have gone through
several iterations of what will or will not work given timing constraints.

I have all of the components for putting a prototype together but I just
haven't had the time yet to write the verilog, the Linux device driver
and the "personality board".  That is, there is still a lot to do.  ;-)

Some requirements that I've put on my design:

   * straightforward SW architecture
   * SW is *not* time critical (that is I didn't want SW in the critical
     path of keeping the data stream)
   * Must be able to emulate any SMD/ESDI drive
   * Must be able to match performance of the drive (or better)
   * Must be able to work with any controller (ESDI or SMD...depending
     upon interface)

With those in mind, that's how I came up with my design.

I found that the Zynq has sufficient block RAM to contain a full
cylinder of 512KB.  I'm keeping a full cylinder because that allows
everything to be done in verilog except for seeks (see SW not being
required to be in the critical path).  If I didn't do that, then SW
would have to be involved in some aspects of head switches, etc. and those
can have tight (<< 100us) latencies and I just didn't want to try to
get Linux to handle that.  Yes, I could use some form of RTOS (I'm
actually in the middle of writing one...but that's still a ways away)
but I don't see any that are really up to what I need/want to do for
this project.

BTW, I'm basing my initial implementation on the Zynq 7020 which has 1GB
of DRAM.  However, I'm also planning on a "bigger/better" one based upon
the Zynq UltraScale+ which has 4GB of DRAM so that I can support
multiple/larger drives.

The amount required by Linux doesn't have to be large...I plan on having
the KMD just allocate a really big buffer (i.e. sufficient for
containing the entire drive image).  Linux will run happily in
128MB-256MB since there won't be any GUI.  It could be significantly
less if I were to strip out everything that isn't needed by the kernel
and only have a basic shell for booting/debug.  My plan is to have the
emulated drive data and the configuration file on the SD card...so
there's no real user interaction necessary (and Linux would not be on
the SD card but on the embedded flash on the Zynq module).


I chose ESDI and SMD fundamentally because the interface is 100% digital
(i.e. the data/clock separator is in the drive itself), so I don't need
to do any oversampling.


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-17 Thread Guy Sotomayor via cctalk
I have proceeded as far as full block diagrams (still have to write all 
of the verilog) and basic SW architecture.  This is why I've had this 
discussion.  I've thought about this *a lot* and have gone through 
several iterations of what will or will not work given timing constraints.


I have all of the components for putting a prototype together but I just 
haven't had the time yet to write the verilog, the Linux device driver 
and the "personality board".  That is, there is still a lot to do.  ;-)


Some requirements that I've put on my design:

 * straightforward SW architecture
 * SW is *not* time critical (that is I didn't want SW in the critical
   path of keeping the data stream)
 * Must be able to emulate any SMD/ESDI drive
 * Must be able to match performance of the drive (or better)
 * Must be able to work with any controller (ESDI or SMD...depending
   upon interface)

With those in mind, that's how I came up with my design.

I found that the Zynq has sufficient block RAM to contain a full 
cylinder of 512KB.  I'm keeping a full cylinder because that allows 
everything to be done in verilog except for seeks (see SW not being 
required to be in the critical path).  If I didn't do that, then SW 
would have to be involved in some aspects of head switches, etc. and 
those can have tight (<< 100us) latencies and I just didn't want to try 
to get Linux to handle that.  Yes, I could use some form of RTOS (I'm 
actually in the middle of writing one...but that's still a ways away) 
but I don't see any that are really up to what I need/want to do for 
this project.


BTW, I'm basing my initial implementation on the Zynq 7020 which has 1GB 
of DRAM.  However, I'm also planning on a "bigger/better" one based upon 
the Zynq Ultrascale+ which has 4GB of DRAM so that I can support 
multiple/larger drives.


The amount required by Linux doesn't have to be large...I plan on having 
the KMD just allocate a really big buffer (i.e. sufficient for 
containing the entire drive image).  Linux will run happily in 
128MB-256MB since there won't be any GUI.  It could be significantly 
less if I were to strip out everything that isn't needed by the kernel 
and only have a basic shell for booting/debug.  My plan is to have the 
emulated drive data and the configuration file on the SD card...so 
there's no real user interaction necessary (and Linux would not be on 
the SD card but on the embedded flash on the Zynq module).
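
For what the KMD's "really big buffer" might look like, here is a 
minimal Linux module sketch.  vmalloc is just a stand-in; a real driver 
feeding an FPGA DMA engine would want physically contiguous, DMA-able 
memory (e.g. via CMA) and would expose the buffer to the application 
with mmap.  All names and sizes are illustrative:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/vmalloc.h>

    #define IMAGE_BYTES (512UL * 1024 * 1024)  /* hypothetical image size */

    static void *image_buf;   /* holds the entire emulated drive image */

    static int __init emu_init(void)
    {
        image_buf = vmalloc(IMAGE_BYTES);
        return image_buf ? 0 : -ENOMEM;
    }

    static void __exit emu_exit(void)
    {
        vfree(image_buf);
    }

    module_init(emu_init);
    module_exit(emu_exit);
    MODULE_LICENSE("GPL");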


TTFN - Guy

On 4/17/22 10:28, shad via cctech wrote:

hello,
there's much discussion about the right method to transfer data in and out.
Of course there are several methods; the right one must be carefully chosen 
after some review of all the disk interfaces that must be supported.  The idea 
of having a copy of the whole disk in RAM is OK, assuming that a maximum size 
of around 512MB is required, as the RAM is also needed for the OS, and for the 
Zynq the maximum is 1GB.
About the logic implementation, we know that the device must be able to work 
with one cylinder at a time.  Given RAM bandwidth, this doesn't mean that it 
must fit completely in block RAM; it can also be produced at the output while 
it is being read, so the delay is really the time between the first data 
request and the actual read response.  In between, an elastic FIFO is required 
to adapt the synchronous constant-rate transfer of the disk to the burst 
transfers toward RAM.

Guy, you mentioned developing a similar interface.
Have you already produced some working hardware?

Andrea


--
TTFN - Guy


Re: idea for a universal disk interface

2022-04-17 Thread Guy Sotomayor via cctalk
I chose ESDI and SMD fundamentally because the interface is 100% digital 
(i.e. the data/clock separator is in the drive itself), so I don't need 
to do any oversampling.


TTFN - Guy

On 4/17/22 11:12, Paul Koning via cctalk wrote:



On Apr 17, 2022, at 1:28 PM, shad via cctalk wrote:

hello,
there's much discussion about the right method to transfer data in and out.
Of course there are several methods; the right one must be carefully chosen 
after some review of all the disk interfaces that must be supported.  The idea 
of having a copy of the whole disk in RAM is OK, assuming that a maximum size 
of around 512MB is required, as the RAM is also needed for the OS, and for the 
Zynq the maximum is 1GB.

For reading a disk, an attractive approach is to do a high speed analog capture 
of the waveforms.  That way you don't need a priori knowledge of the encoding, 
and it also allows you to use sophisticated algorithms (DSP, digital filtering, 
etc.) to recover marginal media.  A number of old tape recovery projects have 
used this approach.  For disk you have to go faster if you use an existing 
drive, but the numbers are perfectly manageable with modern hardware.

If you use this technique, you do generate a whole lot more data than the 
formatted capacity of the drive; 10x to 100x or so.  Throw in another order of 
magnitude if you step across the surface in small increments to avoid having to 
identify the track centerline in advance -- again, somewhat like the tape 
recovery machines that use a 36 track head to read 7 or 9 or 10 track tapes.

Fred mentioned how life gets hard if you don't have a drive.  I'm wondering how 
difficult it would be to build a usable "spin table": basically an accurate 
spindle that will accept the pack to be recovered and that will rotate at a 
modest speed, with a head positioner that can accurately position a read head 
along the surface.  One head would suffice, RAMAC fashion.  For slow rotation 
you'd want an MR head, and perhaps supplied air to float the head off the 
surface.  Perhaps a scheme like this with slow rotation could allow for 
recovery of much of the data on a platter that suffered a head crash, because 
you could spin it slowly enough that either the head doesn't touch the 
scratched areas, or touches them slowly enough that no further damage results.

paul



--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-17 Thread Guy Sotomayor via cctalk
I think the issue is that you're thinking of somehow emulating the 
formatted data.  I'm working on just emulating the bit-stream, as then 
it'll work with any controller and sector/track layout; I won't 
actually know what a sector really is (unless I do "hard sectoring", 
which some drives did support).


At a 15MHz clock rate, 30 bytes is ~16us.  Not a lot of time.  And 
frankly, that's defined by the controller and not the drive (though 
usually the drives specify some layout, that's only a recommendation).  
Dealing with drive speed variations doesn't solve anything because it's 
actually handled by the drive itself (i.e. the drive provides the clock 
to the controller so any variation is already accounted for).  The drive 
really only cares about the total bits (i.e. bits-per-inch) that the 
media supports.


If we assume a 32KB track at a 500MB/s DMA transfer rate, that takes 
65us.  But as I've said, the spec says that the time between a head 
select and read is 15us or so, so you can see that you can't just 
transfer a track and still meet the minimum timings.  I will agree that 
you can probably take longer, but I'm trying to have a design that can 
meet all of the minimum timings so I can emulate any drive/controller 
combination with at least the same performance as a real drive (and in 
many cases I can provide *much* higher performance).


By keeping a full cylinder in the FPGA Block RAM I can keep the head 
select time < 1us (it's basically just selecting the high order address 
bits going to the block RAM).
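
Put another way, with a whole cylinder in block RAM the head select is 
nothing more than an address calculation (a sketch with my own names; 
in the RTL this is just wiring the head number to the high order 
address bits):

    #include <stdint.h>

    #define TRACK_BITS (32u * 1024u * 8u)   /* 32KB track, per the above */

    /* The low bits come from the always-running bit counter, the high
       bits from the selected head, so switching heads is instantaneous. */
    static inline uint32_t blockram_bit_addr(uint32_t head, uint32_t bit_ctr)
    {
        return head * TRACK_BITS + bit_ctr;
    }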


By keeping the entire disk image in DRAM, I can emulate any drive (that 
fits in the DRAM) with identical (or faster) performance.  If I wanted 
to do something simpler (not much though) I could have a smaller DRAM 
(but since the Zynq modules I'm using have 1GB or 4GB of DRAM there 
isn't much motivation), but then any seek would be limited by access to 
the backing store.  Also remember, in the worst case you have to write 
the previous track out if it was written to, so that will slow things 
down as well.  With the full image maintained in DRAM, any writes can be 
performed in a lazy manner in the background so they won't impact the 
performance of the emulated drive.
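
A lazy write-back loop in that spirit could be as simple as this sketch 
(per-cylinder dirty tracking and the flush function are my assumptions, 
not the actual design):

    #include <stdbool.h>

    #define MAX_CYLS 2048   /* hypothetical drive size */

    static bool cyl_dirty[MAX_CYLS];

    extern void flush_cyl_to_backing_store(unsigned cyl); /* SD card, say */

    /* Run in the background whenever the emulator is otherwise idle, so
       writes never sit in the seek/read path. */
    static void lazy_writeback_pass(void)
    {
        for (unsigned c = 0; c < MAX_CYLS; c++) {
            if (cyl_dirty[c]) {
                flush_cyl_to_backing_store(c);
                cyl_dirty[c] = false;
            }
        }
    }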


TTFN - Guy

On 4/16/22 14:32, Tom Gardner wrote:

-Original Message-
From: Guy Sotomayor [mailto:g...@shiresoft.com]
Sent: Friday, April 15, 2022 3:25 PM
To: t.gard...@computer.org; cct...@classiccmp.org
Subject: Re: idea for a universal disk interface

I'm looking at what the spec says.  ;-)  The read command delay from the head 
set command is 15us (so I was wrong) but still not a lot of time (that is after 
a head set, a read command must be at least 15us later).



-
And after the read command is given there is a gap, usually all zeros, at the 
end of which is a sync byte which is then followed by the first good data (or 
header) byte.  In SMD the gaps can be 20 or 30 bytes long, so there is quite a 
bit of time until good data.

Tom



--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-15 Thread Guy Sotomayor via cctalk
I'm looking at what the spec says.  ;-)  The read command delay from the 
head set command is 15us (so I was wrong) but still not a lot of time 
(that is after a head set, a read command must be at least 15us later).


Since I'm not looking at the formatted data rate (just handling the raw 
bit stream) it doesn't really matter what the formatted rate is...and 
the formatted data is different between different controllers, so I 
don't want to even try to do that on the fly (and they might do tricks 
where different tracks/cylinders have different formats).


If someone wants the "formatted" data, then I'd let them post-process 
the captured data.


As I said, I'm trying to do this with fairly simple logic and low cost 
storage (as such, this isn't going to be particularly cheap).  I don't 
want to add another $100+ to the cost just to have a high performance 
drive when the HW is capable of doing a suitable job with a $10 SD card.


In reality an SD card (from a storage perspective) is way overkill.  
We're talking about emulating drives with capacities < 1GB, and good 
quality SD cards hold 32GB for $10 or so.


TTFN - Guy

On 4/15/22 12:12, Tom Gardner wrote:


I haven't looked it up but I bet the head switch time is a lot longer 
than 1-2 usec - that's what the leading gap is for and the sync took 
most of the gap back in those days.


The issue is sustained data rate isn't it?  The ESMD raw data rate is 
24 Mb/s but the formatted data is something like 80% of that, or maybe 
2.5 MB/sec.  A modern HDD in sequential mode can sustain a much higher 
rate, e.g. Seagate SAS at 520 MB/sec.  My understanding is that the 
sectors are slipped and/or the cylinders are horizontal so that head 
switching doesn't lose any revolutions.  Maybe one would run into a 
problem at the cylinder seek moment, so maybe one would have to keep 
each full emulated cylinder on the modern drive's cylinder, but with 
terabytes of data on a modern drive who cares about some wasted storage.


Tom

-Original Message-
From: Guy Sotomayor [mailto:g...@shiresoft.com]
Sent: Friday, April 15, 2022 10:56 AM
To: t.gard...@computer.org; cct...@classiccmp.org
Subject: Re: idea for a universal disk interface

I ran the numbers for Zynq FPGAs.  First of all, for ESDI and SMD the 
head switch time is 1-2us (basically the time it takes for the clocks 
to re-lock on the new data).

Two tracks isn't sufficient (you have to guess which the "other" track 
should be...and you will be wrong).

So I decided to go with a full cylinder (I'm allowing for up to 
32KB tracks and up to 16 heads) which is 512KB.  The Zynq DMA from HW 
block RAM to DRAM (at 500MB/s) is ~1ms.  Given that the previous 
cylinder could be dirty (i.e. has written data), the worst case seek 
time is ~2ms.  This allows me to emulate any seek latency curve(s) I want.


In my design, any dirty data is written back to storage in a lazy 
manner so the performance of the storage isn't really an issue.


I should note that the Zynq 7020 module has 1GB of DRAM on it, so 
there is no additional cost to just put the entire disk contents in 
DRAM, and I'm using the attached SD Card interface for storage (so you 
can use a $10 SD Card for storage).  Adding a high speed disk interface 
(e.g. M.2, PCIe, or other serially attached storage) would add 
additional cost in terms of having to create the interface as well as a 
reasonably fast drive, and I don't see the advantage.


I'm planning on using a Zynq UltraScale+ module to allow for larger 
disks and multiple disk emulations (it has more block RAM and 4GB of 
DRAM on the module).


TTFN - Guy

On 4/14/22 23:34, Tom Gardner wrote:

> I suggest if we are talking about an emulator it really isn't 
necessary to have the entire disk in DRAM; two tracks of DRAM acting 
as a buffer, with a modern HDD holding the emulated drive's data, should 
be fast enough to keep any old iron controller operating without 
missing any revolutions.  The maximum unformatted track length of any 
old iron drive is well known and therefore one can allocate the number 
of blocks sufficient to store a full track and then write every track, 
gaps and all, to the modern disk.  Given the data rate, track size and 
sequential seek times of a modern HDD one should be able to fill the 
next track buffer before the current track buffer is read into the 
controller.  If two track buffers and an HDD isn't fast enough then 
one could add a track buffer or two, or go to SSDs.
>
> This was the approach IBM used in its first RAMAC RAID, where I think 
they had to buffer a whole cylinder, but that was many generations ago.
>
> Tom
>
> -Original Message-
> From: Guy Sotomayor [mailto:g...@shiresoft.com]
> Sent: Wednesday, April 13, 2022 10:02 AM
> To: cct...@classiccmp.org
> Subject: Re: idea 

Re: idea for a universal disk interface

2022-04-15 Thread Guy Sotomayor via cctalk
I ran the numbers for Zynq FPGAs.  First of all, for ESDI and SMD the 
head switch time is 1-2us (basically the time it takes for the clocks to 
re-lock on the new data).

Two tracks isn't sufficient (you have to guess which the "other" track 
should be...and you will be wrong).


So I decided to go with a full cylinder (I'm allowing for up to 32KB 
tracks and up to 16 heads) which is 512KB.  The Zynq DMA from HW block 
RAM to DRAM (at 500MB/s) is ~1ms.  Given that the previous cylinder 
could be dirty (i.e. has written data), the worst case seek time is 
~2ms.  This allows me to emulate any seek latency curve(s) I want.
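
The ~1ms/~2ms figures fall straight out of the numbers given (a quick C 
check; nothing is assumed beyond the 512KB cylinder and 500MB/s DMA 
rate above):

    #include <stdio.h>

    int main(void)
    {
        double cyl_bytes = 512.0 * 1024;  /* 16 heads x 32KB tracks */
        double dma_rate  = 500e6;         /* block RAM <-> DRAM, bytes/s */

        double one_way = cyl_bytes / dma_rate;  /* ~1ms */
        double worst   = 2.0 * one_way;         /* dirty flush + new load */

        printf("load %.2f ms, worst-case seek %.2f ms\n",
               one_way * 1e3, worst * 1e3);
        return 0;
    }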


In my design, any dirty data is written back to storage in a lazy manner 
so the performance of the storage isn't really an issue.


I should note that the Zynq 7020 module has 1GB of DRAM on it, so there 
is no additional cost to just put the entire disk contents in DRAM, and 
I'm using the attached SD Card interface for storage (so you can use a 
$10 SD Card for storage).  Adding a high speed disk interface (e.g. 
M.2, PCIe, or other serially attached storage) would add additional 
cost in terms of having to create the interface as well as a reasonably 
fast drive, and I don't see the advantage.


I'm planning on using a Zynq UltraScale+ module to allow for larger 
disks and multiple disk emulations (it has more block RAM and 4GB of 
DRAM on the module).


TTFN - Guy

On 4/14/22 23:34, Tom Gardner wrote:

I suggest if we are talking about an emulator it really isn't necessary to have 
the entire disk in DRAM; two tracks of DRAM acting as a buffer, with a modern 
HDD holding the emulated drive's data, should be fast enough to keep any old 
iron controller operating without missing any revolutions.  The maximum 
unformatted track length of any old iron drive is well known, and therefore one 
can allocate the number of blocks sufficient to store a full track and then 
write every track, gaps and all, to the modern disk.  Given the data rate, track 
size and sequential seek times of a modern HDD, one should be able to fill the 
next track buffer before the current track buffer is read into the controller.  
If two track buffers and an HDD isn't fast enough then one could add a track 
buffer or two, or go to SSDs.

This was the approach IBM used in its first RAMAC RAID, where I think they had 
to buffer a whole cylinder, but that was many generations ago.

Tom

-Original Message-
From: Guy Sotomayor [mailto:g...@shiresoft.com]
Sent: Wednesday, April 13, 2022 10:02 AM
To: cct...@classiccmp.org
Subject: Re: idea for a universal disk interface

I've had a similar project in the works for a while (mainly for ESDI and SMD).

I think the main issue you're going to face is that what you need to do for 
something like ESDI or SMD (or any of the bit serial interfaces) is going to be 
radically different than something like IDE or SCSI.  This is not just the 
interface signals but also what's needed in the FPGA as well as the embedded SW.

For example, for the ESDI and SMD interfaces, meeting the head switch times 
(1-2 microseconds) requires that a full cylinder be cached in HW.  Once you do 
that and look at the timings to move a max cylinder between the HW cache (which 
will serialize/de-serialize the data over the interface) and storage, you'll 
see that the only way to have any reasonable performance (i.e. not have seek 
times be > 40ms for *any* seek) is to cache the entire drive image in DRAM and 
lazily write back dirty tracks.

I've been looking at the Xilinx Zynq SoCs for this (mainly the Zynq 7020 for 
single drive emulation and the Zynq UltraScale+ for up to 4 drives).  In my 
case the HW, FPGA logic and SW will share significant portions but they will 
not be identical.  In my case there is no need for an external PC (it just adds 
complexity) other than something to do basic configuration (e.g. drive 
parameters such as number of heads, number of cylinders, etc.) which will 
actually be over USB/serial.  The actual persistent storage will be an SD card 
since all reading will be done at "boot time" and writes will be handled in a 
lazy manner (the writes first go to the DRAM and are flushed based upon time 
or seek).

It may also be sufficient for configuration purposes to have a file
(text) on the SD card that defines the configuration so no external 
interactions would be necessary.  I'm still thinking about that one.  ;-)

TTFN - Guy

On 4/12/22 22:35, shad via cctech wrote:

Hello,
I'm a decent collector of big iron, aka minicomputers, mainly DEC and DG.
I'm often facing common problems with storage devices; magnetic discs and tapes 
are a little prone to giving headaches after years, and replacement 
drives/media in case of a severe failure are unobtainable.
In some cases, the ability to make a dump of the media, even without a running 
computer, is very important.

Whence the idea: realize a universal device, with several input/output 
interfaces, which could be used both as a storage emulator, to run a 

Re: idea for a universal disk interface

2022-04-13 Thread Guy Sotomayor via cctalk
I've had a similar project in the works for a while (mainly for ESDI and 
SMD).


I think the main issue you're going to face is that what you need to do 
for something like ESDI or SMD (or any of the bit serial interfaces) is 
going to be radically different than something like IDE or SCSI.  This 
is not just the interface signals but also what's needed in the FPGA as 
well as the embedded SW.


For example, for the ESDI and SMD interfaces, meeting the head switch 
times (1-2 microseconds) requires that a full cylinder be cached in HW.  
Once you do that and look at the timings to move a max cylinder between 
the HW cache (which will serialize/de-serialize the data over the 
interface) and storage, you'll see that the only way to have any 
reasonable performance (i.e. not have seek times be > 40ms for *any* 
seek) is to cache the entire drive image in DRAM and lazily write back 
dirty tracks.


I've been looking at the Xilinx Zynq SoCs for this (mainly the Zynq 7020 
for single drive emulation and the Zynq UltraScale+ for up to 4 
drives).  In my case the HW, FPGA logic and SW will share significant 
portions but they will not be identical.  In my case there is no need 
for an external PC (it just adds complexity) other than something to do 
basic configuration (e.g. drive parameters such as number of heads, 
number of cylinders, etc.) which will actually be over USB/serial.  The 
actual persistent storage will be an SD card since all reading will be 
done at "boot time" and writes will be handled in a lazy manner (the 
writes first go to the DRAM and are flushed based upon time or seek).


It may also be sufficient for configuration purposes to have a text 
file on the SD card that defines the configuration, so no external 
interactions would be necessary.  I'm still thinking about that one.  ;-)
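
As an illustration only (the actual format, if any, hasn't been 
decided), such a configuration file could be as simple as a few 
key/value lines on the SD card; every name and number here is a 
hypothetical example:

    # emulated drive geometry -- hypothetical example, not a real format
    drive      = example-smd        # label only
    interface  = smd
    cylinders  = 842
    heads      = 16
    track_kb   = 32
    image      = drive0.img         # raw image file on this SD card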


TTFN - Guy

On 4/12/22 22:35, shad via cctech wrote:

Hello,
I'm a decent collector of big iron, aka minicomputers, mainly DEC and DG.
I'm often facing common problems with storage devices; magnetic discs and tapes 
are a little prone to giving headaches after years, and replacement 
drives/media in case of a severe failure are unobtainable.
In some cases, the ability to make a dump of the media, even without a running 
computer, is very important.

Whence the idea: realize a universal device, with several input/output 
interfaces, which could be used both as a storage emulator, to run a computer 
without real storage, and as a controller emulator, to read/write media without 
a running computer.
To reduce costs as much as possible, and to allow the best compatibility, the 
main board shall host enough electrical interfaces to support a large number of 
standard disc interfaces, ideally by exchanging only a personality adapter for 
each specific interface, i.e. connectors and a few components.

There are several orders of problems:
- electrical signals, number and type (most disks employ 5V TTL or 3.3V TTL; 
some interfaces use differential mode for some faster signals)
- logical implementation: several electrical signals are used for a specific 
interface.  These must be handled with correct timings
- software implementation: the universal device shall be able to switch between 
interface modes and be controlled by a remote PC

I suppose the only way to obtain this is to employ an FPGA for the logic 
implementation of the interface, and a microprocessor running Linux to handle 
software management and data interchange with the outside world (via Ethernet).  
This means a Xilinx Zynq module, for instance.
I know there are several ready devices based on cheaper microcontrollers, but 
I'm sure these can't support the fast and tight timing required by hard disk 
interfaces (SMD-E runs at 24MHz).

The main board should include a large enough array of bidirectional 
transceivers, possibly with variable voltage, to support as many interfaces as 
possible, namely at least Shugart floppy, ST506 MFM/RLL, ESDI, SMD, IDE, SCSI1, 
DEC DSSI, DEC RX01/02, DG6030, and so on, to give a starting point.
The common factor determining what kind of disc interface can be supported on 
the hardware side is obviously the type of transceiver employed; for instance 
SATA would require a differential serial channel, which might not be available.
But most old electronics are based on 5V TTL/CMOS logic, so a large variety of 
computer generations should be doable.

For the first phase, I would ask you to contribute with a list of interfaces 
which could be interesting to emulate, especially if these are similar to one 
from my list.
I ask submitters to send me, by email or by web link when possible, detailed 
documentation about the interface they propose, so I can check whether it is 
doable and what kind of electrical signals are needed.
Also, detailed information about the interfaces I listed is appreciated, as it 
could fill in details I'm missing.

Thanks
Andrea


--
TTFN - Guy



Re: IBM 5110 (5100)

2022-03-17 Thread Guy Sotomayor via cctalk

But it has APL (you can tell by the keyboard *and* the BASIC/APL switch).

I can't say if the price is worth it for that...but having the APL ROS 
and the keytops has some value.


TTFN - Guy

On 3/17/22 17:56, Brent Hilpert via cctalk wrote:

On 2022-Mar-17, at 5:02 PM, D. Resor via cctalk wrote:

Was the computer auction in question a 5100 or a 5110?

Presently I see there is a 5110-C for sale
https://www.ebay.com/itm/294865912729

Yes, that was/is the one. So it has been relisted. It had been listed as a 5100 
or 5100-C earlier, then delisted, and I couldn't find a new listing for 
anything like a 5100 or 5110 this morning.

It might be easy to fix. Or it might have the computing potential of a doorstop.



There was also a 5110 with 8" external drives and a printer which sold:
https://www.ebay.com/itm/304377532685


--
TTFN - Guy



Re: VAX 780 on eBay

2022-01-01 Thread Guy Sotomayor via cctalk



On 1/1/22 10:40 AM, Paul Koning via cctalk wrote:



On Jan 1, 2022, at 1:12 PM, Noel Chiappa via cctalk wrote:

This:

https://www.ebay.com/itm/275084268137

The starting price is expensive, but probably not utterly unreasonable,
given that:

- the 780 was the first VAX, and thus historically important

- 780's are incredibly rare; this is the first one I recall seeing for sale
  in the classic computer era (versus several -11/70's, /40s, etc)

- this one appears to be reasonably complete; no idea if all the key CPU
  boards are included, but it's things like the backplane, etc (all of which
  seem to be there) which would be completely impossible to find now - if any
  boards _are_ missing, there's at least the _hope_ that they can be located
  (780 boards seem to come by every so often on eBait), since people seem to
  keep boards, not realizing that without the other bits they are useless

Interesting, but the argument for why it's not tested is implausible, which 
makes me very suspicious.  I suppose there might be a few American homes that 
have only 110 volt power, but I'm hard pressed to think of any I have ever 
seen, and that includes really old houses.


Without replacing the power controller in the 11/780, you need 208V 
3-phase to run it.  It's not impossible...nothing in the CPU actually 
*needs* 3-phase, as the individual power supplies are 120V, but the 
overall maximum load is greater than a 30A 120V circuit can supply.


TTFN - Guy



Re: RC11 controller (Was: Reproduction DEC 144-lamp indicator panels)

2021-12-10 Thread Guy Sotomayor via cctalk



On 12/10/21 6:21 AM, Jay Jaeger via cctalk wrote:

On 12/9/2021 11:06 PM, Guy Sotomayor via cctalk wrote:


On 12/9/21 8:15 PM, Jay Jaeger via cctalk wrote:


One could perhaps emulate the RS64 data stream using a fast-enough 
micro, ala the MFM emulator.


Why does everyone seem to want to emulate HW like this with a micro 
when a reasonable FPGA implementation with some external FRAM would 
do the job?




1)  Because not everyone has that kind of design experience and 
capability (I do, but that is beside the point).  In such a case, 
suggesting an FPGA might cause those readers to just skip it without 
further thought, whereas suggesting a micro is less likely to have 
that effect on someone who *does* have the design experience.


2)  Because the tooling on FPGAs is sometimes a pain and the parts 
themselves are always in flux, and the updated tools often don't 
support the older parts.  Over the last 20 years I have gone through 
at least 3 different FPGA development boards and toolsets, whereas my 
original Arduino is just as useful as ever.


3)  Because a highly flexible FPGA development board costs a lot more 
than a micro, and micros would be a lot cheaper on a stand-alone PCB 
than an FPGA part (or an FPGA through-hole carrier for those who are 
not up to doing something like an FPGA part on a PCB).


4)  Because a micro form factor is smaller than an FPGA development 
board.


5)  For someone well versed in software but not as well versed in 
design (though enough that they could still do what you suggest), 
doing the software might only take a couple of days for something like 
a 64Kw disk (if it isn't too fast), and easier to debug/fix/enhance as 
well.


6)  Because it was just a *suggestion* that one might emulate the disk 
itself in hardware (see also point 1).


All valid points.  My frustration has been where I see projects that use 
an RPi for something that a simple HW circuit/CPLD/FPGA could have done 
more simply and more efficiently.


I've lost count of the FPGA boards that I have.  I also typically don't 
use eval boards for actual projects other than for testing a few "flows".  
Everything gets done with a custom board, because I typically need other 
components and it gets too messy if I'm just using an "off the shelf" 
eval board (and more than likely the eval board doesn't have enough I/Os).


I should also note that the Beagle Bone MFM emulator isn't actually fast 
enough.  It works OK if you only have one drive but it's not fast enough 
to handle the drive select signal when you have more than one.




On the other hand, speed kills, and some disks are just too fast for a 
micro alone to do.

Project below.  ;-)


The SMD/ESDI emulator that I've been working on has to "brute force" 
the emulation because of BW concerns.  That is, it has to read the 
entire emulated disk image into DRAM because:


1. You need at least a track's worth of buffering to send/receive the
    data through the data interface (serial)
2. You don't have enough time to transfer tracks in/out of the track
    buffer to flash (or whatever) to meet the head switch times
3. You don't have enough time to transfer whole cylinders in/out of the
    cylinder buffer to flash (or whatever) to have reasonable
    track-to-track seek times

So it will require a micro, but that's mainly to manage what's going 
in/out of the (large) DRAM back to flash (it reads the entire 
emulated disk image into DRAM during boot).  All of the actual 
commands and data movement across the interface are done by the FPGA.




Cool.  Would love an ESDI emulator for my Apollo DN3000 and SMD 
emulation for my VAXen and PDP-11/24.


Yes, it's on my project list...I have it mostly designed but other stuff 
has pushed in front of it.  The big issue is the SW (the HW is fairly 
straightforward)...which is funny because I'm a system SW guy.  ;-)  I'm 
using a Xilinx Zynq FPGA mainly because I need:


 * A reasonably fast processor for handling the run-time management of
   buffers (one version has 4 ARM Cortex-A9 CPUs running at 1+GHz and
   the other has 4 ARM Cortex-A53 CPUs running at 1.5GHz).
 * Lots of DRAM (has to contain the entire emulated disk image).
   The smaller Zynq FPGA will support 1GB of DRAM and the larger one (at
   least the one that I'm using) supports 4GB of DRAM.
 * Lots of internal RAM (has to contain the maximum sized cylinder)
 * A *fast* connection between the DRAM and internal RAM (this
   determines the track-to-track latency).  Cylinders can be up to 1MB
   (32KB/track, 32 heads per cylinder) so when seeking, up to 2MB (1MB
   in, 1MB out) has to be moved to/from DRAM.  I'm trying to keep the
   seek times < 10ms (ideally ~4ms) so that means my data rate has to
   be on the order of 200-500MB/s.

I'm doing this for my Symbolics machines (ESDI and SMD) and 11/70 (SMD).

--
TTFN - Guy



Re: RC11 controller (Was: Reproduction DEC 144-lamp indicator panels)

2021-12-09 Thread Guy Sotomayor via cctalk



On 12/9/21 8:15 PM, Jay Jaeger via cctalk wrote:


One could perhaps emulate the RS64 data stream using a fast-enough 
micro, ala the MFM emulator.


Why does everyone seem to want to emulate HW like this with a micro when 
a reasonable FPGA implementation with some external FRAM would do the job?


The SMD/ESDI emulator that I've been working on has to "brute force" the 
emulation because of BW concerns.  That is, it has to read the entire 
emulated disk image into DRAM because:


1. You need at least a track's worth of buffering to send/receive the
   data through the data interface (serial)
2. You don't have enough time to transfer tracks in/out of the track
   buffer to flash (or whatever) to meet the head switch times (see the
   sanity check after this list)
3. You don't have enough time to transfer whole cylinders in/out of the
   cylinder buffer to flash (or whatever) to have reasonable
   track-to-track seek times
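
Point 2 is easy to sanity-check: moving even one track through flash 
inside an SMD/ESDI head switch window would need absurd bandwidth.  A 
quick C check, using the 32KB max track and 1-2us head switch figures 
from earlier in the thread:

    #include <stdio.h>

    int main(void)
    {
        double track_bytes = 32.0 * 1024;  /* max track size */
        double head_sw_s   = 2e-6;         /* SMD/ESDI head switch */

        /* Rate flash would need to sustain to swap a track buffer within
           the head switch window.  Far beyond any SD card or eMMC. */
        printf("required: %.1f GB/s\n", track_bytes / head_sw_s / 1e9);
        return 0;
    }

which comes out to ~16 GB/s.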

So it will require a micro, but that's mainly to manage what's going 
in/out of the (large) DRAM back to flash (it reads the entire emulated 
disk image into DRAM during boot).  All of the actual commands and data 
movement across the interface are done by the FPGA.


--
TTFN - Guy



Re: RK11-C indicator panel inlays?

2021-12-06 Thread Guy Sotomayor via cctalk
I haven't priced anything out yet.  My current project will have a 
reasonably large sized board but will be using DIN 41612 style 
connectors (so I don't need edge fingers).  I haven't gone to different 
board vendors yet to see what the pricing will be (still settling on 
board size and number of layers...right now it looks like it'll be 4 
layers).


On 12/6/21 3:50 PM, Mike Katz via cctalk wrote:
For other boards without gold fingers, where would you recommend, and 
how expensive would it be for Omnibus-size boards?


On 12/6/2021 5:07 PM, Guy Sotomayor via cctalk wrote:


On 12/6/21 2:45 PM, Mike Katz via cctalk wrote:



If I may ask a question: I have never had boards made before.  How 
do I find a good board house that is reasonable, and how do I specify 
the board, especially for the PDP-8 Omnibus which should have gold 
fingers on the edge connectors?


Anything the size of an Omnibus board with gold fingers is *not* 
going to be "reasonable", especially if you want "hard gold" (which 
IMHO is the only way to go if you want a reasonable life for the boards 
and sockets).


I've used Advanced Circuits for all my boards that needed gold 
fingers (they are *not* cheap...you've been warned).  When you submit 
your Gerber files, you also specify if you have edge fingers and how 
you want them plated.  I have been 100% satisfied with the boards 
that I've received from them.


TTFN - Guy





--
TTFN - Guy



Re: RK11-C indicator panel inlays?

2021-12-06 Thread Guy Sotomayor via cctalk



On 12/6/21 2:45 PM, Mike Katz via cctalk wrote:



If I may ask a question.  I have never had boards made before.  How do 
I find a good board house that is reasonable and how do I specify the 
board, especially for the PDP-8 Omnibus, which should have gold fingers 
on the edge connectors?


Anything the size of an Omnibus board with gold fingers is *not* going 
to be "reasonable" especially if you want "hard gold" (which IMHO is the 
only way to go if you want reasonable life of the boards and sockets).


I've used Advanced Circuits for all my boards that needed gold fingers 
(they are *not* cheap...you've been warned).  When you submit your 
Gerber files, you also specify if you have edge fingers and how you want 
them plated.  I have been 100% satisfied with the boards that I've 
received from them.


TTFN - Guy




Re: PDP-11/70 Boards

2021-11-30 Thread Guy Sotomayor via cctalk



On 11/30/21 10:06 AM, Noel Chiappa via cctalk wrote:


 From the blog of someone who got a KB11-A working, you'll really need KM11
cards; dunno if Guy Steele still has those clones he was selling.


I think you meant me.  Guy Steele is of Common LISP fame.  ;-)

I do still have KM11 boards and some overlays (I'd have to check to see 
if I have the appropriate overlays for the 11/70).  I don't 
unfortunately have any light masks or full kits.


--
TTFN - Guy



Re: The precarious state of classic software and hardware preservation

2021-11-22 Thread Guy Sotomayor via cctalk
In my case it's stuff that *I* didn't save; I just tossed it because 
"Why would I ever want this anymore?".  I *really* regret tossing all of 
the source for stuff I wrote while I was at IBM.  It was after all IBM's 
property (since I wrote it all as an IBM employee) and I doubt any of it 
survives in any form anywhere, but I still wish I had some of it.  :-(


After it's all said and done, one has to wonder if we really leave any 
lasting impact.  :-/


TTFN - Guy

On 11/22/21 2:00 PM, s shumaker via cctalk wrote:
and yet, after it's over and there's *nothing* left from 30+ years of 
collecting, there are occasional reflections on what you left behind...


just saying...

Steve


On 11/22/2021 11:50 AM, John Ames via cctalk wrote:

On 2021-11-21 9:45 a.m., Adam Thornton via cctalk wrote:

On 11/19/21 9:33 PM, Steve Malikoff via cctalk wrote:

And what happens when you wake  up one morning to find archive.org is
gone, too?



Fundamentally, eventually we're all going to be indistinguishable
mass-components inside the supermassive black hole that used to be the
Milky Way and Andromeda galaxies anyway.

Smoke 'em while you got 'em.

Yeah, I had a long, hard think about this while the Caldor Fire was
looking like it was about to come knocking on my doorstep this fall
and I was trying to prep myself for a short-notice evacuation and
decide what I could and couldn't take (read: leave stowed in the trunk
of the car for the next couple weeks.) Ultimately, while I'd *like*
what I have and enjoy to pass on to someone else once I get busy
decomposing, in the long run it's all dust, so I'm not gonna worry
myself too much over it.



--
TTFN - Guy



Re: Found my favorite DOS editor

2021-09-28 Thread Guy Sotomayor via cctalk



On 9/28/21 3:41 PM, Fred Cisin via cctalk wrote:
"I've been using vi for about two years, mostly because I can't 
figure out how to exit it."

:q
you're welcome

Or having to power cycle the machine to get out of EMACS.

On Tue, 28 Sep 2021, Mike Katz via cctalk wrote:

To Exit EMACS:  Control-X Control-C



Can EMACS be expanded enough to emulate VI?


Yes.  There is an elisp package called EVIL (Extensible VI Layer) that 
emulates VI in EMACS.


Since EMACS has a full programming language (elisp), you can write 
anything you want in it (mail readers, browsers, calendar apps, other 
editors, etc).  I've written a few things in elisp to mainly deal with 
global changes that were more complicated than I could figure out with a 
SED script.



Can VI be expanded enough to emulate EMACS?

No idea.

--
TTFN - Guy



Re: Found my favorite DOS editor

2021-09-28 Thread Guy Sotomayor via cctalk



On 9/28/21 3:02 PM, jim stephens via cctalk wrote:



On 9/28/2021 2:48 PM, Al Kossow via cctalk wrote:


"I've been using vi for about two years, mostly because I can't 
figure out

how to exit it."


:q

you're welcome


Or having to power cycle the machine to get out of EMACS.


Why would you ever want to get out of EMACS?  ;-)

Editors I've used:

 * SOS (Son-Of-Stopgap) on TOPS-10
 * TECO-10 on TOPS-10
 * XEDIT on VM/370
 * EMACS

I only use VI if I absolutely must and always have issues with the modality.

--
TTFN - Guy



Re: Multiprocessor Qbus PDP-11

2021-08-20 Thread Guy Sotomayor via cctalk
There were a couple of other PDP-11 multiprocessors that I know of (and 
used):


 * C.MMP (eventually 16 PDP-11/40e's in an SMP configuration with a
   crosspoint switch accessing a large memory).  It ran a
   capability-based OS called Hydra.
 * CM*: this was a cluster of LSI-11s (as I recall) that were
   hierarchically interconnected to allow for distributed operation (I
   think it was potentially capable of running with 255 nodes).  I don't
   recall what OS CM* used.

Of course, neither of the above used off-the-shelf OS's or software.

TTFN - Guy

On 8/20/21 12:41 PM, Alan Frisbie via cctalk wrote:

Charles Dickman  wrote:

> There are indications in the KDJ11-B processor spec on bitsavers that
> the M8190 could be used in a multiprocessor configuration. For
> example, bit 10 of the Maintenance Register (17 777 750) is labeled
> "Multiprocessor Slave" and indicates that the bus arbitrator is
> disabled. There is also section 6.6, "Cache Multi-Processor Hooks",
> that describes cache features that allow multiprocessor operation.
>
>Would it be as simple as connecting to 11/83 qbus together? And adding
> the proper software.
>
> Anybody ever heard of such a thing?

Such a system was put together and tested at DEC with the RSX group
(who did the PDP-11/74 multiprocessor work).  I'm told that while it
worked, it wasn't terribly successful, and the project was abandoned.

I was given a gift of one of the CPU modules that was used in the test
and I might still have it around here.  I can't recall for certain,
but I think the module required some ECOs to make it work in a
multi-processor configuration.

The person to ask about this, Brian McCarthy, is unfortunately no
longer with us.  :-(

Alan Frisbie


--
TTFN - Guy



Re: Reading MT/ST Tapes

2021-07-31 Thread Guy Sotomayor via cctalk



On 7/31/21 9:19 AM, Chuck Guzis via cctalk wrote:

On 7/31/21 8:55 AM, Paul Berger via cctalk wrote:


Since there were still a few 360s around when I started, I also got to
see the inside of a 1052 a few times; they are a really stripped-down
keyboardless Selectric.  They used a function cam to space and, since
they did not have a tab rack, they would space a lot, which would cause
the space cam to wear.  I remember one that was so worn that when it
cycled it wobbled very noticeably; the customer would not let us replace
it as this was the console for the 360 and they did not want it
unavailable for the time it would take to replace it.  Some customers
apparently would have a spare 1052 onsite.  The keyboard on the 1052 is
the keyboard from a keypunch machine.

Did the 1620 Mod II and the 1130 use the same Selectric mechanism as the
S/360 1052?  I remember that the Model B on the CADET always felt as if
it would shake itself to pieces every time the carriage returned.


I was "loaned" a 1052 when I was in college and it was built like a 
tank.  Much heavier than typical selectrics from what I could tell.


I built my own 48v drivers to run it and wrote a bunch of code (8080/z80) 
to run it as an ASCII terminal (yea, I know).  Unfortunately, I've lost 
all of it in various moves/purges.


--
TTFN - Guy



Re: Looking for VAX6000 items

2021-07-14 Thread Guy Sotomayor via cctalk



On 7/14/21 9:50 AM, Paul Koning wrote:



On Jul 14, 2021, at 12:33 PM, Guy Sotomayor via cctalk  
wrote:

I've found 2 issues w.r.t. "rotary converters".

* They *always* consume lots of power regardless of the actual load

Really?  That seems odd.  A rotary converter is merely a three phase motor with 
run capacitors.  Just like any other motor, its power demand depends on the 
applied load.  A normal motor spinning without anything connected to it 
consumes power to overcome electrical, magnetic, and friction losses, but 
none of these are particularly large.

Can you cite a source for this?
Spec sheets for various rotary converters that I looked at.  I'd have to 
go back and find them again but they typically drew full load power all 
the time...and they were *loud*.



* They typically don't have great frequency regulation as they are
   really designed for machine tools (which are pretty tolerant) so if
   the load varies, the frequency will vary until the "mass" catches up

They have no frequency regulation at all; what comes out of the third wire is a 
phase shifted version of the line input.

You may be thinking about motor-generators, where the output frequency is 
defined by the construction of the generator section and how fast it spins.  
Yes, under high load those will slow down some, reducing the output frequency.


I did a fair amount of investigation of this in order to power the peripherals 
for my IBM 4331.  The peripherals in total require on the order of 21KVA of 
3-phase power and with them (printer, card reader/punch and tape drives) the 
load will vary *a lot*, which would screw up the DASD (string of 3340 drives and 
some 3350 clones).

Yes, I would expect that.  Power supplies would not care much.  Another example 
is the CDC 6000 series, which uses 400 Hz M/G sets feeding power supplies.  The 
disk drives run off mains power, so any M/G speed variations are not a factor.


I ended up looking at a solid state phase converter (takes in 220v single phase 
and produces 208v 3-phase).  It has good frequency regulation (< 1%) and only 
consumes 100W at idle.  Plus it's relatively small and quiet.  The downside is 
cost (~$5000).

$5000 ???  I have a VFC on my lathe (3 hp rating, so about 2 kW electric).  It 
cost only $150 or so as I recall -- TECO Westinghouse brand. I think they are 
still around.  That particular model was rated for single phase input.  Larger 
ones are not, though I'm told that they still work if connected that way (220 
to two of the input terminals and the third left open) at reduced rating.

Here is a current example, 3 hp single phase input: 
https://www.wolfautomation.com/vfd-3hp-230v-single-phase-ip20/

The concern with VFCs is the pulse width modulated output waveform, which I am 
told will bother some types of loads (some electronics) but not others.  Motors 
will certainly be fine with them, so if you're looking at feeding disk drive 
motor loads, this is the perfect answer.


The one I looked at produced full sine wave output for all 3 phases.  I 
don't recall the THD but it was sub 1%.


21KVA I think works out to 15 or 20HP.  The input for what I was looking 
at was 75A @ 220v single phase.  So it's quite a bit more than 2KW and 
the MOSFETs they use are *huge*.
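
For scale, my own back-of-the-envelope arithmetic (ignoring power
factor, so the HP figure is an upper bound):

#include <stdio.h>

int main(void)
{
    /* Single-phase input the converter was spec'd for. */
    printf("input: %.1f kVA\n", 220.0 * 75.0 / 1000.0);     /* 16.5 kVA */

    /* 21KVA of load at 746 W/HP; a real motor rating would be lower
       once power factor and efficiency are taken into account. */
    printf("21KVA is at most %.0f HP\n", 21000.0 / 746.0);  /* ~28 HP */
    return 0;
}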


Yes, the "small" VCFs are relatively inexpensive if they are just PWM 
outputs.


I was concerned because there are ferro-resonant transformers in some 
of this gear and the IBM specs for these devices were pretty tight on 
frequency and THD.  Given the nature of this gear, I'd rather not have 
to go and start replacing unobtainium parts due to poor quality power.


--
TTFN - Guy



Re: Looking for VAX6000 items

2021-07-14 Thread Guy Sotomayor via cctalk



On 7/14/21 6:21 AM, Paul Koning via cctalk wrote:



On Jul 13, 2021, at 11:34 PM, Chris Zach via cctalk  
wrote:


When we got an 8530 at work in the early 90s (needed a machine with a
Nautilus bus for specific hardware testing), it was definitely a
3-phase machine and since we were in an industrial setting, I just
tapped into our panel at the back of the warehouse and wired up a
3-phase outlet for it.  It never sat on our datacenter floor as a
result, but it really only ever had one purpose and that wasn't a
daily driver.  Too much power, too much heat for so few employees (at
that stage of the company).

Interesting. Were the power supplies 3 phase input? Like you I have noticed 
that most pdp and vax gear just pull 120 volt legs off the 3 phase to balance 
power loads. So you can run them on a couple of 120 circuits. Outside of say 
the RP07 (which is a real 3 phase motor)

A number of the large disk drives use 3 phase motors; RP04/5/6 are examples as 
well.

Three phase motors won't run on single phase power without help from run capacitors.  
(There is no such thing as "two phase power" -- 220 volts is single phase, 
balanced.)

If the issue is motors, a "variable frequency converter" will do the job 
easily.  I have suggested in the past that three phase power supplies could run from 
those, but others have pointed out I overlooked some issues.  So that's probably not a 
good idea.

If you need three phase power to feed power supplies or other non-motor power consumers, 
the best answer is probably a "rotary converter".  You can find those in 
machine tool supply catalogs.  Basically they are a three phase motor equipped with run 
capacitors so they can be fed single phase power; the three phase power needed is then 
taken off the three motor terminals.  You can think of these as rotary transformers -- 
dynamotors in a sense, for those of you who remember electronics that old.  :-)

Don't look at "static converters" -- those are only for motors, it seems they 
aren't much more than run capacitors in a box.  They won't help you for anything other 
than a motor, and even for motors they aren't very good.

paul


I've found 2 issues w.r.t. "rotary converters".

 * They *always* consume lots of power regardless of the actual load
 * They typically don't have great frequency regulation as they are
   really designed for machine tools (which are pretty tolerant) so if
   the load varies, the frequency will vary until the "mass" catches up

I did a fair amount of investigation of this in order to power the 
peripherals for my IBM 4331.  The peripherals in total require on the 
order of 21KVA of 3-phase power and with them (printer, card 
reader/punch and tape drives) the load will vary *a lot*, which would 
screw up the DASD (string of 3340 drives and some 3350 clones).


I ended up looking at a solid state phase converter (takes in 220v 
single phase and produces 208v 3-phase).  It has good frequency 
regulation (< 1%) and only consumes 100W at idle.  Plus it's relatively 
small and quiet.  The downside is cost (~$5000).


--
TTFN - Guy



Re: PDP-11/05 (was: PDP-11/05 microcode dump?)

2021-06-15 Thread Guy Sotomayor via cctalk



On 6/15/21 12:16 PM, Tom Uban via cctalk wrote:

On 6/15/21 2:02 PM, Josh Dersch wrote:

Just to provide some real-world data, I used a pair of KM11's to debug my 
11/05, see the picture here:

http://yahozna.dyndns.org/scratch/1105-debug.jpg 


They worked fine.  (These are clones, from Guy Sotomayor's kit.)  I can verify 
tonight whether I
have the earlier or later rev CPU set, if that helps.

- Josh


Interesting! From your pic, you have the M7260 without the circular baud rate 
selector switch, but I
cannot tell which M7261 board you have.

Does the machine come up and run normally with the boards in and the switches 
all the disabled
positions or do you have to do a special sequence to start?

I will have to look at the schematics to see how the two slots connect to the 
processor on each of
the board versions and maybe also take a look at Guy's KL11 schematic if it is 
on his site.


The schematic should be in the user's manual for the KM11.

--

TTFN - Guy



Re: Is this a new record?

2021-04-22 Thread Guy Sotomayor via cctalk
I have a number of keyboards that folks of this ilk like (several 
Symbolics keyboards and a number of 3278/9 keyboards). Fortunately, 
they're all connected to respective machines.


I did see that someone (on ebay) had taken an APL 3278 keyboard and 
converted it to USB!  Grr.  These people make me mad.


On 4/22/21 4:19 PM, Josh Dersch via cctalk wrote:

https://www.ebay.com/itm/164815576309

$9570 for a keyboard.

As much as I'd like to find a keyboard for my Lambda's second head, I
somehow doubt that's going to happen.  And now I think I need to go find a
really, really (really) safe place to keep the keyboard I *do* have...

- Josh


--
TTFN - Guy



Re: Anyone know ancient versions of XLC?

2021-04-15 Thread Guy Sotomayor via cctalk



On 4/15/21 9:42 AM, Liam Proven via cctalk wrote:

On Thu, 15 Apr 2021 at 16:00, Stefan Skoglund  wrote:


FRAME from that era was nice and fast.

As in FrameMaker? I barely know it. Back in the '80s I was a total
Aldus PageMaker fanboy. :-) IMHO one of the greatest GUI apps ever
written.

I've used FrameMaker a lot...it's great for handling large documents and 
collections of documents.  Used it quite a bit at IBM and handled 1000+ 
page documents (of course that wasn't all one "source" file).


I could never get my head around Word for anything more than 10 pages or 
so.  Just too hard to deal with everything in massive documents.


Now I almost exclusively use LaTeX.  I've found that being able to use 
my own text editor to actually write the content means I don't have to 
switch between different notions of how moving around and editing should 
work.  Using a mark-up language also means I generally have more control 
over how things appear in the document (something that continually 
frustrated me with Word, especially when dealing with cross references 
and figures).


--
TTFN - Guy



Re: 80286 Protected Mode Test

2021-03-15 Thread Guy Sotomayor via cctalk

On 3/15/21 7:23 AM, Noel Chiappa via cctalk wrote:

 > From: Guy Sotomayor

 > the LOADALL instruction, including all of its warts (and its inability
 > to switch back from protected mode)

Good to have that confirmed (for the 286; apparently it works in the 386).
The 386 LOADALL instruction was different (not really a surprise since 
the internal microarchitecture was different).  The 386 didn't need to 
do this "hack" because it had vm86 mode for tasks, so that accomplished 
what everyone was really using LOADALL on the 286 for.


 > the other way to get back to real mode from protected mode is via a
 > triple-fault.

Any insight into why IBM didn't use that, but went with the (allegedly slow)
keyboard hack?
At this point I don't recall.  But I suspect it was considered 
conceptually simpler.
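
For reference, the keyboard hack itself looks roughly like this (a
minimal sketch, assuming a Borland-style DOS compiler for outportb()
and MK_FP(); the port and CMOS register numbers are the documented
PC/AT ones):

#include <dos.h>

void back_to_real_mode(void (far *resume)(void))
{
    /* Stash the real-mode resume address where POST will look for it:
       the reset vector save area at 0040:0067. */
    unsigned long far *vec = (unsigned long far *) MK_FP(0x40, 0x67);
    *vec = (unsigned long) resume;

    /* Tell POST this reset is a deliberate mode switch by writing a
       shutdown code (0x0A = jump via 0040:0067) into CMOS register 0x0F. */
    outportb(0x70, 0x0F);
    outportb(0x71, 0x0A);

    /* Pulse the CPU reset line via keyboard controller command 0xFE.
       The 286 comes out of reset in real mode, POST sees the shutdown
       byte, and execution resumes at the stashed vector. */
    outportb(0x64, 0xFE);
    for (;;)
        ;  /* spin until the reset takes effect */
}

The slowness people complained about is mostly in that last step: the
reset has to go through the keyboard controller and then back through
enough of POST to notice the shutdown byte.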


--
TTFN - Guy



Re: 80286 Protected Mode Test

2021-03-14 Thread Guy Sotomayor via cctalk



On 3/14/21 11:09 AM, Peter Corlett via cctalk wrote:

On Sun, Mar 14, 2021 at 04:32:20PM +0100, Maciej W. Rozycki via cctalk wrote:

On Sun, 7 Mar 2021, Noel Chiappa via cctalk wrote:

The 286 can exit protected mode with the LOADALL instruction.

[...]

The existence of LOADALL (used for in-circuit emulation, a predecessor
technique to modern JTAG debugging and the instruction the modern x86 RSM
instruction grew from) in the 80286 wasn't public information for a very
long time, and you won't find it in public Intel 80286 CPU documentation
even today. Even if IBM engineers knew of its existence at the time the
PC/AT was being designed, surely they would have decided not to rely, in
their design, on something not guaranteed by the CPU manufacturer to exist.


I can say with a fair amount of certainty that we at IBM knew of the 
existence of the LOADALL instruction, including all of its warts (and 
its inability to switch back from protected mode), from the earliest days.


There were many heated discussions in various task forces (this was of 
course IBM) about how the next generation OS (to become OS/2) should 
handle the '286.  First and foremost was how to be able to run DOS 
programs on the '286.  Over very vocal opposition, management decided 
to use "mode switching" rather than any of the other techniques.  It 
should be noted that a significant portion of us advocated abandoning 
the '286 in favor of the '386 to solve this problem.  The argument that 
management made against that approach assumed that OS/2 would be ready 
in 9 months and that the '386 would be late (the '386 at the time was 
about 12-18 months away).  It turned out that OS/2 took well over 18 
months to develop.


At the time I was fairly familiar with the LOADALL instruction.  I had 
modified PC/AT Xenix to use the LOADALL instruction to allow for running 
Xenix programs and multiple DOS programs simultaneously.  I gave 
multiple demos to various folks in management but to no avail.  They had 
decided on mode switching as *the* way that OS/2 was going to work.


I should also note that the other way to get back to real mode from 
protected mode is via a triple-fault.  What gets me (and I railed on 
Intel about this when I worked there for a time) is that it still exists 
in the architecture even though they now have a machine check 
architecture (which, while at IBM, I pushed Intel to implement for the 
'386!).



The Wikipedia page on LOADALL claims "The 80286 LOADALL instruction can not
be used to switch from protected back to real mode (it can't clear the PE
bit in the MSW). However, use of the LOADALL instruction can avoid the need
to switch to protected mode altogether."

I find that paragraph very persuasive. The author knows about LOADALL and
the desire to use it to avoid going into protected mode, and also explains
that there's a specific exception in its behaviour which prevents returning
to real mode. All of the other hacky uses of LOADALL would be unnecessary if
it could be used to switch modes at will. It just doesn't seem like
something that would be written if it was wrong.

Is Wikipedia incorrect and the 286 LOADALL *can* exit protected mode, and if
so, how?


--
TTFN - Guy



Re: DEC RK11-C Disk Controller - on ebay...or is it?

2021-02-08 Thread Guy Sotomayor via cctalk
It looks like it could be an RK11-C.  Are you possibly thinking of the 
RK11-D which fits in a BA11 chassis?


TTFN - Guy

On 2/8/21 2:53 PM, Bill Degnan via cctalk wrote:

If you search ebay for "DEC RK11-C Disk Controller", you'll find a listing
of a backplane of flipchip cards, but it's not like any RK11-C I have ever
seen.  Am I right, this is a mis-labeled auction?
Bill


--
TTFN - Guy



Re: PDP-11/70 debugging advice

2021-01-31 Thread Guy Sotomayor via cctalk
Did you check to make sure that power is wired correctly to the 
PEP-70/Hypercache?  They are typically installed in "empty" slots and 
don't have power (or anything else) routed to them.  They require some 
additional jumpers to be installed on the backplane so that they get power.



On 1/31/21 2:31 PM, Josh Dersch via cctalk wrote:

Hi all --

Making some progress with the "fire sale" PDP-11/70. Over the past month
I've rebuilt the power supplies and burned them in on the bench, and I've
gotten things cleaned up and reassembled.  I'm still waiting on some new
chassis fans but my curiosity overwhelmed my caution and I decided to power
it up for a short time (like 30 seconds) just to see what happens.  Good
news: no smoke or fire.  Voltages look good (need a tiny bit of adjustment
yet) and AC LO and DC LO looked good everywhere I tested them.  Bad news:
processor is almost entirely unresponsive; comes up with the RUN and MASTER
lights on, toggling Halt, and hitting Start causes the RUN light to go out,
but that's the only response I get from the console.

I got out the KM11 boardset and with that installed I can step through
microinstructions and it's definitely executing them, and seems to be
following the flow diagrams in the engineering drawings.  Left to its own
devices, however, the processor doesn't seem to be executing
microinstructions at all, it's stuck at uAddress 200.

In the troubleshooting section of the 11/70 service docs (diagram on p.
5-16) it states:

IF LOAD ADRS DOES NOT WORK AND:
- RUN, MASTER & ALL DATA INDICATORS ARE ON
- uADRS = 200 (ZAP)
THEN MEMORY HAS LOST POWER

Which seems to adequately describe the symptoms I'm seeing, but as far as I
can tell the AC and DC LO signals are all fine.  (This system has a Setasi
PEP70/Hypercache installed, so there's no separate memory chassis to worry
about.)  I'm going to go back and re-check everything, but I was curious if
anyone knows whether loss of AC or DC would prevent the processor from
executing microcode -- from everything I understand it should cause a trap,
and I don't see anything in the docs about inhibiting microcode execution.
But perhaps if this happens at power-up things behave differently?  And the
fact that the troubleshooting flowchart calls out these exact symptoms
would seem to indicate that this is expected.  But I'm curious why the KM11
can step the processor, in this case.

I'm going to wait until the new fans arrive (hopefully tomorrow or tuesday)
before I poke at this again, just looking for advice here on the off chance
anyone's seen this behavior before.

Thanks as always!
- Josh


--
TTFN - Guy



Re: APL\360

2021-01-30 Thread Guy Sotomayor via cctalk



On 1/30/21 9:52 AM, Chuck Guzis via cctalk wrote:

On 1/29/21 10:03 PM, Guy Sotomayor via cctalk wrote:


And unfortunately in some industries it is prohibited.  Those industries
*require* conformance to MISRA, CERT-C, ISO-26262 and others.  There is
*no* choice since the code has to be audited and compliance is *not*
optional.

Just an illustration of what happens when you take a "portable
alternative to assembly" and put lipstick on it.   I've been programming
C since System III Unix and I still consider it to be a portable (sort
of) alternative to assembly.

One of the problems with C, in my view, is a lack of direction.  There
are plenty of languages that aim for specific ends.  (e.g. COBOL =
business/commercial, FORTRAN = scientific, Java = web applications,
etc.).   But whence C or C++?

In my dotage, I do a fair amount of MCU programming nowadays, and C is
the lingua franca in that world; the only real alternative is assembly,
so that makes some sense.  Python, Ada, etc. never really managed to
make much headway there.  C is far more prevalent than C++ in that
world, FWIW.

Does standard C have vector extensions yet?  I was an alternate rep for
my firm for F90 (was supposed to be F88) for vector extensions; it's
just a matter of curiosity.


I've been writing in C since 1977 (Unix V6 days and went through the =+ 
to += conversion in V7).  I've seen *a lot* of changes in C over that time.


Most of what I do is low level stuff (OS, RTOS, etc) and I actually 
*rarely* even use the C library (most of what I build is built with 
-nostdlib).


I typically build using -std=c99 but I'm looking at C11 because of the 
atomics that were introduced then, though I have to see what the 
compiler generates natively versus what it relies on library calls for.  
I haven't yet seen what's in C17.  I've also been known to write a 
special hand crafted function so that an entire portion of the C library 
doesn't get pulled in.  Not only did it save a bunch of space but it was 
*much* faster too.
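
As an illustration (a minimal sketch of the C11 atomics I mean; whether
the fetch-add below becomes a native instruction or a call into
libatomic depends entirely on the target, which is exactly what I need
to check when building with -nostdlib):

#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
    /* ATOMIC_VAR_INIT is the C11 spelling; later standards allow a
       plain initializer. */
    atomic_int refs = ATOMIC_VAR_INIT(0);

    /* On most 32/64-bit targets this compiles to a single locked add;
       on small cores it may become an out-of-line library call. */
    atomic_fetch_add(&refs, 1);

    printf("refs = %d\n", atomic_load(&refs));
    return 0;
}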



TTFN - Guy




Re: APL\360

2021-01-29 Thread Guy Sotomayor via cctalk



On 1/29/21 4:32 PM, Fred Cisin via cctalk wrote:

if ( !(myfile = fopen( filename, "r")))


On Fri, 29 Jan 2021, Guy Sotomayor via cctalk wrote:
In a lot of industry standard coding practices (MISRA, CERT-C) that 
type of statement is prohibited and *will* result in an error being 
reported by the checker/scanner.
The if statement in your example has at least 2 errors from MISRA's 
perspective:

* assignment within a conditional statement
* the conditional not being a boolean type (that is, you can't assume 0
  is false and non-0 is true...you actually need to compare, in this
  case against NULL)


That particular structure has become an industry standard.
MOST dialects of C return a NULL pointer on fopen error.
Similarly the code in strcpy has an assignment and is using the 
numeric values of each character as if it were boolean, with the 
terminating NULL ending the while condition.



And unfortunately in some industries it is prohibited.  Those industries 
*require* conformance to MISRA, CERT-C, ISO-26262 and others.  There is 
*no* choice since the code has to be audited and compliance is *not* 
optional.



--
TTFN - Guy



Re: APL\360

2021-01-29 Thread Guy Sotomayor via cctalk
In a lot of industry standard coding practices (MISRA, CERT-C) that type 
of statement is prohibited and *will* result in an error being reported 
by the checker/scanner.


The if statement in your example has at least 2 errors from MISRA's 
perspective (a compliant rewrite is sketched after the list):


 * assignment within a conditional statement
 * the conditional not being a boolean type (that is, you can't assume 0
   is false and non-0 is true...you actually need to compare, in this
   case against NULL)
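
Something like this keeps the checker happy (a sketch; note MISRA also
wants the return value of fprintf handled, hence the (void) cast):

#include <stdio.h>

FILE *open_or_complain(const char *filename)
{
    FILE *myfile;

    myfile = fopen(filename, "r");   /* assignment on its own line */

    if (myfile == NULL)              /* explicit comparison, no !ptr */
    {
        (void)fprintf(stderr, "Couldn't open %s\n", filename);
    }

    return myfile;
}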


On 1/29/21 3:59 PM, Fred Cisin via cctalk wrote:

On Fri, 29 Jan 2021, Chuck Guzis via cctalk wrote:


In the past (and occasionally today, I use the following construct:

FILE *myfile;

if ( !(myfile = fopen( filename, "r")))
{
 fprintf( stderr, "Couldn't open %s - exiting\n", filename);
 exit (1);
}

Yes, it only saves a line, but neatly describes what's being done.

--Chuck


Yes.
That is another excellent example of where you DO want to do an 
assignment AND a comparison (to zero).  A better example than my 
strcpy one, although yours does not need to save that extra line, but 
a string copy can't afford to be slowed down even a little.


That is why it MUST be a WARNING, not an ERROR.
Of course, the error is when that wasn't what you intended to do.


--
TTFN - Guy



Re: APL\360

2021-01-29 Thread Guy Sotomayor via cctalk



On 1/29/21 12:21 PM, ben via cctalk wrote:

On 1/29/2021 12:59 PM, Fred Cisin via cctalk wrote:



Without OTHER changes in parsing arithmetic expressions, that may or 
may not be warranted, just replacing the '=' being used for 
assignment with an arrow ELIMINATED that particular confusion.  Well, 
mostly.  You can't use a right pointing arrow to fix 3 = X




Blame K with C with the '=' and '==' mess because assignment is an 
operation.  I never hear that C or PASCAL have problems.


We complained bitterly about this in the early days (Unix v6 days).  
They at least listened and fixed the = operators (e.g. =+, =-) because 
of ambiguity but refused to change assignment.  I find it annoying that 
a typo of forgetting an '=' in a comparison can result in a hard-to-find 
bug.
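
Both problems in one small example (mine, for anyone who hasn't been
bitten):

#include <stdio.h>

int main(void)
{
    int x = 5;

    /* The typo: '=' where '==' was meant.  This assigns 0 to x, the
       condition is then false, and the branch silently never runs --
       a perfectly legal program with a hard-to-find bug. */
    if (x = 0)
        printf("never reached\n");

    /* The V6-era ambiguity: compound assignment was spelled =+ and =-,
       so "x=-1" could mean "assign -1" or "subtract 1".  V7's += and
       -= spellings removed the ambiguity. */
    x -= 1;

    printf("x = %d\n", x);   /* -1: x was clobbered by the typo above */
    return 0;
}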



TTFN - Guy



Re: DEC backplane power connectors

2021-01-27 Thread Guy Sotomayor via cctalk

Could you post the part numbers?

Thanks.

TTFN - Guy

On 1/27/21 7:19 AM, Tom Uban via cctalk wrote:

Thanks much. I think I found the mating plugs I need on the te.com site and 
digikey has them.

--tom

On 1/27/21 2:05 AM, Mattis Lind wrote:


On Wed, Jan 27, 2021 at 06:59, Tom Uban via cctalk  wrote:

 Are the power connectors on the DEC PDP-11 backplanes (e.g. DD11-DF 15pin 
and 6pin) Molex or
 other?
 Are they still commonly available?


They are called Commercial Mate-n-lok.  The company is called TE 
Connectivity nowadays.

Later on DEC used Universal Mate-n-lok. For example in the VAX-11/750.

/Mattis


 --tnx
 --tom


--
TTFN - Guy



Re: Keyboard storage

2020-12-21 Thread Guy Sotomayor via cctalk
No worries.  I use Uline for all sorts of stuff and they generally
deliver within 2 days (even out here in the boonies). I always find a
use for any extras.  ;-)

I generally avoid USPS partly because they don't deliver to our house,
so we have a P.O. Box (which means I have to talk to the shipper to determine 
what method they use for shipping so I can give them the right address).  
Frankly, I don't understand it, because UPS and FedEx deliver right to our door 
(although sometimes it's fun to figure out *which* door they left the package 
at).

TTFN - Guy

On Mon, 2020-12-21 at 23:05 -0800, Alan Perry wrote:
> Thanks. I had seen that one before, but didn't know what to do with
> the 
> extra 15 boxes.
> 
> The USPS box has the advantages of being free and being a box I am
> more 
> likely to use to ship something with (because of the flat rate price
> and 
> not having to deal with weighing the box).
> 
> alan
> 
> On 12/21/20 10:59 PM, Guy Sotomayor wrote:
> > Try ULine (uline.com).  They have a keyboard shipping box (p/n S-
> > 6496).
> >   They're only $2.70/ea but the minimum order is 25.  :-(
> > 
> > TTFN - Guy
> > 
> > On Mon, 2020-12-21 at 22:17 -0800, Alan Perry via cctalk wrote:
> > > I have a bunch of Sun keyboards that I need to store more
> > > efficiently
> > > and don't want to risk damaging by stacking on top of each other.
> > > They
> > > are Type 4s, 5s, and 6s (without the wrist rest), maybe 10 in
> > > total.
> > > Anyone here know of a box or boxes that would work well for this?
> > > 
> > > alan



Re: Keyboard storage

2020-12-21 Thread Guy Sotomayor via cctalk
Try ULine (uline.com).  They have a keyboard shipping box (p/n S-6496). 
 They're only $2.70/ea but the minimum order is 25.  :-(

TTFN - Guy

On Mon, 2020-12-21 at 22:17 -0800, Alan Perry via cctalk wrote:
> I have a bunch of Sun keyboards that I need to store more
> efficiently 
> and don't want to risk damaging by stacking on top of each other.
> They 
> are Type 4s, 5s, and 6s (without the wrist rest), maybe 10 in total. 
> Anyone here know of a box or boxes that would work well for this?
> 
> alan



Re: Strange magtape anecdote

2020-10-27 Thread Guy Sotomayor via cctalk
We had a similar problem when I was at IBM and we were developing a
follow on to the PC/AT (it never shipped).  We had a bunch of
prototypes in the lab running tests with stepper HDDs (rather than
voice coils)  We kept having disk errors (failure to find track 0) when
running tests.

It took us a while to figure several things out:
1) all of the machines were run with their covers off
2) all of the machines that failed were by the windows
3) the failures always happened at a particular time of day

After a bit of head scratching we discovered that the track 0 sensor was
optical and at that particular time of day the sun at a particular
angle did not allow the sensor to register that the drive was at track
0.

TTFN - Guy

 
On Tue, 2020-10-27 at 06:50 +0100, nico de jong via cctalk wrote:
> Hi all,
> 
> Back in the early 70's I was an operator on an IBM 360/40 with 4 
> tapedrives. Nobody could understand that sometimes a tape transfer
> would 
> stop saying "end of tape", mainly around 3 PM, when not called for.
> It 
> was mainly one specific drive, but its two neighbours, one on each
> side, 
> could also behave like this. Tape drive specialists visitied us, 
> scratched their heads, and went off again. When the blinds were
> rolled 
> down, the error disappeared. The reason for the strange behaviour
> was 
> that the sun could shine into the machine room when it was in a
> specific 
> position, so it could send some light into the drive, where the tape 
> then reflected the light into the sensor, making it believe that it
> had 
> met the end-of-tape marker.
> 
> /Nico OZ 1 BMC
> 
> On 2020-10-26 17:01, Al Kossow via cctalk wrote:
> > 
> > http://mnembler.com/computers_mini_stories.html
> > 
> > "George Dragner always wore a belt with a metal dragon buckle.  He
> > was 
> > a colorful character known for pissing off management.  His most 
> > famous act was tossing a chair through the window at a customer
> > site. 
> > The customer refused to believe that the lack of humidity in the
> > room 
> > was screwing up his magnetic tape media.  As the tape heads depend
> > on 
> > the moisture from the air to prevent the magnetic oxide from being 
> > torn off the media from the friction during a rewind. George broke
> > the 
> > window to prove his point.  He was right ! "
> > 
> > There is a minimum RH specified for tape, but "tape heads depend
> > on 
> > the moisture from the air"  ??
> > 



Re: Next project: 11/24. Does it need memory?

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 20:20 -0400, Chris Zach via cctalk wrote:
> 
> won't work. Maybe I'll just drag out the 11/05 and get that working 
> first, it's got a nice front panel that doesn't lock up often :-)
> 
> 
The 11/05 was the first 11 that I repaired and got working.  You should
note that the 11/05's front panel is driven by the uCode of the CPU. 
Its connection to the CPU is through a "serial" protocol (it's been
too long...I think it's just a big shift register) to keep the pin
count (e.g. cost) low.

TTFN - Guy




Re: RL02 Disk and maybe pdp11 something at auction.

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 17:45 -0400, Noel Chiappa via cctalk wrote:
> > From: Guy Sotomayor ggs at shiresoft.com 
> 
> > It looks like it's 11/84 from the badge on the front.
> 
> In a 10-1/2" box. Seen them in the docs (forget the model number),
> never seen
> a real one.

I had a number of 11/84s in the 10-1/2" box and in the 21" box.  Got
rid of them all in the last move (along with 3(!) 11/78x VAXen...I was
a bit surprised because I thought I had only 2).

TTFN - Guy



Re: RL02 Disk and maybe pdp11 something at auction.

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 13:12 -0700, Wayne Sudol via cctalk wrote:
> I spotted this for an auction from the  FORMER OYSTER CREEK NUCLEAR
> GENERATING STATION. 
> Looks like a pair of RL02 with a pdp something in the middle. I can't
> make out what model it is from the photo.
> Anyone know?
> 
> 
> 
https://www.bidspotter.com/en-us/auction-catalogues/bscunited/catalogue-id-united4-10061/lot-9f3350e0-a11b-493d-868b-ac43015bce6d
> 

It looks like it's 11/84 from the badge on the front.

TTFN - Guy



Re: 11/84 print set

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 11:22 -0700, Fred Cisin via cctalk wrote:
> On Mon, 19 Oct 2020, Al Kossow via cctalk wrote:
> > yes, I went ahead and got it even though I can't afford to
> > paypal is my normal aek@bitsavers adr
> 
> Done $50
> 
> 
Me too.

TTFN - Guy




Re: Tutor needed for college student

2020-10-12 Thread Guy Sotomayor via cctalk
I agree with the others: go look for other textbooks.  There are also
surprisingly good "webinar's" on various math related topics on YouTube
(free), so it might be worthwhile to have him do a bit of searching.

Oddly, I never had any discrete math courses in school...it was "old
school EE" so everything was differential equations and stochastic
processes.

I did end up teaching myself about finite fields (Galois Fields to be
specific) when I needed to do some work with error correcting codes.  I
ended up with 3 or 4 different textbooks on the topic.  I have since
gone back to refresh myself about them and found several good video
courses on YouTube.

TTFN - Guy



Re: 9 track tapes and block sizes

2020-10-03 Thread Guy Sotomayor via cctalk
On Sat, 2020-10-03 at 08:33 -0700, Chuck Guzis via cctalk wrote:
> 
> In particular, consider a government project where several hundred
> million 1970s dollars were spent by the government, yet almost
> nothing other than a few papers survives.  Those involved with
> intimate
> knowledge are inexorably dying off as the community ages out.  The
> lessons of "what did we learn from all of this?" will be gone
> forever.
> 
> Sometimes it seems that we spend as many resources in forgetting as
> we
> spend trying to remember.
> 

I couldn't agree more...and it's not just governments (at all levels)
but companies as well.

In the mid-90's I worked on the IBM Microkernel project (was one of the
original 6 people who started it).  It eventually grew to 100's of
people and morphed into Workplace OS.

I still have some of the printed documentation from that project but
have long since lost a set of CDs that contained not only the PDFs for
those documents (the source was in FrameMaker) but also all of the IBM
microkernel source *and* build environment and tools.

And that was only a part of the project...there were all of the
personality neutral software as well as the various OS personalities
(including AIX and OS/2).  I seriously doubt if any of that survived in
any form because of the way that the project was shutdown.

The last estimate of the cost to IBM of the project was over
$2,000,000,000 (in 1995 dollars).  To my knowledge not much survived. 
What a waste.

TTFN - Guy



Re: Small C ver 1.00 source?

2020-07-14 Thread Guy Sotomayor via cctalk
Yes, I spent a good amount of my time at CMU in the late 70's re-
writing the TOPS-10 version of that compiler with a new P-Code
definition so that the target code could be run efficiently on small
machines.  I did the original work to target the PDP-11s on C.MMP.

I still have the compiler source, documentation I wrote and all of the
test cases.  Unfortunately I no longer have the PDP11 P-Code
interpreter that I wrote (all in PDP-11 assembler and BLISS-11).  :-( 
However, I *think* I still have the interpreter I wrote in Pascal that
I used for testing the compiler changes and code generation.

TTFN - Guy

On Tue, 2020-07-14 at 12:19 -0600, Eric Smith via cctalk wrote:
> On Tue, Jul 14, 2020 at 10:42 AM Chuck Guzis via cctalk <
> cctalk@classiccmp.org> wrote:
> 
> > The term "p-code" comes from the 1973 Pascal-P version of UCSD
> > Pascal.
> > 
> 
> "p-code" does come from Pascal-P, but Pascal-P wasn't a version of
> UCSD
> Pascal. Pascal-P was developed on the CDC 6600 in 1972.
> 
> UCSD Pascal didn't come about until 1977, so the term p-code predates
> UCSD
> Pascal by five years.



Re: Fixing an RK8E ....

2020-06-19 Thread Guy Sotomayor via cctalk
On Fri, 2020-06-19 at 12:24 -0700, Robert Armstrong via cctech wrote:
>   It appears that my RK8E has a problem - it fails the diskless
> control test
> with
> 
>   .R DHRKAE.DG
>   SR= 
> 
>   COMMAND REGISTER ERROR
>   PC:1160 GD: CM:0001 
>   DHRKAE  FAILED   PC:6726  AC:  MQ:  FL:
>   WAITING
> 
> Ok, maybe a bad bit in the command register so I'll check it
> out.  But then
> it dawns on me - how do you work on this thing?  It's three boards
> connected
> with "over the top" connectors - you can't use a module extender on
> it.
> Worse, the M7105 Major Registers board is the middle one of the
> stack!   Is
> there some secret to working on this thing?  Has anybody fixed
> one?  Any
> suggestions?
> 
>   I hadn't thought about it before, but the KK8E CPU would have the
> same
> problem.  Fingers crossed that one never dies...
> 

I seem to recall that there were some "special" (read unobtanium) over
the top connectors that permitted one of the boards in a board set to
be up on an extender.

TTFN - Guy




Re: Living Computer Museum

2020-05-27 Thread Guy Sotomayor via cctalk
On Wed, 2020-05-27 at 14:57 -0700, geneb wrote:
> On Wed, 27 May 2020, Guy Sotomayor via cctalk wrote:
> 
> > I just received an email from the Living Computer Museum that they
> > were
> > suspending operations.  It wasn't clear from the email what that
> > actually means.
> > 
> 
> They've been closed to visitors since early March I think.

That I knew.  It's just that the email that was sent sounded pretty
ominous.

TTFN - Guy



Living Computer Museum

2020-05-27 Thread Guy Sotomayor via cctalk
I just received an email from the Living Computer Museum that they were
suspending operations.  It wasn't clear from the email what that
actually means.

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 14:13 -0700, Fred Cisin via cctalk wrote:
> > I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in 
> > its entirety (there's even a patent that covers some of its
> operation). 
> > I was in an AdTech (Advanced Technology) group at the time and 
> > was looking at how to make disk operations faster in DOS at the
> time 
> > when I came up with the idea. There was a *huge* battle within IBM
> on if 
> > it should be released and in order to do so, it was fairly well 
> > hidden.
> 
> I think that I recall a mention of REFERENCE disk of PS/2?
> (NOT model 25 or 30, which didn't have extended memory)
> 
> 
> Can IBMCACHE co-exist with HIMEM.SYS?
> Or require it?
> Or the A20 support needed by Windows 3.10?
> When SMARTDRV was activated, did it disable IBMCACHE? or conflict
> with it?
> 

No, IBMCACHE was standalone.  As I recall (I wish I'd kept a copy of
the source), you could tell it how much memory > 1MB to use, and the
starting address (I think there was also a mode that allowed you to use
memory < 1MB as well).  That was done to allow for co-existence with
HIMEM.SYS.

When the write back cache was enabled (it would always allow write-
thru), in addition to intercepting INT 13 (and timer) it would also
intercept INT 21 so that if you did a "close" it would immediately
flush out the dirty buffers.

One of the differences between IBMCACHE and SMARTDRV as I
recall (I really didn't spend too much time thinking about SMARTDRV)
was that IBMCACHE was block based versus SMARTDRV being track based. 
It allowed for much better caching (from my own analysis when I was
developing it).  It also allowed for caching blocks that had bad
sectors (which was one of the patents for IBMCACHE).

When IBMCACHE did a write out of dirty blocks they were always in
sorted order (the list of dirty blocks was kept in sorted order).  I
recall playing around with dual elevator algorithms (it knew where the
last read/write was) so it could do the writes that required the
fewest/shortest seeks.  It turned out not to be a huge win (for DOS)
versus the complexity, so I never released that.

I even had a version that cached floppies (but would *never* enable the
write-back cache for devices that it thought were removable).  If it
detected a disk change it would flush the cache for that drive.  
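
From memory, the write-back side worked along these lines (a sketch in
C rather than the original 8086 assembler; the names and structure are
mine):

#include <stddef.h>
#include <time.h>

#define FLUSH_AGE 30          /* seconds a block may stay dirty */

struct block {
    unsigned long  lba;       /* block number; list kept sorted by this */
    time_t         dirtied;   /* when the block was first written */
    int            dirty;
    struct block  *next;
};

/* Called from the timer hook: write out, in ascending block order,
   everything that has been dirty longer than FLUSH_AGE.  Writing in
   sorted order is what kept the seeks short on a real disk. */
void flush_aged(struct block *dirty_list, time_t now,
                void (*write_block)(const struct block *))
{
    struct block *b;

    for (b = dirty_list; b != NULL; b = b->next) {
        if (b->dirty && now - b->dirtied >= FLUSH_AGE) {
            write_block(b);
            b->dirty = 0;
        }
    }
}

On a close (via the INT 21 hook) the same walk ran with the age test
dropped, so everything dirty went out immediately.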

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 13:21 -0700, Ali wrote:
> 
> >I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in its
> >entirety (there's even a patent that covers some of its operation).
> I
> >was in an AdTech (Advanced Technology) group at the time and was
> >looking at how to make disk operations faster in >DOS at the time
> when I
> >came up with the idea.
> 
> >There was a *huge* battle within IBM on if it >should be released
> and in
> >order to do so, it was fairly well hidden.
> 
> 
> Guy,
> 
> It is so well hidden I don't think I have ever seen it. Was it part
> of pc-dos? If so what version?

No, it came on one of the diskettes supplied with PS/2 systems though
it would work on any system.  That is, it didn't do anything to detect
that it was running on a PS/2 system.  There was a lot of discussion to
have the "core" of IBMCACHE actually in BIOS and a tiny .SYS file to
allocate the memory above 1MB.

Most interest in it faded when Microsoft started shipping smartdrv.sys
which IMHO was not as good as IBMCACHE, but smartdrv.sys came with DOS.

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 20:28 +0200, Liam Proven via cctalk wrote:
> On Mon, 25 May 2020 at 20:22, Guy Sotomayor 
> wrote:
> > 
> > I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in its
> > entirety (there's even a patent that covers some of its operation).
> > I
> > was in an AdTech (Advanced Technology) group at the time and was
> > looking at how to make disk operations faster in DOS at the time
> > when I
> > came up with the idea.
> 
> Oh my word! Well I thank you for it. It helped a very great deal and
> made dozens of users of rather expensive IBM PS/2s in the Isle of Man
> very happy for a while in the late 1980s and early 1990s. :-)

You're very welcome!  I know that there were some bids that IBM
marketing needed IBMCACHE.SYS to win (millions of dollars) and it was
*still* a battle to get it released!

> 
> > There was a *huge* battle within IBM on if it should be released
> > and in
> > order to do so, it was fairly well hidden.
> 
> I can believe that! I think I read of it in a magazine and thought
> "never! I'd know!" -- so I looked and there it was.
> 
> > There was a switch on config.sys statement for IBMCACHE.SYS to turn
> > off
> > the write-back cache (e.g. writes would always go straight to
> > disk).
> > As I recall, there was a 30 second timer for the writeback cache so
> > that if a disk block was "dirty" for more than 30 seconds it would
> > get
> > flushed to disk.
> 
> Yes, both true. I think I may have used the write-through switch for
> some people, but ISTR it reduced performance a little bit. Just
> teaching people to be a bit more patient was sometimes hard -- after
> all, this was a tool that appealed to the impatient!
> I think for them it was easier to teach them to  press C-A-D and then
> wait for the RAM check before turning off.
> 
> Or hit C-A-D, let it boot all the way, then turn it off!
> 
> Great bit of work, if I may say so!

Yea, not only did I have to write it, but I had to write a series of
tests to run through billions of disk operations (and go validate the
internal state of the cache) before it could even be considered for
release.  ;-)

BTW, as a bit of copyright paranoia, if you do an ASCII dump of
IBMCACHE.SYS, you'll see my 3 initials (GGS) (or it may have been
IBM...it's been so long I can't remember).  They are actually
instructions!  It was required at the time to have code embed a text
string as actual instructions that get executed.  It took me a bit of
time to figure out (in x86 assembler) how to generate an appropriate
string.  The idea was that if someone "cloned" the program and just did
a replacement of the string, it would stop working because the string
was actually instructions.
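
If you're wondering how a text string can also be executable code (my
illustration, not a dump of the actual driver): the ASCII codes of
"GGS" happen to decode as valid one-byte 8086 instructions, so the CPU
can execute straight through the "string".

#include <stdio.h>

int main(void)
{
    /* 'G' = 0x47 = INC DI, 'G' = 0x47 = INC DI, 'S' = 0x53 = PUSH BX
       on the 8086.  Replace the string and you change the
       instructions, which is the whole point of the trick. */
    const unsigned char sig[] = { 'G', 'G', 'S' };
    size_t i;

    for (i = 0; i < sizeof sig; i++)
        printf("%c = 0x%02X\n", sig[i], sig[i]);
    return 0;
}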

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 20:00 +0200, Liam Proven via cctalk wrote:
> On Mon, 25 May 2020 at 05:30, Fred Cisin via cctalk
>  wrote:
> > 
> > 
> IBMs came with an installable driver called, I think, IBMCACHE.SYS.
> This used extended RAM (above 1MB) as a hard disk cache, without XMS
> or HIMEM.SYS or any of that. I played with it and was amazed by the
> results. I started enabling it by default on customers' machines. 

I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in its
entirety (there's even a patent that covers some of its operation). I
was in an AdTech (Advanced Technology) group at the time and was
looking at how to make disk operations faster in DOS at the time when I
came up with the idea.

There was a *huge* battle within IBM on if it should be released and in
order to do so, it was fairly well hidden.

> Most
> were happy but some had the habit of just turning off -- DOS didn't
> really have a shutdown routine. Some, I could train to press
> Ctrl-Alt-Del before turning off. Some I couldn't, so I had to disable
> the disk cache.

There was a switch on config.sys statement for IBMCACHE.SYS to turn off
the write-back cache (e.g. writes would always go straight to disk). 
As I recall, there was a 30 second timer for the writeback cache so
that if a disk block was "dirty" for more than 30 seconds it would get
flushed to disk.

> 
> But for those that could learn and adapt, it made DOS _much_ faster,
> and on a 1MB PS/2 Model 50 or 60, it was about the only thing you
> could do with the extra 386 KB of RAM before MS-DOS 5 came out.
> 

TTFN - Guy



Re: ISO: Diablo 30 heads

2020-05-14 Thread Guy Sotomayor via cctalk
I chatted with him on FB earlier in the day and he's doing fine.

TTFN - Guy

On Thu, 2020-05-14 at 19:45 +, dwight via cctalk wrote:
> I just emailed him an hour ago and he replied. I suspect he is fine.
> Dwight
> 
> 
> From: cctalk  on behalf of Al Kossow
> via cctalk 
> Sent: Thursday, May 14, 2020 9:54 AM
> To: cctalk@classiccmp.org 
> Subject: Re: ISO: Diablo 30 heads
> 
> On 5/13/20 6:28 PM, Jay Jaeger via cctalk wrote:
> 
> > Carl, feel free to contact me off list.
> 
> Has anyone heard anything from Carl?
> I'm a bit concerned since there have been no updates on his 1130 page
> for a while.
> 
> 



Re: APL-11

2020-03-30 Thread Guy Sotomayor via cctalk
I don't have an easy way to dump the ROMs at the moment.

TTFN - Guy

On Mon, 2020-03-30 at 13:49 -0600, Eric Smith wrote:
> On Mon, Mar 30, 2020 at 10:24 AM Guy Sotomayor via cctalk <
> cctalk@classiccmp.org> wrote:
> > I have a DEC Writer III with the APL character set ROM and the APL
> > keyboard!  Just need to hook it up to something that has APL on it
> > and will generate the correct character sequences.  ;-)
> 
> Cool!  When you get a chance, could you please dump the DECwriter III
> ROMs?
> 



Re: APL-11

2020-03-30 Thread Guy Sotomayor via cctalk
On Mon, 2020-03-30 at 11:07 -0400, Diane Bruce via cctalk wrote:
> On Mon, Mar 30, 2020 at 10:58:46AM -0400, Bill Gunshannon via cctalk
> wrote:
> > 
> > Haven't given up on DIBOL.  May try installing the RT-11 version
> > and
> > see if it runs.
> > 
> > But now another language of interest has reared its ugly head.  :-)
> > 
> > Anybody have an image of the tape for APL-11?  Manual claims it
> > runs on all of the PDP-11 OSes and it is another language from
> > my past that I haven't touched (other than to read some programs
> > out of curiosity) in more than two decades.
> 
> Oh neat! Be sure you have the special keyboard and character set for
> it!
> e.g. just overlays for the keyboard.

I have a DEC Writer III with the APL character set ROM and the APL
keyboard!  Just need to hook it up to something that has APL on it
and will generate the correct character sequences.  ;-)

TTFN - Guy




Re: DIBOL and RPG for RSTS

2020-03-29 Thread Guy Sotomayor via cctalk
On Sun, 2020-03-29 at 10:21 -0400, Paul Koning via cctalk wrote:
> > On Mar 28, 2020, at 2:55 PM, dwight via cctalk <
> > cctalk@classiccmp.org> wrote:
> > 
> > There are a few reasons most don't like Forth:
> > 
> >  1.   no type checking ( suppose to save dumb programmers )
> >  2.   Often, no floating point. ( Math has to be well thought out
> > but when done right in integer math it has few bugs ).
> >  3.  Few libraries ( One can often make code to attach to things
> > like C libraries but it is a pain in the A. Often if you know what
> > needs to be done it is easier and better to write your own low
> > level code. Things like USB are tough to get at the low level
> > stuff, though )
> >  4.  To many cryptic symbols ( : , . ! @ ; )
> >  5.  To much stack noise ( dup swap rot over )
> > 
> > I still use Forth for all my hobby work. It is the easiest language
> > to get something working of any of the languages I've worked with.
> > ...
> > Learning to be effective with Forth has a relatively steep learning
> > curve. You have to understand the compiler and how it deals with
> > your source code. You need to get used to proper comments to handle
> > stack usage. You need to learn how to write short easily test words
> > ( routines ). It is clearly not just a backwards LISP. It is not
> > Python either.
> > Dwight
> 
> No, it certainly isn't Python, which is my other major fast-coding
> language.
> 
> FORTH started as a small fast real-time control language; its
> inventor worked in an observatory and needed a way to control
> telescopes.  It's still used for that today.  I recently went looking
> for FORTH processors in FPGA, there are several.  One that looked
> very good was designed for robotics and machine vision work.  The
> designer posted both the FPGA design and the software, which includes
> a TCP/IP (or UDP/IP ?) stack.  He reports that the code is both much
> smaller and faster than compiled C code running on conventional FPGA
> embedded processors.
> 
Yes, that would be J1.  I've used it and even wrote a simulator for it
(in FORTH 'natch) so that I could debug my code.  It's a useful FPGA
implementation.

TTFN - Guy



Re: HPE OpenVMS Hobbyist license program is closing

2020-03-10 Thread Guy Sotomayor via cctalk
Am I forgetting something, or isn't BSD (4.3/4.4 as I recall) available 
on the VAX?  That seems more suitable for running on classic hardware 
than moving to something newer.

Of course I got rid of all of my 11/780 and 11/785 systems (along with
a smattering of VAXStations) years ago so I don't have any particular
interest here.  ;-)

TTFN - Guy

On Tue, 2020-03-10 at 16:44 +0100, Jan-Benedict Glaw via cctalk wrote:
> On Tue, 2020-03-10 09:06:57 -0600, Warner Losh via cctalk <
> cctalk@classiccmp.org> wrote:
> > On Tue, Mar 10, 2020 at 3:48 AM Peter Corlett via cctalk <
> > cctalk@classiccmp.org> wrote
> > > Linux has taken thirty years to get this far. It's arguable what
> > > is "major" but to a rough approximation, there are no good open
> > > source clones of other operating systems of similar complexity:
> > > I'm aware of FreeDOS, AROS, EmuTOS and a few others, but they're
> > > relatively simple.
> > 
> > Linux was never very good on the VAX.  It came too late in the
> > VAX's life cycle to get enough love.
> 
> I quite apologize for that!
> 
> > Linux and/or NetBSD/vax would be a good choice, though, to
> > implement the VAX's system calls and execute its binaries.  There
> > were more concerted efforts to do this years ago, but I don't know
> > what became of them.  Google shows a smattering of efforts littered
> > with broken links. :(
> 
> There was a vax-linux port started by others, and I cared for it for
> a good number of years. My life has changed a lot since then; I
> quite failed (and failed hard!) to find the time to care for Linux,
> GCC and Binutils, GNU libc and all those programs silently expecting
> IEEE floating point support.
> 
>   I still have a good number of VAXen around, though all powered off
> and in good storage. We're actually searching for a larger room to
> put all the old iron in, get the machines cabled up (power, network
> and serial) and eventually even restart hacking on them.
> 
>   Hacking VAXen was a great thing to do! ...at least for me. I
> learned so much from doing so, about Linux, libc, their interface,
> about Binutils and GCC. It really made me "fit" for paid business.
> But let's face it: I'm in my forties, have a family, and a day still
> only has 24 hours.
> 
>   So... Once getting all my hardware into usable condition is
> settled,
> I'd be quite willing to hand out serial and power access to them, for
> whatever you'd like to do. (If it's not already too late.)
> 
> MfG, JBG
> 



Re: Mach

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 15:21 -0800, Chris Hanson via cctalk wrote:
> On Jan 5, 2020, at 2:30 PM, Guy Sotomayor via cctalk <
> cctalk@classiccmp.org> wrote:
> > 
> > > It did seem for a while that a lot of things were based on Mach,
> > > but very few seemed to make it to market. NeXTstep and OSF/1, the
> > > only version of which to ship AFAIK was DEC OSF/1 AXP, later
> > > Digital UNIX, later Tru64.
> > 
> > Yes, a lot of things were based on Mach. One OS that you're
> > forgetting is OS X. That is based upon Mach 2.5.
> 
> Nope, Mac OS X 10.0 was significantly upgraded and based on Mach 4
> and BSD 4.4 content (via FreeBSD among other sources). It was NeXT
> that never got beyond Mach 2.5 and BSD 4.2. (I know, distinction
> without a difference, but this is an issue of historicity.)
> 
> I think only some of the changes from Mach 2.5→3→4 made it into Mac
> OS X Server 1.0 (aka Rhapsody) so maybe that’s what you’re
> remembering.

You're probably thinking about the user space.  I was working on the
OS X kernel from 2006-2012.  I can tell you that most of what was
still Mach-related in the kernel (most of it actually got removed;
about all that was left was Mach messaging) was 2.5-based with some
enhancements.

> 
> > > MkLinux didn't get very far, either, did it?
> > > 
> > 
> > I think that was the original Linux port for PPC.
> 
> It was the original Linux port for NuBus PowerPC Macs at least. It
> was never really intended to “get very far” in the first place; it
> was more of an experimental system that a few people at Apple threw
> together and managed to get released to the public.
> 
> MkLinux was interesting for two reasons: It documented the NuBus
> PowerMac hardware such that others could port their OSes to it, and
> it enabled some direct performance comparisons of things like running
> the kernel in a Mach task versus running it colocated with the
> microkernel (and thus turning all of its IPCs into function calls).
> Turns out running the kernel as an independent Mach task cost 10-15%
> overhead, which was significant on a system with a clock under
> 100MHz. Keep in mind too that this was in the early Linux 2.x days
> where Linux “threads” were implemented via fork()…

At IBM we spent a *significant* amount of time optimizing the
microkernel performance.  I recall that on a 90MHz 601 PPC, we got
round-trip RPC below 1 microsecond (at 90MHz, that is fewer than ~90
machine cycles for the entire round trip).

I personally spent a significant amount of time optimizing the
Pentium kernel entry/exit code and optimizing the CPU-specific
portion of Mach RPC (it actually took advantage of the x86
segmentation hardware).

> 
> I don’t recall if anyone ever did any “multi-server” experiments with
> it like those done at CMU, where the monolithic kernel was broken up
> into multiple cooperating tasks by responsibility. It would have been
> interesting to see whether the overhead stayed relatively constant,
> grew, or shrank, and how division of responsibility affected that.

The IBM microkernel project was *very* multi-server.  There were
versions of AIX and OS/2 that ran on top of the IBM microkernel (which
was a heavily modified version of Mach 3.0), where there were quite a
few OS-neutral servers (including most device drivers) that all ran in
their own server tasks.

-- 
TTFN - Guy



Re: Taligent

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 15:06 -0800, Chris Hanson via cctalk wrote:
> On Jan 5, 2020, at 12:56 AM, Jeffrey S. Worley via cctalk <
> cctalk@classiccmp.org> wrote:
> 
> > Does Taligent Pink sound familiar?  OS/2 was ported to PowerPC,
> > and so was Netware IIRC.  The field was quite busy with hopeful
> > Microsoft killers.  OS/2 was to be morphed into a cross-platform
> > OS, to wean folks from DOS/x86.  Then PPC kills the x86 and we all
> > get a decent OS.  That was the plan anyway.  I never saw OS/2 for
> > PPC or Netware for OS/2, though I know both to have shipped.
> 
> Pink was the C++ operating system project at Apple that became
> Taligent. I know a couple of people who did a developer kitchen for
> Pink pre-Taligent, and I also know a number of folks who worked on
> the Taligent system and tools—and have personally seen a demo of the
> Taligent Application Environment running on AIX.
> 
> I’ve even seen a CD set for Taligent Application Environment (TalAE)
> 1.0 on AIX, and I have a beta developer and user documentation set.
> Unfortunately my understanding is that the CD sets given to employees
> to commemorate shipping TalAE were all *blank*—the rumor I’ve heard
> is that IBM considered it too valuable to give them the actual
> software that they had worked for years on. (Maybe there were tax
> implications because of what IBM valued the license at, and the fact
> that it would have to be considered compensation?)
> 
> Taligent itself was only one component of IBM’s Workplace/OS
> strategy, which was a plan to rebase everything atop Mach so you
> could run AIX and OS/2 and Taligent all at once on the same hardware
> without quite using virtual machines for it all. The idea was that
> Apple would do pretty much the same with Copland and Taligent atop
> NuKernel rather than Mach.
> 
> It would be really great to actually get the shipping Taligent
> environment and tools archived somewhere. While only bits and pieces
> of it are still in use—for example, ICU—a lot of important and
> influential work was done as part of the project. For example, the
> design of most of the unit testing frameworks today actually comes
> from *Taligent*, since Kent Beck wrote SUnit to re-create it in
> Smalltalk, and JUnit and OCUnit were based on SUnit’s design and
> everything else derived from JUnit…

No, you don't.  The object model that they used was *seriously*
deranged.  When I last looked at it there were >1200 objects and they
were so interdependent that it was nearly impossible to make a change
to one object without the change cascading across a large number of
objects.  They were also proud of the fact that on average only 6
*instructions* would be executed between method invocations...so
performance sucked because you were just doing method calls.

Rather than having a standardized "size" method for an object, they
actually had code inspect the object's new operator (i.e. the binary
machine code) in order to determine the object's size.

As I said, I have scars from my interactions with Taligent.

-- 
TTFN - Guy



Re: cctalk Digest, Vol 64, Issue 3

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 23:41 +0100, Liam Proven via cctalk wrote:
> On Sun, 5 Jan 2020 at 23:30, Guy Sotomayor via cctalk
>  wrote:
> > 
> > Yes.  We first started with Mach 3.0 build MK58.  We did our final
> > fork at MK68.  We made some *significant* changes from what CMU
> > had (things like changing mach messages from IPC to RPC) and a
> > whole lot of work in the area of scheduling.
> 
> Very interesting. If you are allowed to, you should blog about this
> somewhere -- it is historic stuff.

Yea, unfortunately I lost most of the historical documentation, first
when we were all packed up to move from Boca Raton, FL to Austin, TX,
and then again when I left IBM in 1997.

I still have a set of the IBM Microkernel manuals (several thousand
pages, all written in FrameMaker) and I *may* still have a CD with
the final set of sources on it (but where that might be would be an
interesting question).

> 
> > Yes, a lot of things were based on Mach. One OS that you're
> > forgetting
> > is OS X. That is based upon Mach 2.5.
> 
> Well, firstly, no, I wasn't. I didn't mention OS X, or macOS as it's
> called now, because it's based on NeXTstep. It's a later version of
> the same OS.
> 
> Secondly, AIUI, NeXTstep used Mach 2.5, but one of the changes in
> Mac OS X 10.0 is that they moved to Mach 3 and updated the userland
> from BSD 4.4-Lite to then-current FreeBSD, hiring Jordan Hubbard to
> do much of that work.

No, OS X uses Mach 2.5.  I worked in the kernel group at Apple for a
number of years and am fairly familiar with the kernel.  They may have
pulled a few things from Mach 3.0, but it is still fundamentally
Mach 2.5.

> 
> > > MkLinux didn't get very far, either, did it?
> > > 
> > 
> > I think that was the original Linux port for PPC.
> 
> It was, and I think only on Apple hardware. There were a few dev
> builds and then it disappeared, IIRC.
> 
> [*Checks*]
> 
> Yup, OldWorld-ROM NuBus PowerMacs, and later OldWorld PCI PowerMacs
> -- but later Linux supported PCI Macs directly.
> 
> There were apparently 4 "developer releases", an R1 and an
> unfinished R2. Supplanted by Mac OS X, but apparently the Mach work
> really helped to get NeXTstep and "Rhapsody" bootstrapped on
> PowerMacs.
> 
-- 
TTFN - Guy



Re: cctalk Digest, Vol 64, Issue 3

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 21:54 +0100, Liam Proven via cctalk wrote:
> On Sun, 5 Jan 2020 at 19:02, Guy Sotomayor via cctalk
>  wrote:
> 
> > I had been working on the IBM Microkernel (was one of the original
> > 6 people on that team).  It was eventually to form the basis of
> > OS/2 for PPC.  The way that the microkernel project was structured
> > was that most of the "OS" was personality neutral (e.g. could be
> > used for Unix, OS/2, DOS, etc) and then there was an OS personality
> > that ran on top of the infrastructure.  OS/2 on PPC was supposed to
> > be the first to ship.
> 
> I think I read that it was based on CMU Mach -- is that right?

Yes.  We first started with Mach 3.0 build MK58.  We did our final
fork at MK68.  We made some *significant* changes from what CMU
had (things like changing mach messages from IPC to RPC) and a
whole lot of work in the area of scheduling.
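
For anyone who hasn't poked at Mach internals, the IPC-vs-RPC
distinction is easiest to see at the mach_msg() interface.  Here's a
rough C sketch of a request/reply round trip in stock CMU-style
Mach 3.0 usage -- illustrative only, not our actual IBM code, and the
message id and payload are made up:

#include <mach/mach.h>

/* One kernel trap does both the send and the blocking receive.
 * Plain IPC would be two mach_msg() calls (one to send, one to
 * receive), entering the kernel twice; the combined form lets the
 * kernel hand control straight to the server and back. */
kern_return_t rpc_round_trip(mach_port_t server, mach_port_t reply)
{
    struct {
        mach_msg_header_t head;
        int payload;                      /* toy request/reply body */
    } msg;

    msg.head.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND,
                                        MACH_MSG_TYPE_MAKE_SEND_ONCE);
    msg.head.msgh_size        = sizeof msg;
    msg.head.msgh_remote_port = server;   /* request goes here */
    msg.head.msgh_local_port  = reply;    /* reply comes back here */
    msg.head.msgh_id          = 1234;     /* made-up message id */
    msg.payload               = 42;

    return mach_msg(&msg.head,
                    MACH_SEND_MSG | MACH_RCV_MSG,  /* send, then wait */
                    sizeof msg,                    /* send size */
                    sizeof msg,                    /* receive limit */
                    reply,                         /* receive on reply port */
                    MACH_MSG_TIMEOUT_NONE,
                    MACH_PORT_NULL);
}

That's also part of why the scheduling work mattered: once the message
copy is cheap, the handoff between client and server threads is where
the time goes.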


> It did seem for a while that a lot of things were based on Mach, but
> very few seemed to make it to market. NeXTstep and OSF/1, the only
> version of which to ship AFAIK was DEC OSF/1 AXP, later Digital UNIX,
> later Tru64.

Yes, a lot of things were based on Mach. One OS that you're forgetting
is OS X. That is based upon Mach 2.5.

> MkLinux didn't get very far, either, did it?
> 

I think that was the original Linux port for PPC.



-- 
TTFN - Guy



Re: cctalk Digest, Vol 64, Issue 3

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 03:56 -0500, Jeffrey S. Worley via cctalk wrote:
> A lot of odd PPC work happened in a group a friend worked for in
> Austin TX, but not sure if they did Netware work there.  There was a
> lot of OS/2 work there as well, but that's off track a bit more.
> thanks, Jim
> 
> I was lead tech at a small computer company in Asheville, NC. in
> those days.  I ran OS/2 from version 2 in the early 90's to
> eComStation in the early 2000's.
> 
> Does Taligent Pink sound familiar?  OS/2 was ported to PowerPC, and
> so was Netware IIRC.  The field was quite busy with hopeful Microsoft
> killers.  OS/2 was to be morphed into a cross-platform OS, to wean
> folks from DOS/x86.  Then PPC kills the x86 and we all get a decent
> OS.  That was the plan anyway.  I never saw OS/2 for PPC or Netware
> for OS/2, though I know both to have shipped.
> 
> Jeff
> 

Yes, Taligent Pink is very familiar (and I still have the scars to
prove it!).  I was part of the IBM team that evaluated Pink.  We (IBM)
were mainly looking at it to see how to converge OSes between IBM and
Apple...at least in terms of the microkernel.  The Pink team was,
shall we say, "difficult to work with".

I had been working on the IBM Microkernel (I was one of the original 6
people on that team).  It was eventually to form the basis of OS/2 for
PPC.  The way that the microkernel project was structured was that
most of the "OS" was personality neutral (e.g. could be used for Unix,
OS/2, DOS, etc) and then there was an OS personality that ran on top
of the infrastructure.  OS/2 on PPC was supposed to be the first to
ship.

-- 
TTFN - Guy


Re: Ordering parts onesie twosie

2020-01-03 Thread Guy Sotomayor via cctalk
On Fri, 2020-01-03 at 09:22 -0400, Paul Berger via cctalk wrote:
> On 2020-01-03 2:51 a.m., Chuck Guzis via cctalk wrote:
> > On 2020-01-02 9:58 p.m., Nemo Nusquam via cctalk wrote:
> > > Well, Canada Post stopped delivering to individual houses years
> > > ago.
> > 
> > I assume that rural delivery still goes house-to-house.
> > 
> > --Chuck
> 
> Rural delivery is done to mail boxes along the roads, which means
> people have to travel from their house to said road to get their
> mail.  We lived on a farm for part of the time I was growing up and
> for us that was 3/4 of a mile; that was not uncommon in the area,
> and for some it was even further.  Quite different from walking a
> block, maybe, to a community box.
> 
> 
> 
Yea, our bank of mailboxes is 2.5 miles from our house.  We got into
heated arguments with the Post Office because we didn't go down and
empty our box every day.  We finally got a P.O. Box at a different
(more convenient) Post Office.  Now we have to deal with folks who
don't understand that our mailing address and physical address are
different.  :-/

It also infuriates me that *every* other shipper (UPS, FedEx) can
deliver right to our door but USPS can't be bothered.
-- 
TTFN - Guy


SMD disk specifications

2019-12-13 Thread Guy Sotomayor via cctalk
Hi,

I’ve been trying to find *detailed* specifications (mainly signal
timings) for the SMD disk interface, but all I’ve found so far are the
interface specifications for individual disks (CDC, Fujitsu, etc).
I’ve looked in the usual places (bitsavers mostly) and haven’t found
the spec itself.  If anyone has any pointers, I’d appreciate it.

Thanks.

TTFN - Guy

Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor via cctalk
Hi,

I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
identify (at least in terms of exactly what it is and what sort of 
configuration it might have).

As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
hopefully get us *some* details).  The “asset tag” lists the part number as 
2113023-108.  Looking at the back there’s space for 9 I/O cards (5 are 
occupied).

So my question is: which of the several CPUs could this be, and how do
I tell (for example) what the configuration is (e.g. how much memory,
etc)?

Yes, I have looked on bitsavers, but short of disassembling the box to
look at the (at least) 2 boards that are below the I/O slots, I can’t
tell what’s there.  I’d like to see if there’s a way to determine what
this is without resorting to disassembly.

Thanks.

TTFN - Guy

CDC Cyber 180-960 available

2019-06-28 Thread Guy Sotomayor via cctalk
Hi,

It has come to my attention that a CDC Cyber 180-960 is available.
Apparently this is from a supplier that was supporting Vandenberg AFB
(California) with spares.  Since Vandenberg is decommissioning its
Cyber systems, the supplier wants to get rid of the spare machine that
they have.

I think the supplier just wants the machine “to go away” so the price
is likely to be negligible.

Please contact me off-list if interested and I’ll get you in touch with the 
relevant folks.

TTFN - Guy

Keyboard "enthusiasts"

2018-01-23 Thread Guy Sotomayor via cctalk
…are the bane of my existence and should all rot in hell.

Sorry, I just received an email from a “keyboard enthusiast” who was
looking for various IBM 327x keyboards and wanted to know if I could
help him, and I needed to vent a little.

I sent him a polite “no way in hell” response but I’m still angry
about it.  These terminals are hard enough to find.  And more often
than not, the keyboard is missing because some “enthusiast” thought it
would be cool to convert it to a PC keyboard.  ARG!  And of course the
keyboards that they want are the “typewriter” keyboards (all of my
3278 terminals have the “data entry” keyboard).

TTFN - Guy



non-PC Floppy imaging

2018-01-05 Thread Guy Sotomayor via cctalk
Hi,

I now have a number of uCode diskettes for my IBM 4331.  I would
somehow like to image them so:
a) I have backups in case the floppies themselves go bad
b) I can investigate their contents in case I have to “merge” the
   contents of multiple floppies to make a single good one

These are all 8” diskettes.

The complicating factors in all of this are:
a) any text (e.g. strings) is going to be in EBCDIC rather than ASCII
   (see the sketch after this list)
b) each uCode diskette was presumably serialized to the CPU it was for
c) not sure what the “on-disk” structure looks like
d) the only 8” diskette drives that I have are in IBM (non-PC) equipment
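
On (a), making the EBCDIC strings readable is mostly a matter of a
256-entry translation table.  A quick C sketch -- only the common
letters, digits and space are filled in here; everything else prints
as '.', and real code pages vary in the punctuation:

#include <stdio.h>

static unsigned char e2a[256];

static void init_table(void)
{
    const char *ai = "ABCDEFGHI";      /* EBCDIC 0xC1..0xC9 */
    const char *jr = "JKLMNOPQR";      /* EBCDIC 0xD1..0xD9 */
    const char *sz = "STUVWXYZ";       /* EBCDIC 0xE2..0xE9 */
    int i;

    for (i = 0; i < 256; i++) e2a[i] = '.';    /* unknown -> '.' */
    e2a[0x40] = ' ';                           /* EBCDIC space */
    for (i = 0; i < 9; i++)  e2a[0xC1 + i] = ai[i];
    for (i = 0; i < 9; i++)  e2a[0xD1 + i] = jr[i];
    for (i = 0; i < 8; i++)  e2a[0xE2 + i] = sz[i];
    for (i = 0; i < 10; i++) e2a[0xF0 + i] = '0' + i;
}

int main(void)
{
    int c;

    init_table();
    while ((c = getchar()) != EOF)     /* e.g. ebcdic2ascii < image.bin */
        putchar(e2a[c]);
    return 0;
}

Piping a raw image through something like this should make any EBCDIC
text jump out even before the on-disk structure is understood.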

Any ideas/comments would be welcome.

Thanks.

TTFN - Guy



Lisa Source Code

2017-12-27 Thread Guy Sotomayor via cctalk
Hi,

I don’t know if I missed the announcement on this list but I just saw this 
article:
https://9to5mac.com/2017/12/27/apple-lisa-source-code-to-be-released/

It features quotes from our own Al Kossow.  ;-)  Way to go Al!!!

TTFN - Guy