Hello, I'm looking into simulating a disk controller used with the MIT AI lab PDP-10: a Systems Concepts DC-10. It looks like it's a custom-made channel interfacing to IBM disks. (It has been claimed those are 2311 or 2314, but the geometry doesn't match.)
Formatting a disk pack uses a WRITE IMAGE command, which I believe writes raw bits directly onto the disk surface. I'd like to decode these to the data bits as seen by a normal READ or READ HEADER command. Does anyone recognize the kind of encoding used? For every track, it goes like this:

1. Wait for the physical start of the track.

2. Write 125460 one bits to cover the entire track. These will be overwritten by the following data.

3. Write a 108-bit header preamble:
   111111111111111111111111111111111111111111111111111111
   111111111111101010110101101011010110101101011010110101
   The preamble signals a header with a zero, and then eight 10101 words.

4. Write a 108-bit header, converted to the on-disk encoding.

5. Write a 252-bit postamble:
   111111111111111111111111111111111111111111111111111111
   111111111111111111111111111111111111111111111111111111
   111111111111111111111111111111111111111111111111111111
   111111111111111111111111111111111111111111111111111111
   111111111111111111111111111111111101

6. Write 55620 bits of alternating ones and zeroes. This is the data and checksum; a normal READ command returns these as 37080 zero bits. Note that the ratio between raw bits and data bits is exactly 1.5.

7. Write N one bits for the sector gap. N is computed from the length of the track minus the data fields, divided by the number of sectors.

8. Write the next sector, going back to step 3.

Now, in particular, I'm curious: what encoding transforms a zero data bit into, on average, 1.5 raw bits of "10" (or "01")?

_______________________________________________
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh
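As a postscript, here is a minimal sketch that sanity-checks the arithmetic quoted above. All constants come from the post itself; the 36-bit word count and the `sector_gap_bits` helper are my own additions, assuming the data field is word-aligned and that step 7's description is taken literally (the number of sectors per track is not given).

```python
# Figures quoted in the post.
RAW_TRACK_BITS = 125460  # one bits written to cover the whole track (step 2)
RAW_DATA_BITS = 55620    # alternating-bit data + checksum field (step 6)
DATA_BITS = 37080        # zero bits seen by a normal READ (step 6)

# The raw-to-data ratio is exactly 1.5, as observed in the post.
assert RAW_DATA_BITS / DATA_BITS == 1.5

# Side observation (mine, not from the post): 37080 data bits is exactly
# 1030 PDP-10 words of 36 bits each.
assert DATA_BITS % 36 == 0
print(DATA_BITS // 36)  # -> 1030

# Step 7 as described: gap = (track length minus the data fields),
# divided by the number of sectors. n_sectors is hypothetical here.
def sector_gap_bits(track_bits, field_bits_per_sector, n_sectors):
    return (track_bits - n_sectors * field_bits_per_sector) // n_sectors
```

Nothing here settles the encoding question, of course; it just confirms the 1.5:1 expansion is exact rather than an average over an odd-sized field.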