Re: Scanning docs for bitsavers

2019-12-02 Thread Grant Taylor via cctalk

On 12/2/19 9:06 PM, Grant Taylor via cctalk wrote:
In my opinion, PDFs are the last place that computer usable data goes. 
Because getting anything out of a PDF as a data source is next to 
impossible.


Sure, you, a human, can read it and consume the data.

Try importing a simple table from a PDF and working with the data in 
something like a spreadsheet.  You can't do it.  The raw data is there. 
 But you can't readily use it.


This is why I say that a PDF is the end of the line for data.

I view it as effectively impossible to take data out of a PDF and do 
anything with it without first reconstituting it.


I'll add this:

PDF is a decent page layout format.  But trying to view the contents in 
any different layout is problematic (at best).


Trying to use the result of a page layout as a data source is ... 
problematic.




--
Grant. . . .
unix || die







Re: Scanning docs for bitsavers

2019-12-02 Thread Grant Taylor via cctalk

On 12/2/19 8:20 PM, Alexandre Souza via cctalk wrote:

I cannot understand your problems with PDF files.


My problem with PDFs starts where most people stop using them.

Take the average PDF of text, try to copy and paste the text into a text 
file.  (That may work.)


Now try to edit a piece of the text, such as taking part of a line out, 
or adding to a line.  (You can probably do that too.)


Now fix the line wrapping to get the margins back to where they should 
be.  (This will likely be a nightmare without a good text editor to 
reflow the text.)


All of the text I get out of PDFs is (at best) discrete lines that are 
unassociated with other lines.  They just happen to be next to each other.


Conversely, if I copy text off of a web page or out of many programs, I 
can paste into an editor, make my desired changes, and the line 
re-wrapping is already done for me.  This works for non-PDF sources 
because it's a continuous line of text that can be re-wrapped and re-used.


In my opinion, PDFs are the last place that computer usable data goes. 
Because getting anything out of a PDF as a data source is next to 
impossible.


Sure, you, a human, can read it and consume the data.

Try importing a simple table from a PDF and working with the data in 
something like a spreadsheet.  You can't do it.  The raw data is there. 
  But you can't readily use it.
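
As a concrete illustration (a minimal sketch, assuming the third-party 
pdfplumber library is installed; the file name and layout are 
hypothetical, not taken from any particular document):

# Try to lift a table out of a PDF page with pdfplumber.
# On many real-world PDFs this returns None or misaligned rows --
# the raw data is there, but not in a readily usable form.
import pdfplumber

with pdfplumber.open("scanned_manual.pdf") as pdf:
    table = pdf.pages[0].extract_table()
    if table is None:
        print("No table structure detected; only positioned text runs.")
    else:
        for row in table:
            print(", ".join(cell or "" for cell in row))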


This is why I say that a PDF is the end of the line for data.

I view it as effectively impossible to take data out of a PDF and do 
anything with it without first reconstituting it.



I've created lots and lots of PDFs, with treated and untreated scanned
material. All of them are very readable and in use for years.


Sure, you, a human, can quite easily read it.  But you are not 
processing the data the way that I'm talking about.



Of course, garbage in, garbage out.


I'm not talking about GIGO.

I take the utmost care in my scans to have good enough source files, 
so I can create great PDFs.


Of course, Guy's comments are very informative and I'll learn more from them.
But I still believe in good preservation using PDF files. FOR ME it is the
best we have for encapsulating info. Forget HTML.


I find HTML to be IMMENSELY easier to extract data from.
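
For what it's worth, a minimal sketch of why (assuming the third-party 
pandas library plus an HTML parser such as lxml is installed; the file 
name is hypothetical):

# HTML tables come across with their row/column structure intact.
import pandas as pd

tables = pd.read_html("datasheet.html")        # one DataFrame per <table>
print(tables[0])                               # rows and columns preserved
tables[0].to_csv("datasheet.csv", index=False) # straight into a spreadsheet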


Please, take a look at this PDF, and tell me: Isn't that good enough for
preservation/use?


It's good enough for humans to use.

But it suffers from the same problem that I'm describing.

Try copying the text and pasting it into a wider or narrower document. 
What happens to the line wrapping or margins?  Based on my experience, 
they are crap.


With HTML, I can copy content and paste it into a wider or narrower 
window without any problem.


Data is originated somewhere.  Something is done to it.  It's 
manipulated, reformatted, processed, displayed and / or printed, and 
ultimately consumed.  In my experience, PDF files are the end of that 
chain.  There is no good way to get text out of a PDF.


Take (part of) the first paragraph of your sample PDF:  What's easier to 
re-use in a new document:


This (direct copy and paste):
--8<--
Os transceptores Control modelo TAC-45 (versão de 10 a 45 Watts) e 
TAC-70 (versão de 10 a
70 Watts) foram um marco na radiocomunicação comercial brasileira. 
Lançados em 1983,
consistiam num transceptor dividido em dois blocos: o corpo do rádio e 
um cabeçote de
comando, onde ficavam os comandos de volume, squelch e o seletor de 4 
canais.

-->8--

Or this:
--8<--
Os transceptores Control modelo TAC-45 (versão de 10 a 45 Watts) e 
TAC-70 (versão de 10 a 70 Watts) foram um marco na radiocomunicação 
comercial brasileira. Lançados em 1983, consistiam num transceptor 
dividido em dois blocos: o corpo do rádio e um cabeçote de comando, onde 
ficavam os comandos de volume, squelch e o seletor de 4 canais.

-->8--

With format=flowed, the second copy will re-flow to any window width.  I 
can also triple click to select the entire paragraph, something I can't 
do with the first copy.  Heck, I can't even reliably do anything with a 
sentence in the first copy.  It's all broken lines.  The second copy is 
a continuous string that makes up (part of) the paragraph.
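
A minimal illustration of the difference, using Python's standard 
textwrap module and a shortened piece of the sample text:

import textwrap

# Copied from the PDF: discrete lines with hard newlines baked in.
pdf_copy = ("Os transceptores Control modelo TAC-45 (versão de 10 a 45 Watts) e\n"
            "TAC-70 (versão de 10 a 70 Watts) foram um marco na\n"
            "radiocomunicação comercial brasileira.")

# Copied from flowed text: one continuous string.
flowed_copy = ("Os transceptores Control modelo TAC-45 (versão de 10 a 45 Watts) e "
               "TAC-70 (versão de 10 a 70 Watts) foram um marco na "
               "radiocomunicação comercial brasileira.")

# The flowed copy re-wraps cleanly to whatever width you want:
print(textwrap.fill(flowed_copy, width=50))

# The PDF copy has to be reconstituted first (join the broken lines),
# and that only works when the breaks really were soft wraps:
print(textwrap.fill(" ".join(pdf_copy.splitlines()), width=50))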


Which format would you like to work with if you need to extract text 
from a file and use it in something else?  Something where you have to 
repair the damage introduced by the file format?  Or something that 
preserves the text's integrity?




--
Grant. . . .
unix || die







Re: Scanning docs for bitsavers

2019-12-02 Thread Alexandre Souza via cctalk
I cannot understand your problems with PDF files.
I've created lots and lots of PDFs, with treated and untreated scanned
material. All of them are very readable and in use for years. Of course,
garbage in, garbage out. I take the utmost care in my scans to have good
enough source files, so I can create great PDFs.

Of course, Guy's comments are very informative and I'll learn more from them.
But I still believe in good preservation using PDF files. FOR ME it is the
best we have for encapsulating info. Forget HTML.

Please, take a look at this PDF, and tell me: Isn't that good enough for
preservation/use?
https://drive.google.com/file/d/0B7yahi4JC3juSVVkOEhwRWdUR1E/view

Thanks
Alexandre

---8<---Cut here---8<---
http://www.tabajara-labs.blogspot.com
http://www.tabalabs.com.br
---8<---Cut here---8<---


On Tue, Dec 3, 2019 at 00:08, Grant Taylor via cctalk <
cctalk@classiccmp.org> wrote:

> On 12/2/19 5:34 PM, Guy Dunphy via cctalk wrote:
>
> Interesting comments Guy.
>
> I'm completely naive when it comes to scanning things for preservation.
>   Your comments do pass my naive understanding.
>
> > But PDF literally cannot be used as a wrapper for the results,
> > since it doesn't incorporate the required image compression formats.
> > This is why I use things like html structuring, wrapped as either a zip
> > file or RARbook format. Because there is no other option at present.
> > There will be eventually. Just not yet. PDF has to be either greatly
> > extended, or replaced.
>
> I *HATE* doing anything with PDFs other than reading them.  My opinion
> is that PDF is where information goes to die.  Creating the PDF was the
> last time that anything other than a human could use the information as
> a unit.  Now, in the future, it's all chopped up lines of text that may
> be in a nonsensical order.  I believe it will take humans (or something
> yet to be created with human like ability) to make sense of the content
> and recreate it in a new form for further consumption.
>
> Have you done any looking at ePub?  My understanding is that they are a
> zip of a directory structure of HTML and associated files.  That sounds
> quite similar to what you're describing.
>
> > And that's why I get upset when people physically destroy rare old
> > documents during or after scanning them currently. It happens so
> > frequently, that by the time we have a technically adequate document
> > coding scheme, a lot of old documents won't have any surviving
> > paper copies.  They'll be gone forever, with only really crap quality
> > scans surviving.
>
> Fair enough.
>
>
>
> --
> Grant. . . .
> unix || die
>


Re: P112

2019-12-02 Thread Bill Gunshannon via cctalk
On 12/2/19 8:36 PM, Dennis Boone via cctalk wrote:
>   > The menu you get when you hit Escape on startup has an option for
>   > setting a floppy as 8".  Mine is ROM 5.7 which I believe is the next
>   > to last.  Unless it is different than the other CP/M systems I have
>   > FORMAT should have no hardware dependent code in it.  It was the OS
>   > that tracked and controlled what the underlying format of the
>   > floppies were.
> 
> That's literally the only thing I could find.  I can't see any place
> where a DPB is defined for an 8" drive.  If the DPB is wrong, FORMAT
> will misbehave.  I'd be utterly unsurprised if more parameters than just
> cylinder count weren't wrong.
> 
> You might try examining the active DPBs for the system to see what all
> it's using, and even correct it with a monitor or debugger.

I put 8" on a back burner for the moment.

> 
>   > > I haven't had mine out in a while, but last I did, the GIDE did
>   > > work.  Seems like there was some ordering of operations on HD setup
>   > > that I did wrong the first time.
> 
> The more I think about this, the more I think maybe the thing that
> "fixed" mine was wiping the drive before trying FDISK.

Wiping with what?

> 
> You didn't show the full FDISK session, or a listing of what partitions
> it thinks are there to begin with.  That might help shake free thoughts
> from others.

It starts out with no partitions and claims the partition table
is not a valid P112 table.  The "w" command fixes that but the
table is still empty.  Interestingly enough, a 64M CF in an IDE
adapter works with FDISK. but then when I try to "INIT" it under
RSX180 it prints a stream of garbage on the screen and does nothing
to the disk/CF.

Not sure how much longer I am likely to keep beating my head
against the wall.

bill



Re: Scanning docs for bitsavers

2019-12-02 Thread Grant Taylor via cctalk

On 12/2/19 5:34 PM, Guy Dunphy via cctalk wrote:

Interesting comments Guy.

I'm completely naive when it comes to scanning things for preservation. 
 Your comments do pass my naive understanding.


But PDF literally cannot be used as a wrapper for the results, 
since it doesn't incorporate the required image compression formats. 
This is why I use things like html structuring, wrapped as either a zip 
file or RARbook format. Because there is no other option at present. 
There will be eventually. Just not yet. PDF has to be either greatly 
extended, or replaced.


I *HATE* doing anything with PDFs other than reading them.  My opinion 
is that PDF is where information goes to die.  Creating the PDF was the 
last time that anything other than a human could use the information as 
a unit.  Now, in the future, it's all chopped up lines of text that may 
be in a nonsensical order.  I believe it will take humans (or something 
yet to be created with human like ability) to make sense of the content 
and recreate it in a new form for further consumption.


Have you done any looking at ePub?  My understanding is that they are a 
zip of a directory structure of HTML and associated files.  That sounds 
quite similar to what you're describing.
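
A quick way to check (a sketch using only Python's standard library; the 
file name is hypothetical):

# An EPUB really is a zip archive of XHTML/HTML plus metadata.
import zipfile

with zipfile.ZipFile("manual.epub") as epub:
    for name in epub.namelist():
        print(name)   # e.g. mimetype, META-INF/container.xml, OEBPS/*.xhtml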


And that's why I get upset when people physically destroy rare old 
documents during or after scanning them currently. It happens so 
frequently, that by the time we have a technically adequate document 
coding scheme, a lot of old documents won't have any surviving 
paper copies.  They'll be gone forever, with only really crap quality 
scans surviving.


Fair enough.



--
Grant. . . .
unix || die


Re: P112

2019-12-02 Thread Dennis Boone via cctalk
 > The menu you get when you hit Escape on startup has an option for
 > setting a floppy as 8".  Mine is ROM 5.7 which I believe is the next
 > to last.  Unless it is different than the other CP/M systems I have
 > FORMAT should have no hardware dependent code in it.  It was the OS
 > that tracked and controlled what the underlying format of the
 > floppies were.

That's literally the only thing I could find.  I can't see any place
where a DPB is defined for an 8" drive.  If the DPB is wrong, FORMAT
will misbehave.  I'd be utterly unsurprised if more parameters than just
cylinder count weren't wrong.

You might try examining the active DPBs for the system to see what all
it's using, and even correct it with a monitor or debugger.

 > > I haven't had mine out in a while, but last I did, the GIDE did
 > > work.  Seems like there was some ordering of operations on HD setup
 > > that I did wrong the first time.

The more I think about this, the more I think maybe the thing that
"fixed" mine was wiping the drive before trying FDISK.

You didn't show the full FDISK session, or a listing of what partitions
it thinks are there to begin with.  That might help shake free thoughts
from others.

Does dgriffi lurk on this list?

De


Re: P112

2019-12-02 Thread Bill Gunshannon via cctalk
On 12/2/19 4:55 PM, Dennis Boone via cctalk wrote:
>   > Well, I have the dBit FDADAP.  Works great.  I have used them before
>   > on a PC to access PDP-11 disks from PUTR and E11.  The P112 claims to
>   > support 8" but I am finding it unlikely.  If it (well, at least the
>   > OSes it runs) don't even know it only has 77 tracks I can't see how
>   > anyone has done 8" disks on it.
> 
> I went spelunking in the ROM and BIOS sources the other day, and I don't
> see any 8" drive stuff in there at all -- it's all 3.5" and 5.25".  I
> looked at FORMAT too.  Am I looking at old code?

The menu you get when you hit Escape on startup has an option
for setting a floppy as 8".  Mine is ROM 5.7 which I believe
is the next to last.  Unless it is different than the other
CP/M systems I have FORMAT should have no hardware dependent
code in it.  It was the OS that tracked and controlled what
the underlying format of the floppies were.

> 
>   > Why am I getting this sneaking suspicion that none of this stuff
>   > actually works?
> 
> I haven't had mine out in a while, but last I did, the GIDE did work.
> Seems like there was some ordering of operations on HD setup that I did
> wrong the first time.

The GIDE seems to work.  It appears to be FDISK that is broken.
Given that, unless people had custom versions of FDISK I fail
to see how anyone set up a hard disk on a P112.

> 
> It doesn't help that there are no docs for the thing other than whatever
> paper came with it.

Thus the reason for me searching places like this for help.  :-)

> 
> Sanity check your software versions?  Mixing and matching variants can
> be problematic since there are several generations and several forks of
> the P112 and GIDE stuff; and there's at least one version of FORMAT
> that's reputed to have a serious bug.

No mix or match.  Just using the images provided on the CD that
came with it.  FORMAT works OK for 3.5" disks.  I have had no
luck trying to FORMAT 5.25" or 8" floppies.  And I can't even
get that far on a hard disk.

bill



Re: Scanning docs for bitsavers

2019-12-02 Thread Guy Dunphy via cctalk
At 01:57 PM 2/12/2019 -0700, you wrote:
>On Tue, Nov 26, 2019 at 8:51 PM Jay Jaeger via cctalk 
>wrote:
>
>> When I corresponded with Al Kossow about format several years ago, he
>> indicated that CCITT Group 4 lossless compression was their standard.
>>
>
>There are newer bilevel encodings that are somewhat more efficient than G4
>(ITU-T T.6), such as JBIG (T.82) and JBIG2 (T.88), but they are not as
>widely supported, and AFAIK JBIG2 is still patent encumbered. As a result,
>G4 is still arguably the best bilevel encoding for general-purpose use. PDF
>has natively supported G4 for ages, though it gained JBIG and JBIG2 support
>in more recent versions.
>
>Back in 2001, support for G4 encoding in open source software was really
>awful; where it existed at all, it was horribly slow. There was no good
>reason for G4 encoding to be slow, which was part of my motivation in
>writing my own G4 encoder for tumble (an image-to-PDF utility). However, G4
>support is generally much better now.



Mentioning JBIG2 (or any of its predecessors) without noting that it is
completely unacceptable as a scanned document compression scheme, demonstrates
a lack of awareness of the defects it introduces in encoded documents.
See http://everist.org/NobLog/20131122_an_actual_knob.htm#jbig2
JBIG2 typically produces visually appalling results, and also introduces so
many actual factual errors (typically substituted letters and numbers) that
documents encoded with it have been ruled inadmissible as evidence in court.
Sucks to be an engineering or financial institution, which scanned all its
archives with JBIG2 then shredded the paper originals to save space.
The fuzziness of JBIG is adjustable, but fundamentally there will always
be some degree of visible patchiness and risk of incorrect substitution.

As for G4 bilevel encoding, the only reasons it isn't treated with the same
disdain as JBIG2, are:
1. Bandwagon effect - "It must be OK because so many people use it."
2. People with little or zero awareness of typography, the visual quality of
   text, and anything to do with preservation of historical character of
   printed works. For them "I can read it OK" is the sole requirement.

G4 compression was invented for fax machines. No one cared much about visual
quality of faxes, they just had to be readable. Also the technology of fax
machines was only capable of two-tone B&W reproduction, so that's what G4
encoding provided.

Thinking these kinds of visual degradation of quality are acceptable when
scanning documents for long term preservation is both short-sighted and
ignorant of what can already be achieved with better technique.

For example, B&W text and line diagram material can be presented very nicely
using 16-level gray shading. That's enough to visually preserve all the
line and edge quality. The PNG compression scheme provides a color-indexed
4 bits/pixel format, combined with PNG's run-length coding. When documents
are scanned with sensible thresholds, then post-processed to ensure all white
paper is actually #FF and solid blacks are actually #0, while edges retain
adequate gray shading, PNG achieves an excellent level of filesize compression.
The visual results are _far_ superior to G4 and JBIG2 coding, and surprisingly
the file sizes can actually be smaller. It's easy to achieve on-screen results
that are visually indistinguishable from looking at the paper original, with
quite acceptable filesizes.
And that's the way it should be.
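
A rough sketch of that kind of processing (using Pillow; the threshold 
values and file names are illustrative assumptions, not a description of 
anyone's actual workflow):

from PIL import Image

src = Image.open("scan_page.png").convert("L")   # 8-bit grayscale scan

def snap(v):
    # Push clean paper to pure white and solid ink to pure black,
    # keeping gray levels on the edges for smooth line quality.
    if v >= 230:
        return 255
    if v <= 25:
        return 0
    return v

cleaned = src.point(snap)

# Reduce to 16 evenly spaced gray levels and store as an indexed-color
# (palette) PNG, so each pixel needs only 4 bits before compression.
levels16 = cleaned.point(lambda v: (v // 16) * 17)
indexed = levels16.convert("RGB").convert("P", palette=Image.ADAPTIVE, colors=16)
indexed.save("scan_page_16gray.png", optimize=True)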

Which brings us to PDF, which most people love because they use it all the
time, have never looked into the details of its internals, and can't imagine
anything better.
Just one point here. PDF does not support PNG image encoding. *All* the
image compression schemes PDF does support, are flawed in various cases.
But because PDF structuring is opaque to users, very few are aware of 
this and its other problems. That is why PDF isn't acceptable as a
container for long term archiving of _scanned_ documents for historical
purposes, even though PDF was at least extended to include an 'archival'
form in which all the font definitions must be included.

When I scan things I'm generally doing it in an experimental sense,
still exploring solutions to various issues such as the best way to deal
with screened print images and cases where ink screening for tonal images
has been overlaid with fine detail line art and text, which makes processing
to a high quality digital image quite difficult.

But PDF literally cannot be used as a wrapper for the results, since
it doesn't incorporate the required image compression formats. 
This is why I use things like html structuring, wrapped as either a zip
file or RARbook format. Because there is no other option at present.
There will be eventually. Just not yet. PDF has to be either greatly
extended, or replaced.

And that's why I get upset when people physically destroy rare old documents
during or after scanning them currently. It happens so frequently, that by
the time we have a technically adequate document coding scheme, a lot of old
documents won't have any surviving paper copies.  They'll be gone forever,
with only really crap quality scans surviving.

Re: The Internet Archive

2019-12-02 Thread Eric Smith via cctalk
On Wed, Nov 27, 2019 at 6:44 PM ben via cctalk 
wrote:

> Well it is good thing, but the REAL Hyper-media is yet to come.
> PROJECT XANADU *Founded 1960 * The Original Hypertext Project
>

The Foonly is not a /360.
The Foonly is more like a -10.
The Foonly is faster than lightning.
Oh, I'll get my Foonly... but when?


Re: 3" disks Was: InfoWorld - May 11, 1992 (3" disk formats)

2019-12-02 Thread John Foust via cctalk
At 04:26 PM 12/2/2019, Fred Cisin via cctalk wrote:
>Thank you.
>I haven't heard from Brett Glass in decades, since he moved to Idaho.  He ran 
>the numbers and decided that the differential in market value between his 
>housing here and similar in Idaho was enough to support him for quite a while. 

He's still around.  I knew him back in my writing days, and then
I bumped into him again a decade later when I started a WISP.

https://twitter.com/brettglass 

- John



3" disks Was: InfoWorld - May 11, 1992 (3" disk formats)

2019-12-02 Thread Fred Cisin via cctalk

Thank you.
I haven't heard from Brett Glass in decades, since he moved to Idaho.  He 
ran the numbers and decided that the differential in market value between 
his housing here and similar in Idaho was enough to support him for quite 
a while.  Since he was working as a writer, he didn't have to be 
physically close to his work, and he could get decent internet access 
through the university there.



3" drives were readily available in MFM compatible forms.  The drive was 
designed to be a drop-in replacement for 5.25" SA400 style drives. (OK, 
"SA450"?).  And the 3" drives even used a 34 pin card edge and a "molex" 
power connector (like 5.25"; unlike 3.5")

Depending on format choices, MFM from 180K to 720K.


Amstrad used them, as did early Gavilan and some others.  The Gavilans 
that I had (both 8 and 16 line models) were later ones, with 3.5" drives. 
Gavilan's MS-DOS 2.11 3.5" format was not the same as IBM's PC-DOS 3.20 
720K format.  But, some development continued, even after Gavilan 
collapsed, and the Gavilan MS-DOS 2.11 version K was the same format as 
IBM.  For those not familiar, MS-DOS 2.11 and 3.31 were versions that were 
heavily modified by OEMs, particularly for drive types (including >32M in 
3.31).  Hence, 2.11 and 3.31 are DIFFERENT from one OEM to another!



But, early on, AMDISK marketed two drive external boxes for Radio Shack 
Color Computer and for Apple2.  The Coco box was standard SA400 compatible 
drives.


I never had one of the Apple2 3" boxes.  So, I have questions about the 
interface.
Q: Was it a different logic board on the 3" drive for compatibility with 
the Apple2 "DISK2" interface?   (GCR encoding)
Q: Or did their external 2 drive box come with its own MFM FDC for the 
Apple2?   (In which case, like the SVA FDC, it could adapt an Apple2 to 
"standard" drive types)


I assume that the 2 drive external boxes came after the original drive as 
5.25" drop-in retrofit, but I could easily be wrong, and it is POSSIBLE 
that the Apple2 version could have been the first release of the drives.



The 3" drives were available in 40 and 80 cylinder models.
The 3" drives were available in single and double sided.  The single sided 
drives would permit "flippy" operation, to use the other side of the disk 
as if it were another disk.   BUT, the double sided drives (at least 
the few that I had) would NOT let you insert a flipped disk; therefore, 
the double sided drives could not access the "B" side of a "flippy" disk!

I never got around to looking into modifying the drive for that.

--
Grumpy Ol' Fred ci...@xenosoft.com


On Sun, 1 Dec 2019, Sellam Abraham via cctalk wrote:


I thought this was fun; stumbled upon it while looking for what words of
wisdom Fred had to share about the format of 3" disks:

https://books.google.com/books?id=7D0EMBAJ=PA86=PA86=3%22+floppy+disk+format+fred+cisin=bl=b3iHCeqJzB=ACfU3U19DoXha-0sh2fqm26M72Z1tlKLXw=en=X=2ahUKEwjsivXd75XmAhURLX0KHYL0BBkQ6AEwAnoECAoQAQ#v=onepage=3%22%20floppy%20disk%20format%20fred%20cisin=false

Hopefully Fred will see this and tell me whether the 3" disk format was MFM
or GCR given that the Orwellipedia says the 3" disk format was initially
designed to work with the Apple ][ floppy drive interface.

https://en.wikipedia.org/wiki/History_of_the_floppy_disk?fbclid=IwAR2atb2Z_j_-DVNLTT1eqAZLw4ajB9s0LxzWgSMQoyEoi0_5Yy1KuNi7_TI#The_3-inch_compact_floppy_disk

Sellam


Re: P112

2019-12-02 Thread Dennis Boone via cctalk
 > Well, I have the dBit FDADAP.  Works great.  I have used them before
 > on a PC to access PDP-11 disks from PUTR and E11.  The P112 claims to
 > support 8" but I am finding it unlikely.  If it (well, at least the
 > OSes it runs) don't even know it only has 77 tracks I can't see how
 > anyone has done 8" disks on it.

I went spelunking in the ROM and BIOS sources the other day, and I don't
see any 8" drive stuff in there at all -- it's all 3.5" and 5.25".  I
looked at FORMAT too.  Am I looking at old code?

 > Why am I getting this sneaking suspicion that none of this stuff
 > actually works?

I haven't had mine out in a while, but last I did, the GIDE did work.
Seems like there was some ordering of operations on HD setup that I did
wrong the first time.

It doesn't help that there are no docs for the thing other than whatever
paper came with it.

Sanity check your software versions?  Mixing and matching variants can
be problematic since there are several generations and several forks of
the P112 and GIDE stuff; and there's at least one version of FORMAT
that's reputed to have a serious bug.

De


Re: Scanning docs for bitsavers

2019-12-02 Thread Eric Smith via cctalk
On Tue, Nov 26, 2019 at 8:51 PM Jay Jaeger via cctalk 
wrote:

> When I corresponded with Al Kossow about format several years ago, he
> indicated that CCITT Group 4 lossless compression was their standard.
>

There are newer bilevel encodings that are somewhat more efficient than G4
(ITU-T T.6), such as JBIG (T.82) and JBIG2 (T.88), but they are not as
widely supported, and AFAIK JBIG2 is still patent encumbered. As a result,
G4 is still arguably the best bilevel encoding for general-purpose use. PDF
has natively supported G4 for ages, though it gained JBIG and JBIG2 support
in more recent versions.

Back in 2001, support for G4 encoding in open source software was really
awful; where it existed at all, it was horribly slow. There was no good
reason for G4 encoding to be slow, which was part of my motivation in
writing my own G4 encoder for tumble (an image-to-PDF utility). However, G4
support is generally much better now.
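
For instance, a minimal sketch of producing a G4-encoded image today with 
Pillow (file names are hypothetical):

from PIL import Image

# CCITT Group 4 (T.6) needs a bilevel image, hence the convert to mode "1".
scan = Image.open("page_scan.png").convert("1")
scan.save("page_scan_g4.tiff", compression="group4")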


Re: P112

2019-12-02 Thread Bill Gunshannon via cctalk
On 12/2/19 11:31 AM, Lamar Owen wrote:
> 
> 
> As far as 8-inch drives are concerned, you would need to do exactly 
> everything you would need to do to hook up an 8-inch drive to a PC, 
> since the P112 uses a PC SuperIO chip for the FDC, and the floppy 
> headers have PC pinouts and signal meanings (unlike the CPU280.). 
> The dBit FDADAP or similar would be needed to generate TG43 as well as 
> translate the pinout correctly.  I haven't tried single-density support 
> on the P112, so don't know if that would work or not, but the SuperIO 
> chip used should be able to do that.
> 
> 

Well, I have the dBit FDADAP.  Works great.  I have used them
before on a PC to access PDP-11 disks from PUTR and E11.  The
P112 claims to support 8" but I am finding it unlikely.  If it
(well, at least the OSes it runs) don't even know it only has
77 tracks I can't see how anyone has done 8" disks on it.

And then I went on to try the GIDE.  I can't get FDISK to
create partitions of any kind.

I get

Command (h for help) : n
Partition number (1-8) : 1
First cylinder (1-17455, default 1) : 1
Value out of range

And that is what I get no matter what value I enter.

Why am I getting this sneaking suspicion that none of this
stuff actually works?

bill



Re: P112

2019-12-02 Thread Lamar Owen via cctalk

On 11/29/19 7:01 PM, Bill Gunshannon via cctalk wrote:
Let's try again with the right name in the Subject line! It's not 
really classic (although it does try to pretend to be), but does anyone 
here do anything with the P112 SBC? I am trying to get 8" disks 
running on it but I am seeing some rather strange behavior.
Well, the P112 is a classic of sorts, being a mid-1990's design (much 
like the CPU280 from Tilmann Reh that I 'revived' a couple of years 
back, and still have PCBs leftover :-) ).  The P112 kit was, up 
until a few months ago, still available from David Griffith (661.org, 
which you've already found).  I bought two while I was buying a few 
years back, and built up one of them, which I still use a bit with a 
GIDE from Terry.  I am actually planning to port the TRS-80 Model 4's 
LS-DOS 6 to it for fun, but haven't had time to work too much with it.  
I was actually thinking about fabbing a few boards to try out faster 
Z80182 chips (officially there is a 33MHz version that has been 
overclocked by some to well over that speed) rather than risk 
desoldering the 16MHz '182 from one of the two kit boards I bought, so, 
for David Griffith's benefit, I would be interested in a bare board or 
three myself if he decides to fab some.  Sourcing the SuperIO and doing 
the fine-pitch SMD soldering will be a bit of a challenge, but worth it 
I believe.


As far as 8-inch drives are concerned, you would need to do exactly 
everything you would need to do to hook up an 8-inch drive to a PC, 
since the P112 uses a PC SuperIO chip for the FDC, and the floppy 
headers have PC pinouts and signal meanings (unlike the CPU280.).  
The dBit FDADAP or similar would be needed to generate TG43 as well as 
translate the pinout correctly.  I haven't tried single-density support 
on the P112, so don't know if that would work or not, but the SuperIO 
chip used should be able to do that.





191202 Classic equipment available & my bad year.

2019-12-02 Thread Dave Dunfield via cctalk
Hi, made a number of updates to the sale pages on my site, and brought
back a copy of my commercial site (good for downloads).
Unfortunately I screwed up the .html pages and lost some links.

Should all be fixed now.

Added an FAQ, some more parts (e.g.: 8008 CPI for MOD8), and some sample
pricing (please see FAQ before complaining).

If you've looked at the site before, do refresh each page as you go to
it, as many browsers cache pages and will happily show you the old one.

http://www.classiccmp.org/dunfield/sale/index.htm

Dave


InfoWorld - May 11, 1992 (3" disk formats)

2019-12-02 Thread Sellam Abraham via cctalk
I thought this was fun; stumbled upon it while looking for what words of
wisdom Fred had to share about the format of 3" disks:

https://books.google.com/books?id=7D0EMBAJ=PA86=PA86=3%22+floppy+disk+format+fred+cisin=bl=b3iHCeqJzB=ACfU3U19DoXha-0sh2fqm26M72Z1tlKLXw=en=X=2ahUKEwjsivXd75XmAhURLX0KHYL0BBkQ6AEwAnoECAoQAQ#v=onepage=3%22%20floppy%20disk%20format%20fred%20cisin=false

Hopefully Fred will see this and tell me whether the 3" disk format was MFM
or GCR given that the Orwellipedia says the 3" disk format was initially
designed to work with the Apple ][ floppy drive interface.

https://en.wikipedia.org/wiki/History_of_the_floppy_disk?fbclid=IwAR2atb2Z_j_-DVNLTT1eqAZLw4ajB9s0LxzWgSMQoyEoi0_5Yy1KuNi7_TI#The_3-inch_compact_floppy_disk

Sellam