Re: Int 13h buffer 64k boundaries

2018-04-20 Thread Peter Coghlan via cctalk


> That reminds me of when I phoned IBM here in Ireland looking for software
> support for their VM mainframe operating system not too many years later,
> sometime in the early 1990s.  I spelled out every variation of the name
> I could think of but they kept asking me what version of OS/2 I had.
> I guess by then the circle had turned again.

Did they think that you were saying "P M"?
"PM" ("Presentation Manager") was the OS/2 equivalent of "Windows"

Tell them that you mean 'V' as in "Venus"  :-)



Well, I called out terms such as Virtual Machine / Conversational (Cambridge?)
Monitor System / System Product / High Performance Option and got back the
over the telephone equivalent of blank looks and questions about whether the
hardware involved was a PS/2 (it was an Amdahl 5870 but we had an IBM
software support contract).

I must have managed to get the message through eventually though because
some time after I left that job, my former boss told me that IBM got back
to them with a workaround for the issue I reported.

Regards,
Peter Coghlan.


Re: Int 13h buffer 64k boundaries

2018-04-20 Thread Fred Cisin via cctalk

On Fri, 20 Apr 2018, Peter Coghlan via cctalk wrote:

That reminds me of when I phoned IBM here in Ireland looking for software
support for their VM mainframe operating system not too many years later,
sometime in the early 1990s.  I spelled out every variation of the name
I could think of but they kept asking me what version of OS/2 I had.
I guess by then the circle had turned again.


Did they think that you were saying "P M"?
"PM" ("Presentation Manager") was the OS/2 equivalent of "Windows"

Tell them that you mean 'V' as in "Venus"  :-)


Re: Int 13h buffer 64k boundaries

2018-04-20 Thread Chuck Guzis via cctalk
On 04/20/2018 03:23 AM, Peter Coghlan via cctalk wrote:

> That reminds me of when I phoned IBM here in Ireland looking for software
> support for their VM mainframe operating system not too many years later,
> sometime in the early 1990s.  I spelled out every variation of the name
> I could think of but they kept asking me what version of OS/2 I had.
> I guess by then the circle had turned again.

Around 1983-4, we were looking for a smallish minicomputer to share the
workload of our VAX 11/750.  So we were considering alternatives.  Since
the 750 was running BSD, we definitely wanted another Unix box.

I saw a product announcement for the AT&T 3B5 mini and it looked like
something that might fit the bill.  So, I wanted to find out about
pricing and where we could benchmark one.  AT&T had just gone through
its breakup/"consent decree", so I placed a call to AT&T Sales and asked
about the 3B5.   I was transferred several times to various sales types
who didn't have the faintest idea of what I was talking about, even
after I read them the product announcement.  It was an hour of being
transferred from department to department, with absolutely no satisfaction.

We eventually gave up--if AT&T was going to be this difficult just to
*sell* us a system, what kind of nightmare was *support* likely to be?

The only computer anyone knew anything about was the PC 6300.  I told
them that I could drop by the Sears Computer Store (remember those?) on
El Camino and take one home this evening if that's what I wanted.

In the end, they offered to send us some literature--you guessed
it--that described the 6300.

--Chuck



Re: Int 13h buffer 64k boundaries

2018-04-20 Thread Peter Coghlan via cctalk
>
> I remember
> going to the regional IBM sales office (was that on Arques? It's been
> too long), purchase order in hand, wanting to pick up 10 of the 5150s.
> Nobody really knew what we were asking for--finally, someone showed up
> and told us that the lead time would be 12 weeks ARO.  We went down to
> Computerland and bought out their stock that evening.
>

That reminds me of when I phoned IBM here in Ireland looking for software
support for their VM mainframe operating system not too many years later,
sometime in the early 1990s.  I spelled out every variation of the name
I could think of but they kept asking me what version of OS/2 I had.
I guess by then the circle had turned again.

Regards,
Peter Coghlan.


Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Guy Sotomayor Jr via cctalk

> On Apr 19, 2018, at 8:55 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 04/19/2018 07:56 PM, Guy Sotomayor Jr wrote:
> 
>> As to why IBM entered the PC market, the rumor was (at least at the time
>> within IBM) was that T.J. Watson, Jr. was at an employee’s house and saw
>> an Apple II.  He said that he wanted to have IBM branded computers in IBM
>> employees homes.  That was how the IBM PC project was kicked off.
> 
> But it wasn't clear at all what IBM intended the PC for.  Cassette tape,
> TV interface and anything but state-of-the-art design.
> 
> The best part of the 5150, IMHO, was the keyboard.

It was a variant of the keyboard that was used on the System/23.  The basic
keyboard technology was used in a lot of IBM keyboards at the time.

[snip]

> 
> My general impression is that IBM made the 5150 product, without the
> faintest idea of how they were going to sell it.
> 

It was IBM’s answer to the Apple II and various S-100 systems, so it was
stripped down for a “low” entry price and/or built up with other stuff.  It was
designed to be easy to interface to so that others could make peripherals.

It was really following the model of what other “home” computers at the
time were doing.  It was also a bit of an experiment and in that respect
you’re correct.  They didn’t know what it would be used for nor how to
sell it as it was *so* far outside of the normal IBM product lines.

TTFN - Guy




Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Chuck Guzis via cctalk
On 04/19/2018 07:56 PM, Guy Sotomayor Jr wrote:

> As to why IBM entered the PC market, the rumor was (at least at the time
> within IBM) was that T.J. Watson, Jr. was at an employee’s house and saw
> an Apple II.  He said that he wanted to have IBM branded computers in IBM
> employees homes.  That was how the IBM PC project was kicked off.

But it wasn't clear at all what IBM intended the PC for.  Cassette tape,
TV interface and anything but state-of-the-art design.

The best part of the 5150, IMHO, was the keyboard.

By the time one got through equipping the 5150 with floppy drives, as
display and memory, it ran into a pretty good pile of money.  It was
also clear that IBM didn't have any idea of how to sell it.  I remember
going to the regional IBM sales office (was that on Arques? It's been
too long), purchase order in hand, wanting to pick up 10 of the 5150s.
Nobody really knew what we were asking for--finally, someone showed up
and told us that the lead time would be 12 weeks ARO.  We went down to
Computerland and bought out their stock that evening.

I recall the scuttlebutt that went on before the official 5150 product
announcement.  IBM had just announced its 68K-based lab computer.  There
were those who were hoping for a 68K PC, but I figured that there was no
way that IBM would jeopardize their CS9000 sales.

But there were certainly other 8086-based PCs out before the 5150--some
quite a bit more evolved.

I recall that Bill Morrow sold his Z80-based business package (MD2,
printer and monitor) bundled with software for about half, or less, of
the price of a minimally disk-capable 5150 with monitor.

My general impression is that IBM made the 5150 product, without the
faintest idea of how they were going to sell it.

--Chuck





Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Guy Sotomayor Jr via cctalk

> On Apr 19, 2018, at 4:16 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 04/19/2018 12:14 PM, Fred Cisin via cctalk wrote:
> 
>> I have no difficulty admitting that I didn't, and don't, have
>> Chuck's level of experience and knowledge. My entire venture into
>> microcomputers was a hobby that got out of hand.
> It's not so much expertise, but where you start your investigations.
> 
> Right when I peered into the 5150, I saw the 8237 DMA controller (first
> cousin to the 8257) and recognized it from my 8-bit (8085) days.  It was
> immediately obvious that IBM had taken a bunch of legacy 8 bit
> peripheral chips and shoved them into the PC.   In fact, the 5150 was
> surprising in how primitive the engineering was--something you
> didn't expect from a high-tech pioneer like IBM.  So the DMA address
> space had to be 16 bits with simple bank select--using a disk controller
> chip that was designed to be used with 8 inch drives.

As I have mentioned previously, the 5150 was done by a relatively small
team and they leveraged hardware from a product that had been released
a short time prior to the 5150.  That product was the System/23 which was
based on the 8085.  The importance of the System/23 cannot be overstated
as it was the first IBM product that featured a non-IBM designed CPU.

It is also the case that the entire team that developed the 5150 HW and BIOS
were all from the System/23 team.  The XT-bus was the way it was because
it was the System/23 peripheral bus turned 180-degrees so that “cheap” PC
cards could not be used in the System/23.

The fact that it used “primitive” engineering was actually a design goal.  The
point of the 5150 was to create something that was simple to build and had
a simple design.  Due to the shoestring (for IBM) budget, the team leveraged
a lot from the System/23.

As to why IBM entered the PC market, the rumor was (at least at the time
within IBM) was that T.J. Watson, Jr. was at an employee’s house and saw
an Apple II.  He said that he wanted to have IBM branded computers in IBM
employees homes.  That was how the IBM PC project was kicked off.

BTW, I was on the System/23 team (wrote a fair amount of the ROM code)
and I knew all of the folks on the PC team.  Dr. Dave Bradley (of CTRL-ALT-DEL
fame) had the office across the hall from mine and discussed a lot of the
goings on for what would become the 5150.

TTFN - Guy



Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Chuck Guzis via cctalk
On 04/19/2018 05:33 PM, Jim Brain via cctalk wrote:

> Someday, the products and software designed and built by the folks in
> this list will be judged by those who follow us.  Possibly the rest of
> you have worked in industries where you were allowed to use new
> solutions, you had ample time to design and develop, and your marketing
> departments priced your solutions at a reasonable price point, but I've
> not had those luxuries.  Thus, I want to be fair to those before me who
> created things like the IBM PC architecture, not because it is a great
> architecture, but because they shipped a real product that added value
> for many folks and did so while working inside a company not known for
> agility.  The folks who did that deserve my respect, and when I am gone
> and folks look at my design choices, I hope they will respect me for
> doing what I could given the constraints I faced.

My view is that it probably won't matter.  Technology is moving so fast
that it won't be long before yesterday's PCs will be viewed with the
attitude that today's "retro" PC enthusiasts have toward an 082 sorter.
Recall that, in 1955, a lot of popular culture viewed that as a
"computer".  (I can probably come up with a couple of contemporary cinema
examples where that was exactly how one was portrayed.)

When I put on my future-view goggles and read about the steps being made
today in AI and associated hardware technology, all of this "personal
computer" hardware will seem just as primitive.

Consider that the 082 dates from 1949 and the last unit rolled off the
line in 1978.  Now consider how antiquated a 10 year old mobile phone is
viewed by most people.

--Chuck





Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Jim Brain via cctalk

On 4/19/2018 6:16 PM, Chuck Guzis via cctalk wrote:

> So, at the time, looking at the 5150, it was an overpriced primitive
> implementation using a 1970s CPU.   Many people at the time thought it
> would be less popular than the 5100.

While I won't argue the technical merits of your position, I feel like 
we apply revisionism at times to these things.


I would argue that some engineer in IBM ranks was passionately trying to 
convince IBM brass that IBM needed to have a stake in the personal 
computer space, lest other companies swallow up the market.  IBM, 
lumbering giant that it was, probably was reluctant to mess around with 
toy computers (their opinion no doubt) at all. But, someone (or 
someones) won the battle, and someone else had the inspirational idea to 
use off the shelf components, as opposed to having an IBM-branded and 
designed CPU, etc.


Sure, they used old stuff, but it was working stuff, and I think the 
goal was to get something to market as quickly as possible.  Being 
overpriced was IBM Marketing's touch (you call it overpriced; as a 
manufacturer, I call it capitalism at work).


Why do I even post this?

Someday, the products and software designed and built by the folks in 
this list will be judged by those who follow us.  Possibly the rest of 
you have worked in industries where you were allowed to use new 
solutions, you had ample time to design and develop, and your marketing 
departments priced your solutions at a reasonable price point, but I've 
not had those luxuries.  Thus, I want to be fair to those before me who 
created things like the IBM PC architecture, not because it is a great 
architecture, but because they shipped a real product that added value 
for many folks and did so while working inside a company not known for 
agility.  The folks who did that deserve my respect, and when I am gone 
and folks look at my design choices, I hope they will respect me for 
doing what I could given the constraints I faced.


Jim


Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Fred Cisin via cctalk

I have no difficulty admitting that I didn't, and don't, have
Chuck's level of experience and knowledge. My entire venture into
microcomputers was a hobby that got out of hand.


On Thu, 19 Apr 2018, Chuck Guzis via cctalk wrote:

It's not so much expertise, but where you start your investigations.
Right when I peered into the 5150, I saw the 8237 DMA controller (first
cousin to the 8257) and recognized it from my 8-bit (8085) days.  It was
immediately obvious that IBM had taken a bunch of legacy 8 bit
peripheral chips and shoved them into the PC.   In fact, the 5150 was
surprising in how primitive the engineering was--something you
didn't expect from a high-tech pioneer like IBM.  So the DMA address
space had to be 16 bits with simple bank select--using a disk controller
chip that was designed to be used with 8 inch drives.
The Technical Reference BIOS listing confirmed the suspicion that the
5150 implementation couldn't cross 64K banks.  It had nothing to do with
DOS, per se.


Of course not.  But WHY didn't DOS programs, such as FORMAT, check whether 
their buffers were in usable places?   Not a common problem in DOS 1.0, 
but by about DOS 3, DOS was much less likely to be entirely in the bottom 
64K.
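The check itself is cheap.  A minimal sketch in C of what such a buffer
test could have looked like (hypothetical helper names, not actual DOS or
FORMAT source):

```c
#include <stdint.h>

/* 8088 real mode: physical address = segment * 16 + offset, 20 bits. */
static uint32_t linear_addr(uint16_t seg, uint16_t off)
{
    return (((uint32_t)seg << 4) + off) & 0xFFFFFu;
}

/* Would a transfer of 'len' bytes starting at seg:off cross a 64K
 * physical boundary?  That is the condition that makes the 8237 wrap. */
static int crosses_64k(uint16_t seg, uint16_t off, uint32_t len)
{
    uint32_t start = linear_addr(seg, off);

    if (len == 0)
        return 0;
    /* first and last byte land in different 64K pages */
    return (start >> 16) != ((start + len - 1) >> 16);
}
```

If the check fires, the program can simply use a different buffer, as
discussed elsewhere in this thread.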



At the same time the PC debuted, we were working with early steppings of
the 80186, which did feature two channels of 20-bit address DMA--and 16
bit bus width to boot.


"Wisdom comes from experience. Experience is often a result of lack of 
wisdom."- Terry Pratchett


Although I wanted to know some, I was brought up with NO background in 
hardware nor electronics!

Is it OK to be envious?

My parents were dismayed when I left aerospace FORTRAN programming and 
went into auto repair ("I'll get back into computers when I can afford a 
tabletop computer of my own.  Less than 10 years.")  That started to turn 
around when I was successful, and started supplying them with all of their 
cars.  ("I bought this Karmann Ghia for a few hundred dollars, and did a 
lot of work on it.  I think that you will enjoy it.")


I drooled over S100, and bought the first TRS80 to show up at the store 
($400, since I had learned enough to be able to hook up a tape recorder 
and CCTV monitor).



So, at the time, looking at the 5150, it was an overpriced primitive
implementation using a 1970s CPU.


Even I could see that Segment:Offset was a kludge to get a MB of memory in 
a 64K machine.
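For anyone who never fought with it: the 8088 forms each 20-bit physical
address as segment * 16 + offset, so many different Segment:Offset pairs
name the same byte.  A sketch in C (hypothetical helper name, purely
illustrative):

```c
#include <stdint.h>

/* Normalize a real-mode Segment:Offset pair to the 20-bit linear address
 * the 8088 puts on the bus.  Addresses past 1 MB wrap on the 5150. */
static uint32_t linear_addr(uint16_t seg, uint16_t off)
{
    return (((uint32_t)seg << 4) + off) & 0xFFFFFu;
}
```

The kludge shows in the aliasing: 1234:0010 and 1235:0000 both normalize
to linear address 12350h.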



Many people at the time thought it
would be less popular than the 5100.


Well, it certainly SOLD way more.  But, I doubt that I could barter it to 
John Titor for a one way ride back 55 years.



Rather than buy my first 5150, I was strongly drawn to the NEC APC. For
about the same price as an outfitted 5150, you could buy a true 16 bit
box with 8" disk drives and really nice graphics that was built like a
battleship.  The only problem is that nobody had ever heard of it.
But IBM had the golden reputation.  Many people at the time,
particularly the older ones, didn't talk about "computers" so much as
"IBM machines".


I made a decision in August, 1981 to buy a 5150.
"It probably won't be as good as many others, but, being from IBM, within 
a decade, most computers will be copies of it, with only a niche market 
for anything else."

I was pleased that Apple survived.

--
Grumpy Ol' Fred ci...@xenosoft.com


Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Jecel Assumpcao Jr. via cctalk
Chuck Guzis pointed out that the PC was built from 8 bit peripheral
chips, which was where the 64KB problem came from.

When I saw the design, I thought it was really cute how they were able
to use one of the timer channels and one of the DMA channels to
implement a DRAM refresh circuit almost "for free". Steve Jobs made fun
of the design showing that just the CGA board had more chips in it than
the whole Macintosh. Sure, PALs eliminated a lot of chips, but so did
the 6845.
 
Sadly, the PC AT was a lot less elegant. My impression was that they
divided the project among separate groups who weren't perfectly
coordinated. How many different ways does a single computer need to
translate key scan codes to ASCII, for example? And there was a circuit
with a bunch of TTLs just to generate the exact same signal that the
clock chip was already generating. That didn't make sense until you
found it came from an application note about the Multibus - if you have
more than one processor, then the signal is no longer the same. This
allowed them to add the MASTER line in the ISA bus which would have been
neat if it actually worked.

-- Jecel


Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Chuck Guzis via cctalk
On 04/19/2018 12:14 PM, Fred Cisin via cctalk wrote:

> I have no difficulty admitting that I didn't, and don't, have
> Chuck's level of experience and knowledge. My entire venture into
> microcomputers was a hobby that got out of hand.
It's not so much expertise, but where you start your investigations.

Right when I peered into the 5150, I saw the 8237 DMA controller (first
cousin to the 8257) and recognized it from my 8-bit (8085) days.  It was
immediately obvious that IBM had taken a bunch of legacy 8 bit
peripheral chips and shoved them into the PC.   In fact, the 5150 was
surprising in how primitive the engineering was--something you
didn't expect from a high-tech pioneer like IBM.  So the DMA address
space had to be 16 bits with simple bank select--using a disk controller
chip that was designed to be used with 8 inch drives.

The Technical Reference BIOS listing confirmed the suspicion that the
5150 implementation couldn't cross 64K banks.  It had nothing to do with
DOS, per se.

At the same time the PC debuted, we were working with early steppings of
the 80186, which did feature two channels of 20-bit address DMA--and 16
bit bus width to boot.

So, at the time, looking at the 5150, it was an overpriced primitive
implementation using a 1970s CPU.   Many people at the time thought it
would be less popular than the 5100.

Rather than buy my first 5150, I was strongly drawn to the NEC APC. For
about the same price as an outfitted 5150, you could buy a true 16 bit
box with 8" disk drives and really nice graphics that was built like a
battleship.  The only problem is that nobody had ever heard of it.

But IBM had the golden reputation.  Many people at the time,
particularly the older ones, didn't talk about "computers" so much as
"IBM machines".

--Chuck



Re: Int 13h buffer 64k boundaries

2018-04-19 Thread Fred Cisin via cctalk
Yes, it was a "beginner" mistake to not already know that the DMA couldn't 
span a 64K boundary.

It is obvious.  Once you've already run into it.

I have no difficulty admitting that I didn't, and don't, have Chuck's 
level of experience and knowledge.

My entire venture into microcomputers was a hobby that got out of hand.



> I'm learning a lot these days that would have been handy back then!
There are numerous people here whose posts present significant 
information.


--
Grumpy Ol' Fred ci...@xenosoft.com


On Wed, 18 Apr 2018, Chuck Guzis via cctalk wrote:


Really?  64K boundary issues cropping up in MS-DOS?

Egad, that would have been known in DOS 1.0.  Certainly, for anyone
writing his/her own low-level disk I/O, it was obvious.

Now, I'll add that if you wrote your own specialized device driver, DOS
did not guarantee handing your driver a buffer that obeyed the 64K
boundary rule.  I suspect that some DOS errors were reported to MS
because of third-party driver bugs.

And if you wrote a low-level driver that used 16-bit I/O, the magic
number was 128K.

But even in the earliest DOS 2.0 device drivers that I wrote, I included
code to split the transfer up to get around the 64K problem if needed.

--Chuck


Re: Int 13h buffer 64k boundaries

2018-04-18 Thread Chuck Guzis via cctalk
Really?  64K boundary issues cropping up in MS-DOS?

Egad, that would have been known in DOS 1.0.  Certainly, for anyone
writing his/her own low-level disk I/O, it was obvious.

Now, I'll add that if you wrote your own specialized device driver, DOS
did not guarantee handing your driver a buffer that obeyed the 64K
boundary rule.  I suspect that some DOS errors were reported to MS
because of third-party driver bugs.

And if you wrote a low-level driver that used 16-bit I/O, the magic
number was 128K.

But even in the earliest DOS 2.0 device drivers that I wrote, I included
code to split the transfer up to get around the 64K problem if needed.
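The split is only a few lines.  A sketch in C of the idea (hypothetical
names; a real driver would program the 8237 page register and count
directly):

```c
#include <stdint.h>

/* One contiguous piece of a DMA transfer. */
struct dma_chunk { uint32_t phys; uint32_t len; };

/* Break a transfer at each 64K physical boundary so the 8237 never
 * wraps.  Writes the pieces into 'out' (caller provides enough room)
 * and returns the number of chunks. */
static int split_at_64k(uint32_t phys, uint32_t len, struct dma_chunk *out)
{
    int n = 0;

    while (len > 0) {
        /* bytes remaining before the next 64K physical page boundary */
        uint32_t room  = 0x10000u - (phys & 0xFFFFu);
        uint32_t chunk = len < room ? len : room;

        out[n].phys = phys;
        out[n].len  = chunk;
        n++;
        phys += chunk;
        len  -= chunk;
    }
    return n;
}
```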

--Chuck


Re: Int 13h buffer 64k boundaries

2018-04-18 Thread Chuck Guzis via cctalk
On 04/18/2018 09:20 PM, Fred Cisin via cctalk wrote:
>>> I always found it amusing that many programs (even FORMAT!) would fail
>>> with the wrong error message if their internal DMA buffers happened to
>>> straddle a 64K block boundary.  THAT was a direct result of failure to
>>> adequately integrate, or at least ERROR-CHECK!, the segment-offset
>>> kludge
>>> bag.  Different device drivers and TSRs could affect at 16 byte
>>> intervals
>>> where the segment of a program ended up loading.
>>> It was NOT hard to normalize the Segment:Offset address and MOVE the
>>> buffer to another location if it happened to be straddling.
> 
> On Wed, 18 Apr 2018, Charles Anthony wrote:
>> Huh. I would guess that this is the source of a DOS bug that I found back
>> in the day, reported to MS, and never heard back.
>> . . . A buffer boundary straddling error certainly sounds like the
>> issue I was
>> seeing; it feels very odd to see a plausible explanation 35 years later.
> 
> I'm learning a lot these days that would have been handy back then!
> 
> Segment:Offset hides it until you normalize the resulting address.
> IIRC, INT13h should return a code of 09h if the DMA straddles a 64K
> boundary.
> But, not all code checks for that, or knows what to do when it happens.
> Looking at the value of ES:BX can work, or, if it happens, swap your
> DMA buffer with one that is not used for DMA (and doesn't happen to be
> 64K away :-)  In my code, I happened to have buffers for several
> purposes, so that was easy to do.
> If operating above Int 13H (DOS calls), then you are dependent on DOS
> error checking.  "Can you trust THAT?"
> If operating below Int 13h, then be careful where your DMA ends up, work
> without DMA, or simply watch for occurrence.
> 
> And, of course, a lot of C code can't tell the difference between end of
> file and a disk error.
> #define EOF (-1)    /* depending on implementation */
> while ((*ptr2++ = fgetc(fp2)) != EOF); /* does not differentiate between
> error and end of file */
> fgets() returns a null pointer for EITHER end-of-file OR error,
> and therefore assumes total reliability: any failure to read is
> assumed to be EOF.
> IFF available, feof(fp2) is much better.
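In standard C, the distinction is recoverable by consulting
ferror()/feof() after the read loop rather than trusting the EOF return
alone.  A minimal sketch, not code from the original post:

```c
#include <stdio.h>

/* Copy a stream byte by byte, then distinguish a clean end-of-file from
 * a read error.  Returns 0 on EOF, -1 on a read error. */
static int copy_bytes(FILE *in, FILE *out)
{
    int c;

    while ((c = fgetc(in)) != EOF)
        fputc(c, out);
    if (ferror(in))
        return -1;   /* a real disk/read error */
    return 0;        /* genuine end of file */
}
```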
> 
> 
> You certainly did the right thing, narrowing it down to load address. 
> The final conclusion would have been to systematically try many/all load
> addresses, and see whether it was consistent for given ones, and what
> the failing ones had in common.
> 
> Yes, the "solution" for the extraneous FORMAT failure was "add or remove
> TSRs and device drivers"!
> 
> When I first hit it, I used a P.O.S.T. card, and put in minimal code to
> output values until I realized that DS was the key, and that I had
> mishandled error #9.  Eventually I realized that even for code not my
> own, I needed to write a TSR intercepting Int 13H calls.
> (For example, the critical error handler in certain early versions of
> PC-Tools was more concerned with protecting their pretty display than
> success of writes!)
> 
> 
> Microsoft's response to error reporting was amusing.
> 
> I was in the Windows 3.10 Beta, and encountered the SMARTDRV write
> caching problem.  There was apparently a flaw on one of my drives, that
> neither SPINRITE nor SSTOR could find.  But, during Windoze
> installation, a write would fail, and with write caching ON (Windoze
> installation did NOT give you a choice), there was no way to recover
> from a write error!
> (SMARTDRV had already told SETUP that it had been successful, so now,
> when the error occurred, there was no way to (I)gnore the error (figure
> out which file copy had failed, rename the failed copy "BADSECS", and go
> back later to copy that one manually).  All you could do was (R)etry
> which didn't work, or (A)bort, which cancelled the entire setup before
> it ever wrote the directory entries for the files that had worked. By
> loading a bunch of space filler files on the disk, I was able to get the
> installation to be in a working area.
> Once I finally determined WHERE the bad track was, I put in a filler
> file to keep it from being used.  (SPINRITE tried to return it to use
> when I just marked it as BAD!)
> 
> Microsoft's response was, "YOU have a HARDWARE problem.  NOT OUR PROBLEM."
> I was unable to either convince them that CORRECT response to a hardware
> problem was a responsibility of the OS, NOR that SMARTDRV with
> write-caching was going to cause a lot of data losses that they would
> get blamed for, in spite of it not being narrowed down to SMARTDRV, and that
> it would end up costing them a lot.
> 
> Sho'nuff, COMPRESSION got blamed for the data losses.
> 
> DOS 6.2x had to be put out for FREE to fix "the problems with compression".
> The "problems with compression" were fixed by having SMARTDRV NOT
> default to write caching ON, have SMARTDRV NOT rearrange writes for
> efficiency (it wasn't writing DIRectory sectors until later), and having
> SMARTDRV NOT returning a DOS prompt until its buffers