This message is from the T13 list server.

Speaking as a host silicon and software vendor ...

DeviceControl is nearly always left at 0x00. Occasionally (very rarely
would be a better description) we set nIEN - but only during certain
identification sequences in APIC mode with shared interrupts.

Also - if you're really trying to make yourself look like a "modern"
Windows environment - why not use APIC mode?

-----Original Message-----
From: Pat LaVarre
To: [EMAIL PROTECTED]
Sent: 2/17/2004 8:37 AM
Subject: Re: [t13] o x3F6 DeviceControl default should be what

> Remember
> that some hosts don't even make a physical connection to the INTRQ
> signal - they don't care about the setting of nIEN. Other hosts may
> want to do some commands in polling mode and may set nIEN=1 when
> executing those commands.

Aye.

At my desk now, I see o 376 02 floats INTRQ hi, just as selecting the
absent device via o 1F6 does.

When floating hi, INTRQ appears asserted, so here now I can only disable
the toggling of INTRQ that could edge-trigger a PIC; I cannot force
INTRQ deasserted.  That is, I cannot fully disable INTRQ.

> >Anybody already know what is the most popular o x3F6 DeviceControl
> >value?
> 
> Well the possible values are:
> 
> 80H - HOB bit (ATA devices)
> 04H - SRST bit
> 02H - nIEN bit

Please could you elaborate "possible"?

I was thinking the possible values are from x00 thru xFF inclusive.

> These values may be OR'ed together.

Yep.

> NOTE: Bit 0 shall always be written by a host as zero.

Mask x01 used to mean something?

> Ignore the LBA48 HOB bit ..

By its t13.org definition, the o x3F6 DeviceControl port is shared and
write-only.  That means I can't read it, modify it, and restore it.
Instead, when I need to force x02 nIEN lo for my device under test, I
have to guess what else may have been written in the other bits, or for
the sake of the other device on the bus.
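
To make the read-modify-write problem concrete, here is a minimal
sketch of the shadow-copy workaround I end up with, assuming
Linux-style outb() port I/O (<sys/io.h>, after ioperm()) and the
legacy primary-channel address - the names are mine, not T13's:

    #include <stdint.h>
    #include <sys/io.h>  /* outb(); call ioperm() first for port access */

    #define DEV_CTL_PORT 0x3F6
    #define DC_NIEN      0x02  /* 1 = disable INTRQ assertion */
    #define DC_SRST      0x04  /* 1 = hold soft reset */
    #define DC_HOB       0x80  /* 1 = read back high-order bytes (LBA48) */

    /* The port cannot be read back, so keep our own last-written copy.
     * This only helps if nothing else writes the shared port behind us. */
    static uint8_t dev_ctl_shadow = 0x00;

    static void dev_ctl_write(uint8_t value)
    {
        dev_ctl_shadow = value;
        outb(value, DEV_CTL_PORT);
    }

    /* Force x02 nIEN lo without touching the other bits - though we can
     * only trust the shadow, never the hardware state. */
    static void force_nien_low(void)
    {
        dev_ctl_write(dev_ctl_shadow & (uint8_t)~DC_NIEN);
    }

Even then the shadow says nothing about what a BIOS or another driver
wrote before I loaded, which is exactly the guessing I mean.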

> Ignore ... the SRST bit,

Ok.

> most software will set the
> nIEN bit depending on whether the host wants to use interrupts.

Good to hear.

> >x00 and x02?  The difference there is x02 nIEN.
> 
> I would assume 00H would be most common - I don't see why a
> multitasking OS would not use interrupts or would ever want to
> disable interrupts from an ATA device (interrupt disable would be
> done at a much higher level - either in the processor or the system
> interrupt controller).

Are we not now starting to see x80 and x82 grow common?
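
For what it's worth, a sketch of why those values would appear:
reading a 48-bit LBA back out of the taskfile takes two passes, one
with HOB=0 for the low-order bytes and one with HOB=1 for the previous
(high-order) contents.  Legacy primary-channel ports assumed; the
helper name is mine:

    #include <stdint.h>
    #include <sys/io.h>

    #define LBA_LOW  0x1F3
    #define LBA_MID  0x1F4
    #define LBA_HIGH 0x1F5
    #define DEV_CTL  0x3F6
    #define DC_HOB   0x80

    /* Pass the other DeviceControl bits in 'base': 0x00 gives the
     * x00/x80 write pair, 0x02 (nIEN set) gives the x02/x82 pair. */
    static uint64_t read_lba48_back(uint8_t base)
    {
        uint64_t lba;

        outb(base, DEV_CTL);           /* HOB=0: current (low) bytes */
        lba  = (uint64_t)inb(LBA_LOW);
        lba |= (uint64_t)inb(LBA_MID)  << 8;
        lba |= (uint64_t)inb(LBA_HIGH) << 16;

        outb(base | DC_HOB, DEV_CTL);  /* HOB=1: previous (high) bytes */
        lba |= (uint64_t)inb(LBA_LOW)  << 24;
        lba |= (uint64_t)inb(LBA_MID)  << 32;
        lba |= (uint64_t)inb(LBA_HIGH) << 40;

        return lba;
    }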

> Note that on x86 systems using PCI bus ATA host controllers and
> executing DMA data transfer commands the nIEN bit MUST BE ZERO. The
> INTRQ signal from the device is needed by the host controller so that
> it can know when a DMA data transfer command has ended,

Yes.

> especially for
> read commands that do not transfer all the data described by the DMA
> PRD list.

This means the host can see the PRD list ended without needing to see
the INTRQ from the device?  In particular, devices which lack INTRQ can
be read and written via DMA, except that errors then appear as timeouts,
recovered by reset?
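
Here is how I read the SFF-8038i style busmaster status register as
answering that - base address and polling policy are my assumptions:

    #include <stdint.h>
    #include <sys/io.h>

    #define BM_BASE      0xC000      /* example BMIDE base from PCI BAR4 */
    #define BM_STATUS    (BM_BASE + 2)
    #define BM_ST_ACTIVE 0x01        /* DMA engine still walking the PRDs */
    #define BM_ST_ERROR  0x02
    #define BM_ST_INTR   0x04        /* latched copy of device INTRQ */

    /* Returns 0 on success, -1 on error or timeout. */
    static int wait_dma_done(unsigned long spins)
    {
        while (spins--) {
            uint8_t st = inb(BM_STATUS);
            if (st & BM_ST_INTR)      /* device raised INTRQ: command done */
                return (st & BM_ST_ERROR) ? -1 : 0;
            if (!(st & BM_ST_ACTIVE)) /* PRD list exhausted without INTRQ */
                return (st & BM_ST_ERROR) ? -1 : 0;
        }
        /* A short transfer leaves ACTIVE set, so with no INTRQ the only
         * symptom is this loop timing out - recovered by reset. */
        return -1;
    }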

> >Do Win XP/ 2K get along here, or does that design tradition conflict
> >with Win ME/ 9X here?
> 
> Why do you ask? Do you see differences in these OS?

I'm asking because I find devices designed for Windows work best if I
program my host to talk like Windows.  Talking differently than Windows
is a way of injecting noise that should not matter.  I like to inject as
little such noise as practical.

I think the market shift to designing for XP rather than 98 has made
compatibility temporarily more difficult to achieve, as we wait for
hosts that were talking like 98 to learn to talk like XP.

Personally I haven't practiced much lately, but once upon a time bus
traces in effect identified the host by detailed choices that mostly
don't matter.

If I remember right, I saw auto sense of x0E bytes in Win ME/ 9X and of
x12 bytes in Win XP/ 2K.  Possibly they did not both set the byte count
limit to xFFFF.  Etc.
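
For the record, that fingerprint is a single byte: the ATAPI REQUEST
SENSE packet differs only in byte 4, the allocation length.  The two
12-byte packets, as I recall them:

    #include <stdint.h>

    static const uint8_t request_sense_9x[12] =
        { 0x03, 0, 0, 0, 0x0E, 0, 0, 0, 0, 0, 0, 0 };  /* x0E = 14 bytes */

    static const uint8_t request_sense_xp[12] =
        { 0x03, 0, 0, 0, 0x12, 0, 0, 0, 0, 0, 0, 0 };  /* x12 = 18 bytes */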

Pat LaVarre
