On Wed, 16 Oct 2002, Colin Leroy wrote:

> Hi,
> 
> ok, I made a few more tests about why dma doesn't work on ppc.
> I made mmio-mode dump ring info in mach64_ring_idle when everything was
> fine, and compared with the mach64_do_wait_for_idle failing in dma mode.
> 
> I found some differences in the logs, some seem logical, some are hard to
> understand why (for me at least ;-))
> 
> Here's a few info:
> 
> mmio: BUS_CNTL =              0x7b2fa150
> dma:                          0x7b2fa110

Here, bit 6 means disable bus mastering, so this is normal.  It's set for 
mmio but not dma.

> mmio: BM_FRAME_BUF_OFFSET =   0x007ff980
> dma:                          0x007ffe48

BM_FRAME_BUF_OFFSET is only used with bus mastering.  The value is always
0x007ff800 (the offset of the register map in the framebuffer) plus the
register address.  So for dma here the register address is 0x648, which is
BM_DATA.  That means the last dma transfer was a "gui-master" where the
dma buffer is filled with alternating register offsets and data.  Other
values you'll see are 0x007ffe44 (BM_HOSTDATA), which indicates a texture
blit, and 0x007ff980 (BM_FRAME_BUF_OFFSET), which means the card is
transferring a DMA descriptor in preparation for a gui-master or blit.

> mmio: GUI_STAT =              0x00800000 
> dma:                          0x00800201 (the one making the problem visible)

Here, bit 0 means the engine is busy, and bits 23:16 indicate the number
of free FIFO slots.  In the mmio case, the register shows all slots open
and the engine idle.  In the dma case, all slots are open but the engine
is busy; when this state doesn't change and bit 0 remains high, it
indicates a lockup.  Bits 11:8 indicate that the draw destination is
outside the current scissor, in this case that DSTX is right of the right
scissor edge.  I'm not sure the scissor is really the problem here; it
could also indicate corrupt vertex data.  There are several possible
causes of a lockup.

> mmio: head_addr: 0x08550290 head: 164 tail: 164 
>       (head_addr always  0x855????, head and tail vary)
> 
> dma:  head_addr: 0x07988ae0 head: 696 tail: 700 
>       (head_addr always  0x789????, head and tail vary)

The range of head_addr will vary from one server generation to the next,
since the descriptor ring placement will depend on available memory, but
the size is always 16kB.  The head_addr should always be between the ring
bus address reported by the drm at startup and the start address + 16K.  
The head and tail are constantly changing as descriptors are added to the
ring at the tail and the card consumes them from the head, moving along
and wrapping at the end of the ring.  Here, since the card is idle in
the mmio case, the head and tail are equal.  In the dma case, the card
locked up on the last descriptor in the ring (each descriptor is 4 dwords,
and the head and tail shown are dword offsets).

Note that mmio is implemented by generating dma descriptors as with the
dma path, and then "manually" processing the descriptor ring and feeding
the data from the dma buffers one register at a time.  So the ring and
buffer data should be the same with mmio and dma for a given series of GL
operations, except for the descriptor and buffer addresses which will vary 
depending on the placement in memory of the ring and buffers.

> Well :) Hope it could be of some use!

I don't see anything unusual here -- except the lockup, of course.  
Determining the root cause of the lockup is the difficult part.  Have you
found any Mesa demos that don't lock up with either sync or async dma?

-- 
Leif Delgass 
http://www.retinalburn.net

_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
