After working on some relocation optimizations where user mode computes
relocation data optimistically, I think we can dispense with kernel-mode
relocation processing entirely.

The super ioctl submission would contain the list of buffers and their
presumed offsets (for those that have relocations pointing at them). As
the buffers are validated, if they don't land at the offset specified by
user mode, the whole submission is rejected. User mode then walks the
relocation data and rewrites those that have moved. With all of the
buffers updated to point at the new (presumed) buffer locations, the
request is re-submitted (and, we hope, executed this time).
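The submit/reject/rewrite loop above can be sketched roughly as below. This is
just an illustration of the control flow, not real DRM API: fake_buffer,
fake_submit, and submit_with_retry are made-up names standing in for the
proposed super ioctl and its user-mode wrapper.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical buffer record: where user mode presumed the buffer would
 * land, and where validation actually placed it. */
struct fake_buffer {
    uint64_t presumed_offset;
    uint64_t actual_offset;
};

/* Mock of the proposed super ioctl: the whole submission is rejected if
 * any buffer failed to land at its presumed offset. */
int fake_submit(struct fake_buffer *bufs, int n)
{
    for (int i = 0; i < n; i++)
        if (bufs[i].presumed_offset != bufs[i].actual_offset)
            return -1;      /* rejected; nothing executed */
    return 0;               /* executed */
}

/* User-mode side: on rejection, walk the buffers, adopt the new offsets
 * (a real driver would also rewrite the relocation entries in the command
 * buffer here), then resubmit. */
int submit_with_retry(struct fake_buffer *bufs, int n, int max_tries)
{
    for (int t = 0; t < max_tries; t++) {
        if (fake_submit(bufs, n) == 0)
            return 0;
        for (int i = 0; i < n; i++)
            bufs[i].presumed_offset = bufs[i].actual_offset;
    }
    return -1;              /* never made progress; see cost (2) below */
}
```

Note that max_tries is only a stand-in: in the real scheme, the buffers can
keep moving between attempts, which is exactly the progress problem discussed
under the costs.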

Benefits:

 1) simplified kernel API
 2) reduced user->kernel data transfer (no relocations)
 3) eliminate user->kernel relocation reformatting
 4) eliminate buffer objects for relocations
 5) eliminate buffer object mapping to kernel

Costs:

 1) more user-mode code
 2) might never make progress

1) isn't a significant issue -- it should be a whole lot easier to walk
the relocation list in user space than the current relocation code is in
kernel mode.

2) seems like the big sticking point -- it's easy to imagine a case
where two clients hammer the hardware and there isn't space for both of
them. One fairly simple kludge-around would be to 'pin' the buffers in
place for 'a while' and wait for the client to re-submit the request.
That would block the conflicting application briefly.
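The 'pin briefly' kludge could look something like this on the kernel side:
after rejecting a submission, refuse to evict the buffers for a short grace
period so the corrected resubmit finds them where validation left them. The
names and the grace period are illustrative only.

```c
#include <stdint.h>

#define PIN_GRACE_MS 50     /* illustrative 'a while' */

/* Hypothetical per-buffer pin state. */
struct pinned_buffer {
    uint64_t offset;
    uint64_t pin_expires_ms;    /* absolute time until which eviction is refused */
};

/* Called when a submission is rejected: hold the buffer in place long
 * enough for the client to rewrite its relocations and resubmit. */
void pin_briefly(struct pinned_buffer *buf, uint64_t now_ms)
{
    buf->pin_expires_ms = now_ms + PIN_GRACE_MS;
}

/* Called when the allocator wants to move this buffer for another client.
 * Returns 1 if the buffer may be evicted, 0 while the pin still holds;
 * a conflicting client simply waits (blocks) until this returns 1. */
int may_evict(const struct pinned_buffer *buf, uint64_t now_ms)
{
    return now_ms >= buf->pin_expires_ms;
}
```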

Comments?

-- 
[EMAIL PROTECTED]

