Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-27 Thread Sven Luther
On Wed, Mar 26, 2003 at 09:08:51AM -0800, Ian Romanick wrote:
 Michel Dänzer wrote:
 On Mit, 2003-03-26 at 08:45, Philip Brown wrote:
 
 Video mem is a core X server resource, and should be reserved through the
 core server, always.
 
 Actually, I thought we're talking about a scheme where the server is
 only a client of the DRM memory manager.
 
 Yes.  It would be a lot easier if more was implemented in the DRM, but 
 we don't want more in the kernel than is absolutely required.  As it 
 stands, the DRM only implements the mechanism for paging out blocks to 
 secondary storage (i.e., system memory, AGP, etc.).  All of the 
 mechanism for allocating memory to applications and the policy for which 
 blocks get paged and reclaimed happens in user-mode.

Did you ever get to speak with the XFree86 folks about this? It seems
that the new XAA implementation will abstract memory management and let
the (X) driver handle it.

Ideally, you would have the small bit of code in the kernel module and
a library on top of that which could be used by the DRI, but also by the
X driver or even by other userland code (DirectFB, for example).

 I've been working on a prototype implementation of the user-mode code 
 for the last week.  My current estimation is that the user-mode code 
 will be 3 to 4 times as large as the kernel code.  I should have a 
 pthreads based framework with a mock up of the kernel code ready to 

Would this pthreads-using userland code be usable in the X
driver?

Friendly,

Sven Luther




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-27 Thread Sven Luther
On Wed, Mar 26, 2003 at 12:22:48PM -0800, Ian Romanick wrote:
 Philip Brown wrote:
 So since it is orthogonal, you should have no objections to lowest-level
 allocation of video memory being done by GLX calling xf86Allocate 
 routines, yes?
 (ie: leave the X core code alone)
 
 That is what's currently done.  The goal was twofold.  One (very minor, 
 IMO) goal was to allow the pixmap cache to cooperate with the texture 
 cache.  The other goal was to allow the amount of memory used by the 
 front buffer to be dynamic when the screen mode changes.
 
 I believe this whole thread started off by references to hacking X server
 code to call DRI extension code. That is what I am arguing against, as
 unnecessary. Extension code should call core code, not the other way
 around  (except for API-registered callbacks, of course)
 
 The way to do that is to reproduce code from the 3D driver in the X 
 server.  The memory management code that is in the 3D driver (for doing 
 the allocations and communicating with the DRM) really has to be there. 
 Moving it into the X server would really hurt performance.  There are 
 really only four possible solutions:
 
   1. Have the X server call the code in the 3D driver.
   2. Have the 3D driver call the code in the X server.
   3. Have the code exist in both places.
   4. Leave things as they are.
 
 I'm saying that #2 is unacceptable for performance reasons.  You're 
 saying that #1 is unacceptable for software engineering reasons.  We're 
 both saying that #3 is unacceptable for software engineering reasons. 
 Users are saying #4 is unacceptable for performance reasons.  Where does 
 that leave us?

What about #3, but using a common library, so that the same code is linked
in both places?

Friendly,

Sven Luther




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-27 Thread Sven Luther
On Thu, Mar 27, 2003 at 03:06:03AM +0100, Michel Dänzer wrote:
 On Don, 2003-03-27 at 00:37, Keith Whitwell wrote:
  Ian Romanick wrote:
   Michel Dänzer wrote:
   
   On Mit, 2003-03-26 at 21:22, Ian Romanick wrote:
  
   If the paged memory system is only used when DRI is enabled, does it 
   really matter where the code the X server calls is located?  Could we 
   make the memory manager some sort of API-registered callback?  It 
   would be one that only DRI (and perhaps video-capture extensions) 
   would ever use, but still.
  
  
  
   As far as I understand Mark Vojkovich's comments on the next generation
   XAA, all offscreen memory management is going to be handled via driver
   callbacks.
   
   
   Interesting.  What about on screen?  I mean, are there any plans to 
   re-size the amount of memory used for the front buffer when the screen 
   mode changes?
   
  
  Isn't that the RandR proposal, promoted or developed by core team X-iles?
 
 I'd say it's slightly more than a proposal, as the resize part is
 implemented in 4.3.0. :) I do think dynamic management of everything
 including the front buffer is the long term goal.

I don't believe it frees the onscreen memory, though; I had the
impression that it just allocates memory for the maximum possible screen
and uses part of it if you are running at a lower resolution, a bit like
the virtual screen size is used right now.

Friendly,

Sven Luther




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Keith Whitwell
Ian Romanick wrote:
Alan Hourihane wrote:

On Tue, Mar 25, 2003 at 11:27:17PM +, Keith Whitwell wrote:

Alan Hourihane wrote:

Is there any architectural reason why we can't use XFree86's module
loader for OS independence here?
The whole point of the drmCommand*() interface is that it's 
portable, so
I don't see any reason to use OS specific functions like dlopen in this
case.

Unless there is some quantifiable reason.


The goal is to load the same piece of code in both places, so that 
would require that the radeon_dri.so object became an XFree86 module, 
and that the XFree86 module loader was also incorporated into libGL.so.


O.k. That seems like a good goal to aim for.

That seems like a big step, and would obviously break compatibility 
with older libGL.so's.
 
I don't think it's that big a step, and the advantages are enormous 
in maintenance.


I don't think that requiring people to upgrade their libGL.so and their 
driver binary at the same time is a big deal.  It's especially not a big 
deal given that the user will have to update their GLX module anyway to 
get the full benefit.

I think an additional goal is to be able to use the same driver binary 
with the miniGLX.  Would that be possible if the XFree86 module format 
was used?
No, that will be strictly dlopen() based.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Philip Brown
On Wed, Mar 26, 2003 at 12:10:48AM -0800, Ian Romanick wrote:
 Philip Brown wrote:
  Well, okay, there needs to be a little extra handholding between server and
  client. So, you add a GLX_dri_reserve_mem extension or something that
  reserves video memory by proxy. Or do it in some more direct fashion,
  bypassing GLX protocol overhead if you prefer, but still letting the GLX
  module on the server reserve it cleanly through the server interfaces.
  
  That's the clean way to do it, even if it requires more coding on the DRI
  side.
  
  For non-video (i.e., AGP) memory, the issue isn't relevant, since the client
  can do the reservation through the drm kernel driver directly, I believe.
 
 After reading this I honestly believe that you and I must be talking 
 about different things.  I'm talking about allocating a block of memory 
 to hold a texture that's going to be read directly by the rendering 
 hardware.  The texture should be kept in directly readable memory 
 (either on-card or AGP) unless the space is needed by some other operation.
 
 Not only that, this is an operation that needs to be fast.  As fast as 
 possible, in fact.

Yes, I know that.
Sounds like we just didn't get down to discussing the details.

Consider the GLX_dri_reserve_mem as equivalent to sbrk()
Then have a more local memory allocator for subdividing the large chunk.
That's going to be a lot more efficient than relying on the XFree86 routines
to do fine-level memory management anyway; XFree86's routines aren't really
optimized for that sort of thing, I think.


 
 Right now our memory manager is layered on top of the X memory manager. 
 [stuff on future texmem ]

Well, great. Sounds like we're actually talking about the same thing then.
It's just a matter of what granularity you call the X server for requesting
memory. 
Currently, I'm guessing it's a matter of
  [pseudocode]
  size=FindAllFreeMem();
  xallocmem(size);

Whereas what would be nicer to the server, while still preserving local 
speed, would probably be to allocate X memory in 2-megabyte chunks, or
something like that, and then use the layered local memory manager for
those large chunk(s).

[some reasonable fraction of FindAllFreeMem(), not necessarily strictly 2
 megabytes. ]
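
As a rough illustration of the sbrk() analogy above, here is a minimal C
sketch with hypothetical names: GLXDRIReserveMem() is a stand-in for whatever
proxy call ends up reserving a coarse chunk of video memory through the
server, and the sub-allocator is a trivial bump pointer rather than a real
free-list.

  /* Sketch only: GLXDRIReserveMem() stands in for the proxy call that
   * reserves a coarse chunk of video memory via the server. */
  #include <stdio.h>
  #include <stddef.h>

  #define COARSE_CHUNK (2 * 1024 * 1024)    /* reserve VRAM 2MB at a time */

  static size_t GLXDRIReserveMem(size_t size)   /* stand-in for the proxy */
  {
      static size_t next_offset = 4 * 1024 * 1024;  /* pretend X owns 4MB */
      size_t offset = next_offset;
      next_offset += size;
      return offset;                        /* offset into video memory */
  }

  /* Trivial local sub-allocator ("bump pointer") inside the coarse chunk;
   * a real one would track free blocks so they can be reused. */
  static size_t chunk_base, chunk_used;

  static long local_alloc(size_t size)
  {
      if (chunk_used + size > COARSE_CHUNK)
          return -1;                  /* would need another coarse chunk */
      chunk_used += size;
      return (long)(chunk_base + chunk_used - size);
  }

  int main(void)
  {
      chunk_base = GLXDRIReserveMem(COARSE_CHUNK);   /* sbrk()-like step */
      printf("texture at vram offset %ld\n", local_alloc(512 * 1024));
      printf("texture at vram offset %ld\n", local_alloc(256 * 1024));
      return 0;
  }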





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Alan Cox
On Wed, 2003-03-26 at 01:15, Ian Romanick wrote:
  From a security perspective, people may want to disable direct 
 rendering.  There is a shared memory segment that an evil program 
 could muck with and cause DoS problems.  I probably haven't thought 
  about it enough, but I can't see how one could disable direct 
 rendering AND use the fork method.

chmod 700 the DRI devices

Alan





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Suzy Deffeyes
Jens-
I agree with you, supporting HW accelerated indirect rendering would be a good
thing.

 Take a look at the DRI high level design doc:

http://dri.sourceforge.net/doc/design_high_level.html

 In section 4.3, Indirect Rendering, there's a section on Multi-rendering
 in a single address space.

Caution, newbie question! Indirect rendering doesn't currently get its own
thread, does it?  It does affect interactivity, but I'm curious how much of
the benefit you'd gain would be from making it direct, and how much of the
benefit would be from moving GLX requests to a second thread.


   [KHLS94] Mark J. Kilgard, Simon Hui, Allen A Leinwand, and Dave
Spalding.  X Server Multi-rendering for OpenGL and PEX.  8th Annual X
Technical Conference, Boston, Mass., January 25, 1994.  Available from
http://reality.sgi.com/opengl/multirender/multirender.html.


I sent Kilgard a note asking him if he knows of an archived copy. It's a
damn shame reality.sgi.com went down before it got into the google cache.

Suzy Deffeyes





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Alan Cox

http://www.realitydiluted.com/mirrors/reality.sgi.com/





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Michel Dänzer
On Mit, 2003-03-26 at 08:45, Philip Brown wrote:
 
 Video mem is a core X server resource, and should be reserved through the
 core server, always.

Actually, I thought we're talking about a scheme where the server is
only a client of the DRM memory manager.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Ian Romanick
Michel Dänzer wrote:
On Mit, 2003-03-26 at 08:45, Philip Brown wrote:

Video mem is a core X server resource, and should be reserved through the
core server, always.
Actually, I thought we're talking about a scheme where the server is
only a client of the DRM memory manager.
Yes.  It would be a lot easier if more was implemented in the DRM, but 
we don't want more in the kernel than is absolutely required.  As it 
stands, the DRM only implements the mechanism for paging out blocks to 
secondary storage (i.e., system memory, AGP, etc.).  All of the 
mechanism for allocating memory to applications and the policy for which 
blocks get paged and reclaimed happens in user-mode.

I've been working on a prototype implementation of the user-mode code 
for the last week.  My current estimation is that the user-mode code 
will be 3 to 4 times as large as the kernel code.  I should have a 
pthreads based framework with a mock up of the kernel code ready to 
distribute in another week or two.  That combined with a few application 
traces should give us a good idea of how well the system will work in 
practice.
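
To make the mechanism/policy split described above concrete, here is a
minimal C sketch. The names (drm_page_out_block, mem_block, evict_one_block)
are hypothetical stand-ins rather than the real DRM interface: the kernel
side is reduced to a single page-this-block-out operation, and the eviction
policy lives entirely in user mode.

  /* Sketch only -- hypothetical names, not the real DRM interface. */
  #include <stdio.h>
  #include <stddef.h>

  struct mem_block {
      int               handle;     /* identifies the block to the kernel   */
      size_t            size;
      unsigned long     last_used;  /* policy data kept purely in user mode */
      struct mem_block *next;
  };

  /* Stand-in for the one thing the kernel does: page a block out to
   * secondary storage (system memory, AGP, ...). */
  static int drm_page_out_block(int handle)
  {
      printf("kernel: paging out block %d\n", handle);
      return 0;
  }

  /* User-mode policy: pick the least recently used block and ask the
   * kernel to evict it.  All allocation bookkeeping stays in user space. */
  static int evict_one_block(struct mem_block *list)
  {
      struct mem_block *victim = NULL, *b;

      for (b = list; b != NULL; b = b->next)
          if (victim == NULL || b->last_used < victim->last_used)
              victim = b;

      return victim ? drm_page_out_block(victim->handle) : -1;
  }

  int main(void)
  {
      struct mem_block b2 = { 2, 1 << 20, 42, NULL };
      struct mem_block b1 = { 1, 2 << 20, 17, &b2 };
      return evict_one_block(&b1) == 0 ? 0 : 1;
  }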





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Ian Romanick
Philip Brown wrote:
On Wed, Mar 26, 2003 at 12:10:48AM -0800, Ian Romanick wrote:

Philip Brown wrote:

Well, okay, there needs to be a little extra handholding between server and
client. So, you add a GLX_dri_reserve_mem extension or something that
reserves video memory by proxy. Or do it in some more direct fashion,
bypassing GLX protocol overhead if you prefer, but still letting the GLX
module on the server reserve it cleanly through the server interfaces.
That's the clean way to do it, even if it requires more coding on the DRI
side.
For non-video (i.e., AGP) memory, the issue isn't relevant, since the client
can do the reservation through the drm kernel driver directly, I believe.
After reading this I honestly believe that you and I must be talking 
about different things.  I'm talking about allocating a block of memory 
to hold a texture that's going to be read directly by the rendering 
hardware.  The texture should be kept in directly readable memory 
(either on-card or AGP) unless the space is needed by some other operation.

Not only that, this is an operation that needs to be fast.  As fast as 
possible, in fact.


Yes, I know that.
Sounds like we just didn't get down to discussing the details.
Consider the GLX_dri_reserve_mem as equivalent to sbrk()
Then have a more local memory allocator for subdividing the large chunk.
That's going to be a lot more efficient than relying on the XFree86 routines
to do fine-level memory management anyway; XFree86's routines aren't really
optimized for that sort of thing, I think.
Okay.  You're just not listening.  THAT WON'T ALLOW US TO IMPLEMENT A 
FUNCTIONING 3D DRIVER.  Texture memory is like a cache that is shared 
by multiple running processes.  We need to be able to do the equivalent 
of paging out blocks from that cache when one process needs more memory. 
 An OS needs something under sbrk in order to implement paged memory, 
and so do we.

Going to a system where we add memory to our available pool while 
processes are running won't add much, if any, tangible benefit to users. 
 Instead, it will make a lot of work for DRI developers (every process 
with a GL context will have to be notified when any context makes a 
magic sbrk call).

Utah-GLX doesn't have these worries because it only supports one GL 
context at a time.  DRI drivers don't have that luxury.





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Ian Romanick
Jens Owen wrote:
Ian,

I think you're making a mountain out of a molehill, but I like the 
mountain that you're trying to build.  Supporting HW accelerated 
indirect rendering would be a good thing, but it's not necessary for the 
change you're trying to make.
Right.  It's not required for what I want to do at all.  I just saw that 
some of the same things needed to happen in order to do either. :)

Ian Romanick wrote:

There is no easy way for the DDX driver to export the extended visual 
information needed for fbconfigs to the GLX layer.  The primary issue 
is binary compatibility.  The size of the __GLXvisualConfigRec 
structure cannot be changed, and I have not been able to find a way to 
communicate version information between the GLX layer and the DDX driver.


The 3D (DRI) driver can be dependent on a specific version of the 2D 
(DDX) driver, and breaking compatibility between these two drivers, if 
done properly, is much easier than breaking compatibility with the kernel 
(DRM) driver.

The 2D and 3D driver are always distributed together, so it should be 
rare that someone is using an older DDX driver with a newer Mesa driver, 
and simply bumping the major number of the DDX DRI version will cause 
the 3D driver to gracefully fall back to indirect rendering if this 
mismatch occurs.
That is true today.  However, if driver development does move to Mesa 
CVS and the existing DRI tree gets deprecated, that may not continue to 
be the case.  Even if that were always true, it doesn't solve the 
problem of getting the extended information into the GLX layer on the 
server-side.

That said, I'll comment on HW accelerated indirect rendering, simply 
because that's a cool project:

  I am not a big fan of the fork trick.
 
   From a security perspective, people may want to disable direct
  rendering.  There is a shared memory segment that an evil program
  could muck with and cause DoS problems.  I probably haven't thought
  about it enough, but I can't see how one could disable direct
  rendering AND use the fork method.
 
  Regardless, there would be a fair amount of overhead on every GL call.
  If I'm not mistaken, the server would have to receive the GLX protocol
  then send it to another process.  There would be the overhead of
  sending the data to yet another process and the task switch.  That on
  top of the  overhead already in the GLX protocol starts to sound very
  painful.
Take a look at the DRI high level design doc:

  http://dri.sourceforge.net/doc/design_high_level.html

In section 4.3, Indirect Rendering, there's a section on Multi-rendering 
in a single address space.  Basically this boils down to threads.  A 
decent document was written on this:
Interesting.  I had been thinking about how multiple contexts would be 
handled from a single daemon process.  Using threads would simplify 
things a lot.  That should even provide some performance benefit to SMP 
machines.

 [KHLS94] Mark J. Kilgard, Simon Hui, Allen A Leinwand, and Dave
  Spalding.  X Server Multi-rendering for OpenGL and PEX.  8th Annual X
  Technical Conference, Boston, Mass., January 25, 1994.  Available from
  http://reality.sgi.com/opengl/multirender/multirender.html.
However, reality.sgi.com doesn't appear to be online.  Does anybody have 
an archived version of this document?
I know a couple people that can bug Mark for a copy. :)





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Philip Brown
On Wed, Mar 26, 2003 at 09:14:37AM -0800, Ian Romanick wrote:
 Philip Brown wrote:
  Consider the GLX_dri_reserve_mem as equivalent to sbrk()
  Then have a more local memory allocator for subdividing the large chunk.
  That's going to be a lot more efficient than relying on the XFree86 routines
  to do fine-level memory management anyway; XFree86's routines aren't really
  optimized for that sort of thing, I think.
 
 Okay.  You're just not listening.  THAT WON'T ALLOW US TO IMPLEMENT A 
  FUNCTIONING 3D DRIVER.  Texture memory is like a cache that is shared 
 by multiple running processes.  We need to be able to do the equivalent 
 of paging out blocks from that cache when one process needs more memory. 
   An OS needs something under sbrk in order to implement paged memory, 
 and so do we.

eh?


Card has 32 megs of VideoRAM.
Initialization phase:
 X grabs 4 megs for actual video display
 X grabs 1 meg(?) for pixmaps
 DRI/GLX starts, notices that there is 27 megs free.
 Decides to be nice, and only pre-alloc 16 megs.
 Parcels out that 16 megs to clients somehow.
   (clients will probably grab memory in 2-4meg chunks from GLX,
then use local memory manager on that)

 

 New client comes in. Requests new coarse chunk o' VRAM from GLX
 Oops. we've used up the 16 megs pre-allocated.
 Used to be 11 megs free, but X server has been busy, and there is
 now only 8 megs free.
 GLX calls xf86AllocateOffscreenLinear() to grab another 4 megs of
 VRAM from the X server, then hands some part of it off to the new
 client




 ...  Instead, it will make a lot of work for DRI developers (every process 
 with a GL context will have to be notified when any context makes a 
 magic sbrk call).

No, you don't have to notify all GL clients. See above.

Ya know, I heard this guy Keith Whitwell wrote some nice mmXXX()
routines in 1999 that, coincidentally enough, handle *exactly* *this* *type*
*of* *situation* for a local memory manager for GLX clients.
Now, what are the odds of that? Maybe we could get that guy to help out
here somehow...

:-) :-) :-)
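
A minimal sketch of the growth path in the scenario above, with hypothetical
names; server_alloc_offscreen() stands in for the point where real code would
call the server's allocator (e.g. xf86AllocateOffscreenLinear(), whose actual
arguments are not shown here).

  /* Sketch only: server_alloc_offscreen() is a stand-in for wherever the
   * real code would call into the X server for more offscreen VRAM. */
  #include <stdio.h>
  #include <stddef.h>

  #define GROW_STEP (4 * 1024 * 1024)            /* grab another 4MB when dry */

  static size_t pool_size = 16 * 1024 * 1024;    /* pre-allocated at startup */
  static size_t pool_used;

  static int server_alloc_offscreen(size_t size)
  {
      printf("asking X server for another %zu bytes of VRAM\n", size);
      return 1;                                  /* pretend it succeeded */
  }

  /* Hand a coarse chunk to a new client, growing the pool if needed. */
  static int client_reserve(size_t size)
  {
      if (pool_used + size > pool_size) {
          if (!server_alloc_offscreen(GROW_STEP))
              return 0;                          /* server is out of memory too */
          pool_size += GROW_STEP;
      }
      pool_used += size;
      return 1;
  }

  int main(void)
  {
      /* The fifth 4MB request forces the pool to grow past the initial 16MB. */
      for (int i = 0; i < 5; i++)
          printf("reserve %d: %s\n", i, client_reserve(4u << 20) ? "ok" : "fail");
      return 0;
  }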





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Ian Romanick
Philip Brown wrote:
On Wed, Mar 26, 2003 at 09:14:37AM -0800, Ian Romanick wrote:

Philip Brown wrote:

Consider the GLX_dri_reserve_mem as equivalent to sbrk()
Then have a more local memory allocator for subdividing the large chunk.
That's going to be a lot more efficient than relying on the XFree86 routines
to do fine-level memory management anyway; XFree86's routines aren't really
optimized for that sort of thing, I think.
Okay.  You're just not listening.  THAT WON'T ALLOW US TO IMPLEMENT A 
FUNCTIONING 3D DRIVER.  Texture memory is like a cache that is shared 
by multiple running processes.  We need to be able to do the equivalent 
of paging out blocks from that cache when one process needs more memory. 
 An OS needs something under sbrk in order to implement paged memory, 
and so do we.


eh?

Card has 32 megs of VideoRAM.
Initialization phase:
 X grabs 4 megs for actual video display
 X grabs 1 meg(?) for pixmaps
 DRI/GLX starts, notices that there is 27 megs free.
 Decides to be nice, and only pre-alloc 16 megs.
 Parcels out that 16 megs to clients somehow.
   (clients will probably grab memory in 2-4meg chunks from GLX,
then use local memory manager on that)
 

 New client comes in. Requests new coarse chunk o' VRAM from GLX
 Oops. we've used up the 16 megs pre-allocated.
 Used to be 11 megs free, but X server has been busy, and there is
 now only 8 megs free.
 GLX calls xf86AllocateOffscreenLinear() to grab another 4 megs of
 VRAM from the X server, then hands some part of it off to the new
 client
What happens when you have 15 processes running with GL contexts that 
each need 24MB of texture memory per frame?  Nearly all of the 
allocations in question are transient.  A texture only needs to be in 
graphics memory while it's being used by the hardware.  If the texture 
manager has to pull from a hodge podge of potentially discontiguous 
blocks of memory (as in your example) there will be a lot of requests 
for memory that we should be able to satisfy that will fail.  The result 
is a fallback to the software rasterizer.

Grab the texmem-0-0-1 branch and look at the code in 
lib/GL/mesa/src/drv/common/texmem.[ch], read the texmem-0-0-2 design 
document that was posted to the list (and discussed WRT this very issue 
at great length), and then get back to me.

...  Instead, it will make a lot of work for DRI developers (every process 
with a GL context will have to be notified when any context makes a 
magic sbrk call).


No, you don't have to notify all GL clients. See above.

Ya know, I heard this guy Keith Whitwell wrote some nice mmXXX()
routines in 1999 that, coincidentally enough, handle *exactly* *this* *type*
*of* *situation* for a local memory manager for GLX clients.
Now, what are the odds of that? Maybe we could get that guy to help out
here somehow...
Okay, seriously?!?  I've spent the last 18 months working with this 
code.  Texture memory management in the DRI has been my primary focus 
for over a year.  I know what's in there.  I know how it works.  I know 
what its shortcomings are.

The current memory management system looks like this:

 Core X routines
   |
   V
 Coarse grained, block oriented cache / paged memory system
   |
   V
 Keith's mmHeap_t code
What needs to happen to make everyone play nice together is:

 Coarse grained, block oriented cache / paged memory system
   |                              |
   V                              V
 Core X routines        3D driver texture allocator
In other words, what you've brought up here is a completely orthogonal 
issue.
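
As a rough sketch of the upper layer in that second diagram, here is a
hypothetical coarse-grained cache shared between contexts, evicting another
context's least recently used blocks so a transient per-frame allocation can
succeed. None of this is the texmem branch code; the names and structures are
made up for illustration.

  /* Sketch only -- hypothetical structures, not the texmem branch code. */
  #include <stddef.h>

  struct cache_block {
      int    owner_ctx;    /* which GL context (or the X server) owns it */
      size_t size;
      int    resident;     /* still in graphics memory?                  */
      unsigned long age;   /* LRU information shared between contexts    */
  };

  /* Evict other contexts' least recently used blocks until 'needed' bytes
   * are free, so a transient per-frame texture upload does not fall back
   * to the software rasterizer. */
  static size_t make_room(struct cache_block *blocks, int nblocks,
                          int requesting_ctx, size_t needed)
  {
      size_t freed = 0;

      while (freed < needed) {
          struct cache_block *victim = NULL;
          for (int i = 0; i < nblocks; i++) {
              struct cache_block *b = &blocks[i];
              if (!b->resident || b->owner_ctx == requesting_ctx)
                  continue;
              if (victim == NULL || b->age < victim->age)
                  victim = b;
          }
          if (victim == NULL)
              break;               /* nothing left that we may evict */
          victim->resident = 0;    /* a real driver would page it out */
          freed += victim->size;
      }
      return freed;
  }

  int main(void)
  {
      struct cache_block blocks[3] = {
          { 1, 4u << 20, 1, 10 }, { 2, 8u << 20, 1, 5 }, { 1, 2u << 20, 1, 7 },
      };
      /* Context 1 needs 8MB; the block owned by context 2 gets evicted. */
      return make_room(blocks, 3, 1, 8u << 20) >= (8u << 20) ? 0 : 1;
  }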





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Keith Whitwell

The current memory management system looks like this:

 Core X routines
   |
   V
 Coarse grained, block oriented cache / paged memory system
   |
   V
 Keith's mmHeap_t code
Actually that's not my code at all, if you're talking about the stuff in 
common/mm.[ch].  I know it's ended up with my name on it, but that's bogus.  I 
can't remember whose it is, but it's lifted from Utah so maybe Phil can tell 
us and we can put the right name on it.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Philip Brown
On Wed, Mar 26, 2003 at 11:18:18AM -0800, Ian Romanick wrote:
 Philip Brown wrote:
   
  
   New client comes in. Requests new coarse chunk o' VRAM from GLX
   Oops. we've used up the 16 megs pre-allocated.
   Used to be 11 megs free, but X server has been busy, and there is
   now only 8 megs free.
   GLX calls xf86AllocateOffscreenLinear() to grab another 4 megs of
   VRAM from the X server, then hands some part of it off to the new
   client
 
 What happens when you have 15 processes running with GL contexts that 
 each need 24MB of texture memory per frame?  Nearly all of the 
 allocations in question are transient.  A texture only needs to be in 
 graphics memory while it's being used by the hardware.  If the texture 
 manager has to pull from a hodge podge of potentially discontiguous 
 blocks of memory (as in your example) there will be a lot of requests 
 for memory that we should be able to satisfy that will fail.  The result 
 is a fallback to the software rasterizer.


Ah, I see what's on your mind now...

 What needs to happen to make everyone play nice together is:
 
   Coarse grained, block oriented cache / paged memory system
     |                              |
     V                              V
   Core X routines        3D driver texture allocator
 
 In other words, what you've brought up here is a completely orthogonal 
 issue.

Orthogonal to the issue that is foremost on your mind, namely how you
'page out' textures from a GLX client to give the active client more
room: yes.

[I'd be happy to discuss that actual issue in irc with you next time ;-)
 but I'll spare the list that one for now]

So since it is orthogonal, you should have no objections to lowest-level
allocation of video memory being done by GLX calling xf86Allocate routines, 
yes?
(ie: leave the X core code alone)


I believe this whole thread started off by references to hacking X server
code to call DRI extension code. That is what I am arguing against, as
unnecessary. Extension code should call core code, not the other way
around  (except for API-registered callbacks, of course)






Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Keith Whitwell
Andreas Ehliar wrote:
On Wed, Mar 26, 2003 at 07:23:09PM +, Keith Whitwell wrote:

Actually that's not my code at all, if you're talking about the stuff in 
common/mm.[ch].  I know it's ended up with my name on it, but that's bogus. 
I can't remember whose it is, but it's lifted from Utah so maybe Phil can 
tell us and we can put the right name on it.


The first copyright notice in mm.c is: 
 * Copyright (C) 1999 Wittawat Yamwong

He was the one who added G200 support to Utah-GLX. IIRC it was a month or
so after the specifications had been released. Quite impressive. Especially
since we had no idea someone else was working on it until he sent us a
notice about it :)
Done.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Ian Romanick
Philip Brown wrote:
So since it is orthogonal, you should have no objections to lowest-level
allocation of video memory being done by GLX calling xf86Allocate routines, 
yes?
(ie: leave the X core code alone)
That is what's currently done.  The goal was twofold.  One (very minor, 
IMO) goal was to allow the pixmap cache to cooperate with the texture 
cache.  The other goal was to allow the amount of memory used by the 
front buffer to be dynamic when the screen mode changes.

I believe this whole thread started off by references to hacking X server
code to call DRI extension code. That is what I am arguing against, as
unnecessary. Extension code should call core code, not the other way
around  (except for API-registered callbacks, of course)
The way to do that is to reproduce code from the 3D driver in the X 
server.  The memory management code that is in the 3D driver (for doing 
the allocations and communicating with the DRM) really has to be there. 
 Moving it into the X server would really hurt performance.  There are 
really only four possible solutions:

1. Have the X server call the code in the 3D driver.
2. Have the 3D driver call the code in the X server.
3. Have the code exist in both places.
4. Leave things as they are.
I'm saying that #2 is unacceptable for performance reasons.  You're 
saying that #1 is unacceptable for software engineering reasons.  We're 
both saying that #3 is unacceptable for software engineering reasons. 
Users are saying #4 is unacceptable for performance reasons.  Where does 
that leave us?

To be perfectly honest, I would much rather pick #3 over #2 or #4.

If the paged memory system is only used when DRI is enabled, does it 
really matter where the code the X server calls is located?  Could we 
make the memory manager some sort of API-registered callback?  It would 
be one that only DRI (and perhaps video-capture extensions) would ever 
use, but still.

I really do want to find a compromise here.  I really want to help make 
Linux / XFree86 a first-class platform for 3D.  Right now there are a 
few infrastructure elements missing, and I believe that this is a 
significant one.  There are two issues from the end-user perspective: 
stability and performance.  Since this is a performance issue, I can't 
in good conscience accept a solution that loses significant performance.

Do you think the guy playing Quake 7 or using Maya really cares if the X 
server calls into extension code or if memory management code is 
duplicated in the 3D driver and the X server? :)  The question for us 
is: which compromise do we want to make to give the user what they want?
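
One way the API-registered callback idea could look, as a minimal C sketch
with hypothetical names (this is not XAA's or the DRI's actual interface):
the server core only ever calls through a registered table, and only the DRI
(or a video-capture extension) ever fills it in.

  /* Sketch only -- hypothetical callback table, not the real XAA/DRI API. */
  #include <stddef.h>
  #include <stdio.h>

  struct offscreen_mgr {
      long (*alloc)(size_t size);   /* returns an offset into video memory */
      void (*free_)(long offset);
  };

  static const struct offscreen_mgr *registered_mgr;   /* NULL = no DRI */

  /* Called by the extension (DRI, video capture, ...) at init time. */
  static void register_offscreen_manager(const struct offscreen_mgr *mgr)
  {
      registered_mgr = mgr;
  }

  /* The server core never references extension code directly; it only
   * calls through whatever was registered, if anything. */
  static long server_alloc_offscreen(size_t size)
  {
      if (registered_mgr == NULL)
          return -1;                /* fall back to the old allocator */
      return registered_mgr->alloc(size);
  }

  /* Trivial DRI-side implementation, just for the sketch. */
  static long dri_alloc(size_t size) { printf("dri alloc %zu\n", size); return 0; }
  static void dri_free(long offset)  { (void)offset; }
  static const struct offscreen_mgr dri_mgr = { dri_alloc, dri_free };

  int main(void)
  {
      register_offscreen_manager(&dri_mgr);
      return server_alloc_offscreen(1024) < 0;
  }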





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Philip Brown
On Wed, Mar 26, 2003 at 12:22:48PM -0800, Ian Romanick wrote:
 ...  The memory management code that is in the 3D driver (for doing 
 the allocations and communicating with the DRM) really has to be there. 
  Moving it into the X server would really hurt performance.  There are 
 really only four possible solutions:
 
   1. Have the X server call the code in the 3D driver.
   2. Have the 3D driver call the code in the X server.
   3. Have the code exist in both places.
   4. Leave things as they are.
 
 I'm saying that #2 is unacceptable for performance reasons.  You're 
 saying that #1 is unacceptable for software engineering reasons.  We're 
 both saying that #3 is unacceptable for software engineering reasons. 
 Users are saying #4 is unacceptable for performance reasons.  Where does 
 that leave us?
 
 To be perfectly honest, I would much rather pick #3 over #2 or #4.

Likewise. However, I think that your evaluation of #2 is premature.
There are a few different ways to accomplish that, and I don't think you're
seeing all the possibilities clearly.


 If the paged memory system is only used when DRI is enabled, does it 
 really matter where the code the X server calls is located?  Could we 
 make the memory manager some sort of API-registered callback?  It would 
 be one that only DRI (and perhaps video-capture extensions) would ever 
 use, but still.


Details of the API sound like good fodder for a long IRC discussion.


 I really do want to find a compromise here.  I really want to help make 
 Linux / XFree86 a first-class platform for 3D.  Right now there are a 
 few infrastructure elements missing, and I believe that this is a 
 significant one.  There are two issues from the end-user perspective: 
 stability and performance.  Since this is a performance issue, I can't 
 in good conscience accept a solution that loses significant performance.

Users will always cry about performance more, but you have to consider
stability FIRST. Performance can usually be improved incrementally;
stability is not so easy.
Stability comes first and foremost from clean design, which leads to
better maintainability and a smaller scope of debugging (i.e., modularity).






Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Jens Owen
Suzy Deffeyes wrote:
Jens-
I agree with you, supporting HW accelerated indirect rendering would be a good
thing.

Take a look at the DRI high level design doc:

  http://dri.sourceforge.net/doc/design_high_level.html

In section 4.3, Indirect Rendering, there's a section on Multi-rendering
in a single address space.


Caution, newbie question! Indirect rendering doesn't currently get its own
thread, does it?
No.

It does affect interactivity, but I'm curious how much of
the benefit you'd gain would be from making it direct, and how much of the
benefit would be from moving GLX requests to a second thread.
Without threads, there is a definite tradeoff between X interactivity 
and 3D performance/latency.  My leaning years ago when we designed the 
DRI was towards doing a daemon process and taking the 3D hit in favor of 
portability, but threading support in most modern OSes has improved 
considerably since then.  I would definitely lean towards using a local 
thread in the server today.


 [KHLS94] Mark J. Kilgard, Simon Hui, Allen A Leinwand, and Dave
  Spalding.  X Server Multi-rendering for OpenGL and PEX.  8th Annual X
  Technical Conference, Boston, Mass., January 25, 1994.  Available from
  http://reality.sgi.com/opengl/multirender/multirender.html.


I sent Kilgard a note asking him if he knows of an archived copy. It's a
damn shame reality.sgi.com went down before it got into the google cache.
Thanks Karl and Alan for the pointers to the cached copies.

--
   /\
 Jens Owen/  \/\ _
  [EMAIL PROTECTED]  /\ \ \   Steamboat Springs, Colorado
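
A minimal pthreads sketch of the local-thread-in-the-server idea described
above, with hypothetical structures (this is not the design document's code):
each indirect context gets its own rendering thread so the main dispatch loop
stays responsive.

  /* Sketch only -- hypothetical context structure and render loop. */
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  struct indirect_ctx {
      int          id;
      volatile int done;   /* volatile only for this sketch; real code
                            * would use proper synchronization */
  };

  /* One rendering thread per indirect GLX context; the X server's main
   * dispatch loop just queues commands and stays interactive. */
  static void *render_thread(void *arg)
  {
      struct indirect_ctx *ctx = arg;
      while (!ctx->done) {
          /* ... dequeue and execute buffered GL commands here ... */
          printf("context %d: rendering a batch\n", ctx->id);
          usleep(1000);
      }
      return NULL;
  }

  int main(void)
  {
      struct indirect_ctx ctx = { 1, 0 };
      pthread_t tid;

      pthread_create(&tid, NULL, render_thread, &ctx);
      usleep(5000);        /* the main loop would be dispatching X requests */
      ctx.done = 1;
      pthread_join(tid, NULL);
      return 0;
  }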




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Michel Dänzer
On Mit, 2003-03-26 at 21:22, Ian Romanick wrote:
 
 If the paged memory system is only used when DRI is enabled, does it 
 really matter where the code the X server calls is located?  Could we 
 make the memory manager some sort of API-registered callback?  It would 
 be one that only DRI (and perhaps video-capture extensions) would ever 
 use, but still.

As far as I understand Mark Vojkovich's comments on the next generation
XAA, all offscreen memory management is going to be handled via driver
callbacks.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Ian Romanick
Michel Dänzer wrote:
On Mit, 2003-03-26 at 21:22, Ian Romanick wrote:

If the paged memory system is only used when DRI is enabled, does it 
really matter where the code the X server calls is located?  Could we 
make the memory manager some sort of API-registered callback?  It would 
be one that only DRI (and perhaps video-capture extensions) would ever 
use, but still.


As far as I understand Mark Vojkovich's comments on the next generation
XAA, all offscreen memory management is going to be handled via driver
callbacks.
Interesting.  What about on screen?  I mean, are there any plans to 
re-size the amount of memory used for the front buffer when the screen 
mode changes?







Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-26 Thread Michel Dänzer
On Don, 2003-03-27 at 00:37, Keith Whitwell wrote:
 Ian Romanick wrote:
  Michel Dänzer wrote:
  
  On Mit, 2003-03-26 at 21:22, Ian Romanick wrote:
 
  If the paged memory system is only used when DRI is enabled, does it 
  really matter where the code the X server calls is located?  Could we 
  make the memory manager some sort of API-registered callback?  It 
  would be one that only DRI (and perhaps video-capture extensions) 
  would ever use, but still.
 
 
 
  As far as I understand Mark Vojkovich's comments on the next generation
  XAA, all offscreen memory management is going to be handled via driver
  callbacks.
  
  
  Interesting.  What about on screen?  I mean, are there any plans to 
  re-size the amount of memory used for the front buffer when the screen 
  mode changes?
  
 
 Isn't that the RandR proposal, promoted or developed by core team X-iles?

I'd say it's slightly more than a proposal, as the resize part is
implemented in 4.3.0. :) I do think dynamic management of everything
including the front buffer is the long term goal.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Keith Whitwell
Ian Romanick wrote:
As many of you know, I've been doing a lot of thinking lately about the 
GLX part of XFree86 and DRI.  In that process I've come across a few 
stumbling blocks.  A few things that make forward progress more 
difficult.  To this point my efforts have been focused on the 
client-side of things.  Some of the recent changes in the texmem-0-0-1 
branch have brought some significant improvements to the level of GLX 
support in XFree86.  However, improvements to GLX can only go so far 
without also looking at the server-side.

On the server-side there are two major enhancements to be made:  adding 
support for GLX 1.3 (and a few other GLX extensions) and adding support 
for accelerated rendering.  My initial consideration has been based on 
adding support primarily for SGIX_fbconfig and SGIX_pbuffer.  These, 
along with SGI_make_current_read, are the major stumbling blocks to GLX 
1.3 support.  Currently, server-side accelerated rendering is a 
secondary issue.

In the current GLX visual mechanism, the DDX driver exports an array of 
__GLXvisualConfigRec objects that is used by the GLX extension.  This is 
done because the DDX driver is the only place in the server that knows 
what display modes the hardware can support.  There are two significant 
problems with this, but only the first was initially apparent to me.

There is no easy way for the DDX driver to export the extended visual 
information needed for fbconfigs to the GLX layer.  The primary issue is 
binary compatibility.  The size of the __GLXvisualConfigRec structure 
cannot be changed, and I have not been able to find a way to communicate 
version information between the GLX layer and the DDX driver.

The other problem is even more grave.  Not only do the GLX layer and the 
DDX driver have to agree on interface-level support, the DDX driver and 
the DRI driver have to agree on hardware-level support.  Here's an 
example.  Assume that some way is found to export fbconfig data from the 
DDX driver to the GLX layer.  The first version of this will likely only 
support fbconfigs that are the same as the currently supported set of 
GLX visuals.  At some point we may wish to add support for floating 
point depth buffers to the Radeon driver.  To support this, the fbconfig 
code in the DDX driver would need to be updated AND code in the DRI 
driver would need to be updated.  Since the DRI driver is never loaded 
into the server, there is absolutely NO WAY for this information to be 
communicated.
Ah yes, good point.  There's versioning from the 2d driver to the 3d driver, 
though it's little used, but there's nothing going the other way.

After the lively discussion with Philip Brown (aka bolthole) in 
yesterday's #dri-devel chat, I got to thinking.  The current method is 
used because the DDX driver is currently the only thing on the 
server-side that has hardware-specific knowledge.  Since the DRM module, 
the DDX driver, and the DRI driver need compatible information about the 
hardware, why not move the visual and fbconfig knowledge into the DRI 
driver and have the GLX layer load the DRI driver?
Sounds tasty.

I don't think that all of the DRI / 3D related knowledge in the DDX 
driver should be moved to the DRI driver, but I think that this piece 
should.
There are initialization tasks that have dependencies on the rest of the X 
server, and there are others that are pretty self contained.  Some could be 
moved to the 3d driver.

This would also allow us to create and control our own method for 
communicating version information between the GLX layer and the DRI 
driver.  It also eliminates a number of the potential binary 
compatibility problems.  Encapsulating this information in the 3D 
driver should also be helpful to the embedded driver branch.
Yes.  Another aspect of the embedded branch is that the drivers there do the 
whole initialization themselves - there is probably then an overlap between 
the inits that could be moved out of the 2d driver and those implemented by 
the embedded driver.

This could also pave the way for the X server to use the new memory 
manager that is being developed.  We could make some sort of a conduit 
for the X server to call into the DRI driver to allocate graphics / AGP 
memory.  There are other ways to achieve this, but this would be an easy 
way.
Yes, very nice.

Utah did have some stuff going for it.  It was designed as a server-side-only 
accelerated indirect renderer.  My innovation was to figure out that the 
client could pretty easily play a few linker tricks and load that server module 
with dlopen(), and then with minimal communication with the server, do 90% of 
the direct rendering tasks itself.  (This was after my first encounter with 
PI, I think, until then I hadn't heard of direct rendering).

The nice thing about this was that the same binary was running the show on 
both the client and the server.  That really was obvious in the communication 
between them -- all the protocol structs were private to one .c file.
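
A minimal sketch of one way the versioning problem could be attacked, using a
hypothetical structure (the existing __GLXvisualConfigRec itself cannot
change, which is exactly the problem described above): export the extended
data through a new, explicitly versioned and sized record so the GLX layer
knows which fields the driver actually provides.

  /* Sketch only -- a hypothetical versioned export, not the existing
   * __GLXvisualConfigRec (whose size cannot change). */
  #include <stddef.h>

  #define FBCONFIG_EXPORT_VERSION 1

  struct fbconfig_export {
      int    version;        /* bump when fields are added                  */
      size_t record_size;    /* lets old readers skip unknown trailing data */
      int    red_bits, green_bits, blue_bits, alpha_bits;
      int    depth_bits, stencil_bits;
      int    float_depth;    /* e.g. a later hardware feature               */
  };

  /* GLX-layer side: only trust fields the advertised version guarantees. */
  static int supports_float_depth(const struct fbconfig_export *cfg)
  {
      return cfg->version >= 1 && cfg->float_depth;
  }

  int main(void)
  {
      struct fbconfig_export cfg = { FBCONFIG_EXPORT_VERSION,
                                     sizeof cfg, 8, 8, 8, 8, 24, 8, 0 };
      return supports_float_depth(&cfg);   /* 0 here: no float depth */
  }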

Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Alan Cox
On Tue, 2003-03-25 at 21:48, Keith Whitwell wrote:
  The final point that I would like to make is that we're going to NEED to 
  load the DRI driver on the server-side at some point in order to support 
  accelerated server-side rendering.  We could then implement a 
  server-side software-only DRI driver.  This driver could then export a 
  wide variety of fbconfigs (16-bit/32-bit/floating-point per channel 
  color for pbuffers) that the underlying hardware doesn't support.
 
 It really shouldn't be that hard.  Against it are:

One thing I never understood was whether the server should do this itself, or
fork off a client which is just another DRI direct-rendering application
that happens to get told to render the GLX commands coming down the
connection from the remote host. I've no real feel for the costs of
doing it that way, or enough experience to know if I'm talking out of
my hat, obviously.

Pure server side 3d would be welcome for a lot of the very old hardware
too. It's good enough to run screensavers 8)

Alan
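
A minimal sketch of the fork-a-client idea; none of the real plumbing for
handing the GLX command stream to the child is shown, and the code below
only illustrates the process structure.

  /* Sketch only: the real work -- passing the GLX command stream to the
   * child and letting it render via DRI -- is not shown. */
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      pid_t pid = fork();

      if (pid == 0) {
          /* Child: just another DRI direct-rendering client that happens
           * to be fed GLX protocol from the server instead of making its
           * own GL calls. */
          printf("indirect-rendering helper running as pid %d\n", getpid());
          _exit(0);
      }

      /* Parent (the X server) keeps dispatching requests as usual. */
      waitpid(pid, NULL, 0);
      return 0;
  }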





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Gareth Hughes
Keith Whitwell wrote:
Yes, very nice.

Utah did have some stuff going for it.  It was designed as a 
server-side-only accelerated indirect renderer.  My innovation was to 
figure out that the client could pretty easily play a few linker tricks 
and load that server module with dlopen(), and then with minimal 
communication with the server, do 90% of the direct rendering tasks 
itself.  (This was after my first encounter with PI, I think, until then 
I hadn't heard of direct rendering).

The nice thing about this was that the same binary was running the show 
on both the client and the server.  That really was obvious in the 
communication between them -- all the protocol structs were private to 
one .c file.
That's what we do -- the NVIDIA libGLcore.so driver backend does both 
client-side direct rendering and server-side indirect rendering. 
libGL.so or libglx.so does the necessary work to allow the main driver 
to have at it.

It really shouldn't be that hard.  Against it are:

- XFree's dislike of native library functions, which the 3d driver 
uses with abandon.
You can avoid these issues by using imports -- the server-side native 
library function imports would just call the appropriate XFree86 
routine, while the client-side imports would just call the regular C 
library versions.  I think Brian added stuff like this at some point, 
not sure however.

- XFree's love of their loadable module format, which the 3d driver 
isn't...
Our libGLcore is a regular shared library (as is our libglx.so, for that 
matter).  Doesn't seem to be an issue, AFAIK.

-- Gareth
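
A minimal C sketch of the imports idea, with hypothetical names (Mesa's real
imports mechanism may differ): the driver only ever calls through a table of
function pointers, and each host (client-side libGL, server-side GLX) fills
that table in with its own implementations.

  /* Sketch only -- hypothetical imports table, not Mesa's actual one. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <stdarg.h>

  struct driver_imports {
      void *(*alloc)(size_t size);
      void  (*free_)(void *ptr);
      void  (*log)(const char *fmt, ...);
  };

  /* Client-side instantiation: plain C library calls. */
  static void client_log(const char *fmt, ...)
  {
      va_list ap;
      va_start(ap, fmt);
      vfprintf(stderr, fmt, ap);
      va_end(ap);
  }

  static const struct driver_imports client_imports = {
      malloc, free, client_log,
  };

  /* The server-side table would point at the XFree86 wrappers instead --
   * same driver code, different imports. */

  static void driver_init(const struct driver_imports *imp)
  {
      void *scratch = imp->alloc(1024);
      imp->log("driver initialized, scratch buffer at %p\n", scratch);
      imp->free_(scratch);
  }

  int main(void)
  {
      driver_init(&client_imports);
      return 0;
  }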





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Keith Whitwell
Gareth Hughes wrote:
Keith Whitwell wrote:

Yes, very nice.

Utah did have some stuff going for it.  It was designed as a 
server-side-only accelerated indirect renderer.  My innovation was 
to figure out that the client could pretty easily play a few linker 
tricks and load that server module with dlopen(), and then with minimal 
communication with the server, do 90% of the direct rendering tasks 
itself.  (This was after my first encounter with PI, I think, until 
then I hadn't heard of direct rendering).

The nice thing about this was that the same binary was running the 
show on both the client and the server.  That really was obvious in 
the communication between them -- all the protocol structs were 
private to one .c file.


That's what we do -- the NVIDIA libGLcore.so driver backend does both 
client-side direct rendering and server-side indirect rendering. 
libGL.so or libglx.so does the necessary work to allow the main driver 
to have at it.

It really shouldn't be that hard.  Against it are:

- XFree's dislike of native library functions, which the 3d driver 
uses with abandon.


You can avoid these issues by using imports -- the server-side native 
library function imports would just call the appropriate XFree86 
routine, while the client-side imports would just call the regular C 
library versions.  I think Brian added stuff like this at some point, 
not sure however.
Yep - I see that you could get the server to instantiate the imports and avoid 
the problem that way.  Good.


- XFree's love of their loadable module format, which the 3d 
driver isn't...


Our libGLcore is a regular shared library (as is our libglx.so, for that 
matter).  Doesn't seem to be an issue, AFAIK.
My impression is that a patch trying to add a dlopen() call to one of the 
xfree86 hosted ddx drivers would be rejected.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Philip Brown
On Tue, Mar 25, 2003 at 09:48:21PM +, Keith Whitwell wrote:
 
 Utah did have some stuff going for it.  It was designed as a server-side-only 
 accelerated indirect renderer.  My innovation was to figure out that the 
 client could pretty easily play a few linker tricks and load that server module 
 with dlopen(), and then with minimal communication with the server, do 90% of 
 the direct rendering tasks itself.  (This was after my first encounter with 
 PI, I think, until then I hadn't heard of direct rendering).

Turns out that most of those dlopen hacks aren't necessary if you use
libglx.so instead of libglx.a, it seems. ld.so takes care of things
automatically when the X server itself does dlopen() on the extension
module.
A direct rendering client will still need to do interesting things. But
for server-side rendering, the dlopen() stuff does not appear to be
necessary. (At least against XFree86 4; maybe it was necessary with XFree86 3.)

It also turns out that there was a lot of grungy scrnInfoP->driverPrivate
dereferencing going on that is also completely unnecessary. So the
current Utah-GLX X server interfacing code is a good deal cleaner than when
you last looked at it.  Many of the old dlopen tricks are still in place;
however, new code seems to be directly accessing the server functions with
no tricks.
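
For reference, the dlopen() route being discussed boils down to something
like the following minimal sketch; the library name is taken from the
discussion above, but the entry point "driInitScreen" is a made-up symbol,
not the real one exported by the DRI drivers.

  /* Sketch only: "driInitScreen" is a hypothetical entry point, not the
   * actual symbol exported by radeon_dri.so. */
  #include <stdio.h>
  #include <dlfcn.h>

  int main(void)
  {
      void *handle = dlopen("radeon_dri.so", RTLD_NOW | RTLD_GLOBAL);
      if (!handle) {
          fprintf(stderr, "dlopen failed: %s\n", dlerror());
          return 1;
      }

      /* Resolve whatever entry point the loader and driver agree on. */
      void (*init)(void) = (void (*)(void)) dlsym(handle, "driInitScreen");
      if (!init) {
          fprintf(stderr, "dlsym failed: %s\n", dlerror());
          dlclose(handle);
          return 1;
      }

      init();          /* hand control to the driver */
      dlclose(handle);
      return 0;
  }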






Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Philip Brown
On Tue, Mar 25, 2003 at 12:37:17PM -0800, Ian Romanick wrote:
 
 This could also pave the way for the X server to use the new memory 
 manager that is being developed.  We could make some sort of a conduit 
 for the X server to call into the DRI driver to allocate graphics / AGP 
 memory.  There are other ways to achieve this, but this would be an easy 
 way.

Please do not do this. Choose the clean way, not the easy way.

There are already AGP (and memory alloc) related calls in the X server
framework; xf86BindGARTMemory(), xf86EnableAGP(), etc.

The core X server should not be making calls into extension modules.
Extension modules should be making calls to XFree86-exported functions.
If there aren't sufficient XFree86-exported functions, extend or add new ones.





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Brian Paul
Keith Whitwell wrote:
Gareth Hughes wrote:

Keith Whitwell wrote:

Yes, very nice.

Utah did have some stuff going for it.  It was designed as a 
server-side-only accelerated indirect renderer.  My innovation was 
to figure out that the client could pretty easily play a few linker 
tricks & load that server module with dlopen(), and then with minimal 
communication with the server, do 90% of the direct rendering tasks 
itself.  (This was after my first encounter with PI, I think, until 
then I hadn't heard of direct rendering).

The nice thing about this was that the same binary was running the 
show on both the client and the server.  That really was obvious in 
the communication between them -- all the protocol structs were 
private to one .c file.


That's what we do -- the NVIDIA libGLcore.so driver backend does both 
client-side direct rendering and server-side indirect rendering. 
libGL.so or libglx.so does the necessary work to allow the main driver 
to have at it.

It really shouldn't be that hard.  Against it are:

- XFree's dislike of native library functions, which the 3d 
driver uses with abandon.


You can avoid these issues by using imports -- the server-side native 
library function imports would just call the appropriate XFree86 
routine, while the client-side imports would just call the regular C 
library versions.  I think Brian added stuff like this at some point, 
not sure however.


Yep - I see that you could get the server to instantiate the imports & 
avoid the problem that way.  Good.
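
The imports idea, very roughly (a hypothetical sketch -- none of these names
are the real Mesa/DRI symbols; the server side would plug in xalloc()/xfree()
and friends instead):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical imports table: the shared 3D code only ever calls
     * through these pointers, so each environment fills them in with
     * whatever is appropriate there. */
    typedef struct {
        void *(*alloc_fn)(size_t n);
        void  (*free_fn)(void *p);
        void *(*memcpy_fn)(void *dst, const void *src, size_t n);
    } GlImports;

    /* Client side: plain C library. */
    static const GlImports clientImports = { malloc, free, memcpy };

    /* Server side would instead be { xalloc, xfree, memcpy } or similar;
     * omitted here since those symbols only exist inside the X server. */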


- XFree's love of their loadable module format, which the 3d 
driver isn't...


Our libGLcore is a regular shared library (as is our libglx.so, for 
that matter).  Doesn't seem to be an issue, AFAIK.


My impression is that a patch trying to add a dlopen() call to one of 
the xfree86 hosted ddx drivers would be rejected.
Last I looked at the XF86 loader, it had some stuff in it that implied to me 
that it couldn't simply be treated as a wrapper for dlopen(), dlsym(), etc.
I don't remember the details right now.

-Brian





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Keith Whitwell

My impression is that a patch trying to add a dlopen() call to one of 
the xfree86 hosted ddx drivers would be rejected.


Last I looked at the XF86 loader, it had some stuff in it that implied 
to me that it couldn't simply be treated as a wrapper for dlopen(), 
dlsym(), etc.
I don't remember the details right now.


Yes, the XFree86 modules aren't regular .so type shared objects -- but the 
thing we're interested in loading *is*, so we'd be forced to use dlopen() to 
get it in the server.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Alan Hourihane
On Tue, Mar 25, 2003 at 10:51:05PM +, Keith Whitwell wrote:
 Gareth Hughes wrote:
 Keith Whitwell wrote:
 
 
 Yes, very nice.
 
 Utah did have some stuff going for it.  It was designed as a 
 server-side-only accelerated indirect renderer.  My innovation was 
 to figure out that the client could pretty easily play a few linker 
 tricks & load that server module with dlopen(), and then with minimal 
 communication with the server, do 90% of the direct rendering tasks 
 itself.  (This was after my first encounter with PI, I think, until 
 then I hadn't heard of direct rendering).
 
 The nice thing about this was that the same binary was running the 
 show on both the client and the server.  That really was obvious in 
 the communication between them -- all the protocol structs were 
 private to one .c file.
 
 
 That's what we do -- the NVIDIA libGLcore.so driver backend does both 
 client-side direct rendering and server-side indirect rendering. 
 libGL.so or libglx.so does the necessary work to allow the main driver 
 to have at it.
 
 It really shouldn't be that hard.  Against it are:
 
 - XFree's dislike of native library functions, which the 3d driver 
 uses with abandon.
 
 
 You can avoid these issues by using imports -- the server-side native 
 library function imports would just call the appropriate XFree86 
 routine, while the client-side imports would just call the regular C 
 library versions.  I think Brian added stuff like this at some point, 
 not sure however.
 
 Yep - I see that you could get the server to instantiate the imports & 
 avoid the problem that way.  Good.
 
 
 - XFree's love of their loadable module format, which the 3d 
 driver isn't...
 
 
 Our libGLcore is a regular shared library (as is our libglx.so, for that 
 matter).  Doesn't seem to be an issue, AFAIK.
 
 My impression is that a patch trying to add a dlopen() call to one of the 
 xfree86 hosted ddx drivers would be rejected.

Is there any architectural reason why we can't use XFree86's module
loader for OS independence here?

The whole point of the drmCommand*() interface is that it's portable, so
I don't see any reason to use OS specific functions like dlopen in this
case.

Unless there is some quantifiable reason.
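
For what it's worth, this is the sort of thing drmCommand*() buys you (a sketch;
the command index and payload struct are invented here, real drivers define
their own):

    #include "xf86drm.h"

    /* Invented payload, for illustration only. */
    typedef struct { unsigned int size; unsigned int handle; } example_alloc_t;

    /* Sketch: issue a driver-private request through the portable
     * drmCommand wrappers instead of calling ioctl() directly. */
    static int request_alloc(int drm_fd, unsigned int bytes)
    {
        example_alloc_t req;

        req.size = bytes;
        req.handle = 0;
        /* 0x40 is a placeholder driver-private command index. */
        return drmCommandWriteRead(drm_fd, 0x40, &req, sizeof(req));
    }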

Alan.




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Alan Cox
On Tue, 2003-03-25 at 23:15, Philip Brown wrote:
 There are already AGP (and memory alloc) related calls in the X server
 framework; xf86BindGARTMemory(), xf86EnableAGP(), etc.
 
 The core X server should not be making calls into extension modules.
 Extension modules should be making calls to xfree-exported functions.
 If there arent sufficient xfree-exported functions, extend or add new ones.

The core doesn't have enough information to do this whole job. Video
memory can be shared with other resources - not just X-handled resources
like Xv, but also DMA space for on-chip MPEG2 decoders, for example (or
audio even, in one case).





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Keith Whitwell
Alan Hourihane wrote:
On Tue, Mar 25, 2003 at 10:51:05PM +, Keith Whitwell wrote:

Gareth Hughes wrote:

Keith Whitwell wrote:


Yes, very nice.

Utah did have some stuff going for it.  It was designed as a 
server-side-only accelerated indirect renderer.  My innovation was 
to figure out that the client could pretty easily play a few linker 
tricks & load that server module with dlopen(), and then with minimal 
communication with the server, do 90% of the direct rendering tasks 
itself.  (This was after my first encounter with PI, I think, until 
then I hadn't heard of direct rendering).

The nice thing about this was that the same binary was running the 
show on both the client and the server.  That really was obvious in 
the communication between them -- all the protocol structs were 
private to one .c file.


That's what we do -- the NVIDIA libGLcore.so driver backend does both 
client-side direct rendering and server-side indirect rendering. 
libGL.so or libglx.so does the necessary work to allow the main driver 
to have at it.


It really shouldn't be that hard.  Against it are:

  - XFree's dislike of native library functions, which the 3d driver 
uses with abandon.


You can avoid these issues by using imports -- the server-side native 
library function imports would just call the appropriate XFree86 
routine, while the client-side imports would just call the regular C 
library versions.  I think Brian added stuff like this at some point, 
not sure however.
Yep - I see that you could get the server to instantiate the imports & 
avoid the problem that way.  Good.


  - XFree's love of their loadable module format, which the 3d 
driver isn't...


Our libGLcore is a regular shared library (as is our libglx.so, for that 
matter).  Doesn't seem to be an issue, AFAIK.
My impression is that a patch trying to add a dlopen() call to one of the 
xfree86 hosted ddx drivers would be rejected.


Is there any architectural reason why we can't use XFree86's module
loader for OS independence here?

The whole point of the drmCommand*() interface is that it's portable, so
I don't see any reason to use OS specific functions like dlopen in this
case.
Unless there is some quantifiable reason.
The goal is to load the same piece of code in both places, so that would 
require that the radeon_dri.so object became an XFree86 module, and that the 
XFree86 module loader was also incorporated into libGL.so.

That seems like a big step, and would obviously break compatibility with older 
libGL.so's.

We could also compile the radeon_dri.so code as both a .so file and an XFree86 
module, but that has issues of its own.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Philip Brown
On Wed, Mar 26, 2003 at 12:37:08AM +, Alan Cox wrote:
 On Tue, 2003-03-25 at 23:15, Philip Brown wrote:
  There are already AGP (and memory alloc) related calls in the X server
  framework; xf86BindGARTMemory(), xf86EnableAGP(), etc.
  
  The core X server should not be making calls into extension modules.
  Extension modules should be making calls to xfree-exported functions.
  If there aren't sufficient xfree-exported functions, extend or add new ones.
 
 The core doesn't have enough information to do this whole job. Video
 memory can be shared with other resources - not just X-handled resources
 like Xv, but also DMA space for on-chip MPEG2 decoders, for example (or
 audio even, in one case).

If the core doesn't have enough information to do something, it probably
doesn't need information about it in the first place, and that information
should be kept in the extension module.

As far as video memory allocation goes, there are existing core routines that
handle that.
Utah-glx now uses the core routines to allocate video memory. 






Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Alan Hourihane
On Tue, Mar 25, 2003 at 11:18:45PM +, Keith Whitwell wrote:
 
 My impression is that a patch trying to add a dlopen() call to one of 
 the xfree86 hosted ddx drivers would be rejected.
 
 
 Last I looked at the XF86 loader, it had some stuff in it that implied 
 to me that it couldn't simply be treated as a wrapper for dlopen(), 
 dlsym(), etc.
 I don't remember the details right now.
 
 
 Yes, the XFree86 modules aren't regular .so type shared objects -- but the 
 thing we're interested in loading *is*, so we'd be forced to use dlopen() 
 to get it in the server.

The XFree86 loader is capable of loading .so or .a files. It already has the
support to resolve the symbols. The dlloader.c and elfloader.c
manage this respectively.

The .so format is obviously non-portable, though.

There are a few loader commands to resolve symbols and find out whether they
exist, etc. - the behaviour is like GetProcAddress() too.
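
Something along these lines, presumably (a sketch; the loader prototypes live
in the server's loader headers, and the symbol name looked up here is only an
example):

    /* Sketch: resolve an entry point through the XFree86 loader rather
     * than dlsym(), so it works whichever of dlloader/elfloader pulled
     * the module in.  The symbol name is illustrative. */
    typedef void *(*ModuleEntryFunc)(void);

    static ModuleEntryFunc find_driver_entry(void)
    {
        if (!xf86LoaderCheckSymbol("__driCreateScreen"))
            return NULL;
        return (ModuleEntryFunc) LoaderSymbol("__driCreateScreen");
    }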

Alan.




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Michel Dänzer
On Die, 2003-03-25 at 21:37, Ian Romanick wrote: 
 
 In the current GLX visual mechanism, the DDX driver exports an array of 
 __GLXvisualConfigRec objects that is used by the GLX extension.  This is 
 done because the DDX driver is the only place in the server that knows 
 what display modes the hardware can support.  There are two significant 
 problems with this, but only the first was initially apparent to me.
 
 There is no easy way for the DDX driver to export the extended visual 
 information needed for fbconfigs to the GLX layer.  The primary issue is 
 binary compatibility.  The size of the __GLXvisualConfigRec structure 
 cannot be changed, and I have not been able to find a way to communicate 
 version information between the GLX layer and the DDX driver.

Maybe I'm missing something, but one could always play tricks like adding a
symbol to the GLX module and checking its presence from the DDX driver?
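
i.e. something like this (hypothetical; the version symbol name is made up):

    /* Hypothetical: the GLX module exports a tag symbol, and the DDX
     * driver probes for it before deciding which visual-config layout
     * it is safe to hand over. */
    static int glx_supports_extended_configs(void)
    {
        return xf86LoaderCheckSymbol("GlxExtendedVisualConfigs");
    }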


[ GLX layer loading the DRI driver ]

 This could also pave the way for the X server to use the new memory 
 manager that is being developed.  We could make some sort of a conduit 
 for the X server to call into the DRI driver to allocate graphics / AGP 
 memory.  There are other ways to achieve this, but this would be an easy 
 way.

I assume this is only about the user space layer of the memory manager,
and the core will always be in the DRM?


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Ian Romanick
Philip Brown wrote:
On Tue, Mar 25, 2003 at 12:37:17PM -0800, Ian Romanick wrote:

This could also pave the way for the X server to use the new memory 
manager that is being developed.  We could make some sort of a conduit 
for the X server to call into the DRI driver to allocate graphics / AGP 
memory.  There are other ways to achieve this, but this would be an easy 
way.


Please do not do this. Choose the clean way, not the easy way.

There are already AGP (and memory alloc) related calls in the X server
framework; xf86BindGARTMemory(), xf86EnableAGP(), etc.
The core X server should not be making calls into extension modules.
Extension modules should be making calls to xfree-exported functions.
If there aren't sufficient xfree-exported functions, extend or add new ones.
The idea is that the X server and the 3D driver can use the same memory 
manager for off-screen memory.  That way pixmap cache, textures, and 
vertex buffers all get managed in the same way and share ALL of off-screen 
memory.  Currently, when DRI is enabled, the pixmap cache is 
very, very small, even if there are no 3D clients running.

There are other benefits as well.  It sure would be nice to be able to 
resize the amount of memory used when the screen mode is changed, for 
example. :)
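
Purely as a sketch of the interface that implies (all names hypothetical, not
the memory manager actually being written):

    /* Hypothetical shared allocator: pixmap cache, textures and vertex
     * buffers all come out of the same pool, and the reserved front
     * buffer portion can change on a mode switch. */
    typedef enum { MEM_PIXMAP, MEM_TEXTURE, MEM_VERTEX } MemClass;

    typedef struct {
        int  (*alloc)(MemClass cls, unsigned long size, unsigned long *offset);
        void (*release)(int handle);
        void (*resize_front)(unsigned long front_buffer_bytes);
    } SharedVidMem;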





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Ian Romanick
Alan Cox wrote:
On Tue, 2003-03-25 at 21:48, Keith Whitwell wrote:

The final point that I would like to make is that we're going to NEED to 
load the DRI driver on the server-side at some point in order to support 
accelerated server-side rendering.  We could then implemented a 
server-side software-only DRI driver.  This driver could then export a 
wide variety of fbconfigs (16-bit/32-bit/floating-point per channel 
color for pbuffers) that the underlying hardware doesn't support.
It really shouldn't be that hard.  Against it are:


One thing I never understood was whether the server should do this or
fork off a client which is just another DRI direct render application
that happens to get told to render the GLX commands coming down the
connection from the remote host. I've no real feel for the costs of
doing it that way, or enough experience to know if I'm talking out of
my hat obviously.
Pure server-side 3D would be welcome for a lot of the very old hardware
too. It's good enough to run screensavers 8)
I am not a big fan of the fork trick.

From a security perspective, people may want to disable direct 
rendering.  There is a shared memory segment that an evil program 
could muck with and cause DoS problems.  I probably haven't thought 
about it enough, but I can't see how we could disable direct 
rendering AND use the fork method.

Regardless, there would be a fair amount of overhead on every GL call. 
If I'm not mistaken, the server would have to receive the GLX protocol 
then send it to another process.  There would be the overhead of sending 
the data to yet another process and the task switch.  That on top of the 
overhead already in the GLX protocol starts to sound very painful.





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Ian Romanick
Alan Hourihane wrote:
On Tue, Mar 25, 2003 at 11:27:17PM +, Keith Whitwell wrote:

Alan Hourihane wrote:

Is there any architectural reason why we can't use XFree86's module
loader for OS independence here?

The whole point of the drmCommand*() interface is that it's portable, so
I don't see any reason to use OS specific functions like dlopen in this
case.
Unless there is some quantifiable reason.
The goal is to load the same piece of code in both places, so that would 
require that the radeon_dri.so object became an XFree86 module, and that 
the XFree86 module loader was also incorporated into libGL.so.
O.k. That seems like a good goal to aim for.

That seems like a big step, and would obviously break compatibility with 
older libGL.so's.
 
I don't think it's that big a step, and the advantages are enormous in 
maintenance.
I don't think that requiring people to upgrade their libGL.so and their 
driver binary at the same time is a big deal.  It's especially not a big 
deal given that the user will have to update their GLX module anyway to 
get the full benefit.

I think an additional goal is to be able to use the same driver binary 
with the miniGLX.  Would that be possible if the XFree86 module format 
was used?

The only problem I see with using the XFree86 module format is the 
general irritation of using it with a debugger.  Right now I can 
*easily* debug a DRI driver using any application and GDB or DDD.  If 
I'm not mistaken, that becomes more difficult with a non .so format.





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Philip Brown
On Tue, Mar 25, 2003 at 05:07:38PM -0800, Ian Romanick wrote:
 Philip Brown wrote:
  The core X server should not be making calls into extension modules.
  Extension modules should be making calls to xfree-exported functions.
  If there aren't sufficient xfree-exported functions, extend or add new ones.
 
 The idea is that the X server and the 3D driver can use the same memory 
 manager for off-screen memory.  That way pixmap cache, textures, and 
 vertex buffers all get managed in the same way and share ALL of off 
 screen memory.

Yes, and existing core X server APIs allow that.

  Currently, when DRI is enabled, the pixmap cache is 
 very, very small, even if there are no 3D clients running.

Then DRI isn't behaving nicely, and should be a better neighbour.
And/or, the 2D layer should be more practical about its use of
xf86AllocateOffscreenLinear().
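
For reference, that call looks roughly like this in use (a sketch from memory --
see xf86fbman.h for the real prototype; callbacks are left NULL for brevity):

    #include "xf86.h"
    #include "xf86fbman.h"

    /* Sketch: grab a linear chunk of off-screen memory through the
     * core manager, the same pool the pixmap cache draws from. */
    static FBLinearPtr grab_offscreen(ScreenPtr pScreen, int pixels)
    {
        return xf86AllocateOffscreenLinear(pScreen, pixels,
                                           32,    /* granularity */
                                           NULL,  /* move callback */
                                           NULL,  /* remove callback */
                                           NULL); /* private data */
    }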






Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Keith Whitwell
Alan Hourihane wrote:
On Tue, Mar 25, 2003 at 11:18:45PM +, Keith Whitwell wrote:

My impression is that a patch trying to add a dlopen() call to one of 
the xfree86 hosted ddx drivers would be rejected.


Last I looked at the XF86 loader, it had some stuff in it that implied 
to me that it couldn't simply be treated as a wrapper for dlopen(), 
dlsym(), etc.
I don't remember the details right now.


Yes, the XFree86 modules aren't regular .so type shared objects -- but the 
thing we're interested in loading *is*, so we'd be forced to use dlopen() 
to get it in the server.


The XFree86 loader is capable of loading .so or .a files. It already has the
support to resolve the symbols. The dlloader.c and elfloader.c
manage this respectively.
The .so format is obviously non-portable, though.

OK, that changes things.

Given that there's a portable way to load non-portable shared objects, I don't 
see any barrier to proceeding.

Keith





Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Alan Hourihane
On Tue, Mar 25, 2003 at 11:27:17PM +, Keith Whitwell wrote:
 Alan Hourihane wrote:
 On Tue, Mar 25, 2003 at 10:51:05PM +, Keith Whitwell wrote:
 
 Gareth Hughes wrote:
 
 Keith Whitwell wrote:
 
 
 Yes, very nice.
 
 Utah did have some stuff going for it.  It was designed as a 
 server-side-only accelerated indirect renderer.  My innovation was 
 to figure out that the client could pretty easily play a few linker 
 tricks & load that server module with dlopen(), and then with minimal 
 communication with the server, do 90% of the direct rendering tasks 
 itself.  (This was after my first encounter with PI, I think, until 
 then I hadn't heard of direct rendering).
 
 The nice thing about this was that the same binary was running the 
 show on both the client and the server.  That really was obvious in 
 the communication between them -- all the protocol structs were 
 private to one .c file.
 
 
 That's what we do -- the NVIDIA libGLcore.so driver backend does both 
 client-side direct rendering and server-side indirect rendering. 
 libGL.so or libglx.so does the necessary work to allow the main driver 
 to have at it.
 
 
 It really shouldn't be that hard.  Against it are:
 
   - XFree's dislike of native library functions, which the 3d driver 
 uses with abandon.
 
 
 You can avoid these issues by using imports -- the server-side native 
 library function imports would just call the appropriate XFree86 
 routine, while the client-side imports would just call the regular C 
 library versions.  I think Brian added stuff like this at some point, 
 not sure however.
 
 Yep - I see that you could get the server to instantiate the imports & 
 avoid the problem that way.  Good.
 
 
   - XFree's love of their loadable module format, which the 3d 
 driver isn't...
 
 
 Our libGLcore is a regular shared library (as is our libglx.so, for that 
 matter).  Doesn't seem to be an issue, AFAIK.
 
 My impression is that a patch trying to add a dlopen() call to one of the 
 xfree86 hosted ddx drivers would be rejected.
 
 
 Is there any architectural reason why we can't use XFree86's module
 loader for OS independence here?
 
 The whole point of the drmCommand*() interface is that it's portable, so
 I don't see any reason to use OS specific functions like dlopen in this
 case.
 
 Unless there is some quantifiable reason.
 
 The goal is to load the same piece of code in both places, so that would 
 require that the radeon_dri.so object became an XFree86 module, and that 
 the XFree86 module loader was also incorporated into libGL.so.
 
O.k. That seems like a good goal to aim for.

 That seems like a big step, and would obviously break compatibility with 
 older libGL.so's.
 
I don't think it's that big a step, and the advantages are enormous in 
maintenance.

 We could also compile the radeon_dri.so code as both a .so file and an 
 XFree86 module, but that has issues of its own.

Indeed.

Alan.




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Alan Hourihane
On Tue, Mar 25, 2003 at 11:33:31PM +, Keith Whitwell wrote:
 Alan Hourihane wrote:
 On Tue, Mar 25, 2003 at 11:18:45PM +, Keith Whitwell wrote:
 
 My impression is that a patch trying to add a dlopen() call to one of 
 the xfree86 hosted ddx drivers would be rejected.
 
 
 Last I looked at the XF86 loader, it had some stuff in it that implied 
 to me that it couldn't simply be treated as a wrapper for dlopen(), 
 dlsym(), etc.
 I don't remember the details right now.
 
 
 Yes, the XFree86 modules aren't regular .so type shared objects -- but 
 the thing we're interested in loading *is*, so we'd be forced to use 
 dlopen() to get it in the server.
 
 
 The XFree86 loader is capable of loading .so or .a files. It already has the
 support to resolve the symbols. The dlloader.c and elfloader.c
 manage this respectively.
 
 The .so format is obviously non-portable, though.
 
 
 OK, that changes things.
 
 Given that there's a portable way to load non-portable shared objects, I 
 don't see any barrier to proceeding.

Yes, but there's still an opportunity to provide a radeon_dri.a that
is OS-independent and could be loaded by both libGL and the X server.

Given that both .so and .a files are managed by the XFree86 loader, it's
possible that libGL could be given backwards compatibility to load older .so
files too - although that may take a little more work to manage.

Alan.

