Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Felix Kühling
On Tue, 3 Dec 2002 11:29:34 -0800 (PST)
Linus Torvalds [EMAIL PROTECTED] wrote:

 
 On Tue, 3 Dec 2002, magenta wrote:
  
  User preferences are an entirely different matter.  I totally agree that
  the user should be able to override default behaviors, but environment
  variables are such a crappy way of doing this.
 
 Why? Environment variables are in many ways more powerful than config 
 files, and can be equally easily edited (think of your .bashrc as the 
 config file for environment variables).

One thing we should keep in mind about the future is indirect rendering.
Environment variables which are only known on the client side won't work
then. However, the GLX server side would be able to identify the client
based on the window id for instance. Therefore I believe a configuration
file on the server side with application specific entries is the way to
go.
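
For illustration, a server-side per-application file of the kind Felix
suggests might look something like this (the format and the option names
are entirely hypothetical):

    # per-application driver defaults, matched to the identified client
    application "quake3" {
        texture_depth = 16;    # favor speed
    }
    application "maya" {
        texture_depth = 32;    # favor quality
    }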

 
 I agree that using _bare_ environment variables is nasty, and nobody 
 should need to do
 
   export GL_TEXTURE_DEPTH=32
[...]
 
   Linus

Regards,
   Felix

Felix Kühling
[EMAIL PROTECTED]
"You can do anything, just not everything at the same time!"





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Allen Akin
On Wed, Dec 04, 2002 at 12:57:44AM -0600, D. Hageman wrote:
| This illustrates one of the bad points of using environment variables.  
| Will we have to add environment variables every time a new app is pushed 
| out the door?  Bad approach.  

In general, if a bug affects every app, then the driver needs to be
fixed.  Ian's scenario (and my reply) were about the case in which you
want to change driver behavior for one app without affecting others.

|  The approach I want to avoid is defining a bunch of general low-level
|  switches ...
|...  This is not the
|  way to provide effective controls to the end user, it's not the way to
|  keep application behavior consistent from run to run on the same system,
|  and it doesn't even help make the driver developers' lives easier.
| 
| Ah, but it *must* be defined as a bunch of low-level switches to make 
| developers' lives easier.

If preferences are handled at the application level, then in most cases
the driver developers don't have to do anything.  That's as easy as you
can get. :-)

What I had in mind was that supporting a bunch of low-level switches
involves lots of conditional code deep in the drivers.

| I think the thing that will make users' lives easier is a tool that can 
| modify the per-app configuration. ...

Folks can work on this if they want, obviously.  But it has less payoff
than work on other projects, because library-level controls aren't as
effective as controls at the application level, and because
programmability in current and future graphics hardware is reducing the
number of low-level fixed-function switches that can be exposed.

Allen





Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Leif Delgass
On Tue, 3 Dec 2002, Ian Romanick wrote:

 Unless there are any objections, I'm going to commit a merge from the trunk
 to the texmem-0-0-1 branch tomorrow (Wednesday).  I've tested the merge on
 the R100, and I'll test it on an M6 and a G400 before I commit it.

That's fine by me.  FYI, I've started trying to debug r128 in the texmem
branch.  I've found some problems, but am still experiencing texture
corruption.  The first problem I found is in the switch/case at
r128_texmem.c:281 (r128UploadTexImages()).  Since the uploading of
textures was moved from r128EmitHwStateLocked() to functions called from
r128UpdateTextureState(), a texture isn't marked as bound until _after_
it's uploaded, so the default case was being hit (with t->base.bound ==
0).
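
A minimal sketch of that failure mode, as a fragment; the case labels are
made up for illustration (the actual r128 heap cases differ), and only
t->base.bound follows Leif's description:

    /* r128UploadTexImages() dispatches on where the texture is bound,
     * but with the new upload path the bind happens only afterwards,
     * so t->base.bound is still 0 when we get here. */
    switch (t->base.bound) {
    case 1:                       /* illustrative: bound to unit 0 */
    case 2:                       /* illustrative: bound to unit 1 */
        /* upload to the appropriate heap ... */
        break;
    default:
        /* now reached for every texture -- the bug described above */
        break;
    }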

Another problem I found is that r128DDBindTexture no longer sets the 
R128_NEW_TEXTURE flag, and this prevents the texture state from being 
updated when an app switches textures.  For example: running tunnel, I get 
the floor texture on the walls, but if I set R128_NEW_TEXTURE in 
r128DDBindTexture, the wall textures and floor textures appear in the 
right places.  How do the radeon/r200 drivers work without setting the 
NEW_TEXTURE flag there?  Also, shouldn't it unbind the texture currently 
bound to that texture unit?
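
A sketch of the experimental fix described above, hedged: the context type
and macro names follow common Mesa/DRI driver conventions and may not match
the branch exactly.

    /* r128DDBindTexture(): mark texture state dirty so that
     * r128UpdateTextureState() runs when the app switches textures. */
    static void r128DDBindTexture(GLcontext *ctx, GLenum target,
                                  struct gl_texture_object *tObj)
    {
        r128ContextPtr rmesa = R128_CONTEXT(ctx);   /* assumed macro */

        /* ... existing bind work ... */

        rmesa->new_state |= R128_NEW_TEXTURE;   /* the experimental fix */
    }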

One other thing I noticed is that R128_NEW_TEXTURE is being set in
disable_tex() and update_tex_common() in r128_texstate.c.  This shouldn't
be necessary, in fact it causes the texture state to be repeatedly updated
when there haven't been any actual state changes (I saw that happening in
multiarb, e.g.).  Marking the context and texture registers for upload
should be enough.  Only the texture functions in r128_tex.c (texture
image, env mode, parameter, etc. changes) and enabling/disabling of
texturing in r128_state.c need to cause an update of the texture state.

I'll keep digging...

--
Leif Delgass 
http://www.retinalburn.net







Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread magenta
On Wed, Dec 04, 2002 at 11:06:01AM -0800, Allen Akin wrote:
 On Wed, Dec 04, 2002 at 12:57:44AM -0600, D. Hageman wrote:
 | This illustrates one of the bad points of using environment variables.  
 | Will we have to add environment variables every time a new app is pushed 
 | out the door?  Bad approach.  
 
 In general, if a bug affects every app, then the driver needs to be
 fixed.  Ian's scenario (and my reply) were about the case in which you
 want to change driver behavior for one app without affecting others.

But this isn't about application bug workarounds, it's about users
specifying hinting or forcing extensions to be active or whatever.

 What I had in mind was that supporting a bunch of low-level switches
 involves lots of conditional code deep in the drivers.

Not necessarily.  A configuration option to force, say, FSAA to be enabled
would just require that the initial state of the OpenGL context be changed,
and the original issue which sparked this debate (changing the unhinted
internal texture format from "same as display" to 8 bits/channel) could be
handled by adding a default texture depth variable to the context
information and using that when the format is specified as GL_RGB instead of
GL_RGB8 or whatever.
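
A minimal sketch of that second idea, assuming a hypothetical per-context
field (ctx_info and default_texture_depth are made up for illustration):

    #include <GL/gl.h>

    struct ctx_info { int default_texture_depth; };  /* hypothetical */

    /* Resolve an unsized GL_RGB request against a user-configurable
     * per-context default instead of a hard-wired driver policy. */
    static GLint chooseInternalFormat(const struct ctx_info *ctx,
                                      GLint requested)
    {
        if (requested == GL_RGB)   /* app expressed no depth preference */
            return (ctx->default_texture_depth >= 24) ? GL_RGB8 : GL_RGB5;
        return requested;          /* sized formats are honored as given */
    }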

I basically see three camps in this discussion:

1. Users should be able to configure default behavior using configuration
files (which would be selected based on argv[0] or similar)

2. Users should be able to configure default behavior using environment
variables (which would be configured on a per-application basis using
wrapper scripts or a launcher program or similar)

3. Users should not be able to configure default behavior; applications
should specify all behavior explicitly if it matters, and expose this as an
application-level configuration option to the user

Personally, I'm torn between camps 1 and 3.

Actually, I just thought of a solution which could possibly satisfy all
three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
overrides functionality as needed.  Want to force FSAA to be enabled?  Put
it into glXCreateContext().  Want to force GL_RGB8 when the application
chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
when the application chooses GL_RGB8, you could do that too!

Basically, I see no reason to put this configuration into the drivers
themselves, as it could easily be done using an LD_PRELOADed library.
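
For the record, a minimal sketch of such an interposer, assuming glibc's
dlsym(RTLD_NEXT, ...); the FORCE_FSAA switch is made up for illustration,
not an existing DRI option:

    #define _GNU_SOURCE             /* for RTLD_NEXT */
    #include <dlfcn.h>
    #include <stdlib.h>
    #include <GL/glx.h>
    #include <GL/glext.h>           /* for GL_MULTISAMPLE_ARB */

    /* Interpose glXMakeCurrent and force multisampling on behind the
     * application's back. */
    Bool glXMakeCurrent(Display *dpy, GLXDrawable draw, GLXContext ctx)
    {
        static Bool (*real)(Display *, GLXDrawable, GLXContext) = NULL;
        Bool ret;

        if (real == NULL)           /* resolve the real libGL entry once */
            real = (Bool (*)(Display *, GLXDrawable, GLXContext))
                       dlsym(RTLD_NEXT, "glXMakeCurrent");
        ret = real(dpy, draw, ctx);
        if (ret && getenv("FORCE_FSAA") != NULL)
            glEnable(GL_MULTISAMPLE_ARB);   /* ARB_multisample enable */
        return ret;
    }

Built with something like 'gcc -shared -fPIC -o libtweakgl.so tweak.c -ldl'
and run via 'LD_PRELOAD=./libtweakgl.so app'.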

-- 
http://trikuare.cx





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Ian Romanick
On Wed, Dec 04, 2002 at 12:06:20PM -0800, magenta wrote:
 On Wed, Dec 04, 2002 at 11:06:01AM -0800, Allen Akin wrote:
  On Wed, Dec 04, 2002 at 12:57:44AM -0600, D. Hageman wrote:
  | This illustrates one of the bad points of using environment variables.  
  | Will we have to add environment variables every time a new app is pushed 
  | out the door?  Bad approach.  
  
  In general, if a bug affects every app, then the driver needs to be
  fixed.  Ian's scenario (and my reply) were about the case in which you
  want to change driver behavior for one app without affecting others.
 
 But this isn't about application bug workarounds, it's about users
 specifying hinting or forcing extensions to be active or whatever.

As I pointed out in another post, the same mechanism could be used for both.
There are enough corner cases in the OpenGL spec that an application could
do something that would just happen to work fine with one driver, but crash
horribly on another.  If that were to happen in, say, Maya or Doom 3 or some
other commercial app, the common practice on other systems is to provide a
driver-based workaround.

The ideal solution is to fix the app, but not all developers move at the
speed of open source. :)

 1. Users should be able to configure default behavior using configuration
 files (which would be selected based on argv[0] or similar)
 
 2. Users should be able to configure default behavior using environment
 variables (which would be configured on a per-application basis using
 wrapper scripts or a launcher program or similar)
 
 3. Users should not be able to configure default behavior; applications
 should specify all behavior explicitly if it matters, and expose this as an
 application-level configuration option to the user
 
 Personally, I'm torn between camps 1 and 3.

In terms of policy, camps 1 and 2 really are the same.  The difference
between 1 and 2 is just a matter of mechanism.

 Actually, I just thought of a solution which could possibly satisfy all
 three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
 overrides functionality as needed.  Want to force FSAA to be enabled?  Put
 it into glXCreateContext().  Want to force GL_RGB8 when the application
 chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
 when the application chooses GL_RGB8, you could do that too!
 
 Basically, I see no reason to put this configuration into the drivers
 themselves, as it could easily be done using an LD_PRELOADed library.

I think that is not a good idea.  We want to DISCOURAGE replacing /
modifying core libraries.  Not only that, virtually all of the behavior that
has been discussed here is device-dependent.  libGL.so is
device-independent.  I don't really see a point in having a device-dependent
wrapper.  That introduces the additional problem of having to have the
wrapper libGL.so and the *_dri.so in sync.  I see only headaches down that
path...

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread magenta
On Wed, Dec 04, 2002 at 12:18:03PM -0800, Ian Romanick wrote:
  1. Users should be able to configure default behavior using configuration
  files (which would be selected based on argv[0] or similar)
  
  2. Users should be able to configure default behavior using environment
  variables (which would be configured on a per-application basis using
  wrapper scripts or a launcher program or similar)
  
  3. Users should not be able to configure default behavior; applications
  should specify all behavior explicitly if it matters, and expose this as an
  application-level configuration option to the user
  
  Personally, I'm torn between camps 1 and 3.
 
 In terms of policy, camps 1 and 2 really are the same.  The difference
 between 1 and 2 is just a matter of mechanism.

And yet the debate has been so heated! ;)

  Actually, I just thought of a solution which could possibly satisfy all
  three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
  overrides functionality as needed.  Want to force FSAA to be enabled?  Put
  it into glXCreateContext().  Want to force GL_RGB8 when the application
  chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
  when the application chooses GL_RGB8, you could do that too!
  
  Basically, I see no reason to put this configuration into the drivers
  themselves, as it could easily be done using an LD_PRELOADed library.
 
 I think that is not a good idea.  We want to DISCOURAGE replacing /
 modifying core libraries.  Not only that, virtually all of the behavior that
 has been discussed here is device-dependent.  libGL.so is
 device-independent.  I don't really see a point in having a device-dependent
 wrapper.  That introduces the additional problem of having to have the
 wrapper libGL.so and the *_dri.so in sync.  I see only headaches down that
 path...

But the whole thing behind the discussion is that this is about users
tweaking behavior of games for quality/performance/etc., and this wouldn't
replace libGL, it'd just supplement it.  Like, libTweakGL or whatever.
It's not card-specific functionality which is being talked about here, and
I don't see why the functionality should go into the drivers.

Like, for the purpose of *correctness*, yeah, the app should do things
correctly to begin with (and not rely on undefined behavior).  But that's
not what the issue is, as I see it.  The issue appears to be that some
people want default behavior (hinting, internal texture quality, certain
disabled-by-default extensions, etc.) to be configurable by the user, and
people are proposing fixes which would needlessly complicate the individual
drivers.

My belief on this issue at this very moment is that libGL and the
individual DRI drivers should favor correctness and quality over speed, and
that external LD_PRELOADed tweak libraries should be used to override these
default behaviors.  It keeps the messy user configuration stuff out of DRI
(keeping the drivers simpler and avoiding the headache of how to actually
provide the configuration), it gives all of the functionality that the
"empower the users" camp is rallying for, and it neatly solves all of the
issues which have been talked about in this debate.

The LD_PRELOAD mechanism is quite clean, and doesn't require any
replacement or modification of core libraries (at least, not in the
"replace your /usr/X11R6/lib/libGL.so.1.2 with this one and hope things
don't break" way); it just allows the user to insert functionality which
wasn't there before, like how the esddsp tool uses LD_PRELOAD to replace
raw UNIX audio with esd calls, rather than requiring every application to
move to libesd to work with esd.  Some consider it ugly, but personally I
find it to be quite elegant.

-- 
http://trikuare.cx





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Nicholas Leippe
On Wednesday 04 December 2002 01:06 pm, you wrote:
 
 I basically see three camps in this discussion:
 
 1. Users should be able to configure default behavior using configuration
 files (which would be selected based on argv[0] or similar)
 
 2. Users should be able to configure default behavior using environment
 variables (which would be configured on a per-application basis using
 wrapper scripts or a launcher program or similar)
 
 3. Users should not be able to configure default behavior; applications
 should specify all behavior explicitly if it matters, and expose this as an
 application-level configuration option to the user

It seems to me that 2 and 3 are independent.  I don't see why the 
application's configuration doesn't just provide an interface to changing 
its own environment variables.  This would allow wrapper scripts to supply 
variables/values the application didn't know about when written, and let the 
application provide a nice interface to the user for changing them as well.

Wrapper scripts can provide both default settings (bashrc) and per-application 
settings just the same.
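
A tiny sketch of such a per-application launcher, written in C to match the
other examples in this thread; the variable name LIBGL_FORCE_RGB8 is
illustrative, not a real option:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Set a per-application driver tweak, then exec the real program. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }
        setenv("LIBGL_FORCE_RGB8", "1", 1);  /* 1 = overwrite existing */
        execvp(argv[1], argv + 1);           /* returns only on failure */
        perror("execvp");
        return 1;
    }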

It seems as if all of the levels of control people have been asking for in 
this thread can be satisfied via environment variables in one way or 
another--it seems to be the most flexible solution.


Nick





Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Ian Romanick
On Wed, Dec 04, 2002 at 02:35:39PM -0500, Leif Delgass wrote:
 On Tue, 3 Dec 2002, Ian Romanick wrote:
 
  Unless there are any objections, I'm going to commit a merge from the trunk
  to the texmem-0-0-1 branch tomorrow (Wednesday).  I've tested the merge on
  the R100, and I'll test it on an M6 and a G400 before I commit it.
 
 That's fine by me.  FYI, I've started trying to debug r128 in the texmem
 branch.  I've found some problems, but am still experiencing texture
 corruption.  The first problem I found is in the switch/case at
 r128_texmem.c:281 (r128UploadTexImages()).  Since the uploading of
 textures was moved from r128EmitHwStateLocked() to functions called from
 r128UpdateTextureState(), a texture isn't marked as bound until _after_
 it's uploaded, so the default case was being hit (with t->base.bound ==
 0).

I've actually moved it again, too.  I moved it to enable_tex_2d to match the
R100 / R200 drivers.

 Another problem I found is that r128DDBindTexture no longer sets the 
 R128_NEW_TEXTURE flag, and this prevents the texture state from being 
 updated when an app switches textures.  For example: running tunnel, I get 
 the floor texture on the walls, but if I set R128_NEW_TEXTURE in 
 r128DDBindTexture, the wall textures and floor textures appear in the 
 right places.  How do the radeon/r200 drivers work without setting the 
 NEW_TEXTURE flag there?  Also, shouldn't it unbind the texture currently 
 bound to that texture unit?

Ah-ha!  The R128 driver tracks changes to texture state on its own, but the
R100 / R200 drivers just let Mesa do it.  When the state changes, Mesa calls
the driver's UpdateState function (r128DDInvalidateState &
radeonInvalidateState) and passes it new_state.  If texture state has
changed, _NEW_TEXTURE will be set in new_state.  The changes that I'm going to
commit today are:

- Remove all usage of R128_NEW_TEXTURE.
- Modify r128UpdateHwState to test _NEW_TEXTURE in NewGLState

I suspect that will fix the texture problems.  Somebody (that actually has
Rage128 hardware!) should go through and eliminate the new_state field from
r128_context altogether.  I will make similar changes to the MGA driver.  It
would be nice to have fundamental things, like tracking state changes, as
similar as possible across the various drivers.  It makes it easier to move
from driver-to-driver to fix bugs and make enhancements.

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Jens Owen
magenta wrote:


I basically see three camps in this discussion:

1. Users should be able to configure default behavior using configuration
files (which would be selected based on argv[0] or similar)

2. Users should be able to configure default behavior using environment
variables (which would be configured on a per-application basis using
wrapper scripts or a launcher program or similar)

3. Users should not be able to configure default behavior; applications
should specify all behavior explicitly if it matters, and expose this as an
application-level configuration option to the user

Personally, I'm torn between camps 1 and 3.


I'm squarely in camp 3 based on Allen's rationale and his experience.


Actually, I just thought of a solution which could possibly satisfy all
three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
overrides functionality as needed.  Want to force FSAA to be enabled?  Put
it into glXCreateContext().  Want to force GL_RGB8 when the application
chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
when the application chooses GL_RGB8, you could do that too!

Basically, I see no reason to put this configuration into the drivers
themselves, as it could easily be done using an LD_PRELOADed library.



The Chromium project has been doing this for a while.  At SIGGRAPH, I 
saw a demo of quake3 running in wireframe mode using this type of trick.

Let's strive to keep as much unneeded complexity as we can out of the 
drivers.

--
Jens Owen
[EMAIL PROTECTED]
Steamboat Springs, Colorado





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Dieter Nützel
On Wednesday, 4 December 2002 21:18, Ian Romanick wrote:
 On Wed, Dec 04, 2002 at 12:06:20PM -0800, magenta wrote:
  On Wed, Dec 04, 2002 at 11:06:01AM -0800, Allen Akin wrote:
   On Wed, Dec 04, 2002 at 12:57:44AM -0600, D. Hageman wrote:
   | This illustrates one of the bad points of using environment
   | variables. Will we have to add environment variables every time a new
   | app is pushed out the door?  Bad approach.
  
   In general, if a bug affects every app, then the driver needs to be
   fixed.  Ian's scenario (and my reply) were about the case in which you
   want to change driver behavior for one app without affecting others.
 
  But this isn't about application bug workarounds, it's about users
  specifying hinting or forcing extensions to be active or whatever.

 As I pointed out in another post, the same mechanism could be used for
 both. There are enough corner cases in the OpenGL spec that an application
 could do something that would just happen to work fine with one driver, but
 crash horribly on another.  If that were to happen in, say, Maya or Doom 3
 or some other commercial app, the common practice on other systems is to
 provide a driver-based workaround.

 The ideal solution is to fix the app, but not all developers move at the
 speed of open source. :)

  1. Users should be able to configure default behavior using configuration
  files (which would be selected based on argv[0] or similar)
 
  2. Users should be able to configure default behavior using environment
  variables (which would be configured on a per-application basis using
  wrapper scripts or a launcher program or similar)
 
  3. Users should not be able to configure default behavior; applications
  should specify all behavior explicitly if it matters, and expose this as
  an application-level configuration option to the user
 
  Personally, I'm torn between camps 1 and 3.

 In terms of policy, camps 1 and 2 really are the same.  The difference
 between 1 and 2 is just a matter of mechanism.

Yes, I'll second this.
I mentioned '/etc/mesa.conf' only as an example, but I can live without it.
The Qt/Xft way sounds good to me.
Let the tweak tools (GNOME, KDE, etc.) write to it, and then you have the
same functionality as environment variables.

But let's start with the Right (tm) defaults for the cards (a default user
config) in libGL.

-Dieter





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Ian Romanick
On Wed, Dec 04, 2002 at 01:57:48PM -0700, Nicholas Leippe wrote:
 It seems as if all of the levels of control people have been asking for in 
 this thread can be satisfied via environment variables in one way or 
 another--it seems to be the most flexible solution.

The problem with env vars is that if they change (or new ones are added, or
old ones removed, or the user changes hardware, or ...) all of the scripts
that set them have to change.  It's not unmanageable, but it would be a
fair amount of work.

Now, imagine the drivers having an interface that a tool (for creating app.
profiles) could query.  The driver would send back (perhaps using XML or
something similar?) a list of knobs that it has in the form:

- Short name
- Long description
- Type (boolean, range, etc.)
- Default value (perhaps as mandated by the OpenGL standard)
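
A sketch of what one such knob description might look like in C; all names
here are illustrative, not an actual DRI interface:

    /* One entry in the list of knobs a driver could export. */
    typedef enum { KNOB_BOOLEAN, KNOB_RANGE } knob_type;

    typedef struct {
        const char *name;          /* short name */
        const char *description;   /* long, human-readable description */
        knob_type   type;
        int         def;           /* default value */
        int         min, max;      /* meaningful for KNOB_RANGE */
    } driver_knob;

    static const driver_knob r200_knobs[] = {
        { "no_tcl",        "Disable hardware TCL", KNOB_BOOLEAN, 0, 0, 1 },
        { "texture_depth", "Default internal texture depth (bits)",
          KNOB_RANGE, 16, 16, 32 },
    };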

The tool could be something as simple as a shell utility to tell the user
what options are available for the driver.  That would be a step up from the
current 'grep getenv xc/xc/lib/GL/mesa/src/drv/<driver name>/*.c'. :)  It
would also save us from having to maintain a web page of all the different
env vars for each driver.  Internationalization would be a problem, though.

The neat thing is that if other (closed-source) drivers supported the query
interface & config file format, they could use the same tool.  I don't see
how you could do the same with a wrapper library.  How could the wrapper
know how to disable some extension in the Nvidia driver?

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Alan Hourihane
On Wed, Dec 04, 2002 at 01:23:26 -0800, Ian Romanick wrote:
 On Wed, Dec 04, 2002 at 02:35:39PM -0500, Leif Delgass wrote:
  On Tue, 3 Dec 2002, Ian Romanick wrote:
  
   Unless there are any objections, I'm going to commit a merge from the trunk
   to the texmem-0-0-1 branch tomorrow (Wednesday).  I've tested the merge on
   the R100, and I'll test it on an M6 and a G400 before I commit it.
  
  That's fine by me.  FYI, I've started trying to debug r128 in the texmem
  branch.  I've found some problems, but am still experiencing texture
  corruption.  The first problem I found is in the switch/case at
  r128_texmem.c:281 (r128UploadTexImages()).  Since the uploading of
  textures was moved from r128EmitHwStateLocked() to functions called from
  r128UpdateTextureState(), a texture isn't marked as bound until _after_
  it's uploaded, so the default case was being hit (with t->base.bound ==
  0).
 
 I've actually moved it again, too.  I moved it to enable_tex_2d to match the
 R100 / R200 drivers.
 
  Another problem I found is that r128DDBindTexture no longer sets the 
  R128_NEW_TEXTURE flag, and this prevents the texture state from being 
  updated when an app switches textures.  For example: running tunnel, I get 
  the floor texture on the walls, but if I set R128_NEW_TEXTURE in 
  r128DDBindTexture, the wall textures and floor textures appear in the 
  right places.  How do the radeon/r200 drivers work without setting the 
  NEW_TEXTURE flag there?  Also, shouldn't it unbind the texture currently 
  bound to that texture unit?
 
 Ah-ha!  The R128 driver tracks changes to texture state on its own, but the
 R100 / R200 drivers just let Mesa do it.  When the state changes, Mesa calls
 the driver's UpdateState function (r128DDInvalidateState &
 radeonInvalidateState) and passes it new_state.  If texture state has
 changed, _NEW_TEXTURE will be set in new_state.  The changes that I'm going to
 commit today are:
 
 - Remove all usage of R128_NEW_TEXTURE.
 - Modify r128UpdateHwState to test _NEW_TEXTURE in NewGLState
 
 I suspect that will fix the texture problems.  Somebody (that actually has
 Rage128 hardware!) should go through and eliminate the new_state field from
 r128_context altogether.  I will make similar changes to the MGA driver.  It
 would be nice to have fundamental things, like tracking state changes, as
 similar as possible across the various drivers.  It makes it easier to move
 from driver-to-driver to fix bugs and make enhancements.

I'm using an R128 for some work I'm doing at the moment. I'll take a look
when the trunk merge is done.

Alan.





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread D. Hageman
On Wed, 4 Dec 2002, magenta wrote:
 
 Actually, I just thought of a solution which could possibly satisfy all
 three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
 overrides functionality as needed.  Want to force FSAA to be enabled?  Put
 it into glXCreateContext().  Want to force GL_RGB8 when the application
 chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
 when the application chooses GL_RGB8, you could do that too!
 
 Basically, I see no reason to put this configuration into the drivers
 themselves, as it could easily be done using an LD_PRELOADed library.

That isn't a decent solution.  You would have to have a large number of 
wrappers lying around to support all the possible hints/options a 
person would want to use.  It is probably the worst in terms of user 
friendliness as well.

Next please ...

-- 
D. Hageman  [EMAIL PROTECTED]





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread magenta
On Wed, Dec 04, 2002 at 01:57:48PM -0700, Nicholas Leippe wrote:
 On Wednesday 04 December 2002 01:06 pm, you wrote:
  
  I basically see three camps in this discussion:
  
  1. Users should be able to configure default behavior using configuration
  files (which would be selected based on argv[0] or similar)
  
  2. Users should be able to configure default behavior using environment
  variables (which would be configured on a per-application basis using
  wrapper scripts or a launcher program or similar)
  
  3. Users should not be able to configure default behavior; applications
  should specify all behavior explicitly if it matters, and expose this as an
  application-level configuration option to the user
 
 It seems to me that 2 and 3 are independent.  I don't see why the 
 application's configuration doesn't just provide an interface to changing 
 its own environment variables.  This would allow wrapper scripts to supply 
 variables/values the application didn't know about when written, and let the 
 application provide a nice interface to the user for changing them as well.

The problem with 3 is that then new features can't necessarily be added
back into older closed-source applications.  For example, FSAA and
anisotropic filtering and so on.  It could be easily overridden at the
libGL level (using LD_PRELOAD) without requiring lots of cruft in the
drivers themselves; I don't see why the drivers should get unnecessary
functionality like this when it could be provided by external solutions
(external both to the drivers *and* the applications).

 Wrapper scripts can provide both default settings (bashrc) and per-application 
 settings just the same.

One would hope that the individual applications aren't using environment
variables to store all of their configuration. :P

 It seems as if all of the levels of control people have been asking for in 
 this thread can be satisfied via environment variables in one way or 
 another--it seems to be the most flexible solution.

What about remote indirect rendering?  Someone else has already mentioned
that the driver would have no way of getting environment variables in that
case.

I just don't see why everyone wants to put this functionality into the
driver itself; IMO, it just adds unnecessary complexity to the drivers.

The purpose for an LD_PRELOADed library would be to provide tweaks to
applications which *don't* provide the configuration.  Which is really the
whole issue here - people want to be able to configure behavior that
certain applications don't allow to be configured.  There's no reason for
the driver to do this, and for those applications where the source can't be
modified, why not just LD_PRELOAD the functionality in externally?

-- 
http://trikuare.cx





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread magenta
On Wed, Dec 04, 2002 at 02:33:11PM -0700, Jens Owen wrote:
 magenta wrote:
  
  3. Users should not be able to configure default behavior; applications
  should specify all behavior explicitly if it matters, and expose this as an
  application-level configuration option to the user
  
  Personally, I'm torn between camps 1 and 3.
 
 I'm squarely in camp 3 based on Allen's rationale and his experience.

Yeah, I am now too, after talking about the LD_PRELOAD idea.  (I should
have gone back and edited my message, which I wrote in a
stream-of-consciousness manner.)

  Actually, I just thought of a solution which could possibly satisfy all
  three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
  overrides functionality as needed.  Want to force FSAA to be enabled?  Put
  it into glXCreateContext().  Want to force GL_RGB8 when the application
  chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
  when the application chooses GL_RGB8, you could do that too!
  
  Basically, I see no reason to put this configuration into the drivers
  themselves, as it could easily be done using an LD_PRELOADed library.
  
 
 The Chromium project has been doing this for a while.  At SigGraph, I 
 saw a demo of quake3 running in wire frame mode using this type of trick.

I wanted to see that, but it was one of those talks which I managed to miss
all but the last 5 minutes of. :)  I thought that Chromium was a complete
libGL replacement (for the purpose of clustered rendering), though, and
that the wireframe was really a complete reformatting of the OpenGL
indicate.  At least, that's what the paper in the proceedings seems to
indicate.

 Let's strive to keep as much unneeded complexity as we can out of the 
 drivers.

I definitely agree.

-- 
http://trikuare.cx





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Ian Romanick
On Wed, Dec 04, 2002 at 01:49:34PM -0800, magenta wrote:

 What about remote indirect rendering?  Someone else has already mentioned
 that the driver would have no way of getting environment variables in that
 case.

Remote indirect rendering is a problem no matter how you slice it.

 I just don't see why everyone wants to put this functionality into the
 driver itself; IMO, it just adds unnecessary complexity to the drivers.
 
 The purpose for an LD_PRELOADed library would be to provide tweaks to
 applications which *don't* provide the configuration.  Which is really the
 whole issue here - people want to be able to configure behavior that
 certain applications don't allow to be configured.  There's no reason for
 the driver to do this, and for those applications where the source can't be
 modified, why not just LD_PRELOAD the functionality in externally?

Here's another example that somebody just reminded me about.  Quite a few of
the CAD cards out there have ways to tune internal optimization
parameters.  These can be things as simple as what vertex format to prefer
(i.e., float colors vs. packed ubyte colors) to much more complex things.
Looking at the drivers for the FireGL 4, it uses two cryptic 32-bit ints in
XF86Config to communicate this to the driver.  Its configuration tool has
profiles for 4 different apps (including Maya & Softimage).  Admittedly,
this isn't the right solution either, but it is another data point.

As far as I can tell, there is no way either an app or a wrapper library
could communicate this information to the driver.  Yet shipping high-end
drivers support it, and demanding users expect this level of
application-to-driver tuning.

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Ian Romanick
On Tue, Dec 03, 2002 at 05:05:26PM -0800, Ian Romanick wrote:
 Unless there are any objections, I'm going to commit a merge from the trunk
 to the texmem-0-0-1 branch tomorrow (Wednesday).  I've tested the merge on
 the R100, and I'll test it on an M6 and a G400 before I commit it.

I am in the process of committing the merge.  By the time this message gets
to most people, it should be done.  I have tested on r100, M6, and G400
hardware.  The r200 & r128 drivers pass the "it compiles" test.

Merging in the r200 changes causes some changes in other parts of the code.
I modified the routines in texmem.c to calculate mipmap and size limits for 3D
textures, cube maps, and texture rectangles.  Right now the r200 is the only
driver that supports these texture types, so I have not tested this code
thoroughly.  I would greatly appreciate it if somebody could compare the
output of 'glxinfo -l' (from a recent Mesa build) on r200 from the trunk and
the branch.  A slightly different calculation method is used, so the results
may not be exactly the same, but they should be close.

I made some additional changes to the locking, texture upload, and state
tracking in the r128 driver.  This was done to make it more like the r100
and r200 drivers.  I believe that it is correct, but I may have
inadvertently introduced errors.

Could somebody with the appropriate documentation add the missing #defines
to radeon_reg.h to support texture rectangles, 3D textures, and cubic
textures?  It should be pretty trivial to back-port support for these
features from the r200 driver once the registers are known.

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Ian Romanick
On Wed, Dec 04, 2002 at 02:30:03PM -0800, Ian Romanick wrote:
 On Tue, Dec 03, 2002 at 05:05:26PM -0800, Ian Romanick wrote:
  Unless there are any objections, I'm going to commit a merge from the trunk
  to the texmem-0-0-1 branch tomorrow (Wednesday).  I've tested the merge on
  the R100, and I'll test it on an M6 and a G400 before I commit it.
 
 I am in the process of committing the merge.  By the time this message gets
 to most people, it should be done.  I have tested on r100, M6, and G400
 hardware.  The r200 & r128 drivers pass the "it compiles" test.

Scratch that.  CVS died mid-commit.  I am now waiting for idr's lock
in ...  Grr!  I'll try again in the morning.  I'll send out e-mail when the
commit is actually done.  Sigh...

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-users] Re: [Dri-devel] Radeon x86 PCI [Was: help selecting a graphics card, and some general questions]

2002-12-04 Thread Keith Gross
Might it not be possible to eliminate all the PCIGART_ENABLED stuff and, for 
the time being, control this in XF86Config?  If you have a PCI card, you 
use ForcePCIMode true.  If you have an AGP card, you use either ForcePCIMode 
false or just say nothing, and the driver assumes AGP.  This way the PCI GART 
gets more testing, and a lot of people like me don't spend many frustrating 
hours figuring out that PCI Radeons are not supported by default and then 
having to build their own to get it working.
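
For concreteness, a hypothetical XF86Config fragment of the kind described
above (section contents abbreviated):

    Section "Device"
        Identifier "Radeon"
        Driver     "radeon"
        Option     "ForcePCIMode" "true"   # PCI card: use the PCI GART
    EndSection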

On Wednesday 04 December 2002 03:23 pm, Michel Dänzer wrote:
 On Wed, 2002-12-04 at 15:27, José Fonseca wrote:
  On Wed, Dec 04, 2002 at 02:48:50PM +0100, Michel Dänzer wrote:
   On Wed, 2002-12-04 at 12:52, Keith Whitwell wrote:
José Fonseca wrote:
 Is there any reason not to enable x86 PCI support on Radeon?
   
I think nobody's been able to make it work stably.
  
   I don't think PCI cards work less stably than AGP cards per se, the
   main concern is AGP cards falling back to PCI GART when agpgart isn't
   available for some reason. I wonder if there's a way to determine the
   slot type from the 2D driver?
 
  If that's the reason then the solution couldn't be simpler. See the
  patch attached.

 Unfortunately, all Radeons are actually AGP chips, so IsPCI is never set
 automatically, owners of PCI cards would have to use Option
 "ForcePCIMode". My idea was to determine the type of GART to use from
 the type of slot the card is connected to.

 Besides, I'd remove all the PCIGART_ENABLED ugliness while we're at it.






Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Keith Whitwell


I suspect that will fix the texture problems.  Somebody (that actually has
Rage128 hardware!) should go through and eliminate the new_state field from
r128_context altogether.  I will make similar changes to the MGA driver.  It
would be nice to have fundamental things, like tracking state changes, as
similar as possible across the various drivers.  It makes it easier to move
from driver-to-driver to fix bugs and make enhancements.


This dates from Mesa 3.x, where the Mesa state tracking mechanism was designed 
around the software rasterizer and useless for anything else.

Keith






Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread magenta
On Wed, Dec 04, 2002 at 02:21:30PM -0800, Ian Romanick wrote:
 
 As far as I can tell, there is no way either an app or a wrapper library
 could communicate this information to the driver.  Yet shipping high-end
 drivers support it, and demanding users expect this level of
 application-to-driver tuning.

A wrapper library doesn't have to communicate any information to the driver
- it just intercepts the function calls and turns them into something based
on the user's preference.

Like, here, a concrete example, based on the topic which sparked this whole
discussion to begin with.  Let's say an application does

    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

and the driver decides that GL_RGB should default to GL_RGB4.  But the user
doesn't want that to happen, so they configure the wrapper library to
intercept that call and turn it into:

    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

See what I'm saying?  The wrapper library wouldn't explicitly tell the
driver anything, it'd just make hints based on user preferences, rather
than based on driver default.

Or, for a more complex idea, let's say the user wants to force wireframe
rendering and FSAA.  Probably the easiest way for this to happen is for the
wrapper library to have something like:

    Bool glXMakeCurrent(Display *dpy, GLXDrawable drawable, GLXContext ctx)
    {
        Bool ret = real_glXMakeCurrent(dpy, drawable, ctx);
        SetupOverriddenStuff();
        return ret;
    }

    void SetupOverriddenStuff(void)
    {
        if (override_wireframe)
            real_glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        if (override_fsaa)
            real_glEnable(GL_MULTISAMPLE_ARB);  /* from ARB_multisample */
        // ...
    }

and then the overridden glPolygonMode would be, for example,

    void glPolygonMode(GLenum face, GLenum mode)
    {
        if (!override_wireframe)
            real_glPolygonMode(face, mode);
    }

and so on.

See, the wrapper wouldn't have to communicate directly to the driver in
order to do any of what's been discussed - it would override it *based on
user preferences* using the existing high-level functionality provided by
OpenGL itself.

-- 
http://trikuare.cx





Re: [Dri-devel] Trunk-to-texmem merge

2002-12-04 Thread Leif Delgass
On Wed, 4 Dec 2002, Ian Romanick wrote:

 On Wed, Dec 04, 2002 at 02:30:03PM -0800, Ian Romanick wrote:
  On Tue, Dec 03, 2002 at 05:05:26PM -0800, Ian Romanick wrote:
   Unless there are any objections, I'm going to commit a merge from the trunk
   to the texmem-0-0-1 branch tomorrow (Wednesday).  I've tested the merge on
   the R100, and I'll test it on an M6 and a G400 before I commit it.
  
  I am in the process of committing the merge.  By the time this message gets
  to most people, it should be done.  I have tested on r100, M6, and G400
  hardware.  The r200 & r128 drivers pass the "it compiles" test.
 
 Scratch that.  CVS died mid-commit.  I am now waiting for idr's lock
 in ...  Grr!  I'll try again in the morning.  I'll send out e-mail when the
 commit is actually done.  Sigh...

That sucks.  You'll probably need to submit a support request to sf.net to
get the lock removed.  Better luck on the next attempt...

-- 
Leif Delgass 
http://www.retinalburn.net






[Dri-devel] Wrapper library stuff (was: Re: Smoother graphics with 16bpp on radeon)

2002-12-04 Thread magenta
Another note: A third-party tweak library could conceivably convert calls
for S3TC functionality into appropriate calls for ARB_texture_compression
instead.
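
A sketch of one such substitution, assuming the application requests
compression through glTexImage2D's internalformat (pre-compressed
glCompressedTexImage2DARB uploads couldn't be remapped this blindly, since
the client data is already DXT-encoded):

    #include <GL/gl.h>
    #include <GL/glext.h>   /* S3TC and ARB_texture_compression enums */

    /* Map S3TC-specific internal formats onto the generic ARB ones. */
    static GLint remap_internal_format(GLint internalformat)
    {
        switch (internalformat) {
        case GL_COMPRESSED_RGB_S3TC_DXT1_EXT:
            return GL_COMPRESSED_RGB_ARB;
        case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT:
            return GL_COMPRESSED_RGBA_ARB;
        default:
            return internalformat;
        }
    }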

-- 
http://trikuare.cx





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread magenta
On Wed, Dec 04, 2002 at 02:30:31PM -0600, D. Hageman wrote:
 On Wed, 4 Dec 2002, magenta wrote:
  
  Actually, I just thought of a solution which could possibly satisfy all
  three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
  overrides functionality as needed.  Want to force FSAA to be enabled?  Put
  it into glXCreateContext().  Want to force GL_RGB8 when the application
  chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
  when the application chooses GL_RGB8, you could do that too!
  
  Basically, I see no reason to put this configuration into the drivers
  themselves, as it could easily be done using an LD_PRELOADed library.
 
 That isn't a decent solution.  You would have to have a large number of 
 wrappers lying around to support all the possible hints/options a 
 person would want to use.  It is probably the worst in terms of user 
 friendliness as well.

Um, why couldn't a single wrapper override all of the calls it needs to
override for the purpose of providing the functionality of a tweak utility?
A single library could easily provide every user-configurable setting here.

-- 
http://trikuare.cx





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread D. Hageman
On Wed, 4 Dec 2002, magenta wrote:

 On Wed, Dec 04, 2002 at 02:30:31PM -0600, D. Hageman wrote:
  On Wed, 4 Dec 2002, magenta wrote:
   
   Actually, I just thought of a solution which could possibly satisfy all
   three camps: have a libGL wrapper library (loaded via LD_PRELOAD) which
   overrides functionality as needed.  Want to force FSAA to be enabled?  Put
   it into glXCreateContext().  Want to force GL_RGB8 when the application
   chooses GL_RGB?  Do it in glTexImage().  Hey, if you want to force GL_RGB4
   when the application chooses GL_RGB8, you could do that too!
   
   Basically, I see no reason to put this configuration into the drivers
   themselves, as it could easily be done using an LD_PRELOADed library.
  
  That isn't a decent solution.  You would have to have a large number of 
  wrappers lying around to support all the possible hints/options a 
  person would want to use.  It is probably the worst in terms of user 
  friendliness as well.
 
 Um, why couldn't a single wrapper override all of the calls it needs to
 override for the purpose of providing the functionality of a tweak utility?
 A single library could easily provide every user-configurable setting here.

Okay, I guess I can see what you are saying now ... it still isn't 
exactly what I would call an elegant solution.  The more I think about it, 
the more I cringe at the thought of it.  We don't have to make workarounds 
and cheap hacks to accomplish this -- we have the source code ... we can 
do it right.

'nuff on that ...

I guess the best question is: since this idea has caused a lot of heat, but 
everyone seems to agree that it would be a nice idea, how do we decide 
where to go next with it?


-- 
D. Hageman  [EMAIL PROTECTED]





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Allen Akin
On Wed, Dec 04, 2002 at 02:21:30PM -0800, Ian Romanick wrote:
| Remote indirect rendering is a problem no matter how you slice it.

Well, maybe not if you handle preference-setting at the application
level, rather than trying to do it at the library or driver levels.
Then it can be dynamic, or there can be multiple sets of preferences for
local vs. remote connections, or different preferences can be used
simultaneously if the app has both types of connections open at the same
time.

| Here's another example that somebody just reminded me about.  Quite a few of
| the CAD cards out there have ways to tune internal optimization
| parameters.  These can be things as simple as what vertex format to prefer
| (i.e., float colors vs. packed ubyte colors) ...

I'm confused about this one.  Surely the driver knows which vertex
formats are efficient.  Is this a space/time tradeoff hint that's given
to the driver for controlling display-list compilation?  Or something
more sophisticated?  Or a tool for optimizing benchmark results?

| As far as I can tell, there is no way either an app or a wrapper library
| could communicate this information to the driver.

The usual way to solve this kind of problem is with an extension.  That
way the app can control which vertex formats are used for which display
lists (for example) based on how it knows the dlists will be used.
Otherwise, the driver has to apply the vertex format preference to all
dlists, and it's easy to see how that could make performance worse or
even lead to poor rendering (if the colors aren't stored with enough
precision for some dlists).

Yes, this requires source-code changes in the app.  But if the
functionality is genuinely valuable, once one vendor provides it, the
market will drive other hardware vendors to provide it and app
developers to use it.  And the OpenGL extension mechanism provides a
portable way to access the feature.

Controlling this sort of stuff with a driver-level preference is
sometimes useful as a temporary workaround, or as a solution to
political problems like finessing benchmarking rules, but it isn't
something you'd want to depend on in the long run because it has too
many failure modes.

If folks want to spend effort on this they should be aware that it's a
fragile mechanism with a lot less return for the effort than simply
handling preferences at the right level (in the apps).  Don't expect it
to solve the hard problems.

Allen





Re: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Allen Akin
On Wed, Dec 04, 2002 at 01:39:19PM -0800, Ian Romanick wrote:
| 
| Now, imagine the drivers having an interface that a tool (for creating app.
| profiles) could query.  The driver would send back (perhaps using XML or
| something similar?) a list of knobs that it has in the form:
| 
| - Short name
| - Long description
| - Type (boolean, range, etc.)
| - Default value (perhaps as mandated by the OpenGL standard)
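
(As a concrete illustration, one reply to such a query might come back
looking roughly like this -- the element and attribute names here are
invented for the example, not a real interface:)

 <knob name="no_tcl" type="boolean" default="false">
   <description>Fall back from hardware transform-and-lighting
   to software vertex processing.</description>
 </knob>
 <knob name="texture_lod_bias" type="range" min="-2.0" max="2.0"
       default="0.0">
   <description>Bias applied to texture level-of-detail
   selection.</description>
 </knob>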

That's a good design for controlling driver-level preferences, if you're
determined to spend time on that approach.

Before coding it up, a good exercise (which we could do online) is to
generate a list of knobs that would apply to all drivers.  I think
that'll be instructive. :-)

Allen





glTune Proposal (was RE: [Dri-devel] Smoother graphics with 16bpp on radeon)

2002-12-04 Thread Alexander Stohr
I have read almost 80% of the discussion
and want to give you a fairly bold scheme
for how all of this could be handled in terms of
a real-world system:


You'd write an extension to the drivers that
advertises every environment variable they query.
This should follow a checking scheme similar
to the ones used for the exported GL extensions,
the known driver-specific config file options,
and the imported XF86 module symbols.
Any advertised environment variable would be allowed
by the XF86 system to be parsed by the drivers.
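
(Inside a driver, the advertised list could be as simple as a static
table the extension hands out. A sketch -- the struct layout and names
are illustrative only, though the two variables match the sample
below:)

 /* Driver-side table of advertised environment variables. */
 struct envvar_desc {
     const char *name;    /* variable the driver parses   */
     const char *deflt;   /* default value                */
     const char *help;    /* one-line description         */
 };

 static const struct envvar_desc r200_env_vars[] = {
     { "LIBGL_NO_TCL",  "0", "disable hardware TCL"          },
     { "LIBGL_MAX_LOD", "6", "clamp texture level of detail" },
     { NULL, NULL, NULL }  /* terminator */
 };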


On the client side there is a shell application
which I will call `gltune` for now. This
application queries libGL and the
driver behind it for their respective environment
parameters, and can further query their current
state and their default state. It is
able to write those values out to the shell:


 # current settings of libGL version 1.2
 LIBGL_ALWAYS_INDIRECT=0
 LIBGL_NUMBER_OF_LIGHTS=4
 # current settings of r200 version 4.1.0
 LIBGL_NO_TCL=0
 LIBGL_MAX_LOD=6


(This looks quite similar to what you might see in a Samba config.)


With this output you get a full overview 
of the current state. You should be able to pass 
that data back to the shell. There should be a 
gltune option that prefixes the output
so that, e.g., sourcing it with bash is possible.
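
(For example, assuming a hypothetical --bash switch:)

 $ gltune --bash
 export LIBGL_ALWAYS_INDIRECT=0
 export LIBGL_NO_TCL=0
 $ eval "$(gltune --bash)"   # pull the current settings into this shell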


For this there is no need for a write-back option in the program,
though it would be possible to let it work in the other direction too.
Anyway, I don't think global options should get merged into such 
a per-client and per-terminal scheme.


Of course there is the possibility of attaching a GUI
tool to that ENV-NAMES extension, which might then
provide profile management, allow a big
bunch of help files in some central location, and offer other
ways of giving the ordinarily skilled user good hints
on every reported environment setting.


Sample:
 Profile: Quake3 [Load] [Save] [Reset]
 [page1] [page2] [page3]
 Acceleration Level: [help]
 ( ) software rendering
 (o) hardware rendering w/o TCL
 ( ) hardware rendering with TCL
 [x] LIBGL_AA - enforces antialiasing [help]
 [6] LIBGL_MAX_LOD - level of detail [help]
 [browse unclassified only] [browse all]
 [Launch Application] [Launch Shell] [Quit]


I am not a Tcl/Tk freak or anything, but I think
a set of config files should provide all the extra
information for a specific graphics adapter;
if there is no tailored config yet, the tool
should at least recognize the basic switches
and offer the rest in, e.g., an alphabetical
listing. Help text should be quite easy to maintain.


You probably get the idea by now.


I mean, that would be ease of use -- and it really
need not break with the existing scheme.
It's just a front end serving the low level.


There is one drawback that I should not be silent about:
you will not be able to deal with large numbers of
environment variables effectively, because all that
checking and counter-checking against lists will be
time consuming. Generic variables like LIGHT_1 through
LIGHT_32767 are not a target of this simple scheme.


You see, you can get anything from shell variables --
even the GUI front end and the profiles. Flexibility
is the nature of shell variables, in contrast to 
binary interfaces, where you always have to worry
about compatibility whenever a single change happens.


-Alex.





RE: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Alexander Stohr
  What about remote indirect rendering? Someone else has already
  mentioned that the driver would have no way of getting environment
  variables in that case.
 
 Remote indirect rendering is a problem no matter how you slice it.


With a precise ENV-NAME method you can tell the client which 
parts of the environment it should send to the server. Maybe there
is some way of sending the application name as well; that would
allow auto-selecting profiles. Just specify a list of application
names for each profile and you are done. But profile handling
would become a new server-specific task.
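
(Such a server-side mapping could be little more than a config file
along these lines -- a hypothetical sketch, with an invented syntax:)

 # per-application profiles on the GLX server, invented format
 [profile quake3]
 applications = quake3, quake3.x86
 LIBGL_NO_TCL  = 0
 LIBGL_MAX_LOD = 6

 [profile default]
 applications = *
 LIBGL_NO_TCL = 1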


 Looking at the drivers for the FireGL 4, it uses two cryptic 32-bit
 ints in XF86Config to communicate this to the driver. Its
 configuration tool has profiles for 4 different apps (including Maya
 and Softimage). Admittedly, this isn't the right solution either, but
 it is another data point.


It's a static way of specifying driver behaviour.
While programming for DirectX I was responsible for
fixing dynamic resource allocations that were mutually
exclusive with each other. Because of that I would not
recommend making anything configurable on a per-application basis.


Just imagine concurrent start up and shut down
of applications - when would you allow which feature?


-Alex.





RE: [Dri-devel] Smoother graphics with 16bpp on radeon

2002-12-04 Thread Alexander Stohr
The layer idea is not bad,
but it has more the taste of a hack.
Remember that DRI is open source,
so you don't need such hacks.


As soon as you start on that, you will notice that a layer
increases the distance between your application and the drivers 
on nearly every call. You don't really want that.


Further, you can't ensure that you have covered all the paths,
because GL is an extensible system that may open up new,
highly relevant entry points. And you might have to keep track 
of the numerous render state variables in order to keep
things consistent and to know when to intercept and
when not to.


I think it's easier to turn on certain features in the driver
than anywhere else. There may be features that you cannot
control at all from an intermediate layer.
(Remember the FireGL's big-focus and stereo support.)


-Alex.