Re: tdfx and DDC2

2005-08-30 Thread Ian Romanick
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tim Roberts wrote:
 Michael wrote:
 
 I don't see why they should be enabled - they're PC-specific and even
 with x86 emulation they would be pretty much useless since you're not
 too likely to encounter a graphics board with PC firmware in a Mac ( or
 other PowerPC boxes )
 
 Wrong.  No hardware manufacturer in their right mind would build a
 Mac-only PCI graphics board, with the possible exception of Apple. 
 They're going to build a generic graphics board that works in a PC and
 by the way also works in a Mac.  Such a board will have a video BIOS.

That is 100% untrue.  Take *any* AGP or PCI card, with one* exception,
made for the Mac and it will not work in a PC.  Macs (and Suns and IBM
pSeries) use OpenFirmware (byte-code compiled Forth) and PCs use
compiled x86 for their respective firmwares.  Neither one works with the
other.

Some people have had limited success reflashing PC cards with Mac
firmware, but I don't think that counts.

* http://apps.ati.com/ir/PressReleaseText.asp?compid=105421releaseI
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.6 (GNU/Linux)

iD8DBQFDFLTwX1gOwKyEAw8RAnIaAJ4nIQh9s+lKW9n7XWyCKx/1HBzfSACfblqv
pslJWtJ5D7StoYOSGlz8tPE=
=Xs6N
-END PGP SIGNATURE-
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Darwin extern/static fix

2005-04-13 Thread Ian Romanick
Torrey Lyons wrote:
At 3:42 PM -0400 4/13/05, David Dawes wrote:
On Wed, Apr 13, 2005 at 11:52:47AM -0700, Torrey Lyons wrote:
Bugzilla #1576 and the fix committed for it are only partially right.
The applewmExt.h patch is right, but patching the imported Mesa code
in extras/Mesa/include/GL/internal/dri_interface.h is the wrong thing
to do and likely has unintended side effects on other platforms. The
correct fix is just to rename __driConfigOptions in
lib/GL/apple/dri_glx.c. Thanks for pointing out the issue.

I didn't find anything that requires the external declaration of
__driConfigOptions, which is why I applied the patch as submitted.
Perhaps something should in the BUILT_IN_DRI_DRIVER case.  There
are also likely other issues with the BUILT_IN_DRI_DRIVER case.
Yes, I don't know of a specific issue, but it seems like bad practice to 
change an imported header file when we don't need to. The names I came 
up with in apple/dri_glx.c are completely arbitrary. Now that in gcc 4.0 
we can't rely on static to avoid namespace collisions, those static 
variables should be named something more unique. In the X.Org tree I'm 
going to change the name of the static variables in apple/dri_glx.c. Of 
course there's nothing wrong with doing both this and the submitted patch.
__driConfigOptions is supposed to be exported by the DRI driver.  The 
idea is that a configuration utility would open libGL and use 
glXGetDriverConfig to get the configuration options supported by the 
driver.  If the libGL doesn't support loading DRI drivers, as I suspect 
is the case with the Darwin libGL, there is no reason for 
glXGetDriverConfig to ever return *anything* other than NULL.
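
To make the intent concrete, a libGL that never loads DRI drivers could stub the query out along these lines (a minimal sketch, assuming the usual glXGetDriverConfig signature; not the actual Darwin code):

#include <stddef.h>

/* Sketch: with no DRI drivers ever loaded there are no driver
 * configuration options to report, so every query comes back empty. */
const char *glXGetDriverConfig(const char *driverName)
{
    (void) driverName;          /* ignored; nothing is loaded */
    return NULL;                /* no __driConfigOptions available */
}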

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: DRM kernel source broken/incomplete

2005-02-08 Thread Ian Romanick
Dr Andrew C Aitchison wrote:
On Tue, 8 Feb 2005, David Dawes wrote:
It looks like the DRM kernel source in xc/extras/drm is broken and
incomplete, especially for BSD platforms.  The Linux version only
appears to build for a narrow range of kernels, and this either
needs to be fixed, or the minimum kernel requirements enforced in
the Makefile.
Perhaps we'll have to roll back to an older version that does build?
How often does the Xserver / DRM binary interface change - 
is it viable to just use the DRM in the running kernel ?

I suppose this is really a question for one of the DRM lists but,
is it a forlorn hope that the DRM could have a static binary
interface to either the kernel or the X server ?
(I guess that a moving kernel puts the former outside the control
of the DRM project ?)
There's a mixed answer (good news / bad news) to that question.  AFAIK, 
the user-space client-side drivers and the DDX should work with a quite 
old DRM.  That's the good news part.  The bad news is that some features 
and / or bug fixes may not be available.  For example, the current R200 
driver works just fine with the DRM that ships with the 2.4.21 kernel, but a 
couple of security fixes and support for tiled framebuffers are missing.
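
As a rough illustration of that point (not from the thread), user-space code can ask the kernel module which interface version it speaks and gate optional features on the answer; the feature being gated is just an example:

#include <xf86drm.h>   /* drmGetVersion(), drmFreeVersion() */

/* Returns 1 if the DRM behind `fd` reports at least major.minor. */
static int drm_at_least(int fd, int major, int minor)
{
    drmVersionPtr v = drmGetVersion(fd);
    int ok;

    if (v == NULL)
        return 0;
    ok = (v->version_major > major) ||
         (v->version_major == major && v->version_minor >= minor);
    drmFreeVersion(v);
    return ok;
}

/* e.g. enable a tiled-framebuffer path only when the running DRM is new
 * enough, and fall back silently on the DRM that ships with 2.4.21. */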
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Added Pseudocolor Visuals for XFree86?

2004-11-01 Thread Ian Romanick
Bussoletti, John E wrote:
At Boeing we have a number of graphics applications that have been
developed in-house, originally for various SGI platforms.  These
applications are used for engineering visualization.  They work well on
the native hardware and even display well across the network using third
party applications under Windows like Hummingbird's ExCeed 3D.  However,
under Linux, they fail to work properly, either natively or via remote
display with the original SGI hardware acting as server, due to
omissions in the available Pseudocolor Visuals.   
The X terminology is a little different than most people expect, so I 
want to ask for some clarification.  By "SGI hardware acting as server" 
do you mean the application is running on the SGI and displaying on the 
Linux system, or the application is running on the Linux system and 
displaying on the SGI?  In X terminology, the server (i.e., X-server) 
is wherever the stuff is being displayed.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: G4 AGP

2004-09-29 Thread Ian Romanick
F. Heitkamp wrote:
I can't get agp to work with my Apple G4.  When I enable DRI X comes  up 
but the resolution appears to be 640x480 and the mouse cursor is large, 
distorted and quivering.  No user input is possible at this point.
Is AGP support for the G4 still under development or is it supposed to 
work?  I have a Radeon 9000.
AFAIK, AGP is supported on all G4 based Macs.  All of that should work 
fine even without AGP support.  Does it work correctly with DRI 
disabled?  Anything relevant show up in /var/log/XFree86.log?
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Continued : Xfree 4.4 make install failure on ppc system - scaled fonts problem with mkfonts

2004-07-15 Thread Ian Romanick
[EMAIL PROTECTED] wrote:
Following to the post  http://www.mail-archive.com/[EMAIL PROTECTED]/msg16132.html
I think I have found where the problem is: line 1024 of mkfontscale.c, in the call to 
FT_Get_Name_Index.
The n parameter's value is a space when it crashed. I didn't check all values of the data in the struct 
face, but the family name is Utopia when it crashes.
I have been able to reproduce this same problem on a G4 running Debian 
(sarge), but *not* on a POWER4 box.  GCC 3.3.4 was used on the G4, and 
GCC 3.3.3 was used on the POWER4.  Both built for 32-bit.  On the G4, I 
tried building with a variety of different optimization settings (-O0, 
-O2, -Os) and architecture settings, but nothing seemed to help.
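
For anyone chasing this, the call in question looks roughly like the sketch below; guarding against a blank glyph name is one way to see whether the crash is inside FreeType or in the caller (the helper is purely illustrative, not a proposed fix):

#include <ft2build.h>
#include FT_FREETYPE_H

/* Skip obviously bogus glyph names (NULL or all whitespace, as reported
 * above) instead of handing them to FT_Get_Name_Index(). */
static FT_UInt safe_name_index(FT_Face face, char *name)
{
    const char *p = name;

    if (name == NULL)
        return 0;
    while (*p == ' ' || *p == '\t')
        p++;
    if (*p == '\0')
        return 0;                   /* nothing useful to look up */
    return FT_Get_Name_Index(face, name);
}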

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Adding DMX to XFree86

2004-06-23 Thread Ian Romanick
Kevin E Martin wrote:
I think many of us would very much like to have hardware accelerated
indirect rendering, and from time to time there has been talk of adding
it to the DRI project.  It's actually been on the to do list for the
DRI project from the original design days, but it's a large project and
there was little interest in funding it back when I was with PI and VA.
I'm still hopeful that it will eventually happen.
The current thinking is to, essentially, 'rm -rf xc/programs/Xserver/GL' 
and re-write it so that libglx.a loads a device-dependent *_dri.so, like 
the client-side libGL does.  The advantage is that only one driver 
binary will be needed per device.  The support and maintenance 
advantages should be obvious.
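
A very rough sketch of that loading step (the bootstrap symbol follows the client-side convention mentioned elsewhere in this archive, __driCreateScreen; the signature and everything else here is illustrative only):

#include <dlfcn.h>
#include <stdio.h>

typedef void *(*CreateScreenFunc)(int screenNum);   /* placeholder signature */

/* Open the device-dependent driver and resolve its bootstrap entry point,
 * much as the client-side libGL does today. */
static CreateScreenFunc load_dri_driver(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);

    if (handle == NULL) {
        fprintf(stderr, "dlopen(%s): %s\n", path, dlerror());
        return NULL;
    }
    return (CreateScreenFunc) dlsym(handle, "__driCreateScreen");
}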

Work has been started on an Xlib based DRI driver (something of a 
contradiction in terms, I know) by Adam Jackson.  I've started writing 
Python scripts to automatically generate GLX protocol handling code (for 
both client-side and server-side).  We're getting closer to starting the 
real work, but I need to clear a few things off my plate first.

My goal is to start a branch in the DRI tree in the next few (3 to 4) 
months to get this work going.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Matrox I2C patch

2004-06-14 Thread Ian Romanick
Ryan Underwood wrote:
Not a common scenario.  I know a lot of G550's come with a DVI and an
analog connector, but I've never seen a G450 like that.  (The G450
manual claims that they exist, however.)
I have a PCI G450 (for PowerPC, no less) that has this configuration. 
Of course, I can't get it to work because there's no support for PCI 
domain != 0 on PPC64, and all the PCI slots in my box are in domain 1. 
:(  Until I write domain probing support, I can't help you, but I can 
verify that the cards *do* exist. :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Register access on MIPS system

2004-06-08 Thread Ian Romanick
Marc Aurele La France wrote:
Well, domain support for MIPS has yet to be written.  Ditto for PowerPC.  And
that for Alphas is somewhat broken.  Lack of time, for one, and lack of
hardware.
Is there some guidance or documentation for how to do this?  I'm about to 
be forced (heh...) to write domain support for PowerPC.  I'd like to be 
able to complete that task with as little pain as possible. :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XAA2 namespace?

2004-03-03 Thread Ian Romanick
Mark Vojkovich wrote:
On Tue, 2 Mar 2004, Sottek, Matthew J wrote:

 It's currently global because the hardware I work on doesn't
have to fall back to software very often.  Bookkeeping on a per-
surface basis is a simple modification and one I will add.  This
precludes using XAA2 with hardware that doesn't support concurrent
SW and HW access to the framebuffer, but that's OK since that
stuff is old and we're trying to move forward here.  HW that sucks
can use the old XAA.
It shouldn't preclude this from working. You just need the call
to look like sync(xaa_surface_t *surface) and let old hardware
sync the whole engine regardless of the input. It helps those
who can use it and is the same as what you have now for everyone
else.
  I don't understand your reasoning.

  The difference with per-surface as opposed to global sync state 
is that you don't have to sync when CPU rendering to a surface that
has no previously unsynced GPU rendering.  The point of this is
to ALLOW concurrent CPU and GPU rendering into video ram except
in the case where both want to render to the same surface.  There
is old hardware that allows no concurrent CPU and GPU rendering
at all.

  Even with Sync() passing the particular surface which is necessitating
the sync, I would expect all drivers to be syncing the whole chip
without caring what the surface was.  Most hardware allow you to
do checkpointing in the command stream so you can tell how far
along the execution is, but a Sync can happen at any time.  Are
you really going to be checkpointing EVERY 2D operation? 
Not every operation, but every few operations.  For example, the 
Radeon 3D driver has a checkpoint at the end of each DMA buffer.  It's 
more coarse grained than every operation, but it's much finer grained 
than having to wait for the engine to idle.
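
The per-surface bookkeeping being described can be sketched like this (types and names are hypothetical, not the XAA2 API):

#include <stdint.h>

typedef struct {
    uint32_t last_hw_checkpoint;   /* checkpoint emitted after the last GPU op */
} Surface;

static volatile uint32_t retired_checkpoint;   /* advanced as the GPU passes checkpoints */

/* Before the CPU touches a surface, wait only if the GPU still has
 * outstanding work for *that* surface; other surfaces keep rendering. */
static void sync_for_cpu(const Surface *s)
{
    while ((int32_t)(retired_checkpoint - s->last_hw_checkpoint) < 0) {
        /* poll or sleep until the hardware reports more progress */
    }
}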

I still contend that it would be a benefit to know how many
rects associated with the same state are going to be sent
(even if you send those in multiple batches for array size
limitations) this allows a driver to batch things up as it
sees fit.
   I don't know the amount of data coming.  The old XAA (and
cfb for that matter) allocated the pathological case: number
of rects times number of clip rects.  It doesn't know how many
there are until it's done computing them, but it knows the
upper bounds.  I have seen this be over 8 Meg!  The new XAA
has a preallocated scratch space (currently a #define for the 
size).  When the scratch buffer is full, it flushes it out to
the driver.   If XAA is configured to run with minimal memory,
the maximum batch size will be small.
That sounds reasonable.  That's basically how the 3D drivers work.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XAA2 namespace?

2004-03-03 Thread Ian Romanick
Mark Vojkovich wrote:

   Ummm... which other models are you referring to?  I'm told that
Windows does it globally.  Having per-surface syncing may mean
you end up syncing more often.  E.g., render with HW to one surface
then to another, then if you render to SW to both of those surfaces,
two syncs happen.  Doing it globally would have resulted in only
one sync call.
   Unless you can truly checkpoint every rendering operation,
anything other than global syncing is going to result in more
sync calls.  The more I think about going away from global syncing,
the more this sounds like a bad idea.
It may result in more sync calls, but it should also result in less time 
spent waiting in each call.  If you HW render to surface A, then B, then 
need to SW render to surface A, you don't need to wait for the HW to 
finish with surface B.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: 3D support for radeon 9600 pro (ppc)

2004-02-20 Thread Ian Romanick
Sven Luther wrote:
I think that ATI is missing something here. I believe that Powerpc 
hardware with ATI graphics represents an ever-growing Linux installed
base, with the G5 Powermac, with the new powerbooks, as well as with
non-apple powerpc boxes like the pegasos motherboards. But then, it is
probable that the ATI drivers are not endian-clean, and that they can't
be bothered to make a powerpc build, even an unsupported one, probably
because of that, or maybe for some hidden reason like the intel-ATI
connection or something such.
Even if it is ever growing, it probably still only represents 1% of 1% 
of their total market.  It would take some pretty extreme dedication to 
the Linux movement to make a business case to devote even a single 
engineer to that cause. :(

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: 3D support for radeon 9600 pro (ppc)

2004-02-20 Thread Ian Romanick
Sven Luther wrote:

On Fri, Feb 20, 2004 at 07:55:27AM -0800, Ian Romanick wrote:

Sven Luther wrote:

I think that ATI is missing something here. I believe that Powerpc 
hardware with ATI graphics represents an ever-growing Linux installed
base, with the G5 Powermac, with the new powerbooks, as well as with
non-apple powerpc boxes like the pegasos motherboards. But then, it is
probable that the ATI drivers are not endian-clean, and that they can't
be bothered to make a powerpc build, even an unsupported one, probably
because of that, or maybe for some hidden reason like the intel-ATI
connection or something such.
Even if it is ever growing, it probably still only represents 1% of 1% 
of their total market.  It would take some pretty extreme dedication to 
the Linux movement to make a business case to devote even a single 
engineer to that cause. :(
Whatever. The truth is that outside of x86, there is actually not a
single graphics card vendor with a recent card that provides 3D
driver support. Until something changes, this means the death of 3D
support on non-x86 Linux.
Agreed.

And then, seriously, do you believe it will need a full-time engineer
to make a powerpc build? If the drivers were endian-clean, then it
would only be a matter of launching a build and tracking the occasional
arch-related problem. Hell, if a volunteer project can make it, why
can't ATI? And I would do it, if ATI would give me access to the needed
sources, under strong NDA or whatever, I would build their drivers, but
they don't want to. Chances of Nvidia releasing PowerPC binaries are
even worse, although it is possible that their drivers are more
endian-clean, if they share the code with the OS X driver, which I know ATI
does not.
I think the endianness issue is minor.  There's probably lots of assembly 
code in various parts of the driver.  The driver may also have some 
software fallback cases for vertex programs that generate x86 machine 
code instead of code for the GPU (pure speculation).  If the driver was 
not written with other architectures in mind, it is very likely that 
there's way more to it than just kicking off a build.

The only real hope is that ATI will release the R300 specs once the R400
is released, but even there, i only half believe it.
Agreed 100% on both counts. :(

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: 3D support for radeon 9600 pro (ppc)

2004-02-19 Thread Ian Romanick
jaspal kallar wrote:
I know there is already 2D support for the radeon 9600 pro in the upcoming 4.4 release. 
My question is if I buy an Apple Powermac G5 with a radeon 9600 pro card will I eventually in the future be able to
get 3D  support on the powerpc platform (not x86!!) ?
Only if ATI ports their closed-source driver to PowerPC.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Question about nplanes and ColormapEntries in VisualRec

2004-02-17 Thread Ian Romanick
I'm making some changes to the server-side GLX in the DRI tree.  For 
part of my changes I want to eliminate the need for libGLcore to have 
access to a VisualRec (programs/Xserver/include/scrnintstr.h, line 68). 
 There are only two fields from that structure that are accessed by 
libGLcore, and I believe those values can be otherwise derived, but I 
want to be sure.

First, a comment in the structure says that nplanes is log2 
(ColormapEntries).  Does that mean that (1U << v->nplanes) == 
v->ColormapEntries is always true?

Second, for TrueColor and DirectColor visuals, is it safe to assume 
nplanes is the sum of the red, green, and blue bits?
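
For reference, the second assumption can be written out as the check below, summing the bits set in the visual's channel masks (a sketch, assuming the usual mask fields in VisualRec; not a patch):

#include "scrnintstr.h"   /* VisualRec, as cited above */

/* Count the bits set in one channel mask. */
static int bit_count(unsigned long mask)
{
    int n = 0;

    while (mask != 0) {
        n += (int) (mask & 1);
        mask >>= 1;
    }
    return n;
}

/* For TrueColor/DirectColor: does nplanes equal the sum of the red,
 * green, and blue widths? */
static int nplanes_matches_rgb(const VisualRec *v)
{
    return v->nplanes == bit_count(v->redMask) +
                         bit_count(v->greenMask) +
                         bit_count(v->blueMask);
}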

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Question about nplanes and ColormapEntries in VisualRec

2004-02-17 Thread Ian Romanick
Keith Packard wrote:
Around 9 o'clock on Feb 17, Ian Romanick wrote:

First, a comment in the structure says that nplanes is log2 
(ColormapEntries).  Does that mean that (1U << v->nplanes) == 
v->ColormapEntries is always true?
no.  ColormapEntries on a Direct/True visual is

	 1 << max(nred, ngreen, nblue).
Okay, then that comment is a little misleading for those cases, but I 
can live with it.

Second, for TrueColor and DirectColor visuals, is it safe to assume 
nplanes is the sum of the red, green, and blue bits?
no.  There may be extra bits which have no defined meaning in the core 
protocol which are used by extensions.
The GLX extension usually adds some bits for alpha for its visuals (and 
those are the only visuals I care about in this case).  However, even in 
the case where there's 32 bits total (including the alpha channel), 
nplanes is still only 24.  So, let me phrase my original question a 
different way.  Since the GLX extension sets nplanes in its added 
visuals, can it make whatever assumptions about nplanes it wants? :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Latest fixes from DRI Project

2004-02-10 Thread Ian Romanick
Torrey Lyons wrote:

These fixes have the side effect of breaking GLX on Mac OS X. The 
problem is the addition of new server side dependencies on 
glPointParameteri, glPointParameteriv, glSampleMaskSGIS, 
glSamplePatternSGIS. Mac OS X instead uses glPointParameteriNV and 
glPointParameterivNV and GL_SGIS_multisample is not supported. I can fix 
these by substituting the glPointParameter*NV calls and removing the 
I think it would be better to put the '#ifdef __DARWIN__' in the 
dispatch code.  I'm not terribly fond of using #defines like that. 
Since NV_point_sprite isn't supported in all versions of OS X, is 
something more needed?

http://developer.apple.com/opengl/extensions.html#GL_NV_point_sprite

calls to the glSample*SGIS functions as shown in the patch below. Note 
the server still says it supports the glx extension 
GLX_SGIS_multisample. Should I add an #ifdef to glxscreens.c as well to 
remove claiming this extension? Any other comments?
Absolutely.  If it's in the extension string, some application could try 
to use that functionality and get a nasty surprise.
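
Something along these lines in the dispatch code is what is being suggested; the NV entry points take the same arguments as the core names, so the substitution is mechanical (sketch only, with a hypothetical dispatch function name):

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>   /* glPointParameteri / glPointParameteriNV prototypes */

/* Keep the platform difference inside the dispatch layer rather than
 * #define-ing one GL name to another. */
static void dispatch_PointParameteri(GLenum pname, GLint param)
{
#ifdef __DARWIN__
    glPointParameteriNV(pname, param);   /* NV entry point, same signature */
#else
    glPointParameteri(pname, param);
#endif
}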

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?

2004-02-07 Thread Ian Romanick
Andreas Stenglein wrote:
On 2004.02.04 21:00:14 +0100, Brian Paul wrote:
Ian Romanick wrote:

Making that change and changing the server-side to not advertise a core 
version that it can't take protocol for would fix the bug for 4.4.0.  Do 
you think anything should be done to preserve text after the version? 
That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL", 
should we return "1.2" or something more elaborate?
It would be nice to preserve the extra text, but it's not essential.
why not just add the "1.2 " before the original text?
"1.2 1.4.20040108 Foobar, Inc. Fancypants GL"
so you would see that the renderer could support 1.4 if GLX could do it.
I like it. :)  It looks a little weird to me like that, but I think 
doing "1.2 (1.4.20040108 Foobar, Inc. Fancypants GL)" should work just 
as well.  I'll try to have a patch tomorrow.  The server-side of things 
is...ugly.  The deeper I dig into the server-side GLX code, the more I 
think it needs the Ultimate Refactor...'rm -rf programs/Xserver/GL'
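
A sketch of the client-side clamp being discussed (the "1.2" ceiling and the buffer handling are illustrative; the real change belongs in single2.c's glGetString path):

#include <stdio.h>

/* Combine the GLX-imposed ceiling with whatever the renderer reports,
 * e.g. "1.4.20040108 Foobar, Inc. Fancypants GL" becomes
 * "1.2 (1.4.20040108 Foobar, Inc. Fancypants GL)". */
static const char *clamp_gl_version(const char *renderer_version)
{
    static char buffer[256];   /* sketch only; not reentrant */

    snprintf(buffer, sizeof(buffer), "1.2 (%s)", renderer_version);
    return buffer;
}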

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [Dri-devel] Re: GL_VERSION 1.5 when indirect rendering?

2004-02-04 Thread Ian Romanick
Michel Dänzer wrote:
On Wed, 2004-02-04 at 00:56, Ian Romanick wrote:

Does anyone know if either the ATI or Nvidia closed-source drivers 
support ARB_texture_compression for indirect rendering?  If one of them 
does, that would give us a test bed for the client-side protocol 
support.  When that support is added, we can change the library version 
to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 
and .1.3 symlinks).
Are those symlinks really necessary? Apps should only care about
libGL.so.1 . 
It's a debatable point.  If an app explicitly links against 
libGL.so.1.5, then it can expect symbols to statically exist that may 
not be in libGL.so.1.2.  So an app that links against libGL.so.1.5 
wouldn't have to use glXGetProcAddress for glBindBuffer or glBeginQuery, 
but an app linking to a lower version would.

Do we want to encourage that?  That's the debatable part. :)

While we're at it: is there a reason for libGL not having a patchlevel,
e.g. libGL.so.1.2.0? This can cause unpleasant surprises because
ldconfig will consider something like libGL.so.1.2.bak as the higher
patchlevel and change libGL.so.1 to point to that instead of
libGL.so.1.2 .
That's a good idea.  I've been bitten by that before, but my solution 
was to make it libGL.bak.so.1.2 or something similar.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?

2004-02-04 Thread Ian Romanick
Brian Paul wrote:
Ian Romanick wrote:

That's *bad*.  It is currently *impossible* to have GL 1.5 with 
indirect rendering because some of the GLX protocol (for 
ARB_occlusion_query & ARB_vertex_buffer_objects) was never completely 
defined.  Looking back at it, we can't even advertise 1.3 or 1.4 with 
indirect rendering because the protocol for ARB_texture_compression 
isn't supported (on either end).
Ian, it seems to me that xc/lib/GL/glx/single2.c's glGetString() 
function should catch queries for GL_VERSION (as it does for 
GL_EXTENSIONS) and compute the minimum of the renderer's 
glGetString(GL_VERSION) and what the client/server GLX modules can support.

That would solve this, right?
Making that change and changing the server-side to not advertise a core 
version that it can't take protocol for would fix the bug for 4.4.0.  Do 
you think anything should be done to preserve text after the version? 
That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL", 
 should we return "1.2" or something more elaborate?

I thought about it some last night, and I think there's some longer term 
work to be done on the client-side.  Basically, we need a mechanism for 
GL extensions that matches what we have for GLX extensions.  There are a 
few extensions that are essentially client-side only.  We should be able 
to expose those without expecting the server-side to list them.  In 
fact, the server-side should not list them.  Extensions like 
EXT_draw_range_elements, EXT_multi_draw_arrays, and a few others fall 
into this category.  It should be fairly easy to generalize the code for 
GLX extensions so that it can be used for both.
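
A sketch of that generalization (the table name and its contents here are illustrative; the real list would live next to the existing GLX extension handling):

#include <string.h>

/* Extensions that can be exposed purely client-side, without any
 * server-side GLX protocol behind them. */
static const char *const client_only_gl_extensions[] = {
    "GL_EXT_draw_range_elements",
    "GL_EXT_multi_draw_arrays",
    NULL
};

/* Append the client-side-only names to whatever the server advertised. */
static void append_client_extensions(char *dest, size_t dest_size)
{
    unsigned i;

    for (i = 0; client_only_gl_extensions[i] != NULL; i++) {
        if (strlen(dest) + strlen(client_only_gl_extensions[i]) + 2 <= dest_size) {
            strcat(dest, " ");
            strcat(dest, client_only_gl_extensions[i]);
        }
    }
}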

As a side bonus, that would eliminate the compiler warning in glxcmds.c 
about the __glXGLClientExtensions string being too long. :)

Does anyone know if either the ATI or Nvidia closed-source drivers 
support ARB_texture_compression for indirect rendering?  If one of 
them does, that would give us a test bed for the client-side protocol 
support.  When that support is added, we can change the library 
version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with 
extra .1.2 and .1.3 symlinks).
[big snip]

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce3/AGP/SSE2
OpenGL version string: 1.4.0 NVIDIA 44.96
OpenGL extensions:
GL_EXT_blend_minmax, GL_EXT_texture_object, GL_EXT_draw_range_elements,
GL_EXT_texture3D, GL_EXT_secondary_color, GL_ARB_multitexture,
GL_EXT_multi_draw_arrays, GL_ARB_point_parameters, GL_EXT_fog_coord,
GL_ARB_imaging, GL_EXT_vertex_array, GL_EXT_paletted_texture,
GL_ARB_window_pos, GL_EXT_blend_color
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
So, it appears that GL_ARB_texture_compression is not supported, but the 
GL_VERSION is reported as 1.4.0.  Hmmm.
Okay, that's just weird.  Normally the Nvidia extension string is about 
3 pages long.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Manufacturers who fully disclosed specifications for agp cards?

2004-02-03 Thread Ian Romanick
Mike A. Harris wrote:
On Sat, 31 Jan 2004, Ryan Underwood wrote:

where is the docs for the VSA based cards (voodoo4/voodoo5)?  I have
been unable to locate them.
In a chest in a basement at Nvidia somewhere, with a lock on it, 
behind a bunch of old filing cabinets, in a room at the end of a 
very long hallway, with spiderwebs hanging everywhere, with a 
sign on the door which reads:

	Beware of the leopard
I can just imagine it in a big warehouse like where the Ark ended up at 
the end of Raiders. :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?

2004-02-03 Thread Ian Romanick
Andreas Stenglein wrote:

after setting LIBGL_ALWAYS_INDIRECT=1
glxinfo shows
OpenGL version string: 1.5 Mesa 6.0
but doesn't show all extensions necessary for OpenGL 1.5
An application only checking for GL_VERSION 1.5 would probably fail.

Any idea what would happen with libGL.so / libGLcore.a from different versions
of XFree86 / DRI and/or different vendors (nvidia) on the client/server machines?
That's *bad*.  It is currently *impossible* to have GL 1.5 with indirect 
rendering because some of the GLX protocol (for ARB_occlusion_query & 
ARB_vertex_buffer_objects) was never completely defined.  Looking back 
at it, we can't even advertise 1.3 or 1.4 with indirect rendering 
because the protocol for ARB_texture_compression isn't supported (on 
either end).

Please submit a bug for this on XFree86.  Something should be done for 
this for the 4.4.0 release.

http://bugs.xfree86.org/

Does anyone know if either the ATI or Nvidia closed-source drivers 
support ARB_texture_compression for indirect rendering?  If one of them 
does, that would give us a test bed for the client-side protocol 
support.  When that support is added, we can change the library version 
to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 
and .1.3 symlinks).

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Manufacturers who fully disclosed specifications for agp cards?

2004-02-02 Thread Ian Romanick
Ryan Underwood wrote:

Your request for free publication is undeniably idealistic.  I think it
is a perfectly reasonable compromise to provide specs under NDA to
developers who have shown themselves to be productive and trustworthy in
the past, e.g. by contributing to XFree86 or producing and supporting their
own 3rd-party driver like Tungsten Graphics.
investment for the chip manufacturer than freely publishing documentation
for all.  The manufacturer will rarely reach any individuals who would
not have qualified for a NDA anyway, and will most likely end up giving
their competitors ideas they may not have had otherwise.
The problem is that none of the NDAs I have seen (which is not that 
many) explicitly give you the rights to release source code based on 
documentation under NDA.  If you happen to work for a company that is 
extremely cautious about such legal issues, that means you don't get to 
sign any NDAs.

Personally (i.e., not speaking for my employer in any way), I agree that 
it's reasonable for hardware vendors to release documentation under NDA. 
 However, if they're releasing NDA documentation to developers for the 
purpose of creating open-source drivers, the NDA should explicitly give 
the developers that right.

Again, that's just this developer's personal opinion.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: PFNGLXGETUSTPROC argument signed or unsigned?

2004-01-22 Thread Ian Romanick
David Dawes wrote:

What is the correct typedef for PFNGLXGETUSTPROC?  glxclient.h has:

typedef int (* PFNGLXGETUSTPROC) ( int64_t * ust );

and it is used as a signed quantity in glxcmds.c.

But most drivers use uint64_t, and src/glx/mini/dri_util.h in the Mesa
trunk uses unsigned:
typedef int (* PFNGLXGETUSTPROC) ( uint64_t * ust );
That was my bad.  It should be int64_t everywhere.  It makes more sense 
for it to be unsigned, but the GLX_OML_sync_control spec has it as signed.

http://oss.sgi.com/projects/ogl-sample/registry/OML/glx_sync_control.txt

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Xserver/GL/glx/g_render.c changes?

2004-01-14 Thread Ian Romanick
Torrey Lyons wrote:

In building the top of the tree on Mac OS X 10.2 I have run into 
troubles linking the GLX support in Xserver/GL. The problem is that 
native OpenGL in Mac OS X 10.2 does not include glActiveStencilFaceEXT() 
and glWindowPos3fARB(), which have been added to g_render.c and 
g_renderswap.c since 4.3.0. On Mac OS X 10.3 things build fine since 
these calls are available.

g_render.c includes the comment:

/* DO NOT EDIT - THIS FILE IS AUTOMATICALLY GENERATED */

I can build server side GLX successfully if I just #ifdef the offending 
calls out on Mac OS X 10.2. or #define them to no-ops. Is this likely to 
cause problems? How is g_render.c automatically generated? What is the 
best way to conditionally remove support for these two functions?
It's not.  This code was donated by SGI, and I suspect that at SGI it is 
automatically generated.  However, in XFree86 it is not.  I'm in the 
process of making some changes to this file in DRI CVS.  I'll drop a 
line to this list when I'm done so that you can tell me which routines 
break on the Mac, and what ifdef needs to be put around them.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: glx failing

2003-11-10 Thread Ian Romanick
Frank Gießler wrote:
with my current CVS snapshot (Changelog up to #530), glxgears gives me 
the following at startup:

X Error of failed request:  BadLength (poly request too large or 
internal Xlib length error)
  Major opcode of failed request:  144 (GLX)
  Minor opcode of failed request:  1 (X_GLXRender)
  Serial number of failed request:  22
  Current serial number in output stream:  23

This used to work before. I've seen this on both OS/2 and SuSE Linux 8.2 
(XFree CVS built without DRI). Any idea what this means and/or where I 
should look?
Can you give any details to help reproduce this error?  There is a 
reported bug in this area, but I thought that it was fixed.  Your 
XF86Config would also be useful.

http://bugs.xfree86.org/show_bug.cgi?id=439

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Radeon performance, z-buffer clears

2003-10-27 Thread Ian Romanick
Vahur Sinijarv wrote:

Does anyone know if fast z-buffer clears and 'z-compression aka hyper-z'
are going to be implemented in radeon DRI drivers (actually it is in the
'radeon' kernel module). It seems to be one of the areas where major
performance gain could be achieved, taking this driver to the same
performance level as ATI's binary only driver has. I've done some perf.
tests and by disabling z-clears frame rates almost double, which shows
that the current approach by drawing a dummy quad is very slow ... I
would be willing to implement it myself if anyone would tell me where to
find information about programming this feature.
ATI has not provided documentation for this feature to developers. 
Until then, it has zero chance of being implemented in open-source drivers.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Kernel Module? On second thought...

2003-10-21 Thread Ian Romanick
Mike A. Harris wrote:

If DRI is disabled, then the Radeon driver will use the older
MMIO mechanism to do 2D acceleration.  I don't know what if any
of the other drivers will use DRI for 2D or Xvideo currently,
however any hardware that supports using DMA/IRQ for 2D
acceleration or other stuff theoretically at least can use the DRI
to do it.
I think that's the right model to follow.  Cards that can get benefit 
should use the existing DRM mechanism, even if they don't support 3D.  I 
believe that the i810 uses its DRM for Xv (or maybe it's XvMC...it's 
something video related).

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DRI proprietary modules

2003-10-20 Thread Ian Romanick
John Dennis wrote:
For DRI to work correctly there are several independent pieces that all
have to be in sync.
* XFree86 server which loads drm modules (via xfree86 driver module)

* The drm kernel module

* The agpgart kernel module

Does anybody know for the proprietary drivers (supplied by ATI and
Nvidia) which pieces they replace and which pieces they expect to be
there?
The Nvidia drivers do not use DRI.  The 3dlabs, ATI, PowerVR, and Matrox 
(for their Parhelia hardware) drivers do.  They will *all* replace the 
DRM kernel module, the XFree86 2D driver, and the client-side 3D driver 
(the *_dri.so file).  Most include a custom libGL.so that provides some 
added functionality.  The client-side 3D driver and the DRM kernel 
module are very tightly related, and should be considered a single 
entity (for the most part).

The reason I'm asking is to understand the consequences of
changing an API. I'm curious to the answer in general, but in this
specific instance the api I'm worried about is between the agpgart
kernel module and drm kernel module. If the agpgart kernel module
modifies its API, will that break things for someone who installs a
proprietary 3D driver? Do the proprietary drivers limit themselves to
mesa driver and retain the existing kernel services assuming the IOCTL's
are the same?
Don't bring Mesa into this.  Mesa fundamentally has nothing to do with 
DRI.  It just so happens that all of the open-source DRI drivers use 
Mesa, but there is no such requirement.  AFAIK, *none* of the 
closed-source drivers use any code from Mesa.

Or do they replace the kernel drm drivers as well? If so
do they manage AGP themselves, or do they use the systems agpgart
driver? Do they replace the systems agpgart driver?
I think both the ATI and Nvidia drivers have the option to either use 
agpgart or an internal implementation.  I'm fairly certain that the 
PowerVR, 3dlabs, and Matrox drivers all use agpgart exclusively.  All of 
the drivers, closed-source or open-source, depend on the agpgart 
interface.  Changing that interface in a non-backwards compatible will 
break them all.

I guess my question is, what changes are under consideration?

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Export symbol lists on Linux (was Re: RFC Marking private symbols in XFree86 shared libraries as private)

2003-10-20 Thread Ian Romanick
Jakub Jelinek wrote:

The first is a MUST list, symbols which are exported from XFree86 shared
libraries now when there is no anonymous version script, are not exported
when an anonymous versions script created from stock *-def.cpp file
is applied and are used by some binary or shared library (including other
shared libraries in the XFree86 collection). There is IMHO no way other
than adding these to *-def.cpp files (any issues with this)?
For libGL.so, as anonymous version scripts accept wildcards, I think
we should use gl* wildcard, as it is too error-prone to list all
the gl* functions.
Sorry for taking so long to reply.  I was taking a few days off. :)

libGL.so needs to export XF86DRI*, __glXFindDRIScreen, and a few _glapi 
functions on all platforms that support DRI (i.e., Linux and *BSD 
currently).  Do a nm /usr/X11R6/lib/modules/dri/*_dri.so | grep ' U 
_glapi' | sort -u  to see which ones.  On all platforms all symbols 
matching gl[A-Z]* need to be exported.  Other than that I don't think 
anything needs to be exported by libGL.so.

I *believe* that the *_dri.so files only need to export 
__driCreateScreen.  There are some other symbols that need to be 
exported in DRI CVS, but that code isn't in XFree86 CVS AFAIK (and won't 
be until after 4.4.0).

Thanks for tackling this!

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: PBuffer support in current XFree86?

2003-10-13 Thread Ian Romanick
Andrew P. Lentvorski, Jr. wrote:
I just grabbed the latest source from CVS and compiled.  While the system
is identifying itself as 1.3 Mesa 5.0.2, glXGetFBConfigs seems to be
always returning a NULL pointer for any combination of attributes I can
feed into it.
The core OpenGL version is different from the GLX version.  You need to 
look at the GLX version (from glXQueryVersion) or the GLX extension 
string (from glXQueryExtensionsString).

Is this expected?
Support for GLX_SGIX_fbconfig in hardware accelerated 3D drivers will 
not make it into XFree86 4.4.0, but support should be available in DRI 
CVS in the next couple months (give or take).  GLX_SGIX_pbuffer (which 
will be the last bit of GLX 1.3 functionality to add) will be added 
sometime after that.
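
Put another way, the check belongs on the GLX side; a minimal sketch (error handling trimmed):

#include <GL/glx.h>

/* FBConfigs are GLX 1.3 (or GLX_SGIX_fbconfig) functionality; the core
 * GL version string says nothing about whether they are available. */
static int have_glx_fbconfigs(Display *dpy)
{
    int major = 0, minor = 0;

    if (!glXQueryVersion(dpy, &major, &minor))
        return 0;
    return (major > 1) || (major == 1 && minor >= 3);
}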

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: RFC Marking private symbols in XFree86 shared libraries as private

2003-10-09 Thread Ian Romanick
Jakub Jelinek wrote:

   1) could be done by some header which everything uses, doing
   #if defined HAVE_VISIBILITY_ATTRIBUTE && defined __PIC__
   #define hidden __attribute__((visibility ("hidden")))
   #else
   #define hidden /**/
   #endif
   and write prototypes like:
   void hidden someshlibprivateroutine (void);
   extern int someshlibprivatevar hidden;
   etc.
I sent you a message about this before (in reference to libGL.so), but I 
never heard back from you.  I think this is a very good idea!  I would 
prefer it if __HIDDEN__ or HIDDEN or something similar were used.  That 
makes it stand out more.  Also, is there any reason to not have the 
symbols be hidden in non-PIC mode?

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: What about a kernel module?

2003-10-08 Thread Ian Romanick
Raymond Jennings wrote:

I'd like to suggest that you implement device-specific code as a kernel 
module.
This has been discussed to death.  XFree86 is portable to systems where 
we can't just willy-nilly add kernel modules.  With few exceptions, such 
as to implement hardware 3D, this is right out.

Also I have Red Hat 7.0 and when I drag a window, it is SLOW.
Since the version of XFree86 in that distro is at least 3 years old, it 
probably doesn't support hardware acceleration on your card.  Doing 
everything in software is slow.  Big surprise! :)  Try upgrading to 
something more recent, please.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [Dri-devel] Deadlock with radeon DRI

2003-10-02 Thread Ian Romanick
Keith Whitwell wrote:

I haven't deeply investigated this but two solutions spring to mind:
- Hack:  Move the call to RADEONAdjustFrame() during initialization 
to before the lock is grabbed.
- Better:  Replace the call to RADEONAdjustFrame() during 
initialization with something like:

if (info->FBDev) {
fbdevHWAdjustFrame(scrnIndex, x, y, flags);
} else {
RADEONDoAdjustFrame(pScrn, x, y, FALSE);
}
which is basically what RADEONAdjustFrame() wraps.
That seems like the right way to go, but I'd feel better if the body of 
RADEONAdjustFrame was moved to a new function called 
RADEONAdjustFrameLocked.  RADEONAdjustFrame would just lock, call 
RADEONAdjustFrameLocked, and unlock.  That matches what's been done 
elsewhere in the 3D driver, anyway.
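
The shape being proposed, sketched in radeon_driver.c terms (the lock calls are left as comments since the exact macros are the driver's business):

/* Callers that already hold the DRI lock call this directly. */
static void RADEONAdjustFrameLocked(ScrnInfoPtr pScrn, int x, int y, int flags)
{
    /* the current body of RADEONAdjustFrame (the FBDev test and the
     * RADEONDoAdjustFrame() call shown above) moves here unchanged */
}

void RADEONAdjustFrame(int scrnIndex, int x, int y, int flags)
{
    ScrnInfoPtr pScrn = xf86Screens[scrnIndex];

    /* acquire the DRI lock here (whatever the driver already uses) */
    RADEONAdjustFrameLocked(pScrn, x, y, flags);
    /* release the DRI lock here */
}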

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Exporting sched_yield to the drivers

2003-09-22 Thread Ian Romanick
Mark Vojkovich wrote:

  Can we export to the drivers some function that yields the CPU?
Currently a lot of drivers burn the CPU waiting for fifos, etc...
usleep(0) is not good for this because it's jiffy based and usually
never returns in less than 10 msec which has the effect of making
interactivity worse instead of better.  I'm not sure which platforms 
don't export sched_yield() and which will need alternative 
implementations.
There was a thread about this on the dri-devel list some months ago. 
The short answer is DON'T DO IT! :)  I don't think that sched_yield will 
give the desired results in the 2D driver any more than it does in the 
3D driver.  I *believe* that there is another function for this purpose, 
but I can't recall what it is called.

http://marc.theaimsgroup.com/?l=dri-devel&m=105425072210516&w=2
http://lwn.net/Articles/31462/
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Exporting sched_yield to the drivers

2003-09-22 Thread Ian Romanick
Mark Vojkovich wrote:

On Mon, 22 Sep 2003, Ian Romanick wrote:


Mark Vojkovich wrote:


 Can we export to the drivers some function that yields the CPU?
Currently a lot of drivers burn the CPU waiting for fifos, etc...
usleep(0) is not good for this because it's jiffy based and usually
never returns in less than 10 msec which has the effect of making
interactivity worse instead of better.  I'm not sure which platforms 
don't export sched_yield() and which will need alternative 
implementations.
There was a thread about this on the dri-devel list some months ago. 
The short answer is DON'T DO IT! :)  I don't think that sched_yield will 
give the desired results in the 2D driver any more than it does in the 
3D driver.  I *believe* that there is another function for this purpose, 
but I can't recall what it is called.

http://marc.theaimsgroup.com/?l=dri-devel&m=105425072210516&w=2
http://lwn.net/Articles/31462/
   Currently, sched_yield() *does* give the desired result and I have
used it with great success in many places, XvMC drivers in particular.
Issues with specific implementations of sched_yield() with recent
Linux kernels do not change the need to yield.  Driver yields will
not be random, and usleep is unusable because of its jiffy nature.
I was never challenging the idea that the driver should yield the CPU. 
On the contrary, I believe that is a good and necessary thing.  However, 
I am a firm believer that on 2.5 (and presumably 2.6 as well) Linux 
kernels using sched_yield has some very undesirable side-effects.

It sounds like the Linux 2.5 implementation is less desirable than
the Linux 2.4 implementation, however, in lieu of an alternative,
it is still better than burning the entire slice waiting for the
fifo to drain.  The ability to yield is essential with DMA based
user-space drivers.  These drivers can queue up a lot of work and
often have to wait a long time before they can continue. 
With pure user-space drivers this is a difficult problem to solve.  With 
user-space drivers with a kernel component the problem is a bit easier. 
 The user-space part can wait on a semaphore of some sort and the 
kernel part waits on an interrupt.  When the kernel receives the 
interrupt, it kicks the semaphore.

BEFORE THE FLAME WAR BREAKS OUT, I FULLY UNDERSTAND WHY THE DRIVERS ARE 
IMPLEMENTED THE WAY THAT THEY ARE.  THIS IS *NOT*...I repeat...*NOT* A 
CALL TO START MOVING STUFF INTO THE KERNEL OR ANYTHING LIKE THAT. :)

However, for quite a few of the drivers there already exists a kernel 
component, either through fbdev or DRI, or both.  Some of the drivers, 
like the Radeon and Rage 128 use this mechanism for DMA in the DDX 
driver.  Perhaps *part* of the solution is to better leverage that?

   The fact that there may be different best implementations 
with various kernels only further supports that XFree86 should
export a xf86Yield() function which does the right thing on
that platform.  For Linux <= 2.4 that appears to be sched_yield().
I don't know about the other OSes though, which is why I brought
this up on this list.
Having xf86Yield as a wrapper is a very good idea.  We just have to be 
careful how it's implemented (irony intentional). :)
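
A minimal sketch of what such a wrapper might look like (xf86Yield is hypothetical here, not an existing XFree86 symbol; the Linux branch follows Mark's suggestion and other platforms would need their own "right thing"):

#include <unistd.h>       /* usleep() fallback */
#ifdef __linux__
#include <sched.h>        /* sched_yield() */
#endif

/* Give up the rest of the timeslice in whatever way is least harmful
 * on the current platform. */
void xf86Yield(void)
{
#ifdef __linux__
    sched_yield();
#else
    usleep(0);   /* imperfect, for the jiffy-granularity reasons discussed above */
#endif
}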

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Exporting sched_yield to the drivers

2003-09-22 Thread Ian Romanick
Nathan Hand wrote:

On Tue, 2003-09-23 at 07:55, Mark Vojkovich wrote:

On Tue, 23 Sep 2003, Nathan Hand wrote:


On Tue, 2003-09-23 at 05:58, Mark Vojkovich wrote:

 Can we export to the drivers some function that yields the CPU?
Currently a lot of drivers burn the CPU waiting for fifos, etc...
usleep(0) is not good for this because it's jiffy based and usually
never returns in less than 10 msec which has the effect of making
interactivity worse instead of better.  I'm not sure which platforms 
don't export sched_yield() and which will need alternative 
implementations.
FIFO busy loops are very quick. You'll harm overall graphic performance
by yielding. 
 Your experience is out of date.  If I've just filled a Megabyte
DMA fifo and I'm waiting to cram another Megabyte into it, how
quick is my FIFO busy loop then?  I've had great success with
sched_yield().
There's no disputing the first comment :-/

Wouldn't it be easier to dynamically adjust the size of the FIFO? So
instead of 

slice 1) send 1 megabyte
...
slice 2) fifo not drained, yield
...
slice 3) fifo not drained, yield
...
slice 4) fifo drained, send 1 megabyte
...
repeat forever, many wasted slices
Why not

slice 1) send 1 megabyte
...
slice 2) fifo not drained, reduce fifo to 512kB, wait
...
slice 3) fifo not drained, reduce fifo to 256kB, wait
...
slice 4) fifo drained, send 256kB
...
slice 5) fifo drained, send 256kB
A bigger FIFO reduces the risk of the FIFO emptying before you're ready
but if your slices are arriving faster than the GPU can drain the FIFO,
does it really matter?
Yuck!  Modern graphics cards are designed to operate optimally when 
given large chunks of commands to operate on at once.  Under optimal 
driver circumstances, this leads to better throughput and lower CPU 
overhead.  Chopping down the size of the DMA buffer will not improve 
performance.  I'm not even convinced that it would dramatically improve 
latency (which is the goal of adding sched_yield).  Letting the CPU and 
the graphics adapter work for long periods of time in parallel *is* a 
good thing!

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DRI and Silicon Motion

2003-09-04 Thread Ian Romanick
Cheshire Cat Fish wrote:

Mesa support/conformance is a requirement. The resulting SMI drivers 
would remain open source, and part of the Xfree/DRI/Linux distribution.  
That is the plan at least.
That's good news. :)

There are way too many variables to be able to accurately answer that 
question (see my answer to your first question). :)
But it sounds like at best I can only re-use the very lowest level of 
drawing code (the part that talks to the hardware) from the Windows 2000 
driver.  Everything above that will be different.
That's a fair assessment.

This is starting to sound like a couple of months work.
At least.  I don't know how much time per week you're planning to put 
into this, but, working full time, it would probably take a month or 
so for someone familiar with DRI internals to get something working 
using existing driver code  good documentation.  To get it working 
*well* would require more time.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DRI and Silicon Motion

2003-09-03 Thread Ian Romanick
Cheshire Cat Fish wrote:

I am investigating supporting DRI and OpenGL for the Silicon Motion driver.
I'm new to both of those, so some of these may be newbie sounding 
questions.

1) I have the  OpenGL code from the Windows 2000 Silicon Motion driver.  
Can this code be mostly used as is?  Or will the Linux code be 
entirely different?
Depending on licensing issues attached to the code you have and how you 
want to distribute it, you may be able to use a lot or a little.  All of 
the existing open-source drivers are based on Mesa, and the whole build 
process for 3D drivers in XFree86 is built on that.  I suspect, but am 
in no position to say for sure, that any contributed drivers would 
have to conform to that.  Porting the existing driver to use Mesa would 
probably be a lot of work, but it shouldn't be insurmountable.

If you want to basically use your existing code as-is, you can port it 
to just interface with the XFree86 libGL.so.  That would be a much 
smaller task, but it would leave you on your own (pretty much) to 
support and distribute the driver.  I don't think it would get included 
in an XFree86 release.  There's also the issue of the license that may 
be attached to the existing code, but as I'm neither a lawyer or an 
official XFree86 maintainer I'm in no position to comment.

2) In the DRI Users Guide, section 3.2-Graphics Hardware, Silicon Motion 
is not listed as currently being supported.  Is this still the case? Is 
anyone working on this?  Or am I starting from scratch?
This hardware is not supported and I know of nobody that is working on it.

3) How big of a task am I looking at here? Since I already have the Win2k 
OGL code to base my work on, it seems to me it shouldn't be too hard to 
drop that code in and hook it up to DRI.  A few weeks maybe?  Or am I 
missing something fundamental?
There are way too many variables to be able to accurately answer that 
question (see my answer to your first question). :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: patch for ia64 page size

2003-08-11 Thread Ian Romanick
Jakub Jelinek wrote:

On Sun, Aug 10, 2003 at 07:06:58PM -0500, Warren Turkal wrote:

@@ -1003,6 +993,8 @@
   break;
}
+r128_drm_page_size = getpagesize();
+
sysconf (_SC_PAGESIZE)
is the standardized way of querying page size.
I seem to recall some discussion about this a few months ago.  There are 
some portability issues with both getpagesize and sysconf(_SC_PAGESIZE). 
 Because of that, XFree86 has a wrapper function called 
xf86getpagesize.  There also seems to be a #define that aliases 
getpagesize to xf86getpagesize, so I'm not sure if the wrapper should be 
used or if getpagesize should be used.  Either way, I'm sure that 
sysconf(_SC_PAGESIZE) should *not* be used directly.
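
For reference, the portable idiom such a wrapper usually hides looks like this (a sketch; the real xf86getpagesize may well differ):

#include <unistd.h>

/* Prefer sysconf() where _SC_PAGESIZE exists, fall back to getpagesize();
 * callers use only the wrapper, never the primitives directly. */
static long query_page_size(void)
{
#ifdef _SC_PAGESIZE
    return sysconf(_SC_PAGESIZE);
#else
    return (long) getpagesize();
#endif
}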

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: bugzilla #439: bufSize in lib/GL/glx/glxcmds.c can be too large.

2003-06-30 Thread Ian Romanick
Egbert Eich wrote:
There is a report in bugzilla (#439) which claims:

the bug is in xc/lib/GL/glx/glxcmds.c 
 int bufSize = XMaxRequestSize(dpy) * 4;
should be 
int bufSize = XMaxRequestSize(dpy) * 4 - 8;
or more cleanly
 int bufSize = XMaxRequestSize(dpy) * 4 - sizeof(xGLXRenderReq);

it happens that you may completely fill your GLX buffer if you 
use a variable-size command larger than 156 bytes (and smaller than 4096 bytes);
in that case you find yourself with an X command larger than 256 Kbytes. This
is very unlikely but possible. It explains why this bug has not shown itself
before in this very old SGI code.

I've briefly looked at the code and it seems to be correct.
However I would like to double check before I commit anything.
Any opinions?
I'm not sure this is correct.  bufSize is used to allocate the buffer 
(gc->buf in the code) that will hold the commands, including the 
xGLXRenderReq header.  I've been doing a lot of work on the GLX 
code (both client-side & server-side) in the DRI tree lately.  I'll take 
a look at this a bit more and get back to you.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: bugzilla #439: bufSize in lib/GL/glx/glxcmds.c can be too large.

2003-06-30 Thread Ian Romanick
Ian Romanick wrote:
Egbert Eich wrote:

There is a report in bugzilla (#439) which claims:

the bug is in xc/lib/GL/glx/glxcmds.c  int bufSize = 
XMaxRequestSize(dpy) * 4;
should be int bufSize = XMaxRequestSize(dpy) * 4 - 8;
or more cleanly
 int bufSize = XMaxRequestSize(dpy) * 4 - sizeof(xGLXRenderReq);

it happens that you may completely fill your GLX buffer if you use a 
variable-size command larger than 156 bytes (and smaller than 4096 bytes); 
in that case you find yourself with an X command larger than 256 Kbytes. 
This is very unlikely but possible. It explains why this bug has not shown 
itself before in this very old SGI code.

I've briefly looked at the code and it seems to be correct.
However I would like to double check before I commit anything.
Any opinions?
I'm not sure this is correct.  bufSize is used to allocate the buffer 
(gc->buf in the code) that will hold the commands, including the 
xGLXRenderReq header.  I've been doing a lot of work on the GLX 
code (both client-side & server-side) in the DRI tree lately.  I'll take 
a look at this a bit more and get back to you.
I looked into the code, and I now understand what's going on.  Alexis 
made a good catch of a very subtle bug!  The main problem that I had was 
that it wasn't 100% clear at first glance how bufSize / buf / pc were 
used.  Some form of - 8 should be applied to bufSize.  I have attached 
the patch that I plan to apply to the DRI tree.  I suspect that it has 
only cosmetic and / or commentary differences from your patch.

Some things have moved around in the DRI tree, so this patch probably 
won't apply to the XFree86 tree.
Index: glxcmds.c
===
RCS file: /cvsroot/dri/xc/xc/lib/GL/glx/glxcmds.c,v
retrieving revision 1.44
diff -u -d -r1.44 glxcmds.c
--- glxcmds.c   25 Jun 2003 00:39:58 -  1.44
+++ glxcmds.c   30 Jun 2003 20:49:15 -
@@ -198,7 +261,7 @@
 GLXContext AllocateGLXContext( Display *dpy )
 {
  GLXContext gc;
- int bufSize = XMaxRequestSize(dpy) * 4;
+ int bufSize;
  CARD8 opcode;
 
 if (!dpy)
@@ -217,7 +280,14 @@
 }
 memset(gc, 0, sizeof(struct __GLXcontextRec));
 
-/* Allocate transport buffer */
+/*
+** Create a temporary buffer to hold GLX rendering commands.  The size
+** of the buffer is selected so that the maximum number of GLX rendering
+** commands can fit in a single X packet and still have room in the X
+** packed to for the GLXRenderReq header.
+*/
+
+bufSize = (XMaxRequestSize(dpy) * 4) - sz_xGLXRenderReq;
gc->buf = (GLubyte *) Xmalloc(bufSize);
if (!gc->buf) {
Xfree(gc);
 


Re: restarting drm modules

2003-06-26 Thread Ian Romanick
Doug Buxton wrote:
I'm new to the XFree86 sources, so I was hoping someone could give some suggestions as to where to start looking.  Is there an existing mechanism for changing drm drivers, or restarting drm without restarting X entirely?  I'm trying to find a way to make X gracefully handle changing the drm module.  Right now when I disable the kernel module X either hangs (until I reactivate the module) or crashes, depending on whether I'm using the distribution version of XFree86 or the one that I downloaded and compiled.
There was once (is still?) a patch around for the Radeon / R200 driver 
that allowed this.  The mechanism was that the user could switch to a 
virtual terminal, rmmod the kernel driver, copy a different driver to 
/lib/modules/..., insmod the new driver (this step may not have been 
required), and return to X.  Like I said, the Radeon & R200 were 
the *only* drivers that supported this.

In principle, it should be possible to do this with most of the drivers, 
but there are a few corner cases where you have to be careful.  As 3D on 
XFree86 becomes more ubiquitous, having drivers that can do this will be 
a better and better idea.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: RFC: OpenGL + XvMC

2003-06-03 Thread Ian Romanick
Mark Vojkovich wrote:
On Sun, 1 Jun 2003, Jon Leech wrote:
   You might want to think about how this could carry over to the
upcoming super buffers extension, too, since that will probably replace
pbuffers for most purposes within a few years. Since super buffers
  There are a lot of people who are just discovering pbuffers now.
I would guess it would take years before superbuffers were widely used.
I would re-think that assumption. :)  A *lot* of people have known about 
pbuffers but have intentionally avoided them.  When superbuffers are 
available, they are going to jump all over it!  Not only that, on Linux 
only the Nvidia drivers and the ATI drivers for the FireGL 1/2/3 cards 
(not the Radeon-based FireGL cards) support pbuffers at all currently.

Since nobody supports superbuffers yet, I think we could probably 
re-visit this issue when it is available.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: status of SiS 3d?

2003-06-03 Thread Ian Romanick
Alex Deucher wrote:
SiS wrote support for the 300 series and it works.  However, when Mesa
4.x came out no one ever updated the SiS DRI code to match the new
structure, so DRI works with the 300 only if you use the Mesa 3.x libs.  It
shouldn't be too hard to port the SiS code to Mesa 4.x, but there
doesn't seem to be much interest in doing so.  3D support for newer SiS
boards probably won't happen because SiS has changed their policy in
regard to giving out docs for their chips.  3D support for the older SiS
boards (the 6326 or whatever it's called) should be possible since docs are
available for that board (there was even a utah-glx driver for it), but
it needs to be written.
Which boards did the DRI driver support?  I see 6327 sprinkled all over 
the driver, but not much else.  Would it also support the 6326?  I see 
those on eBay for less than $15 shipped.  If the driver supports that 
chip, I might get one and update the driver to just get "Can I have 3D 
on my old SiS card?" out of the FAQ. :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: RFC: OpenGL + XvMC

2003-06-03 Thread Ian Romanick
Mark Vojkovich wrote:
On Sun, 1 Jun 2003, Jon Leech wrote:

On Mon, Jun 02, 2003 at 01:09:59AM -0400, Mark Vojkovich wrote:

  Extending GL to recognize a relatively unknown XFree86 format
is a hard sell.  I wouldn't even be able to convince my own company
to dirty their code for it seeing as how relatively nobody is using
XvMC.
   Do you implement this without touching the GL driver code? Seems
difficult to avoid touching the driver in the general case, when the
format and location of pbuffer memory is intentionally opaque.
   I haven't touched the GL driver at all.  XvMC is direct rendered
and the assumption is that it's using the same direct rendering
architecture as OpenGL and should be able to get access to the
pbuffer memory if it can name it, just like GL would be able to
do.
You may not have touched the GL driver at all, but you are using some 
sort of non-public interface to it to convert a pbuffer ID to an 
address.  That was somewhat the point of Jon's comment.  I certainly 
don't see anything in any pbuffer documentation that I've ever seen that 
describes how to get the address in video memory of a pbuffer.  In fact, 
the documentation that I have seen goes to some length to explain that 
at certain points in time the pbuffer may not have an address in video 
memory.

Instead of modifying your 3D driver, you've used an internal interface 
that, luckily for you, just happened to already be there.  The rest of 
us may not be so lucky.

Given that, I have only three comments / requests for the function.

1. Please provide a way to specify the destination buffer (i.e., 
GL_FRONT, GL_BACK_RIGHT, etc.) of the copy.

2. Make explicit the coordinate conversion monkey business.

3. Is there a way for apps to determine if this function is available on 
their hardware?  Later this year when pbuffers become available in the 
open-source drivers, we probably won't (initially) have support for this 
function.  I fully expect that support will follow soon, but it won't be 
there initially.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: OpenGL + XvMC

2003-06-03 Thread Ian Romanick
Sottek, Matthew J wrote:
Let me preface my comment with "I don't know a lot about OGL," so some 
further clarification may be needed.
I am assuming that pbuffers are basically buffers that can be used
as textures by OGL. I would then assume that the OGL driver would
have some mapping of pbuffer id to the texture memory it represents; 
maybe this memory is in video memory, maybe it has been swapped out, 
so to speak, by some texture manager, etc.
A pbuffer is (basically) just an off-screen window.  You can do the same 
things to a pbuffer that you can do to a normal window.  This includes 
copying its contents to a texture.  There was a proposal to bring 
WGL_render_texture to GLX, but, in light of other developments, there 
wasn't much interest.  It *may* be resurrected at some point for 
completeness' sake, but I wouldn't hold my breath.
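
(For anyone who hasn't used the GLX 1.3 pbuffer calls, here is a minimal 
sketch of the copy-a-pbuffer-into-a-texture path described above.  It 
assumes a GLXFBConfig with GLX_PBUFFER_BIT in its GLX_DRAWABLE_TYPE has 
already been chosen and that a context is current on the pbuffer; error 
handling is left out and the helper names are made up for the example.)

#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

/* Create an off-screen pbuffer of the requested size (GLX 1.3). */
static GLXPbuffer make_pbuffer(Display *dpy, GLXFBConfig cfg, int w, int h)
{
    const int attribs[] = {
        GLX_PBUFFER_WIDTH,  w,
        GLX_PBUFFER_HEIGHT, h,
        None
    };
    return glXCreatePbuffer(dpy, cfg, attribs);
}

/* With a context current on the pbuffer and a w x h texture already
 * defined and bound, pull the pbuffer contents into the texture, just
 * as one would from an ordinary window. */
static void pbuffer_to_texture(int w, int h)
{
    glReadBuffer(GL_FRONT);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
}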

So basically this copies data from an XvMC offscreen surface to an
OGL offscreen surface to be used by OGL for normal rendering purposes.
Seems easy enough... I expect anyone doing XvMC would use the drm
for the direct access (or their own drm equivalent) which would also
be the same drm used for OGL and therefore whatever texture management
needs to be done should be possible without much of a problem.
Well, except that, at least in the open-source DRI based drivers, the 
texture memory manager doesn't live in the DRM (any more than malloc and 
free live in the kernel).

My main problem with the concept is that it seems that a copy is not
always required, and is costly at 24fps. For YUV packed surfaces at
least, an XvMC surface could be directly used as a texture. Some way
to associate an XvMC surface with a pbuffer without a copy seems
like something that would have a large performance gain.
It *may* not always be required.  There have been GLX extensions in the 
past (see my first message in this thread) that worked that way. 
However, as we discussed earlier, this doesn't seem to work so well with 
MPEG video files, the main problem being that you don't get the frames 
exactly in order.  You're stuck doing a copy either way.

Also, what is the goal exactly? Are you trying to allow video to be
used as textures within a 3d rendered scene, or are you trying to
make it possible to do something like Xv, but using direct rendering
and 3d hardware?
If you are trying to do the latter, it seems far easier to just plug
your XvMC extension into the 3d engine rather than into the overlay. I think
you've done the equivalent with Xv already.
I think the goal is to be able to do both.  Although, the idea of using 
MPEG video files as animated textures in a game is pretty cool. :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: status of SiS 3d?

2003-06-03 Thread Ian Romanick
Thomas Winischhofer wrote:
Alex Deucher wrote:

Right now just the 300 series (300, 305?, 540, 630/S/ST, 730) have DRI
support.  The old series (6326, 620, 530) don't have DRI support, but
there are docs available (on the DRI website, I think) to write a DRI
driver; there was also a utah-glx driver for that series.  I think
the 6327 might have been the internal SiS name for the 300 series,
although that's just a guess on my part.  The 6326 and the 300 series
might be similar enough to support them both with one driver, but I
No, they are not.
So...the 6327 is the 300 series, and it is not similar at all to the 
6326?  It's also not at all similar to the 315 series?  Wow.  Their 
hardware designers really went out of their way to make a driver 
writer's life miserable. :(

about the DRI, and I'd be willing to try to help you if you wanted to. 
I'll even provide cards.  SiS 300 series cards are also very cheap. 
I wouldn't buy a 300 series card nowadays, as cheap as they might be. 
They are quite slow and far behind today's standards. Their only strong 
side is video support.
I certainly wouldn't buy one to replace my Radeon 8500! :)  It would be 
exclusively to update the driver.  It's the same reason I would buy a 
Gamma card w/ an R2 rasterizer... too bad there are *none* on eBay.  After 
I realized that, I pretty much gave up any hope of the gamma driver 
ever being updated.  That is, unless 3dlabs were to give out 
documentation for an R3 or R4 rasterizer.

It's doubtful, however, since SiS refuses to hand out docs any more.
Once they are through with what is going on right now (can't tell you), 
the situation might become better.
We'll all be waiting with bated breath. :)

Thanks for your help.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: RFC: OpenGL + XvMC

2003-06-01 Thread Ian Romanick
Mark Vojkovich wrote:
On Fri, 30 May 2003, Ian Romanick wrote:

Mark Vojkovich wrote:

  I'd like to propose adding an XvMCCopySurfaceToGLXPbuffer function
to XvMC.  I have implemented this in NVIDIA's binary drivers and
am able to do full framerate HDTV video textures on the higher end
GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
contents into a texture.
This sounds like a good candidate for a GLX extension.  I've been 
wondering when someone would suggest something like this. :)  Although, I 
did expect it to come from someone doing video capture work first.
   I wanted to avoid something from the GLX side.  Introducing the
concept of an XFree86 video extension buffer to GLX seemed like a hard
sell.  Introducing a well-established GLX drawable type to XvMC 
seemed more reasonable.
Right.  I thought about this a bit more last night.  A better approach 
might be to expose this functionality as an XFree86 extension, then 
create a GLX extension on top of it.  I was thinking of an extension 
where you would bind a magic buffer to a pbuffer, then take a snapshot 
from the input buffer to the pbuffer.  Doing that we could create 
layered extensions for binding v4l streams to pbuffers.  This would be 
like creating a subclass in C++ and just overriding the virtual 
CaptureImage method.  I think that would be much nicer for application code.

At the same time, all of the real work would still be done in the X 
extension (or v4l).  Only some light-weight bookkeeping would live in GLX.
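
(In the C of the GLX and XvMC code, the subclass-with-a-virtual-CaptureImage 
idea above would presumably boil down to a small table of function pointers.  
The names in this rough sketch are entirely hypothetical; they exist in no 
real header.)

/* Hypothetical sketch of the layering described above; none of these
 * names exist in any real GLX, XvMC or v4l header. */
typedef struct VideoSource VideoSource;

struct VideoSource {
    void *priv;                                   /* per-backend state      */
    int (*CaptureImage)(VideoSource *src,         /* "virtual" method:      */
                        unsigned long pbuffer);   /* snapshot into pbuffer  */
};

/* The GLX-side bookkeeping only calls through the hook; the real work
 * stays in whichever X extension (XvMC, v4l, ...) filled it in. */
static int CaptureToPbuffer(VideoSource *src, unsigned long pbuffer)
{
    return src->CaptureImage(src, pbuffer);
}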

Over the years there have been a couple of extensions for doing things 
like this, both from SGI.  They both work by streaming video data into a new 
type of GLX drawable and use that to source pixel / texel data.

  http://oss.sgi.com/projects/ogl-sample/registry/SGIX/video_source.txt
  http://oss.sgi.com/projects/ogl-sample/registry/SGIX/dmbuffer.txt
The function that you're suggesting here is a clear break from that.  I 
don't think that's a bad thing.  I suspect that you designed it this way 
so that the implementation would not have to live in the GLX subsystem 
or in the 3D driver, correct?
   That was one of the goals.   I generally view trying to bind 
a video-specific buffer to an OpenGL buffer as a bad idea since they
always end up as second class.  While there have been implementations
that could use video buffers as textures, etc... they've always had
serious limitations like the inability to have mipmaps, or repeat, or
limited filtering ability, or other disappointing things that people
are sad to learn about.  I opted instead for an explicit copy from
a video-specific surface to a first-class OpenGL drawable.  Being
able to do HDTV video textures on a P4 1.2 Gig PC with a $100 video
card has shown this to be a reasonable tradeoff.
The reason you would lose mipmaps and most of the texture wrap modes is 
that video streams rarely have power-of-two dimensions.  In the past, 
hardware couldn't do mipmapping or GL_WRAP on non-power-of-two textures. 
 For the most part, without NV_texture_rectangle, you can't even use 
npot textures. :(

With slightly closer integration between XvMC and the 3D driver, we 
ought to be able to do something along those lines.  Basically, bind an 
XvMCSurface to a pbuffer.  Then, each time a new frame of video is 
rendered the pbuffer would be automatically updated.  Given the way the 
XvMC works, I'm not sure how well that would work, though.  I'll have to 
think on it some more.


   MPEG frames are displayed in a different order than they are
rendered.  It's best if the decoder has full control over what goes
where and when.
Oh.  That does change things a bit.

Status
XvMCCopySurfaceToGLXPbuffer (
 Display *display,
 XvMCSurface *surface,
 XID pbuffer_id,
 short src_x,
 short src_y,
 unsigned short width,
 unsigned short height,
 short dst_x,
 short dst_y,
 int flags
);
One quick comment.  Don't use 'short', use 'int'.  On every existing and 
future platform that we're likely to care about, the shorts will take up 
as much space as an int on the stack anyway, and slower / larger / more 
instructions will need to be used to access them.
   This is an X-window extension.  It's limited to the signed 16 bit
coordinate system like the X-window system itself, all of Xlib and
the rest of XvMC.
So?  Just because the values are limited to 16 bits doesn't necessitate 
that they be stored in a memory location that's only 16 bits wide.  If X were 
being developed from scratch today, instead of calling everything short, 
it would likely be int_fast16_t.  On IA-32, PowerPC, Alpha, SPARC, and 
x86-64, this is int.  Maybe using the C99 types is the right answer anyway.
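
(A trivial way to check the tradeoff on a given platform and C library is 
to print the sizes directly.  The fast-type sizes do vary by ABI, so treat 
this as a probe, not a guarantee.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    printf("short        : %zu bytes\n", sizeof(short));
    printf("int          : %zu bytes\n", sizeof(int));
    printf("int_fast16_t : %zu bytes\n", sizeof(int_fast16_t));
    return 0;
}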

  This function copies the rectangle specified by src_x, src_y, width,
 and height from the XvMCSurface denoted by surface to offset dst_x, dst_y 
 within the pbuffer identified by its GLXPbuffer XID pbuffer_id.
 Note that while the src_x, src_y are in XvMC's standard left-handed
 coordinate system and specify the upper left hand

Re: glapi_x86.S glx86asm.py

2003-01-30 Thread Ian Romanick
Alexander Stohr wrote:

From CVS/XFree86/xc/extras/Mesa/bin/Attic/glx86asm.py,v

revision 1.2
date: 2000/12/07 16:12:47;  author: dawes;  state: dead;  lines: +0 -0
Remove from the trunk the Mesa files that aren't needed.

Latest entry in cvs log of c/extras/Mesa/src/X86/glapi_x86.S
revision 1.7
date: 2002/09/09 21:07:33;  author: dawes;  state: Exp;  lines: +1 -1
Mesa 4.0.2 merge

(So the script glx86asm.py was removed after glapi_x86.S last changed,
which is a good sign).


Really?  Hmm, if the respective API listing ever changes or is extended, 
it might be simpler to use an existing script and then submit the results 
than to perform error-prone copy-and-paste operations on the results.

You'd have to ask Brian to be sure, but I believe the intention is that 
if the interface ever changes, a new .S file be generated in the Mesa 
tree and imported to the XFree86 & DRI trees.  There should never be a 
case where the .S file would change in XFree86 and not change in Mesa.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel