Re: [XFree86] 3D acceleration via XDM

2006-03-17 Thread Ian Romanick

Joel CARNAT wrote:

 If I set up an xdm server (on the NetBSD machine) and connect to it from
 a Linux machine (where the commercial driver is installed), do I get 3D
 accel with the remote session?
 
 Another way to put that is, when using XDMCP, do I inherit the local X
 driver's capabilities or do I keep the server's (I'm thinking of running
 NetBSD on a sparc64 system but connecting to it remotely using a $20 diskless
 i386 with 3D graphics drivers ;).

You need to have the drivers on the system where the hardware is.  There
is no aspect of this that has any prayer of ever working.


Re: [XFree86] Mesa, X.org and XFree86

2006-01-17 Thread Ian Romanick

Jim Osborn wrote:
 I need Mesa's GLUT, which didn't come with XFree86-4.5.0.

Mesa includes Mark Kilgard's original GLUT.  That has license issues
that some find objectionable, so most Linux distros and (presumably)
XFree86 don't ship it.

If you really need that version of GLUT (instead of, say, freeglut),
just download the Mesa sources and build it from there.  You should be
able to do 'make linux' or something similar from the top-level, then
just install lib/libglut* and include/GL/glut.h.

I don't know if there's still a stand-alone release of Mark's GLUT.
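
For what it's worth, here is a minimal GLUT program (a hypothetical test, not from this thread) that can be used to verify that the installed glut.h and libglut actually work; the build line assumes the usual XFree86 paths:

  /* gluttest.c -- minimal sketch to verify a GLUT install.
   * Build (adjust paths as needed):
   *   cc gluttest.c -o gluttest -L/usr/X11R6/lib -lglut -lGLU -lGL -lm
   */
  #include <GL/glut.h>

  static void display(void)
  {
      glClear(GL_COLOR_BUFFER_BIT);
      glBegin(GL_TRIANGLES);   /* one colored triangle is enough to test */
      glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
      glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
      glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
      glEnd();
      glutSwapBuffers();
  }

  int main(int argc, char **argv)
  {
      glutInit(&argc, argv);
      glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
      glutInitWindowSize(300, 300);
      glutCreateWindow("GLUT install test");
      glutDisplayFunc(display);
      glutMainLoop();
      return 0;
  }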


Re: tdfx and DDC2

2005-08-30 Thread Ian Romanick

Tim Roberts wrote:
 Michael wrote:
 
 I don't see why they should be enabled - they're PC-specific and even
 with x86 emulation they would be pretty much useless since you're not
 too likely to encounter a graphics board with PC firmware in a Mac ( or
 other PowerPC boxes )
 
 Wrong.  No hardware manufacturer in their right mind would build a
 Mac-only PCI graphics board, with the possible exception of Apple. 
 They're going to build a generic graphics board that works in a PC and
 by the way also works in a Mac.  Such a board will have a video BIOS.

That is 100% untrue.  Take *any* AGP or PCI card, with one* exception,
made for the Mac and it will not work in a PC.  Macs (and Suns and IBM
pSeries) use OpenFirmware (byte-code compiled Forth) and PCs use
compiled x86 for their respective firmwares.  Neither one works with the
other.

Some people have had limited success reflashing PC cards with Mac
firmware, but I don't think that counts.

* http://apps.ati.com/ir/PressReleaseText.asp?compid=105421releaseI


Re: [XFree86] AGP GART support required???????

2005-06-15 Thread Ian Romanick

soumya de wrote:

 (EE) I810(0) :AGP GART support is not available. Make
 sure your kernel has agpgart support or that the
 agpgart kernel module is loaded. 

I'm not 100% positive, but I believe this is accurate.  The AGP
controller of the i8xx chipsets and the integrated graphics controllers
are very intimately related.  I don't think the graphics controller has
any way to map a framebuffer, for example, without using the AGP GART.



Re: Darwin extern/static fix

2005-04-13 Thread Ian Romanick
Torrey Lyons wrote:
At 3:42 PM -0400 4/13/05, David Dawes wrote:
On Wed, Apr 13, 2005 at 11:52:47AM -0700, Torrey Lyons wrote:
Bugzilla #1576 and the fix committed for it is only partially right.
The patch applewmExt.h is right, but patching the imported Mesa code
in extras/Mesa/include/GL/internal/dri_interface.h is the wrong thing
to do and likely has unintended side effects on other platforms. The
correct fix is just to rename __driConfigOptions in
lib/GL/apple/dri_glx.c. Thanks for pointing out the issue.

I didn't find anything that requires the external declaration of
__driConfigOptions, which is why I applied the patch as submitted.
Perhaps something should in the BUILT_IN_DRI_DRIVER case.  There
are also likely other issues with the BUILT_IN_DRI_DRIVER case.

Yes, I don't know of a specific issue, but it seems like bad practice to 
change an imported header file when we don't need to. The names I came 
up with in apple/dri_glx.c are completely arbitrary. Now that in gcc 4.0 
we can't rely on static to avoid namespace collisions, those static 
variables should be named something more unique. In the X.Org tree I'm 
going to change the name of the static variables in apple/dri_glx.c. Of 
course there's nothing wrong with doing both this and the submitted patch.

__driConfigOptions is supposed to be exported by the DRI driver.  The 
idea is that a configuration utility would open libGL and use 
glXGetDriverConfig to get the configuration options supported by the 
driver.  If the libGL doesn't support loading DRI drivers, as I suspect 
is the case with the Darwin libGL, there is no reason for 
glXGetDriverConfig to ever return *anything* other than NULL.



Re: [XFree86] libGL.la missing?

2005-03-31 Thread Ian Romanick
Nicholas Sushkin wrote:
I recently compiled and installed XFree86 4.5.0 on Linux. Now I am trying to 
compile KDE, but it complains that libGL.la is missing. Isn't this file 
supposed to be installed by XFree86? Is it a bug that libGL.la wasn't 
generated by the install?

libGL is installed as libGL.so.1.2.  If the KDE build is looking for 
something else, it is looking for the wrong thing.



Re: [XFree86] libGL error: MGA DRI driver expected DDX driver version 1.0.x but got version 4.1.0

2005-03-03 Thread Ian Romanick
Timo Saarinen wrote:
I have a Matrox G550 card installed.  The operating system is Debian Linux 
Testing, and the HAL driver (mgadriver-4.1) from Matrox is installed 
because it allows using the digital cable.

When running glxinfo with LIBGL_DEBUG=1 the following error message is 
printed: "libGL error: MGA DRI driver expected DDX driver version 1.0.x 
but got version 4.1.0".  Is it possible to get DRI working without 
giving up the HAL drivers?

Find whoever decided to randomly change the DDX version and kick them in 
the teeth. :(  Version numbers, especially major version numbers, are 
used to determine binary interface compatibility.  Changing them 
randomly breaks things.
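
As an illustration, this is the sort of check that fails (a sketch with made-up names, not the actual MGA DRI driver code): the driver pins the DDX interface major version and rejects anything else, so a vendor driver reporting 4.1.0 instead of 1.0.x is refused.

  /* Sketch of a DRI-style DDX version handshake; the names here are
   * illustrative, not the actual MGA driver's code. */
  typedef struct {
      int major, minor, patch;
  } ddx_version_t;

  /* Built against DDX interface 1.0.x: the major version must match
   * exactly; any minor at that major is assumed binary compatible. */
  static int ddx_version_ok(const ddx_version_t *v)
  {
      return v->major == 1;   /* a DDX reporting 4.1.0 fails here */
  }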


Re: [XFree86] Fedora 3 -new installation GL issues

2005-02-18 Thread Ian Romanick
Jesse Nichols wrote:
New to Linux, but I just installed FC3 on a P3-450, 256MB RAM, 40GB HD,
Radeon 7000 PCI card.

FC3 uses X.org instead of XFree86, so this is actually the wrong list. :(

The installation went great.  Desktop looks great, works fine.  
The problem is that all the GL screensavers like GL-gears, etc are
extremely slow.
Tux racer runs great, so there must be some form of gl being used for
rendering.
When I run glxinfo it says 

Direct rendering: yes

If glxinfo says you have direct rendering, then all applications should 
have hardware accelerated rendering.  Can you be more specific about the 
problems you're seeing?


Re: [Mesa3d-dev] Re: [XFree86] Problem with programs/Xserver/GL/glx/glxcmds.c

2005-02-15 Thread Ian Romanick
Brian Paul wrote:
Bukie Mabayoje wrote:
This is a mesa issue.
Willing, John (J.K.) wrote:
While running with a commercial CAD program, we encountered a problem 
with the OpenGL libraries.

In Function DoMakeCurrent
640   if (prevglxc) {
641      if (prevglxc->drawPixmap) {
642         if (prevglxc->drawPixmap != prevglxc->readPixmap) {
643            /*
644            ** The previous drawable was a glx pixmap, release it.
645            */
646            prevglxc->readPixmap->refcnt--;

We came across a problem where prevglxc->readPixmap is NULL, causing a 
Segmentation Fault at line 646.  I resolved the issue by changing 
line 641 to:

641  if (prevglxc->drawPixmap && prevglxc->readPixmap) {
John Willing
What file is that in?

It's in programs/Xserver/GL/glx/glxcmds.c.  This file is *NOT* in the 
Mesa tree.  This is part of the Xserver.  Bukie, how do you figure that 
this is a Mesa problem?


Re: [Mesa3d-dev] Re: [XFree86] Problem with programs/Xserver/GL/glx/glxcmds.c

2005-02-15 Thread Ian Romanick
Bukie Mabayoje wrote:
The file is located at Mesa/src/glx/x11 in your tree
and at xc/programs/Xserver/GL/glx in the XFree86 tree.
But they are both out of sync.  My assumption is that XFree86 uses the Mesa 
stuff.  I may be wrong.

While they have the same name, those files are different.  The one in 
src/glx/x11 is used in the client-side libGL library, and the one in 
programs/Xserver/GL/glx is used in the server-side libglx.a module.  I 
believe that the names were set when SGI originally donated the code 
some years ago, and they have caused confusion ever since. :(


Re: DRM kernel source broken/incomplete

2005-02-08 Thread Ian Romanick
Dr Andrew C Aitchison wrote:
On Tue, 8 Feb 2005, David Dawes wrote:
It looks like the DRM kernel source in xc/extras/drm is broken and
incomplete, especially for BSD platforms.  The Linux version only
appears to build for a narrow range of kernels, and this either
needs to be fixed, or the minimum kernel requirements enforced in
the Makefile.
Perhaps we'll have to roll back to an older version that does build?
How often does the Xserver / DRM binary interface change - 
is it viable to just use the DRM in the running kernel ?

I suppose this is really a question for one of the DRM lists but,
is it a forlorn hope that the DRM could have a static binary
interface to either the kernel or the X server ?
(I guess that a moving kernel puts the former outside the control
of the DRM project ?)

There's a mixed answer (good news / bad news) to that question.  AFAIK, 
the user-space client-side drivers and the DDX should work with quite an 
old DRM.  That's the good news part.  The bad news is that some features 
and / or bug fixes may not be available.  For example, the current R200 
driver works just fine with the DRM that ships with the 2.4.21 kernel, but a 
couple of security fixes and support for tiled framebuffers are missing.


Re: [XFree86] ATI Mobility Radeon 7000 (notebook) not recognized - not supported??

2004-11-17 Thread Ian Romanick
Cornelis Bockemuehl wrote:
My hardware is an IBM Thinkpad R32, and IBM specifies the graphic adapter
as follows:
ATI Mobility Radeon 7000, AGP 4x, 16MB DDR-SDRAM, 66 MHz bus
On the ATI webpage I find the following graphic chip, which might (or might
not??) be the same:
ATI Mobility Radeon 7000 IGP
(note the IGP suffix)
A PCI bus scan of my system delivers the following complete output:
http://www.os2warp.be/notebook2/ibm/r32pci.txt
the essence being: Radeon Mobility M6 LY

Here's the scoop.  Any ATI chipset that is an IGP, and there are 
several, is a chipset where the graphics chip is integrated into the 
motherboard's memory controller.  IGP is an acronym for Integrated 
Graphics "some word that starts with P."  The regular mobility chips are 
stand-alone graphics chips that usually have memory either integrated on 
the chip or stacked on the chip's case.  Look at the picture of the 
Radeon Mobility M9 here:

http://www.idhw.com/textual/chip/ati/chippicture.html

The chip in your laptop is the stand-alone Radeon Mobility M6.  The "LY" 
part is just the ASCII translation of the hex digits 0x4c 0x59.  This is 
done because ATI likes to make lots of chips in the same model with 
different PCI IDs.  For the M6, there are two PCI IDs around: 4c59 and 
4c5a.  The former is called "LY" and the latter is called "LZ".  There 
are 16 different versions of the chip used in the Radeon 8500 cards! 
Usually a different PCI ID doesn't mean anything, so I wouldn't worry 
about that.
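
That translation is easy to check; a small stand-alone sketch (hypothetical helper, not ATI's or the driver's code):

  #include <stdio.h>

  /* Print ATI's two-letter name for a 16-bit PCI device ID: each byte
   * is read as an ASCII character, so 0x4c59 -> "LY" and 0x4c5a -> "LZ". */
  static void print_ati_name(unsigned short device_id)
  {
      printf("0x%04x -> \"%c%c\"\n", device_id,
             (device_id >> 8) & 0xff,   /* high byte: 0x4c = 'L'          */
             device_id & 0xff);         /* low byte:  0x59/0x5a = 'Y'/'Z' */
  }

  int main(void)
  {
      print_ati_name(0x4c59);   /* Radeon Mobility M6 "LY" */
      print_ati_name(0x4c5a);   /* Radeon Mobility M6 "LZ" */
      return 0;
  }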


Re: [XFree86] ATI Mobility Radeon 7000 (notebook) not recognized - not supported??

2004-11-17 Thread Ian Romanick
Cornelis Bockemuehl wrote:
Hello Ian,
Thanks for your explanation!
So the bottom line for me:
- IGP and M6 are not the same chip

Correct.

- Since I only find IGP in the radeon driver documentation for
XFree86, while my own system has an M6, which is not mentioned, there
is a good chance that it is indeed not supported, and that the reason
why all XFree86 programs fail might be exactly that :-(

AFAIK, it *should* be supported.  At least from the 3D perspective, 
which is the only area I really know, the M6 and the desktop Radeon 7000 
are identical.  Since the 7000 is supported, and has been for a long 
time, the M6 should be as well.  After all, the M6 is more than 2 years old.


Re: [XFree86] PBuffer support

2004-11-03 Thread Ian Romanick
Aleksandar Donev wrote:
Hello,
It appears to me that in principle the XFree86 implementation of GLX is up to 
1.3 and supports pbuffers.  However, I have not been able to find even a 
single example on the web that I can use as a test.  I am trying to 
follow the GLX 1.3 specs and it is not working (the creation in 
glXCreatePbuffer fails).  So I was hoping I could find someone else's 
example of how to do it???  The Mesa docs mention a pbinfo.c, but this 
is no longer there and also seems to be pre-GLX 1.3.  If such a test 
program is available, it would be useful to have.

No shipping version of XFree86 or X.org supports pbuffers.  The glxinfo 
utility will tell you what GLX version and GLX extensions are supported.


Re: Added Pseudocolor Visuals for XFree86?

2004-11-01 Thread Ian Romanick
Bussoletti, John E wrote:
At Boeing we have a number of graphics applications that have been
developed in-house, originally for various SGI platforms.  These
applications are used for engineering visualization.  They work well on
the native hardware and even display well across the network using third
party applications under Windows like Hummingbird's ExCeed 3D.  However,
under Linux, they fail to work properly, either natively or via remote
display with the original SGI hardware acting as server, due to
omissions in the available Pseudocolor Visuals.

The X terminology is a little different than most people expect, so I 
want to ask for some clarification.  By "SGI hardware acting as server" 
do you mean the application is running on the SGI and displaying on the 
Linux system, or the application is running on the Linux system and 
displaying on the SGI?  In X terminology, the server (i.e., X-server) 
is wherever the stuff is being displayed.


Re: G4 AGP

2004-09-29 Thread Ian Romanick
F. Heitkamp wrote:
I can't get agp to work with my Apple G4.  When I enable DRI X comes  up 
but the resolution appears to be 640x480 and the mouse cursor is large, 
distorted and quivering.  No user input is possible at this point.
Is AGP support for the G4 still under development or is it supposed to 
work?  I have a Radeon 9000.

AFAIK, AGP is supported on all G4 based Macs.  All of that should work 
fine even without AGP support.  Does it work correctly with DRI 
disabled?  Anything relevant show up in /var/log/XFree86.log?


Re: [XFree86] Problems enabling 3D accelration on ATI Radeon M6

2004-09-01 Thread Ian Romanick
Christian Brix Folsted Andersen wrote:
I am having trouble enabling 3D acceleration on my laptop with an ATI Radeon 
Mobility M6.

System: Mandrake 10.0
Kernel: 2.6.8
When I run glxgears I get this error:
Xlib:  extension XFree86-DRI missing on display :0.0
The framerate is only approx 200 fps and I suspect 3D acceleration is not 
enabled.  Any suggestions?

/var/log/XFree86.0.log
...
(II) RADEON(0): Acceleration enabled
(==) RADEON(0): Backing store disabled
(==) RADEON(0): Silken mouse enabled
(II) RADEON(0): Using hardware cursor (scanline 770)
(II) RADEON(0): Largest offscreen area available: 1024 x 1274
(II) RADEON(0): Direct rendering disabled
...

Try running at 16-bit color or a lower resolution.  You don't have 
enough memory to get 3D at 1024x768x24-bit.
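
The arithmetic behind that advice, as a rough sketch (the exact buffer layout, alignment, and depth-buffer format are driver-specific assumptions here):

  #include <stdio.h>

  /* Rough estimate of the card memory 3D needs: front, back, and depth
   * buffers.  Assumes the depth buffer matches the color buffer size and
   * ignores alignment and pitch padding, which are driver-specific. */
  static long buffer_kb(int w, int h, int bytes_per_pixel)
  {
      return 3L * w * h * bytes_per_pixel / 1024;  /* front + back + depth */
  }

  int main(void)
  {
      /* 24-bit color is stored as 32 bits per pixel, so dropping to
       * 16-bit roughly halves the memory the 3D buffers consume. */
      printf("1024x768 @ 16bpp: ~%ld KB\n", buffer_kb(1024, 768, 2));
      printf("1024x768 @ 32bpp: ~%ld KB\n", buffer_kb(1024, 768, 4));
      return 0;
  }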



Re: Continued : Xfree 4.4 make install failure on ppc system - scaled fonts problem with mkfonts

2004-07-15 Thread Ian Romanick
[EMAIL PROTECTED] wrote:
Following up on the post http://www.mail-archive.com/[EMAIL PROTECTED]/msg16132.html
I think I have found where the problem is: line 1024 of mkfontscale.c, in the call to 
FT_Get_Name_Index.
The n parameter's value is a space when it crashes.  I didn't check all the values in the struct 
face, but the family name is "Utopia" when it crashes.

I have been able to reproduce this same problem on a G4 running Debian 
(sarge), but *not* on a POWER4 box.  GCC 3.3.4 was used on the G4, and 
GCC 3.3.3 was used on the POWER4.  Both built for 32-bit.  On the G4, I 
tried building with a variety of different optimization settings (-O0, 
-O2, -Os) and architecture settings, but nothing seemed to help.



Re: [XFree86] PCI Radeon 7500, DRI, and 4.4.0

2004-06-24 Thread Ian Romanick
Andy Goth wrote:
First I must point out that everything works alright when I don't use
DRI (more specifically, when I enable xinerama to disable DRI as a side
effect).  So the card's fine and X is fine and my installation isn't
completely broken and my configuration must be right.
But with DRI I wind up with a non-usable X.  The first couple X-related
commands (xsetroot, etc.) in my .xinitrc execute and their results are
visible (the root window no longer drives me batty), but it doesn't take
long for the display to freeze (I don't see wmaker start).  Along the
top of the screen I have a bar of garbage several hundred pixels thick.
But stranger things are in evidence... X is taking 100% CPU.  My mouse
continues to work, but my keyboard is sometimes stuck (no LEDs, no
Ctrl+Alt+...).  And if I ssh in remotely and do the following:

It sounds like the chip is crashing and X is busy-waiting for it to 
finish.  Could you send a copy of your log?  Also, could you try a more 
recent version of the DRI driver?  See "Snapshots" on 
http://dri.sourceforge.net/cgi-bin/moin.cgi/Download



Re: Adding DMX to XFree86

2004-06-23 Thread Ian Romanick
Kevin E Martin wrote:
I think many of us would very much like to have hardware accelerated
indirect rendering, and from time to time there has been talk of adding
it to the DRI project.  It's actually been on the to do list for the
DRI project from the original design days, but it's a large project and
there was little interest in funding it back when I was with PI and VA.
I'm still hopeful that it will eventually happen.

The current thinking is to, essentially, 'rm -rf xc/programs/Xserver/GL' 
and re-write it so that libglx.a loads a device-dependent *_dri.so, like 
the client-side libGL does.  The advantage is that only one driver 
binary will be needed per device.  The support and maintenance 
advantages should be obvious.
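
A rough sketch of that loader scheme (illustrative only; the module path and bootstrap symbol follow the client-side libGL of this era but are assumptions here):

  #include <dlfcn.h>
  #include <stdio.h>

  /* Minimal sketch: open a per-device driver module and hand back the
   * handle so the caller can bind the driver's entry points. */
  static void *load_dri_driver(const char *name)
  {
      char path[256];
      void *handle;

      snprintf(path, sizeof(path),
               "/usr/X11R6/lib/modules/dri/%s_dri.so", name);
      handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
      if (handle == NULL) {
          fprintf(stderr, "driver load error: %s\n", dlerror());
          return NULL;
      }
      /* The loader would now dlsym() the driver's bootstrap entry,
       * e.g. dlsym(handle, "__driCreateScreen"), and call it. */
      return handle;
  }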

Work has been started on an Xlib based DRI driver (something of a 
contradiction in terms, I know) by Adam Jackson.  I've started writing 
Python scripts to automatically generate GLX protocol handling code (for 
both client-side and server-side).  We're getting closer to starting the 
real work, but I need to clear a few things off my plate first.

My goal is to start a branch in the DRI tree in the next few (3 to 4) 
months to get this work going.



Re: Matrox I2C patch

2004-06-14 Thread Ian Romanick
Ryan Underwood wrote:
Not a common scenario.  I know a lot of G550's come with a DVI and an
analog connector, but I've never seen a G450 like that.  (The G450
manual claims that they exist, however.)

I have a PCI G450 (for PowerPC, no less) that has this configuration. 
Of course, I can't get it to work because there's no support for PCI 
domain != 0 on PPC64, and all the PCI slots in my box are in domain 1. 
:(  Until I write domain probing support, I can't help you, but I can 
verify that the cards *do* exist. :)



Re: Register access on MIPS system

2004-06-08 Thread Ian Romanick
Marc Aurele La France wrote:
Well, domain support for MIPS has yet to be written.  Ditto for PowerPC.  And
that for Alphas is somewhat broken.  Lack of time, for one, and lack of
hardware.

Is there some guidance or documentation for how to do this?  I'm about to 
be forced (heh...) to write domain support for PowerPC.  I'd like to be 
able to complete that task with as little pain as possible. :)



Re: [XFree86] i810, glxinfo and xscreensaver crashes

2004-04-26 Thread Ian Romanick
Since this is most likely a DRI related issue, I'm cross-posting to 
dri-devel.

Jeremy C. Reed wrote:

My wife's XFree86 was randomly crashing every once in a while. Not a good
thing.
I tracked it down to xscreensaver. I used xscreensaver-control and
manually tried different savers. It crashed on GL-related savers.
Also, X crashes when running glxinfo.
I read several postings about this, but no real answer. I also didn't find
anything the same in bugzilla. (307 seems similar).
When X crashes, it has a Segmentation fault and signal 11.

I received no other usable (as far as I can tell) output related to crash
from xinit nor in the /var/log/XFree86.0.log.
Her system is running Linux 2.6.3 kernel.

Linux agpgart interface v0.100 (c) Dave Jones
agpgart: Detected an Intel i815 Chipset.
agpgart: Maximum main memory to use for agp memory: 93M
agpgart: detected 4MB dedicated video ram.
agpgart: AGP aperture is 64M @ 0xe800
[drm] Initialized i830 1.3.2 20021108 on minor 0

So, is it an i81x or an i830/i845/i865?  The X log seems to indicate 
i815, but the i830 kernel module is loaded.  Try using the i810 kernel 
module.

I have tried with and without i810 module.

XFree86 is 4.4.0 (as installed from pkgsrc).

/var/log/XFree86.0.log says:

(II) LoadModule: i810
(II) Loading /usr/X11R6/lib/modules/drivers/i810_drv.o
(II) Module i810: vendor=The XFree86 Project
compiled for 4.4.0, module version = 1.3.0
Module class: XFree86 Video Driver
ABI class: XFree86 Video Driver, version 0.7
(II) LoadModule: mouse
(II) Loading /usr/X11R6/lib/modules/input/mouse_drv.o
(II) Module mouse: vendor=The XFree86 Project
compiled for 4.4.0, module version = 1.0.0
Module class: XFree86 XInput Driver
ABI class: XFree86 XInput driver, version 0.4
(II) I810: Driver for Intel Integrated Graphics Chipsets: i810,
i810-dc100,
i810e, i815, i830M, 845G, 852GM/855GM, 865G
(II) Primary Device is: PCI 00:02:0
(--) Chipset i815 found
...

(**) I810(0): DRI is disabled because it runs only at 16-bit depth.

I also tried 16-bit so i810 DRI would stay enabled. But that didn't help.
The GL-related use still crashed X.
What should I do so she can use GL applications?

In the meantime, try setting LIBGL_ALWAYS_INDIRECT in her .profile (or 
whatever) or disabling DRI.



Re: [XFree86] Xlib: extension XFree86-DRI missing on display :0.0.

2004-04-19 Thread Ian Romanick
patrick boenzli wrote:

HW and SW datas:
Sony VAIO PCG-SR11K:
  S3 Savage IX-MV, 8MB

Hardware accelerated 3D is not supported by any current X release on 
that graphics chip.  However, there is a driver in development.  Please 
look at:

http://dri.sourceforge.net/cgi-bin/moin.cgi/S3Savage?action=highlight&value=CategoryHardwareChipset
http://dri.sourceforge.net/cgi-bin/moin.cgi/Download#head-f3c794f007343b969bc570c5dd057212ece700be


Re: [XFree86] SiS Direct rendering

2004-04-02 Thread Ian Romanick
Marcial Vieira wrote:
On Wednesday 31 March 2004 13:27, Ian Romanick wrote:

Based on the fact that glxinfo shows GLX version 1.4 is supported, you
must have installed the software libGL from Mesa.  You need the libGL
that came with XFree86 to get hardware acceleration.  Uninstall the Mesa
library and put the original one back.


Well, I compiled a new version because the libs that came with XFree86 (Slack 
9.1) didn't work for me: glxinfo said indirect rendering and glxgears didn't 
run.  So I upgraded; it still says indirect rendering, but glxgears runs.

By the way, I heard that the file that does direct rendering is
libGL.so.1.(version), so anyone who has SiS with direct rendering, please send 
me that file; it should be in /usr/X11R6/lib.

I downloaded libGL.so.1.1 but it is so old that some programs like xmame.xgl 
don't work with that version.
You're mixing and matching things in ways that they should not be mixed. 
It's like putting diesel in a gasoline engine: it isn't going to work! 
The libGL.so that comes from the Mesa distribution draws everything in 
software using Xlib calls.  YOU DO NOT WANT THAT AT ALL.  The libGL.so 
that comes from XFree86 acts as a driver loader to load the 
direct-rendering driver.  This is the *only* way to get hardware 
accelerated 3D on XFree86.  Look at the glxinfo output in your last 
message: you went from one version of Mesa's libGL (wrong) to another, 
older version of Mesa's libGL (even more wrong).  Did I mention that you 
need the libGL.so from XFree86 and not the one from Mesa? ;)

If the driver that ships with the version of XFree86 you have installed 
isn't working, then you might try an updated driver.  Get a driver 
snapshot from the DRI project's site.  There have been some problems lately, 
so get a snapshot from *before* March 4th.  sis-20040303.tar.bz2 would 
be a good choice.

http://dri.sourceforge.net/cgi-bin/moin.cgi/Download



Re: [XFree86] SiS Direct rendering

2004-03-31 Thread Ian Romanick
Marcial Vieira wrote:
I tried to configure my SiS 630 on-board with Linux but 3D never worked; 
glxinfo always outputs:

direct rendering: No

But XFree86.1.log has:
(II) SIS(0): [drm] installed DRM signal handler
(II) SIS(0): [DRI] installation complete
(II) SIS(0): [drm] installed DRM signal handler
(II) SIS(0): [DRI] installation complete
(II) SIS(0): Direct rendering enabled
What's wrong?

Based on the fact that glxinfo shows GLX version 1.4 is supported, you 
must have installed the software libGL from Mesa.  You need the libGL 
that came with XFree86 to get hardware acceleration.  Uninstall the Mesa 
library and put the original one back.



Re: [XFree86] Error: Xlib: extension XFree86-DRI missing on display :0.0.

2004-03-24 Thread Ian Romanick
Paulo Belletato wrote:

I don't understand why that simple OpenGL program gives the message:

Xlib: extension XFree86-DRI missing on display :0.0.

You should be able to ignore that message.  As part of libGL start-up it 
tries to determine if direct rendering is available.  With the 
standard libGL, this means using DRI protocol.  When it sends the 
"Hello?" message to the X-server, the X-server tells it that it knows 
nothing about DRI.  Hence the message.  From that point it should just 
fall back to indirect rendering and work.  Is that what you're seeing?

I believe it is the use of the GLUT library, because I've already commented 
out the Load "dri" line in XF86Config and restarted X.  When I ran the 
program, the message was the same.

Since I do not see any references to DRI in the program, and according to your 
message DRI should not be necessary to run OpenGL programs, I believe that 
the glut.h header must use some DRI resources.  Am I correct?

You won't see any references to DRI in your program.  DRI means "Direct 
Rendering Infrastructure."  It's how the standard XFree86 libGL, the 
X-server, and the kernel work together to provide hardware accelerated 
direct rendering.  It all happens under the sheets, so to speak.
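
For completeness, a program can ask GLX which mode it ended up with; a minimal sketch (this is the same information glxinfo prints as "direct rendering: Yes/No"):

  #include <stdio.h>
  #include <GL/glx.h>

  /* After creating a GLX context, ask whether it is direct. */
  void report_rendering_mode(Display *dpy, GLXContext ctx)
  {
      if (glXIsDirect(dpy, ctx))
          printf("direct rendering (DRI)\n");
      else
          printf("indirect rendering (GLX protocol through the X server)\n");
  }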



Re: [XFree86] Error: Xlib: extension XFree86-DRI missing on display :0.0.

2004-03-24 Thread Ian Romanick
Paulo Belletato wrote:

The problem is that the compiled program doesn't work properly.

A small window is created but it seems to be transparent.  In fact it 
copies the background.

That is odd.  In an earlier message you said that you have an Nvidia 
card.  Did you ever install the Nvidia drivers (the ones from their 
website)?  It sounds like you may be using some of their files 
(server-side glx module) but not using their 2D driver ("nvidia" vs. 
"nv").  Other than that, I don't know what could be causing the behavior 
you are seeing.  Sorry I couldn't be of more help. :(



Re: [XFree86] Radeon Mobility U1 accelerated 3D support yet?

2004-03-11 Thread Ian Romanick
Phil Barnett wrote:
On Wednesday 10 March 2004 9:51 am, Alex Deucher wrote:

3D support for the IGP chipsets is available in DRI cvs.  You can
either build from source or try the nightly binary snapshots available
here:
http://dri.sourceforge.net/cgi-bin/moin.cgi/Download
Cool, I'll try it out.

What directories should I back up so that I can revert to a previous version 
if the CVS is unworkable?

I'm running debian.

IIRC, with the binary snapshots you don't need to back up anything.  The 
included install script backs up the files that will be replaced. 
You'll want to read the included documentation to be sure, though.  If 
you build from source, you'll want to back up the whole /usr/X11R6 
directory tree, as Alex mentioned in a different message.



Re: XAA2 namespace?

2004-03-03 Thread Ian Romanick
Mark Vojkovich wrote:
On Tue, 2 Mar 2004, Sottek, Matthew J wrote:

 It's currently global because the hardware I work on doesn't
have to fall back to software very often.  Bookkeeping on a per-
surface basis is a simple modification and one I will add.  This
precludes using XAA2 with hardware that doesn't support concurrent
SW and HW access to the framebuffer, but that's OK since that
stuff is old and we're trying to move forward here.  HW that sucks
can use the old XAA.
It shouldn't preclude this from working. You just need the call
to look like sync(xaa_surface_t *surface) and let old hardware
sync the whole engine regardless of the input. It helps those
who can use it and is the same as what you have now for everyone
else.
  I don't understand your reasoning.

  The difference with per-surface as opposed to global sync state 
is that you don't have to sync when CPU rendering to a surface that
has no previously unsynced GPU rendering.  The point of this is
to ALLOW concurrent CPU and GPU rendering into video ram except
in the case where both want to render to the same surface.  There
are old hardware that allow no concurrent CPU and GPU rendering
at all.

  Even with Sync() passing the particular surface which is necessitating
the sync, I would expect all drivers to be syncing the whole chip
without caring what the surface was.  Most hardware allow you to
do checkpointing in the command stream so you can tell how far
along the execution is, but a Sync can happen at any time.  Are
you really going to be checkpointing EVERY 2D operation? 

Not every operation, but every few operations.  For example, the 
Radeon 3D driver has a checkpoint at the end of each DMA buffer.  It's 
more coarse-grained than every operation, but it's much finer-grained 
than having to wait for the engine to idle.

I still contend that it would be a benefit to know how many
rects associated with the same state are going to be sent
(even if you send those in multiple batches for array size
limitations) this allows a driver to batch things up as it
sees fit.
   I don't know the amount of data coming.  The old XAA (and
cfb for that matter) allocated the pathological case: number
of rects times number of clip rects.  It doesn't know how many
there are until it's done computing them, but it knows the
upper bounds.  I have seen this be over 8 Meg!  The new XAA
has a preallocated scratch space (currently a #define for the 
size).  When the scratch buffer is full, it flushes it out to
the driver.   If XAA is configured to run with minimal memory,
the maximum batch size will be small.

That sounds reasonable.  That's basically how the 3D drivers work.



Re: XAA2 namespace?

2004-03-03 Thread Ian Romanick
Mark Vojkovich wrote:

   Ummm... which other models are you referring to?  I'm told that
Windows does it globally.  Having per-surface syncing may mean
you end up syncing more often.  E.g., render with HW to one surface,
then to another; then if you render with SW to both of those surfaces,
two syncs happen.  Doing it globally would have resulted in only
one sync call.
   Unless you can truly checkpoint every rendering operation,
anything other than global syncing is going to result in more
sync calls.  The more I think about going away from global syncing,
the more this sounds like a bad idea.

It may result in more sync calls, but it should also result in less time 
spent waiting in each call.  If you HW render to surface A, then B, then 
need to SW render to surface A, you don't need to wait for the HW to 
finish with surface B.
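
A sketch of that bookkeeping (hypothetical types and driver hooks, not the XAA2 API): each surface remembers the last checkpoint that wrote to it, and a software fallback waits only for that checkpoint rather than idling the whole engine.

  /* Hypothetical per-surface sync bookkeeping; not the XAA2 API.
   * The two GPU hooks are assumed to be provided by the driver. */
  extern void emit_gpu_checkpoint(unsigned int cp);
  extern unsigned int read_gpu_checkpoint(void);

  typedef struct {
      unsigned int last_hw_checkpoint;  /* last checkpoint that wrote here */
  } surface_t;

  static unsigned int emitted;    /* checkpoints emitted to the GPU  */
  static unsigned int completed;  /* checkpoints the GPU has retired */

  /* Called after queuing GPU rendering to a surface. */
  static void mark_surface(surface_t *s)
  {
      s->last_hw_checkpoint = ++emitted;
      emit_gpu_checkpoint(emitted);
  }

  /* Called before the CPU touches a surface's pixels: wait only until
   * the GPU has passed this surface's checkpoint, so work queued for
   * other surfaces can keep running. */
  static void sync_surface(surface_t *s)
  {
      while (completed < s->last_hw_checkpoint)
          completed = read_gpu_checkpoint();
  }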



Re: 3D support for radeon 9600 pro (ppc)

2004-02-20 Thread Ian Romanick
Sven Luther wrote:
I think that ATI is missing something here.  I believe that PowerPC 
hardware with ATI graphics represents an ever-growing Linux installed
base, with the G5 Powermac, with the new powerbooks, as well as with
non-Apple PowerPC boxes like the Pegasos motherboards.  But then, it is
probable that the ATI drivers are not endian clean, and that they can't
be bothered to make a PowerPC build, even an unsupported one, probably
because of that, or maybe for some hidden reason like the Intel-ATI
connection or something such.

Even if it is ever growing, it probably still only represents 1% of 1% 
of their total market.  It would take some pretty extreme dedication to 
the Linux movement to make a business case to devote even a single 
engineer to that cause. :(



Re: 3D support for radeon 9600 pro (ppc)

2004-02-20 Thread Ian Romanick
Sven Luther wrote:

On Fri, Feb 20, 2004 at 07:55:27AM -0800, Ian Romanick wrote:

Sven Luther wrote:

I think that ATI is missing something here.  I believe that PowerPC 
hardware with ATI graphics represents an ever-growing Linux installed
base, with the G5 Powermac, with the new powerbooks, as well as with
non-Apple PowerPC boxes like the Pegasos motherboards.  But then, it is
probable that the ATI drivers are not endian clean, and that they can't
be bothered to make a PowerPC build, even an unsupported one, probably
because of that, or maybe for some hidden reason like the Intel-ATI
connection or something such.
Even if it is ever growing, it probably still only represents 1% of 1% 
of their total market.  It would take some pretty extreme dedication to 
the Linux movement to make a business case to devote even a single 
engineer to that cause. :(
Whatever.  The truth is that outside of x86, there is actually not a
single graphics card vendor with a recent graphics card that provides 3D
driver support.  Until something changes, this means the death of 3D
support on non-x86 Linux.

Agreed.

And then, seriously, do you believe it will need a full-time engineer
to make a PowerPC build?  If the drivers were endian clean, then it
would only be a matter of launching a build and tracking the occasional
arch-related problem.  Hell, if a volunteer project can make it, why
can't ATI?  And I would do it: if ATI would give me access to the needed
sources, under a strong NDA or whatever, I would build their drivers, but
they don't want to.  Chances of Nvidia releasing PowerPC binaries are
even worse, although it is possible that their drivers are more endian
clean, if they share the code with the OS X driver, which I know ATI
does not.

I think the endianness issue is minor.  There's probably lots of assembly 
code in various parts of the driver.  The driver may also have some 
software fallback cases for vertex programs that generate x86 machine 
code instead of code for the GPU (pure speculation).  If the driver was 
not written with other architectures in mind, it is very likely that 
there's way more to it than just kicking off a build.

The only real hope is that ATI will release the R300 specs once the R400
is released, but even there, I only half believe it.

Agreed 100% on both counts. :(



Re: 3D support for radeon 9600 pro (ppc)

2004-02-19 Thread Ian Romanick
jaspal kallar wrote:
I know there is already 2D support for the Radeon 9600 Pro in the upcoming 4.4 release. 
My question is: if I buy an Apple Powermac G5 with a Radeon 9600 Pro card, will I 
eventually be able to get 3D support on the PowerPC platform (not x86!!)?

Only if ATI ports their closed-source driver to PowerPC.



Question about nplanes and ColormapEntries in VisualRec

2004-02-17 Thread Ian Romanick
I'm making some changes to the server-side GLX in the DRI tree.  For 
part of my changes I want to eliminate the need for libGLcore to have 
access to a VisualRec (programs/Xserver/include/scrnintstr.h, line 68). 
There are only two fields from that structure that are accessed by 
libGLcore, and I believe those values can be otherwise derived, but I 
want to be sure.

First, a comment in the structure says that nplanes is "log2 
(ColormapEntries)".  Does that mean that (1U << v->nplanes) == 
v->ColormapEntries is always true?

Second, for TrueColor and DirectColor visuals, is it safe to assume 
nplanes is the sum of the red, green, and blue bits?



Re: Question about nplanes and ColormapEntries in VisualRec

2004-02-17 Thread Ian Romanick
Keith Packard wrote:
Around 9 o'clock on Feb 17, Ian Romanick wrote:

First, a comment in the structure says that nplanes is "log2 
(ColormapEntries)".  Does that mean that (1U << v->nplanes) == 
v->ColormapEntries is always true?
no.  ColormapEntries on a Direct/True visual is

	 1 << max(nred, ngreen, nblue).

Okay, then that comment is a little misleading for those cases, but I 
can live with it.
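
In code form, that rule might look like this (a sketch; the n* values are bit counts of the visual's channel masks):

  /* Sketch of the rule above: for TrueColor/DirectColor visuals,
   * ColormapEntries == 1 << max(nred, ngreen, nblue), where each n*
   * is the number of bits set in that channel's mask. */
  static int bits_in_mask(unsigned long mask)
  {
      int n = 0;
      while (mask) {
          n += mask & 1;
          mask >>= 1;
      }
      return n;
  }

  static int colormap_entries(unsigned long red_mask,
                              unsigned long green_mask,
                              unsigned long blue_mask)
  {
      int nred   = bits_in_mask(red_mask);
      int ngreen = bits_in_mask(green_mask);
      int nblue  = bits_in_mask(blue_mask);
      int max    = nred;

      if (ngreen > max) max = ngreen;
      if (nblue  > max) max = nblue;
      return 1 << max;   /* e.g. 8-8-8 channel masks -> 256 entries */
  }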

Second, for TrueColor and DirectColor visuals, is it safe to assume 
nplanes is the sum of the red, green, and blue bits?
no.  There may be extra bits which have no defined meaning in the core 
protocol which are used by extensions.

The GLX extension usually adds some bits for alpha for its visuals (and 
those are the only visuals I care about in this case).  However, even in 
the case where there's 32 bits total (including the alpha channel), 
nplanes is still only 24.  So, let me phrase my original question a 
different way.  Since the GLX extension sets nplanes in its added 
visuals, can it make whatever assumptions about nplanes it wants? :)



Re: [XFree86] glxgears and cpu utilization funny...

2004-02-11 Thread Ian Romanick
Rahul Sawarkar wrote:

When the card has to draw fewer pixels, a **larger** percentage of the 
time is spent sending commands from the CPU to the card. 
When the window is tiny, the card is drawing fewer pixels, but the CPU 
has to send the **same** number of commands per frame.
So are you saying:
same number of commands per frame + more pixels gives lower CPU
utilization, as compared to same number of commands per frame + fewer
pixels?  Here my CPU utilization is a dramatic 80% when glxgears is a
3x3 inch window and 4% when glxgears is maximized full-screen at
1280x1024 res.  Also when I hide the 3x3 glxgears window behind, say, my
browser window, i.e. bring another app window to front, and glxgears is no
longer visible, CPU utilization remains at 80% or more.
Could you please clarify?  I think you haven't explained everything
that's on your mind.
I think my graphics card's GPU is kicking in when I maximize but is inactive
when minimized, something to do with the way X works.
It's a hunch ...

Each frame requires that some number of commands be sent to the card; this 
uses N CPU time.  On my box, a full-screen gears window gets 166 frames 
per second, and a super tiny window gets 6224 frames per second.  What I 
was saying in my previous post was that 6224*N > 166*N.



Re: Latest fixes from DRI Project

2004-02-10 Thread Ian Romanick
Torrey Lyons wrote:

These fixes have the side effect of breaking GLX on Mac OS X. The 
problem is the addition of new server side dependencies on 
glPointParameteri, glPointParameteriv, glSampleMaskSGIS, 
glSamplePatternSGIS. Mac OS X instead uses glPointParameteriNV and 
glPointParameterivNV and GL_SGIS_multisample is not supported. I can fix 
these by substituting the glPointParameter*NV calls and removing the 

I think it would be better to put the '#ifdef __DARWIN__' in the 
dispatch code.  I'm not terribly fond of using #defines like that. 
Since NV_point_sprite isn't supported in all versions of OS X, is 
something more needed?

http://developer.apple.com/opengl/extensions.html#GL_NV_point_sprite

calls to the glSample*SGIS functions as shown in the patch below. Note 
the server still says it supports the glx extension 
GLX_SGIS_multisample. Should I add an #ifdef to glxscreens.c as well to 
remove claiming this extension? Any other comments?

Absolutely.  If it's in the extension string, some application could try 
to use that functionality and get a nasty surprise.



Re: [XFree86] glxgears and cpu utilization funny...

2004-02-08 Thread Ian Romanick
Rahul Sawarkar wrote:

Hello
I've got an Intel 440BX with a Radeon 7500 RV200 chip, running X 4.3, on 
kernel 2.6.  System is built from source entirely.
One strange thing I noticed is that when I run glxgears in a small 
window, say 2x2 inch, CPU utilization jumps above 70%.
But when I maximise the window, CPU utilization drops to 1-2%.  I can see 
this clearly in gkrellm.
I thought it should be the reverse.
What gives??

When the card has to draw fewer pixels, a larger percentage of the time 
is spent sending commands from the CPU to the card.  When the window is 
tiny, the card is drawing fewer pixels, but the CPU has to send the same 
number of commands per frame.



Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?

2004-02-07 Thread Ian Romanick
Andreas Stenglein wrote:
On 2004.02.04 21:00:14 +0100, Brian Paul wrote:
Ian Romanick wrote:

Making that change and changing the server-side to not advertise a core 
version that it can't take protocol for would fix the bug for 4.4.0.  Do 
you think anything should be done to preserve text after the version? 
That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL", 
should we return "1.2" or something more elaborate?
It would be nice to preserve the extra text, but it's not essential.
why not just add the "1.2 " before the original text?
"1.2 1.4.20040108 Foobar, Inc. Fancypants GL"
so you would see that the renderer could support 1.4 if GLX could do it.

I like it. :)  It looks a little weird to me like that, but I think 
doing "1.2 (1.4.20040108 Foobar, Inc. Fancypants GL)" should work just 
as well.  I'll try to have a patch tomorrow.  The server-side of things 
is...ugly.  The deeper I dig into the server-side GLX code, the more I 
think it needs the Ultimate Refactor...'rm -rf programs/Xserver/GL'
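
A sketch of that composition (a hypothetical helper, not the actual single2.c code; it assumes GLX protocol support tops out at 1.2, as discussed):

  #include <stdio.h>

  /* Clamp the advertised GL version to the 1.2 that GLX can carry
   * protocol for, keeping the renderer's full string in parentheses. */
  static void clamp_gl_version(const char *server_version,
                               char *out, size_t out_len)
  {
      int major = 0, minor = 0;

      sscanf(server_version, "%d.%d", &major, &minor);
      if (major > 1 || (major == 1 && minor > 2))
          snprintf(out, out_len, "1.2 (%s)", server_version);
      else
          snprintf(out, out_len, "%s", server_version);
  }

  /* clamp_gl_version("1.4.20040108 Foobar, Inc. Fancypants GL", buf,
   * sizeof(buf)) yields
   * "1.2 (1.4.20040108 Foobar, Inc. Fancypants GL)". */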



Re: [Dri-devel] Re: GL_VERSION 1.5 when indirect rendering?

2004-02-04 Thread Ian Romanick
Michel Dänzer wrote:
On Wed, 2004-02-04 at 00:56, Ian Romanick wrote:

Does anyone know if either the ATI or Nvidia closed-source drivers 
support ARB_texture_compression for indirect rendering?  If one of them 
does, that would give us a test bed for the client-side protocol 
support.  When that support is added, we can change the library version 
to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 
and .1.3 symlinks).
Are those symlinks really necessary?  Apps should only care about
libGL.so.1.

It's a debatable point.  If an app explicitly links against 
libGL.so.1.5, then it can expect symbols to statically exist that may 
not be in libGL.so.1.2.  So an app that links against libGL.so.1.5 
wouldn't have to use glXGetProcAddress for glBindBuffer or glBeginQuery, 
but an app linking to a lower version would.

Do we want to encourage that?  That's the debatable part. :)
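
The practical difference (a sketch): against an older libGL, post-1.2 entry points have to be resolved at runtime, e.g.:

  #include <GL/gl.h>
  #include <GL/glext.h>
  #include <GL/glx.h>

  /* Sketch: resolving a GL 1.5 entry point at runtime instead of
   * linking it statically.  PFNGLBINDBUFFERPROC comes from glext.h;
   * glXGetProcAddressARB is the entry point the Linux OpenGL ABI
   * guarantees. */
  static PFNGLBINDBUFFERPROC p_glBindBuffer;

  static int resolve_entry_points(void)
  {
      p_glBindBuffer = (PFNGLBINDBUFFERPROC)
          glXGetProcAddressARB((const GLubyte *) "glBindBuffer");
      return p_glBindBuffer != NULL;
  }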

While we're at it: is there a reason for libGL not having a patchlevel,
e.g. libGL.so.1.2.0? This can cause unpleasant surprises because
ldconfig will consider something like libGL.so.1.2.bak as the higher
patchlevel and change libGL.so.1 to point to that instead of
libGL.so.1.2 .

That's a good idea.  I've been bitten by that before, but my solution 
was to make it libGL.bak.so.1.2 or something similar.



Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?

2004-02-04 Thread Ian Romanick
Brian Paul wrote:
Ian Romanick wrote:

That's *bad*.  It is currently *impossible* to have GL 1.5 with 
indirect rendering because some of the GLX protocol (for 
ARB_occlusion_query & ARB_vertex_buffer_objects) was never completely 
defined.  Looking back at it, we can't even advertise 1.3 or 1.4 with 
indirect rendering because the protocol for ARB_texture_compression 
isn't supported (on either end).
Ian, it seems to me that xc/lib/GL/glx/single2.c's glGetString() 
function should catch queries for GL_VERSION (as it does for 
GL_EXTENSIONS) and compute the minimum of the renderer's 
glGetString(GL_VERSION) and what the client/server GLX modules can support.

That would solve this, right?

Making that change and changing the server-side to not advertise a core 
version that it can't take protocol for would fix the bug for 4.4.0.  Do 
you think anything should be done to preserve text after the version? 
That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL", 
should we return "1.2" or something more elaborate?

I thought about it some last night, and I think there's some longer term 
work to be done on the client-side.  Basically, we need a mechanism for 
GL extensions that matches what we have for GLX extensions.  There are a 
few extensions that are essentially client-side only.  We should be able 
to expose those without expecting the server-side to list them.  In 
fact, the server-side should not list them.  Extensions like 
EXT_draw_range_elements, EXT_multi_draw_arrays, and a few others fall 
into this category.  It should be fairly easy to generalize the code for 
GLX extensions so that it can be used for both.

As a side bonus, that would eliminate the compiler warning in glxcmds.c 
about the __glXGLClientExtensions string being too long. :)

Does anyone know if either the ATI or Nvidia closed-source drivers 
support ARB_texture_compression for indirect rendering?  If one of 
them does, that would give us a test bed for the client-side protocol 
support.  When that support is added, we can change the library 
version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with 
extra .1.2 and .1.3 symlinks).
[big snip]

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce3/AGP/SSE2
OpenGL version string: 1.4.0 NVIDIA 44.96
OpenGL extensions:
GL_EXT_blend_minmax, GL_EXT_texture_object, GL_EXT_draw_range_elements,
GL_EXT_texture3D, GL_EXT_secondary_color, GL_ARB_multitexture,
GL_EXT_multi_draw_arrays, GL_ARB_point_parameters, GL_EXT_fog_coord,
GL_ARB_imaging, GL_EXT_vertex_array, GL_EXT_paletted_texture,
GL_ARB_window_pos, GL_EXT_blend_color
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
So, it appears that GL_ARB_texture_compression is not supported, but the 
GL_VERSION is reported as 1.4.0.  Hmmm.

Okay, that's just weird.  Normally the Nvidia extension string is about 
3 pages long.



Re: Manufacturers who fully disclosed specifications for agp cards?

2004-02-03 Thread Ian Romanick
Mike A. Harris wrote:
On Sat, 31 Jan 2004, Ryan Underwood wrote:

where is the docs for the VSA based cards (voodoo4/voodoo5)?  I have
been unable to locate them.
In a chest in a basement at Nvidia somewhere, with a lock on it, 
behind a bunch of old filing cabinets, in a room at the end of a 
very long hallway, with spiderwebs hanging everywhere, with a 
sign on the door which reads:

	Beware of the leopard

I can just imagine it in a big warehouse like where the Ark ended up at 
the end of Raiders. :)



Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?

2004-02-03 Thread Ian Romanick
Andreas Stenglein wrote:

after setting LIBGL_ALWAYS_INDIRECT=1
glxinfo shows
OpenGL version string: 1.5 Mesa 6.0
but it doesn't show all the extensions necessary for OpenGL 1.5.
An application only checking for GL_VERSION 1.5 would probably fail.

Any idea what would happen with libGL.so / libGLcore.a from different versions
of XFree86 / DRI and/or different vendors (nvidia) on the client/server machines?

That's *bad*.  It is currently *impossible* to have GL 1.5 with indirect 
rendering because some of the GLX protocol (for ARB_occlusion_query & 
ARB_vertex_buffer_objects) was never completely defined.  Looking back 
at it, we can't even advertise 1.3 or 1.4 with indirect rendering 
because the protocol for ARB_texture_compression isn't supported (on 
either end).

Please submit a bug for this on XFree86.  Something should be done for 
this for the 4.4.0 release.

http://bugs.xfree86.org/

Does anyone know if either the ATI or Nvidia closed-source drivers 
support ARB_texture_compression for indirect rendering?  If one of them 
does, that would give us a test bed for the client-side protocol 
support.  When that support is added, we can change the library version 
to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 
and .1.3 symlinks).



Re: Manufacturers who fully disclosed specifications for agp cards?

2004-02-02 Thread Ian Romanick
Ryan Underwood wrote:

Your request for free publication is undeniably idealistic.  I think it
is a perfectly reasonable compromise to provide specs under NDA to
developers who have shown themselves to be productive and trustworthy in
the past, e.g. by contributing to XFree86 or producing and supporting their
own 3rd-party driver like Tungsten Graphics.  It is a much less risky
investment for the chip manufacturer than freely publishing documentation
for all.  The manufacturer will rarely reach any individuals who would
not have qualified for a NDA anyway, and will most likely end up giving
their competitors ideas they may not have had otherwise.

The problem is that none of the NDAs I have seen (which is not that 
many) explicitly give you the right to release source code based on 
documentation under NDA.  If you happen to work for a company that is 
extremely cautious about such legal issues, that means you don't get to 
sign any NDAs.

Personally (i.e., not speaking for my employer in any way), I agree that 
it's reasonable for hardware vendors to release documentation under NDA. 
 However, if they're releasing NDA documentation to developers for the 
purpose of creating open-source drivers, the NDA should explicitly give 
the developers that right.

Again, that's just this developer's personal opinion.



Re: [XFree86] DRI compilation errors

2004-01-26 Thread Ian Romanick
Patrick Dohman wrote:

I am having difficulties compiling the DRI tree on my redhat 8.0 system
running kernel 2.4.20-28 and XFree86 Red Hat Linux release 4.2.1-23. I
have been able to configure my video chip 845gl for a decent
resolution, however I do not have hardware acceleration so I figured I
would give compiling and installing the dri a shot. I downloaded the dri
cvs source and configured my kernel as per the dri compilation guide.
The dri kernel module for my system i810 fails to build and I receive
the following errors in the world log. 

make[3]: Entering directory `/dev/dri-cvs/build/xc/include/GL' 
make[3]: *** No rule to make target
`/X11R6/SourceForge/Mesanew/Mesa-newtree/include/GL/gl.h', needed by
`gl.h'. Stop. 
make[3]: Leaving directory `/dev/dri-cvs/build/xc/include/GL' 
make[2]: *** [includes] Error 2 
make[2]: Leaving directory `/dev/dri-cvs/build/xc/include' 
make[1]: *** [includes] Error 2 
make[1]: Leaving directory `/dev/dri-cvs/build/xc' 
make: *** [World] Error 2 

Contrary to what the FAQ and the compilation guide say, you need to have 
the Mesa *source* installed on your system.  xc/xc/config/cf/host.def needs 
to reference the Mesa source tree.  We're in the process of changing the 
way DRI drivers and infrastructure are built, so Mesa & DRI are going 
through a bit of a transitional phase.  That's why the documentation and 
reality are out of sync.  Sorry for the trouble.

I hope this is the correct forum to post this question. If there is a
more elegant solution to my issue I would be more than happy to hear
about it. 

Actually, dri-devel would be much better.  There are quite a few DRI 
developers who read that list but do not read this one.

http://dri.sourceforge.net/cgi-bin/moin.cgi/MailingLists



Re: PFNGLXGETUSTPROC argument signed or unsigned?

2004-01-22 Thread Ian Romanick
David Dawes wrote:

What is the correct typedef for PFNGLXGETUSTPROC?  glxclient.h has:

typedef int (* PFNGLXGETUSTPROC) ( int64_t * ust );

and it is used as a signed quantity in glxcmds.c.

But most drivers use uint64_t, and src/glx/mini/dri_util.h in the Mesa
trunk uses unsigned:
typedef int (* PFNGLXGETUSTPROC) ( uint64_t * ust );

That was my bad.  It should be int64_t everywhere.  It makes more sense 
for it to be unsigned, but the GLX_OML_sync_control spec has it as signed.

http://oss.sgi.com/projects/ogl-sample/registry/OML/glx_sync_control.txt
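
For reference, a caller would use the hook like this (a sketch; the zero-means-success convention and the source of the function pointer are assumptions):

  #include <inttypes.h>
  #include <stdio.h>

  typedef int (* PFNGLXGETUSTPROC) ( int64_t * ust );

  /* Sketch: using the UST hook once it has been obtained from the
   * driver.  A zero return meaning success is assumed here. */
  static void print_ust(PFNGLXGETUSTPROC get_ust)
  {
      int64_t ust;

      if (get_ust(&ust) == 0)
          printf("UST: %" PRId64 "\n", ust);
  }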



Re: Xserver/GL/glx/g_render.c changes?

2004-01-14 Thread Ian Romanick
Torrey Lyons wrote:

In building the top of the tree on Mac OS X 10.2 I have run into 
troubles linking the GLX support in Xserver/GL. The problem is that 
native OpenGL in Mac OS X 10.2 does not include glActiveStencilFaceEXT() 
and glWindowPos3fARB(), which have been added to g_render.c and 
g_renderswap.c since 4.3.0. On Mac OS X 10.3 things build fine since 
these calls are available.

g_render.c includes the comment:

/* DO NOT EDIT - THIS FILE IS AUTOMATICALLY GENERATED */

I can build server side GLX successfully if I just #ifdef the offending 
calls out on Mac OS X 10.2. or #define them to no-ops. Is this likely to 
cause problems? How is g_render.c automatically generated? What is the 
best way to conditionally remove support for these two functions?
It's not.  This code was donated by SGI, and I suspect that at SGI it is 
automatically generated.  However, in XFree86 it is not.  I'm in the 
process of making some changes to this file in DRI CVS.  I'll drop a 
line to this list when I'm done so that you can tell me which routines 
break on the Mac, and what ifdef needs to be put around them.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] ATI Radeon

2003-12-03 Thread Ian Romanick
raf wrote:

I've got a problem. My DRI is not working. I don't know why; when I read 
the log I found some unresolved symbols... but I don't know what that means.
glxinfo says: "direct rendering: No" and I would like to fix it.
I use: ATI Radeon 9000 Pro on slackware 9.1, Athlon 2000 xp. motherboard 
chipset is VIA KT4000V.
Any chance you could share that log with the rest of us so that we CAN 
help you?  Just saying, "It don't work, help me fix it." is a waste of 
everyone's time (including yours!).

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] Radeon 9000

2003-12-03 Thread Ian Romanick
raf wrote:

Hi

I've got a problem. My DRI is not working. I don't know why; when I read 
the log I found some unresolved symbols... but I don't know what that means.
glxinfo says: "direct rendering: No" and I would like to fix it.
I use: ATI Radeon 9000 Pro on slackware 9.1, Athlon 2000 xp. motherboard 
chipset is VIA KT4000V.
(sorry, forgotten about the log --- repaired..)
Looks like our two messages crossed in the mail (so to speak).  Sorry.

[snip]

Symbol xf86setjmp0 from module /usr/X11R6/lib/modules/fonts/libfreetype.a is 
unresolved!
Symbol xf86setjmp0 from module /usr/X11R6/lib/modules/fonts/libfreetype.a is 
unresolved!
This shouldn't cause a problem for DRI.  Other than that, I don't see 
anything suspicious in the log.  The only thing I can think of is that 
the libGL.so from the ATI driver package didn't get installed properly. 
 Could you send the output of 'ldd $(which glxinfo)'?

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] ATI FireGL4 drivers

2003-11-26 Thread Ian Romanick
GS HUNT wrote:

My first choice would be to use the drivers supplied by Xfree..

Xfree 4.3.0 has support for 3d ATI Drivers...which are almost as fast as the 
ATI binaries... not to mention they are more stable..

However if you really need the ATI binaries... try downloading  3.2.8 fglrx 
drivers...which seemed to be the unified driver for many of the ATI cards.. 
hopefully it will be compatible.

http://www2.ati.com/drivers/firegl/fglrx-glc22-4.3.0-3.2.8.i586.rpm

If you don't have a precompiled matching kernel module... you will have to 
have kernel source so it can link its binaries correctly.
The drivers you mention are for the Radeon based FireGL cards.  Lloyd 
was asking about the older pre-Radeon based FireGL cards.  There are 
*no* drivers in the XFree86 source tree for those cards.  The two chip 
families are as different as x86 and PowerPC. :)

On November 26, 2003 04:09 pm, Lloyd A Treinish wrote:

Does anyone know if a driver for the ATI FireGL4 card suitable for XFree86
4.3.0 and the 2.4.20-18.9 kernel (RedHat 9) is available?  The latest
driver (11/26/2002) from ATI (binary only) does not work (a whole list of
incompatibilities).  It was built for XFree86 4.2.0 and libc 6.2 (glibc
2.2).  ATI does provide newer drivers, but only for their newer cards.
Those drivers don't appear to work either with this older card.
Thanks.


___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] PPC 3d acceleration ati radeon 9000 (Dual 1GHZ G4)

2003-11-12 Thread Ian Romanick
juggl3r wrote:

what is ppc? is it related in any way to 3d acceleration in ATI cards?
It's fairly common shorthand for PowerPC.

On Mon, 2003-11-10 at 18:22, Ian Romanick wrote:
Raymond Born wrote:

Will it ever work?  Is there a way that it can already work?
It should already work.  Various users and developers use this class of 
ATI hardware on PPC without troubles.  What problems are you having?


___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: glx failing

2003-11-10 Thread Ian Romanick
Frank Gießler wrote:
with my current CVS snapshot (Changelog up to #530), glxgears gives me 
the following at startup:

X Error of failed request:  BadLength (poly request too large or 
internal Xlib length error)
  Major opcode of failed request:  144 (GLX)
  Minor opcode of failed request:  1 (X_GLXRender)
  Serial number of failed request:  22
  Current serial number in output stream:  23

This used to work before. I've seen this on both OS/2 and SuSE Linux 8.2 
(XFree CVS built without DRI). Any idea what this means and/or where I 
should look?
Can you give any details to help reproduce this error?  There is a 
reported bug in this area, but I thought that it was fixed.  Your 
XF86Config would also be useful.

http://bugs.xfree86.org/show_bug.cgi?id=439

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] PPC 3d acceleration ati radeon 9000 (Dual 1GHZ G4)

2003-11-10 Thread Ian Romanick
Raymond Born wrote:

Will it ever work?  Is there a way that it can already work?
It should already work.  Various users and developers use this class of 
ATI hardware on PPC without troubles.  What problems are you having?

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [Dri-devel] Re: [XFree86] Re: DRI weirdnesses for RADEON 9200

2003-10-31 Thread Ian Romanick
Alan Hourihane wrote:
On Thu, Oct 30, 2003 at 05:07:33PM -0800, Ian Romanick wrote:
manu wrote:

Responding to myself : sorry it seems that the problem is because the  
r200_dri.so module is linked against libexpat.so.1 which is not on my  
system. So I just made a link to the one I had and all is working great  
now!
glxgears gives me ~1535 FPS. Is it OK? (Radeon 9200 with 64MB).
Thanks for the help, and sorry for eating the bandwidth ;-)
Ah!  Actually, thank you very much. :)  The problem seems to be that 
with libexpat.so missing, there are unresolved symbols in r200_dri.so. 
The dlopen of r200_dri.so in OpenDriver (lib/GL/dri/dri_glx.c, line 184) 
fails.  HOWEVER, it only logs a message if LIBGL_DEBUG is set.  I 
removed libexpat from my system and was able to recreate the crash. 
With LIBGL_DEBUG set I get a nice message about not being able to open 
the driver.

My personal opinion is that the error messages in OpenDriver (but not the 
ones in GetDriverName) should be printed regardless of the setting of 
LIBGL_DEBUG.  That would have helped find the source of this problem 
much sooner.  We basically got lucky that Manu figured out that libexpat 
was missing for himself. :)
We should probably link against the static version of libexpat.a to
avoid this trouble.
I was pretty sure that the snapshots did statically link with libexpat.a. 
 I remember there being some discussion about this.  Once XFree86 4.4.0 
hits the streets this particular problem will be moot.  AFAIK, XFree86 
4.4.0 will include libexpat.  However, I still think that the default 
should be to log the error messages when the _dri.so will not load or is 
missing required symbols.  I don't think there is a valid configuration 
where those messages would erroneously be printed.  I can think of such 
cases for all the other messages in dri_glx.c, but not for the errors in 
OpenDriver.
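
For reference, the failure mode described above boils down to a dlopen() 
that returns NULL, with the reason only available via dlerror().  A 
minimal sketch (the path is illustrative, and this mirrors the idea, not 
OpenDriver's exact code):

  #include <dlfcn.h>
  #include <stdio.h>

  /* unresolved symbols make dlopen() fail; dlerror() says why */
  void *handle = dlopen("/usr/X11R6/lib/modules/dri/r200_dri.so",
                        RTLD_NOW | RTLD_GLOBAL);
  if (handle == NULL)
      fprintf(stderr, "driver failed to load: %s\n", dlerror());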

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] DRI weirdnesses for RADEON 9200

2003-10-30 Thread Ian Romanick
manu wrote:

Hi all,
before I open a bug for that I would like to sort out things a bit.
Here is the story : I installed a MDK 9.2, the agpgart module seems to  
be OK with my mobo (which has a nForce2 Ultra chipset), so I went on  
and tried to make 3D accel work for my Radeon 9200. Whereas all logs were  
OK, glxgears locks up hard (only hard reset could get me out of it). So  
I asked on this list some help, and I have been advised to download  
latest DRI snapshot, what I did, installed it with no problems. But  
here is what I have now :
- glxgears crashes at startup here is the stacktrace (using the core  
dumped) :
#0  0x in ?? ()
#1  0x4033eece in GetDRIDrawable () from /usr/X11R6/lib/libGL.so.1.2
#2  0x4033fd57 in glXSwapBuffers () from /usr/X11R6/lib/libGL.so.1.2
#3  0x4004abdf in glXSwapBuffers () from /usr/X11R6/lib/libGL.so.1
#4  0x0804a546 in XOpenDisplay ()
#5  0x401ccc57 in __libc_start_main () from /lib/i686/libc.so.6
- xawtv does not start and gives me these errors :
This is xawtv-3.88, running on Linux/i686 (2.4.22-10mdk)
Loading required GL library /usr/X11R6/lib/libGL.so.1.2
X Error of failed request:  BadMatch (invalid parameter attributes)
 Major opcode of failed request:  144 (GLX)
 Minor opcode of failed request:  5 (X_GLXMakeCurrent)
 Serial number of failed request:  307
 Current serial number in output stream:  307
Someone on the dri-devel list reported something similar.  Based on your 
stacktrace, it looks like DRI is enabled on the display but not on the 
screen.  The crash seems to be because psc->getDrawable is NULL in 
GetDRIDrawable.  Do you have more than one screen on the display?  The X 
log doesn't seem to indicate so.  It doesn't seem right that 
psc->getDrawable should be NULL if DRI is enabled on a display and 
there's only one screen.  I'm also not sure how you get from 
XOpenDisplay to glXSwapBuffers.  Hmm...

So I ran glxinfo (see below) which told me that Direct Rendering is not  
enabled! This is crazy as you can check in the XFree log (file  attached).
Hope someone can tell me more about this, be it to file this directly  
as a bug on bugzilla ;-)
I believe that you should file a bug against this, but also try the 
attached patch.  The patch should prevent calling the NULL function 
pointer, but there may be other problems.

Index: lib/GL/glx/glxcmds.c
===
RCS file: /cvs/dri/xc/xc/lib/GL/glx/glxcmds.c,v
retrieving revision 1.65
diff -u -d -r1.65 glxcmds.c
--- lib/GL/glx/glxcmds.c23 Oct 2003 23:21:23 -  1.65
+++ lib/GL/glx/glxcmds.c31 Oct 2003 00:49:08 -
@@ -269,8 +269,8 @@
 
	for ( i = 0 ; i < screen_count ; i++ ) {
	    __DRIscreen * const psc = priv->screenConfigs[i].driScreen;
-	    __DRIdrawable * const pdraw = (*psc->getDrawable)(dpy, drawable,
-							       psc->private);
+	    __DRIdrawable * const pdraw = (psc->private != NULL)
+	       ? (*psc->getDrawable)(dpy, drawable, psc->private) : NULL;
 
if ( pdraw != NULL ) {
if ( scrn_num != NULL ) {


Re: [XFree86] Re: DRI weirdnesses for RADEON 9200

2003-10-30 Thread Ian Romanick
manu wrote:

On 30.10.2003 at 17:07:54, manu wrote:

So I ran glxinfo (see below) which told me that Direct Rendering is not  
enabled! This is crazy as you can check in the XFree log (file  
attached).
Hope someone can tell me more about this, be it to file this directly  
as a bug on bugzilla ;-)
Thanks
Responding to myself : sorry it seems that the problem is because the  
r200_dri.so module is linked against libexpat.so.1 which is not on my  
system. So I just made a link to the one I had and all is working great  
now!
glxgears gives me ~1535 FPS. Is it OK? (Radeon 9200 with 64MB).
Thanks for the help, and sorry for eating the bandwidth ;-)
Ah!  Actually, thank you very much. :)  The problem seems to be that 
with libexpat.so missing, there are unresolved symbols in r200_dri.so. 
The dlopen of r200_dri.so in OpenDriver (lib/GL/dri/dri_glx.c, line 184) 
fails.  HOWEVER, it only logs a message if LIBGL_DEBUG is set.  I 
removed libexpat from my system and was able to recreate the crash. 
With LIBGL_DEBUG set I get a nice message about not being able to open 
the driver.

My personal opinion is that the error messages in OpenDriver (but not the 
ones in GetDriverName) should be printed regardless of the setting of 
LIBGL_DEBUG.  That would have helped find the source of this problem 
much sooner.  We basically got lucky that Manu figured out that libexpat 
was missing for himself. :)

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: Radeon performance, z-buffer clears

2003-10-27 Thread Ian Romanick
Vahur Sinijarv wrote:

Does anyone know if fast z-buffer clears and 'z-compression aka hyper-z'
are going to be implemented in radeon DRI drivers (actually it is in the
'radeon' kernel module). It seems to be one of the areas where major
performance gain could be achieved, taking this driver to the same
performance level as ATI's binary only driver has. I've done some perf.
tests and by disabling z-clears frame rates almost double, which shows
that the current approach by drawing a dummy quad is very slow ... I
would be willing to implement it myself if anyone would tell me where to
find information about programming this feature.
ATI has not provided documentation for this feature to developers. 
Until then, it has zero chance of being implemented in open-source drivers.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Kernel Module? On second thought...

2003-10-21 Thread Ian Romanick
Mike A. Harris wrote:

If DRI is disabled, then the Radeon driver will use the older
MMIO mechanism to do 2D acceleration.  I don't know what if any
of the other drivers will use DRI for 2D or Xvideo currently,
however any hardware that supports using DMA/IRQ for 2D
accelration or other stuff theoretically at least can use the DRI
to do it.
I think that's the right model to follow.  Cards that can benefit 
should use the existing DRM mechanism, even if they don't support 3D.  I 
believe that the i810 uses its DRM for Xv (or maybe it's XvMC...it's 
something video related).

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DRI proprietary modules

2003-10-20 Thread Ian Romanick
John Dennis wrote:
For DRI to work correctly there are several independent pieces that all
have to be in sync.
* XFree86 server which loads drm modules (via xfree86 driver module)

* The drm kernel module

* The agpgart kernel module

Does anybody know for the proprietary drivers (supplied by ATI and
Nvidia) which pieces they replace and which pieces they expect to be
there?
The Nvidia drivers do not use DRI.  The 3dlabs, ATI, PowerVR, and Matrox 
(for their Parhelia hardware) drivers do.  They will *all* replace the 
DRM kernel module, the XFree86 2D driver, and the client-side 3D driver 
(the *_dri.so file).  Most include a custom libGL.so that provides some 
added functionality.  The client-side 3D driver and the DRM kernel 
module are very tightly related, and should be considered a single 
entity (for the most part).

The reason I'm asking is to understand the consequences of
changing an API. I'm curious to the answer in general, but in this
specific instance the api I'm worried about is between the agpgart
kernel module and drm kernel module. If the agpgart kernel module
modifies its API, will that break things for someone who installs a
proprietary 3D driver? Do the proprietary drivers limit themselves to
mesa driver and retain the existing kernel services assuming the IOCTL's
are the same?
Don't bring Mesa into this.  Mesa fundamentally has nothing to do with 
DRI.  It just so happens that all of the open-source DRI drivers use 
Mesa, but there is no such requirement.  AFAIK, *none* of the 
closed-source drivers use any code from Mesa.

Or do they replace the kernel drm drivers as well? If so
do they manage AGP themselves, or do they use the systems agpgart
driver? Do they replace the systems agpgart driver?
I think both the ATI and Nvidia drivers have the option to either use 
agpgart or an internal implementation.  I'm fairly certain that the 
PowerVR, 3dlabs, and Matrox drivers all use agpgart exclusively.  All of 
the drivers, closed-source or open-source, depend on the agpgart 
interface.  Changing that interface in a non-backwards-compatible way 
will break them all.

I guess my question is, what changes are under consideration?

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Export symbol lists on Linux (was Re: RFC Marking private symbols in XFree86 shared libraries as private)

2003-10-20 Thread Ian Romanick
Jakub Jelinek wrote:

The first is a MUST list, symbols which are exported from XFree86 shared
libraries now when there is no anonymous version script, are not exported
when an anonymous version script created from stock *-def.cpp file
is applied and are used by some binary or shared library (including other
shared libraries in the XFree86 collection). There is IMHO no way other
than adding these to *-def.cpp files (any issues with this)?
For libGL.so, as anonymous version scripts accept wildcards, I think
we should use gl* wildcard, as it is too error-prone to list all
the gl* functions.
Sorry for taking so long to reply.  I was taking a few days off. :)

libGL.so needs to export XF86DRI*, __glXFindDRIScreen, and a few _glapi 
functions on all platforms that support DRI (i.e., Linux and *BSD 
currently).  Do a "nm /usr/X11R6/lib/modules/dri/*_dri.so | grep ' U 
_glapi' | sort -u" to see which ones.  On all platforms all symbols 
matching gl[A-Z]* need to be exported.  Other than that I don't think 
anything needs to be exported by libGL.so.
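
Put together, an anonymous version script along the lines discussed 
might look roughly like this (a sketch only -- the real list would be 
generated from the *-def.cpp files, and the _glapi_* pattern is an 
assumption based on the nm output above):

  {
    global:
      gl[A-Z]*;            /* all GL entry points */
      XF86DRI*;            /* DRI protocol wrappers */
      __glXFindDRIScreen;
      _glapi_*;            /* dispatch symbols the DRI drivers resolve */
    local:
      *;                   /* everything else stays private */
  };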

I *believe* that the *_dri.so files only need to export 
__driCreateScreen.  There are some other symbols that need to be 
exported in DRI CVS, but that code isn't in XFree86 CVS AFAIK (and won't 
be until after 4.4.0).

Thanks for tackling this!

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] ATI Mach64 direct rendering

2003-10-14 Thread Ian Romanick
Risenhoover, Paul wrote:

I've been trying to get DRI working on an ATI Mach64.  It's running 
RedHat 8.0 and I just used up2date to ensure I got all new code, plus 
the new kernel.  I've attached all the obligatory files for your review.
There's no official 3D driver for that card.  There is an in-progress 
driver at http://dri.sf.net/.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: PBuffer support in current XFree86?

2003-10-13 Thread Ian Romanick
Andrew P. Lentvorski, Jr. wrote:
I just grabbed the latest source from CVS and compiled.  While the system
is identifying itself as 1.3 Mesa 5.0.2, glXGetFBConfigs seems to be
always returning a NULL pointer for any combination of attributes I can
feed into it.
The core OpenGL version is different from the GLX version.  You need to 
look at the GLX version (from glXQueryVersion) or the GLX extension 
string (from glXQueryExtensionsString).
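
A quick sketch of that check (dpy is assumed to be an open Display; 
fbconfigs need GLX 1.3):

  /* gate fbconfig usage on the GLX version, not the GL version */
  int major, minor;
  if (glXQueryVersion(dpy, &major, &minor)
      && (major > 1 || (major == 1 && minor >= 3))) {
      /* glXGetFBConfigs and friends are safe to call here */
  }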

Is this expected?
Support for GLX_SGIX_fbconfig in hardware accelerated 3D drivers will 
not make it into XFree86 4.4.0, but support should be available in DRI 
CVS in the next couple months (give or take).  GLX_SGIX_pbuffer (which 
will be the last bit of GLX 1.3 functionality to add) will be added 
sometime after that.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: RFC Marking private symbols in XFree86 shared libraries as private

2003-10-09 Thread Ian Romanick
Jakub Jelinek wrote:

   1) could be done by some header which everything uses, doing
   #if defined HAVE_VISIBILITY_ATTRIBUTE && defined __PIC__
   #define hidden __attribute__((visibility ("hidden")))
   #else
   #define hidden /**/
   #endif
   and write prototypes like:
   void hidden someshlibprivateroutine (void);
   extern int someshlibprivatevar hidden;
   etc.
I sent you a message about this before (in reference to libGL.so), but I 
never heard back from you.  I think this is a very good idea!  I would 
prefer it if __HIDDEN__ or HIDDEN or something similar were used.  That 
makes it stand out more.  Also, is there any reason to not have the 
symbols be hidden in non-PIC mode?

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: What about a kernel module?

2003-10-08 Thread Ian Romanick
Raymond Jennings wrote:

I'd like to suggest that you implement device-specific code as a kernel 
module.
This has been discussed to death.  XFree86 is portable to systems where 
we can't just willy-nilly add kernel modules.  With few exceptions, such 
as to implement hardware 3D, this is right out.

Also I have Red Hat 7.0 and when I drag a window, it is SLOW.
Since the version of XFree86 in that distro is at least 3 years old, it 
probably doesn't support hardware acceleration on your card.  Doing 
everything in software is slow.  Big surprise! :)  Try upgrading to 
something more recent, please.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [Dri-devel] Deadlock with radeon DRI

2003-10-02 Thread Ian Romanick
Keith Whitwell wrote:

I haven't deeply investigated this but two solutions spring to mind:
- Hack:  Move the call to RADEONAdjustFrame() during initialization 
to before the lock is grabbed.
- Better:  Replace the call to RADEONAdjustFrame() during 
initialization with something like:

if (info->FBDev) {
fbdevHWAdjustFrame(scrnIndex, x, y, flags);
} else {
RADEONDoAdjustFrame(pScrn, x, y, FALSE);
}
which is basically what RADEONAdjustFrame() wraps.
That seems like the right way to go, but I'd feel better if the body of 
RADEONAdjustFrame was moved to a new function called 
RADEONAdjustFrameLocked.  RADEONAdjustFrame would just lock, call 
RADEONAdjustFrameLocked, and unlock.  That matches what's been done 
elsewhere in the 3D driver, anyway.
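
As a minimal sketch of that split (argument lists abbreviated, and the 
DRILock/DRIUnlock bracketing is assumed to be what the DDX uses):

  /* the real body moves into the *Locked variant... */
  static void
  RADEONAdjustFrameLocked(ScrnInfoPtr pScrn, int x, int y, int flags)
  {
      /* ... former body of RADEONAdjustFrame ... */
  }

  /* ...and the public entry point just brackets it with the lock */
  void
  RADEONAdjustFrame(int scrnIndex, int x, int y, int flags)
  {
      ScrnInfoPtr pScrn = xf86Screens[scrnIndex];

      DRILock(pScrn->pScreen, 0);
      RADEONAdjustFrameLocked(pScrn, x, y, flags);
      DRIUnlock(pScrn->pScreen);
  }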

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Exporting sched_yield to the drivers

2003-09-22 Thread Ian Romanick
Mark Vojkovich wrote:

  Can we export to the drivers some function that yields the CPU?
Currently a lot of drivers burn the CPU waiting for fifos, etc...
usleep(0) is not good for this because it's jiffy based and usually
never returns in less than 10 msec which has the effect of making
interactivity worse instead of better.  I'm not sure which platforms 
don't export sched_yield() and which will need alternative 
implementations.
There was a thread about this on the dri-devel list some months ago. 
The short answer is DON'T DO IT! :)  I don't think that sched_yield will 
give the desired results in the 2D driver any more than it does in the 
3D driver.  I *believe* that there is another function for this purpose, 
but I can't recall what it is called.

http://marc.theaimsgroup.com/?l=dri-develm=105425072210516w=2
http://lwn.net/Articles/31462/
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Exporting sched_yield to the drivers

2003-09-22 Thread Ian Romanick
Mark Vojkovich wrote:

On Mon, 22 Sep 2003, Ian Romanick wrote:


Mark Vojkovich wrote:


 Can we export to the drivers some function that yields the CPU?
Currently a lot of drivers burn the CPU waiting for fifos, etc...
usleep(0) is not good for this because it's jiffy based and usually
never returns in less than 10 msec which has the effect of making
interactivity worse instead of better.  I'm not sure which platforms 
don't export sched_yield() and which will need alternative 
implementations.
There was a thread about this on the dri-devel list some months ago. 
The short answer is DON'T DO IT! :)  I don't think that sched_yield will 
give the desired results in the 2D driver any more than it does in the 
3D driver.  I *believe* that there is another function for this purpose, 
but I can't recall what it is called.

http://marc.theaimsgroup.com/?l=dri-develm=105425072210516w=2
http://lwn.net/Articles/31462/
   Currently, sched_yield() *does* give the desired result and I have
used it with great success in many places, XvMC drivers in particular.
Issues with specific implementations of sched_yield() with recent
Linux kernels do not change the need to yield.  Driver yields will
not be random and usleep is unusable because of its jiffy nature.
I was never challenging the idea that the driver should yield the CPU. 
On the contrary, I believe that is a good and necessary thing.  However, 
I am a firm believer that on 2.5 (and presumably 2.6 as well) Linux 
kernels using sched_yield has some very undesirable side-effects.

It sounds like the Linux 2.5 implementation is less desirable than
the Linux 2.4 implementation, however, in lieu of an alternative,
it is still better than burning the entire slice waiting for the
fifo to drain.  The ability to yield is essential with DMA based
user-space drivers.  These drivers can queue up a lot of work and
often have to wait a long time before they can continue. 
With pure user-space drivers this is a difficult problem to solve.  With 
user-space drivers with a kernel component the problem is a bit easier. 
 The user-space part can wait on a semaphore of some sort and the 
kernel part waits on an interrupt.  When the kernel receives the 
interrupt, it kicks the semaphore.
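
In libdrm terms the idea is roughly the following (DRM_EXAMPLE_WAIT_IDLE 
is a made-up command index -- each driver defines its own blocking wait):

  /* hypothetical: sleep in the kernel until the engine drains,
     instead of spinning in user space; fd is the open DRM device */
  if (drmCommandNone(fd, DRM_EXAMPLE_WAIT_IDLE) != 0)
      fprintf(stderr, "wait-for-idle failed\n");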

BEFORE THE FLAME WAR BREAKS OUT, I FULLY UNDERSTAND WHY THE DRIVERS ARE 
IMPLEMENTED THE WAY THAT THEY ARE.  THIS IS *NOT*...I repeat...*NOT* A 
CALL TO START MOVING STUFF INTO THE KERNEL OR ANYTHING LIKE THAT. :)

However, for quite a few of the drivers there already exists a kernel 
component, either through fbdev or DRI, or both.  Some of the drivers, 
like the Radeon and Rage 128 use this mechanism for DMA in the DDX 
driver.  Perhaps *part* of the solution is to better leverage that?

   The fact that there may be different best implementations 
with various kernels only further supports that XFree86 should
export an xf86Yield() function which does the right thing on
that platform.  For Linux <= 2.4 that appears to be sched_yield().
I don't know about the other OSes though, which is why I brought
this up on this list.
Having xf86Yield as a wrapper is a very good idea.  We just have to be 
careful how it's implemented (irony intentional). :)
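
Purely as a sketch, such a wrapper might look like this (the 
per-platform branches are placeholders -- choosing them correctly is 
exactly the hard part being discussed):

  #include <sched.h>
  #include <unistd.h>

  /* hypothetical xf86Yield() -- illustrative only */
  void
  xf86Yield(void)
  {
  #if defined(__linux__)
      sched_yield();    /* fine on 2.4; see the 2.5 caveats above */
  #else
      usleep(1);        /* placeholder fallback for other platforms */
  #endif
  }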

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Exporting sched_yield to the drivers

2003-09-22 Thread Ian Romanick
Nathan Hand wrote:

On Tue, 2003-09-23 at 07:55, Mark Vojkovich wrote:

On Tue, 23 Sep 2003, Nathan Hand wrote:


On Tue, 2003-09-23 at 05:58, Mark Vojkovich wrote:

 Can we export to the drivers some function that yields the CPU?
Currently alot of drivers burn the CPU waiting for fifos, etc...
usleep(0) is not good for this because it's jiffy based and usually
never returns in less than 10 msec which has the effect of making
interactivity worse instead of better.  I'm not sure which platforms 
don't export sched_yield() and which will need alternative 
implementations.
FIFO busy loops are very quick. You'll harm overall graphic performance
by yielding. 
 Your experience is out of date.  If I've just filled a Megabyte
DMA fifo and I'm waiting to cram another Megabyte into it, how
quick is my FIFO busy loop then?  I've had great success with
sched_yield().
There's no disputing the first comment :-/

Wouldn't it be easier to dynamically adjust the size of the FIFO? So
instead of 

slice 1) send 1 megabyte
...
slice 2) fifo not drained, yield
...
slice 3) fifo not drained, yield
...
slice 4) fifo drained, send 1 megabyte
...
repeat forever, many wasted slices
Why not

slice 1) send 1 megabyte
...
slice 2) fifo not drained, reduce fifo to 512kB, wait
...
slice 3) fifo not drained, reduce fifo to 256kB, wait
...
slice 4) fifo drained, send 256kB
...
slice 5) fifo drained, send 256kB
A bigger FIFO reduces the risk of the FIFO emptying before you're ready
but if your slices are arriving faster than the GPU can drain the FIFO,
does it really matter?
Yuck!  Modern graphics cards are designed to operate optimally when 
given large chunks of commands to operate on at once.  Under optimal 
driver circumstances, this leads to better throughput and lower CPU 
overhead.  Chopping down the size of the DMA buffer will not improve 
performance.  I'm not even convinced that it would dramatically improve 
latency (which is the goal of adding sched_yield).  Letting the CPU and 
the graphics adapter work for long periods of time in parallel *is* a 
good thing!

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] Matrox P650 support?

2003-09-09 Thread Ian Romanick
Daniel Lang wrote:

could anyone state if the Matrox Millennium P650 would work
with XFree 4.3.x on FreeBSD?
The G450/550 seem to be supported. Maybe the chipset is
compatible enough to work with the mga driver.
They are not similar at all.  The P650 is based on the newer Parhelia 
core.  The only driver that I know of for that chip is the one from Matrox.

Matrox provides Linux drivers, but I'm not sure,
if there exist kld-wrappers for FreeBSD.
XFree86 driver binaries are supposed to be cross-platform (at least on 
the same architecture), so the *_drv.o from Matrox *should* work.  You 
won't get 3D acceleration, though.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: DRI and Silicon Motion

2003-09-04 Thread Ian Romanick
Cheshire Cat Fish wrote:

Mesa support/conformance is a requirement. The resulting SMI drivers 
would remain open source, and part of the Xfree/DRI/Linux distribution.  
That is the plan at least.
That's good news. :)

There are way too many variables to be able to accurately answer that 
question (see my answer to your first question). :)
But it sounds like at best I can only re-use the very lowest level of 
drawing code (the part that talks to the hardware) from the Windows 2000 
driver.  Everything above that will be different.
That's a fair assessment.

This is starting to sound like a couple of months work.
At least.  I don't know how much time per week you're planning to put 
into this, but, working full time, it would probably take a month or 
so for someone familiar with DRI internals to get something working 
using existing driver code  good documentation.  To get it working 
*well* would require more time.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DRI and Silicon Motion

2003-09-03 Thread Ian Romanick
Cheshire Cat Fish wrote:

I am investigating supporting DRI and OpenGL for the Silicon Motion driver.
I'm new to both of those, so some of these may be newbie sounding 
questions.

1) I have the  OpenGL code from the Windows 2000 Silicon Motion driver.  
Can this code be mostly used as is?  Or will the Linux code be 
entirely different?
Depending on licensing issues attached to the code you have and how you 
want to distribute it, you may be able to use a lot or a little.  All of 
the existing open-source drivers are based on Mesa, and the whole build 
process for 3D drivers in XFree86 is built on that.  I suspect, but am 
in no position to say for sure, that any contributed drivers would 
have to conform to that.  Porting the existing driver to use Mesa would 
probably be a lot of work, but it shouldn't be insurmountable.

If you want to basically use your existing code as-is, you can port it 
to just interface with the XFree86 libGL.so.  That would be a much 
smaller task, but it would leave you on your own (pretty much) to 
support and distribute the driver.  I don't think it would get included 
in an XFree86 release.  There's also the issue of the license that may 
be attached to the existing code, but as I'm neither a lawyer nor an 
official XFree86 maintainer I'm in no position to comment.

2) In the DRI Users Guide, section 3.2-Graphics Hardware, Silicon Motion 
is not listed as currently being supported.  Is this still the case? Is 
anyone working on this?  Or am I starting from scratch?
This hardware is not supported and I know of nobody that is working on it.

3) How big of a task am I looking at here? Since I alrady have the Win2k 
OGL code to base my work on, it seems to me it shouldn't be too hard to 
drop that code in and hook it up to DRI.  A few weeks maybe?  Or am I 
missing something fundamental?
There are way too many variables to be able to accurately answer that 
question (see my answer to your first question). :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] Backing Store Problem?

2003-09-02 Thread Ian Romanick
John Lee wrote:
I am running XFree86 (v4.2.0) with fvwm2.  If I turn backing store on in 
my X Server, I run into the issue where my window decorations and pop-up 
menus do not refresh properly.  I have to paint them (ie move my mouse 
over where the menu should be) in order for them to display.

If I turn backing store off, then the issue goes away and everything 
refreshes properly.  The problem is, I need backing store for the 
application I am running.

I tried switching to twm and the issue remains.  Any suggestions?
If you're using a Radeon based card, this has been fixed in 4.3.0, IIRC. 
 If it's not in the 4.3.0 release, it is certainly in CVS.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: patch for ia64 page size

2003-08-11 Thread Ian Romanick
Jakub Jelinek wrote:

On Sun, Aug 10, 2003 at 07:06:58PM -0500, Warren Turkal wrote:

@@ -1003,6 +993,8 @@
   break;
}
+r128_drm_page_size = getpagesize();
+

sysconf (_SC_PAGESIZE) is the standardized way of querying page size.
I seem to recall some discussion about this a few months ago.  There are 
some portability issues with both getpagesize and sysconf(_SC_PAGESIZE). 
 Because of that, XFree86 has a wrapper function called 
xf86getpagesize.  There also seems to be a #define that aliases 
getpagesize to xf86getpagesize, so I'm not sure if the wrapper should be 
used or if getpagesize should be used.  Either way, I'm sure that 
sysconf(_SC_PAGESIZE) should *not* be used directly.
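
The portability concern looks roughly like this (a sketch of the general 
pattern only, not the actual xf86getpagesize implementation):

  #include <unistd.h>

  /* prefer sysconf where _SC_PAGESIZE exists; fall back to the
     BSD-style getpagesize() elsewhere */
  static int
  wrapped_getpagesize(void)
  {
  #if defined(_SC_PAGESIZE)
      return (int) sysconf(_SC_PAGESIZE);
  #else
      return getpagesize();
  #endif
  }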

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] Which libGL should I'd been using?

2003-07-28 Thread Ian Romanick
Francisco J. Reyna Sepúlveda wrote:

Hi,

My machine:
XFree86 Version 4.3.0 (Debian 4.3.0-0ds2.0.0woody1 
OS Kernel: Linux version 2.4.21
ATI Technologies Inc Radeon Mobility M6 LY

What is the correct libGL.so.1.x I should be using? 

ldd says /usr/X11R6/lib/libGL.so.1 when running ldd glxgears
however apt-file says I also got these:
xlibmesa3-gl: usr/X11R6/lib/libGL.so.1
xlibmesa3-gl: usr/X11R6/lib/libGL.so.1.2
xlibmesa3-gl: usr/lib/libGL.so.1
xlibmesa3-gl: usr/lib/libGL.so.1.2
Which one should ldd glxgears be displaying?
All of those should be symbolic links except usr/X11R6/lib/libGL.so.1.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] Problems with GL library (HELP!)

2003-07-23 Thread Ian Romanick
Francisco J. Reyna Sepúlveda wrote:

Hi,

I can't make my Radeon Mobility work with 3D acceleration, I'VE DONE
EVERYTHING, please read.
1) Install XFree 4.x.x. Make sure it works. Backup !

DONE
Which .x.x did you install?

2) Check kernel messages to make sure agpgart and mtrr are being loaded
and work. DRM module (like radeon.o or r128.o) must be loaded after
agpgart, which must be loaded after mtrr. It is possible to compile
these in.
Do *NOT* compile in the DRM module.  The one included with that kernel 
is too old.  Use the one included with XFree86 4.3.0 or one of the DRI 
snap-shots.

#Keeping same order of output
...
mtrr: v1.40 (20010327) Richard Gooch ([EMAIL PROTECTED])
mtrr: detected mtrr type: Intel
...
Linux agpgart interface v0.99 (c) Jeff Hartmann
agpgart: Maximum main memory to use for agp memory: 262M
agpgart: Detected Intel i830M chipset
agpgart: AGP aperture is 256M @ 0xd000
...
[drm] AGP 0.99 on Unknown @ 0xd000 256MB
[drm] Initialized radeon 1.2.0 20011231 on minor 0
[drm:radeon_unlock] *ERROR* Process 357 using kernel context 0
[drm:radeon_unlock] *ERROR* Process 622 using kernel context 0
...
I don't know what those 2 errors mean, but I'll continue...
Stop.  Do not pass Go.  Do not collect $200.

That version of the DRM is too old to be useful to anyone.  If you're 
using a recent version of XFree86, you want *at least* DRM version 
1.3.0.  The correct version to use is included with XFree86 4.3.0.  This 
is your problem.

More recent binary driver snap-shots are available under Downloads at 
http://dri.sourceforge.net/.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] intallation troubleshot with a ATI rage 128 agp ona debian

2003-07-17 Thread Ian Romanick
Martin Boris wrote:
I have bought an ATI Rage 128 because many sites on the web say it's THE
Linux compatible graphic card. No luck, my X server doesn't start.
And as I am not really good with X I don't know what I can do.
I have found no answer in my Debian FAQ/howto.
my config file can be found at:
http://sombre.alternc.info/XF86Config-4
and the log report at
http://sombre.alternc.info/XFree86.0.log
Can anyone have a look on that and give me advice on what i can do or
link that can help me ???
Your XF86Config-4 looks okay.  The output in the log looks like it just 
couldn't find your hardware.  Could you send your '/sbin/lspci -vvv' 
output?  That might help a bit.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] Opteron, GinGin and PCIGart

2003-07-17 Thread Ian Romanick
Mark Lane wrote:

I am getting a weird error when attempting to run ForcePCIMode on an 
opteron with a Radeon 7500 PCI. I have also used a 7000 with the same 
results.

(EE) RADEON(0): GetBuffer timed out, resetting engine...
(EE) RADEON(0): RADEONCPGetBuffer: CP reset -1020
(EE) RADEON(0): RADEONCPGetBuffer: CP start -1020
(EE) RADEON(0): RADEONCPGetBuffer: CP GetBuffer -1020
(EE) RADEON(0): GetBuffer timed out, resetting engine...
(EE) RADEON(0): RADEONCPGetBuffer: CP reset -1020
(EE) RADEON(0): RADEONCPGetBuffer: CP start -1020
(EE) RADEON(0): RADEONCPGetBuffer: CP GetBuffer -1020
(EE) RADEON(0): GetBuffer timed out, resetting engine...
I am running the stock XFree86 which comes with GinGin.
There are some known potential problems with the Radeon driver on 
64-bit.  Right now, none of the DRI developers (that I know of) have 
access to any 64-bit hardware, so you're venturing into uncharted 
territory.  One question, though.  Are you running in mixed 32-bit / 
64-bit mode, or is everything (i.e., X-server, kernel, applications) 
built for 64-bit?

(I'm cross-posting this to the DRI list, but it should probably just 
move there.)

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] Looking for a driver for Fire GL2

2003-07-03 Thread Ian Romanick
valli wrote:
I've installed Gentoo Linux 1.4_rc4 on my Pentium III machine.
(includes glibc-2.3.2 and xfree86-4.3.0)
Also part of my machine is the graphic card 'Diamond Fire GL2'.
But I didn't find a driver for this card and my glibc-version on
http://www.ati.com/support/products/workstation/firegl2/linux/firegl2linuxdrivers.html
(The driver for glibc-2.2 didn't work)
I don't think the problem is your glibc version.  Last time I checked, 
the FireGL 2/3/4 driver from ATI only supported XFree86 4.2.  If you 
want to continue to use that card, you'll have to either get ATI to 
update the driver or revert back to XFree86 4.2.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] Problem with glxinfo

2003-07-01 Thread Ian Romanick
Stéphane Purnelle wrote:
On Mon, 2003-06-30 at 23:18, Ian Romanick wrote:

You have an Nvidia card.  Did you load Nvidia's 3D drivers?  If not, you 
won't get any accelerated 3D.  glxinfo is working perfectly.


What do you mean?

Attention, I don't want to install the NVidia driver because the text
mode (80x25) doesn't work.  The card doesn't change frequency and the text
console is not readable.
That sounds like a problem for you.  Nvidia has not made available 
documentation for the XFree86 / DRI folks to write 3D drivers, so the 
*ONLY* accelerated 3D drivers are the closed-source drivers from Nvidia. 
 By text-mode I assume you mean an FBcon driver?  That has nothing to 
do with XFree86 anyway.  The solution is that you're going to have to 
figure out which sets of drivers (for FBcon and XFree86) to install 
together to get the set of functionality that you want.  You may find 
that you can't get everything. :(

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: bugzilla #439: bufSize in lib/GL/glx/glxcmds.c can be too large.

2003-06-30 Thread Ian Romanick
Egbert Eich wrote:
There is a report in bugzilla (#439) which claims:

the bug is in xc/lib/GL/glx/glxcmds.c 
 int bufSize = XMaxRequestSize(dpy) * 4;
should be 
int bufSize = XMaxRequestSize(dpy) * 4 - 8;
or more cleanly
 int bufSize = XMaxRequestSize(dpy) * 4 - sizeof(xGLXRenderReq);

it happens that you may completely fill your GLX buffer if you 
use variable size command larger than 156 bytes (and smaller than 4096 bytes)
in that case you find yourself with an X command larger than 256Kbytes. This
is very unlikely but possible. It explains why this bug has not shown itself
before in this very old SGI code.

I've briefly looked at the code and it seems to be correct.
However I would like to double check before I commit anything.
Any opinions?
I'm not sure this is correct.  bufSize is used to allocate the buffer 
(gc->buf in the code) that will hold the commands, including the 
xGLXRenderReq header.  I've been doing a lot of work lately on the GLX 
code (both client-side & server-side) in the DRI tree lately.  I'll take 
a look at this a bit more and get back to you.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: bugzilla #439: bufSize in lib/GL/glx/glxcmds.c can be too large.

2003-06-30 Thread Ian Romanick
Ian Romanick wrote:
Egbert Eich wrote:

There is a report in bugzilla (#439) which claims:

the bug is in xc/lib/GL/glx/glxcmds.c
 int bufSize = XMaxRequestSize(dpy) * 4;
should be
 int bufSize = XMaxRequestSize(dpy) * 4 - 8;
or more cleanly
 int bufSize = XMaxRequestSize(dpy) * 4 - sizeof(xGLXRenderReq);

it happens that you may completely fill your GLX buffer if you use 
variable size command larger than 156 bytes (and smaller than 4096 bytes)
in that case you find yourself with an X command larger than 256Kbytes.
This is very unlikely but possible. It explains why this bug has not
shown itself before in this very old SGI code.

I've briefly looked at the code and it seems to be correct.
However I would like to double check before I commit anything.
Any opinions?
I'm not sure this is correct.  bufSize is used to allocate the buffer 
(gc->buf in the code) that will hold the commands, including the 
xGLXRenderReq header.  I've been doing a lot of work lately on the GLX 
code (both client-side & server-side) in the DRI tree lately.  I'll take 
a look at this a bit more and get back to you.
I looked into the code, and I now understand what's going on.  Alexis 
made a good catch of a very subtle bug!  The main problem that I had was 
that it wasn't 100% clear at first glance how bufSize / buf / pc were 
used.  Some form of - 8 should be applied to bufSize.  I have attached 
the patch that I plan to apply to the DRI tree.  I suspect that it has 
only cosmetic and / or commentary differences from your patch.

Some things have moved around in the DRI tree, so this patch probably 
won't apply to the XFree86 tree.
Index: glxcmds.c
===
RCS file: /cvsroot/dri/xc/xc/lib/GL/glx/glxcmds.c,v
retrieving revision 1.44
diff -u -d -r1.44 glxcmds.c
--- glxcmds.c   25 Jun 2003 00:39:58 -  1.44
+++ glxcmds.c   30 Jun 2003 20:49:15 -
@@ -198,7 +261,7 @@
 GLXContext AllocateGLXContext( Display *dpy )
 {
  GLXContext gc;
- int bufSize = XMaxRequestSize(dpy) * 4;
+ int bufSize;
  CARD8 opcode;
 
 if (!dpy)
@@ -217,7 +280,14 @@
 }
 memset(gc, 0, sizeof(struct __GLXcontextRec));
 
-/* Allocate transport buffer */
+/*
+** Create a temporary buffer to hold GLX rendering commands.  The size
+** of the buffer is selected so that the maximum number of GLX rendering
+** commands can fit in a single X packet and still have room in the X
+** packet for the GLXRenderReq header.
+*/
+
+bufSize = (XMaxRequestSize(dpy) * 4) - sz_xGLXRenderReq;
 gc->buf = (GLubyte *) Xmalloc(bufSize);
 if (!gc->buf) {
Xfree(gc);
 


Re: [XFree86] Q:3Dtexture in hardware

2003-06-27 Thread Ian Romanick
Andrey P. Cherepenko wrote:
Hi,

I am looking for an appropriate mailing list to discuss 3D texture
implementation in hardware.  Does anyone have any suggestions?
 I am going to try 3D texture for volume visualization.
Could anybody tell me about a good implementation of 3D textures in hardware?
What card ? And what max size of resident 3D texture ?
  Hoping I'm posting to the right list :) If I'm
not, hoping you can point me to the right ones ...
Currently there are no open-source drivers that support hardware 
acceleration of 3D textures.  I believe that the Nvidia, ATI, and 
PowerVR closed-source drivers all support hardware acceleration of 3D 
textures, though.
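
For reference, "3D texture" here means the OpenGL 1.2 GL_TEXTURE_3D 
path; a minimal volume upload looks like this (tex, w, h, d, and voxels 
are assumed to be set up by the application):

  /* upload a w x h x d RGBA volume as a 3D texture (OpenGL 1.2+) */
  glBindTexture(GL_TEXTURE_3D, tex);
  glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, voxels);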

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] DRM won't load on RH9 Pentium 3 w/SIS630

2003-06-27 Thread Ian Romanick
[EMAIL PROTECTED] wrote:
I recently installed RH 9.0 and have been working on some   
challenges with X.  Some digging and experimentation has got  
things mostly working.  However, DRM won't load and this has  
me stumpped for now.  
  
Hardware:  
ASUS TUSI-M, P3 @ 1GHz, SIS 630ET AGP chipset, 512MB  
RAM w/64MB for video, Samsung 172N flat panel display.  
  
RedHat 9.0 w/all RPM's applied.  
Kernel 2.4.20-18.9 (i686)  
XFree86 4.3.0-2 (all pkgs are i386)  
  
rpm -q --provides kernel-2.4.20-18.9  
module-info
kernel = 2.4.20  
kernel-drm = 4.1.0  
kernel-drm = 4.2.0  
kernel-drm = 4.3.0  
kernel-drm = 4.2.99.3  
kernel = 2.4.20-18.9  
  
ls /lib/modules/2.4.20-18.9/kernel/drivers/char/drm/  
i810.o  
i830.o  
mga.o  
r128.o  
radeon.o  
tdfx.o  
You did notice that sis.o is not on this list, right? :)  Nobody on the DRI 
team has access to any SiS hardware.  Because of that the SiS driver, 
which only supported a couple chips anyway, has fallen way out of 
date.  There are a few parts to the open-source drivers.  The DRM is the 
kernel part, and it talks directly to the hardware.  In user mode there 
is (basically) a device specific part and a device independent part 
(which mostly comes from Mesa).  The SiS device specific part is based 
on Mesa 3.x, and all the other drivers are based on Mesa 5.x.  As a 
result it doesn't get built any more (and wouldn't build if you tried). 
 Sorry.

I have not yet updated the drivers/kernel for the SIS modules  
beyond what came with RH9 and updated.   
rpm -qf /lib/modules/2.4.20-18.9/kernel/drivers/video/sis/sisfb.o  
kernel-2.4.20-18.9  
The X log shows:  
(II) SIS(0): SiS driver (31/01/03-1) by Thomas Winischhofer  
[EMAIL PROTECTED]  
Right after loading /usr/X11R6/lib/modules/libvgahw.a  
This is the 2D framebuffer (hence the fb part) driver which has 
nothing to do with the DRM or any of the rest of the 3D driver.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: restarting drm modules

2003-06-26 Thread Ian Romanick
Doug Buxton wrote:
I'm new to the XFree86 sources, so I was hoping someone could give some 
suggestions as to where to start looking.  Is there an existing mechanism 
for changing drm drivers, or restarting drm without restarting X entirely?  
I'm trying to find a way to make X gracefully handle changing the drm 
module.  Right now when I disable the kernel module X either hangs (until 
I reactivate the module) or crashes, depending on whether I'm using the 
distribution version of XFree86 or the one that I downloaded and compiled.
There was once (is still?) a patch around for the Radeon / R200 driver 
that allowed this.  The mechanism was that the user could switch to a 
virtual terminal, rmmod the kernel driver, copy a different driver to 
/lib/modules/..., insmod the new driver (this step may not have been 
required), and return back to X.  Like I said, the Radeon & R200 were 
the *only* drivers that supported this.

In principle, it should be possible to do this with most of the drivers, 
but there are a few corner cases where you have to be careful.  As 3D on 
XFree86 becomes more ubiquitous, having drivers that can do this will be 
a better and better idea.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] compiling XFree86 4.3.0 and matrox drivers 2.1

2003-06-21 Thread Ian Romanick
[EMAIL PROTECTED] wrote:
I have downloaded the XFree86 4.3.0 source code and the mgadrivers-2.1-src
from matrox's web site.  I am running linux kernel 2.4.20 with glibc-2.3.1
and gcc 3.2.1.  do I need to copy the mgadrivers source into the xc
directory to compile them or do I need to copy them into my linux source
tree and re-compile my kernel?
Unless they've updated something when I wasn't looking, the drivers 
provided by Matrox do *not* work with XFree86 4.3.0.  Just use the ones 
that come with XFree86.

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86


Re: RFC: OpenGL + XvMC

2003-06-03 Thread Ian Romanick
Mark Vojkovich wrote:
On Sun, 1 Jun 2003, Jon Leech wrote:
   You might want to think about how this could carry over to the
upcoming super buffers extension, too, since that will probably replace
pbuffers for most purposes within a few years. Since super buffers
  There are a lot of people who are just discovering pbuffers now.
I would guess it would take years before superbuffers were widely used.
I would re-think that assumption. :)  A *lot* of people have known about 
pbuffers but have intentionally avoided them.  When superbuffers are 
available, they are going to jump all over it!  Not only that, on Linux 
only the Nvidia drivers and the ATI drivers for the FireGL 1/2/3 cards 
(not the Radeon based FireGL cards) support it at all currently.

Since nobody supports superbuffers yet, I think we could probably 
re-visit this issue when it is available.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: status of SiS 3d?

2003-06-03 Thread Ian Romanick
Alex Deucher wrote:
Sis wrote support for the 300 series and it works.  However, when mesa
4.x came out no one ever updated the sis dri stuff to match the new
structure.  so DRI works with the 300 if you use the mesa 3.x libs.  It
shouldn't be too hard to port the sis stuff to mesa 4.x, but there
doesn't seem to be much interest in doing so.  3D support for newer sis
boards probably won't happen cause Sis has changed their policy in
regard to giving out docs to their chips.  3D support for the older sis
boards 6326 or whatever it's called should be possible since docs are
available for that board (there was even a utah-glx driver for it), but
it needs to be written.
Which boards did the DRI driver support?  I see 6327 sprinkled all over 
the driver, but not much else.  Would it also support the 6326?  I see 
those on eBay for less than $15 shipped.  If the driver supports that 
chip, I might get one and update the driver to just get Can I have 3D 
on my old SiS card? out of the FAQ. :)

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: RFC: OpenGL + XvMC

2003-06-03 Thread Ian Romanick
Mark Vojkovich wrote:
On Sun, 1 Jun 2003, Jon Leech wrote:

On Mon, Jun 02, 2003 at 01:09:59AM -0400, Mark Vojkovich wrote:

  Extending GL to recognize a relatively unknown XFree86 format
is a hard sell.  I wouldn't even be able to convince my own company
to dirty their code for it seeing as how relatively nobody is using
XvMC.
   Do you implement this without touching the GL driver code? Seems
difficult to avoid touching the driver in the general case, when the
format and location of pbuffer memory is intentionally opaque.
   I haven't touched the GL driver at all.  XvMC is direct rendered
and the assumption is that it's using the same direct rendering
architecture as OpenGL and should be able to get access to the
pbuffer memory if it can name it, just like GL would be able to
do.
You may not have touched the GL driver at all, but you are using some 
sort of non-public interface to it to convert a pbuffer ID to an 
address.  That was somewhat the point of Jon's comment.  I certainly 
don't see anything in any pbuffer documentation that I've ever seen that 
describes how to get the address in video memory of a pbuffer.  In fact, 
the documentation that I have seen goes to some length to explain that 
at certain points in time the pbuffer may not have an address in video 
memory.

Instead of modifying your 3D driver, you've used an internal interface 
that, luckily for you, just happened to already be there.  The rest of 
us may not be so lucky.

Given that, I have only three comments / requests for the function.

1. Please provide a way to specify the destination buffer (i.e., 
GL_FRONT, GL_BACK_RIGHT, etc.) of the copy.

2. Make explicit the coordinate conversion monkey business.

3. Is there a way for apps to determine if this function is available on 
their hardware?  Later this year when pbuffers become available in the 
open-source drivers, we probably won't (initially) have support for this 
function.  I fully expect that support will follow soon, but it won't be 
there initially.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: OpenGL + XvMC

2003-06-03 Thread Ian Romanick
Sottek, Matthew J wrote:
Let me preface my comment with I don't know a lot about OGL so some
further clarification may be needed.
I am assuming that pbuffers are basically buffers that can be used
as textures by OGL. I would then assume that the OGL driver would
have some mapping of pbuffer id to the texture memory it represents,
maybe this memory is in video memory maybe it has been swapped out
so-to-speak by some texture manager etc.
A pbuffer is (basically) just an off-screen window.  You can do the same 
things to a pbuffer that you can do to a normal window.  This includes 
copying its contents to a texture.  There was a proposal to bring 
WGL_render_texture to GLX, but, in light of other developments, there 
wasn't much interest.  It *may* be resurrected at some point for 
completeness' sake, but I wouldn't hold my breath.
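
For concreteness, the render-to-pbuffer-then-copy path uses plain GLX 
1.3 plus core GL, roughly like this (dpy, ctx, and tex are assumed to 
exist; config selection and error handling are trimmed):

  /* pick an fbconfig that supports pbuffers */
  static const int fb_attribs[] = {
      GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
      GLX_RENDER_TYPE,   GLX_RGBA_BIT,
      None
  };
  static const int pb_attribs[] = {
      GLX_PBUFFER_WIDTH, 512, GLX_PBUFFER_HEIGHT, 512, None
  };
  int n;
  GLXFBConfig *cfg = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                       fb_attribs, &n);
  GLXPbuffer pb = glXCreatePbuffer(dpy, cfg[0], pb_attribs);

  /* render into the pbuffer, then copy the result into a texture */
  glXMakeCurrent(dpy, pb, ctx);
  /* ... draw ... */
  glBindTexture(GL_TEXTURE_2D, tex);
  glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);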

So basically this copies data from an XvMC offscreen surface to an
OGL offscreen surface to be used by OGL for normal rendering purposes.
Seems easy enough... I expect anyone doing XvMC would use the drm
for the direct access (or their own drm equivalent) which would also
be the same drm used for OGL and therefore whatever texture management
needs to be done should be possible without much of a problem.
Well, except that, at least in the open-source DRI based drivers, the 
texture memory manager doesn't live in the DRM (any more than malloc and 
free live in the kernel).

My main problem with the concept is that it seems that a copy is not
always required, and is costly at 24fps. For YUV packed surfaces at
least, an XvMC surface could be directly used as a texture. Some way
to associate an XvMC surface with a pbuffer without a copy seems
like something that would have a large performance gain.
It *may* not always be required.  There have been GLX extensions in the 
past (see my first message in this thread) that worked that way. 
However, as we discussed earlier, this doesn't seem to work so well with 
MPEG video files.  The main problem being that you don't get the frames 
exactly in order.  You're stuck doing a copy either way.

Also, what is the goal exactly? Are you trying to allow video to be
used as textures within a 3d rendered scene, or are you trying to
make it possible to do something like Xv, but using direct rendering
and 3d hardware?
If you are trying to do the latter, it seems far easier to just plug
your XvMC extension into the 3d engine rather than into the overlay. I think
you've done the equivalent with Xv already.
I think the goal is to be able to do both.  Although, the idea of using 
MPEG video files as animated textures in a game is pretty cool. :)

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: status of SiS 3d?

2003-06-03 Thread Ian Romanick
Thomas Winischhofer wrote:
 Alex Deucher wrote:

  right now just the 300 series (300, 305?, 540, 630/S/ST, 730) have DRI
  support.  the old series 6326, 620, 530 don't have DRI support, but
  there are docs available (on the dri website I think) to write a DRI
  driver; there was also a utah-glx driver for that series.  I think
  the 6327 might have been the internal sis name for the 300 series,
  although that's just a guess on my part.  The 6326 and the 300 series
  might be similar enough to support them both with one driver, but I

 No, they are not.
So...the 6327 is the 300 series, and it is not similar at all to the 
6326?  It's also not at all similar to the 315 series?  Wow.  Their 
hardware designers really went out of their way to make a driver 
writer's life miserable. :(

  about the DRI, and I'd be willing to try to help you if you wanted to.
  I'll even provide cards.  sis 300 series cards are also very cheap.

 I wouldn't buy a 300 series card nowadays, as cheap as they might be.
 They are quite slow and far behind today's standards. Their only strong
 side is video support.
I certainly wouldn't buy one to replace my Radeon 8500! :)  It would be
exclusively to update the driver.  It's the same reason I would buy a
Gamma card w/an R2 rasterizer...too bad there are *none* on eBay.  After
I realized that, I pretty much gave up any hope of the gamma driver
ever being updated.  That is, unless 3dlabs were to give out
documentation for an R3 or R4 rasterizer.

  It's doubtful however since sis refuses to hand out docs any more.

 Once they are through with what is going on right now (can't tell you),
 the situation might become better.
We'll all be waiting with bated breath. :)

Thanks for your help.

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: RFC: OpenGL + XvMC

2003-06-01 Thread Ian Romanick
Mark Vojkovich wrote:
 On Fri, 30 May 2003, Ian Romanick wrote:

  Mark Vojkovich wrote:

   I'd like to propose adding an XvMCCopySurfaceToGLXPbuffer function
   to XvMC.  I have implemented this in NVIDIA's binary drivers and
   am able to do full framerate HDTV video textures on the higher end
   GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
   contents into a texture.

  This sounds like a good candidate for a GLX extension.  I've been
  wondering when someone would suggest something like this. :)  Although,
  I did expect it to come from someone doing video capture work first.

 I wanted to avoid something from the GLX side.  Introducing the
 concept of an XFree86 video extension buffer to GLX seemed like a hard
 sell.  Introducing a well-established GLX drawable type to XvMC
 seemed more reasonable.
Right.  I thought about this a bit more last night.  A better approach
might be to expose this functionality as an XFree86 extension, then
create a GLX extension on top of it.  I was thinking of an extension
where you would bind a magic buffer to a pbuffer, then take a snapshot
from the input buffer to the pbuffer.  Doing that, we could create
layered extensions for binding v4l streams to pbuffers.  This would be
like creating a subclass in C++ and just overriding the virtual
CaptureImage method.  I think that would be much nicer for application code.

At the same time, all of the real work would still be done in the X 
extension (or v4l).  Only some light-weight bookkeeping would live in GLX.
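
The shape I have in mind would be something like the following, with
invented names (nothing like this exists yet; it is only meant to show
how little would have to live in GLX):

    #include <GL/glx.h>

    /* Hypothetical, invented names: each video back end (XvMC, v4l, ...)
     * supplies its own "virtual" capture method, like a C++ subclass
     * overriding CaptureImage. */
    typedef struct VideoSource {
        int   (*CaptureImage)(struct VideoSource *src, GLXPbuffer dst);
        void   *priv;   /* back-end specific state */
    } VideoSource;

    /* The GLX-level entry point is only light-weight bookkeeping: it
     * forwards the snapshot request to whichever back end was bound. */
    int SnapshotToPbuffer(VideoSource *src, GLXPbuffer dst)
    {
        return src->CaptureImage(src, dst);
    }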

  Over the years there have been a couple extensions for doing things
  like this, both from SGI.  They both work by streaming video data into
  a new type of GLX drawable and use that to source pixel / texel data.

    http://oss.sgi.com/projects/ogl-sample/registry/SGIX/video_source.txt
    http://oss.sgi.com/projects/ogl-sample/registry/SGIX/dmbuffer.txt

  The function that you're suggesting here is a clear break from that.  I
  don't think that's a bad thing.  I suspect that you designed it this way
  so that the implementation would not have to live in the GLX subsystem
  or in the 3D driver, correct?
 That was one of the goals.  I generally view trying to bind
 a video-specific buffer to an OpenGL buffer as a bad idea since they
 always end up as second class.  While there have been implementations
 that could use video buffers as textures, etc... they've always had
 serious limitations like the inability to have mipmaps, or repeat, or
 limited filtering ability or other disappointing things that people
 are sad to learn about.  I opted instead for an explicit copy from
 a video-specific surface to a first-class OpenGL drawable.  Being
 able to do HDTV video textures on a P4 1.2 Gig PC with a $100 video
 card has shown this to be a reasonable tradeoff.
The reason you would lose mipmaps and most of the texture wrap modes is
that video streams rarely have power-of-two dimensions.  In the past,
hardware couldn't do mipmapping or GL_WRAP on non-power-of-two textures.
For the most part, without NV_texture_rectangle, you can't even use
npot textures. :(
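
For reference, the NV_texture_rectangle escape hatch looks something
like this (a sketch; the target supports no mipmaps, and texture
coordinates run 0..width and 0..height rather than 0..1):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Allocate storage for a 720x480 video frame -- a size no
     * power-of-two target of the day could handle directly. */
    static void alloc_frame_texture(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
        /* No mip chain exists on this target, so don't minify
         * through one. */
        glTexParameteri(GL_TEXTURE_RECTANGLE_NV,
                        GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB, 720, 480, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, NULL /* filled in later */);
    }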

With slightly closer integration between XvMC and the 3D driver, we
ought to be able to do something along those lines.  Basically, bind an
XvMCSurface to a pbuffer.  Then, each time a new frame of video is
rendered, the pbuffer would be automatically updated.  Given the way
XvMC works, I'm not sure how well that would work, though.  I'll have to
think on it some more.


 Mpeg frames are displayed in a different order than they are
 rendered.  It's best if the decoder has full control over what goes
 where and when.

Oh.  That does change things a bit.

   Status
   XvMCCopySurfaceToGLXPbuffer (
       Display *display,
       XvMCSurface *surface,
       XID pbuffer_id,
       short src_x,
       short src_y,
       unsigned short width,
       unsigned short height,
       short dst_x,
       short dst_y,
       int flags
   );
  One quick comment.  Don't use 'short', use 'int'.  On every existing and
  future platform that we're likely to care about the shorts will take up
  as much space as an int on the stack anyway, and slower / larger / more
  instructions will need to be used to access them.

 This is an X-window extension.  It's limited to the signed 16 bit
 coordinate system like the X-window system itself, all of Xlib and
 the rest of XvMC.
So?  Just because the values are limited to 16-bit doesn't necessitate 
that they be stored in a memory location that's only 16-bits.  If X were 
being developed from scratch today, instead of calling everything short, 
it would likely be int_fast16_t.  On IA-32, PowerPC, Alpha, SPARC, and 
x86-64, this is int.  Maybe using the C99 types is the right answer anyway.
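
For the curious, this is all the C99 types buy you; a two-line sketch:

    #include <stdint.h>

    int_fast16_t src_x;  /* "at least 16 bits, whatever is fastest":
                            plain int on IA-32, PowerPC, Alpha, SPARC,
                            and x86-64 */
    int16_t      wire_x; /* exactly 16 bits: what actually belongs in
                            the wire protocol */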

   This function copies the rectangle specified by src_x, src_y, width,
   and height from the XvMCSurface denoted by surface to offset dst_x,
   dst_y within the pbuffer identified by its GLXPbuffer XID pbuffer_id.
   Note that while the src_x, src_y are in XvMC's standard left-handed
   coordinate system and specify the upper left hand
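
For illustration, the coordinate conversion in question boils down to
the usual origin flip between X-style and GL-style coordinates.  A
sketch (my arithmetic, not taken from the spec text):

    /* XvMC, like X, puts the origin at the top-left corner with y
     * growing downward; OpenGL's default is a bottom-left origin with
     * y growing upward.  Converting the destination row of a copy is
     * a simple flip. */
    static int to_gl_y(int dst_y, int copy_height, int pbuffer_height)
    {
        return pbuffer_height - (dst_y + copy_height);
    }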

Re: [XFree86] XFree86 4.3 + radeon 7200 (QD)

2003-05-29 Thread Ian Romanick
Florian Scandella wrote:

 i recently installed xfree 4.3 and am experiencing a major slowdown
 when running in a 24 bit resolution. as an example i played
 neverwinternights with no problem, after the upgrade it only runs in 16
 bit mode (unless you like slideshows). there are some problems with
 that mode too (textures, shadows, ...) but i think that's not X's fault.
 direct rendering, agpgart and mtrr are enabled. i also set the agp speed
 to x2. glxinfo shows Mesa DRI Radeon 20020611 AGP 2x x86/MMX/3DNow!
 TCL as renderer version.
There were some recent changes to the Radeon driver in DRI CVS with 
respect to stencil buffer clears.  This seemed to resolve some slow-down 
problems that other users were having with NWN.  Did that resolve your 
problem as well?

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] ATI Radeon 9000 (rv250If) vs DRM, DRI...

2003-03-12 Thread Ian Romanick
Damian Kokowski wrote:
 ./deimos/tmp/ut2003-demo. $ ./ut2003_demo
 Xlib:  extension XiG-SUNDRY-NONSTANDARD missing on display :0.0.
 OpenGL renderer relies on DXTC/S3TC support.
Update to the latest version of UT2k3.  It drops the requirement for 
S3TC support.

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] radeon 7000 dual head STILL not working! HELP! :(

2003-03-03 Thread Ian Romanick
Alessandro Cerri wrote:
 Hi,
 I just set up mandrake 9.0 with xfree 4.2.1 and tried to have dual head
 working on my radeon 7000... the relevant XF86Config entries I believe
 are correct (see bottom, am omitting irrelevant stuff).
 I tried already:
 - reinstalling XFree from the official xfree86 distribution (suggested in
   this same discussion list)
 - disabling one at a time and all of the Module entries
 - taking out the DRI section
 - If you wonder about dri, that gets disabled anyway because of
   issues with xinerama/multihead
 The error I get is the same already mentioned by someone else:
I've been seeing a similar problem.  Does it work at 8-bpp?  That's the 
ONLY way I've been able to get my PCI RV100 (Radeon 7000) setup to work 
at all.  Even at that it only seems to work with SuSE's XFree86 4.2.1 RPMs.

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86


DRI memory management (was Re: [XFree86] i810 driver agpgart error)

2003-02-08 Thread Ian Romanick
David Dawes wrote:


 I think that's a little different from what is being discussed here.
 I think it'd be good to make the reinit work you did more general (work
 for all drivers), as well as make it possible for the DRI to adapt to
 changes in video memory usage on the fly.  Is your reinit patch in the
 DRI trunk yet?


What do you mean by "adapt to changes in video memory usage"?  Do you
mean resizing the memory usage of the framebuffer?  If so, that's one
of the tangential goals of the next phase of memory management work that
is just starting.  Initially we plan to use the new memory manager only
for textures (like the current memory manager), vertex buffers,
back-buffers, and depth-buffers...basically all of the different types
of OpenGL buffers.  Adding support for general X usage (front buffer,
pixmaps, etc.) would be a follow-on effort.

I'm not really expecting this to be ready in time for 4.4.0, but maybe. :)
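
To make the division of labor concrete, the classes of buffers in
question might look something like this (invented names, purely a
sketch of the plan, not the actual interface):

    typedef enum {
        BUF_TEXTURE,   /* what the current manager already handles */
        BUF_VERTEX,    /* vertex buffers */
        BUF_BACK,      /* back color buffers */
        BUF_DEPTH,     /* depth (and stencil) buffers */
        BUF_X_PIXMAP   /* front buffer, pixmaps: the follow-on effort */
    } BufferClass;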

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86


Re: glapi_x86.S glx86asm.py

2003-01-30 Thread Ian Romanick
Alexander Stohr wrote:

  From CVS/XFree86/xc/extras/Mesa/bin/Attic/glx86asm.py,v

  revision 1.2
  date: 2000/12/07 16:12:47;  author: dawes;  state: dead;  lines: +0 -0
  Remove from the trunk the Mesa files that aren't needed.

  Latest entry in cvs log of xc/extras/Mesa/src/X86/glapi_x86.S

  revision 1.7
  date: 2002/09/09 21:07:33;  author: dawes;  state: Exp;  lines: +1 -1
  Mesa 4.0.2 merge

  (So the script glx86asm.py was removed after glapi_x86.S last changed,
  which is a good sign.)

 really?  hmm, if the respective API listing ever changes or extends,
 it might be simpler to use an existing script and then submit the
 results than to perform error-prone copy and paste operations on the
 results.

You'd have to ask Brian to be sure, but I believe the intention is that
if the interface ever changes, a new .S file be generated in the Mesa
tree and imported into the XFree86 and DRI trees.  There should never be
a case where the .S file would change in XFree86 and not change in Mesa.
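
For anyone unfamiliar with what the generated file contains: each stub
is, morally, just a jump through a dispatch table.  A C rendition of
the idea (simplified; the real generated code is hand-tuned assembly,
and the actual Mesa declarations differ in detail):

    struct _glapi_table {
        void (*Vertex3f)(float x, float y, float z);
        /* ... one function pointer per GL entry point ... */
    };

    extern struct _glapi_table *_glapi_Dispatch;

    void glVertex3f(float x, float y, float z)
    {
        /* The generated .S stub performs exactly this indirection,
         * minus the C calling-convention overhead. */
        (*_glapi_Dispatch->Vertex3f)(x, y, z);
    }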

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: [XFree86] Problems with 3-D gaming on Redhat 7.2

2003-01-29 Thread Ian Romanick
Laura West wrote:

 2.  I am using Red Hat 7.2 and upgraded it to the latest kernel,
 along with the latest version of xfree86.  Trident Cyberblade is the
 driver xfree86 chooses to be the best driver.

There is no hardware accelerated 3D driver available for this card for 
Linux.  The game runs slowly because it is using software rendering.

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86


Re: [XFree86] Problems with 3-D gaming on Redhat 7.2

2003-01-29 Thread Ian Romanick
[EMAIL PROTECTED] wrote:

 Is there any workaround for this?  I am willing to turn off my onboard
 video card and get another one that is Linux supported.  It would have
 to be a PCI card.  What may complicate things is that the motherboard
 has an on-board AGP... so I'm not sure how that will work... There is
 no AGP slot on my motherboard.

 Any insights?

 Laura

See http://dri.sourceforge.net/other/dri_driver_features.html for a list
of the cards currently supported by the open-source OpenGL drivers.
Nvidia, ATI, and PowerVR also make closed-source drivers.  I believe just
about every card that has either open-source or closed-source drivers
comes in a PCI version.

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86



Re: [XFree86] Antialiasing and dual monitor with ATI Radeon 9000

2003-01-27 Thread Ian Romanick
Rainer Blum wrote:

 Hi,
 my first question is about the antialiasing feature of
 the ATI Radeon 9000 (called smoothvision):
 How can I use/activate this feature under Linux?


AFAIK, not with the open-source drivers.  I don't believe that ATI has
ever released documentation for this feature.  I'm fairly sure that
ATI's closed-source, binary-only driver supports this feature.  I haven't
ever installed those drivers, so I don't know.  You'll have to ask them
how to make it work. :)

___
XFree86 mailing list
XFree86@XFree86.Org
http://XFree86.Org/mailman/listinfo/xfree86

