Re: Xfree86.0 high CPU in 1280x768

2006-06-07 Thread Mark Vojkovich
On Wed, 7 Jun 2006, Barry Scott wrote:

> Mark Vojkovich wrote:
> > On Tue, 6 Jun 2006, Barry Scott wrote:
> >
> >
> >> I'm seeing the X process take a lot more CPU time to run any workload in
> >> 1280x768 compared to a standard VGA mode like 1280x1024.
> >>
> >> For example running a text scrolling app (lots of XCopyArea calls) I see
> >> the following CPU usage figures:
> >>
> >> 1280x1024:  6.75%
> >> 1280x768:  10.53%
> >>
> >> top shows that the X process is the only one to show a change between
> >> the two modes.
> >>
> >> With XVIDEO apps the difference is from 50% to 70% to play a movie.
> >>
> >> This happens with the i810 driver and the via driver, so I don't think
> >> it's a driver-specific issue. I think that X is changing its behavior.
> >>
> >
> >I seriously doubt that.
> >
> >
> >> Is it possible that X has turned off its acceleration in 1280x768 mode?
> >>
> >
> >   "X" doesn't have anything to do with acceleration.  This is entirely
> > a driver/HW issue.
> >
> I'm surprised that XAA has nothing to do with the X core. I'd have assumed
> that if the driver supports a speed-up then X uses it, otherwise X falls
> back to a non-accelerated algorithm. But if you say it's all in the driver,
> I guess that means that both the via and the i810 driver have the same bug
> in them.

   It's not clear that it's a bug yet.

>
> >> What can I look at to find out what the problem is?
> >>
> >
> > Is your refresh rate the same in both cases?  Integrated
> > graphics have peculiar performance characteristics because the
> > graphics hardware shares memory bandwidth with the CPU.
> >
> Refresh rate is 60Hz in both cases. So I assume that it's not a memory
> bandwidth change as you suggest.
>
> Where should I look to get some data to work on?
>

   When you did your text scrolling test, were the windows the same
size?  It's often the case that the CPU usage increases when the
graphics speed is faster.  That's because the faster graphics allows
more work to get done.  If it takes a certain amount of CPU to render
one line of text and scroll the window, faster scrolling (because you
have fewer lines to scroll) translates to higher CPU usage.

   Run some experiments on fixed size windows.
"x11perf -scroll500" would be an interesting test.  Ideally both
resolutions would have the same performance and CPU usage.  If
the lower resolution runs faster, which it might due to more
memory bandwidth being available, then I'd expect CPU usage to
increase as well.
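
   For example (untested; run the identical fixed-size test in each
resolution and watch the X process in top while it runs):

        x11perf -scroll500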


Mark.


Re: Xfree86.0 high CPU in 1280x768

2006-06-06 Thread Mark Vojkovich
On Tue, 6 Jun 2006, Barry Scott wrote:

> I'm seeing the X process take a lot more CPU time to run any workload in
> 1280x768 compared to a standard VGA mode like 1280x1024.
>
> For example running a text scrolling app (lots of XCopyArea calls) I see the
> following CPU usage figures:
>
> 1280x1024:  6.75%
> 1280x768:  10.53%
>
> top shows that the X process is the only one to show a change between
> the two modes.
>
> With XVIDEO apps the difference is from 50% to 70% to play a movie.
>
> This happens with the i810 driver and the via driver, so I don't think
> it's a driver-specific issue. I think that X is changing its behavior.

   I seriously doubt that.

>
> Is it possible that X has turned off its acceleration in 1280x768 mode?

  "X" doesn't have anything to do with acceleration.  This is entirely
a driver/HW issue.


>
> What can I look at to find out what the problem is?

Is your refresh rate the same in both cases?  Integrated
graphics have peculiar performance characteristics because the
graphics hardware shares memory bandwidth with the CPU.

Mark.



RE: [XFree86] Clipping graphic primitives to visible area of Window

2006-04-11 Thread Mark Vojkovich
   This is probably either a r128 driver bug or a bug in the
acceleration architecture (XAA).  If you had access to a non-ATI
video card that would be an interesting test.  What might fix
the problem without resorting to "NoAccel" is to prevent XAA
from putting pixmaps in videoram.  You can do that with:

  Option "XaaNoOffscreenPixmaps"

   If this was a r128 driver bug related to rendering to offscreen
videoram or if this was an XAA problem related to rendering to
backing store's backing pixmap, that would probably fix the problem.
If it were a problem with XAA breaking backing store's wrappers,
it probably wouldn't.  But there may be other causes - perhaps
that driver disables something else when disabling acceleration.
From looking through driver code, it does appear that "NoAccel"
also disables some things related to 3D.
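
   For reference, that option goes in the Section "Device" of the
XF86Config file, something like this (the Identifier string is
whatever your config already uses):

        Section "Device"
            Identifier "Card0"
            Driver     "r128"
            Option     "XaaNoOffscreenPixmaps"
        EndSection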



Mark.

On Tue, 11 Apr 2006, Pearson, Paul L-Baker Atlas wrote:

> Mark,
>
> Send me the name and address of a favorite restaurant or bar and I'll
> have them open a tab for you to drink a beer with your compadres or eat
> a meal.
>
> The "NoAccel" fixed the problem. Moving the window around is slower, but
> the drawing is just as fast and the scrolling is reasonable. The boss is
> not happy though.
>
> Is there something I can do to get the acceleration back in?
>
> I had removed all the Load commands from the config file. It did not
> change anything.
>
> Thanks,
> Paul
>
> -----Original Message-----
> From: Mark Vojkovich [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, April 11, 2006 12:42
> To: Pearson, Paul L-Baker Atlas
> Cc: devel@XFree86.Org
> Subject: RE: [XFree86] Clipping graphic primitives to visible area of
> Window
>
>
> On Mon, 10 Apr 2006, Pearson, Paul L-Baker Atlas wrote:
>
> > Mark,
> >
> > I removed the backingstore option from XF86Config. Nothing is displayed
> > in the area of the drawable that is exposed with scrolling. Our
> > application does not catch the expose event, it relies on backingstore.
> > So backingstore is partially working.
> >
> > Our application that uses XPutPixel to draw graphics does not have the
> > problem. I can scroll around and everything is available. We use
> > XDrawLine to draw the graphics that are missing from the display. I'm
> > not sure what we use to display the text - but it is always there when I
> > scroll around.
> >
> > I removed the three extensions that you had suggested. Now only the
> > dotted line is displayed and the text is gone.
>
>That's weird.  I would have expected it to get better.
>
> >
> > Where can I find info on the extensions? I searched for awhile looking
> > for descriptions, without luck.
>
> I don't know of a database of extension descriptions.  DBE is
> the double-buffer extension.  I don't know of any applications that
> use it.  GLX is the server component of OpenGL.  DRI is an extension
> related to the implementation of OpenGL that the r128 driver uses.
>
> You could also try removing the extmod module but that holds
> very common extensions like shape and mitshm and some applications
> require those to operate.
>
> You could also try removing the fbdevhw module.  I don't
> think your driver is using the framebuffer device and I didn't
> think that module wrapped any rendering operations, but it
> shouldn't hurt to remove it.
>
>
> >
> > If the backingstore puts up some of the image - shouldn't it put up all
> > of the image?
>
>That's probably not the problem.  Backing store allocates a
> pixmap the size of the drawable.  Window rendering that gets
> clipped away goes to the backing pixmap.  When part of the window
> is exposed, that part gets initialized with the backing pixmap
> contents instead of sending the client an expose event.  I doubt
> copying from the backing pixmap is broken.  Most likely, rendering
> to the backing pixmap is broken.  The most common cause of that
> being broken is that some extension broke the mechanism which
> enables the backing store code to monitor window rendering.
>
>Could you also try telling the r128 to turn off hardware
> acceleration?
> That would be:
>
>    Option "NoAccel"
>
> in the Section "Device" in the XF86Config file.  The server will
> get very slow, but if it makes the problem go away it narrows
> down the problem substantially.
>
>
>   Mark.
>
> >
> > We use backingstore for speed of display - these apps are run over the
> > network and the geophysical data is large.
> >
> >

RE: [XFree86] Clipping graphic primitives to visible area of Window

2006-04-11 Thread Mark Vojkovich
On Mon, 10 Apr 2006, Pearson, Paul L-Baker Atlas wrote:

> Mark,
>
> I removed the backingstore option from XF86Config. Nothing is displayed
> in the area of the drawable that is exposed with scrolling. Our
> application does not catch the expose event, it relies on backingstore.
> So backingstore is partially working.
>
> Our application that uses XPutPixel to draw graphics does not have the
> problem. I can scroll around and everything is available. We use
> XDrawLine to draw the graphics that are missing from the display. I'm
> not sure what we use to display the text - but it is always there when I
> scroll around.
>
> I removed the three extensions that you had suggested. Now only the
> dotted line is displayed and the text is gone.

   That's weird.  I would have expected it to get better.

>
> Where can I find info on the extensions? I searched for awhile looking
> for descriptions, without luck.

I don't know of a database of extension descriptions.  DBE is
the double-buffer extension.  I don't know of any applications that
use it.  GLX is the server component of OpenGL.  DRI is an extension
related to the implementation of OpenGL that the r128 driver uses.

You could also try removing the extmod module but that holds
very common extensions like shape and mitshm and some applications
require those to operate.

You could also try removing the fbdevhw module.  I don't
think your driver is using the framebuffer device and I didn't
think that module wrapped any rendering operations, but it
shouldn't hurt to remove it.


>
> If the backingstore puts up some of the image - shouldn't it put up all
> of the image?

   That's probably not the problem.  Backing store allocates a
pixmap the size of the drawable.  Window rendering that gets
clipped away goes to the backing pixmap.  When part of the window
is exposed, that part gets initialized with the backing pixmap
contents instead of sending the client an expose event.  I doubt
copying from the backing pixmap is broken.  Most likely, rendering
to the backing pixmap is broken.  The most common cause of that
being broken is that some extension broke the mechanism which
enables the backing store code to monitor window rendering.

   Could you also try telling the r128 to turn off hardware acceleration?
That would be:

   Option "NoAccel"

in the Section "Device" in the XF86Config file.  The server will
get very slow, but if it makes the problem go away it narrows
down the problem substantially.


Mark.

>
> We use backingstore for speed of display - these apps are run over the
> network and the geophysical data is large.
>
> Thanks for your help,
> Paul
>
>
>
> -----Original Message-----
> From: Mark Vojkovich [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 10, 2006 12:41
> To: Pearson, Paul L-Baker Atlas
> Cc: devel@XFree86.Org
> Subject: Re: [XFree86] Clipping graphic primitives to visible area of
> Window
>
>
>Backing store doesn't really guarantee that you won't get
> expose events.  I believe the X11 Protocol specification says
> that enabling backing store merely tells the server that saving
> contents would be "useful" and doesn't guarantee that you won't
> get expose events.  A program that isn't capable of handling
> expose events is technically broken.  It's probably the case
> that different vendor implementations of backing store make
> different guarantees.  XFree86 uses the implementation from
> the X11 sample implementation.
>
>The big question is whether or not XFree86 sent exposures
> when this scrolling occurred (assuming the application requested
> expose events in the first place).  If the expose event was
> sent, this is technically not a server bug.   The only thing
> weird that I see from your snapshots was that it appears as
> though some rendering operations may have been rendered to the
> backing store while some others might not have.  Though another
> explanation is that XFree86 didn't render any of it and the
> text was the only part rerendered by the application after
> the expose event.
>
>I did some quick tests with the only backing store aware
> application I have (the "xv" image viewer) and didn't see any
> obvious problems using NVIDIA's drivers.  Sometimes driver or
> extension implementations can break the backing store wrappers,
> but you are using the 'r128' driver which probably isn't modifying
> the wrappers.  Some of the other extensions might.   You could
> try commenting out the loading of the dbe, dri or glx modules
> in the XF86Config, but I doubt those would be breaking backing
> store wrappers.
>
>My guess is that this is probably a bad app assumption rather
> than a server bug, but I don't have a way to verify that at the
> moment.

Re: [XFree86] Clipping graphic primitives to visible area of Window

2006-04-10 Thread Mark Vojkovich
   Backing store doesn't really guarantee that you won't get
expose events.  I believe the X11 Protocol specification says
that enabling backing store merely tells the server that saving
contents would be "useful" and doesn't guarantee that you won't
get expose events.  A program that isn't capable of handling
expose events is technically broken.  It's probably the case
that different vendor implementations of backing store make
different guarantees.  XFree86 uses the implementation from
the X11 sample implementation.
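
   A minimal client-side sketch of what that implies -- backing store
requested only as a hint, with Expose still selected and handled
("redraw" stands in for the application's repaint routine):

#include <X11/Xlib.h>

extern void redraw(Display *dpy, Window win);   /* app's repaint routine */

void event_loop(Display *dpy, Window win)
{
    XSetWindowAttributes attr;

    attr.backing_store = WhenMapped;            /* a hint, not a guarantee */
    XChangeWindowAttributes(dpy, win, CWBackingStore, &attr);
    XSelectInput(dpy, win, ExposureMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose && ev.xexpose.count == 0)
            redraw(dpy, win);                   /* repaint after exposure */
    }
}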

   The big question is whether or not XFree86 sent exposures
when this scrolling occurred (assuming the application requested
expose events in the first place).  If the expose event was
sent, this is technically not a server bug.   The only thing
weird that I see from your snapshots was that it appears as
though some rendering operations may have been rendered to the
backing store while some others might not have.  Though another
explanation is that XFree86 didn't render any of it and the
text was the only part rerendered by the application after
the expose event.

   I did some quick tests with the only backing store aware
application I have (the "xv" image viewer) and didn't see any
obvious problems using NVIDIA's drivers.  Sometimes driver or
extension implementations can break the backing store wrappers,
but you are using the 'r128' driver which probably isn't modifying
the wrappers.  Some of the other extensions might.   You could
try commenting out the loading of the dbe, dri or glx modules
in the XF86Config, but I doubt those would be breaking backing
store wrappers.

   My guess is that this is probably a bad app assumption rather
than a server bug, but I don't have a way to verify that at the
moment.

Mark.


On Mon, 10 Apr 2006, Pearson, Paul L-Baker Atlas wrote:

> Mark,
>
>
>
> Thanks for the reply. Our applications do depend on backing store and I
> have enabled it, and it appears to work. If I put a window over the
> window, everything that was there comes back when the overlay is
> removed.
>
>
>
> I have a window which is smaller than my drawable. The window has scroll
> bars. I use text, pixels and graphic primitives (XDrawLine) to display
> to the drawable. Everything is displayed in the window. I scroll the
> window. The text and pixels are displayed, but the graphics done with
> the primitives are not displayed. The display acts as if the
> clip_x_origin, clip_y_origin and clip_mask are being set to the size and
> location of the window. If I scroll the window and force a graphics
> update, some more primitives are displayed. If I scroll the window back
> to where it was, that which was displayed with the primitives is gone,
> text and pixels are there.
>
>
>
> I've attached four files (hopefully I will remember to attach them) -
> the XF86Config file, disp1.png (showing the display before scrolling)
> and disp2.png (showing the display after scrolling), and disp3.png
> (after forcing an update to the scrolled window).
>
>
>
> Paul Pearson
>
> Software Developer
>
>
>
> VSFusion
>
> 16430 Park Ten Place, Suite 405
>
> Houston, Texas 77084
>
>
>
> tel:   + 1 281-646-2750
>
> fax:  + 1 281-646-2799
>
> email: [EMAIL PROTECTED]
>
> web:   www.vsfusion.com
>


Re: CVS GLX oddity

2006-03-24 Thread Mark Vojkovich
   Is that the final fix or is there something else I should test?
That one works for me.


Mark.

On Thu, 23 Mar 2006, Mark Vojkovich wrote:

>
>Yes, that works.
>
>   Mark.
>
> On Thu, 23 Mar 2006, David Dawes wrote:
>
> > On Wed, Mar 22, 2006 at 08:52:00PM -0800, Mark Vojkovich wrote:
> > >   initdata is still NULL even after your call to LoaderSymbol() in
> > >that patch.
> >
> > The module name needs to be prepended.  Something like:
> >
> >   if (!initdata) {
> > char *md;
> >
> > xasprintf(&md, "%s" MODULE_DATA_NAME, name);
> > if (md) {
> >   initdata = LoaderSymbol(md);
> >   xfree(md);
> > }
> >   }
> >
> >
> > David
> >
> > >
> > >   Mark.
> > >
> > >On Wed, 22 Mar 2006, David Dawes wrote:
> > >
> > >> On Wed, Mar 22, 2006 at 06:57:17PM -0800, Mark Vojkovich wrote:
> > >> >  I can't get CVS to load NVIDIA's GLX module.  It complains:
> > >> >
> > >> >(II) Loading /usr/X11R6/lib/modules/extensions/libglx.so
> > >> >(EE) LoadModule: Module glx does not have a glxModuleData data object.
> > >> >(II) UnloadModule: "glx"
> > >> >
> > >> >Did something change with regards to this?  It was working before
> > >> >I updated.
> > >>
> > >> dlopen modules have always had different semantics than XFree86 modules.
> > >> These differences will only get greater as additional features are added
> > >> to the XFree86 loader and as the newly added features are used more
> > >> widely.
> > >>
> > >> The following (untested) patch may solve this particular problem.  Let me
> > >> know how it goes.
> > >>
> > >> David
> > >> --
> > >> David Dawes X-Oz Technologies
> > >> www.XFree86.org/~dawes  www.x-oz.com
> > >>


Re: CVS GLX oddity

2006-03-23 Thread Mark Vojkovich
   Yes, that works.

Mark.

On Thu, 23 Mar 2006, David Dawes wrote:

> On Wed, Mar 22, 2006 at 08:52:00PM -0800, Mark Vojkovich wrote:
> >   initdata is still NULL even after your call to LoaderSymbol() in
> >that patch.
>
> The module name needs to be prepended.  Something like:
>
>   if (!initdata) {
> char *md;
>
> xasprintf(&md, "%s" MODULE_DATA_NAME, name);
> if (md) {
>   initdata = LoaderSymbol(md);
>   xfree(md);
> }
>   }
>
>
> David
>
> >
> > Mark.
> >
> >On Wed, 22 Mar 2006, David Dawes wrote:
> >
> >> On Wed, Mar 22, 2006 at 06:57:17PM -0800, Mark Vojkovich wrote:
> >> >  I can't get CVS to load NVIDIA's GLX module.  It complains:
> >> >
> >> >(II) Loading /usr/X11R6/lib/modules/extensions/libglx.so
> >> >(EE) LoadModule: Module glx does not have a glxModuleData data object.
> >> >(II) UnloadModule: "glx"
> >> >
> >> >Did something change with regards to this?  It was working before
> >> >I updated.
> >>
> >> dlopen modules have always had different semantics than XFree86 modules.
> >> These differences will only get greater as additional features are added
> >> to the XFree86 loader and as the newly added features are used more
> >> widely.
> >>
> >> The following (untested) patch may solve this particular problem.  Let me
> >> know how it goes.
> >>
> >> David
> >> --
> >> David Dawes X-Oz Technologies
> >> www.XFree86.org/~dawes  www.x-oz.com
> >>


Re: CVS GLX oddity

2006-03-22 Thread Mark Vojkovich
   initdata is still NULL even after your call to LoaderSymbol() in
that patch.

Mark.

On Wed, 22 Mar 2006, David Dawes wrote:

> On Wed, Mar 22, 2006 at 06:57:17PM -0800, Mark Vojkovich wrote:
> >  I can't get CVS to load NVIDIA's GLX module.  It complains:
> >
> >(II) Loading /usr/X11R6/lib/modules/extensions/libglx.so
> >(EE) LoadModule: Module glx does not have a glxModuleData data object.
> >(II) UnloadModule: "glx"
> >
> >Did something change with regards to this?  It was working before
> >I updated.
>
> dlopen modules have always had different semantics than XFree86 modules.
> These differences will only get greater as additional features are added
> to the XFree86 loader and as the newly added features are used more
> widely.
>
> The following (untested) patch may solve this particular problem.  Let me know
> how it goes.
>
> David
> --
> David Dawes X-Oz Technologies
> www.XFree86.org/~dawes  www.x-oz.com
>


CVS GLX oddity

2006-03-22 Thread Mark Vojkovich
  I can't get CVS to load NVIDIA's GLX module.  It complains:

(II) Loading /usr/X11R6/lib/modules/extensions/libglx.so
(EE) LoadModule: Module glx does not have a glxModuleData data object.
(II) UnloadModule: "glx"

Did something change with regards to this?  It was working before
I updated.


Mark.


Re: surviving "broken connection" and/or XSetIOErrorHandler()

2006-03-07 Thread Mark Vojkovich
   Xkill not only destroys your window, but terminates the client
connection.  Your XDisplay is no longer valid from that point
on.  The problem you are seeing is that Xlib calls exit() after
calling your IOError handler.

   Historically, I believe the only way to continue after the error
handler gets called is to jump out of it (that stack frame) :(
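
   Something like this (an untested sketch; the function names are
placeholders):

#include <setjmp.h>
#include <X11/Xlib.h>

static jmp_buf recover;

static int io_error(Display *dpy)
{
    longjmp(recover, 1);    /* returning would let Xlib call exit() */
    return 0;               /* never reached */
}

int run_event_loop(Display *dpy)
{
    XSetIOErrorHandler(io_error);
    if (setjmp(recover)) {
        /* The connection is dead here.  Don't touch dpy again,
           and don't call XCloseDisplay() on it. */
        return -1;
    }
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);   /* an IO error in here longjmps out */
    }
}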

   I think there may be an alternative in the xfixes extension,
but I'm not sure.  I know it was talked about, but I'm not sure
if a solution was devised.



Mark.



On Mon, 6 Mar 2006, IOhannes m zmoelnig wrote:

> hi.
>
> i have a serious newbie X-programming problem, which i have not been
> able to solve using google and other resources i found on the net:
>
> i am writing an application which creates and destroys windows (which
> are bound to a glx-context and where the app does some openGL-rendering)
> on demand: the user requests a window to be created by pressing a
> "create window" button; the window is destroyed when the user presses a
> "destroy window" button.
>
> however, when the created window gets destroyed from outside by closing
> the window (via the small "x" at the upper-right window corner) or by
> killing it (via xkill), my application gets no notification of this.
> when the user now requests to destroy the (already gone) window, my
> application crashes on calling XCloseDisplay() (it also crashes when i
> try to call glXMakeCurrent()) with the famous:
> "X connection to :0.0 broken (explicit kill or server shutdown)."
>
> of course, this is not the behaviour i prefer: i would like it best if
> destroying the window "by other means" would be the same as using the
> internal window-destruction routines.
> however, for this to work i would like to be a) notified of
> window-destruction or b) have a possibility to check whether the
> display-connection is still valid.
> i don't seem to find a solution to any of these.
> whatever i do with the (invalid) Display, the functions either return
> without an error or they crash with an IOError. there seems to be no
> "isDisplayStillValid()"-function.
> listening to events doesn't seem to work either: i tried to add
> "DestroyNotify" to the events i listen too, but such event never appear
> (other events like grabbing the system pointer of resizing the window
> work fine though)
> the best i can get is registering an error-handler with
> XSetIOErrorHandler(): however this doesn't really help, since i don't
> want to quit the application just because one of its windows was closed
> (and the IOErrorHandler does not give me an opportunity to return to
> my application)
>
> i am pretty sure i miss something very obvious.
> could someone please point me in the right direction?
>
>
> mfg.a.dsr
> IOhannes
>
> PS: this is my 2nd attempt to send this email, so apologies for
> possible double postings (i haven't been subscribed to xfree86-devel before)
>


Re: How do I sync output changes to vertical retrace?

2006-02-22 Thread Mark Vojkovich
On Wed, 22 Feb 2006, Barry Scott wrote:

> Mark Vojkovich wrote:
> >   The only mechanism I know of is OpenGL.  Most OpenGL drivers have
> > a mechanism to allow buffer swapping at vblank.
> >
> Using DRM/DRI this works:
>
> void waitForVSync()
> {
> if( card_fd < 0 )
> card_fd = open( "/dev/dri/card0", O_RDONLY );
>
> drm_wait_vblank_t wait_vblank;
> wait_vblank.request.type = _DRM_VBLANK_RELATIVE;
> wait_vblank.request.sequence = 1;
> wait_vblank.request.signal = 0;
>
> int rc;
> do
> {
> wait_vblank.request.type = _DRM_VBLANK_RELATIVE;
> rc = ioctl( card_fd, DRM_IOCTL_WAIT_VBLANK, &wait_vblank );
> }
> while( rc != 0 && errno == EINTR );
> }
>
>
> Barry

   Come to think of it, NVIDIA does, or at least did, have a device
file that lets you wait for vblank as well.  These types of things
are pretty unportable though.

Mark.


/*
gcc -o polltest -Wall polltest.c
*/

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/poll.h>

#define FILENAME "/dev/nvidia0"

#define COUNT_FOR (60*4)

int main (void)
{
    struct pollfd pfd;
    struct timeval tv;
    struct timezone tz;
    double t1, t2;
    int i, fd, timeout;

    fd = open(FILENAME, O_RDONLY);
    if (fd < 0) {
        printf("can't open %s\n", FILENAME);
        return 1;
    }

    pfd.fd = fd;
    pfd.events = POLLIN | POLLPRI;
    pfd.revents = 0;

    timeout = 1000;  /* milliseconds */

    gettimeofday(&tv, &tz);
    t1 = tv.tv_usec + (tv.tv_sec * 1000000.0);

    for(i = 0; i < COUNT_FOR; i++) {
        if(poll(&pfd, 1, timeout) <= 0) {
            printf("poll() failed\n");
            break;
        }
        usleep(0);
    }

    gettimeofday(&tv, &tz);
    t2 = tv.tv_usec + (tv.tv_sec * 1000000.0);

    close(fd);

    printf("Refresh rate is %f Hz\n",
           (double)COUNT_FOR * 1000000.0 / (t2 - t1));

    return 0;
}



Re: How do I sync output changes to vertical retrace?

2006-02-15 Thread Mark Vojkovich
On Mon, 13 Feb 2006, Barry Scott wrote:

> I have a text scrolling app that is not playing smoothly.
>
> Attempting to update the windows on a timer is not keeping my changes in
> sync with the monitor's refresh. This is causing visual glitches.
>
> What mechanisms can I use to lock my changes to the monitor refresh rate?

  The only mechanism I know of is OpenGL.  Most OpenGL drivers have
a mechanism to allow buffer swapping at vblank.

> If I use DBE is it going to change the screen in the vertical blanking
> interval?
>

   No, it won't.  At least not in the XFree86/X.org implementations.


Mark.


Re: Xlib : sequence lost (0x1718e > 0x71a0) in reply to 0x0!

2006-02-03 Thread Mark Vojkovich
   Each call to XOpenDisplay opens a new communication socket to
the X-server.  Commands sent through this socket need to be serialized.
If you have two threads trying to send data at the same time through
the same socket they will corrupt each other's data.  XInitThreads
enables a lock around the Xlib code that accesses the socket so
that only one thread can send data through the socket at a time.
This generally works fine except that if you pause one thread while
it is in Xlib and has already taken the lock, it will prevent any other
thread from entering Xlib and taking the lock.  Separate display
connections for each thread are the solution to that.
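
   A sketch of the ordering that implies (XInitThreads has to be the
first Xlib call in the process):

#include <X11/Xlib.h>

Display *open_shared_display(void)
{
    if (!XInitThreads())        /* before any other Xlib call */
        return NULL;
    /* One shared, mutex-protected connection.  If a thread may be
       paused while inside Xlib, give each thread its own
       XOpenDisplay() connection instead. */
    return XOpenDisplay(NULL);
}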

Mark.


On Wed, 1 Feb 2006 [EMAIL PROTECTED] wrote:

> Hi
>   You have given a good suggestion, but I don't understand: if the
> pausing thread blocks any other thread trying to use
> Xlib with the same display connection, then why does it work
> fine sometimes?
> Thanks
>
> >  Separate threads either need to use separate display
> > connections or you need to enable thread mutexes for a shared
> > connection (XInitThreads will enable Xlib's internal mutexes).
> > Note still, that pausing a thread while it's in Xlib can block
> > any other threads also trying to use Xlib with the same display
> > connection.  You'd want to use separate display connections
> > for that.
> >
> > Mark.
> >
> > On Tue, 31 Jan 2006 [EMAIL PROTECTED] wrote:
> >
> >>  Hi to all
> >>
> >>
> >> I am building a KDE application.  When I pause the current pthread and
> >>   invoke a dialog in another thread, the following error is coming:
> >>
> >> Xlib : unexpected async reply
> >> Xlib :sequence lost (0x1718e > 0x71a0) in reply to 0x0!
> >> X Error : BadImplementation (server does not implement operation) 17
> >> Major opcode : 20
> >> MInor opcode : 0
> >> Resource id  : 0x759d1
> >> The error is coming randomly, not always.
> >>   Can anyone help me figure out how to come out of this error?
> >>
> >>
> >>  Thanks


Re: Xlib : sequence lost (0x1718e > 0x71a0) in reply to 0x0!

2006-02-01 Thread Mark Vojkovich
   Separate threads either need to use separate display
connections or you need to enable thread mutexes for a shared
connection (XInitThreads will enable Xlib's internal mutexes).
Note still, that pausing a thread while it's in Xlib can block
any other threads also trying to use Xlib with the same display
connection.  You'd want to use separate display connections
for that.

Mark.

On Tue, 31 Jan 2006 [EMAIL PROTECTED] wrote:

>  Hi to all
>
>
> I am building a KDE application.  When I pause the current pthread and
>   invoke a dialog in another thread, the following error is coming:
>
> Xlib : unexpected async reply
> Xlib :sequence lost (0x1718e > 0x71a0) in reply to 0x0!
> X Error : BadImplementation (server does not implement operation) 17
> Major opcode : 20
> MInor opcode : 0
> Resource id  : 0x759d1
> The error is coming randomly, not always.
>   Can anyone help me figure out how to come out of this error?
>
>
>  Thanks


Re: Framebuffer mapped adress

2006-01-25 Thread Mark Vojkovich
   I'm not sure what you're asking.  "FbBase" and "FbStart" in
most drivers are virtual addresses and not useful to anything
other than the X-server.  Do you mean the physical address?
The DGA extension has some protocol to query the physical
address and the start of the framebuffer, but not all
drivers will support this.
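
   From memory, the DGA 1.0 query looks roughly like this -- treat
the header name and prototypes as assumptions and check your xf86dga
headers (the base address comes back as an int, so it's only
meaningful on 32-bit systems):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/xf86dga1.h>

int print_fb_base(Display *dpy, int screen)
{
    int ev, err, base, width, bank, ram;

    if (!XF86DGAQueryExtension(dpy, &ev, &err))
        return -1;      /* server or driver doesn't support DGA */
    if (!XF86DGAGetVideoLL(dpy, screen, &base, &width, &bank, &ram))
        return -1;
    printf("framebuffer physical base: 0x%x\n", base);
    return 0;
}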

Mark.

On Thu, 8 Dec 2005, ayachi gherissi wrote:

> Hi
> Is there a way to get FbBase and FbStart from an X
> extension?
>
> Thanks
>


Re: How to turn off the hardware mouse rendering?

2005-11-28 Thread Mark Vojkovich
On Mon, 28 Nov 2005, Tim Roberts wrote:

> Andrew C Aitchison wrote:
>
> >On Mon, 28 Nov 2005, Daniel wrote:
> >
> >
> >
> >>I want to snap a desktop including the mouse pointer. However, the
> >>common tools and functions can not capture a window's image including the
> >>mouse. I think it's because the mouse is not drawn by the graphics card
> >>and is not put into the color buffer. So how do I stop the hardware
> >>acceleration? Or is there some special way to do this job?
> >>
> >>
>
> The tools that take a desktop snapshot intentionally remove the mouse
> pointer from the screen, because in the vast majority of cases, you
> don't want it in the snapshot.  If you want the pointer in the snap, you
> will have to add it by hand.
>

   Saying that these tools remove the pointer is misleading.
The snapshot tools do no such thing.  The X-server removes
the cursor independent of whether the cursor was rendered in
hardware or software.  There is no way to get an image from
the X-server that includes the cursor.


Mark.



Re: Multiple Xv overlays cause blue flashing

2005-11-18 Thread Mark Vojkovich
   The grab is client-specific.  The grab will only fail if it's owned
by another client.  This is just to prevent multiple apps from fighting
over the same port.  It's assumed that if you've got a single client
that client will be able to keep track of which ports it's using.

Mark.

On Thu, 17 Nov 2005, Smoof . wrote:

> Thanks everyone for the help.  The ultimate solution was that I switched to
> a machine with an nvidia chipset.  Then I was able to use the video blitter
> port to send the video images to their respective X windows with no
> flashing.  Interesting thing is that I was able to successfully grab the
> same port multiple times.  I would have thought that once a port had been
> grabbed it would no longer be available.
>
>


Re: Multiple Xv overlays cause blue flashing

2005-11-17 Thread Mark Vojkovich
On Wed, 16 Nov 2005, Smoof . wrote:

> >On Wed, 16 Nov 2005, Alex Deucher wrote:
> >
> > > On 11/16/05, Smoof . <[EMAIL PROTECTED]> wrote:
> > > > Hello,
> > > >
> > > > I am writing an application that will display up to 9 independent
> > > > video streams (each stream is 320 x 240).  I'm new to Xv and may not
> > > > be using the correct terminology so please bear with me.  I have
> > > > tried two approaches:
> > > >
> > > > The first approach was to create one large overlay using
> > > > XvShmCreateImage and tile in the video frames.  Once all frames are
> > > > tiled in, use XvShmPutImage to send them to the X server.  This
> > > > method works perfectly.  However, my ultimate goal is to send each
> > > > video stream to its own GTK widget so I can have each video stream
> > > > playing in a window that can be moved, be surrounded by buttons,
> > > > minimized, etc...
> > > >
> > > > I implemented this by creating a simple GTK app with three drawing
> > > > areas (ultimately I will have 9) of 320x240 and used some GDK
> > > > functions to determine the X window id's for the widgets.  I created
> > > > a separate overlay (again using XvShmCreateImage) for each window.
> > > > Then I call XvShmPutImage once for each window.  Finally I call
> > > > XFlush to send the requests to the X server.  I tried using XSync
> > > > but it seemed to interfere with the GTK event loop.
> > > >
> > > > The problem with this second approach is that the overlays are
> > > > flashing blue (the overlay color key from what I've read).  So I'm
> > > > looking for advice on how to update multiple overlays at a rate of
> > > > 24fps without any flashing.  Or if you don't think this is possible
> > > > then please let me know and I'll just have to get by with my first
> > > > implementation.
> > > >
> > >
> > > Most hardware only has one overlay so each widget will be fighting for
> > > it.  Only the one that has it at any given moment will actually
> > > display the video; the rest will show the colorkey.
> > >
> > > Alex
> >
> >Typically, a client will grab the Xv port when using it to prevent
> >other clients from being able to use the same Xv port.  When a new
> >client can't grab one Xv port, it looks for another one. That mechanism
> >only works when there are different clients.  If you want to do all
> >rendering from the same client, then you need to deliberately use
> >different ports for each Window.  Some drivers export more than one
> >adaptor that supports XvImages and some adaptors have more than one
> >port.  Overlay adaptors will typically only have a single port.
> >Run xvinfo for a summary of adaptors and their capabilities.
> >
> >
> > Mark.
>
> My plan was to do all the rendering with the same client and I know that my
> overlay adaptor only has a single port for the YUV420 format that I am
> using.

   Do you have non-overlay XvImage adaptors with more than one port?
NVIDIA drivers and some others offer this.


Mark.


Re: Multiple Xv overlays cause blue flashing

2005-11-16 Thread Mark Vojkovich
On Wed, 16 Nov 2005, Alex Deucher wrote:

> On 11/16/05, Smoof . <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > I am writing an application that will display up to 9 independent video
> > streams (each stream is 320 x 240).  I'm new to Xv and may not be using the
> > correct terminology so please bear with me.  I have tried two approaches:
> >
> > The first approach was to create one large overlay using XvShmCreateImage
> > and tile in the video frames.  Once all frames are tiled in, use
> > XvShmPutImage to send them to the X server.  This method works perfectly.
> > However, my ultimate goal is to send each video stream to its own GTK
> > widget so I can have each video stream playing in a window that can be
> > moved, be surrounded by buttons, minimized, etc...
> >
> > I implemented this by creating a simple GTK app with three drawing areas
> > (ultimately I will have 9) of 320x240 and used some GDK functions to
> > determine the X window id's for the widgets.  I created a separate overlay
> > (again using  XvShmCreateImage) for each window.  Then I call XvShmPutImage
> > once for each window.  Finally I call XFlush to send the requests to the X
> > server.  I tried using XSync but it seemed to interfere with the GTK event
> > loop.
> >
> > The problem with this second approach is that the overlays are flashing blue
> > (the overlay color key from what I've read).  So I'm looking for advice on how
> > to update multiple overlays at a rate of 24fps without any flashing.  Or if
> > you don't think this is possible then please let me know and I'll just have
> > to get by with my first implementation.
> >
>
> Most hardware only has one overlay so each widget will be fighting for
> it.  Only the one that has it at any given moment will actually
> display the video; the rest will show the colorkey.
>
> Alex

   Typically, a client will grab the Xv port when using it to prevent
other clients from being able to use the same Xv port.  When a new
client can't grab one Xv port, it looks for another one. That mechanism
only works when there are different clients.  If you want to do all
rendering from the same client, then you need to deliberately use
different ports for each Window.  Some drivers export more than one
adaptor that supports XvImages and some adaptors have more than one
port.  Overlay adaptors will typically only have a single port.
Run xvinfo for a summary of adaptors and their capabilities.
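
   A sketch of that port hunting from a single client (untested;
error handling trimmed):

#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

XvPortID grab_free_image_port(Display *dpy, Window root)
{
    unsigned int i, nadaptors;
    XvAdaptorInfo *info;
    XvPortID port = 0;

    if (XvQueryAdaptors(dpy, root, &nadaptors, &info) != Success)
        return 0;
    for (i = 0; i < nadaptors && !port; i++) {
        unsigned long p;
        if (!(info[i].type & XvImageMask))      /* XvImage-capable only */
            continue;
        for (p = 0; p < info[i].num_ports; p++) {
            if (XvGrabPort(dpy, info[i].base_id + p, CurrentTime) == Success) {
                port = info[i].base_id + p;     /* ours until XvUngrabPort() */
                break;
            }
        }
    }
    XvFreeAdaptorInfo(info);
    return port;        /* 0 means no free port was found */
}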


Mark.



Re: How to subit our X server display driver

2005-10-26 Thread Mark Vojkovich
On Wed, 26 Oct 2005, Luke Chen wrote:

> Dear Sir
>
> I hope someone can answer my following questions, thanks.
>
> I would like to submit our X server display driver to Xfree86.
> I should follow the 4-step program (described in the XFree86 developer
> docs) and simply submit my display driver to Bugman?

   Yes, either send it through the bugzilla or send it to [EMAIL PROTECTED]

>
> How can I know the exact dates of the development cycle of the next
> release? The end date of the experimental phase, the date of the feature
> freeze, the date of the code freeze, and the release date?

   Major releases don't come very often and the dates are not firm
but we try to get one out at least once a year.  XFree86 does release
developer snapshots every few weeks (http://www.xfree86.org/develsnaps/).


>
> If my display driver is already included in "The Release" version, does it
> mean RedHat will include my display driver as its in-box driver?
>

   In my experience RedHat takes a while to get stuff like this
into the box.  You might want to talk to RedHat explicitly about
that.  They do modify and patch their own X-server so it may be
possible to expedite this.


Mark.


Re: Can XCopyArea work on the desktop?

2005-10-20 Thread Mark Vojkovich
   XCopyArea can copy arbitrary rectangles of the desktop if the
source is the root window and the GC has IncludeInferiors for the
sub-window mode.

   See the man page on XSetSubwindowMode.  There are a few
Xlib functions for getting the root window ID (XRootWindow,
XDefaultRootWindow, XRootWindowOfScreen).
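
   Something like this (an untested sketch; the coordinates are
arbitrary):

#include <X11/Xlib.h>

/* Copy a 100x100 area of the screen from (0,0) to (200,200). */
void copy_screen_area(Display *dpy)
{
    Window root = XDefaultRootWindow(dpy);
    XGCValues gcv;
    GC gc;

    gcv.subwindow_mode = IncludeInferiors;  /* read through child windows */
    gc = XCreateGC(dpy, root, GCSubwindowMode, &gcv);
    XCopyArea(dpy, root, root, gc, 0, 0, 100, 100, 200, 200);
    XFreeGC(dpy, gc);
    XFlush(dpy);
}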

Mark.

On Thu, 20 Oct 2005, Daniel(Lijun_Xie) wrote:

> Hi,
>
> I want to copy one area of the desktop to another place. However I find the
> XCopyArea function only works on windows (widgets), and can't work on the
> desktop itself. By the term desktop, I mean the screen.
>
> Is it so?
>
> thank you very much for any information. It's urgent work.
>
>
>
> Daniel(Lijun_Xie)
> [EMAIL PROTECTED] or [EMAIL PROTECTED]
> 2005-10-20
>


Re: Wire protocol for X

2005-10-12 Thread Mark Vojkovich
  It's in the server tree at xc/doc/hardcopy/XProtocol

http://cvsweb.xfree86.org/cvsweb/xc/doc/hardcopy/XProtocol/

Mark.


On Wed, 5 Oct 2005, Eddy Hahn wrote:

> Hi,
>
> I'm in the process of designing a system that will translate the wire-level
> protocol from X Windows to RDP, so you can hook up a PC or a dumb (brick)
> terminal using RDP to a Linux/Unix system.  For that, I need the wire
> protocol.  Can someone help me find it somewhere?
>
> Thanks,
>
> Eddy Hahn
>




Re: Writing an XFree86 driver for some unusual hardware...

2005-10-08 Thread Mark Vojkovich
   You can get the server to render to a system memory buffer
using the shadowfb.  Many drivers support an Option "ShadowFB"
where rendering happens to system memory and then the driver
periodically flushes the system memory framebuffer to the
real framebuffer.  So you may be able to use this system memory
shadow buffer as a stream for your adaptor.
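
   Where the driver supports it, that looks something like this in
the XF86Config (Identifier and Driver values are placeholders):

        Section "Device"
            Identifier "Card0"
            Driver     "yourdriver"
            Option     "ShadowFB" "true"
        EndSection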


Mark.

On Sat, 8 Oct 2005, Victor Demchenko wrote:

> Hello
>
>   I want to use a device that is not a video adaptor as output for the
> XFree86 server. This device receives 24 bit RGB frames (pictures) as a
> byte stream with a finish flag at the end of each one, and doesn't know
> anything about any kind of timings etc. The speed of output is about 25
> fps.
>   Is it possible to use this device with XFree86 at all? I am trying to
> write a test XFree86 driver for this purpose and haven't found how to
> get the screen picture to send to my device. Is it true that only the Xv
> extension allows control of the output of completed frames via the
> adaptor->PutImage() routine? And actually even video players like
> MPlayer or Xine, when using Xv, attempt to call XvShmPutImage()
> (which, as far as I know, cannot be handled by my driver) instead of
> XvPutImage().
>   I need at least to output the video via some player. But it
> would be wonderful if I could use this device as a usual monitor to
> output any of the X applications.
>
> Thank you.
>
> ---
>   br
> Victor


Re: "nv" driver: Option "FPDither" default value

2005-10-04 Thread Mark Vojkovich
   Whoops, I'm wrong.  It turns out it's not in the EDID.  For
desktop systems this is set in the control panel.  For laptops,
the driver keeps a list of known panels.  The iMac is essentially
a laptop.

Mark.

On Tue, 4 Oct 2005, Benjamin Herrenschmidt wrote:

> >The iMac looks very "laptop-like" so I'm not surprised it has a
> > 6 bit panel.  It might be in the EDID.  I'm not sure how else
> > software would be able to know.
>
> I'll try to find somebody with access to the appropriate VESA specs to
> find out then. The "other" way to know is what Apple does in OS X for
> things like default panel gamma table, backlight value range, etc...
> they have a long table of pretty much every monitor they ever
> shipped with that information.
>
> Another possibility, if possible (I have to dbl check the driver) would
> be to check the dither setting set by the BIOS/firmware. I'm not sure
> it's set wrong on the iMac, I suspect not, in fact, It's probably just
> nvidiafb and X "nv" that disabling it by default. Maybe if we could
> "read" it's previous state the same way we read the panel size from the
> registers, we could use that as a default value when no option is
> specified in the config file.
>
> In a similar vein, I noticed that the kernel fbdev now have some code to
> calculate timings using the CVT algorithm, and that it actually produces
> a working modeline for this panel based solely on the panel size read
> from registers, while X{Free,.org} just picks a scaled mode as 1440x900
> isn't in it's built-in list. I suppose it would be time to rework
> xf86Modes.c a bit to better deal with flat panels anyway, I'll look into
> it if I ever find time...
>
> Ben.
>
>


Re: "nv" driver: Option "FPDither" default value

2005-10-03 Thread Mark Vojkovich
On Mon, 3 Oct 2005, Benjamin Herrenschmidt wrote:

> On Sun, 2005-10-02 at 18:32 -0700, Mark Vojkovich wrote:
> >FPDither takes 8 bit output and dithers down to 6 bit.  It
> > will improve the quality on 6 bit panels and degrade it on 8
> > bit panels.  Nearly all desktop panels are 8 bit (only very cheap
> > or very old ones are not).  Most laptop panels have been 6
> > bit, but some high-end laptops have 8 bit panels.
>
> Ok, thanks. Is there a way to "detect" the panel component size (via
> EDID maybe) ? I'm actually surprised that the iMac G5 panel is only 6
> bits but heh, I suppose Apple had to cut costs on this one... I don't
> suppose it could be a chip misconfiguration in the firmware causing it
> to emit 6 bits data only, could it ?
>

   The iMac looks very "laptop-like" so I'm not surprised it has a
6 bit panel.  It might be in the EDID.  I'm not sure how else
software would be able to know.


Mark.


Re: "nv" driver: Option "FPDither" default value

2005-10-02 Thread Mark Vojkovich
   FPDither takes 8 bit output and dithers down to 6 bit.  It
will improve the quality on 6 bit panels and degrade it on 8
bit panels.  Nearly all desktop panels are 8 bit (only very cheap
or very old ones are not).  Most laptop panels have been 6
bit, but some high-end laptops have 8 bit panels.
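
   For a 6 bit panel, enabling it would look something like this in
the XF86Config (the Identifier is a placeholder):

        Section "Device"
            Identifier "Card0"
            Driver     "nv"
            Option     "FPDither" "true"
        EndSection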

Mark.


On Sun, 2 Oct 2005, Benjamin Herrenschmidt wrote:

> Hi Mark !
>
> I have a small question about the "nv" driver...
>
> What is the reason why option "FPDither" is not enabled by default ?
>
> It definitely makes a huge difference in quality on the iMac G5 I have
> here. Can it actually reduce the quality on other setups or impact
> performance?
>
> Regards,
> Ben.
>
>
>


Re: tdfx and DDC2

2005-08-31 Thread Mark Vojkovich
   The NVIDIA Mac boards I've seen are Mac only.  They won't even plug
into a PC because the connector is different.  It's like PCI, but has
an extra power tab to drive the Apple Display Connector.  None of
those boards have a PC BIOS; they have OpenFirmware fcode.

   I think most hardware manufacturers prefer an incompatible
board for the Mac.  It means you can charge more for them, which
you need to do because you need to cover the cost of the software
development for the lower volume PowerPC market.


Mark.

On Tue, 30 Aug 2005, Tim Roberts wrote:

> Michael wrote:
>
> >I don't see why they should be enabled - they're PC-specific and even
> >with x86 emulation they would be pretty much useless since you're not
> >too likely to encounter a graphics board with PC firmware in a Mac ( or
> >other PowerPC boxes )
> >
> >
>
> Wrong.  No hardware manufacturer in their right mind would build a
> Mac-only PCI graphics board, with the possible exception of Apple.
> They're going to build a generic graphics board that works in a PC and
> by the way also works in a Mac.  Such a board will have a video BIOS.
>
> I suppose you might find a board with a Mac-only SKU that does not stuff
> the BIOS chip.
>
> --
> Tim Roberts, [EMAIL PROTECTED]
> Providenza & Boekelheide, Inc.
>


Re: error libXinerama

2005-07-18 Thread Mark Vojkovich
   It probably didn't come with your Linux distribution.  It probably
wasn't built by default with the XFree86 version RH9 is using.  I've got
one you can use at:

http://www.xfree86.org/~mvojkovi/libXinerama.so.1.0

Stick it in /usr/X11R6/lib and run ldconfig.
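
   Something like this, as root (ldconfig should recreate the .so.1
link from the library's SONAME):

        cp libXinerama.so.1.0 /usr/X11R6/lib/
        ldconfig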

Mark.

On Fri, 15 Jul 2005, Stefanus Eddy wrote:

> dear XFree86 team,
>
> i'm using RH 9.0 with Linux kernel 2.4.20-30.9 (i686 GNU/Linux)
> i get this error when running Remote Desktop Connection, Mozilla and others
>
> krdc: error while loading shared libraries: libXinerama.so.1: cannot open
> shared object file: No such file or directory
>
> i tried to locate this library, but i can't find it. any ideas?
>
> thank you
>
> best regards,
>
>
> eddy


Re: What is the relationship between XFree86 and X11?

2005-07-03 Thread Mark Vojkovich
   X11 is a standard.  XFree86 is an implementation of that standard.
Additionally, the X-Window System allows for vendor-specific extensions,
so XFree86 implements features beyond what are covered in the X11 standard.

Mark.

On Sun, 3 Jul 2005, Edison Deng wrote:

> Hi, everybody.
> I am a new guy to the X Window system. Can anyone tell me what is the
> relationship between XFree86 and X11?
> Thanks & Regards,
> Edison
>


Re: Does Intel 865G graphics card support gamma correction

2005-06-29 Thread Mark Vojkovich
   While i865G hardware might support gamma correction, the
XFree86 drivers for it do not.  I believe this is because
nobody with the time or ability to add gamma correction support
to the driver has sufficient documentation for the i865.


Mark.

On Wed, 22 Jun 2005, Karthik Ramamoorthy wrote:

> Hi all,
>
> In my system ie Intel PC with i865G integrated graphics card,
> i am not able to do gamma correction. My Linux is SuSE 9.2.
> Is it that Intel cards does have XFree86 driver support to
> change gamma values.
>
>How can i do gamma correction in my system. Is it possible in
> Intel systems or not?
>
> Regards
> Karthik R
>


Re: DBE on nv, Shape and XAA questions

2005-06-29 Thread Mark Vojkovich
   The "nv" driver has nothing to do with either Xinerama or DBE.
The decision to disable DBE when Xinerama is enabled (because DBE
doesn't work in Xinerama) is made in core XFree86 code, not the
driver.

Mark.

On Wed, 22 Jun 2005, Michal Maruška wrote:

> Mark Vojkovich <[EMAIL PROTECTED]> writes:
>
> On Sat, 18 Jun 2005, Michal Maruška wrote:
> >
> >> * Is it correct, that the "nv" driver does not support DBE (double buffer 
> >> extension)?
> >
> >The drivers have nothing to do with DBE extension support.  XFree86
> > supports DBE for all hardware whenever you load the "extmod" module.
> > DBE is not supported in Xinerama, however.
>
> Thank you for the solution. It was the Option "Xinerama" "true", even if I had
> no configuration for the xinerama layout.  Strange that with the mga driver DBE
> works even with xinerama. Some manpage (the nv?)  should mention this
> incompatibility.
>


Re: libXinerama

2005-06-29 Thread Mark Vojkovich
   It's at http://www.xfree86.org/~mvojkovi/libXinerama.so.1.0

Mark.

On Thu, 16 Jun 2005, device hda1 wrote:

> Dear Mark,
> I'm sorry for sending this email, but I've read in 
> http://www.mail-archive.com/devel@xfree86.org/msg07158.html, there was 
> a discussion about Metacity, and Manikandan T asked about libXinerama.so.1,
> and you said that you could provide libXinerama.so.1; below was your posting.
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
> Of Mark Vojkovich
> Sent: 07 April 2005 22:56
> To: devel@XFree86.Org
> Subject: Re: missing libXinerama.so.1
>
>It's unfortunate that Metacity has that dependency.  The .so comes
> with newer X-servers.  You can try to pull one out of newer X-server
> packages.  I can mail you the library alone if you want.
>
> Mark.
>
> Is it ok if I ask you to send me libXinerama.so.1 too ?
>
> Thanks in Advance,
> Aji
>
> --
> O King of Demons, come here, there is Linux
> (assume here a picture of an inverted five-pointed star in a circle)
> It comes without being fetched, it leaves without being escorted
> (Khan besar mo ko bedeng)
> --
>
>


Re: DBE on nv, Shape and XAA questions

2005-06-22 Thread Mark Vojkovich
On Sat, 18 Jun 2005, Michal Maruška wrote:

> * Is it correct, that the "nv" driver does not support DBE (double buffer 
> extension)?

   The drivers have nothing to do with DBE extension support.  XFree86
supports DBE for all hardware whenever you load the "extmod" module.
DBE is not supported in Xinerama, however.
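
   That is, the Section "Module" of the XF86Config should contain
something like this, alongside whatever else you already load:

        Section "Module"
            Load "extmod"    # provides DBE, SHAPE, MIT-SHM and others
        EndSection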

>
>
> * How fast are "Screen to screen bit blits" ?

   Depends on the hardware.  Hardware blits are always significantly
faster than software blits.

>
> I tuned sawfish WM, so that when resizing windows it sets the border
> window (WM decoration) to a suitable window gravity, so it's up to the X
> server to move it correctly (WM does not redraw). I avoided shaped windows.
>
> I use the "mga" driver, so it has "Screen to screen bit blits" XXA call.
>
> Yet, if I resize (in 1 direction) continuously the window, i see the vertical
> line as a staircase line:
>
>
>||
> b  ||
> l  |   |
> a  | ->|
> c  |   |
> k  |  |
>|  |
>
> Any idea if it can be improved?

   Rendering is not synchronized with the screen retrace, so tearing
is expected.

>
> * Shape vs. flickering
>
> if i run xlogo -shape (resize it to make things clear) and move the logo 
> window, i see
> flickering (the window below is obscured and redrawn even outside of the 
> logo).

When you move a window, the uncovered area needs to be redrawn, of
course.  When using the shape extension, exposures are not done down
to the pixel.  For performance reasons, it exposes bounding boxes,
because every separate exposed rectangle is an additional callback
to the client.


>
> I would like to know if the "Screen to Screen color expansion" XAA call could be 
> used to
> avoid it.
>

   Everything is already fully accelerated.  Add Option "NoAccel" to
your Section "Device" in the XF86Config file to see what unaccelerated
rendering looks like.
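
   For example, a sketch of where that option goes (the Identifier and
Driver values are placeholders for whatever your config already has):

      Section "Device"
          Identifier  "Card0"
          Driver      "mga"
          Option      "NoAccel"
      EndSection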


Mark.

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: missing libXinerama.so.1

2005-04-07 Thread Mark Vojkovich
   It's unfortunate that Metacity has that dependency.  The .so comes
with newer X-servers.  You can try to pull one out of newer X-server
packages.  I can mail you the library alone if you want.

Mark.


On Wed, 6 Apr 2005, Manikandan Thangavelu wrote:

> Hi All,
>
> I am missing libXinerama.so.1 library in my machine. I want to upgrade
> my Metacity window manager which has this library as dependency.
> I do have libXinerama.a but not the .so file. Where can I get it?
>
> Thanks in Advance,
> Manikandan T
>
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: DGA and PointerMoved()

2005-03-07 Thread Mark Vojkovich
   There is no cursor in DGA mode.  Clients can still get mouse
events, but that's not the same thing.  DGA mouse events reflect
relative motion rather than absolute, and the bulk of the cursor
paths need to be bypassed.  That is, the cursor isn't anywhere,
the mouse has been disconnected from the cursor and the relative
mouse events are sent directly to the client.

Mark.

On Mon, 7 Mar 2005, Thomas Winischhofer wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
>
>
> Is it intentional that pScrn->PointerMoved() is being called (by whom?)
> when DGA is active?
>
> I only receive the cursor's last coordinates here from the time before
> DGA was activated. Any mouse movement while DGA is active has no
> influence on the coordinates received here.
>
> Anyone?
>
> Thomas
>
> - --
> Thomas Winischhofer
> Vienna/Austria
> thomas AT winischhofer DOT net   *** http://www.winischhofer.net
> twini AT xfree86 DOT org
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.0 (GNU/Linux)
>
> iD8DBQFCLIXEzydIRAktyUcRAtvSAJ0fMhtBsuyZ/eZfzDLpRCIkimZe2wCggSlM
> cgH5/Ho/sbB7/xNugkez80s=
> =PWRP
> -END PGP SIGNATURE-
> ___
> Devel mailing list
> Devel@XFree86.Org
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: 4.4.99.902: s3 fails some of xtests

2005-03-03 Thread Mark Vojkovich
On Wed, 2 Mar 2005, Tim Roberts wrote:

> Németh Márton wrote:
>
> > Hi!
> >
> > I've tested 4.5.0RC2 with xtest 4.0.10, see
> > http://bugs.xfree86.org/show_bug.cgi?id=1557 for details.
> >
> > I've attached a test C program which always produces bad rendering
> > using acceleration, and never if XaaNoScreenToScreenCopy is set
> > (=without acceleration). The results are also attached.
> >
> > Has anyone seen such behaviour?
> >
> > Does anyone have a programmer's manual for the 86c764/765 [Trio32/64/64V+] chip?
>
>
> Is it really only GXclear, GXinvert, and GXset that fail?  If so, the
> diagnosis is pretty easy.
>
> For those three ROPs, it's not really a screen-to-screen blit at all:
> the source surface is not used.  Most S3 chips (Savage included) fail if
> you attempt to use a two-operand bitblt command when the source is not
> involved.  That's why there is an XAA flag specifically for this case.
>
> The solution is to add
>  pXAA->ScreenToScreenCopyFlags = ROP_NEEDS_SOURCE;
>  to the S3AccelInitXxx function at the bottom of the file.
>

   I don't believe the Trio32/64/64V+ had that problem.  That was
specific to the ViRGE.  I'm more inclined to believe that this
problem is because it's not setting:

   pXAA->ScreenToScreenCopyFlags = NO_TRANSPARENCY;

  I don't recall the S3 driver I wrote a long time ago having
that feature, and you definitely don't want to be using it if you
support transparency during color expansions.  The transparent blit
feature is really only for chips that don't have a color expansion
engine for stippling.
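
   For illustration, a sketch of where such a flag gets set in a
driver's XAA initialization; the S3Setup*/S3Subsequent* names are
invented placeholders, not the actual driver's functions:

      XAAInfoRecPtr infoPtr = XAACreateInfoRec();

      /* this blitter has no transparency support */
      infoPtr->ScreenToScreenCopyFlags = NO_TRANSPARENCY;
      infoPtr->SetupForScreenToScreenCopy = S3SetupForScreenToScreenCopy;
      infoPtr->SubsequentScreenToScreenCopy = S3SubsequentScreenToScreenCopy;

      XAAInit(pScreen, infoPtr);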

   If you want to see correct acceleration code for the old S3 chips
you should dig up the old s3 code in the XFree86 3.3.x XF86_SVGA
server.  I wrote that years ago.


Mark.

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Modeline behavior changed (broken)?

2005-02-17 Thread Mark Vojkovich
On Thu, 17 Feb 2005, David Dawes wrote:
> On Thu, Feb 17, 2005 at 10:52:33AM -0800, Mark Vojkovich wrote:
> >
> >   I think the priority should be:  Section "Monitor", EDID, builtin.
> >But it appears that it's EDID, Section "Monitor", builtin.
>
> Yes, I agree that the modes specified explicitly in the Monitor section
> should have first priority.  The attached patch prevents EDID modes matching
> Monitor section modes from being added to the pool, much the same way as
> happens already for the built-in default/VESA modes.
>
> Let me know how it goes.

   With the patch, it works as I expect it to.


Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Modeline behavior changed (broken)?

2005-02-17 Thread Mark Vojkovich
On Wed, 16 Feb 2005, David Dawes wrote:

> On Wed, Feb 16, 2005 at 06:07:43PM -0800, Mark Vojkovich wrote:
> >   It used to be that if you specified a modeline, say "1600x1200" in
> >the XF86Config file, that modeline would take preference over any
> >internal modelines of the same name.  This no longer appears to be
> >the case.  If I have a "1600x1200" modeline in the XF86Config file,
> >it no longer gets used, but another mode instead (I presume the
> >internal mode).  I have to name my mode to something else in order
> >to use it.
> >
> >   It seems like the server was changed to ignore modes placed
> >in the monitor section if they conflict with internal modes.  Was
> >this change intentional?
>
> It works correctly for me.  If explicitly provided modes are not
> overriding the default modes then it is a bug.  Can you send your
> log file?

   It appears that what's happening is modes from the monitor's
edid take precedence over Section "Monitor" overrides.  I specified
mode "1600x1200" in my SubSection "Display" Modes.  I provided a custom
modeline in the Section "Monitor":

# 1600x1200 @ 79.1 Hz, 98.9 kHz
   Modeline  "1600x1200" 213.6 1600 1664 1856 2160 1200 1201 1204 1250

but the monitor is reporting 86 Hz, 106 kHz.

(**) NV(0): *Preferred EDID mode "1600x1200": 230.0 MHz, 106.5 kHz, 85.2 Hz

  ...

(**) NV(0):  Mode "1600x1200": 213.6 MHz, 98.9 kHz, 79.1 Hz
(II) NV(0): Modeline "1600x1200"  213.60  1600 1664 1856 2160  1200 1201 1204 
1250

   I think the priority should be:  Section "Monitor", EDID, builtin.
But it appears that it's EDID, Section "Monitor", builtin.


Mark.



___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Modeline behavior changed (broken)?

2005-02-16 Thread Mark Vojkovich
   It used to be that if you specified a modeline, say "1600x1200" in
the XF86Config file, that modeline would take preference over any
internal modelines of the same name.  This no longer appears to be
the case.  If I have a "1600x1200" modeline in the XF86Config file,
it no longer gets used, but another mode instead (I presume the
internal mode).  I have to name my mode to something else in order
to use it.

   It seems like the server was changed to ignore modes placed
in the monitor section if they conflict with internal modes.  Was
this change intentional?


Mark.

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Problem restoring console?

2005-02-15 Thread Mark Vojkovich
On Tue, 15 Feb 2005, David Dawes wrote:

> On Tue, Feb 15, 2005 at 10:34:16AM -0800, Mark Vojkovich wrote:
> >On Mon, 14 Feb 2005, David Dawes wrote:
> >
> >> On Mon, Feb 14, 2005 at 07:40:40PM -0800, Mark Vojkovich wrote:
> >> >On Mon, 14 Feb 2005, Mark Vojkovich wrote:
> >> >
> >> >> On Mon, 14 Feb 2005, David Dawes wrote:
> >> >>
> >> >> > On Mon, Feb 14, 2005 at 04:00:18PM -0800, Mark Vojkovich wrote:
> >> >> > >   I just updated on my dual-card system and with the update I see
> >> >> > >a problem restoring the console that I did not see previously.  If
> >> >> > >I startx on the primary card and then quit, the primary card is 
> >> >> > >restored
> >> >> > >correctly.  However, if I startx on both cards and quit, the
> >> >> > >primary card is not restored correctly.  I have a hard time imagining
> >> >> > >how it could be a driver issue since the "nv" driver knows nothing 
> >> >> > >about
> >> >> > >the other cards in the layout (nor should it) and does not change
> >> >> > >its behavior when there is more than one card in the layout.  The
> >> >> > >core server code, on the other hand, does.
> >> >> > >
> >> >> > >   Have there been changes to vgahw, RAC, PCI config code, console
> >> >> > >code, etc... that may have caused this regression?
> >> >> >
> >> >> > Do you know approximately when this problem started?
> >> >>
> >> >>   I haven't updated in a long time on that machine.  I'll try to
> >> >> figure out when, but I'm not sure how to do that reliably.
> >> >
> >> >   I can't tell when I last updated on this machine.
> >>
> >> Can you go back to 4.4 as a first step, or do you know it was post-4.4?
> >
> >   It worked fine with 4.4.  I built sometime after 4.4 but I'm
> >not sure when.
> >
> >
> >>
> >> I tried 4.5.0 RC1 with a multi-head config using a Mach64 and i810,
> >> and didn't see any problem like this.  I can try some other multi-head
> >> configs later this week.
> >
> >
> >What OS are you on?  It could be something specific to Linux
> >console/vt.
>
> That quick test was on Linux.  Without further information, I'd
> suggest trying a snapshot from the last month or so, then work
> backwards or forwards from there.  What do the console restoration
> problems look like?

   I'm left with just a blinking cursor.


Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Problem restoring console?

2005-02-15 Thread Mark Vojkovich
On Mon, 14 Feb 2005, David Dawes wrote:

> On Mon, Feb 14, 2005 at 07:40:40PM -0800, Mark Vojkovich wrote:
> >On Mon, 14 Feb 2005, Mark Vojkovich wrote:
> >
> >> On Mon, 14 Feb 2005, David Dawes wrote:
> >>
> >> > On Mon, Feb 14, 2005 at 04:00:18PM -0800, Mark Vojkovich wrote:
> >> > >   I just updated on my dual-card system and with the update I see
> >> > >a problem restoring the console that I did not see previously.  If
> >> > >I startx on the primary card and then quit, the primary card is restored
> >> > >correctly.  However, if I startx on both cards and quit, the
> >> > >primary card is not restored correctly.  I have a hard time imagining
> >> > >how it could be a driver issue since the "nv" driver knows nothing about
> >> > >the other cards in the layout (nor should it) and does not change
> >> > >its behavior when there is more than one card in the layout.  The
> >> > >core server code, on the other hand, does.
> >> > >
> >> > >   Have there been changes to vgahw, RAC, PCI config code, console
> >> > >code, etc... that may have caused this regression?
> >> >
> >> > Do you know approximately when this problem started?
> >>
> >>   I haven't updated in a long time on that machine.  I'll try to
> >> figure out when, but I'm not sure how to do that reliably.
> >
> >   I can't tell when I last updated on this machine.
>
> Can you go back to 4.4 as a first step, or do you know it was post-4.4?

   It worked fine with 4.4.  I built sometime after 4.4 but I'm
not sure when.


>
> I tried 4.5.0 RC1 with a multi-head config using a Mach64 and i810,
> and didn't see any problem like this.  I can try some other multi-head
> configs later this week.


What OS are you on?  It could be something specific to Linux
console/vt.


Mark.


___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Problem restoring console?

2005-02-14 Thread Mark Vojkovich
On Mon, 14 Feb 2005, Mark Vojkovich wrote:

> On Mon, 14 Feb 2005, David Dawes wrote:
>
> > On Mon, Feb 14, 2005 at 04:00:18PM -0800, Mark Vojkovich wrote:
> > >   I just updated on my dual-card system and with the update I see
> > >a problem restoring the console that I did not see previously.  If
> > >I startx on the primary card and then quit, the primary card is restored
> > >correctly.  However, if I startx on both cards and quit, the
> > >primary card is not restored correctly.  I have a hard time imagining
> > >how it could be a driver issue since the "nv" driver knows nothing about
> > >the other cards in the layout (nor should it) and does not change
> > >its behavior when there is more than one card in the layout.  The
> > >core server code, on the other hand, does.
> > >
> > >   Have there been changes to vgahw, RAC, PCI config code, console
> > >code, etc... that may have caused this regression?
> >
> > Do you know approximately when this problem started?
>
>   I haven't updated in a long time on that machine.  I'll try to
> figure out when, but I'm not sure how to do that reliably.

   I can't tell when I last updated on this machine.

Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Problem restoring console?

2005-02-14 Thread Mark Vojkovich
On Mon, 14 Feb 2005, David Dawes wrote:

> On Mon, Feb 14, 2005 at 04:00:18PM -0800, Mark Vojkovich wrote:
> >   I just updated on my dual-card system and with the update I see
> >a problem restoring the console that I did not see previously.  If
> >I startx on the primary card and then quit, the primary card is restored
> >correctly.  However, if I startx on both cards and quit, the
> >primary card is not restored correctly.  I have a hard time imagining
> >how it could be a driver issue since the "nv" driver knows nothing about
> >the other cards in the layout (nor should it) and does not change
> >its behavior when there is more than one card in the layout.  The
> >core server code, on the other hand, does.
> >
> >   Have there been changes to vgahw, RAC, PCI config code, console
> >code, etc... that may have caused this regression?
>
> Do you know approximately when this problem started?

  I haven't updated in a long time on that machine.  I'll try to
figure out when, but I'm not sure how to do that reliably.


Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Problem restoring console?

2005-02-14 Thread Mark Vojkovich
  In case it wasn't clear, only dual-card layouts show the problem.
I can start just the secondary card and the primary console will be
restored correctly, likewise starting on only the primary card works
fine.  Only when starting on both cards will the primary console be
restored incorrectly.  This didn't happen before updating.


Mark.

On Mon, 14 Feb 2005, Mark Vojkovich wrote:

>
>I just updated on my dual-card system and with the update I see
> a problem restoring the console that I did not see previously.  If
> I startx on the primary card and then quit, the primary card is restored
> correctly.  However, if I startx on both cards and quit, the
> primary card is not restored correctly.  I have a hard time imagining
> how it could be a driver issue since the "nv" driver knows nothing about
> the other cards in the layout (nor should it) and does not change
> its behavior when there is more than one card in the layout.  The
> core server code, on the other hand, does.
>
>Have there been changes to vgahw, RAC, PCI config code, console
> code, etc... that may have caused this regression?
>
>
>   Mark.
>
>
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Problem restoring console?

2005-02-14 Thread Mark Vojkovich
   I just updated on my dual-card system and with the update I see
a problem restoring the console that I did not see previously.  If
I startx on the primary card and then quit, the primary card is restored
correctly.  However, if I startx on both cards and quit, the
primary card is not restored correctly.  I have a hard time imagining
how it could be a driver issue since the "nv" driver knows nothing about
the other cards in the layout (nor should it) and does not change
its behavior when there is more than one card in the layout.  The
core server code, on the other hand, does.

   Have there been changes to vgahw, RAC, PCI config code, console
code, etc... that may have caused this regression?


Mark.

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: Can I ask about developping *with* XFree or just *on* XFree

2005-02-02 Thread Mark Vojkovich
   This is a list about developing XFree86.  While some of us might know
a bit about application development, we're probably not the best people
to ask and most of your questions might be met with silence. If you're
looking for X-Window programming resources, Kenton Lee has a site with
a lot of links:

http://www.rahul.net/kenton/xsites.framed.html

Mark.

On Wed, 2 Feb 2005, Adilson Oliveira wrote:

> Hello.
>
> I'm developing applications using Xlib functions. Can I ask my
> questions about it here, or is this list just about developing XFree itself?
>
> Thanks
>
> Adilson.
> --
> Nullum magnum ingenium sine mixtura dementiae fuit - Seneca
> ___
> Devel mailing list
> Devel@XFree86.Org
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: What happened to the fonts?

2005-01-29 Thread Mark Vojkovich
On Fri, 28 Jan 2005, Marc Aurele La France wrote:

> On Fri, 28 Jan 2005, Mark Vojkovich wrote:
>
> >  I tried tracing twm when it is drawing fonts.  I don't really
> > understand the font paths very well, but it looks like it never
> > even draws anything.  It looks like:
>
> >   _XomGetFontSetFromCharSet returns NULL so
> >   _XomConvert returns -1 so
> >   _XomGenericDrawString doesn't draw anything
>
> >   I walked through the loop in _XomGetFontSetFromCharSet.
> > There are two fontsets (ie. font_set_num = 2).  Both have only
> > one charset.  Neither matches the one passed to _XomGetFontSetFromCharSet.
>
> > (gdb) p font_set[0]
> > $62 = {id = 0, charset_count = 1, charset_list = 0x80910d8,
> >  font_data_count = 1, font_data = 0x80916b0,
> >  font_name = 0x8091ea0 
> > "-adobe-helvetica-bold-r-normal--12-120-75-75-p-70-iso88
> > 59-15", info = 0x0, font = 0x80919c0, side = XlcGL, is_xchar2b = 0,
> >  substitute_num = 1, substitute = 0x80916d0, vpart_initialize = 0,
> >  vmap_num = 0, vmap = 0x80916f0, vrotate_num = 1, vrotate = 0x8091700}
> > (gdb) p font_set[1]
> > $63 = {id = 1, charset_count = 1, charset_list = 0x80910a8,
> >  font_data_count = 1, font_data = 0x8091720,
> >  font_name = 0x8091edd 
> > "-adobe-helvetica-bold-r-normal--12-120-75-75-p-70-iso88
> > 59-15", info = 0x0, font = 0x8091c30, side = XlcGR, is_xchar2b = 0,
> >  substitute_num = 0, substitute = 0x8091740, vpart_initialize = 0,
> >  vmap_num = 0, vmap = 0x8091750, vrotate_num = 0, vrotate = 0x0}
>
> > (gdb) p  *font_set[0].charset_list
> > $72 = 0x8084848
> > (gdb) p  *font_set[1].charset_list
> > $73 = 0x8082698
>
> > (gdb) p charset
> > $64 = 0x8081e80
> > (gdb) p *charset
> > $65 = {name = 0x807f238 "ISO8859-1:GL", xrm_name = 1,
> >  encoding_name = 0x807f108 "ISO8859-1", xrm_encoding_name = 2, side = XlcGL,
> >  char_size = 1, set_size = 94, ct_sequence = 0x807f245 "\e(B",
> >  string_encoding = 1, udc_area = 0x0, udc_area_num = 0, source = CSsrcStd}
> > (gdb)
>
> >  I'm not really sure what to do with this information.
>
> Ummm.  See if changing line 19 of
> /usr/X11R6/lib/X11/locale/iso8859-15/XLC_LOCALE to read ...
>
>   name    ISO8859-1:GL
>
> ... instead of ...
>
>   name    ISO8859-15:GL
>
> ... fixes the problem.  Be mindful of tabs.

   Yes it does.  Twm has fonts again.  So does fvwm2.  Is the
locale file broken or something else that's making a bad assumption?


Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: What happened to the fonts?

2005-01-28 Thread Mark Vojkovich
  I tried tracing twm when it is drawing fonts.  I don't really
understand the font paths very well, but it looks like it never
even draws anything.  It looks like:

   _XomGetFontSetFromCharSet returns NULL so
   _XomConvert returns -1 so
   _XomGenericDrawString doesn't draw anything

   I walked through the loop in _XomGetFontSetFromCharSet.
There are two fontsets (ie. font_set_num = 2).  Both have only
one charset.  Neither matches the one passed to _XomGetFontSetFromCharSet.

(gdb) p font_set[0]
$62 = {id = 0, charset_count = 1, charset_list = 0x80910d8,
  font_data_count = 1, font_data = 0x80916b0,
  font_name = 0x8091ea0 "-adobe-helvetica-bold-r-normal--12-120-75-75-p-70-iso88
59-15", info = 0x0, font = 0x80919c0, side = XlcGL, is_xchar2b = 0,
  substitute_num = 1, substitute = 0x80916d0, vpart_initialize = 0,
  vmap_num = 0, vmap = 0x80916f0, vrotate_num = 1, vrotate = 0x8091700}
(gdb) p font_set[1]
$63 = {id = 1, charset_count = 1, charset_list = 0x80910a8,
  font_data_count = 1, font_data = 0x8091720,
  font_name = 0x8091edd "-adobe-helvetica-bold-r-normal--12-120-75-75-p-70-iso88
59-15", info = 0x0, font = 0x8091c30, side = XlcGR, is_xchar2b = 0,
  substitute_num = 0, substitute = 0x8091740, vpart_initialize = 0,
  vmap_num = 0, vmap = 0x8091750, vrotate_num = 0, vrotate = 0x0}

(gdb) p  *font_set[0].charset_list
$72 = 0x8084848
(gdb) p  *font_set[1].charset_list
$73 = 0x8082698


(gdb) p charset
$64 = 0x8081e80
(gdb) p *charset
$65 = {name = 0x807f238 "ISO8859-1:GL", xrm_name = 1,
  encoding_name = 0x807f108 "ISO8859-1", xrm_encoding_name = 2, side = XlcGL,
  char_size = 1, set_size = 94, ct_sequence = 0x807f245 "\e(B",
  string_encoding = 1, udc_area = 0x0, udc_area_num = 0, source = CSsrcStd}
(gdb)


  I'm not really sure what to do with this information.

Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: What happened to the fonts?

2005-01-27 Thread Mark Vojkovich
On Tue, 25 Jan 2005, Marc Aurele La France wrote:

> Mark (and anyone else, of course),
>
> Please tell me whether the attached patch fixes (your version of) the
> problem.

   No, it does not.

Mark.

___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: build problems in programs/xfs

2005-01-23 Thread Mark Vojkovich
On Sat, 22 Jan 2005, Marc Aurele La France wrote:

> It would seem that you are building with SharedLibFont explicitly set to NO,
> which is the default on a Debian system (see linux.cf).  The attached, which
> I've just committed, should fix this problem.

  I wonder if Thomas's problems are related to the ones I'm seeing.
I just synced up today but the problem I've been seeing is still there.
I don't use a font server though.

  Which library is the one in question?  fontenc? fontconfig?  I can
try replacing it with one from an older build on another machine.


Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: What happened to the fonts?

2005-01-20 Thread Mark Vojkovich
On Thu, 20 Jan 2005, Marc Aurele La France wrote:

> On Thu, 20 Jan 2005, Bukie Mabayoje wrote:
> > Mark Vojkovich wrote:
> >>I synced up and built and now, though the server starts fine,
> >> apps can't get any fonts.  Window managers claim they can't find
> >> fontsets like fixed so menus and such have no text in them.
> >> xfontsel seems to work though.  Anyone know what's going on?
> >> It's like the fonts.alias isn't being read anymore.  I can
> >> see fixed in the fonts.alias and I can see the corresponding
> >> font in the fonts.dir and I can see that the font file exists
> >> and can verify that the server acknowledged that font path.
>
> > I am looking into a similar problem too, when xfs is not running.
>
> I've been able to make the fontset message appear with a simple test case
> (holding down the control key and pressing any mouse button while the pointer
> is in an xterm).

   I don't see that message with xterm.  TWM menus don't work though.
But, if I run TWM remotely from another machine it works.  I think
that implies that it's a problem with the local libraries.  Another
data point: a test app that opens "fixed" seems to work fine, so
it's not the case that aliases are broken in general - just in some
cases.

Bukie, how long have you been seeing this problem?  I updated
yesterday and saw it for the first time.  The last time I updated
was Nov 26, so that gives a pretty large window.

Mark.


>
> I've also tried this with a build where CHANGELOG 264 is backed out and it
> still happens so something else is causing this.  CHANGELOG 264 is
>
> 264. In font handling, avoid potential security issues related to 
> wrap-around
>  of memory allocation requests (Marc La France).
>
> Marc.
>
> +--+---+
> |  Marc Aurele La France   |  work:   1-780-492-9310   |
> |  Computing and Network Services  |  fax:1-780-492-1729   |
> |  352 General Services Building   |  email:  [EMAIL PROTECTED]  |
> |  University of Alberta   +---+
> |  Edmonton, Alberta   |   |
> |  T6G 2H1 | Standard disclaimers apply|
> |  CANADA  |   |
> +--+---+
> XFree86 developer and VP.  ATI driver and X server internals.
> ___
> Devel mailing list
> Devel@XFree86.Org
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


What happened to the fonts?

2005-01-19 Thread Mark Vojkovich
   I synced up and built and now, though the server starts fine,
apps can't get any fonts.  Window managers claim they can't find
fontsets like fixed so menus and such have no text in them.
xfontsel seems to work though.  Anyone know what's going on?
It's like the fonts.alias isn't being read anymore.  I can
see fixed in the fonts.alias and I can see the corresponding
font in the fonts.dir and I can see that the font file exists
and can verify that the server acknowledged that font path.

Mark.
___
Devel mailing list
Devel@XFree86.Org
http://XFree86.Org/mailman/listinfo/devel


Re: DGA and tearing effects

2004-11-28 Thread Mark Vojkovich
On Sun, 28 Nov 2004 [EMAIL PROTECTED] wrote:

> The problem with this is my project targets older laptops; it's an engine
> management system tuning suite and a lot of these car guys have junk-bin
> laptops sitting in their cars (Pentium class) with a wide array of graphics
> chipsets and displays.  I don't think anyone will be using an accelerated
> glx system; you won't find any NVIDIAs here.  I chose VESA because it seems
> to be the easiest way to get higher than vga resolutions reliably on
> the target hardware... going this route tosses acceleration capabilities
> out the window hence the strive to use direct framebuffer rendering (at
> least when linear framebuffer is available).
>
> It may be passe, but it's the fastest method (and oh so beautiful) I've 
> managed
> to wring out of my p133 development workstation.
>
> Does this glx method provide great results even on Xvesa non-nvidia
> systems?

   I don't have a good survey of OpenGL implementations.
You might ask one of the DRI or Mesa-related lists if vsynced
buffers swaps are very common, or still rare.  And also, what kind
of glDrawPixels performance they get.  I would guess not much attention
has been paid to old pre-AGP machines such as yours though,
so maybe OpenGL is not such a great solution for such old hardware.


Mark.


>
> Also, I was not aware that the flat panels had this vertical retrace
> issue... one of my test machines has a 18" flat panel and it was tearing
> like crazy when I just did a vga_waitretrace() before doing the page
> flip.  However it should be noted, that after switching to Abrashes
> method of polling the display enable bit before performing the flip and
> then waiting for retrace has eliminated all tearing on the flat panel
> display... this was tested in 640x480 800x600 1024x768 and 1280x1024
> the native resolution of the panel.  It has however, caused some tearing on
> my 133 w/matrox on a CRT where before there was none...  this I suspect
> is a matroxism though.
>
> Thanks for the replies, this thread has been prettty informative thus
> far.
>
> Cheers.
>
> On Sat, Nov 27, 2004 at 01:56:44PM -0800, Mark Vojkovich wrote:
> >In my opinion, direct framebuffer rendering is passe.  My
> > recommendation is to render into system memory, use glDrawPixels
> > to copy to a GLXDrawable's back buffer and then use glXSwapBuffers
> > to display the buffer.  At least with NVIDIA's binary drivers
> > this should be faster than direct framebuffer rendering because
> > rendering to system memory is cached, and glDrawPixels uses DMA,
> > and if the __GL_SYNC_TO_VBLANK environment variable is set,
> > glXSwapBuffers will sync to vblank regardless of whether you
> > are rendering to full screen or a smaller window.
> >
> >This would be the most portable method, and I would hope
> > all OpenGL implementations have a way to do vblank-synced
> > swaps by now.
> >
> > Mark.
> >
> > On Sat, 27 Nov 2004 [EMAIL PROTECTED] wrote:
> >
> > > Is XFree86 w/DGA the only way to achieve high performance direct
> > > framebuffer rendering (page flipped) without any negative artifacts on
> > > linux?
> > >
> > > I'm using svgalib w/vesa right now for a strictly 8bpp project and the
> > > only way I've managed to get fast (full) frame rates without tearing or
> > > flickering is page flipping when linear frame buffer is supported.
> > > However, it took some vga hacks to reliably sync before the flip (just
> > > waiting for retrace doesnt work, I duplicated the Abrash-documented method
> > > reading the vga status port and waiting til it is mid-scan (display 
> > > enable)
> > > to set the start address then waiting for retrace to ensure the new offset
> > > gets a draw in).
> > >
> > > It's working fine on all my test machines which it would tear on before I
> > > implemented the Abrash method (previously I just waited for vertical
> > > retrace then flipped the page), but now it tears on the only box the old
> > > approach worked flawlessly on :(  It looks like my matrox millenium II
> > > notices when you change the display start address mid-scan and
> > > demonstrates this with a regular (every frame) tear.  My Abrash books say
> > > to set the address while the display is enabled as it's supposed to have
> > > latched onto the last start address for the duration of the scan... grr.
> > >
> > > Any suggestions would be much appreciated, I know this is a bit of a
> > > thread-hijack but it's somewhat related to Eugene's question.

Re: DGA and tearing effects

2004-11-27 Thread Mark Vojkovich
On Sat, 27 Nov 2004, James Wright wrote:

>My understanding is that flat panels do not "scan" a screen as a CRT does, 
> so there is no vertical blank period to perform a page flip. They do have a 
> refresh rate of usually around 60Hz, but this is simply how often the pixels 
> are able to switch states, or how often the display is refreshed from the 
> panel's backbuffer. In a DGA mode if you try waiting for a vblank with a flat 
> panel, then the page flip is performed immediately, instead of waiting for 
> anything. The panels own circuits decide when to change the display anyway, 
> so anything you try to do yourself is moot. If I am incorrect, then I 
> apologise...
>


That's sort of the correct idea for when using the panel's VGA
interface.  For the VGA interface, the panel is not necessarily
refreshing at the rate coming through the VGA connector.  For DVI,
the panel is refreshing at the rate coming through the DVI connector,
but this doesn't necessarily correspond to the timings programmed
in the VGA registers.  At least on the hardware I've worked on,
the VGA timings merely correspond to the input to the flat panel
scaler in the graphics chip, not the output going to the panel.


Mark.

>
>
> On Sat, 27 Nov 2004 13:37:01 -0500
> Michel Dänzer <[EMAIL PROTECTED]> wrote:
>
> > On Sat, 2004-11-27 at 16:40 +, James Wright wrote:
> > >About a year ago I was using DGA for my games graphics library. I
> > > was told by various people that using DGA was not the way to go. At
> > > first I thought this was nonsense, as you can't get vsync using the
> > > more standard XPutImage method (and get tearing). However, all changed
> > > when I bought a laptop with TFT screen. Problem is, there is no vsync
> > > on the new LCD/TFT monitors!
> >
> > There is in my experience, at least if you use the panel's native mode.
> >
> >
> > --
> > Earthling Michel Dänzer  | Debian (powerpc), X and DRI developer
> > Libre software enthusiast|   http://svcs.affero.net/rm.php?r=daenzer
> >
> > ___
> > Devel mailing list
> > [EMAIL PROTECTED]
> > http://XFree86.Org/mailman/listinfo/devel
> >
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DGA and tearing effects

2004-11-27 Thread Mark Vojkovich
   In my opinion, direct framebuffer rendering is passe.  My
recommendation is to render into system memory, use glDrawPixels
to copy to a GLXDrawable's back buffer and then use glXSwapBuffers
to display the buffer.  At least with NVIDIA's binary drivers
this should be faster than direct framebuffer rendering because
rendering to system memory is cached, and glDrawPixels uses DMA,
and if the __GL_SYNC_TO_VBLANK environment variable is set,
glXSwapBuffers will sync to vblank regardless of whether you
are rendering to full screen or a smaller window.

   This would be the most portable method, and I would hope
all OpenGL implementations have a way to do vblank-synced
swaps by now.
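
   For the curious, here is a compile-and-run sketch of that approach
(cc demo.c -lGL -lX11; the window size and the dummy software renderer
are arbitrary, and error checking is omitted):

      #include <X11/Xlib.h>
      #include <GL/gl.h>
      #include <GL/glx.h>
      #include <stdlib.h>

      #define W 640
      #define H 480

      int main(void)
      {
          Display *dpy = XOpenDisplay(NULL);
          int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
          XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
          XSetWindowAttributes swa;
          Window win;
          GLXContext ctx;
          unsigned char *frame;
          int i, p;

          swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                         vi->visual, AllocNone);
          swa.border_pixel = 0;
          win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, W, H, 0,
                              vi->depth, InputOutput, vi->visual,
                              CWColormap | CWBorderPixel, &swa);
          XMapWindow(dpy, win);
          ctx = glXCreateContext(dpy, vi, NULL, True);
          glXMakeCurrent(dpy, win, ctx);

          frame = malloc(W * H * 4);    /* system memory "framebuffer" */

          for (i = 0; i < 300; i++) {
              for (p = 0; p < W * H; p++) {   /* stand-in for real rendering */
                  frame[4*p+0] = (p + (i << 2)) & 0xff;
                  frame[4*p+1] = (p >> 3) & 0xff;
                  frame[4*p+2] = (i << 3) & 0xff;
                  frame[4*p+3] = 0xff;
              }
              glRasterPos2f(-1.0f, -1.0f);  /* bottom-left of the window */
              glDrawPixels(W, H, GL_RGBA, GL_UNSIGNED_BYTE, frame);
              glXSwapBuffers(dpy, win);     /* vblank-synced if the driver
                                               honors __GL_SYNC_TO_VBLANK */
          }
          return 0;
      }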

Mark.

On Sat, 27 Nov 2004 [EMAIL PROTECTED] wrote:

> Is XFree86 w/DGA the only way to achieve high performance direct
> framebuffer rendering (page flipped) without any negative artifacts on
> linux?
>
> I'm using svgalib w/vesa right now for a strictly 8bpp project and the
> only way I've managed to get fast (full) frame rates without tearing or
> flickering is page flipping when linear frame buffer is supported.
> However, it took some vga hacks to reliably sync before the flip (just
> waiting for retrace doesnt work, I duplicated the Abrash-documented method
> reading the vga status port and waiting til it is mid-scan (display enable)
> to set the start address then waiting for retrace to ensure the new offset
> gets a draw in).
>
> It's working fine on all my test machines which it would tear on before I
> implemented the Abrash method (previously I just waited for vertical
> retrace then flipped the page), but now it tears on the only box the old
> approach worked flawlessly on :(  It looks like my matrox millenium II
> notices when you change the display start address mid-scan and
> demonstrates this with a regular (every frame) tear.  My Abrash books say
> to set the address while the display is enabled as it's supposed to have
> latched onto the last start address for the duration of the scan... grr.
>
> Any suggestions would be much appreciated, I know this is a bit of a
> thread-hijack but it's somewhat related to Eugene's question.  I've been
> considering going down the DGA route and adding X to the mix due to
> the problems I've been encountering...  I'm just not sure it will solve
> all the problems, and will probably add new ones.
>
> Thanks in advance for any input, I'm sure many of you have had to deal
> with similar issues.
>
>
> On Thu, Nov 25, 2004 at 11:38:17AM -0800, Mark Vojkovich wrote:
> >If you want tearless rendering you should be flipping.  Ie. render
> > to a non displayed portion of the framebuffer, then call XDGASetViewport
> > to display it after the copy is finished.  See the DGA test apps at
> > http://www.xfree86.org/~mvojkovi/, specifically texture.tar.gz.
> > If the texture and skull demos aren't tearless, there is a bug in the
> > DGA driver support for your card.
> >
> >
> > Mark.
> >
> > On Thu, 25 Nov 2004, Eugene Farinas wrote:
> >
> > > Hi guys! We're developing a DGA program that renders full screen at 
> > > 1280x1024 16 bpp 15fps the video image read from a sony camera, but we're 
> > > experiencing tearing artifacts during rendering. This is a part of the 
> > > code that copies the data to the frame buffer:
> > >
> > > void CAM_APP::DisplayImage_NoPartial(unsigned char* offset)
> > > {
> > >   register int j;
> > >   register unsigned long caddr = (unsigned long) offset;
> > >   for(j=0; j < npixels; j++, caddr += 2) {  /* loop header garbled in
> > >      the archive; the bound ("npixels") and increment are guesses */
> > >   *( (unsigned short*) caddr ) = sTable[g_pBuf[j]];
> > >   }
> > > }
> > >
> > > Where the offset is the start of the buffer destination, and g_pBuf is 
> > > the data captured from the camera. we've tried copying the data during 
> > > vertical resync but we're still experiencing tearing on the image. We're 
> > > using an AMD gx2 geode processor w/ 128 mb ram and 8mb vram. I would like 
> > > to ask your help in removing the tearing artifacts. Thanks.
> > >
> > > ---
> > > Outgoing mail is certified Virus Free.
> > > Checked by AVG anti-virus system (http://www.grisoft.com).
> > > Version: 6.0.797 / Virus Database: 541 - Release Date: 11/15/2004
> > >
> > >
> > > ___
> > > Devel mailing list
> > > [EMAIL PROTECTED]
> > > http://XFree86.Org/mailman/listinfo/devel
> > >
> > ___
> > Devel mailing list
> > [EMAIL PROTECTED]
> > http://XFree86.Org/mailman/listinfo/devel
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Maximizing DVI Flatpanel resolution in nVidia nv

2004-11-25 Thread Mark Vojkovich
  The nv driver contains no code to program the DVI interface.  The
only reason why it works at all with DVI is because the BIOS setup
the timings for the text mode.  Subsequently, the nv driver is not
able to run in any mode other than the one the BIOS setup.  If the
BIOS setup the text mode to 1024x768, the nv driver will not be able
to use a higher mode.

  Many high resolution DVI modes are only possible if complicated
reduced blanking interval timings are used, subsequently, they are
often omitted from the BIOS for lack of space.  That is usually
the reason why the BIOS will setup a mode lower than the native
panel resolution.  Sometimes video BIOSes do support reduced
blanking interval calculations though.  You might want to contact
your card vendor to see if they have an alternative BIOS.

Mark.

On Fri, 26 Nov 2004, Antonino A. Daplas wrote:

> Hi all,
>
> Looking at the xfree86 source of nv, it seems that the maximum resolution
> achieved when the input type is DDI is set by the BIOS (fpWidth/fpHeight).
>
> Is there a way to bypass this limitation such as a flatpanel display capable
> of 1600x1200, but only 1024x768 is achievable.  IOW, what registers
> need to be programmed to set the panel size to a higher value.
>
> Hardware:
> GeForce4 Ti 4600
> Display: Manufacturer: IVM Model: 4800: Name iiyama
>
> Any help will be greatly appreciated.
>
> Tony
>
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DGA and tearing effects

2004-11-25 Thread Mark Vojkovich
  Some OpenGL drivers can do vblank synced flips (NVIDIA's can).
glDrawPixels + glXSwapBuffers should be faster than a DGA implementation.
Even XPutImage should be faster than a DGA implementation, but like
you've pointed out, there's no way to sync XPutImage to vblank.
DGA is frequently broken in drivers so it's good to use something
else.

Mark.

On Thu, 25 Nov 2004, James Wright wrote:

>Isn't DGA mode being phased out? I've been using XPutImage and the XVidMode 
> extension to provide fullscreen instead. Only problem being you have no 
> control over when the image is actually copied to the display, so tearing 
> results, unless someone else here would like to enlighten me...
>
>
>
> On Thu, 25 Nov 2004 11:38:17 -0800 (PST)
> Mark Vojkovich <[EMAIL PROTECTED]> wrote:
>
> >If you want tearless rendering you should be flipping.  Ie. render
> > to a non displayed portion of the framebuffer, then call XDGASetViewport
> > to display it after the copy is finished.  See the DGA test apps at
> > http://www.xfree86.org/~mvojkovi/, specifically texture.tar.gz.
> > If the texture and skull demos aren't tearless, there is a bug in the
> > DGA driver support for your card.
> >
> >
> > Mark.
> >
> > On Thu, 25 Nov 2004, Eugene Farinas wrote:
> >
> > > Hi guys! We're developing a DGA program that renders full screen at 
> > > 1280x1024 16 bpp 15fps the video image read from a sony camera, but we're 
> > > experiencing tearing artifacts during rendering. This is a part of the 
> > > code that copies the data to the frame buffer:
> > >
> > > void CAM_APP::DisplayImage_NoPartial(unsigned char* offset)
> > > {
> > >   register int j;
> > >   register unsigned long caddr = (unsigned long) offset;
> > >   for(j=0; j < npixels; j++, caddr += 2) {  /* loop header garbled in
> > >      the archive; the bound ("npixels") and increment are guesses */
> > >   *( (unsigned short*) caddr ) = sTable[g_pBuf[j]];
> > >   }
> > > }
> > >
> > > Where the offset is the start of the buffer destination, and g_pBuf is 
> > > the data captured from the camera. we've tried copying the data during 
> > > vertical resync but we're still experiencing tearing on the image. We're 
> > > using an AMD gx2 geode processor w/ 128 mb ram and 8mb vram. I would like 
> > > to ask your help in removing the tearing artifacts. Thanks.
> > >
> > > ---
> > > Outgoing mail is certified Virus Free.
> > > Checked by AVG anti-virus system (http://www.grisoft.com).
> > > Version: 6.0.797 / Virus Database: 541 - Release Date: 11/15/2004
> > >
> > >
> > > ___
> > > Devel mailing list
> > > [EMAIL PROTECTED]
> > > http://XFree86.Org/mailman/listinfo/devel
> > >
> > ___
> > Devel mailing list
> > [EMAIL PROTECTED]
> > http://XFree86.Org/mailman/listinfo/devel
> >
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DGA and tearing effects

2004-11-25 Thread Mark Vojkovich
   If you want tearless rendering you should be flipping.  Ie. render
to a non displayed portion of the framebuffer, then call XDGASetViewport
to display it after the copy is finished.  See the DGA test apps at
http://www.xfree86.org/~mvojkovi/, specifically texture.tar.gz.
If the texture and skull demos aren't tearless, there is a bug in the
DGA driver support for your card.
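
   For reference, a rough sketch of the DGA 2.0 flip loop those demos
use (link with -lXxf86dga; error checks omitted, and real code would
also call XDGASelectInput and pick a mode by size instead of modes[0]):

      #include <X11/Xlib.h>
      #include <X11/extensions/xf86dga.h>

      void flip_loop(Display *dpy)
      {
          int screen = DefaultScreen(dpy);
          int nmodes, page = 0;
          XDGAMode *modes = XDGAQueryModes(dpy, screen, &nmodes);
          XDGADevice *dev = XDGASetMode(dpy, screen, modes[0].num);
          int pitch = modes[0].bytesPerScanline;
          int page_h = modes[0].viewportHeight;

          XDGAOpenFramebuffer(dpy, screen);

          for (;;) {
              /* render into the page that is NOT being displayed */
              unsigned char *back = (unsigned char *)dev->data
                                  + page * page_h * pitch;
              /* ... draw the next frame into 'back' here ... */

              /* wait until any previous flip has taken effect */
              while (XDGAGetViewportStatus(dpy, screen))
                  ;
              XDGASetViewport(dpy, screen, 0, page * page_h,
                              XDGAFlipRetrace);
              page ^= 1;    /* the other page becomes the back buffer */
          }
      }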


Mark.

On Thu, 25 Nov 2004, Eugene Farinas wrote:

> Hi guys! We're developing a DGA program that renders full screen at 1280x1024 
> 16 bpp 15fps the video image read from a sony camera, but we're experiencing 
> tearing artifacts during rendering. This is a part of the code that copies 
> the data to the frame buffer:
>
> void CAM_APP::DisplayImage_NoPartial(unsigned char* offset)
> {
>   register int j;
>   register unsigned long caddr = (unsigned long) offset;
>   for(j=0; j < npixels; j++, caddr += 2) {  /* loop header garbled in
>      the archive; the bound ("npixels") and increment are guesses */
>   *( (unsigned short*) caddr ) = sTable[g_pBuf[j]];
>   }
> }
>
> Where the offset is the start of the buffer destination, and g_pBuf is the 
> data captured from the camera. we've tried copying the data during vertical 
> resync but we're still experiencing tearing on the image. We're using an AMD 
> gx2 geode processor w/ 128 mb ram and 8mb vram. I would like to ask your help 
> in removing the tearing artifacts. Thanks.
>
> ---
> Outgoing mail is certified Virus Free.
> Checked by AVG anti-virus system (http://www.grisoft.com).
> Version: 6.0.797 / Virus Database: 541 - Release Date: 11/15/2004
>
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Xv Overlay snapshot

2004-11-16 Thread Mark Vojkovich
On Tue, 16 Nov 2004, Dorin Lazar wrote:

>   Hello everyone,
>   I am trying to obtain a snapshot of the output of an application that draws
> using hardware accelerated Xv. The application is a video player and uses YUV
> format to display and SDL - it draws using SDL_DisplayYUVOverlay function. I
> want to grab a certain snapshot and place it in another window. I tried to
> use XvGetStill but all it gives me is black output. What should be done? Is
> there any way to get that picture from the overlay to X in a window?

   There is no mechanism to get that data from the overlay.


Mark.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: finding all windows belonging to an application

2004-11-09 Thread Mark Vojkovich
   All the resources allocated by a single client will have the
same XID prefix.  Look at the output of "xwininfo -children -root"
and you'll see what I mean.   What you probably want to do is search
from the root and find all the top-level windows with your
client's prefix.
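
   A sketch of that search; note that the 20-bit prefix mask below is
an assumption about how the server hands out resource-id ranges, not
something the protocol guarantees:

      #include <X11/Xlib.h>
      #include <stdio.h>

      /* assumed: IDs from one client share everything above bit 19 */
      #define CLIENT_PREFIX(xid)  ((xid) & ~0xfffffUL)

      void list_same_client_windows(Display *dpy, Window known)
      {
          Window root = DefaultRootWindow(dpy), parent, *kids;
          unsigned int nkids, i;

          if (!XQueryTree(dpy, root, &root, &parent, &kids, &nkids))
              return;
          for (i = 0; i < nkids; i++)
              if (CLIENT_PREFIX(kids[i]) == CLIENT_PREFIX(known))
                  printf("0x%lx is from the same client\n",
                         (unsigned long)kids[i]);
          if (kids)
              XFree(kids);
      }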


Mark.


On Mon, 8 Nov 2004, Grant Wallace wrote:

> Hi,
>   I'm working on modifications to VNC server to share
> individual applications. One thing I'm wondering about
> is how do I find all windows which belong to the same
> application. For instance I currently am able to share
> the application's main window by using xwininfo and
> getting the main windows ID number. Then I just
> traverse the windows tree searching for that id.
> However if the application later opens a dialog box or
> a menu window I'd like to detect that while traversing
> the windows tree and share it also. I haven't yet
> found any field within the window data structure that
> identifies which application a window belongs to.
> What's the best way to find all these related windows
> (related by application rather than window hierarchy)?
>
> Thanks,
> Grant.
>
>
>
>
>
> __
> Do you Yahoo!?
> Check out the new Yahoo! Front Page.
> www.yahoo.com
>
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Xlib: extension "GLX" missing on display "simon4:50.0" while printing

2004-11-05 Thread Mark Vojkovich
   Your app apparently requires OpenGL support.  If XFree86 is
your X-server, you need to add:

  Load "glx"

to the Section "Module" of the XF86Config file.

Mark.

On Fri, 5 Nov 2004, Simon Toedt wrote:

> Hello,
>
> After adding print support to our application I am getting the
> following warning from our application before it crashes
>
> (XPSERVERLIST=simon:50 ./wasp_core)
> probing
> loading 3D environment
> loading plugins...
>flash. . . x
> print
> Xlib:  extension "GLX" missing on display "simon4:50.0".
> ./wasp_core: Error: couldn't get an RGB, Double-buffered visual.
>
> every time I run the application. What does this mean and what must I
> do to get rid of it. I am using the GISWxprintglue package on Solaris
> 8 with all the newest set of patches from Sunsolve.
>
> Simon
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: How can I get rid of the cursor for the Xserver?

2004-11-04 Thread Mark Vojkovich
On Thu, 4 Nov 2004, Barry Scott wrote:

> I need to get rid of the cursor from the Xserver.
>
> There are a number of X client programs on screen and
> I cannot modify all of them to hide the cursor. What I want
> is a way to globally hide the cursor.
>
> Is there a configuration option to get rid of the cursor?
> Can I change the cursor to a total invisible one?
>
> Barry

   There is no way to globally hide the cursor aside from hacking
the X-server.  In xc/programs/Xserver/dix/cursor.c have
CheckForEmptyMask() always set bits->emptyMask = TRUE.  This
will force all cursors to be transparent.
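
   A sketch of that hack (check the function's actual signature against
your tree before pasting):

      /* xc/programs/Xserver/dix/cursor.c */
      static void
      CheckForEmptyMask(CursorBitsPtr bits)
      {
          /* claim every mask is empty: all cursors become transparent */
          bits->emptyMask = TRUE;
      }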


Mark.


___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Added Pseudocolor Visuals for XFree86?

2004-11-02 Thread Mark Vojkovich
On Mon, 1 Nov 2004, Bussoletti, John E wrote:

>
> At Boeing we have a number of graphics applications that have been
> developed in-house, originally for various SGI platforms.  These
> applications are used for engineering visualization  They work well on
> the native hardware and even display well across the network using third
> party applications under Windows like Hummingbird's ExCeed 3D.  However,
> under Linux, the fail to work properly, either natively or via remote
> display with the original SGI hardware acting as server, due to
> omissions in the available Pseudocolor Visuals.

   Most PC graphics hardware does not support overlays and therefore
doesn't really support simultaneous PseudoColor and TrueColor visuals.
Some PC hardware can, for instance, some Matrox cards and some NVIDIA
Quadro cards and some others.  You'll need to research that pretty
carefully because while some hardware may support it, the drivers
may not.  Hummingbird does PseudoColor emulation in software, probably
by rendering PseudoColor windows offscreen and then translating
into TrueColor windows during PseudoColor window updates and palette
changes.  XFree86 doesn't support this because nobody has cared enough
about it to write support for it.  I don't expect that to change.


>
> Examination of the output of xdpyinfo in the SGI machines shows that the
> SGI X drivers support Pseudocolor visuals at both 8 bit planes and 12
> bit planes.  Similar output under Linux shows support for Pseudocolor
> Visuals at only 8 bit planes.  These applications were built to take
> advantage of the 12 bit plane Pseudocolor Visual under the SGI X
> drivers.

No PC hardware supports palettes with more than 2^8 entries.
A 2^12 entry palette could be implemented only by emulation (rendering
offscreen and then translating to TrueColor windows).

>
> To allow use of these graphics applications within a Linux environment,
> we're contemplating a port of the applications to Directcolor Visuals.
> But prior to initiating such an activity, I've been asked to ask whether
> new developments or releases of the XFree86 X drivers might be in the
> pipeline for future release that might offer a wider variety of
> Pseudocolor Visuals.  Hence this note.

   Porting to depth 24 DirectColor will increase the number of
cards that your application will run on.  Most XFree86 drivers
support simultaneous depth 24 DirectColor and TrueColor visuals,
although there will be color flashing when changing window focus
because PC hardware only supports a single hardware palette.

But if your application requires 12 bit plane palettes, I don't
see how depth 24 (8 plane palettes) will help your situation.

>
> Is there any support for 12 bit plane Pseudocolor Visuals within at
> least one video card and the XFree86 drivers?  Will there be support for
> such features in the future? If so, is there an anticipated release
> date?

   No PC hardware supports 12 bit plane PseudoColor.  No drivers
emulate this in software.  I know of no plans to implement this
and expect adding such a feature to be unlikely.

  My recommendation is that you get away from PseudoColor entirely.
Most people stuck in your position have legacy apps for which they
do not have source code and have no choice.  I recommend doing everything
in TrueColor, and depending on the application, you might want to
consider using OpenGL.   This problem will likely get worse for you
in the future.  Some hardware supports 8 bit PseudoColor overlays now
but I expect this to go the way of the dodo.  My impression is that
a future Microsoft operating system will not support 8 bit PseudoColor
modes nor will it support overlays so eventually these will disappear
from the hardware, leaving emulation as the only solution.


Mark
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Memory leaks when lots of graphical operations? [Qt/X11 3.1.1]

2004-10-15 Thread Mark Vojkovich
On Fri, 15 Oct 2004, Robert Currey wrote:

> > > Is there a way to trace X operations?
> >
> >There's no tracing feature in Xlib.
> >
> xmon?

   That will trace protocol.  Not sure if that's useful for
tracking down a client memory leak though.  I'm assuming what he
wants to do is watch Xmalloc/free calls in Xlib.

Mark.

>
> Rob
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


RE: Memory leaks when lots of graphical operations? [Qt/X11 3.1.1]

2004-10-15 Thread Mark Vojkovich
On Fri, 15 Oct 2004, Sébastien ZINSIUS wrote:

> the app's... I know it does not seem to be the X server that is to blame, but I'm 
> convinced that my problem is linked to the "graphical" side, either in Qt or Xlib 
> functions. I traced event creations and deletions (Qt side; as I have an external 
> thread that stresses the GUI, I have to pass through Qt custom events), and no events seem 
> to be lost on the Qt side (all the event destructors are called). On the other hand, 
> memory use is still growing...
>
> Do you know of any major memory leak that has been found recently 
> in Xlib?

There are no leaks in that stuff.  You've got an app bug or Qt bug
or something.  I recall one app that requested a lot of X events but never
actually took them off the queue, so they just piled up in the client.
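
   The usual fix for that pattern is to actually drain the queue
somewhere in the main loop, e.g.:

      void drain_events(Display *dpy)
      {
          XEvent ev;
          /* take selected events off Xlib's client-side queue so they
             don't accumulate and look like a leak */
          while (XPending(dpy) > 0) {
              XNextEvent(dpy, &ev);
              /* dispatch or deliberately ignore */
          }
      }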

>
> Is there a way to trace X operations?

   There's no tracing feature in Xlib.


Mark.

>
> Cheers,
>
> Sébastien
>
> -Message d'origine-
> De : [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] la part
> de Mark Vojkovich
> Envoyé : jeudi 14 octobre 2004 19:50
> À : [EMAIL PROTECTED]
> Objet : Re: Memory leaks when lots of graphical operations? [Qt/X11
> 3.1.1]
>
>
>It's the app's memory usage that climbs or the server's?
>
>   Mark.
>
> On Thu, 14 Oct 2004, Sébastien ZINSIUS wrote:
>
> > Hello!
> >
> > I'm currently developing a graphical application with Qt/X11 3.1.1. This 
> > application does a lot of operations and I'm doing some robustness tests... I have 
> > a test tool that can stimulate the application and implies a lot of drawing. The target 
> > on which the application has to run is a mobile computer with an x86 compatible 133MHz 
> > CPU, 64 MB RAM and 256MB compactflash.
> >
> > My problem is that when the frequency of updates becomes too high, the machine seems not to 
> > be able to process all the graphical updates (well, it's my feeling...) and the memory 
> > use (of the application) climbs rapidly (according to this frequency).
> >
> > I looked into Qt source code and tried to print some X dependant informations, 
> > e.g. with XPending, and number of events that have to be dealt by X server, seem 
> > to be the source of the problem.
> >
> > I tried this test on a faster machine (P4 2GHz 512MB) and the problem also occurs, but 
> > only under very, very stressful conditions (2 threads running with a 1msec period and 
> > producing about 100 operations in each cycle).
> >
> > I thought that memory would be given back after the "stress period", but I made 
> > the same test only on a 10 second period (automatically stopping the stressing 
> > thread), and memory seems to be lost for eternity... (no memory use decrease 
> > followed)
> >
> > Do you have an idea why the memory use climbs? How could I solve this problem?
> >
> > Thanks in anticipation!
> >
> > Cheers,
> >
> > Sébastien
> >
> > ___
> > Devel mailing list
> > [EMAIL PROTECTED]
> > http://XFree86.Org/mailman/listinfo/devel
> >
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Memory leaks when lots of graphical operations? [Qt/X11 3.1.1]

2004-10-14 Thread Mark Vojkovich
   It's the app's memory usage that climbs or the server's?

Mark.

On Thu, 14 Oct 2004, Sébastien ZINSIUS wrote:

> Hello!
>
> I'm currently developing a graphical application with Qt/X11 3.1.1. This application 
> does a lot of operations and I'm doing some robustness tests... I have a test tool 
> that can stimulate the application and implies a lot of drawing. The target on which 
> the application has to run is a mobile computer with an x86 compatible 133MHz CPU, 64 MB 
> RAM and 256MB compactflash.
>
> My problem is that when frequency of update becomes to high, machine seems not to be 
> able to treat all the graphical updates (well, it's my feeling...) and memory use 
> (of application) climbs rapidely (according to this frequency).
>
> I looked into Qt source code and tried to print some X dependant informations, e.g. 
> with XPending, and number of events that have to be dealt by X server, seem to be 
> the source of the problem.
>
> I tried this test on a faster machine (P4 2GHz 512MB) and problem occurs also, but 
> in very very strength conditions (2 thread running with a 1msec period and producing 
> in each cycle about 100 operations).
>
> I thought that memory would be given back after the "stress period", but I made the 
> same test only on a 10 second period (stopping automatically stressing thread), and 
> memory seems to be lost for eternity... (no memory use decrease followed)
>
> Do you have an idea why the memory use climbs? How could I solve this problem?
>
> Thanks in anticipation!
>
> Cheers,
>
> Sébastien
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XAA documentation

2004-08-26 Thread Mark Vojkovich
xc/programs/Xserver/hw/xfree86/xaa/XAA.HOWTO

Mark.

On Thu, 26 Aug 2004, Steven Staton wrote:

> Where is XAA documented?  Google is unaware of it, which is a bad omen.
> Does documentation exist?
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Switching from Xv hardware scaling to X11 output

2004-08-16 Thread Mark Vojkovich
   Don't do the Stop until after you've drawn the non-Xv image.
If the Xv port is an overlay port, drawing the non-Xv image will
replace the Xv image when it overwrites the color key.  If the
port is not an overlay port, Stop doesn't do anything.
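
   Applied to the shutdown code quoted below, that means painting the
replacement frame before XvStopVideo.  A sketch of the reordering
(draw_x11_frame() is a hypothetical stand-in for cinelerra's
unaccelerated output path):

#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

extern void draw_x11_frame(Display *dpy, Drawable d);  /* app's own painter */

static void switch_to_x11_output(Display *dpy, XvPortID port, Drawable d)
{
    draw_x11_frame(dpy, d);     /* overwrites the color key: overlay replaced */
    XSync(dpy, False);          /* make sure the frame reaches the screen */
    XvStopVideo(dpy, port, d);  /* now stopping causes no visible change */
}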

Mark.

On Mon, 16 Aug 2004, Nathan Kurz wrote:

> Hello ---
>
> I'm trying to fix an application (cinelerra) that changes from using
> Xv hardware scaling to using unaccelerated X11 output in the middle of
> a video stream.  Everything works fine, but there is a "black flash"
> (output window goes dark) for a fraction of a second at the switch.
>
> Current shutdown code looks something like this:
>
>   case BC_YUV422:
>  XvStopVideo(top_level->display, xv_portid, last_pixmap);
>  for(int i = 0; i < ring_buffers; i++) {
>XFree(xv_image[i]);
>  }
>  XShmDetach(top_level->display, &shm_info);
>  XvUngrabPort(top_level->display, xv_portid, CurrentTime);
>
>  shmdt(shm_info.shmaddr);
>  shmctl(shm_info.shmid,
>  IPC_RMID, 0);
>  break;
>
> Output window goes black soon after XvStopVideo call.  Is there
> something I can do immediately before or after this call to avoid
> having a period of time when no image is shown?  Something I can avoid
> doing?  My blind attempts haven't worked.
>
> Thanks!
>
> Nathan Kurz
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XVIDEO windows not drawn when shaped window covers it - why?

2004-07-29 Thread Mark Vojkovich
This window is using the shape extension?  The Xv DDX code
looks at the full rendering cliplist, so it would only cull away
rendering if there were no parts of the window that X could render
to.  I don't see any shortcuts in the core code that would be
causing incorrect culling for shaped window occlusion, and haven't
experienced a problem like that myself.  A quick test with "xvtest"
and xeyes shows no problem with the NVIDIA drivers.

Perhaps Xine itself is making some decisions based on
occlusion?  You should verify that it works with a simple
Xv app like http://www.xfree86.org/~mvojkovi/xvtest.tar.gz


Mark.


On Thu, 29 Jul 2004, Barry Scott wrote:

> Environment:
>  XFree86 4.4.0
>  Unichrome r20 patches
>  VIA Epia M1
>  CLE266 graphics
>  Mandrake Linux 9.2
>  Kernel 2.4.25 + Epia patches
>
> We have Xine playing a movie in a window; this works fine.
> Then we create another shaped window that is the exact size of the
> Xine window, but has holes in it, and place it over the top of the Xine
> window. The movie image stops being drawn.
> If we offset the second window by one pixel, the movie is seen through the
> holes in the shaped window.
>
> What is stopping the movie being drawn when the Xine window is covered?
> How do we work around this problem? Which code made the decision to stop
> updating the Xine window?
>
> Barry
>
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: cursor glint when calling XGetImage

2004-07-27 Thread Mark Vojkovich
   If a software cursor is being used, it will be removed before XGetImage
copies that part of the screen.  The only way to avoid that is to make
sure a hardware cursor is being used.  Nearly all drivers support
the traditional 2-color X11 cursors if the cursor is smaller than a certain
size (usually 32x32 or 64x64).  But a lot of hardware does not support
the alpha-blended cursors available in newer servers.  I'm guessing
that your RedHat9 setup is using alpha-blended cursors and they are
all in software (implemented as a software sprite).

   You can force cursors to the traditional X11 cursors by setting
the XCURSOR_CORE environment variable to 1.
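
   If you can't change the user's environment, the variable can also be
set from inside the program; a minimal sketch (setting it before
XOpenDisplay() is the safe choice, since the cursor library consults it
for the connection):

#include <stdlib.h>
#include <X11/Xlib.h>

int main(void)
{
    setenv("XCURSOR_CORE", "1", 1);     /* force traditional 2-color cursors */
    Display *dpy = XOpenDisplay(NULL);
    /* ... normal application code ... */
    if (dpy)
        XCloseDisplay(dpy);
    return 0;
}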


Mark.



On Tue, 27 Jul 2004, wallace wrote:

> Hi,
>
>   When I call XGetImage, if the cursor is in the region I want to capture, the cursor 
> will glint once. How can I make the cursor not glint? I only see this problem on 
> redhat9; any advice is welcome.
>
>
> Thanks
> Wallace
>
> [EMAIL PROTECTED]
> 2004-07-27
>
>
>
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


DPMS behavior change?

2004-07-18 Thread Mark Vojkovich
   I'm not enabling DPMS, but DPMS is being used anyhow.  This
changed somewhat recently.  Was this "on by default" behavior
change intentional?


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


DMX has broken binary compatibility

2004-07-17 Thread Mark Vojkovich
   DMX unconditionally changed MAXFORMATS in misc.h, which modified
the ScreenRec and broke binary compatibility.  No third-party drivers
will work with XFree86 after the DMX integration.  I think it was
a mistake to unconditionally break binary compatibility in this way.
DMX should be a build option, off by default, so that third-party
drivers will still work.

Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


RE: libextmod.a does not resolve symbols in XFree86

2004-07-14 Thread Mark Vojkovich
On Wed, 14 Jul 2004, Michael Boccara wrote:

> > > Functions defined in XFree86 are not resolved in libextmod.a
> > when referenced
> > > as extern.
> > > Why ?
> > > Is there a way to help the symbol resolution ?
> > >
> >This is a problem you are seeing with the existing code or only
> > after you modified something?   XFree86 modules can only resolve
> > symbols exported by the core server.  XFree86 modules do not link
> > to external libraries.
> >
>
> Interesting.
> I did modify something. I am actually developing my own X11 extension, in
> the extmod module. The base XFree86 code doesn't have any issue.
> The symbol I am trying to resolve is defined in libdix.a
> (xc/programs/Xserver/dix), which is statically linked to XFree86.
> How does XFree86 export symbols explicitly?

  DIX symbols are exported in xc/programs/Xserver/hw/xfree86/loader/dixsym.c
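
   For context, the loader resolves a module's references by name against
tables like the one in that file.  A minimal, self-contained illustration
of such a name-to-address table (the real dixsym.c builds its table with
macros; the function name below is hypothetical):

#include <stdio.h>
#include <string.h>

typedef struct { const char *name; void *addr; } Symbol;

static int MyNewDixFunction(void) { return 42; }    /* hypothetical export */

static Symbol dixLookupTab[] = {
    { "MyNewDixFunction", (void *)MyNewDixFunction },
    { NULL, NULL }                                  /* terminator */
};

/* Look a name up the way a module loader would. */
static void *LookupSymbol(const char *name)
{
    for (Symbol *s = dixLookupTab; s->name; s++)
        if (strcmp(name, s->name) == 0)
            return s->addr;
    return NULL;
}

int main(void)
{
    int (*fn)(void) = (int (*)(void))LookupSymbol("MyNewDixFunction");
    printf("%d\n", fn ? fn() : -1);
    return 0;
}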


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: libextmod.a does not resolve symbols in XFree86

2004-07-14 Thread Mark Vojkovich
On Wed, 14 Jul 2004, Michael Boccara wrote:

> Functions defined in XFree86 are not resolved in libextmod.a when referenced
> as extern.
> Why ?
> Is there a way to help the symbol resolution ?
>
> Thanks,
>
> Michael Boccara

   This is a problem you are seeing with the existing code or only
after you modified something?   XFree86 modules can only resolve
symbols exported by the core server.  XFree86 modules do not link
to external libraries.


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: How do I uniquely identify a window ?(other than window id)

2004-07-01 Thread Mark Vojkovich
  The Window ID is the only unique identifier.  A window is not required
to have properties, and some may have none.  A window manager, if running,
may add some, but you're not required to have a window manager either.

Mark.

On Thu, 1 Jul 2004, [iso-8859-1] Kala B wrote:

> Hi,
> In X Windows, through which property of a window do I uniquely identify it (other 
> than the window ID)? I am quite new to X Windows, so could somebody help me? Thanks in 
> advance.
>
> As I understand, it is not necessary for X clients to set the WM_NAME property. So, 
> through which property could I uniquely identify a window?
>
> Thanks
> kala
>
>
> Yahoo! India Careers: Over 50,000 jobs online.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XFree 4.4.0 server crash on amd64 while running xsuite

2004-06-25 Thread Mark Vojkovich
On Fri, 25 Jun 2004, Nicolas Joly wrote:

> On Thu, Jun 24, 2004 at 10:13:54AM -0700, Mark Vojkovich wrote:
> >It might be that there is some mismatch in types on amd64.
> > Eg. FB_SHIFT vs FbBits.  It's hard to follow what's going on
> > in fb.h.
>
> Agreed, i'm not comfortable with that piece of code.
>
> But, in my case, FB_SHIFT is defined to 5 and sizeof(FbBits) to 4.

   There is some code in fb.h that suggests that it might have been
expecting FB_SHIFT == 6 for amd64.  Seems like it should have worked
either way though.

   Looks like it walked off the edge of the "FbStip *src" array.
I suspect:

src += srcStride;
   or
src += srcX >> FB_STIP_SHIFT;

is overincrementing.
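
   As an illustration of that class of bug (hypothetical code, not the
actual fb routines): a stride counted in 32-bit words but applied through
a 64-bit pointer advances twice as many bytes per row as intended, the
same kind of mismatch an FB_SHIFT/FbBits disagreement would produce:

#include <stdint.h>

static void copy_rows(uint64_t *dst, const uint64_t *src,
                      int widthBytes, int rows)
{
    int stride = widthBytes >> 2;       /* counted in 32-bit words... */
    for (int r = 0; r < rows; r++) {
        for (int w = 0; w < widthBytes >> 3; w++)
            dst[w] = src[w];            /* copy one row */
        src += stride;                  /* ...applied to uint64_t*: 2x too far */
        dst += stride;
    }
}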

Mark.

>
> > On Thu, 24 Jun 2004, Nicolas Joly wrote:
> >
> > > On Thu, Jun 24, 2004 at 07:56:53AM -0400, David Dawes wrote:
> > > > On Fri, Jun 18, 2004 at 02:55:17PM +0200, Nicolas Joly wrote:
> > > > >Hi,
> > > > >
> > > > >I just got a XFree 4.4.0 server crash, on my amd64 workstation while
> > > > >running XFree xsuite.
> > > >
> > > > Try running the XFree86 server from within gdb and see what the stack trace
> > > > reports when it crashes.
> > >
> > > Program received signal SIGSEGV, Segmentation fault.
> > > 0x006e939b in fbBltOne ()
> > > (gdb) bt
> > > #0  0x006e939b in fbBltOne ()
> > > #1  0x006f1d65 in fbPutXYImage ()
> > > #2  0x006f1985 in fbPutImage ()
> > > #3  0x0059790c in XAAPutImagePixmap ()
> > > #4  0x006ad91c in ProcPutImage ()
> > > #5  0x006aa40a in Dispatch ()
> > > #6  0x006bbc2a in main ()
> > > #7  0x00405568 in ___start ()
> > >
> > > --
> > > Nicolas Joly
> > >
> > > Biological Software and Databanks.
> > > Institut Pasteur, Paris.
> > > ___
> > > Devel mailing list
> > > [EMAIL PROTECTED]
> > > http://XFree86.Org/mailman/listinfo/devel
> > >
> > ___
> > Devel mailing list
> > [EMAIL PROTECTED]
> > http://XFree86.Org/mailman/listinfo/devel
>
> --
> Nicolas Joly
>
> Biological Software and Databanks.
> Institut Pasteur, Paris.
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XFree 4.4.0 server crash on amd64 while running xsuite

2004-06-24 Thread Mark Vojkovich
   It might be that there is some mismatch in types on amd64.
Eg. FB_SHIFT vs FbBits.  It's hard to follow what's going on
in fb.h.

Mark.

On Thu, 24 Jun 2004, Nicolas Joly wrote:

> On Thu, Jun 24, 2004 at 07:56:53AM -0400, David Dawes wrote:
> > On Fri, Jun 18, 2004 at 02:55:17PM +0200, Nicolas Joly wrote:
> > >Hi,
> > >
> > >I just got a XFree 4.4.0 server crash, on my amd64 workstation while
> > >running XFree xsuite.
> >
> > Try running the XFree86 server from within gdb and see what the stack trace
> > reports when it crashes.
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x006e939b in fbBltOne ()
> (gdb) bt
> #0  0x006e939b in fbBltOne ()
> #1  0x006f1d65 in fbPutXYImage ()
> #2  0x006f1985 in fbPutImage ()
> #3  0x0059790c in XAAPutImagePixmap ()
> #4  0x006ad91c in ProcPutImage ()
> #5  0x006aa40a in Dispatch ()
> #6  0x006bbc2a in main ()
> #7  0x00405568 in ___start ()
>
> --
> Nicolas Joly
>
> Biological Software and Databanks.
> Institut Pasteur, Paris.
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: about InputOnly & InputOutput Windows

2004-06-16 Thread Mark Vojkovich
Nope.  The class is set at window creation time.  I can't think
of a compelling reason to want to change the class of a window that
has already been created.
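
   For reference, the class is just an argument to XCreateWindow and is
fixed for the window's lifetime; the usual workaround is to create a
second window of the other class.  A minimal sketch:

#include <X11/Xlib.h>

static Window make_input_only(Display *dpy, Window parent,
                              int x, int y, unsigned w, unsigned h)
{
    /* depth must be 0 and border width 0 for InputOnly windows */
    return XCreateWindow(dpy, parent, x, y, w, h, 0 /* border */,
                         0 /* depth */, InputOnly, CopyFromParent,
                         0, NULL);
}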

Mark.

On Wed, 16 Jun 2004, o o wrote:

> Hi!
>
> I do not know if this is the right place to ask, but I am getting lost. So
> please forgive me if I am wrong.
>
> I would like to know if it is possible to change the class of a window:
> transform an InputOnly window into an InputOutput window. I am asking this
> because I am studying a window manager (wmx-6).
>
> Thanks!
>
> Homan
>
> _
> MSN Search, the search engine that thinks like you!
> http://search.msn.fr/
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: rotate functionality in i8xx driver?

2004-06-07 Thread Mark Vojkovich
On Mon, 7 Jun 2004, Lucas Correia Villa Real wrote:

> On Monday 07 June 2004 08:56, Sebastian Wagner wrote:
> > Is it planned to support Rotate functionality in the i8xx X drivers
> > (especially the i855 / intel extreme graphics 2)? Or is there yet a way
> > to rotate the desktop?
> > Sebastian
>
> You can give a look on Xrandr, a library designed to deal with X Rotate and
> Resize extensions. There's an online manual page here:
> http://www.xfree86.org/current/xrandr.1.html#toc2
>

  XFree86's implementation of RandR never supported rotation.


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Is there any work on supporting portrait mode?

2004-05-20 Thread Mark Vojkovich
On Thu, 20 May 2004, Barry Scott wrote:

> I'm trying to get X to drive a display in portrait mode.
> But the only driver that seems to work is the nVidia code
> and the performance is terrible.
>
> Is there any work to make a fast portrait mode work?

   Not that I know of.

Mark.

>
> I'm especially interested in support on VIA's CLE266
> and the Intel i810. And to make life more fun I need
> to play movies, I'm using xine at the moment.
>
> Barry
>
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
>
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XAA2 namespace?

2004-04-07 Thread Mark Vojkovich
On Wed, 7 Apr 2004, Andrew C Aitchison wrote:

> On Tue, 6 Apr 2004, Mark Vojkovich wrote:
>
> > I saw changes coming to the X world that I didn't like and started
> > moving away from it a while ago.
>
> > Pardon that public reply folks.  I mistakenly replied to the list
> > rather than just to Alan like I intended.
>
> If you don't mind answering anyway...
>
> Are these undesirable changes related to XFree86 or to X in general ?
> If they relate to X in general, what are they ?
>

   It started when Keith and Jim decided to pressure the XFree86
project on behalf of Linux distributions who felt that the XFree86
project wasn't acting in line with their business plans.

   There are companies who make money from bundling up software
that they didn't write, yet don't feel that what they've gotten
will allow them to compete with Microsoft the way they'd like.
After seeing the courses of action that those parties have decided
to take, I realized that it would become more and more unlikely
that I'd be happy working in such an environment.  This is my
hobby.  I don't do it for any religious or political reasons.
When it becomes more aggravating than fun, it's time to move to
another hobby.


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XAA2 namespace?

2004-04-06 Thread Mark Vojkovich
  Pardon that public reply folks.  I mistakenly replied to the list
rather than just to Alan like I intended.


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: XAA2 namespace?

2004-04-06 Thread Mark Vojkovich
On Tue, 6 Apr 2004, Alan Hourihane wrote:

> Mark,
>
> What's the current status of the new xaa ??

   Not much has changed.  I've been busy with work and lately
haven't been too motivated to work on it anyhow.  I don't even
work on X stuff at NVIDIA anymore.  I saw changes coming to the
X world that I didn't like and started moving away from it a
while ago.  I work on embedded stuff now.


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Mode Validation question

2004-04-04 Thread Mark Vojkovich
   Let's say you have a DFP with a fixed resolution and therefore
can't run modes with an HDisplay or VDisplay beyond that.  What's
the most efficient way to validate those modes?  I see that
xf86ValidateModes will check pScrn->maxHValue and pScrn->maxVValue
for HTotal and VTotal and it supports maximum virtual desktop sizes,
but I see no facility for limiting HDisplay and VDisplay.
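
   (One approach, assuming no generic facility exists, is to reject
oversized modes from the driver's ValidMode hook passed to
xf86ValidateModes; the panel dimensions below are hypothetical.)

#include "xf86.h"

#define PANEL_WIDTH  1280
#define PANEL_HEIGHT 1024

static ModeStatus
MyValidMode(int scrnIndex, DisplayModePtr mode, Bool verbose, int flags)
{
    if (mode->HDisplay > PANEL_WIDTH || mode->VDisplay > PANEL_HEIGHT)
        return MODE_PANEL;      /* mode exceeds the panel dimensions */
    return MODE_OK;
}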


Mark.
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: [PATCH] Make MAXSCREENS run-time configurable

2004-03-23 Thread Mark Vojkovich
On Tue, 23 Mar 2004, David Dawes wrote:

> On Mon, Mar 22, 2004 at 05:06:28PM -0800, Mark Vojkovich wrote:
> >   This sounds like it will completely break binary compatibility.
>
> It looks like it does change the size of some data structures and
> the data types of some fields.  Whether these changes affect the
> module interfaces is something that needs to be checked in each
> case.
>
> I wonder, though, if we'd be better off going all the way and making
> the number of screens dynamic.
>
> David

   A lot of modules (drivers and extensions) do stuff like:

static int shmPixFormat[MAXSCREENS];
static ShmFuncsPtr shmFuncs[MAXSCREENS];
static DestroyPixmapProcPtr destroyPixmap[MAXSCREENS];
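
   The patch's answer to that, per the description quoted further down in
the thread, is a convenience macro that allocates a zeroed array on first
use; a hedged sketch of the idea (the real MAXSCREENSALLOC in the posted
patch may differ in detail):

#include <stdlib.h>

extern int MAXSCREENS;          /* a run-time value under the patch */

/* Allocate a zeroed MAXSCREENS-sized array on first use; safe to
 * invoke more than once. */
#define MAXSCREENSALLOC(ptr) \
    do { \
        if (!(ptr)) \
            (ptr) = calloc(MAXSCREENS, sizeof(*(ptr))); \
    } while (0)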


Mark.

> >
> >On Mon, 22 Mar 2004, Rik Faith wrote:
> >
> >> [I posted this last Monday, but it got held for moderation because it
> >> was slightly over 100KB with the patch uncompressed.  I've compressed
> >> the patch for this posting. --Rik Faith]
> >>
> >> Throughout the DMX (dmx.sf.net) work, we have been submitting patches
> >> for bug fixes and/or self-contained code as we finish them.  Towards
> >> that goal, a patch is attached below.  It has also been entered into the
> >> XFree86 bugzilla database as #1269:
> >> http://bugs.xfree86.org/show_bug.cgi?id=1269
> >>
> >> Rik Faith and Kevin Martin
> >>
> >> ==
> >>
> >> The following patch changes MAXSCREENS from a #define to an int.  The
> >> patch is against the XFree86 CVS repository.
> >>
> >> The goals of the patch are as follows:
> >> 1) Allow MAXSCREENS to be determined at run time instead of compile
> >>time (a new -maxscreens command line flag was added).
> >> 2) Make minimal source-code changes to the tree:
> >>a) The name "MAXSCREENS" was not changed -- this allowed all of
> >>   the loops that reference MAXSCREENS to remain unchanged.
> >>b) MAXSCREENSALLOC is a convenience macro that allocates and
> >>   zeros memory, allowing 1-line changes to allocate code (and
> >>   another line to check that allocation succeeded).  Memory is
> >>   zero'd because many routines assume that the previous static
> >>   allocations are zero'd.  The macro is also safe to call
> >>   multiple times since there are places in the code where the
> >>   first use of a MAXSCREENS-sized array is difficult to
> >>   determine (or non-deterministic).
> >>c) In some cases, the existing code zeros the memory.  These
> >>   calls are unchanged, but could be removed.
> >>
> >> The patch has been tested using xtest, and the results from the XFree86
> >> 4.4 tree with and without the patch are identical.
> >>
> >> Some of the changes could not be tested because we do not have the
> >> appropriate hardware available -- it would be possible to substitute
> >> MAXSCREENSDEFAULT for MAXSCREENS in these code paths (i.e., and leave
> >> them as compile-time configurable code paths):
> >> config/cf/iPAQH3600.cf
> >> config/cf/itsy.cf
> >> programs/Xserver/hw/darwin/quartz/fullscreen/fullscreen.c
> >> programs/Xserver/hw/kdrive/kxv.c
> >> programs/Xserver/hw/sun/sunInit.c
> >> programs/Xserver/hw/sunLynx/sunLyInit.c
> >> programs/Xserver/hw/xfree86/drivers/* [all the changes are similar]
> >> programs/Xserver/hw/xfree86/os-support/bsd/arm_video.c
> >> programs/Xserver/hw/xfree86/os-support/dgux/dgux_video.c
> >> programs/Xserver/hw/xwin/InitOutput.c
> >>
> >> The diffstat is below, followed by the patch.
> >>
> >> ==
> >>
> >>  config/cf/iPAQH3600.cf  |2
> >>  config/cf/itsy.cf   |2
> >>  programs/Xserver/GL/mesa/src/X/xf86glx.c|6
> >>  programs/Xserver/Xext/appgroup.c|   13 +
> >>  programs/Xserver/Xext/mbufbf.c  |   19 +-
> >>  programs/Xserver/Xext/panoramiX.c   |   15 +
> >>  programs/Xserver/Xext/panoramiX.h   |2
> >>  programs/Xserver/Xext/panoramiXprocs.c  |  107 +---
> >>  program

Re: [PATCH] Make MAXSCREENS run-time configurable

2004-03-22 Thread Mark Vojkovich
   This sounds like it will completely break binary compatibility.

Mark.

On Mon, 22 Mar 2004, Rik Faith wrote:

> [I posted this last Monday, but it got held for moderation because it
> was slightly over 100KB with the patch uncompressed.  I've compressed
> the patch for this posting. --Rik Faith]
>
> Throughout the DMX (dmx.sf.net) work, we have been submitting patches
> for bug fixes and/or self-contained code as we finish them.  Towards
> that goal, a patch is attached below.  It has also been entered into the
> XFree86 bugzilla database as #1269:
> http://bugs.xfree86.org/show_bug.cgi?id=1269
>
> Rik Faith and Kevin Martin
>
> ==
>
> The following patch changes MAXSCREENS from a #define to an int.  The
> patch is against the XFree86 CVS repository.
>
> The goals of the patch are as follows:
> 1) Allow MAXSCREENS to be determined at run time instead of compile
>time (a new -maxscreens command line flag was added).
> 2) Make minimal source-code changes to the tree:
>a) The name "MAXSCREENS" was not changed -- this allowed all of
>   the loops that reference MAXSCREENS to remain unchanged.
>b) MAXSCREENSALLOC is a convenience macro that allocates and
>   zeros memory, allowing 1-line changes to allocate code (and
>   another line to check that allocation succeeded).  Memory is
>   zero'd because many routines assume that the previous static
>   allocations are zero'd.  The macro is also safe to call
>   multiple times since there are places in the code where the
>   first use of a MAXSCREENS-sized array is difficult to
>   determine (or non-deterministic).
>c) In some cases, the existing code zeros the memory.  These
>   calls are unchanged, but could be removed.
>
> The patch has been tested using xtest, and the results from the XFree86
> 4.4 tree with and without the patch are identical.
>
> Some of the changes could not be tested because we do not have the
> appropriate hardware available -- it would be possible to substitute
> MAXSCREENSDEFAULT for MAXSCREENS in these code paths (i.e., and leave
> them as compile-time configurable code paths):
> config/cf/iPAQH3600.cf
> config/cf/itsy.cf
> programs/Xserver/hw/darwin/quartz/fullscreen/fullscreen.c
> programs/Xserver/hw/kdrive/kxv.c
> programs/Xserver/hw/sun/sunInit.c
> programs/Xserver/hw/sunLynx/sunLyInit.c
> programs/Xserver/hw/xfree86/drivers/* [all the changes are similar]
> programs/Xserver/hw/xfree86/os-support/bsd/arm_video.c
> programs/Xserver/hw/xfree86/os-support/dgux/dgux_video.c
> programs/Xserver/hw/xwin/InitOutput.c
>
> The diffstat is below, followed by the patch.
>
> ==
>
>  config/cf/iPAQH3600.cf  |2
>  config/cf/itsy.cf   |2
>  programs/Xserver/GL/mesa/src/X/xf86glx.c|6
>  programs/Xserver/Xext/appgroup.c|   13 +
>  programs/Xserver/Xext/mbufbf.c  |   19 +-
>  programs/Xserver/Xext/panoramiX.c   |   15 +
>  programs/Xserver/Xext/panoramiX.h   |2
>  programs/Xserver/Xext/panoramiXprocs.c  |  107 +---
>  programs/Xserver/Xext/panoramiXsrv.h|2
>  programs/Xserver/Xext/shm.c |   29 ++-
>  programs/Xserver/Xext/xf86dga2.c|3
>  programs/Xserver/Xext/xprint.c  |4
>  programs/Xserver/Xext/xvdisp.c  |8
>  programs/Xserver/dbe/dbe.c  |5
>  programs/Xserver/dix/cursor.c   |   69 +++
>  programs/Xserver/dix/dispatch.c |9 +
>  programs/Xserver/dix/events.c   |3
>  programs/Xserver/dix/extension.c|5
>  programs/Xserver/dix/globals.c  |2
>  programs/Xserver/dix/main.c |   45 -
>  programs/Xserver/dix/window.c   |2
>  programs/Xserver/fb/fbcmap.c|   11 -
>  programs/Xserver/hw/darwin/quartz/fullscreen/fullscreen.c   |   10 +
>  programs/Xserver/hw/kdrive/kxv.c|4
>  programs/Xserver/hw/sun/sunInit.c   |   14 +
>  programs/Xserver/hw/sunLynx/sunLyInit.c |6
>  programs/Xserver/hw/vfb/InitOutput.c|8
>  programs/Xserver/hw/xfree86/common/xf86Cursor.c |   64 +++
>  programs/Xserver/hw/xfree86/common/xf86xv.c |3
>  progra

Re: Multiple Monitors

2004-03-17 Thread Mark Vojkovich
On Thu, 18 Mar 2004, Jonathon Bates wrote:

> Hi Guys,
> I am in the process of creating an X mod and would like some pointers on
> where to start (I won't have a problem writing the code,
> I am just not sure where to start).
> I am wanting to create the following:
> 
> Y
> Y1Y2
> 
> Where Y, Y1 & Y2 are monitors.
> 
> I always want the cursor to be located on Y, but the screens to the right &
> left of me are visible on other monitors.
> Firstly is this possible? Secondly is this an X mod or a window manager's
> mod?
> 
> Actually having thought about it further, I assume it would be a process of
> setting up X with multiple monitors and then
> modding the window manager??

  If you want these to be 3 separate root windows with the
cursor confined to one of them, XFree86 can probably already
do that.  A "ServerLayout" like the following:

Section "ServerLayout"
Identifier "DualHead"
Screen  0  "ScreenY" 0 0
Screen  1  "ScreenY1" 0 0
Screen  2  "ScreenY2" 0 0
InputDevice"Mouse0" "CorePointer"
InputDevice"Keyboard0" "CoreKeyboard"
EndSection

  probably does that already: since there are no "LeftOf" etc. directives
to tell the server how to route the cursor when it goes offscreen,
it ends up getting stuck on screen 0.
  

Mark.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: remove power features

2004-03-15 Thread Mark Vojkovich
   I'm suspicious of your diagnosis.  But why don't you just
turn DPMS off?  It's off by default.  It had to be specified
explicitly in the XF86Config in order to turn it on in the
first place.  A DPMS-related problem would be a video card
driver-specific one.


Mark.

On Mon, 15 Mar 2004, nothanks wrote:

> Hi
> xfree power saving features are killing my server
> 
> I should recompile with this stuff removed.
> I'll try now.
> I'm going with these
> ftp://ftp.xfree86.org/pub/XFree86/4.4.0/source/
> 
> I'll give it a quick go - but accelerated-x will be bought very soon.
> 
> My opinion on the specific error - I think it is a DPMS call to a non-DPMS monitor 
> that throws in the monkey wrench.
> 
> thanks- i don't expect a reply-i'm actually quite Jarred at this point
> ( like what - 10 lines of code destroying ALL of RMS and LT's work ! )
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
> 

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Xinerama & xtest

2004-03-15 Thread Mark Vojkovich
On Mon, 15 Mar 2004, Alex Deucher wrote:

> --- Alan Hourihane <[EMAIL PROTECTED]> wrote:
> > I remember that a couple of extra tests failed with Xinerama enabled.
> > 
> 
> Weren't there also some fixes for xtest and xinerama that came from the
> dmx project?  Were those ever integrated?

   The Xinerama task force (X.org) made the newer (newer than what
we normally use) test suite Xinerama aware.  I believe all that was
done was to make sure the source and destination were on the same 
screen so that the test doesn't fail.


Mark.

> 
> 
> > The ones I'm seeing are XCopyArea and XCopyPlane. Are these the ones
> > that are expected to fail - Mark V. ?
> > 
> > Alan.
> 
> 
> __
> Do you Yahoo!?
> Yahoo! Mail - More reliable, more storage, less spam
> http://mail.yahoo.com
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
> 

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Xinerama & xtest

2004-03-15 Thread Mark Vojkovich
On Mon, 15 Mar 2004, Alan Hourihane wrote:

> I remember that a couple of extra tests failed with Xinerama enabled.
> 
> The ones I'm seeing are XCopyArea and XCopyPlane. Are these the ones
> that are expected to fail - Mark V. ?

   Yes.  Xinerama won't copy between framebuffers, but will generate
GraphicsExpose events.
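
   A client that copies across that boundary is expected to repaint the
uncopied area itself when the GraphicsExpose arrives; a minimal sketch
(redraw_rect() is a hypothetical application repaint routine, and the
GC must have graphics_exposures enabled):

#include <X11/Xlib.h>

extern void redraw_rect(Display *dpy, Window w, int x, int y,
                        unsigned width, unsigned height);

static void handle_copy_exposures(Display *dpy, Window w)
{
    XEvent ev;
    do {
        XNextEvent(dpy, &ev);   /* GraphicsExpose or NoExpose follows a copy */
        if (ev.type == GraphicsExpose)
            redraw_rect(dpy, w, ev.xgraphicsexpose.x, ev.xgraphicsexpose.y,
                        ev.xgraphicsexpose.width, ev.xgraphicsexpose.height);
    } while (ev.type == GraphicsExpose && ev.xgraphicsexpose.count > 0);
}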


Mark.


___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: s3 driver / accel display widths

2004-03-13 Thread Mark Vojkovich
   They varied from chip to chip.  They generally added them as
the chips got newer.  You'll probably want to look at an old
XFree86 3.x driver.


Mark.

On Sat, 13 Mar 2004, Keith Johnson wrote:

> After upgrading from 4.3 to 4.4 I found my mode of 1152x864 was 
> exceeding video ram, which led me to s3_driver:248 or so to the 
> s3AccelLinePitches array.
> -
> I have no clue which sizes are valid or not, but adding 1152 worked 
> accelerated on my setup. (card detects as '86c968 [Vision 968 VRAM]')
> -
> As a side note, the array seems to be missing a 0 terminator.
> 
> 
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
> 

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Via XvMC Extension?

2004-03-13 Thread Mark Vojkovich
On Sat, 13 Mar 2004, [ISO-8859-1] Thomas Hellström wrote:

> Hi!
> 
> I'm currently writing an XvMC-type driver to the via unichrome hardware 
> mpeg2 decoder. It uses much of the current XvMC functionality for 
> context- and surface and subpicture handling, but requires some extra 
> functionality for
> surface rendering. I'm using dri / drm for resource locking and fb 
> memory / mmio access. Current status is that it works nicely but lacks 
> subpicture support implementation and multiple contexts implementation 
> for simultaneous decoding. X server side has been implemented similar to 
> the i810 XvMC driver extension, but there is no requirement that the 
> user should be root.
> 
> I've attached the needed extension(?) functionality as a header file.
> 
> Now the questions:
> 1. Should / Could this be added as an XvMC extension or should this 
> ideally be an unofficial via driver API?

  Adding VLD-level acceleration support to XvMC would require more
work than this.  I would say analysis comparing with Microsoft's DXVA 
and collaboration between at least two hardware vendors would be necessary.  
That's what was done for the current XvMC interface.

  If you can support the IDCT-level acceleration that XvMC exposes,
I would recommend doing that, and keeping the extra functionality as
an unofficial via driver API for the time being.


Mark.


___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DGA - the future?

2004-03-09 Thread Mark Vojkovich
On Mon, 8 Mar 2004, James Wright wrote:

>It doesn't seem all that long ago that DGA V2 was added, why was it ever 
> introduced if it causes
> grief for the driver writers? What were the original intentions of including the 
> DGA extension in
> XFree86?
> 

  DGA2 was added five years ago, and I regret it.  Even then, I had
the feeling that it was a bad idea.  We should have encouraged more
forward-looking APIs like OpenGL.  At the time, the transition from
SW-rendered games to HW-rendered games was just happening.

  DGA was originally added by Jon Tombs back in 1995 so that Dave Taylor
could implement xf86quake.  I believe Dave Taylor found card support
in SVGALib to be lacking.


Mark.


___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DGA - the future?

2004-03-07 Thread Mark Vojkovich
On Sun, 7 Mar 2004, James Wright wrote:

>We are concentrating on developing games which utilise polished 2d graphics 
> engines,
> rather than 3d. I know it sounds crazy but its what we want to do...
> 
>With most 2d engines the number of pixels drawn is usually kept to a minimum; unless
> there is a huge amount of overdraw going on, it's nearly always faster to draw
> direct to the framebuffer. If we do need to scroll the whole screen, then we would try
> to alter the start-of-viewport address rather than transferring the entire screen.
> 
>I'm just concerned that the DGA extension will be removed with no adequate 
> replacement.
> The main issue with DGA seems to be the way it requires root privs and can write to
> other parts of memory. Can we not have some sort of "/dev/dga" device or is this not
> the place to ask ;)  is this not feasible?
> 
> 
> James

   I think the biggest problem with DGA is that driver writers
don't want to support it.  I don't even test it anymore.  If it
didn't work, I wouldn't know about it until somebody complained.
The DGA mode complicates what the driver has to do.  We're trying
to focus on how we use the offscreen memory for graphics command
buffers, pixmaps, textures and video overlays, and don't like to 
have to deal with a mode where we have to make concessions for
some app that wants to scribble on the framebuffer itself.

   As far as I'm concerned, there are alternatives.  You can
render to an XImage and use ShmPutImage, or if you need vblank
syncing you can use OpenGL.  Apps having direct framebuffer access
is something that I would consider to be a legacy feature.  The
natural tendency is to drop support for that sort of thing 
eventually.  Besides, I'm not sure we can guarantee that future
hardware is going to be very amenable to direct framebuffer
access.  I've seen some evidence suggesting that it's not.


Mark.

> 
> 
> 
> On Sat, 6 Mar 2004 19:02:00 -0800 (PST)
> Mark Vojkovich <[EMAIL PROTECTED]> wrote:
> 
> >I expect it will go away eventually.  It's still the case for
> > most access patterns that rendering in system memory and then
> > copying the result to the framebuffer is faster than CPU rendering
> > directly to the framebuffer.  Only the most simple game engines (write-
> > only SW scanline renderers) can benefit from direct framebuffer access.
> > Why aren't you using OpenGL?
> > 
> > Mark.
> > 
> > On Sun, 7 Mar 2004, James Wright wrote:
> > 
> > > Hello,
> > > 
> > >Apologies if this is the incorrect list to post to but I couldn't decide 
> > > between the general "forum"
> > > list or this one. My question concerns the DGA extension in XFree86, whether it 
> > > will be removed from 
> > > future versions, and the alternatives. We are currently in the process of 
> > > developing games for the
> > > Linux OS. We require direct access to the video framebuffer, the ability to 
> > > change resolution, refresh
> > > rate, indexed palettes, and the ability to alter the start screen position 
> > > pointer (for hardware
> > > scrolling). At first we wrote our 2D drawing libs to use SVGALib, but after 
> > > numerous problems with memory
> > > leaks and bad support for many gfx cards we switched to X11->DGAv2. We are 
> > > reasonably happy with DGA as
> > > it stands, with the only annoyance being that it requires root privs. I have 
> > > seen it mentioned that
> > > DGA could be removed in future XFree86 releases, is this true? If so, what are 
> > > the alternatives for us
> > > to use? It is obvious that there are a lot of apps out there that really can't 
> > > justify the use of DGA,
> > > but I feel that this application (games) really can benefit from using it. Any 
> > > extra layers between
> > > our drawing and the framebuffer is just extra overhead and latency for us...
> > > 
> > > Any suggestions or comments appreciated...
> > > 
> > > 
> > > Thanks,
> > > James
> > > 
> > > 
> > >  
> > > ___
> > > Devel mailing list
> > > [EMAIL PROTECTED]
> > > http://XFree86.Org/mailman/listinfo/devel
> > > 
> > 
> > ___
> > Devel mailing list
> > [EMAIL PROTECTED]
> > http://XFree86.Org/mailman/listinfo/devel
> > 
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
> 

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: DGA - the future?

2004-03-06 Thread Mark Vojkovich
   I expect it will go away eventually.  It's still the case for
most access patterns that rendering in system memory and then
copying the result to the framebuffer is faster than CPU rendering
directly to the framebuffer.  Only the most simple game engines (write-
only SW scanline renderers) can benefit from direct framebuffer access.
Why aren't you using OpenGL?

Mark.

On Sun, 7 Mar 2004, James Wright wrote:

> Hello,
> 
>Apologies if this is the incorrect list to post to but I couldn't decide between 
> the general "forum"
> list or this one. My question concerns the DGA extension in XFree86, whether it will 
> be removed from 
> future versions, and the alternatives. We are currently in the process of developing 
> games for the
> Linux OS. We require direct access to the video framebuffer, the ability to change 
> resolution, refresh
> rate, indexed palettes, and the ability to alter the start screen position pointer 
> (for hardware
> scrolling). At first we wrote our 2D drawing libs to use SVGALib, but after numerous 
> problems with memory
> leaks and bad support for many gfx cards we switched to X11->DGAv2. We are 
> reasonably happy with DGA as
> it stands, with the only annoyance being that it requires root privs. I have seen it 
> mentioned that
> DGA could be removed in future XFree86 releases, is this true? If so, what are the 
> alternatives for us
> to use? It is obvious that there are alot of apps out there that really can't 
> justify the use of DGA,
> but I feel that this application (games) really can benefit from using it. Any extra 
> layers between
> our drawing and the framebuffer is just extra overhead and latency for us...
> 
> Any suggestions or comments appreciated...
> 
> 
> Thanks,
> James
> 
> 
>  
> ___
> Devel mailing list
> [EMAIL PROTECTED]
> http://XFree86.Org/mailman/listinfo/devel
> 

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: SupportConvertXXtoXX

2004-03-06 Thread Mark Vojkovich
On Sat, 6 Mar 2004, David Dawes wrote:

> >   I thought we stopped using 64 bit scanlines altogether.
> 
> Hmm, yes it looks that way.  I guess we can remove that then, and the
> related code in xf86Init.c.
> 

 When you use 64-bit scanlines you introduce a mess in the
PutImage code.  You have to translate the images, which have
a protocol-specified maximum padding of 32 bits, to the internal 
64-bit format.  It's not worth it on any hardware we support,
especially given how important PutImage is compared to optimal
alignment for software rendering, which is primarily what 64-bit
padding was trying to improve.
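
 Concretely: the protocol pads each scanline of a client image to at
most a 32-bit boundary, so a 64-bit internal format forces a repack on
every PutImage.  A sketch of the pitch computation:

/* Bytes per scanline for a given width, bpp and pad unit. */
static int pitch_bytes(int width_pixels, int bpp, int pad_bits)
{
    int bits = width_pixels * bpp;
    return ((bits + pad_bits - 1) / pad_bits) * (pad_bits / 8);
}
/* e.g. a 65-pixel-wide 1-bpp bitmap: pitch_bytes(65, 1, 32) == 12,
 * but pitch_bytes(65, 1, 64) == 16 -- every row must be repacked. */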

Mark.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: SupportConvertXXtoXX

2004-03-05 Thread Mark Vojkovich
On Fri, 5 Mar 2004, David Dawes wrote:

> On Sat, Mar 06, 2004 at 03:28:09AM +0100, Thomas Winischhofer wrote:
> >Mark Vojkovich wrote:
> >> On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> >> 
> >> 
> >>>David Dawes wrote:
> >>>
> >>>>On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> >>>>
> >>>>>What exactly does a video driver have to be able to do if the 
> >>>>>SupportConvert32to24 flag is set at calling xf86SetDepthBpp, provided 
> >>>>>the hardware supports, for instance, 24bpp (framebuffer depth) only? 
> >>>>
> >>>>
> >>>>It has to use a framebuffer layer that can do this conversion.  fb
> >>>>can, as can xf24_32bpp (if your driver uses cfb).  The s3virge
> >>>>driver is an example that can still be run with the xf24_32bpp
> >>>>method, and it does the following to figure out what to load:
> >>>>
> >>>>case 24:
> >>>>  if (pix24bpp == 24) {
> >>>>mod = "cfb24";
> >>>>reqSym = "cfb24ScreenInit";
> >>>>  } else {
> >>>>mod = "xf24_32bpp";
> >>>>reqSym = "cfb24_32ScreenInit";
> >>>>  }
> >>>>
> >>>>Most drivers use fb these days, and it has support for this built-in,
> >>>>and enabled automatically.
> >>>
> >>>So it is safe just to set these, I assume (since my driver uses fb). 
> >>>(Just wondered why the *driver* and not the layer taking care of this 
> >>>has to (not) set these.)
> >> 
> >> 
> >>Do you mean the flag?  The layer above does not know whether
> >> or not the driver/HW supports a 24 bpp framebuffer.  The "nv" driver,
> >> for example, does not. 
> >
> >Whether or not the hardware does support 24bpp (framebuffer depth, not 
> >talking about color depth) should be determined by setting/clearing 
> >SupportXXbpp. Why the *driver* needs to set "SupportConvert" is 
> >beyond me. My understanding is that the respective fb layer should take 
> >care of this (if supported) based on SupportXXbpp (especially since the 
> >*driver* does not need to care about this, as David told me. It just 
> >depends on what layer I choose above the driver level).
> 
> There are two things here.  One, the *fb module isn't loaded at
> the point where this information is required.  Two, only the driver
> knows which (if any) *fb layer(s) it will use.  It is the driver's
> responsibility to characterise what it can do.  The cost, as
> currently implemented, of this model is a reasonable amount of
> boiler plate in drivers.
> 
> >But anyway, my question was answered. Seems to be safe to set this 
> >obscure SupportConvert32to24 flag if using the generic fb layer.
> 
> Yes.  However, we didn't have today's generic fb layer when this
> stuff was first written.  Fortunately for its ease of adoption,
> the driver model didn't mandate a specific *fb layer or hard code
> its expected characteristics :-)
> 
> Looking at the xf86SetDepthBpp() code, there appears to be another
> wrinkle, because these flags get cleared for
> (BITMAP_SCANLINE_UNIT == 64) platforms:
> 
> #if BITMAP_SCANLINE_UNIT == 64
> /*
>  * For platforms with 64-bit scanlines, modify the driver's depth24flags
>  * to remove preferences for packed 24bpp modes, which are not currently
>  * supported on these platforms.
>  */
> depth24flags &= ~(SupportConvert32to24 | SupportConvert32to24 |
>   PreferConvert24to32 | PreferConvert32to24);
> #endif
> 
> This has been there for a long time (before we had fb).  I'm not
> sure if it is still valid or not.  Anyone with 64-bit scanline platforms
> care to comment?
> 

   I thought we stopped using 64 bit scanlines altogether.


Mark.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: SupportConvertXXtoXX

2004-03-05 Thread Mark Vojkovich
On Fri, 5 Mar 2004, Thomas Winischhofer wrote:

> David Dawes wrote:
> > On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> >>What exactly does a video driver have to be able to do if the 
> >>SupportConvert32to24 flag is set at calling xf86SetDepthBpp, provided 
> >>the hardware supports, for instance, 24bpp (framebuffer depth) only? 
> > 
> > 
> > It has to use a framebuffer layer that can do this conversion.  fb
> > can, as can xf24_32bpp (if your driver uses cfb).  The s3virge
> > driver is an example that can still be run with the xf24_32bpp
> > method, and it does the following to figure out what to load:
> > 
> > case 24:
> >   if (pix24bpp == 24) {
> > mod = "cfb24";
> > reqSym = "cfb24ScreenInit";
> >   } else {
> > mod = "xf24_32bpp";
> > reqSym = "cfb24_32ScreenInit";
> >   }
> > 
> > Most drivers use fb these days, and it has support for this built-in,
> > and enabled automatically.
> 
> So it is safe just to set these, I assume (since my driver uses fb). 
> (Just wondered why the *driver* and not the layer taking care of this 
> has to (not) set these.)

   Do you mean the flag?  The layer above does not know whether
or not the driver/HW supports a 24 bpp framebuffer.  The "nv" driver,
for example, does not. 


Mark.



___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: SupportConvertXXtoXX

2004-03-04 Thread Mark Vojkovich
On Fri, 5 Mar 2004, Thomas Winischhofer wrote:

> Mark Vojkovich wrote:
> > On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> > 
> > 
> >>What exactly does a video driver have to be able to do if the 
> >>SupportConvert32to24 flag is set at calling xf86SetDepthBpp, provided 
> >>the hardware supports, for instance, 24bpp (framebuffer depth) only? 
> > 
> > 
> >It's expected to support a 24bpp framebuffer.  
> 
> So far, so good.
> 
>  > Depth 24/32 bpp will get translated to depth 24/24 bpp.
> 
> By whom (ie what layer)? Does the video driver in any way need to take 
> care of this?

   Not that I can remember.  XAA and fb should take care of it.
This used to be the default mode for the MGA driver, but 3D wasn't
usable in 24 bpp so I think the DRI folks changed the default.

Mark.

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


  1   2   3   4   >