Hi,

I am trying to track down what looks like an X server memory leak
related to GL.  Any help would be greatly appreciated.  I've searched
most of the XFree86 archives, google and various other places, but none
of the previous discussions really seemed to apply (e.g. some problems
in 4.1.0 were corrected in 4.2.0).

My configuration:
  Linux 2.2.12-20 #12, i686  (Customised RedHat 6.1)
  256 MB memory, 512 MB swap
  XFree86 4.2.0, Card: NVidia Riva TNT2 M64 rev 21,
  Driver: nv (not nvidia, though I have used that as well)
  Desktop: GNOME 1.4, WM: Sawfish, File manager: Nautilus

The symptom is that server memory allocated for a GL app (local or GLX)
is not freed when the app exits.  For example, running an app of ours
(OpenInventor, OpenGL-based) the X server grew (as shown by top) from
~60M to about ~130M, and remained there after exiting the app.  Some but
not all of that "gap" seems to get reused across app runs so it could be
heap fragmentation, but there's also visible growth every time.  I have
checked /proc/<pid>/maps to confirm that top isn't just misreporting
mapped devices or the like.  The time it takes to swap the server back
in after a while is further evidence :-).  Thus the increase looks like real
memory usage; I assume much of the original 60M is mapped from the video
card.
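In case it helps anyone repeat the check, here's roughly how I tally the
mappings (a quick illustrative sketch, not part of any tool -- it just
assumes the usual Linux /proc/<pid>/maps field layout, where a pathname
in the sixth field means a file- or device-backed region):

```python
def summarize_maps(maps_text):
    """Split /proc/<pid>/maps text into anonymous vs. file/device-backed
    byte totals.  Anonymous regions (no pathname field) are heap/malloc
    arenas -- real server growth; device mappings like /dev/nvidia0 show
    up as backed regions and shouldn't count as a leak."""
    anon = backed = 0
    for line in maps_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        # First field is "start-end" as hex addresses.
        start, end = (int(x, 16) for x in fields[0].split("-"))
        size = end - start
        if len(fields) >= 6:      # pathname present: file or device backed
            backed += size
        else:                     # anonymous mapping (heap growth)
            anon += size
    return anon, backed

# Usage against the live server, e.g.:
#   anon, backed = summarize_maps(open("/proc/%d/maps" % xpid).read())
```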

Over time the server can grow to several hundred megabytes. 
Occasionally it shrinks some, e.g. from 400M to 200-300M, but never
fully.  If I leave the computer on its own, e.g. leave the office over
the weekend, the memory consumption also grows, sometimes over 500M. 
Currently I am assuming this is caused by GL screen savers, but I have
no proof.

In practice the system becomes unusable after about a week, and the only
cure is to log out, causing server restart.  For comparison, I used to
be able to stay logged in for months (fvwm95-2 + XFree86 3.x that came
with RH 6.1).  A sure killer is to run some of our larger GL apps with
very large scene graphs a few times.  3-4 days away from the computer
with xlock on seems to achieve the same.

It seems both `nv' and `nvidia' have this problem, though my impression
was that `nvidia' suffered more (I could have been fooled by miscounted
device mappings back then, I switched back about a month ago).  IIRC I
didn't have this problem with ATI Rage 3D Pro, but I could be wrong -- I
didn't run very long with that card, 4.2.0 and GL apps.

Any suggestions on how to track this down would be helpful.  Are there
features I could try to disable to see if they are the cause?  I will
try reverting to fvwm95-2 to see if this is somehow caused by
GNOME.  I could also try to swap the old ATI card back in to see if the
software GL has the same problem.
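To get hard numbers on the overnight/weekend growth before and after
these experiments, I plan to log the server's size periodically.
Something along these lines should do (a sketch; vm_fields just parses
the VmSize/VmRSS lines that /proc/<pid>/status reports in kB):

```python
import re
import time

def vm_fields(status_text):
    """Return (VmSize, VmRSS) in kB from /proc/<pid>/status text."""
    fields = {}
    for line in status_text.splitlines():
        m = re.match(r"(VmSize|VmRSS):\s+(\d+)\s+kB", line)
        if m:
            fields[m.group(1)] = int(m.group(2))
    return fields.get("VmSize", 0), fields.get("VmRSS", 0)

def log_memory(pid, logfile, interval=60):
    """Append 'epoch VmSize VmRSS' once per interval; a jump in the
    log after exiting a GL app or a night of xlock pins the culprit."""
    while True:
        with open("/proc/%d/status" % pid) as f:
            size, rss = vm_fields(f.read())
        with open(logfile, "a") as log:
            log.write("%d %d %d\n" % (time.time(), size, rss))
        time.sleep(interval)
```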

//lat
-- 
Let us so live that when we come to die even the undertaker
will be sorry.  --Mark Twain
_______________________________________________
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert
