How to disable smoothed touchpad scroll events

2011-08-19 Thread Clemens Eisserer
Hi,

Somewhere around Fedora 14, a feature was introduced (either in
xorg or the touchpad driver) to smooth touchpad scroll events.
This causes quite a bit of trouble, as some applications change their
zoom factor on Ctrl+Scroll (e.g. Firefox or the Geany editor), so when
pressing Ctrl+something right after scrolling, a few scroll events are
often left over and cause the zoom factor to change.

Is there a way to revert to the old behaviour?

Thank you in advance, Clemens
___
xorg@lists.freedesktop.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: http://lists.freedesktop.org/mailman/listinfo/xorg
Your subscription address: arch...@mail-archive.com

xrestop shows 3 clients - each with 15mb pixmaps allocated

2011-06-08 Thread Clemens Eisserer
Hi,

I am running Fedora 15, and xrestop shows 3 clients with 15 MB of pixmap
memory allocated each, although no PID is shown, and two of the three
clients seem to have no pixmaps (0) allocated.
I am using Xfce 4.8 without a compositing manager.

Any idea what's going on?

Thanks, Clemens

xrestop - Display: localhost:0
  Monitoring 24 clients. XErrors: 0
  Pixmaps:   64670K total, Other:  54K total, All:   64724K total

res-base Wins  GCs Fnts Pxms Misc   Pxm mem  Other   Total   PID Identifier
000 1020   8615000K  4K  15004K   ?   unknown
020 0111015000K  1K  15001K   ?   unknown
0a0 0001015000K  0B  15000K   ?   unknown
140 5   3008   25 9128K  1K   9129K  1581 Desktop
24041   401  178  424 8962K 12K   8975K  1703 Gmail - Postei
10074   361  159  323  810K 11K822K  1577 xfwm4
2e018   601   12   42  384K  3K387K  2398 Terminal - ce@
12012   5905   31  128K  2K130K  1580 xfce4-panel
2a0 4   3104   15  128K  1K129K  1650 xfce4-mixer-pl
1c0 6   2902   14  128K  1K129K  1591 Clipman
0c0 2100  1890B  4K  4K   ?   screensaver
160 4   2901   694B  2K  2K  1583 xfce4-settings
2c0 5   3013   18


When was xrender 0.11 introduced?

2011-05-03 Thread Clemens Eisserer
Hi,

Does anybody remember which xorg-server version brought support for
xrender 0.11?
I am thinking about not using some paths for render < 0.11; however, if
it's too recent I would probably exclude more xorg-server versions than
required.

Thanks, Clemens


Why is sizeof(XID)==8 on AMD64?

2011-03-28 Thread Clemens Eisserer
Hi,

I just ran into a couple of bugs because my code assumes sizeof(XID)==4
on all platforms, which doesn't seem to hold on AMD64.
What's the reason for XID being 8 bytes (unsigned long) on AMD64? I
thought an x-id (in the protocol sense) is defined as 4 bytes anyway?

Will Xlib/xcb ever return values which don't fit in 4 bytes? If so, I
guess I have to change a lot of Java code which assumes this -
otherwise I would only need to adapt the Java/Xlib interface a bit.

Thank you in advance, Clemens


Re: Why is sizeof(XID)==8 on AMD64?

2011-03-28 Thread Clemens Eisserer
Hi Matthieu,

Thanks for your explanation =)

 This is a mistake done 25 years ago or so. It's hard to justify it,
 but it is so. A number of things that are 32 bits on the wire
 are represented as 64 bits long at the API level on LP64 machines.

Is it considered more or less safe to store those 64-bit XIDs in 32-bit
variables?
If not really required I would prefer not to change all my code.

Thanks, Clemens


Re: Need help to diagnose slowdown problem

2010-12-22 Thread Clemens Eisserer
Hi Samuel,

Usually you won't find log entries about slowdown problems unless
something goes really terribly wrong.

Are you using the proprietary nvidia driver or nouveau?

If you are using the proprietary nvidia driver, could you switch to
nouveau to see if the problem persists?

To really get to the root of the problem, you will have to do some profiling:
- Install all debugging symbols for the applications involved
(including libc, gcc, xorg, kde, ...)
- When your system runs slow, use a system-wide profiler like sysprof
to see where CPU cycles are spent.

Good luck, Clemens


2010/12/22 Samuel Gilbert samuel.gilb...@ec.gc.ca:
 Hello everyone,

  Ever since I've started using KDE4 on 3 different systems, I'm having
 performance problems.  Here is what happens :

 After working in a session for a while the Xorg process starts to take more
 and more CPU.  I will generally notice it when some actions such as scrolling
 in dolphin or typing in kmail are totally unresponsive.  The subjective effect
 is like running remote X applications through a slow 56.6Kbps modem
 connection.

 I have found a 100% reproducible way to trigger the problem : All I have to do
 is to use digiKam's aspect crop tool on about 30 photographs.

 When the problem occurs, the X server process will take around 10~15% CPU when
 there are absolutely no events going on.  Doing thing such as switching
 windows or scrolling in an existing window (Firefox, dolphin, ooffice, etc..)
 will cause the Xorg process to jump to 100% CPU usage.  Closing windows will
 help a little, but the only way to get Xorg to behave properly once again is
 to completely restart the session.

 I have tried to disable compositing both in KDE and directly in
 /etc/X11/Xorg.conf to no avail.  The problem occurs on 3 different systems
 that do not have the same versions of software components.  Here are the
 common factors :

 Linux x86_64
 Nvidia graphic cards (9400M, NVS 160M, 8300)
 KDE >= 4.4.2
 Xorg >= 1.7.6

 Two machines are laptops with Intel Core 2 CPUs and the other one is a desktop
 with an AMD CPU.  I also checked ~/.xsession-errors and /var/log/Xorg.0.log,
 but I didn't find anything that looked helpful to understand the issue I'm
 facing.

 Any help and suggestions on how to diagnose what's going on will be greatly
 appreciated!

 Cheers and happy holidays,
  Samuel


Re: Composite ClipNotify fix and expose event elimination

2010-12-20 Thread Clemens Eisserer
Hi,

Could it be that https://bugs.freedesktop.org/show_bug.cgi?id=22566 is
related to this?

- Clemens

2010/12/20  ville.syrj...@nokia.com:
 Rather than continue my attempts to hack around the issue of incorrect
 ClipNotifys during window redirection changes, I decided to tackle the
 issue in more proper manner.

 This series will remove the internal MapWindow+UnmapWindow cycle and
 replace it with a single ValidateTree+HandleExposures pass through
 the affected windows.

 As a nice bonus, this also eliminates the unnecessary expose events
 that are generated in the process. Those expose events have been a
 problem for us for quite some time. For the N900 Daniel did a hack that
 simply suppressed expose events around the MapWindow/UnmapWindow calls.
 ___
 xorg-de...@lists.x.org: X.Org development
 Archives: http://lists.x.org/archives/xorg-devel
 Info: http://lists.x.org/mailman/listinfo/xorg-devel



Re: Any plan to promote coordinates to 32 bits?

2010-11-23 Thread Clemens Eisserer
Hi Teika,

As far as I know there hasn't been a lot of development to fix that -
there's not enough pain for now.
Unfortunately it would mean re-implementing a lot of X11's core
protocol as new extensions.

Probably a better way to fix that would be to create X12, or to use
something different, e.g. Wayland.
But don't get me wrong, I don't think Wayland will solve all problems
magically just because it's not called X ;)

- Clemens


Re: Which one to buy ?

2010-11-13 Thread Clemens Eisserer
Hi Xavier,

 Here comes that time again: I have to buy a graphic card, I want to buy
 a radeon, not too long/wide (limited space in the box), reasonably
 recent and powerful, to be used in a debian/sid system.
 What do I buy if I want 3D (compiz at least) now, or at least very
 soon ?
 I guess if AMD pours money on linux support, it's because they want
 their stuff to be bought. But it's not clear yet when the latest gen is
 usable without fglrx.

I don't know how much you intend to game, but I would recommend a
Radeon HD 5750 (basically half a 5850).
The series won't be replaced by a 67xx series (as far as it looks for
now), and for the power they offer the cards are quite cheap.
The open driver supports XRender and OpenGL reasonably well.

- Clemens


How to create a window with an ARGB32 visual?

2010-09-24 Thread Clemens Eisserer
Hi,

I would like to create a window with an ARGB32 visual to reproduce a bug
I experience with shaped ARGB32 windows when not using a compositing
manager; however, I always get a BadMatch error.

My attempt was to find a 32-bit visual and pass it to XCreateWindow:

XVisualInfo info;
int cnt;
XVisualInfo *visInfos = XGetVisualInfo(display, VisualNoMask, NULL, &cnt);

while (cnt-- > 0) {
    if (visInfos[cnt].depth == 32) {
        info = visInfos[cnt];
    }
}

XCreateWindow(display, root, 0, 0, 200, 200, 0, 32, InputOutput,
              info.visual, 0, NULL);

Any idea what's wrong here?

Thank you in advance, Clemens


Re: How to create a window with an ARGB32 visual?

2010-09-24 Thread Clemens Eisserer
Hi,

 The default border-pixmap and colormap are CopyFromParent.  That won't
 fly if your window and its parent have different depths/visuals.  That's
 pretty clearly explained in the protocol spec's description of
 CreateWindow, AFAICT.

Setting a new ColorMap as well as a BorderPixmap helped :)

Thanks again, Clemens

Re: Reasons for FreePicture to cause RenderBadPicture?

2010-08-30 Thread Clemens Eisserer
Hi Michel,

 Pixmaps are reference-counted and the picture takes a reference on the
 pixmap, so the pixmap can't go away before the picture.

 However this isn't true for windows, so as soon as the window is
 destroyed presumably the picture is destroyed as well or becomes
 invalid.

Can this be considered a bug? I just tried the same for GCs, and
XFreeGC doesn't generate any error when the window has already been
destroyed.
Surely there is an implementation detail behind it, but GCs don't show
the same behaviour, and the whole thing somehow feels quite inconsistent
:(


 If you can't avoid using window pictures, it's probably best to
 make sure you destroy any pictures before the windows themselves.

I fear this would mean major refactoring, but seems the only sane way to go.

Thanks, Clemens


How to override wrong EDID information?

2010-08-30 Thread Clemens Eisserer
Hi,

I would like to use an iiyama X436S at 1280x1024 on VGA1 with my
laptop; unfortunately it seems the EDID information the monitor sends
is bogus:

 VGA1 connected (normal left inverted right x axis y axis)
   1024x768   60.0
   800x600    60.3  56.2
   848x480    60.0
   640x480    59.9

The monitor is capable of 1280x1024 - is it possible to override the
maximum resolution, ideally without touching xorg.conf.d?
Both xrandr and KDE's control center only let me select 1024x768 as the
maximum resolution :/

Thank you in advance, Clemens


Reasons for FreePicture to cause RenderBadPicture?

2010-08-27 Thread Clemens Eisserer
Hi,

I have some code which causes RenderBadPicture errors from time to
time, and I am having trouble finding the cause.

I have inserted some debug code and switched to synchronous mode,
however I still can't tell what's going wrong.
Some pictures don't seem to cause problems, whereas others cause a
single error or even multiple errors although they are only freed once -
in what appears to be a call to free a completely different picture,
like FreePicture(4c00184), which results in an error for picture
4c00189.

Any ideas what's going wrong?

If the parent drawable a Picture belongs to (e.g. a window or
pixmap) is freed, are all corresponding Pictures freed automatically,
causing a RenderBadPicture when a FreePicture is attempted later?

Thank you in advance, Clemens


Free request for: 4c00102, is pixmap: 1
Freeing picture 4c00102
-
Free request for: 4c00106, is pixmap: 1
Freeing picture 4c00106
 Xerror RenderBadPicture (invalid Picture parameter), XID 4c00106, ser# 10171
 Major opcode 147 (Unknown)
 Minor opcode 7
-
Free request for: 4c000f0, is pixmap: 0
Freeing picture 4c000f0
-
Free request for: 4c00189, is pixmap: 0
Freeing picture 4c00189
 Xerror RenderBadPicture (invalid Picture parameter), XID 4c00189, ser# 20872
 Major opcode 147 (Unknown)
 Minor opcode 7
--
Free request for: 4c00184, is pixmap: 0  // What's going on
here? I free 4c00184 and get an error for 4c00189 multiple times?
Freeing picture 4c00184
 Xerror RenderBadPicture (invalid Picture parameter), XID 4c00189, ser# 20874
 Major opcode 147 (Unknown)
 Minor opcode 7
--
Free request for: 4c00182, is pixmap: 0
Freeing picture 4c00182
 Xerror RenderBadPicture (invalid Picture parameter), XID 4c00189, ser# 20876
 Major opcode 147 (Unknown)
 Minor opcode 7
---
Free request for: 4c00180, is pixmap: 0
Freeing picture 4c00180
 Xerror RenderBadPicture (invalid Picture parameter), XID 4c00189, ser# 20878
 Major opcode 147 (Unknown)
 Minor opcode 7


Re: Reasons for FreePicture to cause RenderBadPicture?

2010-08-27 Thread Clemens Eisserer
Hi again,

It seems I have found the cause of the problem: freeing Pictures that
belong to an already-destroyed window causes the RenderBadPicture
error. The XID values in the error log were wrong and therefore
misleading.

What puzzles me is the inconsistent behaviour:
when a Window is destroyed, all its associated Pictures are freed;
however, this is not the case for Pixmaps.
Even after calling XFreePixmap, the associated Picture objects stay alive.

Any idea what the reasoning behind this inconsistency is, or is it a bug?
I feel really uncomfortable relying on this behaviour in my code :/

Thank you in advance, Clemens


Easy way to bisect xorg-server?

2010-07-07 Thread Clemens Eisserer
Hi,

I would like to do some bisecting to find a regression that was
introduced somewhere between 1.6 and 1.7.

I tried, but for me as a non-developer it's quite hard to get the right
versions of the right libraries, and I gave up after a few hours.
Is there a script which can download and build the appropriate
libraries required for a certain version of the X server?

Thank you in advance, Clemens

The bug I am talking about is:
https://bugs.freedesktop.org/show_bug.cgi?id=25497


Re: X server 1.9 release thoughts

2010-04-11 Thread Clemens Eisserer
Hi,

 The question is whether driver maintainers want to deal with
 non-maintenance changes (like new hardware support) in the stable branch
 of the X server, which will require additional work as they back-port
 things from master.

I was a bit shocked when I heard the drivers will be merged back into xorg.
When they were taken out I had the possibility to install new
drivers without upgrading my distribution; I guess that opportunity
would be gone. At least I won't install a git xorg just to test the intel
driver's RC releases.

As for the "we have so many #ifdefs in our code" argument - well, isn't
it up to the driver devs how far back they intend to support xorg?

- Clemens


Re: How to limit AGP aperture size?

2010-04-04 Thread Clemens Eisserer
Hi Dave,

 Don't think any version of nouveau ever did that. Are you sure its actually
 allocated the RAM?

Well, I don't know exactly what's going on - but when using nouveau
instead of the proprietary nvidia driver, the free command reports memory
usage about 250mb higher than normal - which would correspond to the
aperture size of 256mb.

I also experienced the same on an older system with a TNT2 - the aperture
is 128mb there, with 384mb total RAM. With nouveau there was about
128mb less free memory :/

Any ideas what could be going wrong? Is there any way to limit the
aperture size, other than in the BIOS?

Thanks again, Clemens

PS:
 (II) EXA(0): Offscreen pixmap area of 33550336 bytes

Is there any way to give all the VRAM to EXA instead of only half of it?
(I don't need opengl)


Re: How to limit AGP aperture size?

2010-04-04 Thread Clemens Eisserer
Hi again,

Using nouveau with KMS and the unified kernel-side memory manager seems
to solve the problems :)
After updating to Fedora 12 I get working KMS, normal memory usage,
and overall much better performance.
Great that I can finally remove the proprietary legacy driver!

- Clemens


How to limit AGP aperture size?

2010-04-03 Thread Clemens Eisserer
Hi,

Is there any way to limit the AGP aperture size, maybe by passing a
parameter to the agpgart module?
The BIOS doesn't provide any setting, unfortunately :/

Background: I am using a quite old version of nouveau (the one
shipped with Fedora 11), and nouveau seems to reserve the whole AGP
aperture (256mb) instead of using it on demand. That means half of the
system's RAM is statically dedicated to nouveau.

Thank you in advance, Clemens


Re: xorg-server profiling

2010-03-08 Thread Clemens Eisserer
Hi,

 I was wondering if someone did/does xorg profiling.
 any special precautions for someone who might want to try?

With a new kernel + sysprof that's pretty simple - just install
debuginfos and enjoy :)

- Clemens


Re: X11 fullscreen

2010-01-31 Thread Clemens Eisserer
Man, don't you have a job? Is your time worth anything to you?
And by the way ... I've never read so many *strange* arguments in one
discussion.

(using shm ximage for normal drawing is bullshit)

- Clemens

2010/1/30 Russell Shaw rjs...@netspace.net.au:
 Daniel Stone wrote:
 On Sat, Jan 30, 2010 at 12:13:23AM +1100, Russell Shaw wrote:
 This means abstracting
 everything with pointer indirections leading to slow

 Any performance problems you may have are not caused by excessive
 pointer dereferences.

 Not directly. In the context of widget kits, pointer dereferences
 often hide from the programmer what low level function is being called,
 especially when there's multiple levels. More of a pain to understand
 and write code knowing it will run well (sigh broken record).

 feature-bare toolkits.

 Which features are you missing from current toolkits?

 Foolproof multithreading. I should be able to easily have two
 windows being updated from different threads without interaction
 problems due to static vars in the toolkit.

 Until relatively recently, various toolkits had no kind of centralized
 hot-button help system that i could find.

 It's way too hard to make a new non-trivial widget when it
 should be much easier.

 Many widgets have performance problems when you want to scroll
 through 10k items from a database. I'm sure they can be made to
 work well with enough detailed knowledge of the widget, but to
 get that far, i had to figure out how widgets and everything
 should work because of lack of know how and documentation.
 Makes a toolkit rather pointless when the barrier to being
 productive is that high.

 I should be able to fork and exec from within a GUI program
 without problems. A gui framework baggage that comes with
 widgets should be minor in memory cost.

 Last time i was using gtk, there was no definitive way of
 parsing configuration files (tho there is now i think).

 I wanted ligatures and proper kerning in fonts. I wanted
 access to all the features in a truetype font file. Last
 i looked, pango had little documentation about using it
 in great or sufficient detail. Not knowing anything about
 non-english text, i had no hope of even knowing what to
 ask about pango. A simple block diagram of how it processes
 utf8 clusters would have gone a *long* way. Some explanation
 of what's in a font file and what contextual analysis is
 would have helped a lot.

 I wanted more control over hints for the window manager.
 That may have already existed, but there was no overview
 documentation in gtk about that years ago when i used it.
 I had to learn all the fine details of Xlib and icccm
 just to figure out what questions to ask.

 I wanted printer support. I know now that's rather vague
 and out of scope for widgets. There were no gtk docs explaining
 that. I used to be using the printer GDI in windows.

 There was no support for widget settings persistance, or
 docs saying what to do about it. If i last used a file dialog
 on a certain directory, i wanted it to open there next time
 i used the program. I know what i should do in my own way now.

 There was no drawing support in gtk other than gdk which i
 found over a year later was xlib calls. Ran slow as hell.
 Could use cairo now, but i stick closer to the metal and
 use opengl or shm images. Cairo can draw to a printer context
 iirc, but i'd rather just generate postscript output directly.

 I wanted to have accurate colour management, but i see that
 as out of scope of widgets now.

 I wanted to programmatically generate menus on the fly
 that adapt to the results of database retrieval based on
 ealier stages of the menu hierarchy. At some point gtk
 changed to XML files to define menus. That totally pissed
 me off and was when i abandoned gtk.

 I wanted to do window-in-window mdi. Any mention leads to
 howls of denial that you don't need it or it's unuseable
 because you can't use the app on a dual-head setup.
 Well, i wanted to just a drag an embedded mdi document with
 a mouse so that it magically becomes a top-level window.
 Likewise, i could drag it over the mdi container and it
 would become re-embedded and managed by the mdi window
 manager.

 I wanted to have a widget that acts as a window manager
 complete with icon handling. Then i could use a family
 of related applications within that shell widget, and
 have them all appear there in the same state next time
 i log on.

 I wanted to make independent X apps such as editors
 become embedded in my own widgets. I still think about
 that area.

 I wanted the whole thing to run well on a 10MHz 8-bit cpu.
 It still would if i omit scaleable shaded 3D buttons and
 do another suitable small windowing system. Memory limits
 for a full unicode font and various window buffers would be
 pushing it a bit. I still aim for that efficiency.

 I've read the qt book and tried qt and read the stroustrop
 book multiple times and know everything about C++ but remain
 unimpressed at the complexity 

Re: Slow 2D with radeon KMS

2010-01-30 Thread Clemens Eisserer
 I just tried KMS with radeon driver, and 2D seems notably slow.
 Widgets takes time to draw, scrolling in Dolphin or Firefox lags, as if
 some 2D acceleration was not working alright.

I experience the same, running Ubuntu-10.4-alpha2 on my HD3850.
Logs look quite normal as far as I can tell.
I experience very low performance, and many visual artifacts even when
running without composition manager.

Should I open a bug report about the corruptions and attach
logs/screenshots, or are those problems known and to be expected due to
the experimental nature of radeon-kms?

- Clemens


Re: Netbooks... really slow with OGL... can someone help me with a solution?

2009-10-29 Thread Clemens Eisserer
Hi,

 I am a game programmer, and I live in a country where normal computer has
 configurations that remember netbooks (like having VIA or SIS chipsets...).
 Because of that I started making a game using memory blitting instead of
 OpenGL, but I found that on X it runs really slow without OpenGL...
 I was told that it is because you need to give a bitmap to X, that then copy
 it to the vram, even on fullscreen.
 So in my mind the solution would be have direct access to the vram, I found
 out then the existance of XDGA...
 But...
 Does XDGA still exists? It is still shipped with X?
 If the awnser to any of the previous questions are no (or if you want to
 awnser anyway), there are a alternative solution?

DGA is outdated and usually no longer supported.

The typical way to do that kind of thing is to upload your contents into
pixmaps, and later blit those pixmaps using XRenderComposite -
this way your bitmaps can even have alpha, very much like OpenGL.
Capable drivers will usually do the whole operation on the GPU; if not,
you'll end up in pixman's SSE2-optimized blitting routines.

If you need direct access to your contents using the CPU (for effects
or whatever), for large bitmaps it usually pays off to use the SHM
extension - have a look at XShmPutImage.
But I really recommend uploading your contents once, and later creating
effects by simply compositing your pre-uploaded pixmaps.

Good luck, Clemens


Re: Server crash uploading multiple glyphs at once with XRenderAddGlyphs

2009-08-20 Thread Clemens Eisserer
Hi Dave,

Should I file a bug report about this on bugzilla?
What additional data would be useful to track the issue down?
Could this be used as a security hole?

Thanks, Clemens

2009/8/16 Clemens Eisserer linuxhi...@gmail.com

 Hi Dave,

  Can you get valgrind traces by any chance? not sure we can tell
  much other than memory got corrupted from this.

 It seems at least for this case, sha1_block_data_order is reading data
 from random locations:

 ==17163== Invalid read of size 4
 ==17163==at 0x439E91A: sha1_block_data_order (sx86-elf.s:76)
 ==17163==by 0xFA42F463: ???
 ==17163==  Address 0x4815360 is 0 bytes after a block of size 4,096 alloc'd
 ==17163==at 0x4028D7E: malloc (vg_replace_malloc.c:207)
 ==17163==by 0x80AE954: Xalloc (utils.c:1056)
 ==17163==by 0x80AA42D: AllocateInputBuffer (io.c:1017)
 ==17163==by 0x80A9545: InsertFakeRequest (io.c:498)

 I had a look at the source but I have a pretty hard time figuring out
 whats going on there :-/
 The crash appears with a quite large framework I am working on, quite
 hard to build your own. I could provide a binary package or wireshark
 protocol if that would help?

 The valgrind log is attached, hope it helps a bit.

 Thanks, Clemens

 PS: I've found another problem when uploading multiple glyphs at once
 causes a memleak. I've attached a short testcase - fills up my 3GB
 pretty quick.
 There's a malloc in CreatePicture which is in some cases never freed,
 called at render.c : 1147.
 But again, I don't understand why it works sometimes and sometimes not :-/


Re: [cairo] [xlib] Use server-side gradients.

2009-08-18 Thread Clemens Eisserer
I know this is a bit off-topic, but it came recently to my mind:

Wouldn't it be possible to provide half-accelerated linear gradients
by simply rendering the gradient into a temporary 1x? surface, and
using the various repeat modes + the gradient transformation on that
surface?
This way destination surfaces could stay untouched by the CPU, and
there would be no need to change drivers in any way?

- Clemens

2009/8/18 Chris Wilson ch...@chris-wilson.co.uk:
 On Tue, 2009-08-18 at 11:39 -0700, Carl Worth wrote:
 This change to NEWS helps a bit, I suppose. But still, let's not skip
 the mailing list, OK? (Or if I missed a message, I'll just blame my
 old MUA and you can ignore me.)

 My apologies. I considered the risk of this change to be minimum since
 to disable server-side gradients would take just a single line. When I
 pushed the commit I tried to ping the relevant developers and even let
 the Mozilla developers know about the impending performance regression.
 I consider the fact that cairo is hiding gradients from the drivers is
 allowing *their* bugs to stagnant, and as our traces show they are in
 widespread use across the desktop and so deserve acceleration.

 However, I forgot to do this in email and so the warnings went amiss.
 -ickle

 ___
 cairo mailing list
 ca...@cairographics.org
 http://lists.cairographics.org/mailman/listinfo/cairo



Server crash uploading multiple glyphs at once with XRenderAddGlyphs

2009-08-14 Thread Clemens Eisserer
Hi,

I tried to enhance the glyph-upload paths of my java2d-xrender backend
by uploading multiple glyphs at once; however, doing so makes
xorg-1.6.99.1 (Fedora rawhide ~20090810) crash quite frequently.

I experienced those crashes with both the vesa and the intel driver. Twice
the crashes happened in glibc's memory management, and another time in
some SHA1 assembler (I guess the code which checks for duplicate glyphs).

Are those problems already known?

- Clemens

One of those stack-traces:

malloc_consolidate (av=<value optimized out>) at malloc.c:5114
5114  nextsize = chunksize(nextchunk);
(gdb) bt
#0  malloc_consolidate (av=<value optimized out>) at malloc.c:5114
#1  0x00b17359 in _int_malloc (av=<value optimized out>,
bytes=<value optimized out>) at malloc.c:4348
#2  0x00b198ee in __libc_malloc (bytes=<can't compute CFA for this frame>)
at malloc.c:3638
#3  0x080a87ca in Xalloc (amount=3856) at utils.c:1070
#4  0x08088f43 in AllocatePixmap (pScreen=0x84faa18,
pixDataSize=1879048192) at pixmap.c:116
#5  0x0049e9e5 in fbCreatePixmapBpp (pScreen=0x84faa18, width=136,
height=7, depth=<can't compute CFA for this frame>) at fbpixmap.c:53
#6  0x0049eaef in fbCreatePixmap (pScreen=0x84faa18,
width=<can't compute CFA for this frame>) at fbpixmap.c:95
#7  0x081a8b1f in miGlyphs (op=3 '\003',
pSrc=<can't compute CFA for this frame>) at glyph.c:683
#8  0x08118a8d in damageGlyphs (op=176 '\260',
pSrc=<can't compute CFA for this frame>) at damage.c:721
#9  0x081a8fd7 in CompositeGlyphs (op=176 '\260',
pSrc=<can't compute CFA for this frame>) at glyph.c:632
#10 0x08112f9f in ProcRenderCompositeGlyphs (client=0x97db6d8)
at render.c:1415
#11 0x0810eb44 in ProcRenderDispatch (client=0xc1d3b0) at render.c:2041
#12 0x0806ee37 in Dispatch () at dispatch.c:426
#13 0x08063115 in main (argc=6, argv=0xbfb082f4, envp=0xbfb08310)
at main.c:282
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Skype causes a segmentation fault (1.6.2 RC1) and restarting of the X Server

2009-06-18 Thread Clemens Eisserer
Hi Miro,

I don't get that crash.
If you know how to use gdb and can ssh into the machine, it would be
great if you could install the debug packages and debug it yourself:
http://www.x.org/wiki/Development/Documentation/ServerDebugging

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Why has XRenderAddGlyphs only a single images pointer?

2009-05-04 Thread Clemens Eisserer
Hi,

Until now I've used XRenderAddGlyphs with only one glyph at a time, but
to improve efficiency I would like to upload multiple glyphs per call.
What confuses me, however, is why there's only a single images pointer
(and not a char **images) to pass the glyph image data.

How is that supposed to be done?
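
My understanding is that the images are simply concatenated into one
buffer, in the same order as the glyph IDs and XGlyphInfos - the 32-bit
scanline padding below is an assumption based on what Xft produces for
A8 glyphs, so please check the Render spec before relying on it. The
glyph_img struct and function names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical per-glyph description: an 8-bit alpha image, w x h
 * pixels, tightly packed. */
struct glyph_img {
    int w, h;
    const uint8_t *pixels;
};

/* Scanline pitch padded to a 32-bit boundary (assumption, see above). */
static int padded_pitch(int w) { return (w + 3) & ~3; }

/* Concatenate n glyph images into one contiguous buffer and return the
 * total byte count - this buffer would be passed as the single `images`
 * pointer (with `nbyte_images`) to XRenderAddGlyphs. */
static int pack_glyphs(const struct glyph_img *g, int n, uint8_t *out)
{
    int off = 0;
    for (int i = 0; i < n; i++) {
        int pitch = padded_pitch(g[i].w);
        for (int y = 0; y < g[i].h; y++) {
            memset(out + off, 0, pitch);                   /* clear padding */
            memcpy(out + off, g[i].pixels + y * g[i].w, g[i].w);
            off += pitch;
        }
    }
    return off;
}
```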

Thank you in advance, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Xserver 2D Performance Decrease

2009-03-23 Thread Clemens Eisserer
Hi,

 I am using a program that relies on 2d APIs to draw lines and
 circles, and the performance of this program had a huge decrease since my
 change to Debian-lenny.
Most likely you are now using EXA (newer intel drivers default to it),
a new acceleration architecture which no longer accelerates lines
or circles - so they fall back to software.

You can still use XAA with the intel driver:

Section "Device"
    Identifier "Videocard0"
    Driver     "intel"
    Option     "AccelMethod" "XAA"
    Option     "XAANoOffscreenPixmaps" "true"
EndSection

However, it's not certain how long this will keep working; there are
already reports that it crashes when OpenGL apps are used.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Performance of XRenderCompositeTrapezoids in the aliased case

2009-02-02 Thread Clemens Eisserer
Hi,

1.) As far as I have seen, the only way to get aliased rasterization
with XRenderCompositeTrapezoids is to pass PictStandardA1 as the mask
format.
However, a lot of hardware can't use A1 as a mask, leading to a fallback.

On my 945GM I get, for a 100x100 circle consisting of ~180 traps:
A8: 20ms
A1: 120ms
no mask format: 270ms

Wouldn't it make sense to use an A8 mask instead and tell pixman to
render aliased?
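
To make concrete what rendering aliased into an A8 mask would mean,
here is a minimal sketch (my own illustration, not pixman code): each
coverage sample is quantized to fully opaque or fully transparent, so
the A8 surface carries the same information an A1 mask would:

```c
#include <assert.h>
#include <stdint.h>

/* Quantize 8-bit antialiased coverage to aliased (0 or 255) coverage:
 * any sample at least half covered becomes fully opaque. */
static void alias_a8(uint8_t *mask, int n)
{
    for (int i = 0; i < n; i++)
        mask[i] = (mask[i] >= 128) ? 255 : 0;
}
```

The composition path then stays on the A8 fast path the hardware
already supports, with no extra fallback.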

2.) What do you think about a data structure through which EXA drivers
could tell EXA which features they support?
This way EXA could e.g. choose to use A8 instead of A1 only when really
needed.
This could help in various cases to decide which route to take.

Thanks, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Compiling xf86-input-keyboard fails

2009-02-01 Thread Clemens Eisserer
Hi,

I just tried to compile xorg/git with the xorg-git.sh shell script,
and it fails in xf86-input-keyboard with the following error:

kbd.c: In function 'KbdProc':
kbd.c:567: warning: passing argument 1 of 'InitKeyboardDeviceStruct'
from incompatible pointer type
kbd.c:567: warning: passing argument 2 of 'InitKeyboardDeviceStruct'
from incompatible pointer type
kbd.c:567: warning: passing argument 3 of 'InitKeyboardDeviceStruct'
from incompatible pointer type
kbd.c:567: warning: passing argument 4 of 'InitKeyboardDeviceStruct'
from incompatible pointer type
kbd.c:567: error: too many arguments to function 'InitKeyboardDeviceStruct'
kbd.c: In function 'PostKbdEvent':
kbd.c:699: error: 'KeyClassRec' has no member named 'state'
kbd.c:702: error: 'KeyClassRec' has no member named 'state'
kbd.c:714: error: 'KeyClassRec' has no member named 'state'
kbd.c:726: error: 'KeyClassRec' has no member named 'curKeySyms'
kbd.c:727: error: 'KeyClassRec' has no member named 'curKeySyms'
kbd.c:728: error: 'KeyClassRec' has no member named 'curKeySyms'
kbd.c:791: error: 'KeyClassRec' has no member named 'modifierMap'
kbd.c:847: error: 'KeyClassRec' has no member named 'modifierMap'
kbd.c:854: error: 'KeyClassRec' has no member named 'modifierKeyMap'
kbd.c:854: error: 'KeyClassRec' has no member named 'maxKeysPerModifier'
kbd.c:857: error: 'KeyClassRec' has no member named 'modifierKeyMap'
kbd.c:857: error: 'KeyClassRec' has no member named 'maxKeysPerModifier'
kbd.c: At top level:
kbd.c:863: warning: 'ModuleInfoRec' is deprecated

Any idea what could be wrong?

Thank you in advance, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Compiling xf86-input-keyboard fails

2009-02-01 Thread Clemens Eisserer
Hi Chris,

 xf86-input-keyboard has been broken for more than a week now.  Worse,
 the server segfaults on launch.  Input folks, is someone working on
 landing fixes for this breakage?
Yes, I experience the same - but I thought this was caused by the
missing keyboard stuff.

Thanks for your reply, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: client-side font rendering very very slow in X.org xserver 1.5.3 w/r200: massive fetches from VRAM, why?

2009-01-29 Thread Clemens Eisserer
 Are you using the same version of kde on both systems?  IIRC kde 4
 switched to using a1 surfaces for font rendering which isn't currently
 accelerated by EXA.  Notice the _a1 fetch below.
I've seen quite a few reports about slow EXA which turned out
to be caused by the A1 mask format (I haven't seen anybody using A4).

Wouldn't it be possible to redirect A1 allocations to an A8 pixmap
and convert on upload and download?
At least for text, where the pixmaps can't be accessed anyway?
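
A minimal sketch of such a conversion (my own illustration - the real
bit order depends on the server's BitmapBitOrder; LSB-first is assumed
here):

```c
#include <assert.h>
#include <stdint.h>

/* Expand one scanline of A1 (1 bit per pixel, LSB-first) into A8. */
static void a1_to_a8(const uint8_t *a1, uint8_t *a8, int npix)
{
    for (int i = 0; i < npix; i++)
        a8[i] = ((a1[i >> 3] >> (i & 7)) & 1) ? 0xff : 0x00;
}

/* Pack A8 back down to A1; any non-zero alpha sets the bit. */
static void a8_to_a1(const uint8_t *a8, uint8_t *a1, int npix)
{
    for (int i = 0; i < npix; i++) {
        if (a8[i])
            a1[i >> 3] |= (uint8_t)(1 << (i & 7));
        else
            a1[i >> 3] &= (uint8_t)~(1 << (i & 7));
    }
}
```

Since A1 data that came in through the conversion only ever holds 0x00
or 0xff per sample, the round trip is lossless.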

Thanks, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: libXrender - documentation?

2009-01-26 Thread Clemens Eisserer
 I can't see any such calls of XRender* functions in the bits of xft that I
 have been looking at (notably in xftcore.c).
Because xft deals with glyphs, and for performance/bandwidth reasons
glyphs are handled in a special manner.
That is what I meant with "using the XRender*Glyphs functions":
xftglyphs: XRenderAddGlyphs
xftrender: XRenderCompositeText...

I mentioned trapezoids only because everything else (AddGlyphs and
XPutImage) is not really server-side antialiasing, but client-side
rasterization with server-side blending. (For glyphs that doesn't
matter: they are rasterized client-side once, uploaded, and then used
again and again by ID.)

However, as far as I know xft already has an XRender-aware backend
(using the XRender*Glyphs functions), as well as legacy support for
pre-XRender servers.
I have never used xft myself, but even without RENDER knowledge one
can see that:
- probably all functions with *Glyph* in the name are text related
- there are glyph sets, to which glyphs can be uploaded and from which
they can be rendered
- the relevant functions can be found by searching xft's source.

I know that it's not easy, but one can't expect a step-by-step
tutorial for such low-level stuff.

 For sure the Opera/QT combination is not doing anything like that - all
 the calls that actually pass glyphs to/from the server use good ol' Xlib.
 Though there is evidence that xft does use Xrender elsewhere in its
 workings.
I don't know about Opera, but I am pretty (99.5%) sure Qt uses RENDER
- and if Opera uses Qt's graphics context for drawing, it will use it
implicitly.

 But who is actually responsible for the development/maintenance of xft?
 For sure they do not seem to hang around on this list, though I gather
 they are within the overall Xorg structure somewhere.
xft is more or less a sample implementation, and as far as I know it's
not used a lot.
As far as I know Qt does its own glyph handling, as does GTK with Pango.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: libXrender - documentation?

2009-01-26 Thread Clemens Eisserer
 I know that its not easy, but someone can't expect a step-by-step
 tutorial for such low-level stuff.

 hmm - the obvious conclusion is that xft is just a minor/useless library.
 Perhaps it should be removed, then.
The whole discussion is about RENDER's documentation, not xft.

 Or was your comment directed to potential users (probably not, since
 it isn't polite to treat users disrespectfully).
I've written quite a bunch of answers to Charles' questions (even some
off-list), so I started to become a bit impatient.
At least I tried to help, instead of coming around, missing the point
and telling others how to behave :P ;)

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: libXrender - documentation?

2009-01-26 Thread Clemens Eisserer
 The whole discussion is about RENDER's documentation, not xft.

 very well, then apply my comments to RENDER (they're both presented
 as libraries that no one should try to use without some other library
 as a sanitizing layer).

Well, at least the sanitizing layers still depend on libXrender...

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: libXrender - documentation?

2009-01-26 Thread Clemens Eisserer
 maybe i'm missing the point: what's the point in a library that needs
 another library in order to be used? i don't really think it's a good
 idea. maybe it would be better to implement the library in such a way
 that it can be used without another sanitizing library,

It's the same as with xcb or Xlib.
Both aren't used a lot directly by application programmers; instead,
programmers use Qt4/Gtk or Java2D as sanitizing libraries.
Xlib's task is to allow access to the X11 protocol from C, exactly what
libXrender does for the RENDER extension.

 that would cost more resources.
libXrender only contains X11 protocol generation and a few definitions.
If Cairo/Qt4/... had to generate the protocol themselves, it
would probably lead to a lot of duplicated code and bloat.

 If the interfaces are documented, it's possible to change the library
 implementations without breaking applications.
Well, that's exactly what the protocol specification is about; it
contains everything you need to know to implement a library.

However, the protocol specification is not a programmer's manual on how
to use RENDER or libXrender, and at least from what I've heard, this is
what most people are asking for (including myself, about a year ago ;) ).

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Intel Graphics package and patch requirments

2009-01-25 Thread Clemens Eisserer
Hi,

 So, IMO, it's unfair to call 2008Q4 release stable and recommended to
 ordinary users/OSVs, at least for gma950 users.
I have to agree, even 2.6.1 is still far away from release quality.

I hope the whole GEM-ification is finished soon, before distributions
start deploying that driver.
After all, a few RENDER acceleration bugs I reported haven't been
looked at either ... so hopefully, after the transition is finished,
the devs will have time to look at other stuff too :-/

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: libXrender - documentation?

2009-01-22 Thread Clemens Eisserer
Most likely I will write some XRender documentation as part of my
bachelor thesis.

 In a subsequent thread 2D antialiasing? on this list, I was bemoaning
 the fact that antialiasing by that method would waste huge amounts of
 bandwidth if the client were separated from the Xserver by some slow
 network, and someone claimed to me, offlist, that Xrender provided
 server-side antialiasing.

 So I wanted to verify that claim. Now that this list has pointed me to
 Keith Packard's The X Rendering Extension, I have done a quick scan of
 that, but can still find no mention of antialiasing. Moreover, that
 describes the protocol, rather than the libXrender interface that is more
 conveniently used to access it,
Yes, that was me ;)

Antialiasing is usually done by transferring the geometry you intend
to render into a mask pixmap, which can be done with:
- XPutImage (client-side geometry rasterization)
- XRenderAddTraps

and then doing a composite operation with that mask,

or by using an implicit mask with:
- XRenderCompositeTrapezoids
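
Either way, the server then uses the mask as per-pixel coverage in the
compositing equation. A single-channel sketch of OVER with a mask (my
own illustration of the arithmetic, 8-bit values):

```c
#include <assert.h>
#include <stdint.h>

/* Divide by 255 with rounding, as common in 8-bit compositing math. */
static uint8_t div255(unsigned v) { return (uint8_t)((v + 127) / 255); }

/* One channel of the OVER operator with a coverage mask:
 * dst = src*alpha*mask + dst*(1 - alpha*mask), everything in 0..255. */
static uint8_t over_masked(uint8_t src, uint8_t alpha, uint8_t mask,
                           uint8_t dst)
{
    unsigned a = div255((unsigned)alpha * mask);   /* effective alpha */
    return (uint8_t)(div255((unsigned)src * a) +
                     div255((unsigned)dst * (255 - a)));
}
```

This is why no background tracking is needed: the destination's current
contents enter the equation directly on the server.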

However, as far as I know xft already has an XRender-aware backend
(using the XRender*Glyphs functions), as well as legacy support for
pre-XRender servers.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: libXrender - documentation?

2009-01-21 Thread Clemens Eisserer
Hi Charles,

Unfortunately XRender is not very well documented; probably the best
thing available is the specification.
The reason is that most programmers use higher-level APIs like Cairo
or Qt4 to access XRender, so unless you have a good reason to mess
with it directly, I recommend using Cairo too.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: dga does't work correctly

2009-01-08 Thread Clemens Eisserer
Which video driver are you using?
You could try switching to XAA, if you're using an open-source driver.

- Clemens

2009/1/8 Pawel K pawlac...@yahoo.com:
 Hello

 It looks like DGA is not fully operational on my system:
 I have the following in

 /etc/X11/xorg.conf:
 SubSection  extmod
 #  Optionomit xfree86-dga   # don't initialise the DGA extension
 EndSubSection

 Xorg.0.log:
 (II) Loading extension XFree86-DGA

 xdpyinfo:
 number of extensions:31
 ...
 XFree86-DGA
 ...

 I'm using the following versions:
 xorg-server-1.3.0.0-r6
 xorg-x11-7.2
 mplayer-1.0_rc2_p27725-r1
 xmame-0.106
 linux kernel 2.6.25.14

 When I launch xmame (as root) I get the following info message:
 XDGAOpenFramebuffer failed
 Use of DGA-modes is disabled

 When I launch:
 mplayer -vo dga

 I get the follwing info:
 [swscaler @ 0x8a49f40]using unscaled yuv420p - rgb32 special
 converter
 vo_dga: Mode: depth=15, bpp=16, r=007c00, g=0003e0, b=1f, not
 supported (-bpp 15)
 vo_dga: Mode: depth=16, bpp=16, r=00f800, g=0007e0, b=1f, not
 supported (-bpp 16)
 vo_dga: Mode: depth=24, bpp=24, r=ff, g=00ff00, b=ff, native (-
 bpp 24)
 vo_dga: Framebuffer mapping failed!!!
 FATAL: Cannot initialize video driver.
 vo_dga: Mode: depth=24, bpp=32, r=ff, g=00ff00, b=ff, native (-
 bpp 32)
 VO: [dga] 400x300 = 400x300 BGRA
 vo_dga: DGA 2.0 available :-) Can switch resolution AND depth!
 vo_dga: Selected hardware mode  640 x  480 @  60 Hz @ depth 24, bitspp
 32.
 vo_dga: Video parameters by codec: 400 x 300, depth 24, bitspp 32.
 VO: [dga] 400x300 = 400x300 BGRA
 vo_dga: DGA 2.0 available :-) Can switch resolution AND depth!
 vo_dga: Selected hardware mode  640 x  480 @  60 Hz @ depth 24, bitspp
 32.
 vo_dga: Video parameters by codec: 400 x 300, depth 24, bitspp 32.
 vo_dga: Framebuffer mapping failed!!!
 FATAL: Cannot initialize video driver.
 VO: [dga] 400x300 = 400x300 BGRA
 vo_dga: DGA 2.0 available :-) Can switch resolution AND depth!
 vo_dga: Selected hardware mode  640 x  480 @  60 Hz @ depth 24, bitspp
 32.
 vo_dga: Video parameters by codec: 400 x 300, depth 24, bitspp 32.
 vo_dga: Framebuffer mapping failed!!!
 FATAL: Cannot initialize video driver.

 FATAL: Could not initialize video filters (-vf) or video output (-vo).

 Do you know what can be wrong ?

 thanks for any help


 ___
 xorg mailing list
 xorg@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/xorg

___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Profiling redraws with Xorg 1.5.99.3

2008-12-26 Thread Clemens Eisserer
 Could be. Shame the new optimised implementation of the private lookup
 had to be reverted on the 1.5 branch.
It's not a shame; it's just that ABI changes don't fit into minor releases.

 I'm running 1.5.99, and it would appear that the patch is applied. It is
 a shame dixLookupPrivate is consuming so many cycles still.
I doubt the new implementation could consume 13% of total cycles.
It's _very_ quick and should consume only a few cycles.

 The patch for the performance issue is not applied on the server I'm
 using, so I'll try applying that and rebuilding. Thanks for the pointer!
Could be. When did you pull your version?
I thought the patch was applied to 1.6 quite a while ago.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Bad 2D performance with intel driver on Mobile GM965/GL960

2008-12-20 Thread Clemens Eisserer
Hmm, I guess 11.1 uses intel-2.5, which has (at least on my 945GM)
quite a number of performance problems.
Xorg-7.3 (xserver 1.5.x) also has quite a bad performance bug in
dixLookupPrivate, which will only be fixed in 1.6 because of API
issues.

If you don't use an XRender-based compositing manager, reverting to XAA
could probably help:

Section "Device"
    Identifier "Videocard0"
    Driver     "intel"
    Option     "AccelMethod" "XAA"
    Option     "XAANoOffscreenPixmaps" "true"
EndSection

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: 2D antialiasing?

2008-12-12 Thread Clemens Eisserer
 And that is no problem at all, until you want to throw away the glyph that
 was there before and write a new glyph in its place. And then you need to
 know what the original background behind the old glyph was, but the server
 does not have that information, and so it has to be kept in the
 application, which is precisely what Xrender was trying to avoid :-( .

Well, I guess I missed your point. What does this special use-case
have to do with antialiasing in general?
If the glyph were not antialiased, you would have exactly the same
problem ;)

In your case you can simply save a copy in an additional pixmap
before you add any glyphs, and copy the parts you need back from that
pixmap.
However, I guess it isn't that easy ;)

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: 2D antialiasing?

2008-12-11 Thread Clemens Eisserer
  From the plain X Server POV, antialiasing is always going to be hard,
 because to do it you need to know the background color or pixmap, and the
 Xserver does not keep track of how you had earlier set it, so it is up to
 individual toolkits to keep track, and not all of them do.
Well, using XRender nobody needs to keep track of anything; it's just
composition with a mask (which is generated by the CPU in the case of
trapezoids).

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Slow exaOffscreenAlloc ?

2008-12-09 Thread Clemens Eisserer
Hi again,

 We're moving from EXA to UXA, which fixes a lot of the performance
 problem by having an allocator that doesn't suck.  The remainder of the
 fix would be accelerating trapezoids.

Any plans to merge UXA and EXA?
Having all the code duplicated doesn't seem like a very wise idea.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: EXA and migration [was: shrink xrender featureset]

2008-11-25 Thread Clemens Eisserer
 That's not 'strong vocabulary' but simply baseless flamebait.

Would it make sense to implement some fallback optimizations like:
- copying pictures without drawables (gradients) to a temporary surface,
if the driver supports composition?
- supporting solid write-only operations (X11 core drawing) for which EXA
does not provide hooks (e.g. diagonal lines) through a temporary mask,
if the driver supports composition?

For both cases I saw ping-pong migration killing performance, and I
had to implement workarounds in my application myself, which are
built on assumptions about the acceleration architecture's behaviour
and sometimes cause degraded performance.

By the way thanks a lot for the EXA improvements in 1.5/1.6 :)

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: [IDEA] shrink xrender featureset

2008-11-23 Thread Clemens Eisserer
 Trapezoids for example would require implementing a rasteriser in shaders.
 Pretty much everything that doesn't get accelerated these days requires
 shaders.
 Tomorrow someone might come and ask for a different type of gradient, why
 even bother?

Well, if you let me decide between software rendering on the client or
software rendering on the server, I would prefer the latter.
Furthermore, how would you generate AA geometry if not with
trapezoids - would you XPutImage the geometry to the server?

 Fallbacks are rarely efficient, iirc intel GEM maps memory with write
 combining, that isn't very friendly for readback.
For gradients you don't really need to do fallbacks, and for
trapezoids you can use a temporary mask.
This is all write-only; it's just a matter of how the driver/acceleration
architecture handles it.

 I intentionally brought this up before people actually implement this. The
 question is why not use opengl or whatever is available to do this? You're
 putting fixed stuff into a library that only hardware with flexible shaders
 can do, why not use something that just exposes this flexibility in the
 first place?
Well, first of all - because it's already there... and, except for some
not-so-mature areas, it works quite well.
Second, Java has an OpenGL backend, and currently I am not sure whether
even the current NVidia drivers are able to run it - and I am pretty
sure _none_ of the open drivers can.
I guess XRender has the advantage that drivers are simpler to
implement compared to a full-fledged OpenGL implementation.

Once OpenGL is stable, mature, and scalable enough to run dozens of
apps simultaneously, it should not be a problem to host XRender on top
of it.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: [IDEA] shrink xrender featureset

2008-11-22 Thread Clemens Eisserer
Hi,

 Currently there exist several operations in xrender that are better
 off client side or through some other graphics api (imo). Think of
 trapezoid rasterisation, gradient rendering, etc.
 Doing this stuff client side avoids unforeseen migration issues and
 doesn't create any false impressions with the api users.
Well, in my opinion this is not a question of where to do the stuff
(client/server), but rather how.
Both trapezoids and gradients cause migration because EXA currently
has quite a strict view of acceleration:
if something can be done in hardware it is done in hardware, otherwise
everything is migrated out.

You could still do on the server the same thing you would do on the
client side.
Just imagine copying gradients or traps to a temporary surface before
you use them in a composition operation - it would be the same as
client side, except you don't need to copy everything around.
Furthermore, drivers can often fall back efficiently, like the Intel
drivers with GEM.

If you omit gradients or trapezoids, you would also have to transport a
lot of stuff over the wire - not really nice.

 My suggestion would be to deprecate everything, except solid,
 composite, cursor stuff and glyphs. The idea is to stop doing
 seemingly arbitrary graphics operations that end up causing slowness
 most of the time (if not worked around properly). At this stage
 noone accelerates these operations, so there can be no complaints
 about that.
Well, at least NVidia plans to accelerate gradients as well as
trapezoids in their proprietary drivers.
Intel also has plans to optimize gradients with shaders.

My opinion is that RENDER is quite fine, but there are some parts
where drivers are lacking.
Hopefully the situation will improve soon, at least for gradients.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Better gradient handling (migration/fallbacks)

2008-11-14 Thread Clemens Eisserer
Hi,

Do you think there is any chance of getting the gradient hooks into 1.6?
It would not be too bad if no driver were able to accelerate them for
now, but at least users would not need xserver 1.7 to get accelerated
gradients.
Distributors usually tend to update drivers, but they almost never
switch to a new xorg major version.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: [PATCH] When converting from double to fixed, round carefully.

2008-11-14 Thread Clemens Eisserer
Hi,

 Perhaps we should extend Render to include 64-bit floating point transforms...
That would be really great.
I am doing some tricks with mask transformations and am having quite a
hard time with the fixed-point limitations, especially for large scales
(like 100x).
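
For illustration, this is the difference between truncating and careful
rounding for 16.16 fixed point (a sketch; dbl_to_fixed_trunc mirrors
what the classic XDoubleToFixed macro does, and the function names are
my own):

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t XFixed_t;  /* 16.16 fixed point, like Render's XFixed */

/* Truncating conversion, like the classic XDoubleToFixed macro. */
static XFixed_t dbl_to_fixed_trunc(double d)
{
    return (XFixed_t)(d * 65536.0);
}

/* Careful conversion: round to the nearest representable 16.16 value
 * (half away from zero), so the error is at most half an ulp. */
static XFixed_t dbl_to_fixed_round(double d)
{
    double s = d * 65536.0;
    return (XFixed_t)(s >= 0.0 ? s + 0.5 : s - 0.5);
}
```

At a 100x scale even a sub-ulp error per matrix entry gets magnified
into visible drift, which is why the rounding matters here.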

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Better gradient handling (migration/fallbacks)

2008-11-13 Thread Clemens Eisserer
Hi,

I've experienced some performance problems with gradients when working
on the xrender/java2d backend.

A typical problematic case is when the mask and destination picture are
in VRAM, and a gradient is used as source.
As far as I understand, this causes mask and dst to be moved out into
sysmem, the composition is done by pixman, and at the next accelerated
operation the whole thing is moved back.
In profiles I saw that about 35% of total cycles were spent in
moveIn/moveOut and 5% in gradient generation itself, for a rather
boring UI like the following:
http://picasaweb.google.com/linuxhippy/LinuxhippySBlog?authkey=tXfo8RSnq4s#5224085419010972994

What I did to work around the problem was to use a temporary pixmap:
copy the gradient to the pixmap and use that pixmap later for
composition.
This means only moveIns, and it enhanced performance a lot, about 3-4x
for the UI workload mentioned above.

This seems to be an acceptable workaround, but it causes an unnecessary
burden for UMA architectures like Intel+GEM, so doing this by default
should be up to the driver.
Would it be possible to pass gradients down to the driver, to allow
the driver to decide what to do with the gradient, or even provide
acceleration for it?
How complex would it be to provide the necessary hooks?
As far as I know, two-stop gradients can often be accelerated with some
texture-mapping tricks, and everything more complex could still be
done with shaders.

I am no xorg/exa expert, so maybe I just do not understand things and
draw wrong conclusions.

Thanks, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Better gradient handling (migration/fallbacks)

2008-11-13 Thread Clemens Eisserer
 We just need to accelerate gradients, and is where any effort in
 software should occur.  It's on our schedule, but not for quite a while.
 Setting up the X Server to allow drivers to request gradients was easy
 last time I did it, though I've misplaced the branch it looks like.
 Then someone would just have to write the shader for it, and for
 915-class hardware that shouldn't be hard.
Glad to know it's on your schedule.
Of course accelerated gradients would be even better :)

Thanks for your reply, Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: 3d very slow on 945GM using intel driver

2008-10-21 Thread Clemens Eisserer
How well do simple 3D OpenGL apps like tuxracer or openarena perform?
I can run both without trouble on Windows, but they are not really
playable on Linux :-/

- Clemens

2008/10/21 Adam Lantos [EMAIL PROTECTED]:
 Now I tried with vblank disabled, and voilá! - 860fps.
 So I guess that was the problem, thanks for pointing it out

 cheers,
  Adam


 On Tue, Oct 21, 2008 at 1:36 PM, Adam Lantos [EMAIL PROTECTED] wrote:
 Hello Keith,


 It topped at about 70-75fps max. Now it produces only ~50-56fps...

 These 'no drirc found' messages could be my problem? (See glxinfo output 
 below)

 distro: gentoo
 kernel: vanilla 2.6.27.1
 mesa: 7.2
 xorg-server: 1.5.1
 xf86-video-intel: 2.4.2-r2


 thanks,
  Adam



 [EMAIL PROTECTED] ~ $ LIBGL_DEBUG=verbose glxinfo
 name of display: :0.0
 libGL: XF86DRIGetClientDriverName: 1.9.0 i915 (screen 0)
 libGL: OpenDriver: trying /usr/lib/dri/tls/i915_dri.so
 libGL: OpenDriver: trying /usr/lib/dri/i915_dri.so
 drmOpenDevice: node name is /dev/dri/card0
 drmOpenDevice: open result is 4, (OK)
 drmOpenByBusid: Searching for BusID pci::00:02.0
 drmOpenDevice: node name is /dev/dri/card0
 drmOpenDevice: open result is 4, (OK)
 drmOpenByBusid: drmOpenMinor returns 4
 drmOpenByBusid: drmGetBusid reports pci::00:02.0
 libGL error:
 Can't open configuration file /etc/drirc: No such file or directory.
 Failed to initialize TTM buffer manager.  Falling back to classic.
 display: :0  screen: 0
 direct rendering: Yes
 server glx vendor string: SGI
 server glx version string: 1.2
 server glx extensions:
GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer,
GLX_OML_swap_method, GLX_SGI_swap_control, GLX_SGIS_multisample,
GLX_SGIX_fbconfig, GLX_SGIX_visual_select_group
 client glx vendor string: SGI
 client glx version string: 1.4
 client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_allocate_memory,
GLX_MESA_copy_sub_buffer, GLX_MESA_swap_control,
GLX_MESA_swap_frame_usage, GLX_OML_swap_method, GLX_OML_sync_control,
GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
 GLX version: 1.2
 GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_swap_control,
GLX_MESA_swap_frame_usage, GLX_OML_swap_method, GLX_SGI_swap_control,
GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_visual_select_group
 OpenGL vendor string: Tungsten Graphics, Inc
 OpenGL renderer string: Mesa DRI Intel(R) 915GM 20061102 x86/MMX/SSE2
 OpenGL version string: 1.4 Mesa 7.2
 OpenGL extensions:
GL_ARB_depth_texture, GL_ARB_fragment_program, GL_ARB_multisample,
GL_ARB_multitexture, GL_ARB_point_parameters, GL_ARB_shadow,
GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_cube_map, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle,
GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object,
GL_ARB_vertex_program, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_equation_separate,
GL_EXT_blend_func_separate, GL_EXT_blend_logic_op, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_clip_volume_hint, GL_EXT_cull_vertex,
GL_EXT_compiled_vertex_array, GL_EXT_copy_texture,
GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays,
GL_EXT_packed_depth_stencil, GL_EXT_packed_pixels,
GL_EXT_point_parameters, GL_EXT_polygon_offset, GL_EXT_rescale_normal,
GL_EXT_secondary_color, GL_EXT_separate_specular_color,
GL_EXT_shadow_funcs, GL_EXT_stencil_wrap, GL_EXT_subtexture,
GL_EXT_texture, GL_EXT_texture3D, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_add, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_texture_rectangle,
GL_EXT_vertex_array, GL_3DFX_texture_compression_FXT1,
GL_APPLE_client_storage, GL_APPLE_packed_pixels,
GL_ATI_blend_equation_separate, GL_ATI_separate_stencil,
GL_IBM_rasterpos_clip, GL_IBM_texture_mirrored_repeat,
GL_INGR_blend_func_separate, GL_MESA_pack_invert, GL_MESA_ycbcr_texture,
GL_MESA_window_pos, GL_NV_blend_square, GL_NV_light_max_exponent,
GL_NV_point_sprite, GL_NV_texture_rectangle, GL_NV_texgen_reflection,
GL_NV_vertex_program, GL_NV_vertex_program1_1, GL_OES_read_format,
GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp,
GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_SGIX_depth_texture,

Re: Is interpolation of image-border specified by Render?

2008-10-18 Thread Clemens Eisserer
 Where do these transformation matrices come from?
They were created by the Java AffineTransform class.
I just dumped it and copied it into the C file.

I basically get an AffineTransform instance (set by the user),
invert it and set it on the source.
For the mask I do exactly the same, except that I scale it up by the needed amount.

- Clemens
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Is interpolation of image-border specified by Render?

2008-10-18 Thread Clemens Eisserer
Hi Maarten,

 Do you have a test program or at least share the transformation matrix
 you're using, because i'm curious why it fails so badly.
Yes, I created one: http://pastebin.com/f729a71aa
The testcase works perfectly with pixman (even at much higher
scales), but on intel it seems the mask gets x/y values that are too small.
It would be really interesting to see how other hardware/drivers behave ;)

 Have you tried using a 1x1 mask pixel and scaling that an integer amount?
I used a 16x16 mask without much further thought; I figured it
would give me more headroom before hitting precision limits.
I've now tried it with a 1x1 mask (as in the attached testcase); it's the same.
It only seems to work when the mask is 0.75-1.5x the size of the source;
otherwise the pixel borders differ :-/

Thanks, Clemens


Re: Is interpolation of image-border specified by Render?

2008-10-18 Thread Clemens Eisserer
Hi again,

Sorry, I completely forgot that the black pixels I see are caused by
another bug in the intel driver, which is only visible on i965.
This bug causes areas outside the source-surface bounds to appear black
instead of transparent, so if your driver handles that properly you
shouldn't see the artifacts, even if they are there.
On my 945GM with the latest intel git it looks like this:
http://picasaweb.google.com/linuxhippy/Mask_Transformation#

So it seems the mask is moved a bit left/up; that's why pixels show up
which lie outside the source-surface bounds.
I am currently trying to write a test-case which does not depend on
that behaviour, but it seems not that easy :-/

Thanks for your patience, Clemens


2008/10/18 Maarten Maathuis [EMAIL PROTECTED]:
 On Sat, Oct 18, 2008 at 12:52 PM, Clemens Eisserer [EMAIL PROTECTED] wrote:
 Where do these transformation matrices come from?
 They were created by the Java AffineTransform class.
 I just dumped it and copied it into the C file.

 I basically get an AffineTransformation instance (set by the user),
 inverse it and set it on the source.
 For the mask I do exactly the same, except I scale it up, by the needed 
 amount.

 - Clemens


 What are the precise artifacts you see?



ProcPutImage calls exaDoMoveOutPixmap, 4x slowdown

2008-10-15 Thread Clemens Eisserer
Hi Michel,

Thanks a lot for your investigation.

 Does the attached xserver patch help? Looks like we're syncing
 unnecessarily in the migration no-op case.
Yes, a lot. My benchmark went up from ~12fps to ~19fps, and the
fallback is gone according to the profile.
I am still only at 50% of intel-2.1.1/xorg-server-1.3's throughput;
however, a lot of time is spent inside the intel driver - I guess it's
related to the refactoring to make it GEM-ready.

Thanks again, Clemens


ProcPutImage calls exaDoMoveOutPixmap, 4x slowdown

2008-10-14 Thread Clemens Eisserer
Hello,

I have a use-case where the client uploads 32x32 A8 images to a
256x256x8 pixmap which is later used as a mask in a composite
operation.
The test-case renders at 40fps on xserver-1.3/intel-2.1.1;
however, with the latest git of both I only get ~10-15fps.
Unfortunately I've not been able to create a stand-alone testcase
which triggers this problem :-/

Using sysprof I can see that a lot of time is spent moving data around;
what is very strange is that PutImage seems to cause a readback:
ProcPutImage -> ExaCheckPutImage -> exaPrepareAccessReg -> exaDoMigration -> exaDoMoveOutPixmap -> exaCopyDirty -> exaWaitSync -> I830EXASync
In Composite I see the re-uploading again.

Any idea why ProcPutImage would fall back (there's plenty of free VRAM)?
Are there tools / settings which could help me identify the problem?

Thank you in advance, Clemens


Re: ProcPutImage calls exaDoMoveOutPixmap, 4x slowdown

2008-10-14 Thread Clemens Eisserer
Hi,

 There is of course a fallback system, which is pretty much a memcpy.
Ah, I guess that was the memcpy I always saw in moveIn / moveOut ;)

 intel has never had an UploadToScreen hook.
Ah, interesting - because I saw 4x better performance with intel-2.1.1 /
xserver-1.3.
With that configuration the uploaded data was just memcpy'd to VRAM, but
now it seems to be a readback-put-upload cycle :-/
I'll try to find a small test-case and report a bug.

 I'm just mentioning uxa,
 because they did realize exa wasn't perfect for them (in its current
 form), they just haven't fixed exa yet to be a little smarter for
 non-vram cards.
Yes, I also really hope they merge it back soon.

Thanks again, Clemens


Re: ProcPutImage calls exaDoMoveOutPixmap, 4x slowdown

2008-10-14 Thread Clemens Eisserer
Sorry for the email flood ...

 2.1.1 probably used XAA as default, which didn't try to accelerate much.
No, the results were with EXA enabled - although the XAA results are
again orders of magnitude better ;)

Thanks, Clemens


Re: ProcPutImage calls exaDoMoveOutPixmap, 4x slowdown

2008-10-14 Thread Clemens Eisserer
Hi,

 I think this is because intel does not provide an UploadToScreen hook
 (because it has no vram). It hasn't made (visible) effort to
 reintegrate UXA in EXA,
Btw. I was using EXA without GEM.
Was the UploadToScreen hook removed while preparing the driver for
UXA and/or GEM?
One thing puzzles me: if the intel driver does not define
UploadToScreen, how can pixmaps end up in VRAM at all? Or are there
other, slower paths which take care of this in that case?

Thanks a lot, Clemens


Re: XTerm exits immediately with self-compiled xorg

2008-10-09 Thread Clemens Eisserer
 I think you need to build xserver with --disable-builtin-fonts.
Thanks a lot, that worked :)

 /etc/fonts/ is configuration for the fontconfig library, not the X
 server.
Ah, ok.

Thanks, Clemens


XTerm exits immediately with self-compiled xorg

2008-10-08 Thread Clemens Eisserer
Hello,

I am currently trying to build xorg from git, and it mostly works
except some font stuff.

When I try to start xterm, it quits immediately with the following messages:

The XKEYBOARD keymap compiler (xkbcomp) reports:
 Warning:  Type ONE_LEVEL has 1 levels, but RALT has 2 symbols
   Ignoring extra symbols
Errors from xkbcomp are not fatal to the X server
Warning: Cannot convert string nil2 to type FontStruct  - Xterm

However twm seems to work fine.

I also saw some errors earlier stating that no fonts could be found
for locale xyz. Is Xorg simply missing fonts?
Any ideas how I could fix that, or why xorg does not use the font
configuration of the production xorg in /etc/font/font.conf (the new
one is in /opt/xorg)?
I also tried copying /usr/share/fonts to /opt/xorg/share/fonts, but
without success.

Any ideas are welcome - sorry for bothering you with stuff like this :-/

Thank you in advance, Clemens


Re: pixman with and without SSE2 benchmarks?

2008-09-28 Thread Clemens Eisserer
 I'd bet against that :-). Core 2 has magnificent SSE performance indeed,
 but that's true for MMX just as well.
Well, Core 2 (and AMD K10) gained support for full 128-bit operations
per clock, whereas previous processors only handled 64 bits at once
and took 2 cycles for a 128-bit operation.
MMX is just 64-bit, so it shouldn't matter much there.

Clemens


Re: Poll: Should Xorg change from using Ctrl+Alt+Backspace to something harder for users to press by accident?

2008-09-23 Thread Clemens Eisserer
no