Hi,
Somewhere around Fedora 14 a feature was introduced (either in
xorg or the touchpad driver) to smooth touchpad scroll events.
This causes quite a bit of trouble, as a few applications change their
zoom factor on Ctrl+Scroll (e.g. Firefox or the Geany editor), so when pressing
Ctrl+Something
Hi,
I am running Fedora 15, and xrestop shows 3 clients with 15 MB of pixmaps allocated,
although no PID is shown, and two of the three clients seem to have no
pixmaps (0) allocated.
I am using Xfce 4.8 without a compositing manager.
Any idea what's going on?
Thanks, Clemens
xrestop - Display:
Hi,
Does anybody remember which xorg-server version brought support for
xrender 0.11?
I am thinking about not using some paths for render 0.11; however, if
it's too recent I would probably exclude more xorg-server versions than
required.
Thanks, Clemens
Hi,
I just ran into a couple of bugs because my code assumes sizeof(XID)==4
on all platforms, which doesn't seem to hold on AMD64.
What's the reason for XID being 8 bytes (unsigned long) on AMD64? I
thought an X ID (in the protocol sense) is defined as 4 bytes anyway.
Will Xlib/xcb ever return values
Hi Matthieu,
Thanks for your explanation =)
This is a mistake made 25 years ago or so. It's hard to justify it,
but it is so. A number of things that are 32 bits on the wire
are represented as 64-bit longs at the API level on LP64 machines.
Is it considered more or less safe to store those
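A quick way to make such code robust on both ILP32 and LP64 is to truncate IDs to their 32-bit wire value explicitly. A minimal sketch of that idea (MyXID is a stand-in of mine for Xlib's XID typedef, not the real one):

```c
#include <stdint.h>

/* On LP64 platforms XID is typedef'd to unsigned long (8 bytes), but
 * the X protocol only carries 32 bits per ID, so the upper half is
 * always zero for values returned by Xlib/xcb.  When serializing or
 * hashing an ID, truncate explicitly instead of assuming
 * sizeof(XID) == 4. */
typedef unsigned long MyXID;   /* stand-in for Xlib's XID on LP64 */

static uint32_t xid_on_wire(MyXID id)
{
    return (uint32_t)(id & 0xffffffffUL);
}

/* Comparing two IDs that may have passed through differently-sized
 * storage: compare the wire values, not the raw longs. */
static int xid_equal(MyXID a, MyXID b)
{
    return xid_on_wire(a) == xid_on_wire(b);
}
```

This keeps hashes and serialized IDs identical across 32- and 64-bit builds.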
Hi Samuel,
Usually you won't find log entries about slowdown problems unless
something goes really terribly wrong.
Are you using the proprietary nvidia driver or nouveau?
If you are using the proprietary nvidia driver, could you switch to
nouveau to see if the problem persists?
To really get
Hi,
Could it be https://bugs.freedesktop.org/show_bug.cgi?id=22566 is
related to this?
- Clemens
2010/12/20 ville.syrj...@nokia.com:
Rather than continue my attempts to hack around the issue of incorrect
ClipNotifys during window redirection changes, I decided to tackle the
issue in more
Hi Teika,
As far as I know there hasn't been a lot of development to fix that;
there's not enough pain for now.
Unfortunately it would mean re-implementing a lot of X11's core
protocol as new extensions.
Probably a better way to fix that would be to create X12, or to use
something different like
Hi Xavier,
Here comes that time again: I have to buy a graphic card, I want to buy
a radeon, not too long/wide (limited space in the box), reasonably
recent and powerful, to be used in a debian/sid system.
What do I buy if I want 3D (compiz at least) now, or at least very
soon ?
I guess if
Hi,
I would like to create a window with an ARGB32 visual to reproduce a bug
I experience
with shaped ARGB32 windows when not using a compositing manager;
however, I always get a BadMatch error.
My attempt was to find a 32-bit visual and pass it to XCreateWindow:
XVisualInfo info;
int cnt;
Hi,
The default border-pixmap and colormap are CopyFromParent. That won't
fly if your window and its parent have different depths/visuals. That's
pretty clearly explained in the protocol spec's description of
CreateWindow, AFAICT.
Setting a new colormap as well as a border pixmap helped :)
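For reference, a minimal sketch of the recipe described above (untested here, and assuming a 32-bit TrueColor visual exists); the key point is passing CWColormap and CWBorderPixel explicitly so neither attribute is CopyFromParent'ed from a 24-bit parent:

```c
/* Sketch: creating a 32-bit ARGB window without BadMatch.
 * The default border-pixmap and colormap are CopyFromParent, which
 * fails when the parent has a different depth/visual, so we supply
 * an explicit colormap and border pixel. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "no display\n"); return 1; }

    XVisualInfo info;
    if (!XMatchVisualInfo(dpy, DefaultScreen(dpy), 32, TrueColor, &info)) {
        fprintf(stderr, "no 32-bit TrueColor visual\n");
        return 1;
    }

    XSetWindowAttributes attr;
    attr.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                    info.visual, AllocNone);
    attr.border_pixel = 0;      /* overrides the CopyFromParent default */
    attr.background_pixel = 0;

    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 200, 200,
                               0, info.depth, InputOutput, info.visual,
                               CWColormap | CWBorderPixel | CWBackPixel,
                               &attr);
    XMapWindow(dpy, win);
    XFlush(dpy);
    /* ... event loop would go here ... */
    XCloseDisplay(dpy);
    return 0;
}
```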
Hi Michel,
Pixmaps are reference-counted and the picture takes a reference on the
pixmap, so the pixmap can't go away before the picture.
However this isn't true for windows, so as soon as the window is
destroyed presumably the picture is destroyed as well or becomes
invalid.
Can this be
Hi,
I would like to use an iiyama X436S at 1280x1024 on VGA1 with my
laptop; unfortunately it seems the EDID information the monitor sends
is bogus:
VGA1 connected (normal left inverted right x axis y axis)
   1024x768   60.0
   800x600    60.3  56.2
   848x480    60.0
   640x480
Hi,
I have some code which causes RenderBadPicture errors from time to
time, and I have trouble finding the cause.
I have inserted some debug code and switched to synchronous mode;
however, I can't figure out what's going wrong.
Some pictures don't seem to cause problems, whereas other pictures
Hi again,
Seems I have found the cause of the problem: freeing Pictures that
belong to an already destroyed window causes the RenderBadPicture
error. The XID values in the error log were wrong and therefore
misleading.
What puzzles me is the inconsistent behaviour:
When a Window is destroyed, all
Hi,
I would like to do some bisecting to find a regression that was
introduced somewhere between 1.6 and 1.7.
I tried, but for me as a non-developer it's quite hard to get the right
versions of the right libraries, and I gave up after a few hours.
Is there a script which can download and build the
Hi,
The question is whether driver maintainers want to deal with
non-maintenance changes (like new hardware support) in the stable branch
of the X server, which will require additional work as they back-port
things from master.
I was a bit shocked when I heard drivers will be merged back into
Hi Dave,
I don't think any version of nouveau ever did that. Are you sure it
actually allocated the RAM?
Well, I don't know exactly what's going on - but when using nouveau
instead of the proprietary nvidia driver, free says memory usage is
about 250 MB higher than normal - which would correspond
Hi again,
Using nouveau with KMS and a unified kernel-side memory manager seems
to solve the problems :)
After updating to Fedora 12 I get working KMS, normal memory usage
and overall much better performance.
Great that I can finally remove the proprietary legacy driver!
- Clemens
Hi,
Is there any way to limit the AGP aperture size, maybe by passing a
parameter to the agpgart module?
The BIOS doesn't provide any setting, unfortunately :/
Background: I am using a quite old version of nouveau (the one
shipped with Fedora 11), and nouveau seems to reserve the whole AGP
aperture
Hi,
I was wondering if someone did/does xorg profiling.
any special precautions for someone who might want to try?
With a new kernel + sysprof that's pretty simple - just install the
debuginfo packages and enjoy :)
- Clemens
___
xorg mailing list
Man, don't you have a job? Is your time worth anything to you?
And by the way ... I've never read so many *strange* arguments in one
discussion.
(using shm ximage for normal drawing is bullshit)
- Clemens
2010/1/30 Russell Shaw rjs...@netspace.net.au:
Daniel Stone wrote:
On Sat, Jan 30, 2010 at
I just tried KMS with radeon driver, and 2D seems notably slow.
Widgets take time to draw, scrolling in Dolphin or Firefox lags, as if
some 2D acceleration was not working alright.
I experience the same, running Ubuntu-10.4-alpha2 on my HD3850.
Logs look quite normal as far as I can tell.
I
Hi,
I am a game programmer, and I live in a country where a normal computer
has a configuration resembling a netbook's (like having VIA or SIS chipsets...).
Because of that I started making a game using memory blitting instead of
OpenGL, but I found that on X it runs really slowly without
Hi Dave,
Should I file a bug report about this on bugzilla?
What additional data would be useful to track this issue down?
Could this be used as a security hole?
Thanks, Clemens
2009/8/16 Clemens Eisserer linuxhi...@gmail.com
Hi Dave,
Can you get valgrind traces by any chance? not sure we
I know this is a bit off-topic, but it came recently to my mind:
Wouldn't it be possible to provide half-accelerated linear gradients
by simply rendering the gradient into a temporary 1x? surface, and
using the various repeat modes + the gradient transformation on that
surface?
This way
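The repeat-mode addressing that this trick relies on can be sketched as plain coordinate wrapping (helper names and the enum are hypothetical, mirroring RENDER's Pad/Normal/Reflect repeat modes):

```c
/* Hypothetical helper: map an arbitrary sample coordinate into a
 * temporary gradient strip of 'size' texels, for the three repeat
 * modes RENDER defines besides None. */
enum repeat_mode { REPEAT_PAD, REPEAT_NORMAL, REPEAT_REFLECT };

static int wrap_coord(int x, int size, enum repeat_mode mode)
{
    switch (mode) {
    case REPEAT_PAD:                     /* clamp to the edge texels  */
        return x < 0 ? 0 : (x >= size ? size - 1 : x);
    case REPEAT_NORMAL: {                /* tile: positive modulo     */
        int m = x % size;
        return m < 0 ? m + size : m;
    }
    case REPEAT_REFLECT: {               /* mirror every other tile   */
        int m = x % (2 * size);
        if (m < 0) m += 2 * size;
        return m < size ? m : 2 * size - 1 - m;
    }
    }
    return 0;
}
```

Combined with the gradient transformation, this is all the addressing a 1xN strip would need.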
Hi,
I tried to enhance the glyph-upload paths of my java2d-xrender backend
by uploading multiple glyphs at once, however doing so makes
xorg-1.6.99.1 (Fedora rawhide ~20090810) crash quite frequently.
I experienced those crashes with both the vesa and the intel driver.
Twice the crash happened in
Hi Miro,
I don't get that crash.
If you know how to use gdb and can ssh into the machine, it would be
great if you could install the debug packages and debug it yourself:
http://www.x.org/wiki/Development/Documentation/ServerDebugging
- Clemens
Hi,
Until now I've used XRenderAddGlyphs with only one glyph at a time, but
to improve efficiency I would like to upload multiple glyphs per call.
What confuses me, however, is why there's only a single images pointer
(and not a char **images) to pass the glyph image data.
How is that supposed to be
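As far as I can tell from the RENDER spec, the glyph images are simply concatenated in glyph order into the one buffer, with each scanline padded to a 32-bit boundary. A hedged sketch of packing A8 glyphs that way (helper names are mine, and the padding rule is my reading of the spec):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Rows of A8 glyph images are padded to 4 bytes in the AddGlyphs
 * wire format (per my reading of the RENDER spec). */
static size_t a8_stride(int width)
{
    return (size_t)((width + 3) & ~3);
}

/* Pack 'count' tightly-stored A8 glyph bitmaps (widths[i]*heights[i]
 * bytes each) into one contiguous buffer suitable for the single
 * images pointer.  Returns a malloc'd buffer; *total_out gets its
 * size.  Padding bytes are zeroed by calloc. */
static unsigned char *pack_glyphs(const unsigned char *const *bitmaps,
                                  const int *widths, const int *heights,
                                  int count, size_t *total_out)
{
    size_t total = 0;
    for (int i = 0; i < count; i++)
        total += a8_stride(widths[i]) * (size_t)heights[i];

    unsigned char *buf = calloc(1, total ? total : 1);
    size_t off = 0;
    for (int i = 0; i < count; i++) {
        size_t stride = a8_stride(widths[i]);
        for (int y = 0; y < heights[i]; y++) {
            memcpy(buf + off, bitmaps[i] + (size_t)y * widths[i],
                   (size_t)widths[i]);
            off += stride;
        }
    }
    *total_out = total;
    return buf;
}
```

The resulting buffer is what would be handed to XRenderAddGlyphs alongside the matching XGlyphInfo array.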
Hi,
I am using a program that relies on 2D APIs to draw lines and
circles, and the performance of this program decreased hugely since my
change to Debian lenny.
Most likely you are now using EXA (newer intel drivers default to it),
a new acceleration architecture which no longer does
Hi,
1.) As far as I have seen the only way to get aliased rasterization
with XRenderCompositeTrapezoids is to pass PictStandardA1 as mask
format.
However, a lot of hardware can't use A1 as a mask, leading to a fallback.
On my 945GM I get for a 100x100 circle consisting of ~180 traps:
20 ms with A8,
120 ms
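One client-side workaround (my own sketch, not something drivers or the server do) is to keep the mask in A8 - which the hardware can composite - and simply threshold the coverage values, which looks aliased while avoiding the A1 fallback:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical workaround: emulate aliased rasterization by snapping
 * A8 coverage to fully-off/fully-on at a 50% cutoff, so the mask can
 * stay in a hardware-friendly A8 format instead of A1. */
static void threshold_a8(uint8_t *mask, size_t n)
{
    for (size_t i = 0; i < n; i++)
        mask[i] = (mask[i] >= 128) ? 255 : 0;
}
```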
Hi,
I just tried to compile xorg/git with the xorg-git.sh shell script,
and it fails in xf86-input-keyboard with the following error:
kbd.c: In function 'KbdProc':
kbd.c:567: warning: passing argument 1 of 'InitKeyboardDeviceStruct'
from incompatible pointer type
kbd.c:567: warning: passing
Hi Chris,
xf86-input-keyboard has been broken for more than a week now. Worse,
the server segfaults on launch. Input folks, is someone working on
landing fixes for this breakage?
Yes, I experience the same - but I thought this was caused by the
missing keyboard stuff.
Thanks for your reply,
Are you using the same version of kde on both systems? IIRC kde 4
switched to using a1 surfaces for font rendering which isn't currently
accelerated by EXA. Notice the _a1 fetch below.
I've seen quite a few reports about slow EXA which turned out
to be caused by the A1 mask format (I
I can't see any such calls of XRender* functions in the bits of xft that I
have been looking at (notably in xftcore.c).
Because xft deals with glyphs, and for performance/bandwidth reasons
glyphs are handled in a special manner.
That was what I meant with (using the XRender*Glyphs functions).
I know that it's not easy, but one can't expect a step-by-step
tutorial for such low-level stuff.
hmm - the obvious conclusion is that xft is just a minor/useless library.
Perhaps it should be removed, then.
The whole discussion is about RENDER's documentation, not xft.
Or was your
The whole discussion is about RENDER's documentation, not xft.
very well, then apply my comments to RENDER (they're both presented
as libraries that no one should try to use without some other library
as a sanitizing layer).
Well at least the sanitizing layers still depend on libXrender...
Maybe I'm missing the point: what's the point of a library that needs
another library in order to be used? I don't really think it's a good
idea. Maybe it would be better to implement the library so that it can
be used without another sanitizing library,
It's the same as with xcb
Hi,
So, IMO, it's unfair to call 2008Q4 release stable and recommended to
ordinary users/OSVs, at least for gma950 users.
I have to agree, even 2.6.1 is still far away from release quality.
I hope the whole GEM-ification is soon finished, before distributions
start deploying that driver.
After
Most likely I will write some XRender documentation as part of my
bachelor thesis.
In a subsequent thread 2D antialiasing? on this list, I was bemoaning
the fact that antialiasing by that method would waste huge amounts of
bandwidth if the client were separated from the Xserver by some slow
Hi Charles,
Unfortunately XRender is not very well documented; probably the best
thing available is the specification.
The reason is that most programmers use higher-level APIs like Cairo
or Qt4 to access XRender, so if you don't have a good reason why you
want to mess with it directly, I
Which video driver are you using?
You could try to switch to XAA, if you're using an open-source driver.
- Clemens
2009/1/8 Pawel K pawlac...@yahoo.com:
Hello
It looks like DGA is not fully operational on my system:
I have the following in
/etc/X11/xorg.conf:
SubSection extmod
#
Could be. Shame the new optimised implementation of the private lookup
had to be reverted on the 1.5 branch.
It's not a shame; it's just that ABI changes don't fit in minor releases.
I'm running 1.5.99, and it would appear that the patch is applied. It is
a shame dixLookupPrivate is consuming so
Hmm, I guess 11.1 uses intel-2.5, which has (at least on my 945GM)
quite a number of performance problems.
Xorg 7.3 (xserver 1.5.x) also has quite a bad performance bug in
dixLookupPrivate which will only be fixed in 1.6 because of API
issues.
If you don't use an xrender-based composition
And that is no problem at all, until you want to throw away the glyph that
was there before and write a new glyph in its place. And then you need to
know what the original background behind the old glyph was, but the server
does not have that information, and so it has to be kept in the
From the plain X Server POV, antialiasing is always going to be hard,
because to do it you need to know the background color or pixmap, and the
Xserver does not keep track of how you had earlier set it, so it is up to
individual toolkits to keep track, and not all of them do.
Well, using
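For context, the reason RENDER itself can get away without tracking the background is that it composites with Porter-Duff OVER, reading the destination back per pixel rather than remembering how it was drawn. One channel of premultiplied OVER, as a sketch:

```c
#include <stdint.h>

/* Porter-Duff OVER for one premultiplied 8-bit channel:
 *   dst' = src + (1 - alpha_src) * dst
 * Assumes premultiplied input (src <= src_alpha), so the sum cannot
 * overflow.  The +0x80 / (t + (t>>8)) >> 8 idiom is the usual
 * rounding divide-by-255. */
static uint8_t over_channel(uint8_t src, uint8_t src_alpha, uint8_t dst)
{
    uint32_t t = (uint32_t)dst * (255 - src_alpha) + 0x80;
    return (uint8_t)(src + ((t + (t >> 8)) >> 8));
}
```

Applied to all four channels of an ARGB pixel, this is the per-pixel core of a Composite with PictOpOver.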
Hi again,
We're moving from EXA to UXA, which fixes a lot of the performance
problem by having an allocator that doesn't suck. The remainder of the
fix would be accelerating trapezoids.
Any plans to merge UXA and EXA?
Having all the code duplicated doesn't seem a very wise idea.
- Clemens
That's not 'strong vocabulary' but simply baseless flamebait.
Would it make sense to implement some fallback optimizations like:
- Copy pictures without drawables (gradients) to a temporary surface,
if the driver supports composition?
- Support solid write-only operations (X11 core drawing) for
Trapezoids for example would require implementing a rasteriser in shaders.
Pretty much everything that doesn't get accelerated these days requires
shaders.
Tomorrow someone might come and ask for a different type of gradient, why
even bother?
Well if you let me decide between software
Hi,
Currently there exist several operations in xrender that are better
off client side or through some other graphic api (imo). Think of
trapezoid rasterisation, gradient rendering, etc.
Doing this stuff
client side avoids unforeseen migration issues and doesn't create any
false impressions
Hi,
Do you think there is any chance of getting the gradient hooks into 1.6?
It would not be too bad if no driver is able to accelerate them for now,
but at least users would not need xserver 1.7 to get accelerated
gradients.
Distributors usually tend to update drivers, but they almost never
switch to
Hi,
Perhaps we should extend Render to include 64-bit floating point transforms...
That would be really great.
I am doing some tricks with mask transformations and having quite a
hard time with the fixed-point limitations, especially for large scales
(like 100x).
- Clemens
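To illustrate the limitation: XRender stores matrix entries as 16.16 fixed point, so the inverse transform for a 100x scale (entries around 0.01) keeps only about 10 significant bits. A sketch of the conversion (XFixedLike is my stand-in for Xlib's XFixed; the math matches the XDoubleToFixed macro):

```c
#include <stdint.h>

/* 16.16 fixed point as used by XRender transforms.  At large scales
 * the inverse-transform entries become tiny, and the fixed 1/65536
 * granularity turns into a sizable relative error. */
typedef int32_t XFixedLike;   /* stand-in for XFixed */

static XFixedLike double_to_fixed(double d)
{
    return (XFixedLike)(d * 65536.0);
}

static double fixed_to_double(XFixedLike f)
{
    return (double)f / 65536.0;
}
```

Round-tripping 0.01 (a 100x scale's inverse entry) gives roughly 0.009994, a relative error around 0.05%, which is visible at those magnifications.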
Hi,
I've experienced some performance problems with gradients when working
on the xrender/java2d backend.
A typical problematic case is when the mask and destination picture are
in VRAM and a gradient is used as source.
As far as I understand, this causes mask and dst to be moved out into
sysmem,
We just need to accelerate gradients, and is where any effort in
software should occur. It's on our schedule, but not for quite a while.
Setting up the X Server to allow drivers to request gradients was easy
last time I did it, though I've misplaced the branch it looks like.
Then someone
How well do simple 3D OpenGL apps like tuxracer or openarena perform?
I can run both without trouble on Windows, but they are not really
playable on Linux :-/
- Clemens
2008/10/21 Adam Lantos [EMAIL PROTECTED]:
Now I tried with vblank disabled, and voilà! - 860 fps.
So I guess that was the problem,
Where do these transformation matrices come from?
They were created by the Java AffineTransform class.
I just dumped it and copied it into the C file.
I basically get an AffineTransform instance (set by the user),
invert it and set it on the source picture.
For the mask I do exactly the same,
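For reference, inverting the 2x3 affine matrix before setting it on the source amounts to the following (struct and names are mine, not Java's or Xlib's):

```c
/* Invert a 2x3 affine matrix [[a b tx], [c d ty]]:
 * the linear part inverts as (1/det)[[d -b], [-c a]], and the new
 * translation is minus the inverted linear part applied to (tx, ty).
 * Returns 0 when the matrix is singular. */
typedef struct { double a, b, tx, c, d, ty; } Affine;

static int affine_invert(const Affine *m, Affine *out)
{
    double det = m->a * m->d - m->b * m->c;
    if (det == 0.0)
        return 0;
    out->a  =  m->d / det;
    out->b  = -m->b / det;
    out->c  = -m->c / det;
    out->d  =  m->a / det;
    out->tx = (m->b * m->ty - m->d * m->tx) / det;
    out->ty = (m->c * m->tx - m->a * m->ty) / det;
    return 1;
}
```

The result would then be converted entry by entry into the 16.16 XTransform that XRenderSetPictureTransform expects.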
Hi Maarten,
Do you have a test program, or can you at least share the transformation
matrix you're using? I'm curious why it fails so badly.
Yes I created one, http://pastebin.com/f729a71aa
The test case works perfectly with pixman (even with much higher
scale), but on intel it seems the mask has
-case which does not depend on
that behaviour, but seems not that easy :-/
Thanks for your patience, Clemens
2008/10/18 Maarten Maathuis [EMAIL PROTECTED]:
On Sat, Oct 18, 2008 at 12:52 PM, Clemens Eisserer [EMAIL PROTECTED] wrote:
Where do these transformation matrices come from?
They were
Hi Michel,
Thanks a lot for your investigation.
Does the attached xserver patch help? Looks like we're syncing
unnecessarily in the migration no-op case.
Yes, a lot. My benchmark went up from ~12fps to ~19fps and the
fallback is gone according to the profile.
I am still only at 50% of
Hello,
I have a use-case where the client uploads 32x32 A8 images to a
256x256x8 pixmap which is later used as a mask in a composite
operation.
The test case renders at 40 fps on xserver-1.3/intel-2.1.1;
however, with the latest git of both I only get ~10-15 fps.
Unfortunately I've not
Hi,
There is of course a fallback system, which is pretty much a memcpy.
Ah, I guess that was the memcpy I always saw in moveIn / moveOut ;)
intel has never had an UploadToScreen hook.
Ah, interesting, because I saw 4x better performance with intel-2.1.1 /
xserver-1.3.
With this configuration
Sorry for the email flood ...
2.1.1 probably used XAA as default, which didn't try to accelerate much.
No, the results were with EXA enabled - although results with XAA are
again orders of magnitude better ;)
Thanks, Clemens
Hi,
I think this is because intel does not provide an UploadToScreen hook
(because it has no vram). It hasn't made (visible) effort to
reintegrate UXA in EXA,
Btw. I was using EXA without GEM.
Has the UploadToScreen hook been removed when preparing the driver for
UXA and/or GEM?
One thing
I think you need to build xserver with --disable-builtin-fonts.
Thanks a lot, that worked :)
/etc/fonts/ is configuration for the fontconfig library, not the X
server.
Ah, ok.
Thanks, Clemens
Hello,
I am currently trying to build xorg from git, and it mostly works
except for some font stuff.
When I try to start xterm it quits immediately with the following messages:
The XKEYBOARD keymap compiler (xkbcomp) reports:
Warning: Type ONE_LEVEL has 1 levels, but RALT has 2 symbols
I'd bet against that :-). Core 2 has magnificent SSE performance indeed,
but that's true for MMX just as well.
Well, Core2 (and AMD K10) got support for 128-bit operations per clock,
whereas previous processors only supported 64 bits at once and took 2
cycles for 128-bit operations.
MMX is just
no