Re: redraw, who (xlib/cairo), when...

2013-05-28 Thread Glynn Clements

First Last wrote:

 after some reading, and looking at event.c from jwm, my clock works!
 I'm using a structure like this:

     while(XPending(dpy)==0){
         usleep(100);
         updateClock(myClock);
     }

Ugh. This causes the process to be scheduled up to 10,000 times per
second, which is almost a busy wait. I would expect this process to
consume far more CPU than is necessary.

It would be significantly more efficient to use select() or poll() on
the connection to the X server so that the process is only scheduled
when there is something for it to do. Or failing that, at least
increase the sleep interval substantially.

-- 
Glynn Clements gl...@gclements.plus.com
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: http://lists.x.org/mailman/listinfo/xorg
Your subscription address: arch...@mail-archive.com


Re: Triple monitor, one card; xrandr => xorg.conf

2013-05-28 Thread Nick Urbanik

Dear Folks,

On 26/05/13 08:52 +1000, Nick Urbanik wrote:

Sorry, perhaps I have been insufficiently clear on what my aim is.

1. If you look at the xrandr command (which, as I said, works like a
  charm), you will see that I want to put the DisplayPort-0 in the
  middle, the DVI-0 on the left, the VGA-0 on the right.

2. Automatic configuration works just fine, with DisplayPort-0 on the
  left, DVI-0 in the middle, VGA-0 on the right.  But I don't want
  that order, because the DisplayPort-0 monitor is new and fabulous,
  while the other two monitors are old and not so good.

3. I am trying to make the xorg.conf do what the xrandr command does,
  when X starts up.

Perhaps I need to specify that the DVI-0 should be on the left?


Making a tiny amount of progress: with this configuration, X can start
(a nice change from some previous attempts!), and lightdm shows the
screens in the correct order!  Hooray!

But when I log in, the behaviour is rather odd; the VGA appears on the
left; the DisplayPort is next, with the DVI mirroring the DisplayPort.
My xrandr command gets it back into the desired state and order.  But
I still wish that I could achieve the same result with xorg.conf as
with the xrandr command, right from the beginning.

Advice most welcome.

Section "ServerLayout"
   Identifier "home"
   Screen "left-screen" 0 0
   Screen "middle-screen" RightOf "left-screen"
   Screen "right-screen"  RightOf "middle-screen"
EndSection

Section "Device"
   Identifier "ati-5450"
   #Screen 0
   #Screen 1
   #Screen 2
   Option "Monitor-DVI-0" "left"
   Option "Monitor-DisplayPort-0" "middle"
   Option "Monitor-VGA-0" "right"
   #BusID "PCI:01:00.0"
   Driver "radeon"
EndSection

Section "Screen"
   Identifier "left-screen"
   Device "ati-5450"
   Monitor "left"
EndSection

Section "Screen"
   Identifier "middle-screen"
   Device "ati-5450"
   Monitor "middle"
EndSection

Section "Screen"
   Identifier "right-screen"
   Device "ati-5450"
   Monitor "right"
EndSection

Section "Monitor"
   Identifier "left"
   Option "LeftOf" "DisplayPort-0"
EndSection

Section "Monitor"
   Identifier "middle"
   Option "RightOf" "DVI-0"
EndSection

Section "Monitor"
   Identifier "right"
   Option "RightOf" "DisplayPort-0"
EndSection
--
Nick Urbanik http://nicku.org   ni...@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24


Re: redraw, who (xlib/cairo), when...

2013-05-28 Thread First Last
Hi Glynn and other guys,
     while(XPending(dpy)==0){
         usleep(100);
         updateClock(myClock);
     }

Ugh. This causes the process to be scheduled up to 10,000 times per
second, which is almost a busy wait. I would expect this process to
consume far more CPU than is necessary.
Are you mixing it up with nanosleep() (http://linux.die.net/man/3/usleep)? If I'm
right, I have set the sleep time to 0.1s (jwm uses the frequency of the
microprocessor to set the sleep time). When I wrote this I expected to get
something that works. I will try select() or poll() as you suggested; it sounds
better.

thx,
-Nicoo

Re: Triple monitor, one card; xrandr => xorg.conf

2013-05-28 Thread Alex Deucher
On Sat, May 25, 2013 at 9:00 AM, Nick Urbanik ni...@nicku.org wrote:
 Dear Folks,

 It is trivially easy to set up three monitors with an ATI 5450 video
 card.  One is plugged into the VGA, the other into the DVI, and the
 last into the DisplayPort connector.

 Now I want the display port monitor in the middle, as it is the most
 fabulous.

 This is easy with xrandr:
 xrandr --output DVI-0 --mode 1920x1200 --output DisplayPort-0 --mode
 1920x1200 --right-of DVI-0 --output VGA-0 --mode 1600x1200 --right-of
 DisplayPort-0

 which works like a charm.

 Now trying to write that as xorg.conf.  Miserable failure.  And Gnome
 3 is unable to start.

 So here is the xorg.conf I have tried so far:

 Section "Device"
   Identifier "ati-5450"
   Option "Monitor-DVI-0" "left"
   Option "Monitor-DisplayPort-0" "middle"
   Option "Monitor-VGA-0" "right"

You can skip the above lines if you just change the monitor
identifiers to match your output names.  Also, I'm not sure that
having relative locations for every monitor will work correctly.  For
the farthest left one, leave out the orientation.

 EndSection

Try something like this:


Section "Device"
  Identifier "ati-5450"
EndSection

Section "Monitor"
   Identifier "DVI-0"
EndSection

Section "Monitor"
   Identifier "DisplayPort-0"
   Option "RightOf" "DVI-0"
EndSection

Section "Monitor"
   Identifier "VGA-0"
   Option "RightOf" "DisplayPort-0"
EndSection


 Section "Monitor"
   Identifier "left"
 EndSection

 Section "Monitor"
   Identifier "middle"
   Option "RightOf" "left"
 EndSection

 Section "Monitor"
   Identifier "right"
   Option "RightOf" "middle"
 EndSection

 and here is the output of xrandr:
 $ xrandr
 Screen 0: minimum 320 x 200, current 5440 x 1200, maximum 8192 x 8192
 DisplayPort-0 connected 1920x1200+1920+0 (normal left inverted right x axis y axis) 518mm x 324mm
    1920x1200  60.0*+
    1920x1080  60.0
    1600x1200  60.0
    1680x1050  60.0
    1280x1024  60.0
    1280x960   60.0
    1024x768   60.0
    800x600    60.3
    640x480    60.0
    720x400    70.1
 DVI-0 connected 1920x1200+0+0 (normal left inverted right x axis y axis) 518mm x 324mm
    1920x1200  60.0*+
    1920x1080  50.0  60.0
    1600x1200  60.0
    1680x1050  59.9
    1280x1024  60.0
    1440x900   59.9
    1280x960   60.0
    1280x800   59.9
    1280x720   50.0  60.0
    1024x768   60.0
    800x600    60.3  56.2
    720x576    50.0
    720x480    59.9
    640x480    60.0
 VGA-0 connected 1600x1200+3840+0 (normal left inverted right x axis y axis) 408mm x 306mm
    1600x1200  60.0*+
    1280x1024  75.0  60.0
    1280x960   60.0
    1152x864   75.0
    1024x768   75.1  70.1  60.0
    832x624    74.6
    800x600    72.2  75.0  60.3  56.2
    640x480    72.8  75.0  66.7  60.0
    720x400    70.1
 Any suggestions on how to make a working xorg.conf?

 Fedora 18 x86_64:
 xorg-x11-server-Xorg-1.13.3-3.fc18.x86_64
 xorg-x11-drv-ati-7.0.0-0.9.20121015gitbd9e2c064.fc18.x86_64
 $ xrandr --version
 xrandr program version   1.4.0
 Server reports RandR version 1.4
 --
 Nick Urbanik http://nicku.org   ni...@nicku.org
 GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24


Re: Triple monitor, one card; xrandr => xorg.conf

2013-05-28 Thread Nick Urbanik

Dear Alex,

Thank you so much for taking the time to make a thoughtful reply.

On 29/05/13 00:21 -0400, Alex Deucher wrote:

On Sat, May 25, 2013 at 9:00 AM, Nick Urbanik ni...@nicku.org wrote:
You can skip the above lines if you just change the monitor
identifiers to match your output names.  Also, I'm not sure that
having relative locations for every monitor will work correctly.  For
the farthest left one, leave out the orientation.


EndSection


Try something like this:


Section "Device"
 Identifier "ati-5450"
EndSection

Section "Monitor"
  Identifier "DVI-0"
EndSection

Section "Monitor"
  Identifier "DisplayPort-0"
  Option "RightOf" "DVI-0"
EndSection


Yes, that works when lightdm starts: the screens are in the right
order.  The simplicity of the xorg.conf you wrote is beautiful.

But when I log into XFCE 4.10, then it does as described in my last
post on this topic: VGA-0 on the left, then DVI-0, then at 1900 pixels
from the left of VGA-0, which takes us into the territory of DVI-0,
then DisplayPort-0 continues, mirroring DVI-0 for all but 150 pixels
on the left and 150 on the right.  Is it XFCE being naughty?

As before, my xrandr command makes everything pop into the right place
and behave properly, but I'd like to have things just work.

As a probably unrelated side issue, neither gdm nor Gnome 3 start up,
regardless of presence or shape of xorg.conf.
--
Nick Urbanik http://nicku.org   ni...@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24


[ANNOUNCE] libXfixes 5.0.1

2013-05-28 Thread Alan Coopersmith
libXfixes is the Xlib-based client API for the X-FIXES extension.

This bug fix release includes the fix for the recently announced
CVE-2013-1983, along with some other cleanups & warning fixes.

Adam Jackson (1):
  configure: Remove AM_MAINTAINER_MODE

Alan Coopersmith (7):
  Strip trailing whitespace
  Replace deprecated Automake INCLUDES variable with AM_CPPFLAGS
  Remove duplicate declaration of XFixesExtensionName in Xfixesint.h
  XFixesFetchRegionAndBounds: use nread in call to XReadPad
  Use _XEatDataWords to avoid overflow of _XEatData calculations
  integer overflow in XFixesGetCursorImage() [CVE-2013-1983]
  libXfixes 5.0.1

Colin Walters (1):
  autogen.sh: Implement GNOME Build API

Peter Hutterer (1):
  man: remove current, we're way past 1.0.

git tag: libXfixes-5.0.1

http://xorg.freedesktop.org/archive/individual/lib/libXfixes-5.0.1.tar.bz2
MD5:  b985b85f8b9386c85ddcfe1073906b4d
SHA1: e14fa072bd70b30eef47391cac637bdb4de9e8a3
SHA256: 63bec085084fa3caaee5180490dd871f1eb2020ba9e9b39a30f93693ffc34767

http://xorg.freedesktop.org/archive/individual/lib/libXfixes-5.0.1.tar.gz
MD5:  ce48a9f75bcdb134218dcc0cddc73b50
SHA1: 88e9fe9c3288feb5362fe97fa7ae534f724b75e3
SHA256: 81b692856c0e7ab2778a34a32aa6b3f455b9b58cf388f009cba872ed933ae9c0

-- 
-Alan Coopersmith-  alan.coopersm...@oracle.com
 Oracle Solaris Engineering - http://blogs.oracle.com/alanc



Re: [RFC][PATCH] Make GetXIDRange O(1) instead of O(N^2)

2013-05-28 Thread Roberto Ragusa
On 05/28/2013 06:35 AM, Jamey Sharp wrote:
 I can't give this a full review, but off-hand it seems like a good idea to me!

[...]

 You'll want to be careful about the indentation in your patch when you put 
 the final version together for review. This version has several different 
 indentation levels throughout.

Hi,

the bizarre indentation levels are sometimes used to separate
clean parts from dirty parts (repeated prototypes, debug code, ...).
This is mostly a proof of concept to receive feedback, I would
never submit something in that shape for merging. ;-)
(as I already wrote in the first sentences of my mail).

I hope someone can explain the ID reuse stuff. I'm not able to say
what applications rely on this, but I can say that in my first
implementation I assumed one add and one free for each ID and the
KDE desktop was not able to start (and fluxbox too, IIRC).

-- 
   Roberto Ragusa    mail at robertoragusa.it
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH libXi 1/3] Fix potential corruption in mask_len handling

2013-05-28 Thread walter harms


On 28.05.2013 07:52, Peter Hutterer wrote:
 First: check for allocation failure on the mask.
 XI2 requires that the mask is zeroed, so we can't just Data() the mask
 provided by the client (it will pad) - we need a tmp buffer. Make sure that
 doesn't fail.
 
 Second:
 req->mask_len is a uint16_t, so check against malicious mask_lens that would
 cause us to corrupt memory on copy, as the code always allocates
 req->mask_len * 4, but copies mask->mask_len bytes.
 
 Signed-off-by: Peter Hutterer peter.hutte...@who-t.net
 ---
  src/XIGrabDevice.c  | 18 --
  src/XIPassiveGrab.c |  9 -
  src/XISelEv.c   | 30 +-
  3 files changed, 45 insertions(+), 12 deletions(-)
 
 diff --git a/src/XIGrabDevice.c b/src/XIGrabDevice.c
 index dd1bd10..2bff3d8 100644
 --- a/src/XIGrabDevice.c
 +++ b/src/XIGrabDevice.c
 @@ -50,6 +50,17 @@ XIGrabDevice(Display* dpy, int deviceid, Window grab_window, Time time,
      if (_XiCheckExtInit(dpy, XInput_2_0, extinfo) == -1)
          return (NoSuchExtension);
  
 +    if (mask->mask_len > INT_MAX - 3 ||
 +        (mask->mask_len + 3)/4 >= 0xffff)
 +        return BadValue;
 +

Is the INT_MAX needed here? Running X on 16-bit machines seems very odd
(is that possible?)
That makes the following possible:
 (mask->mask_len + 3)/4 >= 0xffff
 mask->mask_len + 3 >= 0xffff * 4
 mask->mask_len >= 0xffff * 4 - 3



just my two cents,
re
 wh

 +    /* mask->mask_len is in bytes, but we need 4-byte units on the wire,
 +     * and they need to be padded with 0 */
 +    len = (mask->mask_len + 3)/4;
 +    buff = calloc(4, len);
 +    if (!buff)
 +        return BadAlloc;
 +
      GetReq(XIGrabDevice, req);
      req->reqType  = extinfo->codes->major_opcode;
      req->ReqType  = X_XIGrabDevice;
 @@ -59,14 +70,9 @@ XIGrabDevice(Display* dpy, int deviceid, Window grab_window, Time time,
      req->grab_mode = grab_mode;
      req->paired_device_mode = paired_device_mode;
      req->owner_events = owner_events;
 -    req->mask_len = (mask->mask_len + 3)/4;
 +    req->mask_len = len;
      req->cursor = cursor;
  
 -
 -    /* mask->mask_len is in bytes, but we need 4-byte units on the wire,
 -     * and they need to be padded with 0 */
 -    len = req->mask_len;
 -    buff = calloc(1, len * 4);
      memcpy(buff, mask->mask, mask->mask_len);
  
      SetReqLen(req, len, len);
 diff --git a/src/XIPassiveGrab.c b/src/XIPassiveGrab.c
 index 53b4084..4ed2f09 100644
 --- a/src/XIPassiveGrab.c
 +++ b/src/XIPassiveGrab.c
 @@ -51,6 +51,14 @@ _XIPassiveGrabDevice(Display* dpy, int deviceid, int grabtype, int detail,
      if (_XiCheckExtInit(dpy, XInput_2_0, extinfo) == -1)
          return -1;
  
 +    if (mask->mask_len > INT_MAX - 3 ||
 +        (mask->mask_len + 3)/4 >= 0xffff)
 +        return -1;
 +
 +    buff = calloc(4, (mask->mask_len + 3)/4);
 +    if (!buff)
 +        return -1;
 +
      GetReq(XIPassiveGrabDevice, req);
      req->reqType = extinfo->codes->major_opcode;
      req->ReqType = X_XIPassiveGrabDevice;
 @@ -68,7 +76,6 @@ _XIPassiveGrabDevice(Display* dpy, int deviceid, int grabtype, int detail,
      len = req->mask_len + num_modifiers;
      SetReqLen(req, len, len);
  
 -    buff = calloc(4, req->mask_len);
      memcpy(buff, mask->mask, mask->mask_len);
      Data(dpy, buff, req->mask_len * 4);
      for (i = 0; i < num_modifiers; i++)
 diff --git a/src/XISelEv.c b/src/XISelEv.c
 index 0471bef..55c0a6a 100644
 --- a/src/XISelEv.c
 +++ b/src/XISelEv.c
 @@ -53,6 +53,8 @@ XISelectEvents(Display* dpy, Window win, XIEventMask* masks, int num_masks)
      int i;
      int len = 0;
      int r = Success;
 +    int max_mask_len = 0;
 +    char *buff;
  
      XExtDisplayInfo *info = XInput_find_display(dpy);
      LockDisplay(dpy);
 @@ -60,6 +62,26 @@ XISelectEvents(Display* dpy, Window win, XIEventMask* masks, int num_masks)
          r = NoSuchExtension;
          goto out;
      }
 +
 +    for (i = 0; i < num_masks; i++) {
 +        current = &masks[i];
 +        if (current->mask_len > INT_MAX - 3 ||
 +            (current->mask_len + 3)/4 >= 0xffff) {
 +            r = -1;
 +            goto out;
 +        }
 +        if (current->mask_len > max_mask_len)
 +            max_mask_len = current->mask_len;
 +    }
 +
 +    /* max_mask_len is in bytes, but we need 4-byte units on the wire,
 +     * and they need to be padded with 0 */
 +    buff = calloc(4, ((max_mask_len + 3)/4));
 +    if (!buff) {
 +        r = -1;
 +        goto out;
 +    }
 +
      GetReq(XISelectEvents, req);
  
      req->reqType = info->codes->major_opcode;
 @@ -79,19 +101,17 @@ XISelectEvents(Display* dpy, Window win, XIEventMask* masks, int num_masks)
  
      for (i = 0; i < num_masks; i++)
      {
 -        char *buff;
          current = &masks[i];
          mask.deviceid = current->deviceid;
          mask.mask_len = (current->mask_len + 3)/4;
 -        /* masks.mask_len is in bytes, but we need 4-byte units on the wire,
 -         * and they need to be padded with 0 */
 -        buff = calloc(1, mask.mask_len 

Re: [PATCH 2/6] gpu: host1x: Fix syncpoint wait return value

2013-05-28 Thread Thierry Reding
On Mon, May 27, 2013 at 09:55:46AM +0300, Arto Merilainen wrote:
 On 05/26/2013 01:12 PM, Thierry Reding wrote:
 
 On Fri, May 17, 2013 at 02:49:44PM +0300, Arto Merilainen wrote:
[...]
 Thinking about it, maybe it would be good to have two separate error
 codes. Keeping -EAGAIN for the case where a zero timeout was passed
 doesn't sound too bad to differentiate it from the case where a non-
 zero timeout was passed and it actually timed out. What do you think?
 
 I agree, in this case it would not look bad at all. However, user
 space libraries may loop until the ioctl return code is something
 other than -EAGAIN or -EINTR. In particular, drmIoctl() in
 libdrm does this, which is why I noted this issue in the first
 place.
 
 If user space uses zero timeout to just check if a syncpoint value
 has already passed the library continues looping until the syncpoint
 value actually passes. Of course, we could just modify the ioctl
 interface to cast this return code to something else but that does
 not seem correct.

That doesn't sound right. Maybe drmIoctl() needs fixing instead. Looking
at the history, drmIoctl() was introduced to automatically loop if a
signal was received (commit 8b9ab108ec1f2ba2b503f713769c4946849b3cb2).
However the ioctl(3p) manpage doesn't mention that ioctl() returns
EAGAIN in case it is interrupted by a signal.

I'm adding Keith as author of that commit and the xorg-devel mailing
list on Cc to get some more eyes on this.

Thierry



Re: [PATCH libXi 1/3] Fix potential corruption in mask_len handling

2013-05-28 Thread Alan Coopersmith

On 05/28/13 12:22 AM, walter harms wrote:

Is the INT_MAX needed here ? running X on 16bit machines seems very odd
(is that possible ?)


16-bits would be SHORT_MAX.  INT_MAX is 32-bits on all platforms X currently
supports.

--
-Alan Coopersmith-  alan.coopersm...@oracle.com
 Oracle Solaris Engineering - http://blogs.oracle.com/alanc


Re: [RFC][PATCH] Make GetXIDRange O(1) instead of O(N^2)

2013-05-28 Thread Peter Harris
On 2013-05-28 03:15, Roberto Ragusa wrote:
 I hope someone can explain the ID reuse stuff. I'm not able to say
 what applications rely on this, but I can say that in my first
 implementation I assumed one add and one free for each ID and the
 KDE desktop was not able to start (and fluxbox too, IIRC).

Clients can reuse IDs, as long as the lifespans are disjoint (otherwise
GetXIDRange would be much less useful). A client cannot reuse an active
ID (see LEGAL_NEW_RESOURCE).

The server can reuse IDs internally in order to create related objects
of a constrained lifetime. For example, A GLX window has a
__glXDrawableRes resource which uses the same ID as the RT_WINDOW it is
based on, so when the client frees the RT_WINDOW the __glXDrawableRes is
also freed automatically.

(Why didn't SGI just use AllocateWindowPrivate? The shared resource ID
mechanism predates the shared devPrivates mechanism. Possibly by enough
that shared devPrivates didn't exist when GLX was being written).

The whole point of GetXIDRange was to avoid the need to track free
resource IDs on the client side. Since computers have a few orders of
magnitude more storage now than they did in 1987, maybe it makes sense
to add (optional?) resource ID tracking to Xlib instead of the server?
In the typical case, Xlib could be re-using IDs before generating new
ones, so the rbtree (or even a simple stack of reusable IDs) shouldn't
grow as large as it would in the server.

(In xcb, resource IDs are explicit. An xcb application that would
benefit from reusing free IDs can already do so.)

Peter Harris
-- 
   Open Text Connectivity Solutions Group
Peter Harrishttp://connectivity.opentext.com/
Research and DevelopmentPhone: +1 905 762 6001
phar...@opentext.comToll Free: 1 877 359 4866


Re: [PATCH libXi 1/3] Fix potential corruption in mask_len handling

2013-05-28 Thread walter harms


On 28.05.2013 17:55, Alan Coopersmith wrote:
 On 05/28/13 12:22 AM, walter harms wrote:
 Is the INT_MAX needed here ? running X on 16bit machines seems very odd
 (is that possible ?)
 
 16-bits would be SHORT_MAX.  INT_MAX is 32-bits on all platforms X
 currently
 supports.
 

This was not my point.  0xffff * 4 is less than INT_MAX, so that condition
will apply before the INT_MAX one every time.

re,
 wh


Re: [PATCH libXi 1/3] Fix potential corruption in mask_len handling

2013-05-28 Thread Alan Coopersmith

On 05/28/13 10:16 AM, walter harms wrote:



On 28.05.2013 17:55, Alan Coopersmith wrote:

On 05/28/13 12:22 AM, walter harms wrote:

Is the INT_MAX needed here ? running X on 16bit machines seems very odd
(is that possible ?)


16-bits would be SHORT_MAX.  INT_MAX is 32-bits on all platforms X
currently
supports.



This was not my point.  0xffff * 4 is less than INT_MAX, so that condition
will apply before the INT_MAX one every time.


Ah, I misunderstood - I see what you're asking now.   With the tighter
condition added, I suppose the INT_MAX check is redundant (and may be
optimized out if the compiler is smart enough).

--
-Alan Coopersmith-  alan.coopersm...@oracle.com
 Oracle Solaris Engineering - http://blogs.oracle.com/alanc


X Protocol: Match Error does not identify what failed.

2013-05-28 Thread Ralph Corderoy
Hi,

Some errors, e.g. Window, contain the bad resource ID but the common
Match error doesn't use any of the many spare bytes it has to give the
caller a clue what was disliked in the specified request.  I realise it
would vary per request, but some indication, e.g. the third in a
LISTofVALUE, or a mask, would be helpful.

Is there a historical reason it wasn't done like this?

-- 
Cheers, Ralph.
https://plus.google.com/115649437518703495227/about


Re: [PATCH] dix: Include selection.h directly.

2013-05-28 Thread Keith Packard
Maarten Lankhorst maarten.lankho...@canonical.com writes:

 Fixes the implicit declaration of DeleteWindowFromAnySelections during 
 debian's udeb build.

 Signed-off-by: Maarten Lankhorst maarten.lankho...@canonical.com

Reviewed-by: Keith Packard kei...@keithp.com

-- 
keith.pack...@intel.com



Re: [RFC][PATCH] Make GetXIDRange O(1) instead of O(N^2)

2013-05-28 Thread Keith Packard
Roberto Ragusa m...@robertoragusa.it writes:

 The basic problem is that when a client (xcb?) asks for free ids, the
 code makes a lot of inefficient guesses and spends an awful amount of time
 in identifying range of elements NOT present in its data structures
 (hashes).

The core protocol has no mechanism for discovering free ID ranges;
support for this was added in the XC-MISC extension around 1993 to deal
with long-running applications (typically window managers). As such, the
internal X server data structure wasn't touched when this extension was
implemented and so the operation to find unused ranges of XIDs is pretty
inefficient.

 This patch, in addition to raising the max hash size from 2^11 to 2^16 (2^11 
 was
 decided in 1994, or maybe even 1987), adds explicit tracking of free ids.
 The data structure I've chosen is an rb-tree of free extents.

I'm not excited at the notion of adding another data structure solely to
speed this fairly uncommon operation.

Given that we can extract the necessary information from the existing
data structure, here are a couple of brief ideas:

 1) Cache just a few free XID ranges; when XIDs are freed, you can
extend or merge them together. This would make the operations
cheaper at least, and might cover the bulk of the performance
problems you found.

 2) Delay constructing the new data structure until the application
actually runs out of IDs. Once that happens, you could turn on the
free ID tracking. I can imagine building this incrementally by
finding two free ranges each time the client asks for one and adding
them both to the new data structure until you fail to find any more.

 3) Create an entirely new data structure for the whole resource DB that
would make finding free extents less expensive. I'd suggest a
skip-list as that makes in-order traversal really cheap (to find a
free extent) while also reducing the average per-node cost down
close to one pointer.

In any case, what we really need is credible performance data to use to
compare the old and new implementation. I think we need data that shows:

 1) 'real' application performance. The cairo performance stuff using
Render is probably a good way to measure this.

 2) Memory usage. How much additional memory does this use, and,
ideally, how much additional memory is getting traversed when
manipulating the resource data base. Measuring the X server memory
usage in a couple of situations with and without the changes should
suffice.

 3) Performance of applications running out of XIDs to know whether
the change actually fixes the problem. Have you already constructed
some way of measuring this?

 - clients sometime add resources with already used IDs; it looks like the 
 newer
 resource is returned by get, and that all of them are deleted by free;
 is this a bug in the applications? I decided to not change the external 
 behavior of the
 code in these circumstances (see the functions containing maybe in
 their names)

Are you saying that you're seeing XIDs come over the wire that are
duplicates of those already in the resource database? If so, that would
be a bug in both the client side library and the X server. Every new ID
received should be checked with LEGAL_NEW_RESOURCE which makes sure that
the ID is unique.

However, while the protocol requires that IDs be unique across all
resource types, the server implementation does not. This allows the X
server to associate other data with specific IDs and have that freed
automatically. Looking up IDs always involves the type or class of the
ID, so code can be assured of getting the right data structure back.

Before I added the devPrivates infrastructure, this was the only way for
drivers and extensions to attach data to core X server objects.

I haven't used this feature of the server resource data base in
probably twenty years. However, the DRI2 extension *does* do this, so
you will need to handle it correctly.

-- 
keith.pack...@intel.com



Re: [PATCH 2/6] gpu: host1x: Fix syncpoint wait return value

2013-05-28 Thread Keith Packard
Thierry Reding thierry.red...@gmail.com writes:


 That doesn't sound right. Maybe drmIoctl() needs fixing instead. Looking
 at the history, drmIoctl() was introduced to automatically loop if a
 signal was received (commit 8b9ab108ec1f2ba2b503f713769c4946849b3cb2).
 However the ioctl(3p) manpage doesn't mention that ioctl() returns
 EAGAIN in case it is interrupted by a signal.

EAGAIN is being returned when the GPU is wedged to ask the application
to re-submit the request, which will presumably be held until the  GPU
is un-wedged.

-- 
keith.pack...@intel.com



[ANNOUNCE] libFS 1.0.5

2013-05-28 Thread Alan Coopersmith
libFS is the protocol binding library used by clients of X Font Servers (xfs),
such as xfsinfo, fslsfonts, and the X servers themselves.

This minor bugfix release includes the fix for the security issue recently
reported as CVE-2013-1996, as well as a number of other cleanups of the
memory allocation & error handling code noticed while working on that.

Adam Jackson (1):
  configure: Remove AM_MAINTAINER_MODE

Alan Coopersmith (9):
  Replace deprecated Automake INCLUDES variable with AM_CPPFLAGS
  Get rid of unnecessary casts in FS*alloc calls
  Get rid of unnecessary casts in FSfree calls
  Use NULL instead of 0 for null pointers
  Avoid reading outside bounds when _FSReply receives an Error response
  Avoid accessing freed memory on realloc failure in FSListFontsWithXInfo
  Get rid of more duplication in error cleanup code in FSListFontsWithXInfo
  Sign extension issue and integer overflow in FSOpenServer() 
[CVE-2013-1996]
  libFS 1.0.5

Colin Walters (1):
  autogen.sh: Implement GNOME Build API

Thomas Klausner (1):
  Fix a prototype error

git tag: libFS-1.0.5

http://xorg.freedesktop.org/archive/individual/lib/libFS-1.0.5.tar.bz2
MD5:  e3c77ca27942ebc5eb2ca99f29363515
SHA1: 3a94bc42775f4aa2eac14a51e0043299d7cd31b6
SHA256: 22eb3005dd8053aef7ff82758da5dd59ca9738410bcf847e675780e3a1f96107

http://xorg.freedesktop.org/archive/individual/lib/libFS-1.0.5.tar.gz
MD5:  c380f6c782e47de394fbd3c2774f2bf8
SHA1: dd5b5e71270dcfe4156c0ee5aa4453421fd06a40
SHA256: c4d925393997dbc41cc7f4a871dde3c54039043845e6e3d13c6c887c53c7a1d9

-- 
-Alan Coopersmith-  alan.coopersm...@oracle.com
 Oracle Solaris Engineering - http://blogs.oracle.com/alanc


___
xorg-announce mailing list
xorg-announce@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-announce


How to easily swap the connector order?

2013-05-28 Thread Ing. Daniel Rozsnyó

Hi,
  I am using a configuration for two heads, :0.0 and :0.1:

Section "Device"
    Identifier  "devDn"
    Driver      "radeon"
    BusID       "PCI:0:1:0"
    Option      "monitor-VGA-0" "monDn"
    Screen      0
EndSection

Section "Device"
    Identifier  "devUp"
    Driver      "radeon"
    BusID       "PCI:0:1:0"
    Option      "monitor-HDMI-0" "monUp"
    Screen      1
EndSection

But the HDMI is always assigned to :0.0 and the VGA port is always :0.1.

How can one reverse the order of the detected connectors? I would like 
to have an easy option to swap the two.
I have found that in the old days, a MonitorLayout option was the best 
way to do that, but nowadays it does not work:


(WW) RADEON(0): Option "MonitorLayout" is not used

Is it dependant on something?

The reason to swap the heads here is that the software which accesses 
the two screens has the 0/1 values hard-coded in many places, but the 
monitors are sometimes swapped in the installation, and it would be much 
easier to just swap them in software. This dual-screen way of access is 
also a legacy thing and is not easy to change - it is an embedded 
system, now ported to a G-series T56N APU (in Q7 form factor):


00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI 
Wrestler [Radeon HD 6320]


Thanks,

Daniel





___
xorg-driver-ati mailing list
xorg-driver-ati@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-driver-ati


Re: How to easily swap the connector order?

2013-05-28 Thread Michel Dänzer
On Die, 2013-05-28 at 18:07 +0200, Ing. Daniel Rozsnyó wrote:
 Hi,
   I am using a configuration for two heads, :0.0 and :0.1:
 
 Section Device
 Identifier  devDn
 Driver  radeon
 BusID   PCI:0:1:0
 Option  monitor-VGA-0 monDn
 Screen  0
 EndSection
 
 Section Device
 Identifier  devUp
 Driver  radeon
 BusID   PCI:0:1:0
 Option  monitor-HDMI-0  monUp
 Screen  1
 EndSection
 
 But the HDMI is always assigned to :0.0 and the VGA port is
 always :0.1.
 
 How can one reverse the order of the detected connectors? I would like
 to have an easy option to swap the two.

Option ZaphodHeads should allow you to choose which output(s) to use
in which Section Device. See the radeon manpage.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast |  Debian, X and DRI developer


Re: How to easily swap the connector order?

2013-05-28 Thread Ing. Daniel Rozsnyó

On 05/28/2013 06:47 PM, Michel Dänzer wrote:

On Die, 2013-05-28 at 18:07 +0200, Ing. Daniel Rozsnyó wrote:

Hi,
   I am using a configuration for two heads, :0.0 and :0.1:

Section Device
 Identifier  devDn
 Driver  radeon
 BusID   PCI:0:1:0
 Option  monitor-VGA-0 monDn
 Screen  0
EndSection

Section Device
 Identifier  devUp
 Driver  radeon
 BusID   PCI:0:1:0
 Option  monitor-HDMI-0  monUp
 Screen  1
EndSection

But the HDMI is always assigned to :0.0 and the VGA port is
always :0.1.

How can one reverse the order of the detected connectors? I would like
to have an easy option to swap the two.

Option ZaphodHeads should allow you to choose which output(s) to use
in which Section Device. See the radeon manpage.




Great, now that works:

Section "Device"
    Identifier  "devDn"
    Driver      "radeon"
    BusID       "PCI:0:1:0"
    Option      "monitor-VGA-0" "monDn"
    Option      "ZaphodHeads"   "VGA-0"
    Screen      0
EndSection

Section "Device"
    Identifier  "devUp"
    Driver      "radeon"
    BusID       "PCI:0:1:0"
    Option      "monitor-HDMI-0" "monUp"
    Option      "ZaphodHeads"   "HDMI-0"
    Screen      1
EndSection
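A quick way to confirm that each Zaphod screen got the intended output is to query each screen separately with xrandr. The sketch below is my own addition, not from the thread; it assumes the two-screen setup above with display names :0.0 and :0.1, and needs a running X server to report anything useful:

```shell
# Query each X screen separately; each should list exactly one
# connected output (VGA-0 on :0.0, HDMI-0 on :0.1 for the config above).
# Without a reachable X server it just reports that instead.
for d in :0.0 :0.1; do
    echo "Screen $d:"
    DISPLAY=$d xrandr --query 2>/dev/null | grep ' connected' \
        || echo "  (no X server reachable on $d)"
done
```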


Thank you. I had googled the ZaphodHeads option, but got the feeling it 
is rather meant for some special-purpose multi-seat setup.

Such cryptic option naming is hell.

Can I leave out the monitor-* options? In xorg.conf, I still have two 
Screen objects where the monitors and devices are linked together... not 
sure what is the correct way today.


Daniel



[Bug 50327] CPU power consumption and temperature way too high when using radeon drivers.

2013-05-28 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=50327

--- Comment #13 from Bastian Triller bastian.tril...@gmail.com ---
Applying the changes against 3.8.13 as stated in
https://bugs.freedesktop.org/show_bug.cgi?id=49981#c30 lets the low power
profile change the frequencies. The temperatures drop around 10-15°C.

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: How to easily swap the connector order?

2013-05-28 Thread Alex Deucher

 From: Ing. Daniel Rozsnyó dan...@rozsnyo.com
To: Michel Dänzer mic...@daenzer.net 
Cc: xorg-driver-ati@lists.x.org 
Sent: Tuesday, May 28, 2013 1:03 PM
Subject: Re: How to easily swap the connector order?
 

On 05/28/2013 06:47 PM, Michel Dänzer wrote:
 On Die, 2013-05-28 at 18:07 +0200, Ing. Daniel Rozsnyó wrote:
 Hi,
    I am using a configuration for two heads, :0.0 and :0.1:

 Section Device
          Identifier  devDn
          Driver      radeon
          BusID       PCI:0:1:0
          Option      monitor-VGA-0 monDn
          Screen      0
 EndSection

 Section Device
          Identifier  devUp
          Driver      radeon
          BusID       PCI:0:1:0
          Option      monitor-HDMI-0  monUp
          Screen      1
 EndSection

 But the HDMI is always assigned to :0.0 and the VGA port is
 always :0.1.

 How can one reverse the order of the detected connectors? I would like
 to have an easy option to swap the two.
 Option ZaphodHeads should allow you to choose which output(s) to use
 in which Section Device. See the radeon manpage.



Great, now that works:

Section Device
         Identifier  devDn
         Driver      radeon
         BusID       PCI:0:1:0
         Option      monitor-VGA-0 monDn
         Option      ZaphodHeads   VGA-0
         Screen      0
EndSection

Section Device
         Identifier  devUp
         Driver      radeon
         BusID       PCI:0:1:0
         Option      monitor-HDMI-0  monUp
         Option      ZaphodHeads   HDMI-0
         Screen      1
EndSection


Thank you. I had googled the ZaphodHeads option, but got the feeling it 
is rather meant for some special-purpose multi-seat setup.
Such cryptic option naming is hell.

It has nothing to do with multi-seat.  Most people just use xrandr or whatever 
GUI tool their distro provides to adjust displays on the fly.  The only reason 
you need an xorg.conf like the above is if you want independent X screens on 
the same card.  For historical reasons that mode of operation is called Zaphod 
mode, hence ZaphodHeads.




Can I leave out the monitor-* options? In xorg.conf, I still have two 
Screen objects where the monitors and devices are linked together... not 
sure what is the correct way today.


You only need the Monitor sections if you want to force certain behavior on 
the monitors (e.g., rotation or specific modelines).  If your Monitor sections 
are empty, you can skip them.  Also, you can remove the monitor-OUTPUT 
lines from your config if you just set the Monitor identifier to the name of 
the output.  E.g.,

Section "Monitor"
    Identifier "HDMI-0"
    ...
EndSection

Alex
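Putting Alex's advice together with the working config earlier in the thread, a complete Zaphod-mode xorg.conf might look like the sketch below. This is my own assembly, not from the thread: the Screen and ServerLayout identifiers ("scrDn", "scrUp", "dual") are invented, and real modelines or extra options are omitted.

```
Section "Device"
    Identifier "devDn"
    Driver     "radeon"
    BusID      "PCI:0:1:0"
    Option     "ZaphodHeads" "VGA-0"
    Screen     0
EndSection

Section "Device"
    Identifier "devUp"
    Driver     "radeon"
    BusID      "PCI:0:1:0"
    Option     "ZaphodHeads" "HDMI-0"
    Screen     1
EndSection

Section "Monitor"
    Identifier "VGA-0"
EndSection

Section "Monitor"
    Identifier "HDMI-0"
EndSection

Section "Screen"
    Identifier "scrDn"
    Device     "devDn"
    Monitor    "VGA-0"
EndSection

Section "Screen"
    Identifier "scrUp"
    Device     "devUp"
    Monitor    "HDMI-0"
EndSection

Section "ServerLayout"
    Identifier "dual"
    Screen 0 "scrDn"
    Screen 1 "scrUp" RightOf "scrDn"
EndSection
```

With the Monitor identifiers set to the output names, the monitor-OUTPUT Device options are no longer needed, per Alex's note.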



[Bug 50327] CPU power consumption and temperature way too high when using radeon drivers.

2013-05-28 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=50327

--- Comment #14 from tobi...@yahoo.de ---
I would like to test that, but have no clue how to apply those changes.
Would be nice if someone can give me instructions on that.
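The usual way to apply such changes to a kernel tree is with patch -p1 from the top of the source directory. The self-contained demo below (my own addition, using throwaway placeholder files) only illustrates the mechanics; for the bug above you would save the changes from bug 49981 comment 30 as a .patch file and apply it inside the linux-3.8.13 tree, then rebuild:

```shell
# Self-contained demo of applying a unified diff with patch(1).
# (File names here are placeholders; for the kernel you'd typically run
#  "patch -p1 < fix.patch" from the top of the linux-3.8.13 tree.)
cd "$(mktemp -d)"
printf 'profile=high\n' > radeon.conf         # "old" file
printf 'profile=low\n'  > radeon.conf.tmp     # "new" file
diff -u radeon.conf radeon.conf.tmp > fix.patch || true  # diff exits 1 when files differ
rm radeon.conf.tmp
patch -p0 --dry-run < fix.patch   # verify the patch applies cleanly first
patch -p0 < fix.patch
grep profile radeon.conf          # now shows profile=low
```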
