Re: [ANNOUNCE] xf86-video-intel 2.5.99.2

2009-01-15 Thread Vasily Khoruzhick
On Friday 16 January 2009 05:20:17 Giovanni Masucci wrote:

> If I can ask, are these 6 patches going to enter the next 2.6.28.x
> releases, or will they just be in 2.6.29?

Just out of curiosity, has anybody got this driver working stably and fast on 
a GMA950 with the 2.6.28 kernel (plus these 6 patches)?

I've just tried xf86-video-intel 2.6.0, xorg-server-1.5.99.901 and mesa-7.3_rc2,
and I still get artefacts with UXA (the same as on 
http://fenix-fen.at.tut.by/screen-3.png), and the X server hangs (with no way to 
stop it short of restarting the whole system) after using 3D for ~2-3 minutes 
(with wine it happens even faster :))
With EXA (and DRI1) I get a message like "No MTRR for 0xc000" in dmesg every 
time the X server starts, and 3D performance is terrible (7-10 fps in Quake3)

Should I file a bug on bugs.freedesktop.org, or is it a known issue?

Regards
Vasily



multiple screens - pci-e

2009-01-15 Thread telenet
I have a rare error, but I don't know whether it is a driver/xorg or OS error:
I have a computer with multiple screens (8 or 12), Nvidia as the driver, and 
Debian Lenny as the OS.
When I start the computer, one or two screens/monitors do not start or wake up, 
but when I reboot (reboot button) all screens work perfectly.
Only on a cold start do I have screens that are not working.
Debian Lenny with the 180.22 nvidia driver does not work in this case.
With Debian Etch and nvidia 100.14.19 there is no problem at all.
Again, I do not know whether it is an xorg or nvidia issue.
I replaced cables, monitors and the VGA card with no solution: one or two 
screens are not working at startup of the PC; only after a reboot do I get all 
screens.

gd

Re: Window Manager: Intercepting mouse events

2009-01-15 Thread Rémi Cardona
On 16/01/2009 02:02, Bipin George Mathew wrote:
> Is it possible to use a combination of XGrabButton on the root window
> and use XSendEvent to send the transformed co-ordinates?

Here's a snippet from XSendEvent's man page:

---
The XSendEvent function identifies the destination window, determines 
which clients should receive the specified events, and ignores any 
active grabs.
---

It's still the server that determines which window gets the event. In a 
composited server, either the server needs to know the 3D geometry of 
each window (which I think is a bad idea), or you need to be able to 
specify which top-level window will receive the event.

But I think with XGrabButton and XSendEvent, you'll run into other 
problems, such as receiving the input you've generated yourself. So for 
every click, you would have to Ungrab, SendEvent and Grab again... At 
least, that's how I understand it.
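
To make that concrete, here's a rough, untested sketch of that cycle (my
illustration only; wm_transform() stands in for whatever coordinate mapping
the WM applies, and error handling is omitted):

#include <X11/Xlib.h>

extern void wm_transform(int *x, int *y);   /* hypothetical WM helper */

static void forward_click(Display *dpy, Window root, Window target,
                          XButtonEvent *ev)
{
    XEvent out;

    /* drop the grab so the synthesized event isn't delivered back to us */
    XUngrabButton(dpy, ev->button, AnyModifier, root);

    out.xbutton = *ev;                /* copy, then rewrite the destination */
    out.xbutton.window = target;      /* top-level window chosen by the WM */
    wm_transform(&out.xbutton.x, &out.xbutton.y);
    XSendEvent(dpy, target, True, ButtonPressMask, &out);

    /* re-establish the grab for the next click */
    XGrabButton(dpy, ev->button, AnyModifier, root, True, ButtonPressMask,
                GrabModeAsync, GrabModeAsync, None, None);
    XFlush(dpy);
}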

Cheers

-- 
Rémi Cardona
LRI, INRIA
remi.card...@lri.fr
r...@gentoo.org


[ANNOUNCE] xinput 1.4.0

2009-01-15 Thread Peter Hutterer
The main feature added in this version is support for listing and changing
input device properties.

Note that this release is also MPX/XI2-aware. XI2 is still undergoing changes,
so XI2 support is only enabled if you build it on a machine that's running
libXi from git.
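
For the curious, the underlying libXi calls look roughly like this (an
illustrative sketch only, assuming libXi >= 1.2 and an already-opened XDevice;
error handling elided):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput.h>

static void list_props(Display *dpy, XDevice *dev)
{
    int i, nprops = 0;
    Atom *props = XListDeviceProperties(dpy, dev, &nprops);

    for (i = 0; i < nprops; i++) {
        char *name = XGetAtomName(dpy, props[i]);
        /* prints roughly what `xinput list-props <device>` shows */
        printf("\t%s (%lu)\n", name, (unsigned long) props[i]);
        XFree(name);
    }
    XFree(props);
}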

Cheers,
  Peter

Benjamin Close (2):
  Correct the check for XI2, not every shell supports ==, but they do =
  Clean up the detection of XI2

Bryce Harrington (1):
  Add --list-props, --watch-props and --set-int-prop options to man page.

Julien Cristau (1):
  Change xinput_CFLAGS to AM_CFLAGS to clear automake-1.10 warning

Paulo Cesar Pereira de Andrade (2):
  Mandriva patches to xinput.
  Compile warning fix.

Peter Hutterer (28):
  Print out attachment of slave devices.
  Add support for device hierarchy changes.
  Add --loop to "xinput list". Re-prints devices when hierarchy changes.
  Register for DeviceClassesChangedEvents, reprint the list when we get one.
  Add support to set the client pointer.
  Don't overwrite daemon with argc.
  Replace Fred's name in the main license text with a general "The authors".
  Remove deprecated imakefile.
  Update XiSelectEvent API usage, has device argument now.
  Modify to work with the changes in the XChangeDeviceHierarchy API.
  Use new XQueryInputVersion() request to tell the server we can do XI 2.
  Use find_device_info instead of requireing device ids on the cmdline.
  Test for XI2 functions in libXi, add #ifdefs to build in non-XI2 setups.
  Remove ChangeLog, is autogenerated now anyway.
  Add list-props, set-int-prop and watch-props parameters.
  Property code: If the Atom specified was an Atom, actually use it too.
  Print property values in addition to their names.
  Don't require extension devices for button mapping.
  Use XI 1.5 property events.
  Require inputproto 1.9.99.4
  Use updated property events.
  Add --delete-prop option.
  Require inputproto 1.9.99.5
  Require libXi 1.2 and inputproto 1.5.
  Fix wrong type conversion in listing Atom properties.
  Don't linebreak after listing a string or atom property.
  Add set-atom-prop to set properties containing other properties.
  xinput 1.4.0

Sascha Hlusiak (2):
  Add --get-button-map option.
  Call XSync instead XFlush to be able to handle errors

Simon Thum (1):
  Add set-float-prop option to set properties using floating point numbers.

git tag: xinput-1.4.0

http://xorg.freedesktop.org/archive/individual/app/xinput-1.4.0.tar.bz2
MD5: ef43538bb3b445d2d69d5adbf76c149e  xinput-1.4.0.tar.bz2
SHA1: 3caa25b24a2b5c7b00ab4a781999a76f47e92827  xinput-1.4.0.tar.bz2

http://xorg.freedesktop.org/archive/individual/app/xinput-1.4.0.tar.gz
MD5: d063e5e3a34ce3f866858b4930aed48c  xinput-1.4.0.tar.gz
SHA1: ac89977347df97a1ea72cb2ce1fb52c66edee25c  xinput-1.4.0.tar.gz




Fedora 10, Xorg 7.4 and US15W - Poulsbo - please help - I'm stuck

2009-01-15 Thread Dan Naughton
Are there drivers for the US15W / Poulsbo chipset?  I just got the install
done with Fedora 10 in text mode, and it tanked setting up the xserver.
From the Xorg.0.log, it looks like it tried every driver, then failed.  I
was hoping the "intel" driver was the answer, but I guess that doesn't
support the US15W.  I tried the Intel site.  They have the IEGD drivers
for Poulsbo as binaries, but they only cover up to xorg 7.3?  (I tried them
anyway, and they failed.)

If anyone knows how to get xorg 7.4 running with Poulsbo, can someone help
me out?

Thanks for helping.

Re: [ANNOUNCE] xf86-video-intel 2.5.99.2

2009-01-15 Thread Giovanni Masucci
On Friday 16 January 2009 04:12:46 Jin, Gordon wrote:
> Sami Farin wrote on Friday, January 16, 2009 2:29 AM:
> > On Tue, Jan 13, 2009 at 17:24:10 +0800, Jin, Gordon wrote:
> >> Tino Keitel wrote on Friday, January 09, 2009 3:45 AM:
> >>> On Thu, Jan 08, 2009 at 16:04:55 +0800, Zhenyu Wang wrote:
>  Subject: [ANNOUNCE] xf86-video-intel 2.5.99.2
> >>>
> >>> I'd like to know how/what can/should be tested before the release.
> >>> What versions of kernel/xserver/mesa/drm/whatever are required?  And
> >>> for what features (XvMC, UXA, DRI, DRI2, GEM, KMS, etc.), and what
> >>> chips can use which features?
> >>
> >> Tino,
> >>
> >> Thanks for the question, and sorry for my late reply.
> >>
> >> In general, the release component info is maintained at
> >> http://intellinuxgraphics.org/download.html.
> >> So at that page you can find a recommended package (what we call
> >> 2008Q4 release, and it's -rc3 for now), with:
> >> xf86-video-intel: 2.6-branch. It's tagged as 2.5.99.2 for now.
> >> mesa: intel-2008-q4 branch. It's forked from master at some point
> >> (early Dec. 2008) and cherry-picked patches from master on demand,
> >> so a little more conservative than master. Of course master tip is
> >> supposed to work too, but not validated by Intel. libdrm: master
> >> branch. (note >2.4.2 is required for xf86-video-intel 2.6, and tip
> >> is recommended)
> >> kernel: Eric's drm-intel tree. For 2008Q4, we base on 2.6.28 kernel.
> >> So we are recommending drm-intel-2.6.28 branch, which adds 5 patches
> >> on top of 2.6.28.
> >
> > Can you put all the individual patches for 2.6.28
> > available at http://intellinuxgraphics.org/2008Q4.html ?
>
> Good suggestion. I'm putting up the combination of the 6 patches, with a link
> labeled "6 patches", on that page.
If I can ask, are these 6 patches going to enter the next 2.6.28.x releases, 
or will they just be in 2.6.29?


Re: [PATCH] : quirk for AOpen MP45

2009-01-15 Thread Zhenyu Wang
On 2009.01.10 09:15:36 +0100, Vincent Mussard wrote:
> Hi
> 
> I own an AOpen MP45 mini-pc which doesn't have an LVDS output although 
> xorg reports one.
> As with the other mini-PCs, this patch solves the problem.
> 
> Thanks
> 
> Vincent
> 
> ---
> 
> diff -Naubr xf86-video-intel-2.5.99.2/src/i830_quirks.c xf86-video-intel-2.5.99.2.new/src/i830_quirks.c
> --- xf86-video-intel-2.5.99.2/src/i830_quirks.c 2009-01-08 07:32:38.0 +0100
> +++ xf86-video-intel-2.5.99.2.new/src/i830_quirks.c 2009-01-10 08:49:18.0 +0100
> @@ -233,6 +233,7 @@
> { PCI_CHIP_I915_GM, 0xa0a0, SUBSYS_ANY, quirk_ignore_lvds },
> { PCI_CHIP_I945_GM, 0xa0a0, SUBSYS_ANY, quirk_ignore_lvds },
> { PCI_CHIP_I965_GM, 0xa0a0, SUBSYS_ANY, quirk_ignore_lvds },
> +{ PCI_CHIP_GM45_GM, 0xa0a0, SUBSYS_ANY, quirk_ignore_lvds },
> { PCI_CHIP_I965_GM, 0x8086, 0x1999, quirk_ignore_lvds },
> 
> /* Apple Mac mini has no lvds, but macbook pro does */
> 

The 2.6 branch doesn't have my LVDS detect patch, as I'd like to get
more testing before putting it in stable.
Could you try git master to see if LVDS is detected correctly?
I know some AOpen machines might still not work, in which case we can use
your quirk.


RE: [ANNOUNCE] xf86-video-intel 2.5.99.2

2009-01-15 Thread Jin, Gordon
Sami Farin wrote on Friday, January 16, 2009 2:29 AM:
> On Tue, Jan 13, 2009 at 17:24:10 +0800, Jin, Gordon wrote:
>> Tino Keitel wrote on Friday, January 09, 2009 3:45 AM:
>>> On Thu, Jan 08, 2009 at 16:04:55 +0800, Zhenyu Wang wrote:
 
 Subject: [ANNOUNCE] xf86-video-intel 2.5.99.2
>>> 
>>> I'd like to know how/what can/should be tested before the release.
>>> What versions of kernel/xserver/mesa/drm/whatever are required?  And
>>> for what features (XvMC, UXA, DRI, DRI2, GEM, KMS, etc.), and what
>>> chips can use which features?
>> 
>> Tino,
>> 
>> Thanks for the question, and sorry for my late reply.
>> 
>> In general, the release component info is maintained at
>> http://intellinuxgraphics.org/download.html. 
>> So at that page you can find a recommended package (what we call
>> 2008Q4 release, and it's -rc3 for now), with: 
>> xf86-video-intel: 2.6-branch. It's tagged as 2.5.99.2 for now.
>> mesa: intel-2008-q4 branch. It's forked from master at some point
>> (early Dec. 2008) and cherry-picked patches from master on demand,
>> so a little more conservative than master. Of course master tip is
>> supposed to work too, but not validated by Intel. libdrm: master
>> branch. (note >2.4.2 is required for xf86-video-intel 2.6, and tip
>> is recommended)
>> kernel: Eric's drm-intel tree. For 2008Q4, we base on 2.6.28 kernel.
>> So we are recommending drm-intel-2.6.28 branch, which adds 5 patches
>> on top of 2.6.28.  
> 
> Can you put all the individual patches for 2.6.28
> available at http://intellinuxgraphics.org/2008Q4.html ?

Good suggestion. I'm putting up the combination of the 6 patches, with a link
labeled "6 patches", on that page.

Thanks
Gordon


Re: [PATCH] Count the number of logically down buttons in buttonsDown

2009-01-15 Thread Peter Hutterer
On Thu, Jan 15, 2009 at 08:22:17PM -0500, Thomas Jaeger wrote:
> From d6ea6d45d5d3ca74bb665f32439f440b30a8939d Mon Sep 17 00:00:00 2001
> From: Thomas Jaeger 
> Date: Sat, 20 Dec 2008 16:17:02 +0100
> Subject: [PATCH] Don't release grabs unless all buttons are up
> 
> Previously, only buttons <= 5 would count here, but the core protocol
> allows for 255 buttons.
> 
> http://lists.freedesktop.org/archives/xorg/2009-January/042092.html

Pushed, thanks again.

Cheers,
  Peter

> ---
>  Xi/exevents.c |2 +-
>  dix/events.c  |2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/Xi/exevents.c b/Xi/exevents.c
> index f3f9d39..6bf9e56 100644
> --- a/Xi/exevents.c
> +++ b/Xi/exevents.c
> @@ -1118,7 +1118,7 @@ ProcessOtherEvent(xEventPtr xE, DeviceIntPtr device, int count)
>  	    xE->u.u.detail = key;
>  	    return;
>  	}
> -    if (!b->state && device->deviceGrab.fromPassiveGrab)
> +    if (!b->buttonsDown && device->deviceGrab.fromPassiveGrab)
>          deactivateDeviceGrab = TRUE;
>  }
>  
> diff --git a/dix/events.c b/dix/events.c
> index a042089..e23cf8f 100644
> --- a/dix/events.c
> +++ b/dix/events.c
> @@ -3929,7 +3929,7 @@ ProcessPointerEvent (xEvent *xE, DeviceIntPtr mouse, int count)
>  	    if (xE->u.u.detail == 0)
>  		return;
>  	    filters[mouse->id][Motion_Filter(butc)] = MotionNotify;
> -	    if (!butc->state && mouse->deviceGrab.fromPassiveGrab)
> +	    if (!butc->buttonsDown && mouse->deviceGrab.fromPassiveGrab)
>  		deactivateGrab = TRUE;
>  	    break;
>  	default:
> -- 
> 1.6.0.6



Re: [PATCH] Count the number of logically down buttons in buttonsDown

2009-01-15 Thread Thomas Jaeger
Peter Hutterer wrote:
> On Mon, Jan 05, 2009 at 11:55:40AM -0500, Thomas Jaeger wrote:
>> From 3f8ba578ad18b7135031197f6ec5145afcd1479a Mon Sep 17 00:00:00 2001
>> From: Thomas Jaeger 
>> Date: Mon, 22 Dec 2008 00:55:09 +0100
>> Subject: [PATCH] Count the number of logically down buttons in buttonsDown
>>
>> This fixes the following bug.  Assuming your window manager grabs
>> Alt+Button1 to move windows, map Button3 to 0 via XSetPointerMapping,
>> then press the physical button 3 (this shouldn't have any effect), press
>> Alt and then button 1.  The press event is delivered to the application
>> instead of firing the grab.
> 
> Signed off and pushed (finally). Thanks for the patch.
> Can you send me the updated version of the other patch please, AFAICT there
> was a minor change missing.

Thanks.  This one should do it now.
From d6ea6d45d5d3ca74bb665f32439f440b30a8939d Mon Sep 17 00:00:00 2001
From: Thomas Jaeger 
Date: Sat, 20 Dec 2008 16:17:02 +0100
Subject: [PATCH] Don't release grabs unless all buttons are up

Previously, only buttons <= 5 would count here, but the core protocol
allows for 255 buttons.

http://lists.freedesktop.org/archives/xorg/2009-January/042092.html
---
 Xi/exevents.c |2 +-
 dix/events.c  |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Xi/exevents.c b/Xi/exevents.c
index f3f9d39..6bf9e56 100644
--- a/Xi/exevents.c
+++ b/Xi/exevents.c
@@ -1118,7 +1118,7 @@ ProcessOtherEvent(xEventPtr xE, DeviceIntPtr device, int count)
 	xE->u.u.detail = key;
 	return;
 	}
-    if (!b->state && device->deviceGrab.fromPassiveGrab)
+    if (!b->buttonsDown && device->deviceGrab.fromPassiveGrab)
         deactivateDeviceGrab = TRUE;
 }
 
diff --git a/dix/events.c b/dix/events.c
index a042089..e23cf8f 100644
--- a/dix/events.c
+++ b/dix/events.c
@@ -3929,7 +3929,7 @@ ProcessPointerEvent (xEvent *xE, DeviceIntPtr mouse, int count)
 	if (xE->u.u.detail == 0)
 		return;
 filters[mouse->id][Motion_Filter(butc)] = MotionNotify;
-	if (!butc->state && mouse->deviceGrab.fromPassiveGrab)
+	if (!butc->buttonsDown && mouse->deviceGrab.fromPassiveGrab)
 		deactivateGrab = TRUE;
 	break;
 	default:
-- 
1.6.0.6


Re: Window Manager: Intercepting mouse events

2009-01-15 Thread Bipin George Mathew
Is it possible to use a combination of XGrabButton on the root window and
use XSendEvent to send the transformed co-ordinates? I guess the
shortcoming of doing this is that applications may not honor synthesized
events.


On Mon, Jan 12, 2009 at 3:22 PM, Rémi Cardona  wrote:

> On 12/01/2009 21:29, Bipin George Mathew wrote:
>
>  I am writing a window manager where I am transforming the window
>> contents (using the composite extensions). After applying the
>> transformation, I also need to ensure that mouse events are transformed
>> and redirected to the XClients appropriately. What is the best way to do
>> so? I came across this X Event extension - Xevie - Is this extension
>> recommended for this WM use-case?
>>
>
> There's no official extension to do that. And XEvIE has been broken for
> years and was removed a couple weeks ago from master. Besides, it didn't do
> what you want.
>
> Compiz folks had Xserver patches to add mesh-type OpenGL-like transforms to
> windows. Don't know what state those patches are in...
>
> Metisse (shameless plug) has working input-redirection but it requires an
> additional X server process.
>
> Both approaches have their advantages and shortcomings, pick your poison.
> :)
>
> Cheers
>
> --
> Rémi Cardona
> LRI, INRIA
> remi.card...@lri.fr
> r...@gentoo.org
>

Re: [PATCH] Count the number of logically down buttons in buttonsDown

2009-01-15 Thread Peter Hutterer
On Mon, Jan 05, 2009 at 11:55:40AM -0500, Thomas Jaeger wrote:
> From 3f8ba578ad18b7135031197f6ec5145afcd1479a Mon Sep 17 00:00:00 2001
> From: Thomas Jaeger 
> Date: Mon, 22 Dec 2008 00:55:09 +0100
> Subject: [PATCH] Count the number of logically down buttons in buttonsDown
> 
> This fixes the following bug.  Assuming your window manager grabs
> Alt+Button1 to move windows, map Button3 to 0 via XSetPointerMapping,
> then press the physical button 3 (this shouldn't have any effect), press
> Alt and then button 1.  The press event is delivered to the application
> instead of firing the grab.
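
As an aside, the first step of that recipe is easy to script; a minimal,
untested sketch:

#include <X11/Xlib.h>

static void disable_button3(Display *dpy)
{
    unsigned char map[32];
    int n = XGetPointerMapping(dpy, map, 32);

    if (n >= 3) {
        map[2] = 0;                      /* logical 0 disables the button */
        XSetPointerMapping(dpy, map, n); /* may return MappingBusy if held */
    }
}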

Signed off and pushed (finally). Thanks for the patch.
Can you send me the updated version of the other patch please, AFAICT there
was a minor change missing.

Cheers,
  Peter

> ---
>  Xi/exevents.c  |8 
>  include/inputstr.h |6 +-
>  2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/Xi/exevents.c b/Xi/exevents.c
> index 2aa3161..b4359a8 100644
> --- a/Xi/exevents.c
> +++ b/Xi/exevents.c
> @@ -895,10 +895,10 @@ UpdateDeviceState(DeviceIntPtr device, xEvent* xE, int count)
>  	*kptr |= bit;
>  	if (device->valuator)
>  	    device->valuator->motionHintWindow = NullWindow;
> -	b->buttonsDown++;
> -	b->motionMask = DeviceButtonMotionMask;
>  	if (!b->map[key])
>  	    return DONT_PROCESS;
> +	b->buttonsDown++;
> +	b->motionMask = DeviceButtonMotionMask;
>  	if (b->map[key] <= 5)
>  	    b->state |= (Button1Mask >> 1) << b->map[key];
>  	SetMaskForEvent(device->id, Motion_Filter(b), DeviceMotionNotify);
> @@ -927,10 +927,10 @@ UpdateDeviceState(DeviceIntPtr device, xEvent* xE, int count)
>  	*kptr &= ~bit;
>  	if (device->valuator)
>  	    device->valuator->motionHintWindow = NullWindow;
> -	if (b->buttonsDown >= 1 && !--b->buttonsDown)
> -	    b->motionMask = 0;
>  	if (!b->map[key])
>  	    return DONT_PROCESS;
> +	if (b->buttonsDown >= 1 && !--b->buttonsDown)
> +	    b->motionMask = 0;
>  	if (b->map[key] <= 5)
>  	    b->state &= ~((Button1Mask >> 1) << b->map[key]);
>  	SetMaskForEvent(device->id, Motion_Filter(b), DeviceMotionNotify);
> diff --git a/include/inputstr.h b/include/inputstr.h
> index 4719d37..515b6aa 100644
> --- a/include/inputstr.h
> +++ b/include/inputstr.h
> @@ -185,7 +185,11 @@ typedef struct _ValuatorClassRec {
>  
>  typedef struct _ButtonClassRec {
>      CARD8	numButtons;
> -    CARD8	buttonsDown;	/* number of buttons currently down */
> +    CARD8	buttonsDown;	/* number of buttons currently down.
> +				   This counts logical buttons, not
> +				   physical ones, i.e. if some buttons
> +				   are mapped to 0, they're not counted
> +				   here */
>      unsigned short	state;
>      Mask	motionMask;
>      CARD8	down[DOWN_LENGTH];
> -- 
> 1.6.0.4
> 



Re: Proper way to enable port access tracing with current xserver

2009-01-15 Thread Alex Deucher
On Thu, Jan 15, 2009 at 4:53 PM, Alex Villacís Lasso
 wrote:
> Alex Deucher wrote:
>> On Thu, Jan 15, 2009 at 3:10 PM, Alex Villacís Lasso
>>  wrote:
>>
>>> I am trying to enable I/O port tracing on current xserver head in my home
>>> machine (Linux 2.6.28 on x86 Pentium 4 32-bits, ProSavageDDR-K as primary
>>> card, Oak OTI64111 as secondary card) in order to learn about the register
>>> initialization for the video BIOS of both the Savage and the Oak chipsets:
>>>
>>> * For savage, I want to eventually see the POST port accesses as they occur
>>> in VESA, so that the current driver can do the same port enabling on the
>>> case of a savage as secondary card. Currently, the xorg driver can
>>> initialize a secondary savage without BIOS (but see below for caveat), but
>>> the colors are washed out and horrible artifacts appear on any attempt to
>>> accelerate operations. Same issue happens with the savagefb kernel
>>> framebuffer driver.
>>> * For oak, I want to peek at the register initialization for mode switching
>>> in VESA, in order to have better understanding towards writing a driver for
>>> the chipset.
>>>
>>
>> http://people.freedesktop.org/~airlied/xresprobe-mjg59-0.4.21.tar.gz
>>
>> This will dump io accesses when you execute bios code using the
>> included x86 emulator.
>>
>> Alex
>>
>>
> From a quick skim over the contents of the file, I see an x86emu
> directory. I think I have seen a directory with that name in the xserver
> sources. Is it safe to switch to x86emu on 32-bit x86 in the xserver
> source? Or do I have to keep in mind some special considerations?

We already do.  The X server uses x86emu by default now for x86.

Alex


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Albert Vilella
How about a "gdm restart"? That is effectively an X server restart, right?

Then it's only about switching on and off the hardware, right?

On Thu, Jan 15, 2009 at 7:20 PM, Glynn Clements wrote:

>
> Stephane Marchesin wrote:
>
> > and if you want to keep your session in between, we lack
> > - X.Org infrastructure to hand a session from a graphics driver to
> > another (there are a million of possible problems here)
>
> Right; like a million display parameters which a client can query, but
> for which there is no mechanism to request notification of changes,
> and thus are (implicitly) constant over the lifetime of the client.
>
> I know that the X developers don't consider incompatible changes to be
> completely out of the question, but if you're talking about a
> particular screen suddenly changing e.g. its glGet* values, I don't
> see that happening.
>
> And I don't think that it's realistic for the server to expose a
> single set of parameters for two very different graphics chips.
>
> It's more realistic to treat this as a traditional multiple-"Screen"
> setup, with the ability to enable and disable screens. Obviously,
> windows would have to either be opened on the appropriate screen
> (programs which need the 3D GPU on the screen which has one), or the
> application/toolkit would need to explicitly provide migration.
>
> --
> Glynn Clements 

Re: No video overlay on Intel X4500HD

2009-01-15 Thread Jeffrey Baker
On Wed, Jan 14, 2009 at 4:32 PM, Keith Packard  wrote:
> but then we got distracted

Pretty much sums up the state of the intel driver from August 2006 to present.

-jwb


Current tinderbox regression (xconsole)

2009-01-15 Thread Chris Ball
http://tinderbox.x.org/builds/2009-01-15-0023/logs/xconsole/#build

xconsole.c:185:45: error: sys/stropts.h: No such file or directory

xconsole doesn't build on Fedora 9+ machines, because sys/stropts.h
went away.  Anyone know what the source fix/conditional include should
look like?
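
Presumably something along these lines would do (an untested sketch, assuming
configure gains an AC_CHECK_HEADERS([sys/stropts.h]) test that defines
HAVE_SYS_STROPTS_H):

#ifdef HAVE_SYS_STROPTS_H
# include <sys/stropts.h>   /* STREAMS ioctls, removed on newer systems */
#else
# include <sys/ioctl.h>     /* plain ioctl() is all that remains */
#endif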

- Chris.
-- 
Chris Ball   


Re: Proper way to enable port access tracing with current xserver

2009-01-15 Thread Alex Villacís Lasso
Alex Deucher wrote:
> On Thu, Jan 15, 2009 at 3:10 PM, Alex Villacís Lasso
>  wrote:
>   
>> I am trying to enable I/O port tracing on current xserver head in my home
>> machine (Linux 2.6.28 on x86 Pentium 4 32-bits, ProSavageDDR-K as primary
>> card, Oak OTI64111 as secondary card) in order to learn about the register
>> initialization for the video BIOS of both the Savage and the Oak chipsets:
>>
>> * For savage, I want to eventually see the POST port accesses as they occur
>> in VESA, so that the current driver can do the same port enabling on the
>> case of a savage as secondary card. Currently, the xorg driver can
>> initialize a secondary savage without BIOS (but see below for caveat), but
>> the colors are washed out and horrible artifacts appear on any attempt to
>> accelerate operations. Same issue happens with the savagefb kernel
>> framebuffer driver.
>> * For oak, I want to peek at the register initialization for mode switching
>> in VESA, in order to have better understanding towards writing a driver for
>> the chipset.
>> 
>
> http://people.freedesktop.org/~airlied/xresprobe-mjg59-0.4.21.tar.gz
>
> This will dump io accesses when you execute bios code using the
> included x86 emulator.
>
> Alex
>
>   
From a quick skim over the contents of the file, I see an x86emu 
directory. I think I have seen a directory with that name in the xserver 
sources. Is it safe to switch to x86emu on 32-bit x86 in the xserver 
source? Or do I have to keep in mind some special considerations?

-- 
perl -e '$x=2.4;print sprintf("%.0f + %.0f = %.0f\n",$x,$x,$x+$x);'



Re: Proper way to enable port access tracing with current xserver

2009-01-15 Thread Alex Deucher
On Thu, Jan 15, 2009 at 3:10 PM, Alex Villacís Lasso
 wrote:
> I am trying to enable I/O port tracing on current xserver head in my home
> machine (Linux 2.6.28 on x86 Pentium 4 32-bits, ProSavageDDR-K as primary
> card, Oak OTI64111 as secondary card) in order to learn about the register
> initialization for the video BIOS of both the Savage and the Oak chipsets:
>
> * For savage, I want to eventually see the POST port accesses as they occur
> in VESA, so that the current driver can do the same port enabling on the
> case of a savage as secondary card. Currently, the xorg driver can
> initialize a secondary savage without BIOS (but see below for caveat), but
> the colors are washed out and horrible artifacts appear on any attempt to
> accelerate operations. Same issue happens with the savagefb kernel
> framebuffer driver.
> * For oak, I want to peek at the register initialization for mode switching
> in VESA, in order to have better understanding towards writing a driver for
> the chipset.

http://people.freedesktop.org/~airlied/xresprobe-mjg59-0.4.21.tar.gz

This will dump io accesses when you execute bios code using the
included x86 emulator.

Alex


Re: No video overlay on Intel X4500HD

2009-01-15 Thread Barry Scott
Keith Packard wrote:
> On Wed, 2009-01-14 at 19:35 +, Daniel Gultsch wrote:
>   
>> Hi Guys,
>>
>> My major problem is that i dont have the "Intel(R) Video Overlay" but
>> only the "Intel(R) Textured Video" - as reported by xvinfo | grep -i
>> adaptor. This causes tearing and i really need to watch movies :-)
>> 
>
> The textured adapter causes tearing because it doesn't synchronize the
> screen update to the vblank. Synchronizing this operation involves
> either:
>  A. queuing a command to stop the graphics engine until the vblank
> interval and then queuing the rendering commands right after
> that. 
>  B. waiting for the vblank interval to occur and then quickly
> queueing suitable rendering commands to the graphics engine
>
>   
Or change the client to wait for VBLANK before calling XPutImage.
(This only works if the movie player is the only client waiting on VBLANK,
because of trade-offs in the Intel DRM code.)
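
For illustration, a rough sketch of that client-side wait using libdrm's
drmWaitVBlank (my example, untested; the XPutImage geometry and all error
handling are elided):

#include <xf86drm.h>
#include <X11/Xlib.h>

static void put_frame_synced(int drm_fd, Display *dpy, Drawable d, GC gc,
                             XImage *img, unsigned int w, unsigned int h)
{
    drmVBlank vbl;

    vbl.request.type = DRM_VBLANK_RELATIVE;  /* next vblank from now */
    vbl.request.sequence = 1;
    vbl.request.signal = 0;
    drmWaitVBlank(drm_fd, &vbl);             /* blocks until the interval */

    XPutImage(dpy, d, gc, img, 0, 0, 0, 0, w, h);
    XFlush(dpy);                             /* push the frame out at once */
}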

Barry



Proper way to enable port access tracing with current xserver

2009-01-15 Thread Alex Villacís Lasso
I am trying to enable I/O port tracing on current xserver head in my 
home machine (Linux 2.6.28 on x86 Pentium 4 32-bits, ProSavageDDR-K as 
primary card, Oak OTI64111 as secondary card) in order to learn about 
the register initialization for the video BIOS of both the Savage and 
the Oak chipsets:


* For savage, I want to eventually see the POST port accesses as they 
occur in VESA, so that the current driver can do the same port enabling 
on the case of a savage as secondary card. Currently, the xorg driver 
can initialize a secondary savage without BIOS (but see below for 
caveat), but the colors are washed out and horrible artifacts appear on 
any attempt to accelerate operations. Same issue happens with the 
savagefb kernel framebuffer driver.
* For oak, I want to peek at the register initialization for mode 
switching in VESA, in order to have better understanding towards writing 
a driver for the chipset.


Now, I tried to perform the changes shown in the attached patch, but 
without success - the server shows no output that hints at a trace. I 
tried disabling ioperm() as the code comments suggest, but it seems to 
be made redundant by the iopl(3) call on the same line, and if I disable 
both, I get a SIGSEGV as the port accesses are not enabled in vgahw. So 
how should I properly enable I/O port tracing in the current xserver? 
Maybe the code comments are out of date?


Another question I have is this: as far as I understand, PCI video cards 
have to run the POST (or do an equivalent operation) in order to execute 
the chipset-specific hocus-pocus that enables legacy vga port access 
(0x3c0 through 0x3df). So only one chipset can be mapped into that I/O 
address range at a time (right?). When initializing a secondary card via 
POST, the real-mode code of the secondary card will also attempt to map 
its own registers into that range (I would assume). So what steps are 
taken in the xserver to move the primary card out of the way (if at all) 
so that the second card initializes properly? What happens if the 
drivers for both chipsets require some access to the legacy I/O ports in 
order to perform normal operations? (for example, if both are driven by 
the VESA driver) How can I tell (from lspci output or from other 
sources) which card is currently mapped into the legacy I/O range? These 
questions arise from the fact that the current xserver head, despite 
having a correction for the libpciaccess reading of the ROM 
(https://bugs.freedesktop.org/show_bug.cgi?id=18160), still locks up in 
int10 after reading the Oak ROM BIOS and trying to initialize it as a 
secondary card (with savage as primary). I want to check whether the 
wrong PCI chipset is mapped at the VGA I/O port range, or whether the 
wrong POST is being executed. I know that it is not enough to look at 
the enabled status of the PCI card, since I have enabled both on my 
machine, and the primary one (the one initialized at boot time) still is 
in control of the VGA I/O port range.


When I boot my home machine with Oak as the primary, the savage PCI card 
ends up disabled (as reported in lspci). If I then attempt to run the 
savage driver for xserver without further ado, and without using the VGA 
BIOS to set modes, the xserver hangs (endless loop trying to enable the 
acceleration registers). I have to manually enable the card with setpci 
or sysfs before the driver initializes it properly. Somewhere the 
xserver should be doing this for me. Where? In the xserver code, or the 
driver code? Is it ok to use libpciaccess to enable the card from within 
the savage driver?


Still another question. From the savage driver code, I see that it has a 
replica of the VGA register range as a range of MMIO declared as one 
resource in the PCI card. This allows VGA legacy registers to be 
programmed when I boot with Oak as primary, without having the savage 
driver run the POST (which ties back to the first issue). Is this a 
requirement for all PCI cards, to have a replica of VGA registers 
somewhere to be programmed when VGA legacy mapping is not available? Or 
are there known PCI cards that require legacy I/O ports to be enabled 
for basic mode switching?


--
perl -e '$x=2.4;print sprintf("%.0f + %.0f = %.0f\n",$x,$x,$x+$x);'

diff -ur /home/alex/instaladores-linux/xserver/xorg-git/xserver/hw/xfree86/int10/helper_exec.c xserver/hw/xfree86/int10/helper_exec.c
--- /home/alex/instaladores-linux/xserver/xorg-git/xserver/hw/xfree86/int10/helper_exec.c	2008-12-03 10:59:10.0 -0500
+++ xserver/hw/xfree86/int10/helper_exec.c	2009-01-13 22:53:09.0 -0500
@@ -18,7 +18,7 @@
 #include 
 #endif
 
-#define PRINT_PORT 0
+#define PRINT_PORT 1
 
 #include 
 
@@ -33,7 +33,7 @@
 #ifdef _X86EMU
 #include "x86emu/x86emui.h"
 #else
-#define DEBUG_IO_TRACE() 0
+#define DEBUG_IO_TRACE() 1
 #endif
 #include 
 


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Arkadiusz Miskiewicz
On Thursday 15 of January 2009, Alex Deucher wrote:
> On Thu, Jan 15, 2009 at 11:20 AM, Albert Vilella  wrote:
> > now the question is:
> >
> > leaving Nvidia and the downstream problems aside, how difficult would it
> > be to convince ATI/AMD to provide such kind of documentation?
> > Anyone insider here that can answer?
>
> We can definitely look into it, the problem is we already have a
> backlog of stuff with higher priority (finishing 3D, newer power
> management bits, investigating IDCT/UVD, etc.) to work through at the
> moment, so I cannot say when we'd get to hybrid graphics. 

The ability to just activate the desired graphics card from the OS when the 
BIOS is set to hybrid mode would be a very good start. I hope it would allow 
one to stop X, switch cards, and start X on the second card.

> Alex

-- 
Arkadiusz Miśkiewicz                    PLD/Linux Team
arekm / maven.pl                        http://ftp.pld-linux.org/

Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Glynn Clements

Stephane Marchesin wrote:

> and if you want to keep your session in between, we lack
> - X.Org infrastructure to hand a session from a graphics driver to
> another (there are a million of possible problems here)

Right; like a million display parameters which a client can query, but
for which there is no mechanism to request notification of changes,
and thus are (implicitly) constant over the lifetime of the client.

I know that the X developers don't consider incompatible changes to be
completely out of the question, but if you're talking about a
particular screen suddenly changing e.g. its glGet* values, I don't
see that happening.

And I don't think that it's realistic for the server to expose a
single set of parameters for two very different graphics chips.

It's more realistic to treat this as a traditional multiple-"Screen"
setup, with the ability to enable and disable screens. Obviously,
windows would have to either be opened on the appropriate screen
(programs which need the 3D GPU on the screen which has one), or the
application/toolkit would need to explicitly provide migration.

-- 
Glynn Clements 


Re: xserver: Branch 'server-1.6-branch' - 2 commits

2009-01-15 Thread Keith Packard
On Wed, 2009-01-14 at 23:08 -0800, Jeremy Huddleston wrote:

> ah ... when builddir != srcdir.  Sorry, I always forget that... =/   
> I'll give that a try...
> 
> Could we just do something like:
> 
> dix-config-post.h:
>   $(CP) $(srcdir)/include/dix-config-post.h $(builddir)/include
> 
> all: dix-config-post.h

No, there shouldn't be any reason to copy header files around, and lots
of reasons not to. Just messing with suitable INCLUDE directives should
work eventually.

-- 
keith.pack...@intel.com



Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Albert Vilella
Thanks Alex for your answer. It's great that you can look into it, and I can
say that for one, I am optimistic about it :-p

On Thu, Jan 15, 2009 at 4:56 PM, Alex Deucher  wrote:

> On Thu, Jan 15, 2009 at 11:20 AM, Albert Vilella 
> wrote:
> > now the question is:
> >
> > leaving Nvidia and the downstream problems aside, how difficult would it
> be
> > to convince ATI/AMD to provide such kind of documentation?
> > Anyone insider here that can answer?
>
> We can definitely look into it, the problem is we already have a
> backlog of stuff with higher priority (finishing 3D, newer power
> management bits, investigating IDCT/UVD, etc.) to work through at the
> moment, so I cannot say when we'd get to hybrid graphics.  The other
> problem is that since many of these hybrid solutions are multi-vendor,
> we may not have the rights to release certain IP.  Even if we could
> release some information, as has been stated previously, the driver
> stack needs significant work to support something like this.
>
> Alex
>

Re: Xrandr loop with gnome-settings-daemon [WAS: Re: Intel GM45: Loop of continuously triggered output detections]

2009-01-15 Thread Alberto Milone
On Wednesday 14 January 2009 15:30:55 Peter Clifton wrote:
> On Wed, 2009-01-14 at 15:05 +0100, Soeren Sandmann wrote:
> > Peter Clifton  writes:
> > > Should gnome-settings-daemon be avoiding retaliating to a notification
> > > by requesting XRRGetScreenSizeRange, or should XRRGetScreenSizeRange
> > > avoid calling a procedure which will emit another notification?
> >
> > I'm pretty sure gnome-settings-daemon is doing what it's supposed to
> > do here. RandR is designed so that clients are supposed to update
> > their information in response to notifications. It's been a while
> > since I looked at it though.
> >
> > I don't think XRRGetScreenSizeRange should generate notifications.
>
> Probably not, but since XRRGetScreenSizeRange turns out to be an
> expensive operation (it causes the Intel driver to re-probe its
> outputs), its also not ideal that it is being called for every single
> change in backlight brightness, and for other non-related Xrandr events.
>
> Best wishes,

I have noticed that if I prevent both "gnome-settings-daemon" and
"gnome-power-manager" from listening to RandR I can't reproduce the loop.

Furthermore, according to the following bug report, it looks like the problem 
can be reproduced only with libxrandr2 1.2.99.2 or higher, while it works well 
with 1.2.3-1:
https://bugs.edge.launchpad.net/ubuntu/+source/libxrandr/+bug/307306

Any ideas on this or on where I could look in libxrandr's code?

Regards,

Alberto Milone

Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Alex Deucher
On Thu, Jan 15, 2009 at 11:20 AM, Albert Vilella  wrote:
> now the question is:
>
> leaving Nvidia and the downstream problems aside, how difficult would it be
> to convince ATI/AMD to provide such kind of documentation?
> Anyone insider here that can answer?

We can definitely look into it, the problem is we already have a
backlog of stuff with higher priority (finishing 3D, newer power
management bits, investigating IDCT/UVD, etc.) to work through at the
moment, so I cannot say when we'd get to hybrid graphics.  The other
problem is that since many of these hybrid solutions are multi-vendor,
we may not have the rights to release certain IP.  Even if we could
release some information, as has been stated previously, the driver
stack needs significant work to support something like this.

Alex


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Matthias Hopf
On Jan 15, 09 16:20:32 +, Albert Vilella wrote:
> leaving Nvidia and the downstream problems aside, how difficult would it be
> to convince ATI/AMD to provide such kind of documentation?
> Anyone insider here that can answer?

In the current (approximate) list of

- 3D documentation (huge)
- General(!) Powermanagement support
- r8xx Documentation (which will be out then)
- Enhanced 3D documentation (even larger)
- Displayport support

this probably comes last. Add a year per line, and you'll get your docs
in about 5 years' time.
Pessimistic from the time it takes point of view, optimistic from the
"we can get the information" point of view.

Matthias

-- 
Matthias Hopf   ____   __
Maxfeldstr. 5 / 90409 Nuernberg   (_   | |  (_   |__  m...@mshopf.de
Phone +49-911-74053-715   __)  |_|  __)  |__  R & D   www.mshopf.de


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Albert Vilella
now the question is:

leaving Nvidia and the downstream problems aside, how difficult would it be
to convince ATI/AMD to provide such kind of documentation?
Anyone insider here that can answer?

On Thu, Jan 15, 2009 at 2:34 PM, Daniel Stone  wrote:

> On Thu, Jan 15, 2009 at 02:13:45PM +, Alan Cox wrote:
> > > Right, which reduces it to a simple power management issue akin to
> > > powering down the 3D core on any modern chipset when you're not doing
> > > any rendering.
> > >
> > > Adding different devices with separate drivers is another matter
> > > altogether.
> >
> > Isn't dual driver support logically equivalent to xrandr mirrored to both
> > with either one or the other currently a 'switched away' vt ?
>
> Yes, which we don't really handle well now.
>

Autoconfiguration of non-PCI devices during Xorg startup

2009-01-15 Thread Michael Casadevall
I've been recently working on resolving issues with Xorg's
autoconfiguration mechanism with respect to non-PCI based graphic cards.
Although Xorg -configure currently can handle these types of devices
(assuming the individual driver probes work correctly that is), there is
no automatic mechanism in xf86AutoConfig.c to handle non-PCI cards on
Linux (there is a little bit of support on Sun by reading the
framebuffer device).

The main issue I'm trying to resolve is that on the ARM architecture,
many boards which do have video out often have it hooked directly to the
processor or otherwise bypassing a carded bus like PCI. In these cases,
autoconfiguration fails since there is no mechanism for these devices.
To further complicate the matter, there also is no standardized
interface for seeing these type of display devices.

What I would like to implement is a lookup mechanism similar to the PCI-based
lookup files, except it would use the information available from
/proc/fb, or /proc/cpuinfo to associate a display device with a specific
video driver. For instance, on X startup, it would probe /proc/fb, read
the ID tag of the framebuffer device, and then load the correct driver
based off that. The drawback of checking /proc/fb however is that it
depends on the framebuffer driver being present in the kernel.
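
To make the idea concrete, here is a minimal sketch of such a lookup (my
illustration; the ID-to-driver table entries are purely hypothetical):

#include <stdio.h>
#include <string.h>

static const struct { const char *fb_id; const char *driver; } fb_map[] = {
    { "OMAP FB",   "omapfb" },   /* illustrative entries only */
    { "pxa2xx-fb", "pxafb"  },
};

static const char *driver_from_proc_fb(void)
{
    char line[128], id[64];
    int minor;
    size_t i;
    FILE *f = fopen("/proc/fb", "r");

    if (!f)
        return NULL;
    /* each /proc/fb line looks like "0 <ID string>" */
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "%d %63[^\n]", &minor, id) != 2)
            continue;
        for (i = 0; i < sizeof(fb_map) / sizeof(fb_map[0]); i++) {
            if (strcmp(id, fb_map[i].fb_id) == 0) {
                fclose(f);
                return fb_map[i].driver;
            }
        }
    }
    fclose(f);
    return NULL;
}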

To avoid this, since on ARM we're dealing with an architecture that is
often space limited, the lookup could fall back upon /proc/cpuinfo, and
read the Hardware and Revision tags from there, and then look up against
 another set of data files. As many ARM boards have a one board == one
video device relationship, this works well for a vast majority of boards
without requiring framebuffer devices compiled into the running kernel.

Since this work will benefit the users of X on ARM in general, I'd like
to get the work I do merged upstream, and have it in a way that will be
acceptable for merging at the get-go. Any comments or criticisms of the
design are greatly welcomed.
Michael


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Daniel Stone
On Thu, Jan 15, 2009 at 02:13:45PM +, Alan Cox wrote:
> > Right, which reduces it to a simple power management issue akin to
> > powering down the 3D core on any modern chipset when you're not doing
> > any rendering.
> > 
> > Adding different devices with separate drivers is another matter
> > altogether.
> 
> Isn't dual driver support logically equivalent to xrandr mirrored to both
> with either one or the other currently a 'switched away' vt ?

Yes, which we don't really handle well now.



Re: Xfbdev on intelfb framebuffer.

2009-01-15 Thread Peter Hanzel
Hello again,

I have found a solution. There was another call to ioctl
FBIOPUT_VSCREENINFO in the same file, on line 670:

Bool
fbdevEnable (ScreenPtr pScreen)
{

/* display it on the LCD */
k = ioctl (priv->fd, FBIOPUT_VSCREENINFO, &priv->var);
if (k < 0)
{
 perror ("FBIOPUT_VSCREENINFO");
 return FALSE;
}

So I commented out this line as well, and voilà: Xfbdev works on intelfb 
with no problems.
It also works with vesafb.


Maybe some patch to xorg?
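
One possible shape for such a patch (a sketch only, untested): fall back to
the current mode when the driver rejects mode setting, instead of failing:

    k = ioctl (priv->fd, FBIOPUT_VSCREENINFO, &priv->var);
    if (k < 0)
    {
        /* driver (e.g. intelfb) can't switch modes; re-read the current
           mode and carry on instead of giving up */
        if (ioctl (priv->fd, FBIOGET_VSCREENINFO, &priv->var) < 0)
        {
            perror ("FBIOGET_VSCREENINFO");
            return FALSE;
        }
    }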


- Original Message - 
From: "Peter Hanzel" 
To: 
Sent: Wednesday, January 14, 2009 2:27 PM
Subject: Xfbdev on intelfb framebuffer.


> Hello.
>
> I have compiled Xfbdev. It works great with the vesa framebuffer compiled
> into the kernel.
> But I have Intel 965GM graphics, so I changed the kernel to use the "intelfb"
> framebuffer. The framebuffer console works fine.
> But when I start X I get:
>
> error: invalid argument
> No screens found
> Exiting.
>
> I have dug into the sources and found these lines:
>
> xorg-server-1.2.0\xorg-server-1.2.0\hw\kdrive\fbdev\fbdev.c
> line 201
>
>k = ioctl (priv->fd, FBIOPUT_VSCREENINFO, &var);
>
>if (k < 0)
>{
> fprintf (stderr, "error: %s\n", strerror (errno));
> return FALSE;
>}
>
> So this is causing problems. The intelfb framebuffer on my laptop doesn't
> support changing video modes, so this is probably the reason why it returns
> an error.
> So next I commented out these lines and recompiled Xfbdev.
> But it doesn't start. It passes init, but the screen stays black and a text
> cursor is shown (but not blinking). Ctrl+Alt+Backspace is not working.
> My only option is to reboot (Ctrl+Alt+Del works).
> But when I reboot into the kernel with the vesa framebuffer, the newly
> compiled Xfbdev works like a charm. So only kernel differences make this X
> hang.
>
> Next I will try it with logging to a file and strace.
>
> Any suggestions?
>
> Thanks.
>
>
> 



Re: [PATCH] GLX: Avoid a crash if we ever end up trying to use glapi_noop_table

2009-01-15 Thread Jon TURNEY
Brian Paul wrote:
> 
> I'm in favor of this patch.  But I'm not sure which xserver branch(es) 
> it should be applied to.  Can someone clue me in?

Applied to git master as commit 
c745db1674c3cb55249c9eb6e74939b74c42409c.

I'm not sure if I've understood your question correctly, but I think you could 
go to http://www.x.org/wiki/Server16Branch if you wished to nominate it for 
that branch (although I have a hard time convincing myself it's worthy)

> Jon TURNEY wrote:
>> I'm not sure if this is patch-worthy or not, but whilst getting GLX to 
>> work again on Cygwin/X I came across this...
>>
>> If the GL dispatch table pointer points to glapi_noop_table, (due to 
>> some kind of terrible failure during GL initialization), running 
>> glxinfo for e.g. will crash the X server, as DoGetString(GL_VERSION) 
>> tries to do atof() on the null pointer returned by the noop dispatch 
>> function.
>>
>> Given that all that noop dispatch table stuff is in there, I guess 
>> it's preferable that it doesn't crash in that case.
>>
>>
>>
>> 
>>
>> From 2e9ddcdaa1890204ec69ba6848cb1c49d5b85ef3 Mon Sep 17 00:00:00 2001
>> Message-Id: 
>> <2e9ddcdaa1890204ec69ba6848cb1c49d5b85ef3.1231288719.git.jon.tur...@dronecode.org.uk>
>>  
>>
>> In-Reply-To: 
>> References: 
>> From: Jon TURNEY 
>> Date: Mon, 5 Jan 2009 13:52:45 +
>> Subject: [PATCH 18/22] GLX: Avoid a crash when we have an 
>> uninitialized GL context
>>
>> If the GL dispatch table pointer points to glapi_noop_table,
>> (due to some kind of GL initialization failure), DoGetString(GL_VERSION)
>> (for example as invoked by glxinfo) will crash as it tries to
>> do atof() on the null pointer returned by the noop dispatch function
>>
>> Signed-off-by: Jon TURNEY 
>> ---
>>  glx/single2.c |3 +++
>>  1 files changed, 3 insertions(+), 0 deletions(-)
>>
>> diff --git a/glx/single2.c b/glx/single2.c
>> index 0ca808c..50a59ed 100644
>> --- a/glx/single2.c
>> +++ b/glx/single2.c
>> @@ -335,6 +335,9 @@ int DoGetString(__GLXclientState *cl, GLbyte *pc, GLboolean need_swap)
>>  string = (const char *) CALL_GetString( GET_DISPATCH(), (name) );
>>  client = cl->client;
>>  
>> +if (string == NULL)
>> +  string = "";
>> +
>>  /*
>>  ** Restrict extensions to those that are supported by both the
>>  ** implementation and the connection.  That is, return the


Re: xorg-server-1.5.1fails to compile with linux-libc-headers-2.6.11.2and gcc 3.4.3

2009-01-15 Thread Jeremy Henty
On Thu, Jan 15, 2009 at 04:22:29PM +0200, Angel Tsankov wrote:
> Angel Tsankov wrote:

> Any ideas why this happens?  Could it be linux-libc-headers or glibc
> being too old to build xorg-server 1.5.1 or is it something else?

Quite possibly.  I had similar problems in the past.  What is your
system?  If you're still on gcc-3.4.3, I'm guessing it's quite old.

In fact, isn't linux-libc-headers itself obsolete?  Doesn't a modern
Linux install sanitised headers of its own?

Regards, 

Jeremy Henty


Re: xorg-server-1.5.1fails to compile with linux-libc-headers-2.6.11.2and gcc 3.4.3

2009-01-15 Thread Angel Tsankov
Angel Tsankov wrote:
> Compiling xorg-server-1.5.1 (from xorg 7.4) with GCC 3.4.3 produces
> the following error message:
>
> In file included from linuxPci.c:271:
> /usr/include/linux/pci.h:454: error: parse error before "pci_power_t"
> linuxPci.c:553: warning: no previous prototype for 'xf86AccResFromOS'
>
Some investigation reveals that the error is due to the macro __bitwise being 
undefined at "/usr/include/linux/pci.h:454".  The reason for this is that 
"linuxPci.c" includes (indirectly) "sys/kd.h", which suppresses the inclusion 
of "linux/types.h", where the macro is defined. Then linuxPci.c includes 
"linux/pci.h", which uses __bitwise to typedef pci_power_t.

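If that's right, one untested workaround would be to force "linux/types.h" in 
first, so __bitwise is already defined by the time "sys/kd.h" suppresses it:

#include <linux/types.h>   /* define __bitwise before anything suppresses it */
#include <sys/kd.h>
#include <linux/pci.h>     /* uses __bitwise for pci_power_t */
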
Any ideas why this happens? Could it be linux-libc-headers or glibc being 
too old to build xorg-server 1.5.1, or is it something else?

Regards,
Angel 





Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Stephane Marchesin
On Thu, Jan 15, 2009 at 15:13, Alan Cox  wrote:
>> Right, which reduces it to a simple power management issue akin to
>> powering down the 3D core on any modern chipset when you're not doing
>> any rendering.
>>
>> Adding different devices with separate drivers is another matter
>> altogether.
>
> Isn't dual driver support logically equivalent to xrandr mirrored to both
> with either one or the other currently a 'switched away' vt ?

Yeah, in a perfect world it is. But:
- cross-card xrandr doesn't exist
- not all drivers support xrandr + EXA which shatter will require to
achieve cross-card xrandr
- that also assumes the drivers power down the chip while switched away

Stephane


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Alan Cox
> Right, which reduces it to a simple power management issue akin to
> powering down the 3D core on any modern chipset when you're not doing
> any rendering.
> 
> Adding different devices with separate drivers is another matter
> altogether.

Isn't dual driver support logically equivalent to xrandr mirrored to both
with either one or the other currently a 'switched away' vt ?


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Daniel Stone
On Thu, Jan 15, 2009 at 01:05:10PM +, Alan Cox wrote:
> On Thu, 15 Jan 2009 10:39:44 +
> Albert Vilella  wrote:
> > > > What is the current support and roadmap for discrete graphics card hot
> > > > switching in Xorg?
> > >
> > > OLPC does automatic switching of display controller for power management.
> > 
> > Interesting. So the OLPC also has a discrete and an integrated graphics
> > card?
> > Are these Intel or what brand?
> 
> It has a dumb frame buffer that can run with the main video card turned
> off, so the image can be updated on the dumb fb and the main video
> powered down as much as possible.

Right, which reduces it to a simple power management issue akin to
powering down the 3D core on any modern chipset when you're not doing
any rendering.

Adding different devices with separate drivers is another matter
altogether.

Cheers,
Daniel



Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Alan Cox
On Thu, 15 Jan 2009 10:39:44 +
Albert Vilella  wrote:

> >
> > > What is the current support and roadmap for discrete graphics card hot
> > > switching in Xorg?
> >
> > OLPC does automatic switching of display controller for power management.
> 
> 
> Interesting. So the OLPC also has a discrete and an integrated graphics
> card?
> Are these Intel or what brand?

It has a dumb frame buffer that can run with the main video card turned
off, so the image can be updated on the dumb fb and the main video
powered down as much as possible.

None of it is Intel, Intel are the folks who compete with OLPC


FOSDEM DevRoom: Update.

2009-01-15 Thread Luc Verhaegen
Hi all,

I've received word back from the fosdem organisers: Our initial schedule
is posted; 7 talks out of a possible 11 (max 13 slots) are currently
taken up. You can see the current schedule at:
http://wiki.x.org/wiki/fosdem2009 and now also at:
http://www.fosdem.org/2009/schedule/rooms/h.1309

As you can see, we are in H.1309 this year, which means that we have boosted
capacity by 50%. It also means that our room has 3 (!) doors, aka airholes,
this year. It is the first room off the "upper level" stands area, where
Fedora/CentOS was located last year.

So we have 4 (max 6) talk slots still available; with a room like this, we'd
better fill them up :)

Thanks,

Luc Verhaegen.
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Stephane Marchesin
On Thu, Jan 15, 2009 at 13:30, Colin Guthrie  wrote:
> 'Twas brillig, and Stephane Marchesin at 15/01/09 10:40 did gyre and gimble:
>> On Thu, Jan 15, 2009 at 11:21, Albert Vilella  wrote:
>>> Hi all,
>>>
>>> What is the current support and roadmap for discrete graphics card hot
>>> switching in Xorg?
>>
>> There is no support, and AFAIK no roadmap either. There are many
>> technical reasons why this is not possible today. In short, I wouldn't
>> suggest getting a dual GPU laptop with the purpose of using it under
>> linux, as one of the GPUs will probably stay unused.
>
> Erm, forgive me if I'm wrong, but I thought this was something that Adam
> Jackson's Shatter work would go part way to resolving?
>
> http://www.ziobudda.net/node/103982
>

Sure, however:
- it's not done yet
- even then, the rest of the points I raised are still not covered
- it relies on EXA, which is not implemented by the nvidia binary
driver. So even if it were done, you could switch between "intel" and
"nv" (or "intel" and "nouveau", but nouveau's 3D is still unfinished).

Stephane
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Colin Guthrie
'Twas brillig, and Stephane Marchesin at 15/01/09 10:40 did gyre and gimble:
> On Thu, Jan 15, 2009 at 11:21, Albert Vilella  wrote:
>> Hi all,
>>
>> What is the current support and roadmap for discrete graphics card hot
>> switching in Xorg?
> 
> There is no support, and AFAIK no roadmap either. There are many
> technical reasons why this is not possible today. In short, I wouldn't
> suggest getting a dual GPU laptop with the purpose of using it under
> linux, as one of the GPUs will probably stay unused.

Erm, forgive me if I'm wrong, but I thought this was something that Adam 
Jackson's Shatter work would go part way to resolving?

http://www.ziobudda.net/node/103982

Col

-- 

Colin Guthrie
gmane(at)colin.guthr.ie
http://colin.guthr.ie/

Day Job:
   Tribalogic Limited [http://www.tribalogic.net/]
Open Source:
   Mandriva Linux Contributor [http://www.mandriva.com/]
   PulseAudio Hacker [http://www.pulseaudio.org/]
   Trac Hacker [http://trac.edgewall.org/]

___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: How to test GLX performance?

2009-01-15 Thread Nikos Chantziaras
Alan James Caruana wrote:
> Hi,
> 
> I am writing an X server for the company I work for, and I have
> implemented the GLX extension. I know that it works because 'glxinfo'
> gives output, 'glxgears' works, and some sample GLX programs I
> downloaded also work, but now I want to test for performance.
> 
> What programs/methods exist to test the performance of GLX?

Try the Phoronix Test Suite.
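
For a quick home-grown number you could also time raw buffer swaps with
something like the sketch below (my own illustration, not a rigorous
benchmark; error handling omitted; build with cc glxbench.c -lX11 -lGL):

#include <GL/glx.h>
#include <X11/Xlib.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* the window must use the GLX visual, so set colormap + border pixel */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                   vi->visual, AllocNone);
    swa.border_pixel = 0;
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 300, 300,
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap | CWBorderPixel, &swa);
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    XMapWindow(dpy, win);
    glXMakeCurrent(dpy, win, ctx);

    struct timeval t0, t1;
    int i, frames = 1000;
    gettimeofday(&t0, NULL);
    for (i = 0; i < frames; i++) {
        glClearColor((float)i / frames, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glXSwapBuffers(dpy, win);
    }
    glFinish();                     /* wait for the GPU to drain */
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d swaps in %.2fs -> %.1f swaps/s\n", frames, secs, frames / secs);
    return 0;
}

Numbers like this mostly measure swap overhead, though; real application
workloads tell you more.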

___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Draft XI 2 protocol specification

2009-01-15 Thread Simon Thum
Peter Hutterer wrote:
> that detail the changes made? If so, yes - definitely an option and I'll try
> to spec something decent out.
Yep, that's what I meant.

> I remember now. We said that we can basically include axis information twice,
> once in its raw state, unclipped and unaccelerated, once in its processed
> form. In GPE, both forms are easily accessible, so stuffing that in may not be
> too hard.
Cool.

> I think 16.16 for screen coordinates is definitely viable.
We're approaching 4k resolutions. 16.16 doesn't feel too comfortable
to me; 24.8 is more like it. I don't know what to do with 16 bits of
subpixel, tbh.
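
To make the trade-off concrete, a tiny sketch (my own illustration, not
anything from the draft):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    double x = 3840.25;                   /* some subpixel coordinate */
    int32_t f16 = (int32_t)(x * 65536.0); /* 16.16: 15 integer bits   */
    int32_t f24 = (int32_t)(x * 256.0);   /* 24.8:  23 integer bits   */

    /* 16.16 tops out at 32767 with 1/65536 px steps; 24.8 reaches
       8388607 with 1/256 px steps. */
    printf("16.16 integer part: %d\n", f16 >> 16);
    printf("24.8  integer part: %d\n", f24 >> 8);
    return 0;
}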
> For actual axis information it's more tricky. Absolute axes with a defined
> range are easy enough as integers, as scaling and thus subpixel information
> has to be achieved on the client side anyway.
Maybe we have different things in mind here. I'd say a class of widgets,
e.g. sliders and panning controls (gimp in LR edge), can make sense of
sub-pixel screen coordinates. If that's what "achieved on the client
side" refers to, then yes. But the server ideally should deliver a
sub-pixel screen translation, independent of what my device looks like.
 Maybe that should be covered in corresponding master dev events.

> Relative axes are more complicated as they lack a defined axes range. One
> option would be the definition of a per-axis scaling factor as part of the
> device capabilities. Data in the valuators is then always INT32, multiplied by
> this scaling factor (for many devices this scaling factor is probably 1
> anyway).
I don't see what this buys. At the end of the day, a client wants to
know what the value reflects, which properties (in the math sense) it
has. Something like "the (sub-)pixel distance in screen coordinates
since the previous event, ignoring clipping. If the axis is not
translated to screen, device values are reported." Or simply device
(driver) values always.

It may make sense to have different specs for master and slave devs, and
a bit indicating master-ness in the (relative) event.

> This gives us the ability to do subpixel precision with an arbitrary per-axis
> subpixel resolution. But then again, the same can be achieved by just using
> floats in the first place.
> 
> Just thinking out aloud here.
:) You know I'm not a float-hater, but wrt devices one needs to ensure
decent precision is available over the whole (potentially large) screen.
That may get difficult to achieve.
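
For instance (my own sketch of the concern, not anything in the draft): a
single-precision float has a 24-bit mantissa, so somewhere along a large
virtual screen small relative deltas simply stop being representable:

#include <stdio.h>

int main(void)
{
    float x = 16777216.0f;   /* 2^24; float spacing here is already 2.0 */
    float moved = x + 0.25f; /* quarter-pixel relative motion */

    /* prints "delta lost": x + 0.25 rounds back to x */
    printf("%s\n", moved == x ? "delta lost" : "delta kept");
    return 0;
}

Real screens are nowhere near 2^24 pixels wide, but subpixel resolution
degrades well before that: around coordinate 65536 the float spacing is
already 1/128 px, coarser than the 1/256 px a 24.8 format would guarantee.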

In general, XRelativeMotionEvent and XDeviceEvent have a large overlap.
If you implement raw/cooked, an extra rel event may be superfluous.

I assumed master devs are not core-only, is that correct? If yes, I'm
all "let master/slave decide on coordinate system issues".

Cheers,

Simon
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Stephane Marchesin
On Thu, Jan 15, 2009 at 12:22, Albert Vilella  wrote:
>>> There is no support, and AFAIK no roadmap either. There are many
>>> technical reasons why this is not possible today. In short, I wouldn't
>>> suggest getting a dual GPU laptop with the purpose of using it under
>>> linux, as one of the GPUs will probably stay unused.
>
> Just to clarify the current situation: in some laptops, like Sony Vaio
> models (SZ-series, Z-series), this feature is "partially" working:
>
> One can do a cold switch at reboot, using the hardware stamina/speed
> switch in the laptop to turn off the discrete graphics card (Nvidia) at
> BIOS time. But the latest models (Z-series) allow for a hot switch,
> right now only in Windows Vista. If one installs Linux on these, both
> the Nvidia and the Intel card will appear in lspci, but xorg will not
> be able to handle both, and the Nvidia hardware will be wasting battery
> while not being used. Some people have managed to revert to the cold
> switch by installing Windows XP on the laptop and then switching the
> discrete graphics card on/off at BIOS time.

This is not what you'd call support. This is just the bios exposing a
single graphics card. As far as X.Org is concerned, there is only one
graphics card at a time.

>
> So the next step is the hot switch. My hunch is that Windows Vista does some
> sort of "gdm restart" equivalent,
> by the looks of this video on computer.tv:
>
> Jump to 4:10 for the switching bit:
>
> http://www.youtube.com/watch?v=Qcvu2Aluy7g
>
> This machine is the successor of the SZ premium series, and has a Dynamic
> Hybrid Graphics system that will enable/disable the nvidia graphics card
> using a software "hot" switch instead of a hardware "cold" switch (SZ
> series).
>
> http://vaio-online.sony.com/prod_info/series1/z/interview_Z/index_05.html
>
> Can I ask someone who is expert enough in xorg to give a list of
> blockers/things to try for this to happen, so that people can play
> with them? For example, people have been investigating BiosBase on the
> Nvidia side of things:
>
> http://avilella.googlepages.com/vaioz (look for BiosBase)
> http://bugs.freedesktop.org/show_bug.cgi?id=2597#c37
>

Basically, we lack:
- documentation on how to switch GPUs at the laptop level (i.e. do
what the bios does at boot when you choose the card in the bios)
- documentation on cold booting the nvidia GPU
- driver support on both sides implementing proper GPU power up/shut down
(we're talking about something big here)

and if you want to keep your session in between, we lack
- X.Org infrastructure to hand a session from a graphics driver to
another (there are a million possible problems here)
- drivers supporting said infrastructure
(we're talking about something real huge here)

IMO all this is not very likely to happen. When you buy a laptop on
which you want to run linux, I really suggest you check hardware
compatibility. This is no different than unsupported wifi chips.

Stephane
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Daniel Stone
On Thu, Jan 15, 2009 at 10:33:09AM +, Alan Cox wrote:
> On Thu, 15 Jan 2009 10:21:53 +
> Albert Vilella  wrote:
> > What is the current support and roadmap for discrete graphics card hot
> > switching in Xorg?
> 
> OLPC does automatic switching of display controller for power management.

That's not even remotely comparable.  (FWIW, every Nokia internet tablet,
including 2004/2005's 770, right up to 2008's N810, has done this.)

Cheers,
Daniel


signature.asc
Description: Digital signature
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

How to test GLX performance?

2009-01-15 Thread Alan James Caruana
Hi,

I am writing an X server for the company I work for, and I have
implemented the GLX extension. I know that it works because 'glxinfo'
gives output, 'glxgears' works, and some sample GLX programs I
downloaded also work, but now I want to test for performance.

What programs/methods exist to test the performance of GLX?

Thanks
Alan J. Caruana
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Albert Vilella
>> There is no support, and AFAIK no roadmap either. There are many
>> technical reasons why this is not possible today. In short, I wouldn't
>> suggest getting a dual GPU laptop with the purpose of using it under
>> linux, as one of the GPUs will probably stay unused.
Just to clarify the current situation: in some laptops, like Sony Vaio
models (SZ-series, Z-series), this feature is "partially" working:

One can do a cold switch at reboot, using the hardware stamina/speed
switch in the laptop to turn off the discrete graphics card (Nvidia) at
BIOS time. But the latest models (Z-series) allow for a hot switch,
right now only in Windows Vista. If one installs Linux on these, both
the Nvidia and the Intel card will appear in lspci, but xorg will not
be able to handle both, and the Nvidia hardware will be wasting battery
while not being used. Some people have managed to revert to the cold
switch by installing Windows XP on the laptop and then switching the
discrete graphics card on/off at BIOS time.

So the next step is the hot switch. My hunch is that Windows Vista does some
sort of "gdm restart" equivalent,
by the looks of this video on computer.tv:

Jump to 4:10 for the switching bit:

http://www.youtube.com/watch?v=Qcvu2Aluy7g

This machine is the successor of the SZ premium series, and has a Dynamic
Hybrid Graphics system that will enable/disable the nvidia graphics card
using a software "hot" switch instead of a hardware "cold" switch (SZ
series).

http://vaio-online.sony.com/prod_info/series1/z/interview_Z/index_05.html
Can I ask someone who is expert enough in xorg to give a list of
blockers/things to try for this to happen, so that people can play with
them? For example, people have been investigating BiosBase on the Nvidia
side of things:

http://avilella.googlepages.com/vaioz (look for BiosBase)
http://bugs.freedesktop.org/show_bug.cgi?id=2597#c37

Thanks,

Albert.
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

[PATCH] xrandr: Simplify transform and scale code

2009-01-15 Thread Éric Piel
Hello,
While reading the code of xrandr, I noticed some little possible
optimizations. Here they are :-)

Eric
--

The init_transform() function sets up a unit matrix, so only the scaling
factors need to be updated. Additionally, the code for the transform
option initialised the matrix twice, which is not needed.

Signed-off-by: Eric Piel 
---
 xrandr.c |2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/xrandr.c b/xrandr.c
index bbdb348..5c369cd 100644
--- a/xrandr.c
+++ b/xrandr.c
@@ -2256,7 +2256,6 @@ main (int argc, char **argv)
init_transform (&output->transform);
output->transform.transform.matrix[0][0] = XDoubleToFixed (sx);
output->transform.transform.matrix[1][1] = XDoubleToFixed (sy);
-   output->transform.transform.matrix[2][2] = XDoubleToFixed (1.0);
if (sx != 1 || sy != 1)
output->transform.filter = "bilinear";
else
@@ -2279,7 +2278,6 @@ main (int argc, char **argv)
   &transform[2][0],&transform[2][1],&transform[2][2])
!= 9)
usage ();
-   init_transform (&output->transform);
for (k = 0; k < 3; k++)
for (l = 0; l < 3; l++) {
output->transform.transform.matrix[k][l] = XDoubleToFixed (transform[k][l]);
-- 
1.6.0.5
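
For readers without the source at hand: init_transform() amounts to
filling in a fixed-point identity matrix, roughly like the sketch below
(my reconstruction, not the verbatim xrandr code; the real function works
on xrandr's own transform struct):

#include <X11/extensions/Xrender.h>  /* XTransform, XDoubleToFixed */

/* Build a fixed-point identity matrix -- which is why storing 1.0 into
   matrix[2][2] again right after calling it is redundant. */
static void init_transform_sketch (XTransform *t)
{
    int row, col;
    for (row = 0; row < 3; row++)
        for (col = 0; col < 3; col++)
            t->matrix[row][col] = XDoubleToFixed (row == col ? 1.0 : 0.0);
}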


___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

[PATCH] xrandr: document transform and scale in the manpage

2009-01-15 Thread Éric Piel
Hello,
I was missing the documentation for the scale and transformation options
of xrandr. So I tried to write it. I played a bit with them, had a look
at the code, did some additional guesswork, and hopefully the description
should not be too far from the truth ;-)

Eric
--

The new --transform and --scale options were added, but not yet
documented. This includes also an example of usage of panning and
scaling at the same time.

Signed-off-by: Eric Piel 
---
 xrandr.man |   46 ++
 1 files changed, 46 insertions(+), 0 deletions(-)

diff --git a/xrandr.man b/xrandr.man
index 75d31c1..c6a9b3a 100644
--- a/xrandr.man
+++ b/xrandr.man
@@ -37,7 +37,11 @@ xrandr \- primitive command line interface to RandR extension
 .br
 .B RandR version 1.3 options
 .br
+.B Per-output options
+.br
[\-\-panning \fIwidth\fPx\fIheight\fP[+\fIx\fP+\fIy\fP[/\fItrack_width\fPx\fItrack_height\fP+\fItrack_x\fP+\fItrack_y\fP[/\fIborder_left\fP/\fIborder_top\fP/\fIborder_right\fP/\fIborder_bottom\fP
+[\-\-scale \fIx\fPx\fIy\fP]
+[\-\-transform \fIa\fP,\fIb\fP,\fIc\fP,\fId\fP,\fIe\fP,\fIf\fP,\fIg\fP,\fIh\fP,\fIi\fP]
 .br
 .B RandR version 1.2 options
 .br
@@ -69,6 +73,7 @@ xrandr \- primitive command line interface to RandR extension
 [\-\-off]
 [\-\-crtc \fIcrtc\fP]
 [\-\-gamma \fIred\fP:\fIgreen\fP:\fIblue\fP]
+
 .br
 .B RandR version 1.0 and version 1.1 options
 .br
@@ -118,6 +123,8 @@ not report it as supported or a higher version is available.
 .PP
 .SH "RandR version 1.3 options"
 .PP
+Options for RandR 1.3 are used as a superset of the options for RandR 1.2.
+.PP
 .B "Per-output options"
 .IP "\-\-panning 
\fIwidth\fPx\fIheight\fP[+\fIx\fP+\fIy\fP[/\fItrack_width\fPx\fItrack_height\fP+\fItrack_x\fP+\fItrack_y\fP[/\fIborder_left\fP/\fIborder_top\fP/\fIborder_right\fP/\fIborder_bottom\fP]]]"
 This option sets the panning parameters.  As soon as panning is
@@ -127,6 +134,39 @@ pointer tracking area (which defaults to the same area). The last four
 parameters specify the border and default to 0. A width or height set to zero
 disables panning on the according axis. You typically have to set the screen
 size with \fI--fb\fP simultaneously.
+.IP "\-\-transform 
\fIa\fP,\fIb\fP,\fIc\fP,\fId\fP,\fIe\fP,\fIf\fP,\fIg\fP,\fIh\fP,\fIi\fP"
+Specifies a transformation matrix to apply on the output. Automatically a 
bilinear filter is selected.
+The mathematical form corresponds to:
+.RS 
+.RS 
+a b c
+.br
+d e f
+.br
+g h i
+.RE
+The transformation matrix multiplied by a coordinate vector of a pixel of the
+output (extended to 3 values) gives the approximate coordinate vector of a
+pixel in the graphic buffer. Typically, \fIa\fP and
+\fIe\fP correspond to the scaling on the X and Y axes, \fIc\fP and \fIf\fP
+correspond to the translation on those axes, and \fIg\fP, \fIh\fP, and \fIi\fP
+are respectively 0, 0 and 1. It also makes it possible to express a rotation
+by an angle T with:
+.RS 
+cos T  -sin T   0
+.br
+sin T   cos T   0
+.br
+ 0   0  1
+.RE
+As a special argument, instead of
+passing a matrix, one can pass the string \fInone\fP, in which case the default
+values are used (a unit matrix without filter).
+.IP "\-\-scale \fIx\fPx\fIy\fP"
+Changes the dimensions of the output picture. Values greater than 1 will lead
+to a compressed screen (screen dimensions bigger than the dimensions of the
+output mode), and values below 1 lead to a zoom in on the output. This option
+is actually a shortcut version of the \fI\-\-transform\fP option.
 .PP
 .SH "RandR version 1.2 options"
 These options are only available for X server supporting RandR version 1.2
@@ -250,6 +290,12 @@ Enables panning on a 1600x768 desktop while displaying 1024x768 mode on an output
 .RS 
 xrandr --fb 1600x768 --output VGA --mode 1024x768 --panning 1600x0
 .RE
+.PP
+Have one small 1280x800 LVDS screen showing a small version of a huge
+3200x2000 desktop, and have a big VGA screen display the surroundings of
+the mouse at normal size.
+.RS
+xrandr --fb 3200x2000 --output LVDS --scale 2.5x2.5 --output VGA --pos 0x0 --panning 3200x2000+0+0/3200x2000+0+0/64/64/64/64
+.RE
 .SH "SEE ALSO"
 Xrandr(3), cvt(1)
 .SH AUTHORS
-- 
1.6.0.5
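
To make the projective mapping described in the new text concrete, here is
a small standalone C sketch (my own illustration, not part of the patch)
of how an output pixel lands in the framebuffer:

#include <stdio.h>

/* Output pixel (x, y) -> buffer coordinate, for the matrix layout
   a b c / d e f / g h i documented above (homogeneous coordinates). */
static void apply_transform(double m[3][3], double x, double y,
                            double *bx, double *by)
{
    double w = m[2][0] * x + m[2][1] * y + m[2][2];
    *bx = (m[0][0] * x + m[0][1] * y + m[0][2]) / w;
    *by = (m[1][0] * x + m[1][1] * y + m[1][2]) / w;
}

int main(void)
{
    /* --scale 2.5x2.5 expressed as a matrix: a = e = 2.5, i = 1 */
    double m[3][3] = { { 2.5, 0, 0 }, { 0, 2.5, 0 }, { 0, 0, 1 } };
    double bx, by;
    apply_transform(m, 100, 100, &bx, &by);
    printf("output pixel (100,100) samples buffer (%g,%g)\n", bx, by);
    return 0;
}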


___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Timo Aaltonen
On Thu, 15 Jan 2009, Stephane Marchesin wrote:

> On Thu, Jan 15, 2009 at 11:21, Albert Vilella  wrote:
>> Hi all,
>>
>> What is the current support and roadmap for discrete graphics card hot
>> switching in Xorg?
>
> There is no support, and AFAIK no roadmap either. There are many
> technical reasons why this is not possible today. In short, I wouldn't
> suggest getting a dual GPU laptop with the purpose of using it under
> linux, as one of the GPUs will probably stay unused.

or fail to start X without setting the BusID, since the server can't 
decide which one to use:

http://bugs.freedesktop.org/show_bug.cgi?id=18321

been there since 1.5, and unfortunately hasn't gathered much activity..
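
Until that's fixed, the usual workaround is to pin the server to one card
in xorg.conf (the bus address below is just an example; take yours from
the lspci output):

Section "Device"
    Identifier "IntegratedGPU"
    Driver     "intel"
    BusID      "PCI:0:2:0"    # example address, check lspci
EndSection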

t

___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Mikhail Gusarov

Twas brillig at 11:40:00 15.01.2009 UTC+01 when marche...@icps.u-strasbg.fr did 
gyre and gimble:

 SM> In short, I wouldn't suggest getting a dual GPU laptop with the
 SM> purpose of using it under linux, as one of the GPUs will probably
 SM> stay unused.

Well, it should be possible to run some number-crunching on unused
discrete GPU :)

-- 


pgph1DXHB2QqS.pgp
Description: PGP signature
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Stephane Marchesin
On Thu, Jan 15, 2009 at 11:21, Albert Vilella  wrote:
> Hi all,
>
> What is the current support and roadmap for discrete graphics card hot
> switching in Xorg?

There is no support, and AFAIK no roadmap either. There are many
technical reasons why this is not possible today. In short, I wouldn't
suggest getting a dual GPU laptop with the purpose of using it under
linux, as one of the GPUs will probably stay unused.

Stephane
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Albert Vilella
>
> > What is the current support and roadmap for discrete graphics card hot
> > switching in Xorg?
>
> OLPC does automatic switching of display controller for power management.


Interesting. So the OLPC also has a discrete and an integrated graphics
card?
Are these Intel or what brand?

> > There are currently ~40 users of Sony Vaio Z series using Linux that
> > would like this feature to be implemented. See:
> >
> > https://launchpad.net/~sony-vaio-z-series
>
> Perhaps they can all extract the documentation from Nvidia 8)


With the bad spell they have had lately, hopefully Nvidia will disappear as
a company soon and let Intel and ATI carry on supporting Linux :-p
No, seriously, I am guessing Nvidia will add this to the list of things to
do for their binary blob, but ATI/AMD should be in a better position to
support or help in supporting this feature in Linux, right?
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

Re: Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Alan Cox
On Thu, 15 Jan 2009 10:21:53 +
Albert Vilella  wrote:

> Hi all,
> 
> What is the current support and roadmap for discrete graphics card hot
> switching in Xorg?

OLPC does automatic switching of display controller for power management.

> There are currently ~40 users of Sony Vaio Z series using Linux that would
> like this feature to be implemented. See:
> 
> https://launchpad.net/~sony-vaio-z-series

Perhaps they can all extract the documentation from Nvidia 8)

___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Current support and roadmap for discrete graphics card hot switching

2009-01-15 Thread Albert Vilella
Hi all,

What is the current support and roadmap for discrete graphics card hot
switching in Xorg?

See:

https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/312756

http://forum.notebookreview.com/showthread.php?t=258304

Hybrids with the ability to turn off the 3d chip:

AMD/ATI calls it PowerXpress and Nvidia HybridPower. It uses 2 graphics
cards, one energy efficient with little 3d power and the other for gaming
(fast, and uses more/a lot of power); the user can choose which chip to
use. This is not a new concept: Sony has built 2 graphics chips into some
of its laptops for years. In the past a reboot was required to switch
between the chips. With the new generation it is possible to change
between chips on the fly; the screen will flicker, but there is no need
to reboot. At least in Windows Vista (XP and Linux are not supported) the
user can switch freely between the chips or set up a profile to do so
automatically (e.g. when on battery use the low power chip, and when
plugged in use the more powerful chip).

The graphics card hybrid not only works with two Nvidia or AMD cards; the
low power Intel graphics solutions (mostly shared memory) can also be
combined with 3d chips from AMD or Nvidia. This solution is ideal for
users who want maximum battery life and to be able to play current games.
The most likely combination is an Intel shared-memory graphics card for
battery life plus some low to mid level 3d chip. This will not give great
3d performance but will enable you to play some games.

Limitations are the drivers. Special drivers are needed depending on which
graphics chips are combined in the hybrid. This will most likely make you
dependent on the notebook manufacturer's driver support. It is uncertain
if 3rd party drivers (such as laptopvideo2go) will be usable.

One of the models is the Sony Vaio Z series. Right now, both cards are
visible under Linux, but there is no way to hot-switch off (if that is a
word...) the Nvidia card. For a summary of users' experimentation with
this laptop and Linux, see the links below.

There are currently ~40 users of Sony Vaio Z series using Linux that would
like this feature to be implemented. See:

https://launchpad.net/~sony-vaio-z-series

Also, see:

http://forum.notebookreview.com/showthread.php?t=325616&page=1
http://forum.notebookreview.com/showthread.php?t=325616&page=2
http://forum.notebookreview.com/showthread.php?t=325616&page=3
http://forum.notebookreview.com/showthread.php?t=325616&page=4
http://forum.notebookreview.com/showthread.php?t=325616&page=5
Thanks,

Albert.
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

Re: No video overlay on Intel X4500HD

2009-01-15 Thread David Amiel
On Thu, 15 January 2009 01:32, Keith Packard wrote:
> On Wed, 2009-01-14 at 19:35 +, Daniel Gultsch wrote:
>> Hi Guys,
>>
>> My major problem is that i dont have the "Intel(R) Video Overlay" but
>> only the "Intel(R) Textured Video" - as reported by xvinfo | grep -i
>> adaptor. This causes tearing and i really need to watch movies :-)
>
> The textured adapter causes tearing because it doesn't synchronize the
> screen update to the vblank. Synchronizing this operation involves
> either:
>  A. queuing a command to stop the graphics engine until the vblank
> interval and then queuing the rendering commands right after
> that.
>  B. waiting for the vblank interval to occur and then quickly
> queueing suitable rendering commands to the graphics engine
>
> Of the two, A. is trivial where it works (it does work on the X4500),
> but it means that all rendering on the screen stops once the command to
> display the video is queued to the card. That seems fairly harsh. I
> believe there is a patch around that will do this though; it might be
> reasonable if the video was filling the screen.


Avoiding tearing in a particular case, even with side effects, would be a
lot better than being stuck with it in all cases.

Could you point us to this patch? Or maybe you could make it available in
the driver through an xorg option?



> So, we'd like to do B., but that requires the ability to stop the
> graphics engine in the middle of some drawing operation and switch it
> over to the 'update the video' command sequence at vblank time. We've
> explored several options here, but haven't gotten anything working. This
> is also tied in with the DRI2 work, which needs exactly the same
> operation.
>
> And that doesn't consider multi-head environments where you have to know
> which monitor you want to sync with so that you can wait for the right
> time.
>
> The overlay can easily synchronize because the overlay isn't connected
> to the graphics pipeline, and the overlay registers are all nicely
> double-buffered. So, you just poke a pointer to the new image into the
> overlay registers and they get swapped automatically at vblank time,
> making the transition entirely tearless.
>
> We're about to go try to make this work for DRI2, and so Xv should come
> along more-or-less for free. We sketched out some ideas on how this
> might work at XDS last fall, but then got distracted getting DRI2
> support into the kernel in time for 2.6.28 -- we dropped the vblank
> stuff as that doesn't have a huge impact on the kernel, or application
> interfaces.
>
> --
> keith.pack...@intel.com
> ___
> xorg mailing list
> xorg@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/xorg

regards,

David


___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

Re: No video overlay on Intel X4500HD

2009-01-15 Thread Michel Dänzer
On Wed, 2009-01-14 at 16:32 -0800, Keith Packard wrote:
> 
> The textured adapter causes tearing because it doesn't synchronize the
> screen update to the vblank. Synchronizing this operation involves
> either:
>  A. queuing a command to stop the graphics engine until the vblank
> interval and then queuing the rendering commands right after
> that. 
>  B. waiting for the vblank interval to occur and then quickly
> queueing suitable rendering commands to the graphics engine

 C. What the radeon driver does: queuing a command to stop the
graphics engine until scanout is outside of the vertical CRTC
area to be rendered to and then queuing the rendering commands
right after that.

> Of the two, A. is trivial where it works (it does work on the X4500),
> but it means that all rendering on the screen stops once the command to
> display the video is queued to the card. That seems fairly harsh.

With C., the percentage of time the pipeline stalls more or less
corresponds to the percentage of the vertical CRTC area covered by the
video, which I suspect is generally more acceptable than video tearing.
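
Roughly, in code (all names below are invented for illustration; the real
radeon driver uses its own command-stream macros):

#include <stdio.h>

/* Hypothetical command-stream helpers standing in for the real ones. */
static void emit_wait_until_outside_vline_range(int top, int bottom)
{
    /* queue: stall the engine while scanout is inside [top, bottom) */
    printf("WAIT_UNTIL scanline outside [%d,%d)\n", top, bottom);
}

static void emit_video_blit(void)
{
    printf("blit video frame\n");   /* the actual textured-video draw */
}

/* Option C: stall only while scanout crosses the video's destination
   rows, instead of holding everything until the next vblank. */
int main(void)
{
    int dst_y = 200, dst_h = 480;
    emit_wait_until_outside_vline_range(dst_y, dst_y + dst_h);
    emit_video_blit();
    return 0;
}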


> And that doesn't consider multi-head environments where you have to know
> which monitor you want to sync with so that you can wait for the right
> time.

That's well covered by CRTC coverage analysis or something like the
XV_CRTC attribute in ambiguous cases.


-- 
Earthling Michel Dänzer   |http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg

RE: [Intel-gfx] [ANNOUNCE] xf86-video-intel 2.6.0

2009-01-15 Thread Jin, Gordon
Zhenyu Wang wrote on Thursday, January 15, 2009 2:22 PM:
> Here's the xf86-video-intel 2.6.0 release. The full changelog against
> 2.5.1 is below. We had the DRI2 and 965 XvMC branches merged, plus a
> bunch of other fixes. We also have basic support for SDVO LVDS from the
> last rc.
> 
> This'll be included in the Intel 2008 Q4 release, and Gordon Jin will
> update on that build soon.

I've put the related component info and known bugs at 
http://intellinuxgraphics.org/2008Q4.html.

I also moved the release info into the Home page.

Gordon
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Fwd: Re: Fwd: Draft XI 2 protocol specification

2009-01-15 Thread Roderick Colenbrander
Hi Peter,

Since we need XI2 in Wine to fix some major input issues (relative mouse
input), I forwarded your spec draft to our mailing list, and one of our
devs has some questions.

Regards,
Roderick Colenbrander

 Original Message 
Date: Wed, 14 Jan 2009 23:54:41 -0700
From: Vitaliy Margolen 
To: Roderick Colenbrander 
CC: wine-de...@winehq.org
Subject: Re: Fwd: Draft XI 2 protocol specification

Roderick Colenbrander wrote:
> Hi Vitaliy,
> 
> Peter Hutterer has submitted a draft specification of XInput2 to the
> xorg mailing list. As you know, it will offer relative mouse movements.
> He is asking for feedback. Since I have no experience with XInput, you
> might want to review it and see if it works out for Wine.

I'd say I like it so far. Should work really nicely for DInput. And I think
it should work for "RawInput" as well - one more API introduced in XP.
However, someone needs to check how well this will work for tablet input.

The next big question - joysticks. If Wine ever decides to use X11 for
joysticks, we'll need much more than what's available now in XI 1.x
(force-feedback anyone?) or spec'ed here. The other direction (to the
device) is IMHO needed as well.

Also not sure where all the extra keyboard add-ons belong. XI or
something else? Things like extra displays, LEDs - all the cool stuff on
some gaming devices.

Vitaliy.

___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg