Re: Disabling monitors with bad EDIDs?!?!
On 24 September 2010 07:07, Kai-Uwe Behrmann k...@gmx.de wrote:
> Manufacturers typically have good colour measurement equipment and appear
> to put some reasonable colorimetric data inside the EDID. The colorimetric
> precision is of course rough, given that the gamma curves are represented
> by just one float value.

Can't we use the xy co-ordinates provided in the colour point data? Am I missing something obvious?

Thanks,
Richard.

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel
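The xy colour points Richard mentions are stored as 10-bit fractions in the EDID base block (bytes 25-34), and the single gamma value in byte 23. A minimal decoding sketch, assuming the EDID 1.3 layout; `decode_chromaticity` is a hypothetical helper name, not part of any existing tool:

```python
def decode_chromaticity(edid: bytes) -> dict:
    """Decode CIE xy colour points from a 128-byte EDID base block.

    Bytes 25-26 hold the low 2 bits of each 10-bit co-ordinate,
    bytes 27-34 the high 8 bits; each value is a fraction of 1024.
    """
    def coord(high: int, low: int) -> float:
        return ((high << 2) | low) / 1024.0

    lo_rg, lo_bw = edid[25], edid[26]
    return {
        "red":   (coord(edid[27], (lo_rg >> 6) & 3),
                  coord(edid[28], (lo_rg >> 4) & 3)),
        "green": (coord(edid[29], (lo_rg >> 2) & 3),
                  coord(edid[30], lo_rg & 3)),
        "blue":  (coord(edid[31], (lo_bw >> 6) & 3),
                  coord(edid[32], (lo_bw >> 4) & 3)),
        "white": (coord(edid[33], (lo_bw >> 2) & 3),
                  coord(edid[34], lo_bw & 3)),
        # byte 23 stores (gamma * 100) - 100, e.g. 120 -> 2.2
        "gamma": (edid[23] + 100) / 100.0,
    }
```

The 1/1024 granularity is why the co-ordinates are usable but rough, matching Kai-Uwe's point about limited precision.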
Re: Disabling monitors with bad EDIDs?!?!
On 22 September 2010 21:28, Kai-Uwe Behrmann k...@gmx.de wrote:
> Some devices need a firmware update. I checked on OS X, which handles
> dynamic EDIDs fine. Nouveau on Fedora 13 and the nvidia driver on
> openSUSE-11.2 fail to update the EDID info. The X server needs to be
> restarted before the changed EDID becomes visible.

The intel driver on Fedora 14 seems to handle it correctly, which would suggest it's a nouveau bug rather than an xserver bug.

Out of interest, how accurate is the colorspace information you normally get from the EDID? I know you do an auto-profile in oyranos, and I wondered whether typical monitors are accurate, or whether the EDIDs just get stuffed with half-correct (semi-random) values by the vendors. Obviously DreamColor monitors are pretty accurate, but what about bargain-basement LCD panels and the no-name LCD screens used in typical notebooks?

Thanks,
Richard.
Re: Disabling monitors with bad EDIDs?!?!
On 19 September 2010 11:56, Dave Airlie airl...@gmail.com wrote:
> I'd be interested in checking if your EDID has somehow been corrupted.

The LP2480zx has a feature where it can update the EDID with the correct colorspace co-ordinates depending on the chosen colorspace. It would appear that if you turn this feature off (so the EDID never changes, even if you change the 3D LUT) the EDID checksum is incorrect. I'm pretty sure this EDID-updating feature is turned on by default, although it's kinda hard to turn back on if you ever turned it off, as the OSD entry for this feature is only available if you have a valid input, which in the case of the new drm, you won't.

> if its an expensive monitor it might be smart, power down the monitor,
> unplug all cables (power + video), leave it, and reconnect everything.

I've tried this. I'll contact HP engineers directly of course, but I think this is likely to affect quite a few people.

Richard.
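The checksum failure described above is easy to check by hand: the last byte of the 128-byte EDID base block is defined so that all 128 bytes sum to zero modulo 256. A small sketch with hypothetical helper names:

```python
def edid_checksum_ok(block: bytes) -> bool:
    """An EDID base block is valid when its 128 bytes sum to 0 mod 256;
    byte 127 is chosen by the manufacturer to make that true."""
    return len(block) == 128 and sum(block) % 256 == 0

def fix_edid_checksum(block: bytearray) -> bytearray:
    """Recompute byte 127 so the block sums to zero again."""
    block[127] = (-sum(block[:127])) % 256
    return block
```

A monitor that rewrites its colour-point bytes without recomputing byte 127 would produce exactly the invalid-checksum symptom seen on the LP2480zx.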
Re: Getting the i2c interface from randr
On 7 July 2010 08:06, Alex Deucher alexdeuc...@gmail.com wrote:
> You'd expose a separate connector property for each ddc/ci control. You
> could either name them for common ones, or just list them based on the
> control number. E.g.

It's not really just a set of properties, it's a proper command interface. See http://en.wikipedia.org/wiki/Display_Data_Channel#DDC.2FCI -- I really don't want to feel the bitter wrath of Matthew Garrett when I add a 30ms poll in the kernel just to keep some of the properties up to date and the i2c link up :-)

It's also horribly vendor-specific. For instance, I want to update the PMB data block in a DreamColor HP monitor. To do this I have to make raw calls like http://www.boichat.ch/nicolas/ddcci/specs.html -- which the possible kernel interface would have to support. There's no way I could break this vendor-specific block of memory into meaningful properties and ever hope to have any sort of kernel--device coherence.

> Property(XA_STRING) I2C: i2c_device_name

Right, this makes sense.

> Rather than having a userspace app do the i2c transaction, you'd just
> expose the controls listed in the ddc/ci interface provided by the monitor
> and the kernel would do the actual i2c transaction.

That would mean putting a lot of different, often horrible, horrible quirks in the kernel.

> That app could then get/set the control value via xrandr or sysfs.

I'm not sure that's complete enough for the second use-case. For brightness it kinda makes sense, and then we can even wire in backlight support for external panels so that xrandr-capable programs magically gain support. But for the more advanced usages, we really need the raw i2c device.

Richard.
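To make the "proper command interface" point concrete: even the simplest DDC/CI request, Get VCP Feature, is a framed message with its own checksum, written to the monitor's i2c slave at 0x37. A sketch of the message construction per the DDC/CI specification (building the bytes only; actually writing them requires the i2c device node, which is exactly the mapping problem under discussion):

```python
DDC_ADDR = 0x37      # 7-bit i2c address of the DDC/CI slave
HOST_SOURCE = 0x51   # source address byte used by the host

def ddcci_get_vcp(vcp_code: int) -> bytes:
    """Build a DDC/CI 'Get VCP Feature' request for one control code.

    The checksum is the XOR of every byte on the wire, including the
    destination address byte (0x37 << 1 = 0x6e) that the i2c layer
    prepends before the payload.
    """
    payload = bytes([HOST_SOURCE, 0x80 | 2, 0x01, vcp_code])
    chk = DDC_ADDR << 1
    for b in payload:
        chk ^= b
    return payload + bytes([chk])
```

For VCP code 0x10 (luminance) this yields the well-known brightness-query sequence `51 82 01 10 ac`; vendor-specific blocks like the DreamColor PMB need raw transfers that no per-control property model can express.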
Re: Mesa in git master fails to build
2010/7/5 Michel Dänzer mic...@daenzer.net:
> Looks like your src/gallium/state_trackers/egl/depend (and possibly other
> depend files) has a stale reference to it; remove it/them and try again.

Works perfectly, thanks.

Richard.
Getting the i2c interface from randr
I'm trying to do two things:

* Control the brightness of an external panel using DDC/CI
* Update the pre- and post-LUTs on a 30-bit external panel, using i2c

I fully appreciate that different monitors have different quirks in the i2c command interface. Some of these can be auto-detected and worked around; some need quirks as the hardware is horribly broken (and I'm planning to pretty much ignore that hardware for now).

I wanted to avoid putting yet more i2c code in X or in drm, as it's quite a bit of privileged code doing fairly scary stuff with the hardware. I just want to write a tiny shim library to be able to send a few limited commands to the i2c interface for a given output. And herein lies my problem: we don't know which i2c ports correspond to which X RandR output. I assume the kernel knows, but that information isn't shared with X. From the documentation I've been able to scrape together, OS X has APIs where the display server just points a program at an i2c interface name, and userspace does the i2c command stream as and when required.

Of course, if you guys think this better belongs in the kernel with the other i2c bits, that would probably also be sane (but quite a bit of work to deal with the quirks). Comments?

Thanks,
Richard.
Mesa in git master fails to build
gmake[4]: Entering directory `/home/hughsie/Code/mesa/src/gallium/state_trackers/egl'
gmake[4]: *** No rule to make target `common/native_probe.h', needed by `common/egl_g3d_api.o'.  Stop.

Somebody forgot to git add a file, perhaps?

Richard.
Per-output DPMS?
I get quite a few bug reports from gnome-power-manager users who want to turn off the internal laptop panel when docked and turn the external panel on. On idle, the external panel should then blank, and just the external panel should come back to life when the mouse is moved.

Ideally I could set a per-output DPMS state. I guess the same effect could be achieved with XRandR by turning the output off, although this takes a few seconds to come back on, and upsets the screen geometry.

What's a sane thing to do in this case? Do I really want to switch off the output?

Thanks,
Richard.
Getting the XRROutputInfo for a window xid
gnome-color-manager currently manages the per-session display correction. It sets up the gamma ramp and sets an atom called _ICC_PROFILE on each _output_, as you should have different vcgt profiles for each physical monitor. We also currently set the primary output's profile as an atom on the _screen_ for compatibility.

Now, at the moment applications like GIMP check the per-screen _ICC_PROFILE and use that as the output profile. This doesn't work very well when there is more than one output, as GIMP could be using the wrong ICC profile for the output it's currently being displayed on. It can get more pathological still, where one window spans more than one output, although this isn't the common case.

To experiment with this idea, I would like to be able to get the XRandR output from a window xid. I've been told by a few people that the only way to do this is to construct the display space (where the monitors are positioned), work out the window co-ordinates, and try to fit one to the other. If you have any better ideas, or any code to do this, I would be very grateful.

Thanks,
Richard.
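The geometry-fitting approach described above can be sketched as a pure rectangle-overlap computation: pick the output whose CRTC rectangle covers most of the window. This is a hypothetical helper; in practice the window rectangle would come from XGetGeometry/XTranslateCoordinates and the output rectangles from XRRGetCrtcInfo.

```python
def best_output_for_window(window, outputs):
    """Pick the RandR output whose CRTC rectangle overlaps the window most.

    `window` and each value in `outputs` are (x, y, width, height) in
    screen co-ordinates; returns the key of the winning output, or None
    if the window overlaps no output (e.g. it is entirely off-screen).
    """
    wx, wy, ww, wh = window
    best, best_area = None, 0
    for name, (ox, oy, ow, oh) in outputs.items():
        # width/height of the intersection of the two rectangles
        ix = max(0, min(wx + ww, ox + ow) - max(wx, ox))
        iy = max(0, min(wy + wh, oy + oh) - max(wy, oy))
        if ix * iy > best_area:
            best, best_area = name, ix * iy
    return best
```

A compositing colour manager could use this to decide which output's _ICC_PROFILE applies to a given window; the pathological spanning case simply resolves to whichever output holds the larger share.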