rv516 x1300 1002:7183 no longer syncs 2 simultaneous outputs on modesetting DIX
rv516 x1300 1002:7183 is a discrete desktop PCIe card. Before today's apt-get full-upgrade on Bookworm, it was last upgraded in early May, then running a 5.18 kernel and Mesa 21.3.8, and both 1920x1200 on DVI and 1680x1050 on VGA worked as expected. Now with a 5.19 or 5.18 kernel and Mesa 22.2.0, both displays fail to sync if the modesetting DIX is used.

<https://paste.debian.net/1255583/> is Xorg.0.log. I recognize nothing in it that would explain the sync failure. All is good using the radeon DDX. openSUSE Tumbleweed's 5.19.10 & 5.19.8 & 22.2.0 echo the Bookworm behavior.
--
Evolution as taught in public schools is, like religion, based on faith, not based on science.

Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata
Re: 2 yo hp monitor, old dell, no video
gene heskett composed on 2022-08-16 19:08 (UTC-0400):

> I bought two of these monitors around 20 months back when they were on
> sale at Staples. This one works great on a newer ASUS mb & debian
> bullseye. 1920x1080, no xorg.conf.
> The 6 or 8 yo AOC 1366x768 I've been using on an older dell OptiPlex
> 7010DT to drive my 3d printers went black today so I unpacked the other
> monitor.
> This dell has a db15 output with an hdmi plug on the other end of the
> cable. Its menu's work, but it can't find a signal.
> Does this dells video need a special driver to hit it with a 1920x1080
> signal.
> Xorg.0.log from the Dell attached.

No special driver is available or required AFAIK. I've never known an analog 15-pin VGA PC output to be able to communicate fully with a display via the display's HDMI input. I suppose there may be some sort of non-passive way to do this, but the VGA-to-HDMI cable you have probably isn't any such thing. Digital output to analog input /is/ typically possible with an inexpensive passive adapter or adapter cable.

Optiplexes usually also have a DisplayPort output. It can be used with a passive adapter to connect to a display's HDMI or DVI port.
--
Felix Miata
Re: Xserver Error loading intel_drv.so undefined symbol:I810InitMC
Ken Moffat composed on 2022-07-29 02:11 (UTC+0100):

> Given what Felix said about i810, I find it odd that anything in the
> recent video stack references it. Oh, I see it is in src/legacy/i810/.
> Using the following autogen options:
>   ./autogen.sh --prefix=/usr \
>     --sysconfdir=/etc \
>     --localstatedir=/var \
>     --enable-kms-only \
>     --enable-uxa \
>     --mandir=/usr/share/man
> reports at the end:
>   xf86-video-intel 2.99.917 will be compiled with:
>     Xorg Video ABI version: 25.2 (xorg-server-21.1.3)
>     pixman version: pixman-1-0.40.0
>     Acceleration backends: none *sna uxa
>     Additional debugging support? none
>     Support for Kernel Mode Setting? yes
>     Support for legacy User Mode Setting (for i810)? no
>     ^^^
>     Support for Direct Rendering Infrastructure: *DRI2 DRI3 Present
>     Support for Xv motion compensation (XvMC and libXvMC): yes
>     Support for display hotplug notifications (udev): yes
>     Build additional tools and utilities? xf86-video-intel-backlight-helper
>     intel-virtual-output
> Running make appears to NOT build anything in i810. ...

According to what I read in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1016151, i810 has been dead since kernel 5.16rc1.
--
Felix Miata
Re: Xserver Error loading intel_drv.so undefined symbol:I810InitMC
Ahsan composed on 2022-07-27 20:47 (UTC+0200):

> /usr/lib/xorg/modules/drivers/intel_drv.so: undefined symbol: I810InitMC

The i810 series supports Intel's original x86 iGPU from 1999. If you need to use an 810 or 815 iGPU, you have the world's sympathy. GPUs didn't come much worse in their day. Unless you're an aficionado of antique PCs, you don't want to need the i810 kernel driver to build. Most newer Intel iGPUs need an i915 or newer kernel device driver.

As Ken already wrote, most iGPUs since around 2008 can run X well on the newer technology "modesetting" device independent X (DIX) driver packaged with the X server. As a consequence, the xf86-video-intel device dependent X (DDX) driver hasn't had an official release in nearly a decade.

All that said, Debian 11 is providing an Intel DDX that works with my relic i810 iGPU:

# inxi -SCmyz
System:   Kernel: 5.10.0-12-686 i686 bits: 32 Desktop: Trinity R14.1.0
          Distro: Debian GNU/Linux 11 (bullseye)
Memory:   RAM: total: 492.5 MiB used: 177.9 MiB (36.1%)
          Array-1: capacity: 512 MiB slots: 2 EC: None
          Device-1: DIMM_A size: 256 MiB speed: 100 MT/s
          Device-2: DIMM_B size: 256 MiB speed: 100 MT/s
CPU:      Info: single core model: Pentium III (Coppermine) bits: 32
          cache: 256 KiB note: check
          Speed (MHz): 997 min/max: N/A core: 1: 997
# inxi -Gayz
Graphics: Device-1: Intel 82810E DC-133 Graphics vendor: Dell OptiPlex GX110
          driver: N/A bus-ID: 00:01.0 chip-ID: 8086:7125 class-ID: 0300
          Display: x11 server: X.Org v: 1.20.11 driver: X: loaded: intel
          gpu: N/A display-ID: :0 screens: 1
          Screen-1: 0 s-res: 1680x1050 s-dpi: 90 s-size: 474x303mm (18.7x11.9")
          s-diag: 563mm (22.1")
          Monitor-1: default res: 1680x1050 hz: 60 size: N/A
          OpenGL: renderer: llvmpipe (LLVM 11.0.1 128 bits) v: 4.5
          Mesa 20.3.5 compat-v: 3.1 direct render: Yes
#
--
Felix Miata
Re: nouveau going off the deep end...
Robert Heller composed on 2022-06-19 18:40 (UTC-0400):

> I don't use any other 3D programs (maybe FreeCAD). KiCaD is not a 3D
> program, it is 2D. This is an integrated video chipset on the
> motherboard -- I don't have a separate video card.
> How do I turn off the compositor? Do I need it?

Some DEs require it be enabled globally, but allow it to be disabled locally via their own settings. How Mate works I don't know. To disable globally, use a .conf file in /etc/X11/xorg.conf.d/ containing the following:

	Section "Extensions"
		Option "Composite" "Disable"
	EndSection
--
Felix Miata
Re: nouveau going off the deep end...
Robert Heller composed on 2022-06-20 08:38 (UTC-0400):

> At Mon, 20 Jun 2022 00:10:24 -0400 Felix Miata wrote:
>> You're using the nouveau kernel driver, but are you using the nouveau
>> DDX display driver, or the modesetting DIX display driver? Does
>> switching to the other make any difference?
> I am not sure -- I don't have a X config file - the X server is using
> its defaults.
> sauron% inxi -xxx -GS
> System:   Host: sauron Kernel: 4.15.0-187-generic x86_64 bits: 64 gcc: 7.5.0
>           Desktop: MATE 1.20.1 (Gtk 2.24.32) info: mate-panel dm: N/A
>           Distro: Ubuntu 18.04.6 LTS
> Graphics: Card: NVIDIA C77 [GeForce 8200] bus-ID: 02:00.0 chip-ID: 10de:0849
>           Display Server: X.Org 1.19.6 driver: nouveau
>           Resolution: 1366x768@59.79hz
>           OpenGL: renderer: NVAA version: 3.3 Mesa 20.0.8 Direct Render: Yes

This output suggests only that Xorg is using the nouveau DDX. To switch to the modesetting DIX, which is the actual upstream default, and not a discrete package, removing the nouveau DDX is the easy way:

	sudo apt purge xserver-xorg-video-nouveau

Alternatively, you can create a config file in /etc/X11/xorg.con* to specify use of modesetting instead of nouveau.

>> # inxi -GSaz --vs
>> inxi 3.3.19 (2022-06-17)

Looks like you are using the broken ancient inxi that came with 18.04, which is incapable of reporting the loaded kernel graphics driver and a lot of other things a recent version can. Before trying to use it again, you should purge it and install the upstream version, if you wish good information from it:
https://smxi.org/docs/inxi-installation.htm#inxi-manual-install
--
Felix Miata
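[A minimal sketch of such a config file, saved as e.g. /etc/X11/xorg.conf.d/20-modesetting.conf — the filename and Identifier here are arbitrary choices, not anything Xorg requires. With a matching Device section like this in place, Xorg should load modesetting for the GPU even if the nouveau DDX package stays installed.]

```
Section "Device"
	# Identifier is free-form; it only needs to be unique
	Identifier "GPU0"
	# Force the built-in modesetting DIX instead of a vendor DDX
	Driver "modesetting"
EndSection
```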
Re: nouveau going off the deep end...
Robert Heller composed on 2022-06-19 16:31 (UTC-0400):

> I am running Ubuntu 18.04 on an AMD Phenom(tm) II X4 945 Processor, 8Gig
> of RAM, with a NVIDIA Corporation C77 [GeForce 8200] (rev a2) video
> chipset.
> There is some sort of bug in the version of KiCaD I have
> (4.0.7+dfsg1-1ubuntu2) with its pcbnew program that puts my machine in a
> state where I have to use the "magic" SysRq key to forceably reboot it
> (I can ssh in from another computer, but /sbin/reboot does not work).
> I've included the last of the kernel log. It looks like something is
> broken in nouveau, which I am guessing has something to do with the
> video somehow. (And no, I am not going to download and install NVIDIA's
> video driver.)
> I don't know if this is a kernel problem (I current have kernel
> 4.15.0-187-generic), or something in X Server.

You're using the nouveau kernel driver, but are you using the nouveau DDX display driver, or the modesetting DIX display driver? Does switching to the other make any difference?
I have a little bit newer NV84, also with the Tesla architecture used by the GeForce 8200, working normally on 18.04, using the modesetting DIX:

# inxi -GSaz --vs
inxi 3.3.19 (2022-06-17)
System:   Kernel: 4.15.0-187-generic arch: x86_64 bits: 64 compiler: gcc v: 7.5.0
          parameters: ro root=LABEL= net.ifnames=0 ipv6.disable=1 noresume
          mitigations=auto consoleblank=0 plymouth.enable=0
          Desktop: Trinity v: R14.0.13 tk: Qt v: 3.5.0 info: kicker wm: Twin
          v: 3.0 vt: 7 dm: TDM Distro: Ubuntu 18.04.6 LTS (Bionic Beaver)
Graphics: Device-1: NVIDIA G84 [GeForce 8600 GT] vendor: XFX Pine driver: nouveau
          v: kernel alternate: nvidiafb non-free: 340.xx status: legacy
          (EOL, try --gpu) arch: Tesla process: 40-80nm built: 2006-13
          pcie: gen: 1 speed: 2.5 GT/s lanes: 16 ports: active: DVI-I-1,DVI-I-2
          empty: none bus-ID: 01:00.0 chip-ID: 10de:0402 class-ID: 0300
          Display: x11 server: X.Org v: 1.19.6 driver: X: loaded: modesetting
          alternate: fbdev,nouveau,vesa gpu: nouveau display-ID: :0 screens: 1
          Screen-1: 0 s-res: 3840x1200 s-dpi: 120 s-size: 812x254mm (31.97x10.00")
          s-diag: 851mm (33.5")
          Monitor-1: DVI-I-1 pos: primary,left model: NEC EA243WM serial:
          built: 2011 res: 1920x1200 hz: 60 dpi: 94 gamma: 1.2
          size: 519x324mm (20.43x12.76") diag: 612mm (24.1") ratio: 16:10
          modes: max: 1920x1200 min: 640x480
          Monitor-2: DVI-I-2 pos: right model: Samsung built: 2009
          res: 1920x1080 hz: 60 dpi: 305 gamma: 1.2 size: 160x90mm (6.3x3.54")
          diag: 184mm (7.2") ratio: 16:9 modes: max: 1920x1080 min: 720x400
          OpenGL: renderer: NV84 v: 3.3 Mesa 20.0.8 direct render: Yes
--
Felix Miata
Re: Error (EE) no screens found(EE)
Joachim Torbicki composed on 2022-05-18 08:25 (UTC):

> after update with my opensuse tumbleweed i have an error when starting
> kde. It fails with a X11 error "no screens found(EE)" - see Xorg.1.log
> Information on pattern x11:
> Repository : Main Repository (OSS)
> Name       : x11
> Version    : 20200505-32.2
> Arch       : x86_64
> Vendor     : openSUSE
> When trying Xorg -configure i get the error on the screenshot.

Xorg -configure is a waste of time that rarely works for graphics issues. openSUSE, like most current distros, should work just fine automatically, with no /etc/X11/xorg.conf file and no configuration files in /etc/X11/xorg.conf.d/ affecting graphics. Make sure xf86-video-amdgpu, kernel-firmware-amdgpu and libdrm_amdgpu1 are all installed, and try again with no xorg.conf file.

If it doesn't work, watch for the next TW release, then upgrade right away. I saw mention of an X bug with AMD, but don't know any details except that the bug was found right at release and the fix is in the pipe for whichever release follows 20220516.

I may have the same problem, and have been trying to find the package at fault. I think it may be kernel-firmware-amdgpu. If this is it, you should be able to simply use a previous kernel whose initrd was not rebuilt with the current kernel-firmware-amdgpu version.
--
Felix Miata
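[The package check and no-xorg.conf retest suggested above can be sketched as the following openSUSE commands; the backup filename is an arbitrary choice. zypper skips anything already installed, so this is safe to run as a verification step.]

```shell
# Confirm the AMD graphics stack pieces named above are installed
sudo zypper install xf86-video-amdgpu kernel-firmware-amdgpu libdrm_amdgpu1

# Move any xorg.conf aside so Xorg autoconfigures, then restart X and retest
[ -f /etc/X11/xorg.conf ] && sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
```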
Re: Overscan issue with HDMI-connected screen
Àlex Magaz composed on 2021-12-31 12:46 (UTC+0100):

> xrandr continues without finding the underscan/*border properties.

I suggest bringing this to the attention of the intel driver developers:
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
and consider reporting a bug:
https://01.org/linuxgraphics/documentation/how-report-bugs
https://gitlab.freedesktop.org/drm/intel/-/issues/new

Include output from one of:

	inxi -Ga
	inxi -Gxx
	lspci -nnk | grep -A1 VGA
--
Felix Miata
Re: Overscan issue with HDMI-connected screen
Àlex Magaz composed on 2021-12-30 22:09 (UTC+0100):

> Felix Miata wrote:
>> Àlex Magaz composed on 2021-12-30 15:57 (UTC+0100):
>>> I'm trying to fix an overscan issue (the image overflows by all 4
>>> sides) with my screen when connected through HDMI. It works fine over
>>> DVI.
>> I fixed my Proscan TV's overscan with:
>> xrandr --output HDMI-1 --set underscan on --set "underscan vborder" 20
>>   --set "underscan hborder" 35
> Thanks, I already tried it, but those properties aren't available on the
> Intel driver. That's why I'm using the '--transform' option.

This raises the question of why you're using the intel DDX driver at all. All my Intel IGPs from G41 up (2008?) (G41, Q43, Q45, Haswell, Kaby Lake HD 630, Rocket Lake UHD 730) are running on the modesetting DIX driver. Q43 was where I needed to set underscan with the modesetting DIX on the Proscan TV.
--
Felix Miata
Re: Overscan issue with HDMI-connected screen
Àlex Magaz composed on 2021-12-30 15:57 (UTC+0100):

> I'm trying to fix an overscan issue (the image overflows by all 4 sides)
> with my screen when connected through HDMI. It works fine over DVI.

I fixed my Proscan TV's overscan with:

	xrandr --output HDMI-1 --set underscan on --set "underscan vborder" 20 --set "underscan hborder" 35
--
Felix Miata
Re: Xorg framebuffer and brightness of screen
Riza Dindir composed on 2021-11-12 00:31 (UTC-0500):

> Hello Felix,
> Here is the Xorg log url: https://pastebin.com/hGcDzSG8

You hadn't mentioned you're using BSD. I have no experience with any BSD except as appears on MacOS, so don't know what to expect. On Linux we have KMS, which xrandr uses to learn the output name needed to direct a command that affects output characteristics.
--
Felix Miata
Re: Xorg framebuffer and brightness of screen
Riza Dindir composed on 2021-11-11 10:16 (UTC+0300):

>> On Wed, Nov 10, 2021 at 10:15:18AM +0300, Riza Dindir wrote:
>>> I woukd like to know if I can adjust the screen brightness when using
>>> the framebuffer device on xorg?
> I am using gop, and a framebuffer. When i try to use the xrandr, it says
> "gamma is set to 0", or "failed to get size of gamma for output default".
> I can set the gamma uaing xgamma and in the xorg.conf file.

If you upload /var/log/Xorg.0.log to a pastebin and share its URL, someone may spot a clue to your obstacle.
--
Felix Miata
IRC - is there a support channel for amdgpu display driver?
Is there an IRC channel somewhere for amdgpu display driver issues? I couldn't find one on libera or oftc, and freenode won't let me join.
--
Felix Miata
Re: [discuss] Dedicated- and integrated graphics cards do not work simultaneously
Felix Miata composed on 2021-07-18 16:25 (UTC-0400):

> Ole Reier Ulland composed on 2021-07-18 15:05 (UTC+0200):
>> (EE) AIGLX error: Calling driver entry point failed
>> It is very important to disable Kscreen2 as a startup service, because
>> as enabled it deletes the Kscreen configurations, in
>> /home/[user]/.local/share/kscreen, that did not work out right. I this
>> case it always removed the configurations for the dGPU.
>> I believe I have come to the end of the line here. I conclude that it
>> is impossible to get Xorg/Kscreen to function with dual graphic cards.
>> xorg.conf, kscreen configurations, and Xorg.0.log should be available
>> here, https://pastebin.com/E5CbXQET
> This from that log is a key failure:
> (WW) intel(0): Unknown chipset
> Apparently it's not just the kernel that needs to be newer than Mageia
> 8's 5.10. If there's a backported X server, it must be needed also,
> and/or newer firmware, for your i7-11700K 8086:4c8a Rocket Lake.

I was in too big a rush to send my last post. Rocket Lake support is obviously missing from Mageia 8's version of xf86-video-intel (packaged as x11-driver-video-intel in Mageia). Intel's driver development has been focusing on the modesetting DIX, while its named driver has been in maintenance mode for 7+ years. So, for the IGP to function as well as possible, either remove x11-driver-video-intel, or specify 'Driver "modesetting"' in /etc/X11/xorg.conf.d/ in some 'Section "Device"' file dedicated to the IGP, e.g. 15-intel.conf.
--
Felix Miata

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s
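[A sketch of the suggested 15-intel.conf. The BusID value below is a placeholder, not the actual address of this machine's IGP — take the real value from lspci. On a dual-GPU system a BusID pins the section to one device; on a single-GPU system it can be omitted.]

```
Section "Device"
	# Dedicated to the Intel IGP; Identifier is free-form
	Identifier "Intel IGP"
	# Use the server's built-in modesetting DIX, not xf86-video-intel
	Driver "modesetting"
	# Placeholder address - replace with the IGP's bus ID from lspci
	BusID "PCI:0:2:0"
EndSection
```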
Re: [discuss] Dedicated- and integrated graphics cards do not work simultaneously
Ole Reier Ulland composed on 2021-07-18 15:05 (UTC+0200):

> (EE) AIGLX error: Calling driver entry point failed
> It is very important to disable Kscreen2 as a startup service, because
> as enabled it deletes the Kscreen configurations, in
> /home/[user]/.local/share/kscreen, that did not work out right. I this
> case it always removed the configurations for the dGPU.
> I believe I have come to the end of the line here. I conclude that it
> is impossible to get Xorg/Kscreen to function with dual graphic cards.
> xorg.conf, kscreen configurations, and Xorg.0.log should be available
> here, https://pastebin.com/E5CbXQET

This from that log is a key failure:

	(WW) intel(0): Unknown chipset

Apparently it's not just the kernel that needs to be newer than Mageia 8's 5.10. If there's a backported X server, it must be needed also, and/or newer firmware, for your i7-11700K 8086:4c8a Rocket Lake.
--
Felix Miata
Re: Freenode fallout
IMO, this mass employee move is all politics, thus no reason for anyone to move anything anywhere. It's unfortunate for FOSS no matter what anyone does, so I'm for maintaining the status quo. AFAICT, nothing about freenode has stopped working since it all hit the fan.
--
Felix Miata

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel
Re: Lost sliding windows w/ DVI Syncmaster monitor
Paul Rogers composed on 2020-04-02 18:18 (UTC-0700):

> ...vga=791 video=1024x768 at 60 3 nouveau.modeset=0...
> the vga= implies framebuffers

It only affects the vttys, and then only if KMS is disabled. IOW, no expected impact on X.

> video= for VESA driver

With NVidia and AMD GPUs, this applies only to the vttys, and only when KMS is enabled. It does pass through to X with an Intel GPU using the Intel DDX. IOW, no expected impact on X with your GK208. I did boot once or twice along the way with neither vga= nor video=, at least in part to ensure the lack of impact on X whilst using VESA, as I have no material previous experience using VESA purposely.

> I don't use framebuffers, straight VESA all the way. I did try booting
> with video=1024x768. No joy.

Were you expecting it to affect X in some manner? Consequent to my comment, no joy WRT what? Using neither, the vttys will either be 80x25 if KMS is disabled, making text too big for my comfort, or the display's native mode if KMS is enabled, making text too small for my comfort. I generally use a vtty mode that makes text ~double the height (~141% of the size) that it would be using defaults.

> nouveau.modeset=0 is equivalent to nomodeset? If you're using the VESA
> driver that does nothing, right? Tried that, no joy.

Right, equivalent to nomodeset except limited to NVidia GPUs, so not affecting X operation using VESA. Again, what effect were you expecting, consequent to my comment, that produced no joy?

> Have you got fluxbox on those distros? Get panning with those?

On none do I have fluxbox. On most I use IceWM when I want simple for testing. In forming my previous response I did find, on a KDE/Plasma installation, that it was overriding the configuration, and I didn't want to dig into its settings to try to determine how to override its settings override. So on it I found IceWM behaved as expected (working panning), but didn't include that activity in the response.
To confirm, I repeated Ubuntu 16.04 / kernel 4.4.x / XServer 1.18.4, but with IceWM. As with the others and VESA, xrandr doesn't tell the whole story when it's in use. Again it reports 1024x768 is the resolution, when what appears on the screen is 800x600 mode panning out in all 4 directions to 1024x768. IOW, success!

http://fm.no-ip.com/Tmp/Linux/Xorg/paulR/xorg.0.log-g5eas-u1604-icewm-paulR-panningOK.txt

# inxi -SGxxza
System:   Kernel: 4.4.0-174-generic x86_64 bits: 64 compiler: gcc v: 5.4.0
          parameters: ro root=LABEL=foobarbaz noresume plymouth.enable=0
          mitigations=auto consoleblank=0 vga=791 3 nomodeset
          Desktop: IceWM 1.3.8+githubmod+20150914+fa3fdef dm: startx
          Distro: Ubuntu 16.04.6 LTS (Xenial Xerus)
Graphics: Device-1: XGI Z7/Z9 vendor: Gigabyte driver: xgifb v: kernel
          bus ID: 0a:03.0 chip ID: 18ca:0020
          Device-2: NVIDIA G98 [GeForce 8400 GS Rev. 2] driver: N/A
          bus ID: 0b:00.0 chip ID: 10de:06e4
          Display: server: X.Org 1.18.4 driver: vesa resolution: 1024x768~N/A
          OpenGL: renderer: llvmpipe (LLVM 6.0 128 bits) v: 3.3 Mesa 18.0.5
          compat-v: 3.0 direct render: Yes

# cat /etc/X11/xorg.conf
Section "Device"
	Identifier "VESA"
	Driver "vesa"
EndSection

Section "Monitor"
	Identifier "Dell"
	HorizSync 30-81
	VertRefresh 56-76
EndSection

Section "Screen"
	Identifier "Screen0"
	Device "VESA"
	Monitor "Dell"
	DefaultDepth 16
	Virtual 1024 768
	SubSection "Display"
		Depth 24
		Modes "800x600" "1024x768" "640x480"
	EndSubSection
	SubSection "Display"
		Depth 16
		Modes "800x600" "1024x768" "640x480"
	EndSubSection
EndSection

# xrandr
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 640 x 480, current 1024 x 768, maximum 1024 x 768
default connected 1024x768+0+0 0mm x 0mm
   800x600        0.00
   1024x768       0.00*
   640x480        0.00

# hwinfo --monitor
30: None 00.0: 10002 LCD Monitor
  [Created at monitor.97]
  Unique ID: rdCR.8OK+7dncs2E
  Hardware Class: monitor
  Model: "DELL 1704FPT"

>> ...Note the complete absence of modelines in xorg.conf. They are an
>> anachronism.
> OK, I took those out. I use them because some video cards & monitors
> leave me with a wide "front porch", but for a test... No joy.

Again I find myself unable to grok your expectation consequent to my comment.
--
Felix Miata
Re: Lost sliding windows w/ DVI Syncmaster monitor
anachronism. Modern server versions are perfectly capable of generating modelines at least as well as GTF and CVT, as long as they get valid HorizSync and VertRefresh from somewhere. Note also inclusion of nouveau.modeset=0 on each kernel command line.

I also tried with others:

1 - Fedora 31: no sign of the applied panning configuration that worked in olders.
http://fm.no-ip.com/Tmp/Linux/Xorg/paulR/xorg.0.log-big41-f31-XOCpanningFAILS-paulR.txt
KP+ & KP- not tested.

2 - openSUSE Tumbleweed: no sign of the applied panning configuration that worked in olders.
http://fm.no-ip.com/Tmp/Linux/Xorg/paulR/xorg.0.log-big41-stw-XOCpanningFAILS-paulR.txt
Panning via modesetting DDX and xrandr (no xorg.conf*) works as expected:
http://fm.no-ip.com/Tmp/Linux/Xorg/paulR/xorg.0.log-big41-stw-paulR-xrandrPanningOK.txt
KP+ & KP- not tested.

3 - Debian Buster: ran into multiple problems, not the least of which was that X wouldn't do better than 640x480 with VESA on my Samsung, so I aborted the attempt.
http://fm.no-ip.com/Tmp/Linux/Xorg/paulR/xorg.0.log-g5eas-d11buster-XOCbad-paulR.txt
As with TW, panning via modesetting DDX and xrandr (no xorg.conf*) works as expected:
http://fm.no-ip.com/Tmp/Linux/Xorg/paulR/xorg.0.log-g5eas-d11buster-paulR-xrandrPanningOK.txt
KP+ & KP- had no impact on screen resolution/mode.
--
Felix Miata
Re: Lost sliding windows w/ DVI Syncmaster monitor
Paul Rogers composed on 2020-04-01 13:02 (UTC-0700):

>> I don't think it has anything to do with the Samsung or the Dell, but
>> rather with however you are trying to configure. How exactly are you
>> configuring?
> See attachment. Or are you asking for the build options for the server?

xorgconfd.tgz had what I was looking for. :-)

>> Please show us your GPU specification, thus:
>> # inxi -Gxx
> Graphics:
>   Device-1: NVIDIA GK208 [GeForce GT 635] driver: N/A bus ID: 01:00.0
>   chip ID: 10de:1280
>   Display: server: X.Org 1.18.4 driver: vesa resolution: 1024x768~N/A
>   OpenGL: renderer: Gallium 0.4 on llvmpipe (LLVM 3.8 256 bits)
>   v: 3.3 Mesa 12.0.1 compat-v: 3.0 direct render: Yes

Rather than a configuration of the type you're familiar with, I'll bet the newer way will work, if not with your custom-installed system, at least with a common recent live-media distro as a means of proving. Modern Xorg automagic should get the whole job done, up until the point of panning configuration, if instead of VESA you use the newer technology default DDX driver provided by the X server rather than in a separate package.

Start by eliminating 20-vesa.conf, 40-samsung.conf and 90-screen.conf from xorg.conf.d/. Let X start up in the Samsung's native 1280x1024 mode using the default "modesetting" DDX. Then try as follows:

# xrandr | head
Screen 0: minimum 320 x 200, current 1280 x 1024, maximum 8192 x 8192
DVI-I-1 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 338mm x 270mm
   1280x1024     60.02*+  75.02
   1280x960      60.00
   1152x864      75.00
   1024x768      60.04    75.03    70.07    60.00
   960x720       60.00
...
# xrandr --fb 1280x1280 --output DVI-I-1 --mode 1024x768 --panning 1280x1280
# xrandr | head
Screen 0: minimum 320 x 200, current 1280 x 1280, maximum 8192 x 8192
DVI-I-1 connected 1280x1280+0+0 (normal left inverted right x axis y axis) 338mm x 270mm panning 1280x1280+0+0
   1280x1024     60.02 +  75.02
   1280x960      60.00
   1152x864      75.00
   1024x768      60.04    75.03*   70.07    60.00
   960x720       60.00
...
# inxi -SGxx
System:   Host: g5eas Kernel: 4.4.180-102-default x86_64 bits: 64
          compiler: gcc v: 4.8.5 Desktop: Trinity R14.0.6 tk: Qt 3.5.0
          wm: Twin dm: startx Distro: openSUSE Leap 42.3
Graphics: Device-1: XGI Z7/Z9 vendor: Gigabyte driver: xgifb v: kernel
          bus ID: 0a:03.0 chip ID: 18ca:0020
          Device-2: NVIDIA G98 [GeForce 8400 GS Rev. 2] driver: nouveau
          v: kernel bus ID: 0b:00.0 chip ID: 10de:06e4
          Display: server: X.Org 1.18.3 driver: modesetting tty: N/A
          OpenGL: renderer: Gallium 0.4 on llvmpipe (LLVM 3.8 128 bits)
          v: 3.3 Mesa 17.0.5 compat-v: 3.0 direct render: Yes

http://fm.no-ip.com/SS/Xorg/1024x0768on1280x1280.jpg

I used nearly the same kernel and XServer as yours, as well as an NVidia GPU not terribly newer, and a Dell LCD. I do have a 1280x1024 Samsung SyncMaster 914V, but it's upstairs and has no DVI port, so it's more bother to test with. Ignore what you see about the XGI GPU. This Gigabyte's BIOS doesn't understand how to do its job correctly when a PCI GeForce is installed.

If it works as I expect, you can put xrandr in whatever script you start X with, or another. If it fails, I'd like to see the resulting Xorg.0.logs from both your custom build and any recent live distro that depends fully on automagic X configuration. Be sure you do not have the Xorg nouveau DDX installed, as it will usurp the modesetting DDX unless you perform additional manual configuration.
--
Felix Miata
Re: Lost sliding windows w/ DVI Syncmaster monitor
Paul Rogers composed on 2020-03-31 23:09 (UTC-0700):

> For years, in xorg.conf I've enjoyed declaring a larger than comfortable
> maximum resolution at the end of the modes, then one more comfortable
> for my old eyes for use. Then I could slide the visible window around
> the larger virtual window to bring diferent parts into view. I recently
> upgraded to a DVI Syncmaster monitor and this facility no longer works.
> The mouse pointer will disappear off the right & bottom edges, but the
> viewable area is firmly fixed in the upper left corner of the virtual
> window. This seems to come from the monitor--I tried it also with a Dell
> monitor, and had the same behavior. Is there come configuration option
> that will bring it back for me?

The effect you like is called panning. I don't think it has anything to do with the Samsung or the Dell, but rather with however you are trying to configure. How exactly are you configuring?

Also, you're using a very slow generic X driver (VESA) that I suspect may not support panning. It's also possible that your distro's build of the 1.18.4 server or driver is broken WRT panning. ISTR panning was broken in the server for quite some time, depending on configuration method, and the patch fixing it may not have been included in your quite old 1.18.4 version.

Please show us your GPU specification, thus:

	# inxi -Gxx

Panning is working fine here, 1920x1200 visible area on a 2560x1440 desktop, and 1440x900 visible area on a 1920x1920 desktop, using the default modesetting driver:

# inxi -SGxx
System:   Host: hp945 Kernel: 4.12.14-lp151.28.44-default x86_64 bits: 64
          compiler: gcc v: 7.5.0 Desktop: KDE 3 wm: kwin dm: startx
          Distro: openSUSE Leap 15.1
Graphics: Device-1: NVIDIA GT218 [GeForce 210] vendor: eVga.com.
          driver: nouveau v: kernel bus ID: 01:00.0 chip ID: 10de:0a65
          Display: server: X.Org 1.20.3 driver: modesetting
          resolution: 1920x1200~60Hz
          OpenGL: renderer: llvmpipe (LLVM 7.0 128 bits) v: 3.3 Mesa 18.3.2
          compat-v: 3.1 direct render: Yes
# xrandr --fb 2560x1440 --output DVI-I-1 --mode 1920x1200 --panning 2560x1440
# xdpyinfo | grep dimen
  dimensions:    2560x1440 pixels (541x304 millimeters)
# xrandr | egrep 'onnect|creen|\*' | grep -v disconn | sort -r
Screen 0: minimum 320 x 200, current 2560 x 1440, maximum 8192 x 8192
DVI-I-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 519mm x 324mm panning 2560x1440+0+0
   1920x1200     59.95*+

# inxi -SGxx
System:   Host: hp945 Kernel: 4.12.14-lp151.28.44-default x86_64 bits: 64
          compiler: gcc v: 7.5.0 Desktop: KDE 3 wm: kwin dm: startx
          Distro: openSUSE Leap 15.1
Graphics: Device-1: NVIDIA GT218 [GeForce 210] vendor: eVga.com.
          driver: nouveau v: kernel bus ID: 01:00.0 chip ID: 10de:0a65
          Display: server: X.Org 1.20.3 driver: modesetting
          resolution: 1440x900~60Hz
          OpenGL: renderer: llvmpipe (LLVM 7.0 128 bits) v: 3.3 Mesa 18.3.2
          compat-v: 3.1 direct render: Yes
# xrandr --fb 1920x1920 --output DVI-I-1 --mode 1440x900 --panning 1920x1920
# xdpyinfo | grep dimen
  dimensions:    1920x1920 pixels (406x406 millimeters)
# xrandr | egrep 'onnect|creen|\*' | grep -v disconn | sort -r
Screen 0: minimum 320 x 200, current 1920 x 1920, maximum 8192 x 8192
DVI-I-1 connected primary 1920x1920+0+0 (normal left inverted right x axis y axis) 519mm x 324mm panning 1920x1920+0+0
   1440x900      59.90*
--
Felix Miata
Re: X.org fails to start with "permission" issues, does not even try loading Intel driver
Mikhail Ramendik composed on 2020-01-30 21:04 (UTC):

> I also wonder why modeset is the driver of choice at all, seeing as
> this is an Intel chipset and the i915 module is loaded.

All my Fedoras on non-ancient Intel GPUs are running on the modesetting DDX. The last official Intel DDX release was nearly 5 years ago. It's been nothing but gits since then:
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/
--
Felix Miata
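One quick way to confirm which DDX the server actually loaded is to grep its log. A sketch; the default log path is an assumption, since rootless servers write to ~/.local/share/xorg/Xorg.0.log instead:

```shell
#!/bin/sh
# Report which DDX drivers the X server loaded, from its log.
# /var/log/Xorg.0.log is only the traditional path; rootless servers
# log to ~/.local/share/xorg/Xorg.0.log instead.
which_ddx() {
    log="${1:-/var/log/Xorg.0.log}"
    if [ ! -r "$log" ]; then
        echo "no readable log at $log"
        return 0
    fi
    grep -o 'Loading.*modules/drivers/[a-z_]*_drv\.so' "$log" | sed 's|.*/||' | sort -u
}
which_ddx
```

On a machine running the modesetting DDX this typically prints just `modesetting_drv.so`; a second `intel_drv.so` line would mean the Intel DDX usurped it.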
Re: Using Intel GPU "TearFree" option changes my detected screens / edid information
Seba Kerckhof composed on 2019-10-15 16:34 (UTC+0200):

> I was experiencing tearing on my Debian system. I read about the intel
> driver "TearFree" option and configured it as explained here:
> https://wiki.archlinux.org/index.php/Intel_graphics#Tearing
> While it does seem to help with the tearing, it changes my detected
> screens. By this I mean if I run xrandr, my display ports have a different
> name and the detected screens have a different EDID (model/vendor),
> different resolutions etc.

You apparently weren't using the Intel DDX, but the (default) modesetting DDX. By applying:

/etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "TearFree" "true"
EndSection

you switched to the Intel DDX, which uses a different naming convention. The modesetting DDX's names work the same for Intel, AMD and NVidia GPUs. Why vendor:model would change I have no idea.

The Intel DDX hasn't had an official release in over 4 years, while Intel's DDX writers are significant contributors to the modesetting DDX. It seems as though the Intel DDX is informally deprecated. I've been using the modesetting DDX exclusively where supported, in spite of tearing (on Haswell), which doesn't really bother me.
--
Felix Miata
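For readers who only wanted tearing fixed: servers newer than the 1.20 era discussed in this thread added a TearFree option to the modesetting DDX itself, so the driver switch (and the output renaming it causes) can be avoided. A sketch, assuming such a server version; the filename and Identifier string are arbitrary:

```
/etc/X11/xorg.conf.d/20-modesetting.conf
Section "Device"
    Identifier "Modesetting Graphics"
    Driver "modesetting"
    Option "TearFree" "true"
EndSection
```

With this file in place the modesetting DDX keeps its usual output names (HDMI-1, DP-1, ...) while flipping tear-free.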
Re: XOrg in Debian10/Buster not usable with AMD Duron / Matrox G400
Markus Hiereth composed on 2019-08-11 22:05 (UTC+0200):
...
I was able to replicate the (EE) tail of your attached log via an /etc/X11/xorg.conf file containing the following:

Section "Monitor"
...
    DefaultDepth 16
...
EndSection

Commenting away DefaultDepth restored normal X operation. I suggest removing any similar line you find anywhere in /etc/X11/xorg.conf.d/*, and if that alone is insufficient, then remove /etc/X11/xorg.conf if it exists, and all files in /etc/X11/xorg.conf.d/ that contain any of the following lines:

Section "Device"
Section "Monitor"
Section "Screen"

whatever their filenames may be, then try. If it still doesn't work, add the attached /etc/X11/xorg.conf, verified working here. Try it as-is first. It may need HorizSync and VertRefresh adjusted to match your display. X is smart enough to auto-generate suitable modelines given appropriate HorizSync and VertRefresh.
--
Felix Miata

Section "Device"
    Identifier "DefaultDevice"
    Driver "mga"
EndSection

Section "Monitor"
    Identifier "DefaultMonitor"
    VendorName "Dell"
    ModelName "DELL 1704FPT"
    HorizSync 30-81
    VertRefresh 56-76
EndSection

Section "Screen"
    Identifier "DefaultScreen"
    Device "DefaultDevice"
    Monitor "DefaultMonitor"
EndSection
Re: XOrg in Debian10/Buster not usable with AMD Duron / Matrox G400
Markus Hiereth composed on 2019-08-10 22:27 (UTC+0200):

> [ 3974.700] (II) MGA(0): VESA VBE DDC supported
> [ 3974.700] (II) MGA(0): VESA VBE DDC Level none
> [ 3974.700] (II) MGA(0): VESA VBE DDC transfer in appr. 0 sec.
> [ 3974.844] (II) MGA(0): VESA VBE DDC read failed

I spotted another unexpected difference. Mine:

[ 1679.533] (II) MGA(0): VESA VBE DDC supported
[ 1679.533] (II) MGA(0): VESA VBE DDC Level 2
[ 1679.533] (II) MGA(0): VESA VBE DDC transfer in appr. 1 sec.
[ 1681.106] (II) MGA(0): VESA VBE DDC read successfully

This suggests to me that your problem's root could be in your display's EDID. Have you tried with other displays?
--
Felix Miata
Re: XOrg in Debian10/Buster not usable with AMD Duron / Matrox G400
Markus Hiereth composed on 2019-08-10 22:27 (UTC+0200):

> Felix Miata schrieb am 10. August 2019 um 21:55
> As I announced, I made an attempt with disabled DRI option for the video
> card, i.e. I introduced
> Option "DRI" "False"

I suppose DRI support is supposed to be auto-detected, but maybe that is not working as expected in your environments.
https://en.wikipedia.org/wiki/Direct_Rendering_Infrastructure

> in the device section of xorg.conf. The result was that the X server
> started as expected. Below the respective logfile.
> Thus, the problem is solved though I do not know or try to imagine
> what benefits DRI "Direct rendering infrastructure" might have.

Does this PC with AGP slot and G400 have 4G or more physical RAM? If it does not (I have no AGP slot boards that support more than 2G, and suspect maybe no such thing exists), then for completeness' sake I would undo the DRI false option and try the non-PAE kernel. See here why I make this suggestion:
https://bugzilla.opensuse.org/show_bug.cgi?id=1118689#c29

Anyway, great to know these great old GPUs remain useful for others than myself. :-)
--
Felix Miata
Re: XOrg in Debian10/Buster not usable with AMD Duron / Matrox G400
Markus Hiereth composed on 2019-08-10 20:24 (UTC+0200):

> Your mga module was compiled for another version of the server:
> 66c80
>> (II) Loading /usr/lib/xorg/modules/drivers/mga_drv.so
> < (II) Loading /usr/local/lib/xorg/modules/drivers/mga_drv.so
> 68c82,88
> <     compiled for 1.20.4, module version = 2.0.0
> ---
>>     compiled for 1.20.3, module version = 2.0.0
>>     Module class: X.Org Video Driver
>>     ABI class: X.Org Video Driver, version 24.0
>> (II) LoadModule: "modesetting"
>> (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
>> (II) Module modesetting: vendor="X.Org Foundation"
>>     compiled for 1.20.4, module version = 1.20.4

My Buster /etc/apt/sources.list:

deb http://ftp.debian.org/debian buster main contrib non-free
deb http://ftp.debian.org/debian buster-backports main
deb http://security.debian.org/ buster/updates main contrib non-free
deb http://ftp.debian.org/debian buster-updates main
deb http://mirror.xcer.cz/trinity-sb buster deps-r14 main-r14
deb-src http://mirror.xcer.cz/trinity-sb buster deps-r14 main-r14
deb http://www.deb-multimedia.org buster main non-free

I have no /usr/local/lib/xorg/* here, nor any idea how to explain my mga_drv.so reporting compiled for 1.20.3. There is minimal corruption here at window and panel edges, but it doesn't interfere with normal use. Here's a fresh log, nominally larger, 691 lines instead of 683:
http://fm.no-ip.com/Tmp/Linux/Xorg/Mga/xorg.0.log-gx27c-deb10-G400-normal
--
Felix Miata
Re: XOrg in Debian10/Buster not usable with AMD Duron / Matrox G400
Markus Hiereth composed on 2019-08-09 11:35 (UTC+0200):

> could You send a Xorg.log file that allows a comparison of the
> startup procedures of the X server on your and on my system.

http://fm.no-ip.com/Tmp/Linux/Xorg/Mga/xorg.0.log-mga400-deb10-201908090655-0400
--
Felix Miata
Re: XOrg in Debian10/Buster not usable with AMD Duron / Matrox G400
Markus Hiereth composed on 2019-07-16 11:05 (UTC+0200):
...
I have Buster working with P4 and G400 mostly OK here.

xrandr --dpi 120
cannot get size of gamma for output default.

# dpkg-query -l | grep mga
ii  xserver-xorg-video-mga  1:2.0.0-1  i386  X.Org X server -- MGA display driver

# inxi -GxxSa
System:    Host: gx27c Kernel: 4.19.0-5-686 i686 bits: 32 compiler: gcc v: 8.3.0
           parameters: root=/dev/sda20 ipv6.disable_ipv6=1 net.ifnames=0 noresume mitigations=auto consoleblank=0 3
           Desktop: Trinity R14.0.7 tk: Qt 3.5.0 wm: Twin dm: startx Distro: Debian GNU/Linux 10 (buster)
Graphics:  Device-1: Matrox Systems MGA G400/G450 driver: N/A bus ID: 01:00.0 chip ID: 102b:0525
           Display: tty server: X.Org 1.20.4 driver: mga unloaded: modesetting alternate: fbdev,vesa resolution: 1680x1050~60Hz
           OpenGL: renderer: llvmpipe (LLVM 7.0 128 bits) v: 3.3 Mesa 18.3.6 compat-v: 3.1 direct render: Yes

Which BIOS version does your G400 have? I upgraded mine to 2.1 about 10 years ago. I have the software that did the upgrade if you need it. It doesn't seem available from matrox.com any more.
http://fm.no-ip.com/Tmp/Hardware/Gfxcard/mgabios.zip
--
Felix Miata
Re: X.Org modules which could use some help to release
Adam Jackson composed on 2019-07-22 11:23 (UTC-0400):

> On Sun, 2019-07-21 at 09:29 -0400, Felix Miata wrote:
>> Adam Jackson composed on 2019-07-16 15:33 (UTC-0400):
>>> On Sun, 2019-07-14 at 18:34 -0700, Alan Coopersmith wrote:
>>>> - driver/xf86-video-tseng
>>> These are drivers for some fairly ancient PCI devices, both Tseng Labs
>>> and Ark Logic were out of the graphics chip business by 1999. If
>>> someone wants to verify that they work with a current release, neat,
>>> but this is serious necrocomputing territory.
>> Does any distro package this? It's the best there ever was for DOS users.
>> I would test if I could find an .rpm (Mageia, openSUSE, Fedora) or .deb
>> (Buster, AntiX) to install.
> Fedora did for a while:
> https://src.fedoraproject.org/rpms/xorg-x11-drv-tseng/tree/f16

No joy:

(EE) TSENG(0): No valid Framebuffer address in PCI config space;

Does this mean the ET6100 needs manual configuration of PCI Bus ID? Other?

Xorg.0.logs & dmesgs from F16 using
<https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/16/Everything/i386/os/Packages/xorg-x11-drv-tseng-1.2.4-7.fc16.i686.rpm>:
http://fm.no-ip.com/Tmp/Linux/Xorg/Tseng/
--
Felix Miata
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel
Re: X.Org modules which could use some help to release
Adam Jackson composed on 2019-07-16 15:33 (UTC-0400):

> On Sun, 2019-07-14 at 18:34 -0700, Alan Coopersmith wrote:
>> - driver/xf86-video-tseng
> These are drivers for some fairly ancient PCI devices, both Tseng Labs
> and Ark Logic were out of the graphics chip business by 1999. If
> someone wants to verify that they work with a current release, neat,
> but this is serious necrocomputing territory.

Does any distro package this? It's the best there ever was for DOS users. I would test if I could find an .rpm (Mageia, openSUSE, Fedora) or .deb (Buster, AntiX) to install.

Without nomodeset on the ET6100 I get a black framebuffer screen booting Buster or Tumbleweed. Trying X in Buster or TW, neither FBDEV nor VESA succeeds:
http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-et6100-buster
http://fm.no-ip.com/Tmp/Linux/Xorg/dmesg-et6100.txt

AntiX fbdev works @1024x768:

# inxi -GxxSM
System:    Host: hp945 Kernel: 4.4.8-antix.2-amd64-smp x86_64 bits: 64 compiler: gcc v: 4.9.3
           Console: N/A dm: N/A Distro: Debian GNU/Linux 8 (jessie)
Machine:   Type: Desktop System: HP-Pavilion product: RX900AA-ABA a6010n v: N/A serial: CNX71002LP
           Chassis: Hewlett-Packard type: 3 v: serial: N/A
           Mobo: ASUSTek model: LEONITE v: 5.00 serial: MS1C71S40800293
           BIOS: Phoenix v: 5.10 date: 01/30/2007
Graphics:  Card-1: Intel 82945G/GZ Integrated Graphics driver: N/A bus ID: 00:02.0 chip ID: 8086:2772
           Card-2: Tseng Labs ET6000 driver: N/A bus ID: 01:04.0 chip ID: 100c:3208
           Display: server: X.Org 1.16.4 driver: fbdev,modesetting unloaded: tseng,vesa resolution: 1024x768~N/A
           OpenGL: renderer: Gallium 0.4 on llvmpipe (LLVM 3.5 128 bits) v: 3.0 Mesa 10.3.2 direct render: Yes
--
Felix Miata
Re: Corrupt window
Andrew Kurn composed on 2019-06-18 20:16 (UTC-0700):

> On Tue 18 Jun 2019 10:50 -0500, Chris Sorenson wrote:
>>> Mon, 17 Jun 2019 15:01:50 -0700 Andrew Kurn composed:
>>> I'm using the version of X current for Debian Jessie.
>>> I find that windows are sometimes corrupted. The typical case
>>> is in XTerm, when the bottom line is covered by a copy of the
>>> second-last line. It happens a little more than half the
>>> time, so some sort of race condition.
>>> It is not confined to XTerm; Emacs and Aisleriot show some
>>> of the same behavior.
>>> In all cases, moving the window will cure the problem. The
>>> redraw is correct.
>>> So my conclusion is it's a fault in X. I've never had to
>>> report an X bug before.

It's unlikely a bug report for a product as old as Jessie would get fixed. Versions have advanced considerably since; it may have been a bug that is long since fixed.

>>> If you will give me a list of info to provide, I will send
>>> it in a later message.
>> Your graphics hardware? If you don't know, `lspci -vv` will list everything
>> connected to your PCI bus, but please parse out the graphics info rather
>> than replying with the whole list.

Lspci doesn't report the kernel driver or the DDX (the X driver dependent on the kernel's modesetting functionality, KMS).

> 00:02.0 VGA compatible controller: Intel Corporation 82Q963/Q965 Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
...
>     Kernel driver in use: i915
> 00:02.1 Display controller: Intel Corporation 82Q963/Q965 Integrated Graphics Controller (rev 02)

You may find switching DDX would clear up the problem. In Jessie the Intel DDX is in xserver-xorg-video-intel, while the modesetting DDX is in xserver-xorg-video-modesetting. Switching can be as simple as purging one and/or installing the other.

I have the same GPU but no OS so old, and have seen no evidence of the corruption you describe using the modesetting DDX:

# inxi -GxxSM
System:    Host: gx745 Kernel: 4.12.14-lp151.28.7-default x86_64 bits: 64 compiler: gcc v: 7.4.0
           Desktop: KDE 3.5.10 tk: Qt 3.3.8c wm: kwin dm: N/A Distro: openSUSE Leap 15.1
Machine:   Type: Desktop System: Dell product: OptiPlex 745 v: N/A serial: 901DSC1
           Chassis: type: 15 serial: 901DSC1
           Mobo: Dell model: 0GX297 serial: ..CN6986173Q1835. BIOS: Dell v: 2.6.2 date: 08/12/2008
Graphics:  Device-1: Intel 82Q963/Q965 Integrated Graphics vendor: Dell driver: i915 v: kernel bus ID: 00:02.0 chip ID: 8086:2992
           Display: server: X.Org 1.20.3 driver: modesetting unloaded: fbdev,vesa alternate: intel resolution: 1680x1050~60Hz
           OpenGL: renderer: Mesa DRI Intel 965Q v: 2.1 Mesa 18.3.2 direct render: Yes

# lspci -vv
00:02.0 VGA compatible controller: Intel Corporation 82Q963/Q965 Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
...
00:02.1 Display controller: Intel Corporation 82Q963/Q965 Integrated Graphics Controller (rev 02)
--
Felix Miata
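The purge/install switch described above could be scripted roughly as follows. This is only a sketch: by default it prints the commands instead of running them (pass an executor such as sudo to actually run), and the package names are the Jessie-era ones from the message.

```shell
#!/bin/sh
# Sketch of switching from the intel DDX to modesetting on Debian Jessie.
# Dry-run by default: with no argument, commands are echoed, not executed.
switch_ddx() {
    run="${1:-echo}"   # pass "sudo" to actually execute
    $run apt-get -y purge xserver-xorg-video-intel
    $run apt-get -y install xserver-xorg-video-modesetting
}
switch_ddx
```

After the switch, restart the X server so the new DDX is loaded. On servers 1.17+ merely purging the intel package suffices, since modesetting is built into the server.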
which man page contains valid options for Option "MonitorLayout"?
I've found examples used: LVDS AUTO on https://wiki.gentoo.org/wiki/Xorg/Multiple_monitors, and CRT CRT+LFP on https://help.ubuntu.com/community/XineramaHowTo, but no example anywhere where two LCDs are used. man xorg.conf has no instance of the string MonitorLayout. man modesetting and man radeon didn't help either. :-(

I'm looking due to https://bugs.freedesktop.org/show_bug.cgi?id=32430, which tells me I have to have a ServerLayout for DisplaySize and at least one other option to be able to work.
--
Felix Miata
https://freedesktop.org/wiki/Software/xorg/
The page is full of bad links. I can't fix them because I don't know what their replacements should be since this exodus from bugzilla began. Bugzilla doesn't seem to have a documentation category, and the same goes for gitlab.
--
Felix Miata
Re: frustrated by gitlab - where is the release history?
Pekka Paalanen composed on 2019-02-28 10:28 (UTC+0200):

> On Wed, 27 Feb 2019 22:18:18 -0500 Felix Miata wrote:
>> https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/ has what I'm
>> looking for for the Intel DDX. I would like to find whatever corresponds
>> to it for the server, which on openSUSE seems to be called
>> xorg-x11-server. URL in its info is merely http://xorg.freedesktop.org/.
>> Name in Debian seems to be xserver-xorg-core. It seems like
>> https://gitlab.freedesktop.org/xorg/xserver should have it, but I don't
>> see it. What am I missing?
> is it https://gitlab.freedesktop.org/xorg/xserver/tags ?

It is. Thanks!
--
Felix Miata
frustrated by gitlab - where is the release history?
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/ has what I'm looking for for the Intel DDX. I would like to find whatever corresponds to it for the server, which on openSUSE seems to be called xorg-x11-server. URL in its info is merely http://xorg.freedesktop.org/. Name in Debian seems to be xserver-xorg-core. It seems like https://gitlab.freedesktop.org/xorg/xserver should have it, but I don't see it. What am I missing?
--
Felix Miata
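For what it's worth, release tags can also be listed without any web UI, straight from the repository with git itself. A sketch; the xserver clone URL is the one from the message, and network access is required for remote repositories:

```shell
#!/bin/sh
# List the newest release tags of a git repository without a browser.
# Default URL is the xserver repo discussed above; any clone URL works.
list_tags() {
    repo="${1:-https://gitlab.freedesktop.org/xorg/xserver.git}"
    out=$(git ls-remote --tags "$repo" 2>/dev/null | sed 's|.*refs/tags/||;s|\^{}$||' | sort -u)
    if [ -z "$out" ]; then
        echo "could not list tags from $repo (no git or no network?)"
    else
        printf '%s\n' "$out" | tail -n 5
    fi
}
```

`list_tags` with no argument prints the five newest xserver tag names, which is the same information the gitlab /tags page shows.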
Re: Xorg setting low resolution with double HDMI output despite forcing to full HD
Francesco Nwokeka composed on 2019-02-21 10:45 (UTC+0100):

> I have an Intel NUC with dual HDMI output. Recently I've been experiencing
> a problem with the output resolution of the screens. When booting the
> system I get a screen resolution inferior to what I actually try to set
> via the .xinitrc file:
> xrandr --output $(xrandr -q | grep "\sconnected\s" | cut -f1 -d" ") --mode 1920x1080

Note that in the log the string "onnected" does not appear. AFAICT, only use of the xf86-video-intel DDX driver, which last had an official release in 2015, causes that omission.

> local/xf86-video-intel 1:2.99.917+859+g33ee0c3b-1 (xorg-drivers)
> Does anyone have any idea to what might be the problem/solution?

The easier solution is to purge xf86-video-intel, which, when the server version is 1.17.0 or higher, should automagically cause utilization of the newer modesetting DDX driver on X restart or reboot. The other option is to reconfigure /etc/X11/xorg.conf.d/50-device.conf to specify driver modesetting. The modesetting DDX driver does not have its own package; instead it's the default, provided by the server package.

When I want to specify modes via xrandr, my script goes in /etc/X11/xinit/xinitrc.d/setup when using openSUSE or Fedora, /etc/X11/Xsession.d/95-setup when using Debian. My choice of setup or 95-setup to contain the xrandr script is arbitrary.
--
Felix Miata
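The 50-device.conf reconfiguration mentioned above might look like the following sketch; the Identifier string is arbitrary, and the file path follows the convention from the message:

```
/etc/X11/xorg.conf.d/50-device.conf
Section "Device"
    Identifier "DefaultDevice"
    Driver "modesetting"
EndSection
```

With the Driver line set to modesetting, the server ignores any installed xf86-video-intel and output names revert to the modesetting convention (HDMI-1, HDMI-2, ...).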
Re: Will like to release xf86-video-mga Version 1.7 soon
Kevin Brace composed on 2018-11-29 22:25 (UTC+0100):

> I do have several models of Millennium 4 MB PCI, one G200 8 MB AGP, one or
> two G400 AGP (16 MB to 32 MB), one G400 MAX AGP, and several G450 and G550
> (mostly AGP, but at least one PCI). For G200, I need a mainboard with AGP
> 3.3V signaling support in order to use it

I have a G550 in a box with openSUSE Tumbleweed and Debian 9, and several G400s that I could reinstall in boxes they were removed from due to lack of working MGA drivers, which in addition to Debian and Tumbleweed have Mageia 6 and/or Fedora 28 and/or Fedora 29. I can only test packages built by others. I may have a working PCI G200 as well.
--
Felix Miata
Re: glamor requires at least 128 instructions (64 reported)
Michel Dänzer composed on 2018-09-10 16:33 (UTC+0200):
...
> As for being able to use glamor, the rule of thumb is >= R600 for AMD
> and IIRC >= i965 for Intel, other than that I'm not sure.

Thank you! This largely answers my OP. R600=2007

IME, Intel Gen 3 fail:
2004/5  Grantsdale & Alviso: 915            GMA 900
2005/6  Lakeport & Calistoga: 945           GMA 950
2007    Bearlake: G31/G33/Q33/Q35           GMA 3100
2010    Pineview: Atom D4/D5/N4/N5          GMA 3150

Intel Gen 4 pass:
2006    Lakeport: 945GZ                     GMA 3000
2006    Broadwater: Q963/5 G965             GMA 3000/GMA X3000
2007    Bearlake: G35                       GMA X3500
2007    Crestline: GL/GLE/GM/GME 960/965    GMA X3100
2008    Eaglelake: B/G/Q 41/43/45           GMA 4500/GMA X4500/GMA X4500HD
2008    Cantiga: GL/GS/GM 40/45/47          GMA 4500MHD
--
"Wisdom is supreme; therefore get wisdom. Whatever else you get, get wisdom."
    Proverbs 4:7 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
Re: glamor requires at least 128 instructions (64 reported)
Michel Dänzer composed on 2018-09-10 12:15 (UTC+0200):

> Felix Miata wrote:
>> Michel Dänzer composed on 2018-09-10 09:24 (UTC+0200):
>>> Felix Miata wrote:
>>>> No google hits describe why this happens or what can be done to avoid
>>>> it. How can one determine whether this is absent software, broken
>>>> software, or unsupported GPU?
>>> Unsupported GPU.
>> I figured as much, but I worded my question here as I did on purpose,
>> trying to elicit self-determination information. In the instant case I
>> was really asking on behalf of (trying to help) someone who is using an
>> X1400 GPU getting only black 800x600 output on his external display, but
>> I've run into this error message multiple times before without ever
>> finding out why glamor doesn't find the 128 instructions it requires.
> Because the GPU is too old to support that many instructions in a shader.
>>> The recommended driver for ATI/AMD GPUs using the radeon kernel driver
>>> is the xf86-video-ati radeon driver, which supports hardware
>>> acceleration for your GPU via EXA.
>> Does this apply to all ATI GPUs too old for the xf86-video-amdgpu driver?
>> How can a user make such a determination? Does one need to ask a
>> developer for each GPU one comes across that fails with the
>> default/integrated driver? Can one use PCI IDs or something else in a
>> lookup table somewhere?
> If the radeon and amdgpu drivers are installed, the appropriate one is
> used by default automagically.

Do you not agree that the merger of all functionality of xf86-video-modesetting into the server made xf86-video-* optional software for most installed GPUs? Is there a better method than trying to start the server to determine whether a particular xf86-video package is not optional for any particular GPU of interest?

IOW, is there a lookup method or some utility available to know in advance whether a particular GPU (is new enough to) support the 128 instructions in a shader necessary for glamor, and thus for the included modesetting driver, the video driver that had been, if not still is, responsible for the most commits[1], to function?

[1] https://bugs.freedesktop.org/show_bug.cgi?id=94842#c4
--
Felix Miata
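One partial self-determination tool: mesa-utils' `glxinfo -l` prints OpenGL limits, and on drivers that expose the ARB program extensions that listing should include the instruction counts glamor compares against 128. This is a hedged sketch, not a confirmed method; whether the limit appears depends on the driver, and a running X session is required (a notice is printed otherwise):

```shell
#!/bin/sh
# Probe the ARB program instruction limits glamor checks against 128.
# Requires glxinfo (mesa-utils/mesa-demos) and a running X session;
# whether the driver reports these limits at all is driver-dependent.
probe_limits() {
    if [ -z "$DISPLAY" ] || ! command -v glxinfo >/dev/null 2>&1; then
        echo "no X session or glxinfo; cannot probe"
        return 0
    fi
    glxinfo -l | grep -i 'MAX_PROGRAM.*INSTRUCTIONS' || echo "limits not reported by this driver"
}
probe_limits
```

A reported value of 64, as on the RC410 in the original post, would predict the glamor failure before ever starting the server on that GPU.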
Re: glamor requires at least 128 instructions (64 reported)
Michel Dänzer composed on 2018-09-10 09:24 (UTC+0200):

> Felix Miata wrote:
>> No google hits describe why this happens or what can be done to avoid it.
>> How can one determine whether this is absent software, broken software,
>> or unsupported GPU?
> Unsupported GPU.

I figured as much, but I worded my question here as I did on purpose, trying to elicit self-determination information. In the instant case I was really asking on behalf of (trying to help) someone who is using an X1400 GPU getting only black 800x600 output on his external display, but I've run into this error message multiple times before without ever finding out why glamor doesn't find the 128 instructions it requires.

> The recommended driver for ATI/AMD GPUs using the radeon kernel driver
> is the xf86-video-ati radeon driver, which supports hardware
> acceleration for your GPU via EXA.

Does this apply to all ATI GPUs too old for the xf86-video-amdgpu driver? How can a user make such a determination? Does one need to ask a developer for each GPU one comes across that fails with the default/integrated driver? Can one use PCI IDs or something else in a lookup table somewhere?
--
Felix Miata
glamor requires at least 128 instructions (64 reported)
No google hits describe why this happens or what can be done to avoid it. How can one determine whether this is absent software, broken software, or unsupported GPU?

http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-gx320-0glamor-s150 excerpt:

[   122.660] (WW) glamor requires at least 128 instructions (64 reported)
[   122.660] (EE) modeset(0): Failed to initialize glamor at ScreenInit() time.
[   122.660] (EE) Fatal server error:
[   122.660] (EE) AddScreen/ScreenInit failed for driver 0

# inxi -Gxx
Graphics:  Card: Advanced Micro Devices [AMD/ATI] RC410 [Radeon Xpress 200/1100]
           bus-ID: 01:05.0 chip-ID: 1002:5a61
           Display Server: X.org 1.19.6 driver: modesetting
           tty size: 180x56 Advanced Data: N/A for root out of X
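For a scripted self-determination check, the reported count can at least be scraped out of the warning line quoted above; a minimal sketch against that exact log line:

```shell
# Extract the reported instruction count from the glamor warning line
# as it appears in the Xorg.0.log excerpt above.
line='[   122.660] (WW) glamor requires at least 128 instructions (64 reported)'
reported=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9][0-9]*\) reported).*/\1/p')
echo "$reported"
```

This only answers the question after a failed start, of course; it tells you the GPU is unsupported, not why.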
wiki editing
https://dri.freedesktop.org/wiki/ToDo/
https://dri.freedesktop.org/wiki/DDX/

Last edit on the latter was Sat 13 Apr 2013 11:15:48 PM UTC. Neither has a "home" page link. I wanted to update these, but can't figure out how to determine whether I even have an account that can. Selecting "Edit" produces a popup that provides only inputs for user name and password, nothing about obtaining an account or recovering a lost password.
-- 
"Wisdom is supreme; therefore get wisdom. Whatever else you get, get wisdom." Proverbs 4:7 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel
Re: gitlab migration
James Cloos composed on 2018-06-12 17:38 (UTC-0400):

> Two comments:
> BZ is superior to GL (or GH or the like).

Strongly agree, especially for returning useful search results!!!

> Mailing lists are vastly superior to any web-only

Strongly agree!!!
xrandr has panning, while man xorg.conf does not
Why not? 'Option "Panning" "HresXVres"' in 'Section "Monitor"' certainly used to work many moons ago, e.g. in server 1.9.3, the same way as it still does in server 1.19.6, and as --panning works with xrandr - with a single display. But is panning not supposed to work if a screen is comprised of more than one display?

I've been trying both methods, and can't seem to find a combination that works either with xrandr or with xorg.conf AND permits assigning any arbitrary DPI, as can be done with any single display. Here are most of the xrandr tries I've made using a 1920x1200 native Dell and a 1680x1050 native Lenovo with (recent) Intel and (older) GeForce gfx devices:

## Intel HDG630 w/ modeset(0); Debian 9 server 1.19.2; openSUSE Tumbleweed server 1.19.6
xrandr --dpi 108 --output HDMI-2 --auto --primary --output DP-1 --auto --right-of HDMI-2
## All OK as far as it goes, but without panning, not a rectangle
xrandr --dpi 108 --output HDMI-2 --auto --primary --panning 1920x1440 --output DP-1 --auto --right-of HDMI-2 --panning 1920x1440
## both displays @0,0; DPI 108; screen size 3840x1440
xrandr --dpi 108 --panning 3840x1440 --output HDMI-2 --auto --primary --output DP-1 --auto --right-of HDMI-2
## DPI ignored (96 DPI observed); otherwise OK
xrandr --fbmm 900x338 --panning 3840x1440 --output HDMI-2 --auto --primary --output DP-1 --auto --right-of HDMI-2
## fbmm ignored (96 DPI observed); otherwise OK

## NVidia GT218 w/ modeset(0); Debian 9 server 1.19.2 xrandr 1.5; Tumbleweed server 1.19.6 xrandr 1.5.0
xrandr --dpi 108 --output DVI-I-1 --auto --primary --output VGA-1 --auto --right-of DVI-I-1
## screen size 3600x1200 with 1680x150 rectangle missing at lower right; otherwise OK
xrandr --dpi 108 --output DVI-I-1 --auto --primary --panning 1920x1440 --output VGA-1 --auto --right-of DVI-I-1 --panning 1920x1440
## both displays @0,0; DPI 108; screen size 3600x1440
xrandr --dpi 108 --panning 3840x1440 --output DVI-I-1 --auto --primary --output VGA-1 --auto --right-of DVI-I-1
## dpi 96; screen size 3600x1200; otherwise OK
xrandr --fbmm 900x338 --panning 3840x1440 --output DVI-I-1 --auto --primary --output VGA-1 --auto --right-of DVI-I-1
## dpi 96; screen size 3600x1200; otherwise OK
xrandr --output DVI-I-1 --auto --primary --dpi 108 --panning 3840x1440 --output VGA-1 --auto --right-of DVI-I-1
## VGA-1 1680x1050@1920,0; DVI-I-1 3840x1440@0,0; DPI 108
xrandr --output DVI-I-1 --auto --primary --output VGA-1 --auto --right-of DVI-I-1 --dpi 108 --panning 3840x1440
## VGA-1 3840x1440@0,0; DVI-I-1 1920x1200@0,0; DPI 108

ISTR arandr may be able to do such things, but it works on a per-user basis. I'm after global application.
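The --fbmm values tried above follow from the relation DPI = pixels × 25.4 / mm. A quick sketch of the arithmetic; the 900x338 used above is a slightly rounded form of what this computes for a 3840x1440 framebuffer at 108 DPI:

```shell
# Convert a desired screen size in pixels plus a target DPI into the
# physical size in mm that xrandr --fbmm expects: mm = px * 25.4 / dpi.
awk 'BEGIN { w=3840; h=1440; dpi=108; printf "%dx%d\n", w*25.4/dpi, h*25.4/dpi }'
# -> 903x338
```

Whether the server then honors the computed size across a multi-display screen is exactly the open question in the tries above.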
Re: black screen trying to load modesetting
Perez Rodriguez, Humberto I composed on 2018-03-08 22:33 (UTC):

> Using intel nuc BDW hardware device

That's not enough. What is output from

	lspci -nnk | grep -A4 VGA

> Cmd command line default = quiet splash

You might find useful messages during boot if you remove quiet.

> And yes, when sna is not in the system, modesetting is being loaded, but also
> in this way it is fail.
Re: black screen trying to load modesetting
Perez Rodriguez, Humberto I composed on 2018-03-08 15:14 (UTC):

> i am having black screen when i try to enable modesetting with Xorg,
> with SNA works well, please see below my configuration and the issue.
> any help here would be appreciated :)

> kernel version : 4.13.0-32-generic

> this is my configuration file for modesetting
> $ cat 20-modesetting.conf
> Section "Device"
>     Identifier  "Intel Graphics"
>     Driver      "modesetting"
>     Option      "AccelMethod" "glamor"
> EndSection

> these are the errors that Xorg.0.log shows
> $ cat -n Xorg.0.log | grep "(EE)"
> (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
> [     4.425] (EE) modeset(0): failed to set mode: Invalid argument
> [     5.258] (EE) modeset(0): failed to set mode: No space left on device

> these are my commits for the following drivers:
> component: xserver
> tag: xorg-server-1.19.99.901-21-g90e0cdd
> commit: 90e0cdd42dfda2accfadffa5c550712696902e14
> Component: mesa
> tag: 17.3-branchpoint-3811-g55376cb
> commit: 55376cb31e2f495a4d872b4ffce2135c3365b873
> Component: xf86-video-intel
> tag: 2.99.917-814-gaa36399
> commit: aa36399cca1250d878a4608528a535faeb0e931a

Using which Intel gfx hardware device? What's on your kernel cmdline?

With most Intel gfx, modesetting is used automatically when xf86-video-intel is not installed and the X server is newer than 1.16.x. I've been using it in that manner with most of my Intel machines for several years, so I have no need of /etc/X11/xorg.conf* to specify either device driver or accel method.
Re: monitor hotplug resolution switch
Johann Obermayr composed on 2017-12-13 15:07 (UTC):

> now i have compile 2.99.917 and enable VirtualHeads in the xorg.conf.
> xrandr --newmode "1920x1080" 173.00 1920 2048 2248 2576 1080 1083 1088 1120
> xrandr --addmode VIRTUAL1 "1920x1080"
> xrandr --output VIRTUAL1 --mode "1920x1080"
> now it looks like it works.
> The resolution is always 1920x1080, even if I connect / disconnect
> DisplayPort or HDMI Monitor.
> Can I put this settings into the xorg.conf ?
> Or is xrandr the only way ?

Setting in xorg.conf was the only way before xrandr existed. Setting in xorg.conf may cause those settings to be applied sooner as well, so that they affect any GUI login manager that might be starting, instead of only at later window manager startup.

Also, you *should* need *no* modelines if you include valid values for these three options:

	VertRefresh
	HorizSync
	PreferredMode

I've *never* needed to include modelines to produce the resolution I want. Xorg knows how to auto-generate modelines quite well given those basic substitutes for a display's EDID.
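A minimal sketch of a Monitor section carrying those three options. The identifier and the sync ranges here are placeholders; substitute the display's actual limits from its spec sheet, since wrong ranges can produce an unusable mode:

```
Section "Monitor"
	Identifier   "Monitor0"
	HorizSync    30.0 - 83.0
	VertRefresh  56.0 - 76.0
	Option       "PreferredMode" "1920x1080"
EndSection
```

HorizSync and VertRefresh are plain Monitor-section entries, while PreferredMode is given as an Option, as shown.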
Re: monitor hotplug resolution switch
Johann Obermayr composed on 2017-12-13 08:36 (UTC):

>> Johann Obermayr composed:
>>> i have a x86 machine with i915 graphics.
>>> With connected display (1920x1080) all is ok.
>>> But if I disconnect the monitor, the resolution switch to 320x200.
>>> How can I disable this ?
>>> I will always have the same resolution (1920x1080)
>>> How can I do this ?

>> some more info is certainly necessary
>> what OS?
>> what remaining monitor?
>> what do xrandr say?

> yes, sorry.
> X86 dualcore Intel(R) Celeron(R) CPU G1820 @ 2.70GHz
> With On board graphics i915

'inxi -c0 -G' and/or 'lspci -nnk | grep -A4 VGA' would tell a bit more.

> Older Yocto build with Kernel 3.10.0

That kernel version is older than your G1820 CPU.

> X.Org X Server 1.14.0
> Release Date: 2013-03-05
> [1274713.521] X Protocol Version 11, Revision 0
> [1274713.521] Build Operating System: Linux 3.2.0-126-generic-pae i686
> [1274713.521] Current Operating System: Linux sigmatek-x86-mp 3.10.0 #48 SMP
> PREEMPT Wed Dec 6 18:30:54 CET 2017 i686
> [1274713.521] Current version of pixman: 0.29.2

Why did you omit the "Kernel command line:" line? What's on it might explain the fallback to 320x200, if the age of your hardware being newer than your software does not.

> And using xf86-video-intel-2.21.3.tar.bz2

That predates the release of your G1820 (Haswell) CPU by nearly a year.

> xorg.conf ...
> Section "Screen"
>     Identifier "Screen0"
>     Monitor    "Monitor0"
>     DefaultDepth 16
>     SubSection "Display"
>         Viewport 0 0
>         Depth 16
>     EndSubSection
> EndSection
> Yes we use 16 bit color depth.

That's not as well tested as the higher depths, and may cause inexplicable trouble.

> root@sigmatek-x86-mp:~# xrandr
> Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
> VGA1 disconnected
> DP1 disconnected
> HDMI1 disconnected
> DP2 disconnected
> HDMI2 disconnected
> DP3 connected 1920x1080 60.0 +
> HDMI3 disconnected
> But after disconnect the monitor, the driver will change resolution to
> 320x200.
> But we want that the driver does not change the resolution, or that we can
> define the default resolution.

Section "Monitor"
	...
	Option "PreferredMode" "1920x1080"
	...
EndSection

The PreferredMode option should define the preferred resolution, if the hardware is supported.
Re: X is consuming ~100 GiB of RAM(!)
Ewen Chan composed on 2017-12-07 11:22 (UTC-0500): ...

> My early subjective analysis (with this mgag200 blacklist) puts the time it
> takes to run the simulations now on par with Windows and Windows just
> worked (properly) like this from the get go.
> People keep talking about how great and wonderful Linux is, but this experience
> has been anything but

G200eW is an edge-case gfxchip, not something kernel, Xorg and driver developers use. Unless Matrox or those who do use the G200eW get involved with the devel process, reporting problems as they notice them, you cannot expect decent, if any, support. As it turns out, there is one SUSE dev who has shown interest in the MGA driver (see the bug I already mentioned), but as long as you are using an old, out-of-support openSUSE version, trying to get him to help is a waste of time. Server 15.x and kernel 3.12 are well past their support lives on openSUSE. If what you are actually using is SLE, then you should be entitled to support from SLE.

WRT iomem=relaxed: to find out if it would help to use the modesetting driver for this uncommon gfxchip, you will most likely either have to try it yourself, or find the relatively few other people who have tried using it, or ask Matrox, to find out about support.

FWIW, I took a brief look at an openSUSE Tumbleweed installation last updated 9 months ago, and found the MGA driver at least usable with a G550, but the tools aren't even recognizing it properly. Xorg.0.log reports the MGA driver is in fact in use.

# grep RET /etc/os-release
PRETTY_NAME="openSUSE Tumbleweed"
# uname -a
Linux a-865 4.10.4-1-default #1 SMP Sat Mar 18 12:29:57 UTC 2017 (e2ef894) i686 i686 i386 GNU/Linux
# inxi -G -c0
Graphics:  Card: Matrox Systems Millennium G550
           Display Server: X.org 1.19.2
           drivers: (unloaded: modesetting,fbdev,vesa) FAILED: mga
           tty size: 139x49 Advanced Data:
# zypper --no-refresh se -si mga
Loading repository data...
Reading installed packages...

S | Name           | Type    | Version   | Arch | Repository
--+----------------+---------+-----------+------+-----------
i | xf86-video-mga | package | 1.6.5-1.1 | i586 | OSS

# xrandr | head -n9
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 400 x 300, current 1280 x 1024, maximum 1280 x 1024
default connected 1280x1024+0+0 0mm x 0mm
   1280x1024     75.00*   60.00
   1152x864      75.00
   1024x768      75.00    60.00    70.00
   1280x960      60.00
    960x720      60.00
    928x696      60.00
    896x672      60.00
#
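When scripting around fallback behavior like this, the "current" geometry in the xrandr header is the value to watch; a sketch of scraping it, run here against the exact header line quoted above:

```shell
# Pull the current screen size out of an xrandr header line.
line='Screen 0: minimum 400 x 300, current 1280 x 1024, maximum 1280 x 1024'
printf '%s\n' "$line" | awk -F'current |, maximum' '{ gsub(/ x /, "x", $2); print $2 }'
# -> 1280x1024
```

In a live session one would pipe `xrandr | head -n1` into the same awk instead of a canned line.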
Re: X is consuming ~100 GiB of RAM(!)
Ewen Chan composed on 2017-12-07 00:32 (UTC-0500):

> 08:01.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA
> G200eW WPCM450 (rev 0a) (prog-if 00 [VGA controller])

Seeing this thread get so long makes me curious. I'm neither a dev nor all that familiar with the intricacies of Xorg or drivers, but I have had troubling experience with openSUSE and the mga driver, which you can see by looking at:

https://bugzilla.opensuse.org/show_bug.cgi?id=1004453

I wonder if it would be worth giving iomem=relaxed on the cmdline a try? Reading the bug might provide other clues what to try. IIRC, most distros have abandoned providing any mga driver. Also, I wonder whether the G200eW might be supported by the server-integrated modesetting driver?
Re: X does not start any more since upgrading to Debian 9.1 with NVIDIA GeForce 7100
"Andreas Köhler" composed on 2017-10-06 08:19 (UTC+0200):

> using nouveau.config=NvMSI=0 on the kernel command line I already tried ...
> without success.

As yet you've written nothing to indicate what you tried to get the modesetting driver, which works on my old systems, to work for you instead of VESA. With Stretch, modesetting should work automatically if all direct (.deb package) and indirect (e.g. blacklisting) NVidia bits are fully purged and xserver-xorg-video-nouveau is not installed.
Re: X does not start any more since upgrading to Debian 9.1 with NVIDIA GeForce 7100
"Andreas Köhler" composed on 2017-10-05 14:24 (UTC-0400):

> getting the NVIDIA GeForce 7100 working under current Debian stretch has
> totally failed. The Xorg nouveau driver failed to detect any screen and the
> fallback vesa driver supported only a resolution of 1024x768 ... but it
> worked.
> The kernel module nouveau driver always created the "write ... FAULT" messages
> and resulted in a system without a properly working mouse in gdm's login
> dialog and a totally unstable session.
> Finally I plugged in a different NVidia card into the free PCI-E slot and it
> immediately worked fine. So my problem is solved by this workaround but the
> problem behind the NVIDIA GeForce 7100 is still open.

I believe there are things left for you to try if you want to make that 7100 work. Note that I use the nouveau Xorg driver in neither of my old NVidia systems running Stretch. There's probably still something interfering to prevent the integral modesetting driver from working (primary fallback) instead of the secondary fallback VESA. One possibility is adding

	nouveau.config=NvMSI=0

to the kernel cmdline. I've needed this in the past with the GeForce 6150SE, which is close in vintage to the 7100. Another thing to try is booting Knoppix or some other live distro (or more than one) from CD, DVD or USB stick, to see whether Stretch or Xorg is the more likely cause of "[drm] Failed to open DRM device".
Re: X does not start any more since upgrading to Debian 9.1 with NVIDIA GeForce 7100
"Andreas Köhler" composed on 2017-10-03 12:53 (UTC+0200):

> [   537.149] (EE) [drm] Failed to open DRM device for (null): -22
> [   537.149] (EE) [drm] Failed to open DRM device for (null): -22
> [   537.150] (EE) [drm] Failed to open DRM device for pci::00:10.0: -22
> [   537.150] (EE) No devices detected.
> [   537.150] (EE)
> Fatal server error:
> [   537.150] (EE) no screens found(EE)
> [   537.150] (EE)
> Please consult the The X.Org Foundation support
> at http://wiki.x.org
> for help.
> [   537.150] (EE) Please also check the log file at "/var/log/Xorg.0.log" for
> additional information.
> [   537.150] (EE)
> [   537.151] (EE) Server terminated with error (1). Closing log file.

> This looks to me as if the nouveau driver does not support my older GPU NVIDIA
> Corporation C73 [GeForce 7100 / nForce 630i] (rev a2) any more

The error messages suggest to me a drm package may be missing. Is libdrm-nouveau2 installed?
Re: X does not start any more since upgrading to Debian 9.1 with NVIDIA GeForce 7100
"Andreas Köhler" composed on 2017-10-03 14:03 (UTC+0200):

> Hello Felix,
> thanks for your posting.
> The upgrade was already finished. The open source driver nouveau does not
> seem to support the NVIDIA GeForce 7100 any more

I doubt that's true. I have several Stretch installations with old GeForce gfxchips. These two hosts are similar in age to the 7100: the 8400GS is roughly a year newer; the 6150SE is roughly the same age (the series is older, but the specific chip was released 6 months after the 7x00 series was initially released).

# cat /etc/debian-version
9.1
# uname -a
Linux mcp61 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux
# inxi -G -c0
Graphics:  Card: NVIDIA C61 [GeForce 6150SE nForce 430]
           Display Server: X.org 1.19.2 drivers: modesetting (unloaded: fbdev,vesa)
           tty size: 118x38 Advanced Data: N/A for root

# cat /etc/debian-version
9.1
# uname -a
Linux g5eas 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux
# inxi -G -c0
Graphics:  Card: NVIDIA G98 [GeForce 8400 GS Rev. 2]
           Display Server: X.org 1.19.2 drivers: modesetting (unloaded: fbdev)
           tty size: 132x43 Advanced Data: N/A for root
# dpkg-query -l | egrep 'veau|idia'
ii  libdrm-nouveau2:amd64  2.4.74-1  amd64  Userspace interface to nouveau-specific kernel...
# cat /etc/X11/xorg.conf
cat: /etc/X11/xorg.conf: No such file or directory
# dpkg-query -l | grep firmware
#

I suggest you try:

1-purge *vidia* and everything directly related to proprietary driver installation
2-purge xserver-xorg-video-nouveau
3-ensure libdrm-nouveau2 is installed
4-ensure there's nothing left to update from Jessie to Stretch

There could be other packages never installed or upgraded to Stretch from Jessie, but since I have at least two GeForce oldies working, I'd think you should be able to too. The Xorg nouveau driver might work, but I've had no reason I can remember to try it on these two hosts. Note that my installations all use Trinity as Xorg's DM and default DE.

https://www.trinitydesktop.org/
https://wiki.trinitydesktop.org/DebianInstall
Re: X does not start any more since upgrading to Debian 9.1 with NVIDIA GeForce 7100
"Andreas Köhler" composed on 2017-10-02 19:39 (UTC+0200):

> Hello @xorg mailing list,
> I upgraded from Debian 8.x to 9.1 some days ago and now X does not start any
> more.
...
> [   462.623] Current Operating System: Linux uli 3.2.0-4-686-pae #1 SMP Debian
> 3.2.65-1+deb7u2 i686
> [   462.623] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-4-686-pae
> root=UUID=30333d3d-39dc-4635-ab28-81c8eed4a21b ro
...
> Since I cannot see any error it is really hard to understand what's going wrong
> here. Does anybody have a hint for me?
> Much thanks and regards,

Start by uninstalling the NVidia driver, removing /etc/X11/xorg.conf, and finishing the upgrade. Install the NVidia driver only if you are not pleased with the default modesetting driver's behavior, and you find the alternative nouveau driver isn't adequate either.

3.2 is Wheezy's kernel. The Stretch kernel is 4.9.
Re: Dell P2715Q Monitor does not wake up after sleep (off, standby, or suspend)
Gregory Gorsuch composed on 2017-08-07 11:11 (UTC-0400):

> I am having an issue with my Dell P2715Q monitor not waking up from
> standby, suspend, and off. This problem seems to only occur on
> Linux. I have tried Windows 10 and the Dell monitor seems to wake up
> o.k. so far. The monitor is connected through a display port.
> I would appreciate help solving this problem. I have created a
> script using the output from "*xscreensaver-command -watch*" to use
> *xrandr* to turn the screen on, but this is a bit of a band-aid
> since the screen becomes visible behind the screen saver while the
> Dell monitor is being turned back on by the script.

When was this purchased? What is its date of manufacture? Did it ever work right with any Linux distro or kernel? Any chance you can still return it to the vendor? I suspect this may well be a problem with no other solution.

https://www.newegg.com/Product/Product.aspx?Item=N82E16824260115 reviews indicate I'm not the only one with DisplayPort sleep-mode trouble with a Dell display. My Dell U2913WM behaves as expected, unless connected via DisplayPort. When connected to my Dell DisplayPort-equipped PCs via DisplayPort, it absolutely must be powered up before turning on the PC. Otherwise, it stays in sleep mode indefinitely. I managed to get Dell to replace it next day on my first phone call, but eventually Dell refused further help because my Dell /PCs/ were "out of support".
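The band-aid watcher script described above could be sketched roughly like this. The output name DP-1 is an assumption (check `xrandr` for the real name), and only the event-to-command mapping is exercised here; the wiring to the actual watcher is shown in a comment because it needs a running X session:

```shell
# Map an xscreensaver event word to the xrandr command that forces the
# output back on. DP-1 is a hypothetical output name; adjust to taste.
action_for() {
  case "$1" in
    UNBLANK) echo "xrandr --output DP-1 --auto" ;;
    *)       echo ":" ;;  # no-op for BLANK/LOCK and anything else
  esac
}
# Wiring (not run here): feed the watcher's events through the mapping.
#   xscreensaver-command -watch | while read -r ev _; do eval "$(action_for "$ev")"; done
action_for UNBLANK
```

This does not cure the underlying DisplayPort sleep problem, as the original post notes; it only shortens the blackout.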
Re: Help setting up xorg.conf on fedora 25 kde
Josh Temple composed on 2017-07-26 08:21 (UTC-0600):

> I'm trying to setup an xorg.conf file. This is because I want to setup all
> the buttons on my special mouse I use for my arthritis, so this is a big
> deal for me. If anyone has some time to spare to help me figure this out
> I'd appreciate it very much and would gladly pay someone to do so.
> I've tried a lot of stuff, but this gets me the furthest:
> -- Ctrl+Alt+F3
> -- $ sudo service gdm stop
> -- $ sudo Xorg :5 -configure

If KDE is the only or primary desktop you plan to use, sddm is preferred over gdm as display manager. If you're a long-time KDE user, I suggest you consider kdm instead. Even though it's no longer "supported", it remains available, and it maintains a larger feature set than sddm or lightdm, probably more than gdm (which I never use) as well.

> This gives me a segmentation fault. Here's a snippet of the output:

I booted a 64-bit F25/Plasma installation with an older Radeon gfxcard and got a segfault that looks like yours. So I rebooted to F24/Plasma, which succeeded in producing the attached xorg.conf. It contains mostly components that have nothing to do with any pointing device. The following subset of the attachment should be all you need as a starting point for pointing device customization:

Section "ServerLayout"
	Identifier     "X.org Configured"
	Screen      0  "Screen0" 0 0
	InputDevice    "Mouse0" "CorePointer"
	InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "InputDevice"
	Identifier  "Mouse0"
	Driver      "mouse"
	Option      "Protocol" "auto"
	Option      "Device" "/dev/input/mice"
	Option      "ZAxisMapping" "4 5 6 7"
EndSection

I've not needed customization of any pointing device in probably at least a decade, so am unsure whether there are any specific changes to suggest. I do wonder whether and why mouse configuration in desktop settings would not be easier than editing config files.

I also suggest that your file be named /etc/X11/xorg.conf.d/25-mouse.conf

Finally, I believe you should file a bugzilla.redhat.com bug about the segfaulting.

[attachment: xorg.conf]

Section "ServerLayout"
	Identifier     "X.org Configured"
	Screen      0  "Screen0" 0 0
	InputDevice    "Mouse0" "CorePointer"
	InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
	ModulePath   "/usr/lib64/xorg/modules"
	FontPath     "catalogue:/etc/X11/fontpath.d"
	FontPath     "built-ins"
EndSection

Section "Module"
	Load  "glx"
EndSection

Section "InputDevice"
	Identifier  "Keyboard0"
	Driver      "kbd"
EndSection

Section "InputDevice"
	Identifier  "Mouse0"
	Driver      "mouse"
	Option      "Protocol" "auto"
	Option      "Device" "/dev/input/mice"
	Option      "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
	Identifier   "Monitor0"
	VendorName   "Monitor Vendor"
	ModelName    "Monitor Model"
EndSection

Section "Device"
	### Available Driver options are:-
	### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
	### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
	### <percent>: "<f>%"
	### [arg]: arg optional
	#Option     "SWcursor"            # [<bool>]
	#Option     "kmsdev"              # <str>
	#Option     "ShadowFB"            # [<bool>]
	#Option     "AccelMethod"         # <str>
	#Option     "PageFlip"            # [<bool>]
	#Option     "ZaphodHeads"         # <str>
	Identifier  "Card0"
	Driver      "modesetting"
	BusID       "PCI:1:0:0"
EndSection

Section "Screen"
	Identifier "Screen0"
	Device     "Card0"
	Monitor    "Monitor0"
	SubSection "Display"
		Viewport   0 0
		Depth     1
	EndSubSection
	SubSection "Display"
		Viewport   0 0
		Depth     4
	EndSubSection
	SubSection "Display"
		Viewport   0 0
		Depth     8
	EndSubSection
	SubSection "Display"
		Viewport   0 0
		Depth     15
	EndSubSection
	SubSection "Display"
		Viewport   0 0
		Depth     16
	EndSubSection
	SubSection "Display"
		Viewport   0 0
		Depth     24
	EndSubSection
EndSection
Re: Installation problem
Mike Jonas composed on 2017-07-05 06:11 (UTC+1000):

> Felix Miata wrote:
>> Mike Jonas composed on 2017-07-04 09:00 (UTC+1000):
>>> Acer Aspire S 13 ... Mint 18.1 ...
...
>> "Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.4.0-53-generic
>> root=UUID=5cba3833-f404-4f25-b775-2f750ad191b6 ro recovery nomodeset"
>> is an emergency/recovery cmdline. Whatever you did to cause use of nomodeset
>> will render the Intel Xorg and the modesetting Xorg video drivers unusable,
>> either of which is prerequisite to a usable modern X server, hence the
>> "X server is now disabled..." message you get.

> Thanks for replying. "m/c" is just "machine" (that's what we called a
> computer 20+ years ago). The problem has now gone away, but I didn't do
> anything to make it go away - which makes me nervous that it could
> return. I haven't tried your suggestions re 'xorg.conf' and
> 'Ctrl-Alt-F2' but I'll try them if restart fails again. - Mike.

I have a suspicion what happened that, because I never use Mint's bootloader, I have no convenient way to confirm. What I think happened is: the bootloader was originally configured to remember the last selection, and after the first successful boot you inadvertently selected a recovery-mode bootloader entry, which was remembered until you made a different selection some time later. Recovery mode disables modesetting, which in turn kills Xorg. If this was the case, you need not be nervous about it happening again. You could test by intentionally selecting a recovery-mode bootloader menu entry.
Re: Installation problem
Mike Jonas composed on 2017-07-04 09:00 (UTC+1000):
> Acer Aspire S 13 ... Mint 18.1 ...

What is "M/C"? I don't see any such reference on https://www.cnet.com/products/acer-aspire-s-13/specs/ and don't remember seeing it refer to anything other than Midnight Commander on any version of Linux.

xorg -configure is an anachronism. It almost never fixes anything, and IME it more often creates additional impediments to finding solutions. Deleting or renaming the xorg.conf file it created should precede further attempts to diagnose.

When you are looking at a black screen and key in Ctrl-Alt-F2, does the screen un-black to a login prompt?

"Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.4.0-53-generic root=UUID=5cba3833-f404-4f25-b775-2f750ad191b6 ro recovery nomodeset" is an emergency/recovery cmdline. Whatever you did to cause use of nomodeset will prevent the Intel and modesetting Xorg video drivers from loading, either of which is prerequisite to a usable modern X server, hence the "X server is now disabled..." message you get. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
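For anyone wanting to rule this in or out quickly: the recovery entry's telltale is nomodeset on the kernel command line. A minimal check, shown here against a sample string echoing the cmdline quoted above rather than live /proc/cmdline:

```shell
# Check a kernel command line for nomodeset, which disables KMS and with it
# the intel/modesetting Xorg drivers. On a live system substitute
# "$(cat /proc/cmdline)" for the sample string below.
cmdline='BOOT_IMAGE=/boot/vmlinuz-4.4.0-53-generic ro recovery nomodeset'
if printf '%s\n' "$cmdline" | grep -qw nomodeset; then
  echo "KMS disabled (nomodeset present)"
else
  echo "KMS enabled"
fi
```

If the first branch fires after a normal boot, the bootloader is still feeding the recovery entry.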
Re: X Server Configuration Error
Anthony Blaser composed on 2017-06-12 03:52 (UTC+0200):
> After having launched as root the command "Xorg -configure", I got the
> following Fatal Server Error: "Could not create lock file in
> /tmp/.tX0-lock".
> Knowing the root partition and the whole file system are absolutely
> clean, as well as that the directory /tmp is currently accessible and
> clean ... I have really no idea where to put my hands to solve this issue!

What problem do you hope "Xorg -configure" would solve??? "Xorg -configure" is an anachronism that only very rarely serves any useful purpose. When it succeeds in writing a file, it's loaded with bits that Xorg gets right automatically every time, which obfuscate the few bits that could possibly be of any real use in solving a real problem. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
confused by xrandr man page example
"Enables panning on a 1600x768 desktop while displaying 1024x768 mode on an output called VGA:
xrandr --fb 1600x768 --output VGA --mode 1024x768 --panning 1600x0"

I get the same visible area and mouse access whether --panning is 1600x768 or 1600x0, while --panning 1600x384 seems to constrain access to the leftmost 800 pixels. It seems obvious why x is 1600, but y=0 seems inconsistent with the specification of x. Are 0 and 768 equally correct values for y? -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
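One way to read the man page example: a 0 in either panning dimension means "don't pan on that axis", so the server falls back to the mode dimension there, which would make 1600x0 and 1600x768 behave identically on a 768-tall mode. A sketch of that defaulting rule (my reading of the docs, not xrandr's actual code):

```shell
# Derive the effective panning area for "--panning 1600x0" combined with
# "--mode 1024x768": a 0 dimension falls back to the mode's dimension.
pan='1600x0'
mode='1024x768'
pan_w=${pan%x*}
pan_h=${pan#*x}
mode_h=${mode#*x}
# defaulting rule: zero height -> use the mode height
[ "$pan_h" -eq 0 ] && pan_h=$mode_h
echo "effective panning area: ${pan_w}x${pan_h}"
```

Under that reading, 0 and 768 are indeed interchangeable for y in this example.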
Re: xrandr - Multiple monitors, one rotated, mouse can disappear into non-desktop space
Ryan Felder composed on 2017-05-02 12:10 (UTC-0500): ... > I cannot seem to find any xrandr setting to address this. I am running > Ubuntu 16.04, Elementary/Pantheon desktop environment. My xrandr output is > below. Is there anything I can do to fix this? > > Screen 0: minimum 8 x 8, current 3125 x 1920, maximum 32767 x 32767 > DP1 disconnected (normal left inverted right x axis y axis) > DP2 disconnected (normal left inverted right x axis y axis) > HDMI1 connected primary 1200x1920+0+0 left (normal left inverted right x ... > HDMI2 connected 1920x1200+1205+538 (normal left inverted right x axis y > axis) 520mm x 320mm > VGA1 disconnected (normal left inverted right x axis y axis) > VIRTUAL1 disconnected (normal left inverted right x axis y axis) How has rotation of HDMI1 been set up? Maybe the problem is caused by the 5mm gap between the two displays (or the tool that configured the rotation), and eliminating the gap or changing the tool would solve the problem? -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: triple head: xrandr commands ignored
Alex Deucher composed on 2017-04-28 15:15 (UTC-0400):
> Felix Miata wrote:
> All evergreen cards only have 2 PLLs so that means they can only
> support two independent clocks. If you want to use more than two
> displays, you'll need to use displayport or for non-displayport, at
> least two of the displays need to be using the exact same display
> timing.

Thanks! Is PLL count information readily available in Xorg.0.log, gfxinfo, dmesg or elsewhere? Is this GT218 similarly limited? http://fm.no-ip.com/Tmp/Linux/Xorg/hwinfo-gfx-geforce710-gt218-fi965.txt -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/ ___ xorg-driver-ati mailing list xorg-driver-ati@lists.x.org https://lists.x.org/mailman/listinfo/xorg-driver-ati
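When a card runs out of PLLs, the kernel is more forthcoming than Xorg.0.log: the radeon driver logs PPLL allocation failures to dmesg, as quoted elsewhere in this thread. A quick way to count them, run here against sample lines rather than live dmesg output:

```shell
# Count radeon PPLL allocation failures. The sample lines mirror the
# errors quoted in this thread; on a live system pipe `dmesg` in instead.
sample='[  968.781403] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 1996.214193] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL'
printf '%s\n' "$sample" | grep -c 'unable to allocate a PPLL'
```

A nonzero count while a third display stays dark points straight at the 2-PLL limit rather than at Xorg configuration.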
Re: triple head: xrandr commands ignored
Felix Miata composed on 2017-04-28 05:22 (UTC-0400):
> Felix Miata composed on 2017-04-27 06:35 (UTC-0400):
>> openSUSE Tumbleweed
>> kernel 4.10.10
>> server 1.19.3
>> ATI HD5450 PCIe gfxcard (Cedar)
...
When I use this startup script:

xrandr --output DVI-0 --primary --mode 1920x1200 --left-of HDMI-0
xrandr --output HDMI-0 --mode 1920x1080
xrandr --output VGA-0 --mode 1680x1050 --above DVI-0
xrandr --dpi 108

things improve dramatically. 1920x1080 appears right of 1920x1200, and mouse can reach them both, but 1680x1050 remains asleep. Xrandr and Xorg.0.log combined: http://fm.no-ip.com/Tmp/Linux/Xorg/ATI3/xorg.0.log-tw-fi965-hd5450-atiDrv-viz-e

Is this low-budget Sapphire ATI Cedar gfxcard not supposed to be able to produce on all three outputs at once? -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: triple head: xrandr commands ignored
Felix Miata composed on 2017-04-27 06:35 (UTC-0400):
> openSUSE Tumbleweed
> kernel 4.10.10
> server 1.19.3
> ATI HD5450 PCIe gfxcard (Cedar)
> During BIOS display, 1920x1080 Dell is blank, the other two in 80x25. When KMS
> kicks in the framebuffers sometimes only two of the three light up, and in
> 1680x1050. Other times all three light up in 1680x1050.
> In Xorg, instead of two beside and one above, all three are mirrors of the lower
> of the three native resolutions:
> left Dell 1920x1080 on HDMI-1
> right Dell 1920x1200 on DVI-I-1
> above Lenovo 1680x1050 on VGA-1
> All three come out 1680x1050.
> Attached Xorg.0.log has these EEs:
> [ 262.280] (EE) Failed to load module "ati" (module does not exist, 0)
> [ 262.872] (EE) modeset(0): failed to set mode: Invalid argument
> [ 263.155] (EE) modeset(0): failed to set mode: Invalid argument
> [ 307.296] (EE) modeset(0): failed to set mode: Invalid argument
> [ 307.620] (EE) modeset(0): failed to set mode: Invalid argument
> Where's the invalid argument coming from?
> Why aren't the specified modes 1920x1080 and 1920x1200 applied to the displays
> that support them?
> On restart without closing the Konsole sessions, they reopen in the unreachable
> portion of the vertically extended desktop, where I can't move them down without
> knowing the magic keys that allow the keyboard to make the moves. The mouse is
> constrained to the 1680x1050 each monitor actually displays.
> Attachment is full Xorg.0.log plus xrandr, lspci and inxi output, plus the
> xrandr script that should be configuring the layout and specifying the three
> different native display modes.

Same bad things happen using the xf86-video-ati driver instead of modesetting.
Instead of attaching and waiting on moderation delay due to over-limit email size, the info and Xorg.0.log are these files: http://fm.no-ip.com/Tmp/Linux/Xorg/ATI3/xrandr-tw-fi965-hd5450-atiDrv.txt http://fm.no-ip.com/Tmp/Linux/Xorg/ATI3/xorg.0.log-tw-fi965-hd5450-atiDrv When I move the HDMI cable from the 1920x1080 Dell to a 1920x1080 Vizio and reboot, behavior changes. Instead of 3 displays each running 1680x1050, the 1680x1050 display sleeps, the 1920x1080 Vizio is 1920x1080 containing (bottom) panel and auto-opened Konsole, the 1920x1200 Dell displays the desktop background but nothing else, the mouse pointer is restricted to the 1920x1080, and xrandr -q reports the modes the xrandr startup script expects (3 displays running in their respective native modes) but with a total size of 5760x2250. http://fm.no-ip.com/Tmp/Linux/Xorg/ATI3/xrandr-tw-fi965-hd5450-atiDrv-Vizio.txt http://fm.no-ip.com/Tmp/Linux/Xorg/ATI3/xorg.0.log-tw-fi965-hd5450-atiDrv-vizio -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
triple head: xrandr commands ignored
:36:crtc-0]
[  968.781403] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[  968.781416] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:38:crtc-1]
[ 1525.824074] blk_update_request: I/O error, dev fd0, sector 0
[ 1525.824077] floppy: error -5 while reading block 0
[ 1996.214193] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 1996.214208] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:40:crtc-2]
[ 1996.247847] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 1996.247862] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:40:crtc-2]
[ 2003.473806] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 2003.473820] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:40:crtc-2]
[ 2049.082152] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 2049.082167] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:40:crtc-2]
[ 2050.715773] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 2050.715786] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:36:crtc-0]
[ 2050.995998] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 2050.996221] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:38:crtc-1]
[ 2089.567973] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 2089.567986] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:36:crtc-0]
[ 2089.895291] [drm:atombios_crtc_mode_fixup [radeon]] *ERROR* unable to allocate a PPLL
[ 2089.895306] [drm:drm_crtc_helper_set_config [drm_kms_helper]] *ERROR* failed to set mode on [CRTC:38:crtc-1]
-- "The wise are known for their understanding, and pleasant
words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/

[ 262.250] X.Org X Server 1.19.3 Release Date: 2017-03-15
[ 262.254] X Protocol Version 11, Revision 0
[ 262.255] Build Operating System: openSUSE SUSE LINUX
[ 262.257] Current Operating System: Linux fi965 4.10.10-1-default #1 SMP PREEMPT Wed Apr 12 11:18:29 UTC 2017 (a78ebd0) x86_64
[ 262.257] Kernel command line: root=LABEL=sTWp10sv5 ipv6.disable=1 net.ifnames=0 noresume drm.drm_debug=1 drm_debug=1 3
[ 262.259] Build Date: 22 March 2017 02:47:40AM
[ 262.261]
[ 262.262] Current version of pixman: 0.34.0
[ 262.265] 	Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
[ 262.265] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 262.270] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Apr 27 05:36:22 2017
[ 262.271] (==) Using config directory: "/etc/X11/xorg.conf.d"
[ 262.273] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[ 262.273] (==) No Layout section. Using the first Screen section.
[ 262.273] (==) No screen section available. Using defaults.
[ 262.273] (**) |-->Screen "Default Screen Section" (0)
[ 262.273] (**) |   |-->Monitor "<default monitor>"
[ 262.273] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration.
[ 262.273] (**) Option "DontZap" "off"
[ 262.273] (**) Option "ZapWarning" "off"
[ 262.273] (**) Option "AllowMouseOpenFail" "on"
[ 262.273] (**) Option "BlankTime" "120"
[ 262.273] (==) Automatically adding devices
[ 262.273] (==) Automatically enabling devices
[ 262.273] (==) Automatically adding GPU devices
[ 262.273] (==) Max clients allowed: 256, resource mask: 0x1f
[ 262.273] (WW) The directory "/usr/share/fonts/misc/sgi" does not exist.
[ 262.273] 	Entry deleted from font path.
[ 262.273] (==) FontPath set to: /usr/share/fonts/misc:unscaled, /usr/share/fonts/Type1/, /usr/share/fonts/100dpi:unscaled, /usr/share/fonts/75dpi:unscaled, /usr/share/fonts/ghostscript/, /usr/share/fonts/cyrillic:unscaled, /usr/share/fonts/truetype/, built-ins
[ 262.273] (==) ModulePath set to "/usr/lib64/xorg/modules"
[ 262.273] (**) Extension "Composite" is disabled
[ 262.273] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices.
[ 262.273] (II) Loader magic: 0x825d20
[ 262.273] (II) Module ABI versions:
[ 262.273]
Re: Video mode instability on Inspiron 6400 with X1400 and WSXGA+ display
Boris Gjenero composed on 2017-04-05 17:42 (UTC-0400):
> My Dell Inspiron 6400 has an ATI X1400 video card and 1680x1050 WSXGA+ display

I suspect your observations may be characteristic of the whole X1300-X1950 series of Radeon cards, at least with the FOSS Radeon driver. My X1300 isn't as bad as you describe, but it does produce an annoying and powerful flicker simply from running the xrandr command. I simply haven't observed it producing persisting corruption or instability. When I connect it to a CRT instead of a 1680x1050 LCD, any mode switch produces a loud snap from the CRT that accompanies no other gfxchip's mode switches.

I've been looking on dell.com for an XP driver for the X1300, but no luck finding one. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: seg fault in Fedora 25 with all desktop managers marco error 4 in libX11-xcb.so.1.0.0
Thomas Lübking composed on 2017-04-04 22:16 (UTC+0200):
>> Robert Kudyba wrote:
>>> Rebooted. Still seg fault. I created a Bugzilla entry at
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1438059
> Just a hunch but you're loading the (builtin) modesetting driver,

I don't think he could be. Doesn't it work only with gfxchips newer than around 10 years old? His gfxchip looks to me like it could be up to 17 years old: https://lists.freedesktop.org/archives/xorg/2017-April/058693.html

Robert Kudyba composed on 2017-04-02 22:54 (UTC-0400): ...
>> lspci -v -s 09:0d.0
>> 09:0d.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
>> RV100 [Radeon 7000 / Radeon VE] (prog-if 00 [VGA controller])
>> Subsystem: Dell PowerEdge 1850 Embedded Radeon 7000/VE...

http://www.dell.com/downloads/ap/topics/servers/1850_specs.pdf says it's an embedded chip.

> vesa and fbdev and they're apparently all in the process of probing the HW
> and then you run into a pciaccess segfault.
> => try uninstalling xf86-video-vesa and xf86-video-fbdevhw (so you'll only
> use the modesetting driver since apparently modesetting itself seems to work
> and you don't have xf86-video-ati installed) and see whether that makes any
> difference.

The ati/radeon driver works nicely enough for my rv200 on F25 kernel 4.10.6 and server 1.19.2. I wonder if noapic or nolapic would help?

> Also see eg. https://bugs.freedesktop.org/show_bug.cgi?id=81678 and google
> for pci_device_vgaarb_init - this seems a quite common issue...

That bug is about decade newer AMD Island chip(s). If he has an empty expansion slot, he may need to put a newer video card in it to keep the old server online with Xorg. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
Re: seg fault in Fedora 25 with all desktop managers marco error 4 in libX11-xcb.so.1.0.0
Robert Kudyba composed on 2017-04-03 09:53 (UTC-0400):
> Felix Miata wrote:
>> Give 'agp=off' on kernel cmdline a try:
> GRUB_CMDLINE_LINUX="rd.md=0 rd.dm=0 SYSFONT=True KEYTABLE=us rd.lvm.lv=vg_curie/lv_swap rd.luks=0 rd.lvm.lv=vg_curie/lv_root LANG=en_US.UTF-8 rhgb quiet elevator=noop zswap.enabled=1 transparent_hugepage=madvise iomem=relaxed agp=off"
> I ran grub2-mkconfig -o /boot/grub2/grub.cfg, rebooted, and no matter what I
> try, sddm, xdm, kdm, all seg fault

You could try removing rhgb and quiet, but I think your best bets are either filing a Fedora bug, or asking elsewhere, either on xorg-driver-...@lists.x.org or http://www.forums.fedoraforum.org/forumdisplay.php?f=8 or one of the @lists.fedoraproject.org mailing lists. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: seg fault in Fedora 25 with all desktop managers marco error 4 in libX11-xcb.so.1.0.0
Robert Kudyba composed on 2017-04-02 22:54 (UTC-0400): ... lspci -v -s 09:0d.0 09:0d.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV100 [Radeon 7000 / Radeon VE] (prog-if 00 [VGA controller]) Subsystem: Dell PowerEdge 1850 Embedded Radeon 7000/VE... Give 'agp=off' on kernel cmdline a try: https://lists.x.org/archives/xorg-driver-ati/2017-March/029909.html -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: seg fault in Fedora 25 with all desktop managers marco error 4 in libX11-xcb.so.1.0.0
Robert Kudyba composed on 2017-03-31 23:09 (UTC-0400): ... > https://bugs.freedesktop.org/show_bug.cgi?id=100520 ... If you mentioned which gfxchip or CPU you have in your OP or the bug, I missed it. Is it possible your gfxchip makes the following applicable? * From Linux 4.8, several changes have been made in the kernel configuration to 'harden' the system, i.e. to mitigate security bugs. Some changes may cause legitimate applications to fail, and can be reverted by run-time configuration: - On most architectures, the /dev/mem device can no longer be used to access devices that also have a kernel driver. This breaks dosemu and some old user-space graphics drivers. To allow this, set the kernel parameter: iomem=relaxed -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
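If the hardened /dev/mem restriction is what's biting, the kernel says so at the moment of denial, so dmesg is the place to look before reaching for iomem=relaxed. The message wording below is an approximation from memory and may differ between kernel versions; the check is run against a sample line rather than live dmesg:

```shell
# Look for the kernel's /dev/mem denial message. The sample line is an
# approximation of the real message; on a live system grep `dmesg` output.
sample='Program Xorg tried to access /dev/mem between 10000->20000.'
if printf '%s\n' "$sample" | grep -q 'tried to access /dev/mem'; then
  echo "blocked /dev/mem access found; iomem=relaxed may apply"
fi
```

No such message in dmesg would suggest the segfault has another cause entirely.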
[AGP] rv200 and rv250 unusable absent 'agp=off'
AMD Sempron(tm) Processor 3000+. Fresh installation on resurrected MSI K8MM-V MS-7142 motherboard. https://www.msi.com/Motherboard/support/K8MMV.html openSUSE Tumbleweed, kernel 4.10.4, server 1.19.3, ati driver 7.9.0.

Mouse cursor is about all that can be counted on. Sometimes an app like Xterm can be started, but text is smeared if it appears at all. As likely as not, the IceWM desktop mostly paints, and nothing more is possible, sometimes to the point that even remote login cannot force reboot or kill X.

The following is the tail of dmesg with drm.drm_debug=1 on cmdline (without it, no clues show up in dmesg or Xorg.0.log) using RV250/M9 GL [Mobility FireGL 9000/Radeon 9000] (similar results if I swap the RV200 for the RV250; journal looks about the same):

[   89.691515] Key type id_legacy registered
[  319.776019] radeon 0000:01:00.0: ring 0 stalled for more than 10240msec
[  319.776024] radeon 0000:01:00.0: GPU lockup (current fence id 0x0188 last fence id 0x0189 on ring 0)
[  319.942304] radeon: wait for empty RBBM fifo failed ! Bad things might happen.
[  320.104030] Failed to wait GUI idle while programming pipes. Bad things might happen.
[  320.105042] radeon 0000:01:00.0: Saved 31 dwords of commands on ring 0.
[  320.105051] radeon 0000:01:00.0: (r100_asic_reset:2568) RBBM_STATUS=0x8031C100
[  320.607052] radeon 0000:01:00.0: (r100_asic_reset:2589) RBBM_STATUS=0x80010140
[  321.104586] radeon 0000:01:00.0: (r100_asic_reset:2597) RBBM_STATUS=0x0140
[  321.104607] radeon 0000:01:00.0: GPU reset succeed
[  321.104609] radeon 0000:01:00.0: GPU reset succeeded, trying to resume
[  321.126699] radeon 0000:01:00.0: WB disabled
[  321.126706] radeon 0000:01:00.0: fence driver on ring 0 use gpu addr 0xd000 and cpu addr 0xbfe840381000
[  321.126766] [drm] radeon: ring at 0xD0001000
[  321.126787] [drm] ring test succeeded in 1 usecs
[  322.208113] [drm:r100_ib_test [radeon]] *ERROR* radeon: fence wait timed out.
[  322.208163] [drm:radeon_ib_ring_tests [radeon]] *ERROR* radeon: failed testing IB on GFX ring (-110).
Log: http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-k8mmv-201703260430 Is there something, other than the illogical 'agp=off' cmdline option, that can be done to make these cards usable? Time to file a new bug? I don't see any that match in a Driver / ATI, Drivers/DRI/r200 search. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: inexplicable fallback mode used when preferred mode 2560x1080 not supported by connection
Adam Jackson composed on 2017-02-13 11:27 (UTC-0500):
> On Sun, 2017-02-12 at 05:10 -0500, Felix Miata wrote:
>> http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-big31-ostw-d2913wm.txt is
>> Xorg.0.log plus output from inxi -c0 -G, hwinfo --gfxcard and xrandr from a
>> GT210[1] on host big31 booted to openSUSE Tumbleweed and modesetting driver,
>> which falls to 1400x1050 using an HDMI cable. Using instead nouveau, fall is
>> to 1152x864, same as same PC booted to openSUSE 42.1 using HDMI and
>> modesetting driver, and same PC booted to Fedora 25 using HDMI and nouveau
>> driver.
> Can you add Option "ModeDebug" "true" to your Device section in xorg.conf
> and give the log from that?

<http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-big31-ostw-d2913wm-ajax01-modedebugtrue>

http://fm.no-ip.com/Tmp/Linux/Xorg/get-edid-big41-d2913wm-hdmi-deb9.txt is the output from 'get-edid | parse-edid' running TDE on Sid fallen back to 1152x864 mode using modesetting driver and HDMI on Intel gfx host big41, where mode details for 0 and 17 are conspicuously absent.

http://fm.no-ip.com/Tmp/Linux/Xorg/get-edid-big41-d2913wm-vga-deb9.txt is the output from 'get-edid | parse-edid' running TDE on Sid running native mode 2560x1080 using modesetting driver and VGA on Intel gfx host big41, where the extension block is conspicuously absent, and output sharpness is very poor compared to the same display running 2560x1080 via either DVI or DisplayPort inputs. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
inexplicable fallback mode used when preferred mode 2560x1080 not supported by connection
Since acquiring my 2560x1080 U2913WM Dell for use among several multiboot PCs, with AMD/ATI, Intel and NVidia gfxchips represented, I'm frequently annoyed by lack of support for native mode by the cable actually connected, not so much by the lack of support itself as by the mode fallen back to. In most cases, the fallback selected is either 1400x1050@75 or 1152x864@75, while if using a VGA connection, 2560x1080 is supported for every PC and gfxchip I've tried so far. http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-big31-ostw-d2913wm.txt is Xorg.0.log plus output from inxi -c0 -G, hwinfo --gfxcard and xrandr from a GT210[1] on host big31 booted to openSUSE Tumbleweed and modesetting driver, which falls to 1400x1050 using an HDMI cable. Using instead nouveau, fall is to 1152x864, same as same PC booted to openSUSE 42.1 using HDMI and modesetting driver, and same PC booted to Fedora 25 using HDMI and nouveau driver. Obviously the display is capable of at least 1920x1080 via HDMI, because Intel gfx host big41 booted to Debian does it: http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.0.log-big41-deb0-d2913wm.txt OTOH, this installation when I try to use xrandr in a startup script to get 2560x1080 via HDMI also produces instead 1152x864. :-( Why isn't the fallback only to 1920x1080 instead of all the way back to a much less appropriate and anachronistic 4:3 mode? How can I determine whether the fault is driver(s), Xorg, display or gfxchip? Is an Xorg bug filing at freedesktop.org indicated here? Could more elaborate xrandr on startup or xorg.conf produce 1920x1080 in these cases? [1] http://www.evga.com/products/product.aspx?pn=01g-p3-1312-lr -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
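To make the complaint concrete: among the modes typically left after validation on these connections, a fallback that merely preserved the panel's shape would already land on 1920x1080 rather than a 4:3 mode. A toy version of that saner choice follows; the mode list is illustrative only, not taken from any actual EDID:

```shell
# From a list of validated modes, pick the largest one with a 16:9 aspect
# ratio -- what a friendlier fallback policy might do. List is illustrative.
modes='1400x1050 1152x864 1920x1080 1280x720'
best=''
best_area=0
for m in $modes; do
  w=${m%x*}
  h=${m#*x}
  # keep only 16:9 modes (w/h == 16/9  <=>  9w == 16h)
  [ $((w * 9)) -ne $((h * 16)) ] && continue
  area=$((w * h))
  if [ "$area" -gt "$best_area" ]; then
    best_area=$area
    best=$m
  fi
done
echo "preferred fallback: $best"
```

Whether the real fallback logic lives in the driver or the server is exactly the open question in the post above.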
Re: "Fatal server error: no screens found"
Blaze Gottschalk composed on 2017-02-06 22:55 (UTC-0600):
> Hello, running Ubuntu on a Toshiba Chromebook 2. Received an error which
> referred me to your website. Here's a log of the crash, as I'm unable to
> start the OS at this time.

Your log shows a kernel module that doesn't seem to agree with your kernel version, making me suspect a sources problem leading to a driver version mismatch preventing the Xorg driver for your Intel gfxchip from loading. Provide this further information:
1-content of /etc/apt/sources.list
2-output from 'inxi -c0 -v6 | head -n20'
3-something to indicate which iteration of Chromebook 2 you have, and when it was made

It may be that your OS version predates your Chromebook model, so you need to choose a newer OS version to install. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: How can I delete X11 files stuck in user Trash?
Darrell Stall composed on 2016-12-07 12:47 (UTC-0600):
> X11 files are stuck in my user Trash and cannot be emptied

This is one of those operations that is facilitated by using an OFM[1] instead of cmdline utilities.
1-log out of X
2-login on a vtty
3-mc
4-navigate into the .Trash folder
5-highlight everything you want to delete (* and/or + keys)
6-F8
Done. If you manage to get permission denied this time, do it over again as root. If you get command not found trying to execute mc:
A-complain to your distro they forgot to put it in the base install
B-install mc (it's in every distribution's base repos)

[1] http://www.softpanorama.org/OFM/Paradigm/ -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
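For completeness, the same cleanup can be done non-interactively once logged in on the vtty. The snippet below works on a scratch copy so it is safe to run as-is; on a real system you would point TRASH at the actual Trash directory, whose location is an assumption here (freedesktop.org-style ~/.local/share/Trash; older setups may use ~/.Trash), so verify yours first:

```shell
# Empty a freedesktop.org-style user Trash non-interactively.
# Demo uses a scratch directory; on a real system set TRASH to
# ~/.local/share/Trash (path is an assumption -- verify yours first),
# and repeat as root only if permissions demand it.
TRASH=$(mktemp -d)/Trash            # stand-in for $HOME/.local/share/Trash
mkdir -p "$TRASH/files" "$TRASH/info"
touch "$TRASH/files/stuck-file"     # simulate a stuck entry
rm -rf -- "$TRASH/files" "$TRASH/info"
mkdir -p -- "$TRASH/files" "$TRASH/info"   # recreate the empty skeleton
ls "$TRASH/files" | wc -l           # 0 entries remain
```

The files/ and info/ pair is the freedesktop.org trash layout; deleting one without the other leaves orphaned metadata behind.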
Re: Problem with xorg-server-1.14.5
Anteja Vuk Macek composed on 2016-10-05 14:30 (UTC+0200): I can't use Xorg 1.13.x because for make Fedora 18 MR2 I need to use Xorg 1.14.5 ... For now I remove Xorg 1.13.x, and install Xorg 1.14.5, before I use both of Xorg ( 1.13.3-3 and 1.14.5 ). With new Xorg I can't start command startx. Mageia 4.1 is an rpm distro similar to Fedora and shipped with server 1.14.5. Maybe you should try it if 1.14.5 is truly a must. openSUSE 13.1 is an rpm distro that shipped with 1.14.3, and remains in "Evergreen" support for the next few months. PCLinuxOS 2014.12, based on Mageia, shipped with 1.14.6. All these distros stand a chance of being able to use an ISP driver rpm made for an older Fedora (18) release. I'm using FC/L from an rpm built for Fedora 10 on an over 3 year newer distro openSUSE 42.1. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: Need help understanding X server freeze
jeetu.gol...@gmail.com composed on 2016-10-06 00:31 (UTC+0530):
> Can you think of something else we can try so we can further pinpoint
> where the problem lies or any thoughts at all on the above?

Try a substantially different environment, one without GDM, and with as little of GTK as possible (Qt-based, unlike XFCE4), e.g. TDE: https://wiki.trinitydesktop.org/DebianInstall TDE is the most stable DE I'm aware of. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
Re: Need help understanding X server freeze
jeetu.gol...@gmail.com composed on 2016-10-04 10:14 (UTC+0530):

>> Maybe try the modesetting driver at least on the Intel/Sid installation.

> Unfortunately when I use the default modesetting driver my X consistently
> crashes simply by starting gnome-terminal and doing something simple like
> traversing its menus. This causes the entire X server to crash and restart
> back to the login. I read others having this problem with Debian Unstable
> and the recommendation was to move back to the Intel driver.

> Is there some way for me to gather more debugging information from the
> server so that when ssh over my tunnel causes a freeze I can figure out
> what has happened?

I'm no expert on remote troubles. I suggest to focus on Intel/Sid by reading https://01.org/linuxgraphics/documentation/how-report-bugs and https://www.x.org/wiki/Development/Documentation/ServerDebugging/ if you haven't already, and providing all appropriate information on the intel-...@lists.freedesktop.org mailing list. If someone there says to report elsewhere, the response should come with more specific info as to what to provide, and more importantly, to which list.

Surely you'll need to provide basic info, such as motherboard model (to know which generation chipset) and output from 'lspci | grep VGA', plus hwinfo and/or inxi, and an Xorg log.

Maybe first try a different login manager and/or DE. Gnome is surely the first or second most demanding of the X server of all DEs, so may be the root problem.
-- 
Felix Miata
Re: Inability to get system rescue graphical environment to boot on new Dell Inspiron laptop
Ken Burgett composed on 2016-09-27 14:59 (UTC-0700):

> I just purchased a Dell Inspiron 17" hdmi laptop, 64 bit, running Windows
> 10.

Which Inspiron model? Could it be too new for 1.17.4's Intel driver?

> I wish to configure the system as a dual boot, with Ubuntu 16.04 alongside
> Win 10. I downloaded the System Rescue iso and flashed it to a usb 2.0
> pendrive with Rufus. I got it to boot, but I can't get startx to start, it
> dumps out about 20 lines of stuff and then issues another command prompt.
> It looks like the X.org server didn't start.

You could try configuring Xorg to not use the Intel driver. It will fall back to Xorg 1.17.4's incorporated modesetting driver if the Intel is not available, or you can specify modesetting specifically if you know how to use xrandr on X startup or to configure your own xorg.conf. http://www.phoronix.com/scan.php?page=news_item=Ubuntu-Debian-Abandon-Intel-DDX suggests why to try the modesetting driver.

> In order to capture the failure, I took a photo with my phone camera, and
> have included it as an attachment to this email. I would really appreciate
> someone letting me know how I might proceed beyond this point. I have used
> System Rescue many times in the past, but never with this particular
> computer. Are there bios options I need to set?

lspci from that command prompt might be helpful if the model number doesn't produce adequate hardware specifications.

Knoppix is the granddaddy of live Linux. Will its latest release start an X session? How about the latest Sid, Tumbleweed or Rawhide installation media?
-- 
Felix Miata
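[Editor's sketch] Specifying modesetting in xorg.conf can be as small as a single Device section. A minimal example follows; the file name, path, and Identifier label are arbitrary choices for illustration, not anything mandated by the server:

```
# Hypothetical file: /etc/X11/xorg.conf.d/20-modesetting.conf
Section "Device"
    Identifier "gfx0"           # arbitrary label
    Driver     "modesetting"    # select the DDX built into the server
EndSection
```

With no Device section at all, the server instead runs its autodetection order, which is why simply removing the chip-specific driver package is often enough.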
Re: Fatal server error: No screens found.
Mark nash composed on 2016-08-28 00:00 (UTC-0400):

> I am a first year computer science student and would like to use my
> chromebook to do things. I decided to watch some YouTube videos and
> crouton my chromebook with Ubuntu.
> It seems like a common problem on the internet with no clear solution.
> The attached picture is a picture of my error message that comes up after
> being asked to put in a username and password, and entering sudo
> startunity.
> I looked on your website and found out that my error has something to do
> with a graphics card. Other than that, it was difficult to comprehend what
> I should do to correct the issue and be able to use Ubuntu.
> Here is some information:
> I am using a Poin2 Chromebook.
> I tried this whole process about seven times with other flavors of Linux
> like LXDE, AND XFCE.

Those "flavors" are desktop flavors, not distro flavors. Desktop flavors themselves usually aren't directly responsible for errors like you have encountered. Other distro flavors would be such as Linuxmint, Fedora, Mageia, openSUSE and Debian. Ubuntu itself is a Debian derivative, while Xubuntu and Kubuntu are Ubuntu releases with desktops other than Ubuntu's Unity default.

> If any more information is needed, please let me know.

More information is needed, but you can help yourself by ensuring the release you choose to try is newer than your Chromebook's design. Those messages are typical of hardware that's too new to be supported by an official distribution release. So, take a look at the distro release dates on http://distrowatch.com/ and see if any you tried that failed are new enough. If not, pick another that is, if you can find one, and try again. If you can't find one new enough to support your hardware, it means you either need to wait until one comes along that is new enough, or try to install a development version of something that does include the necessary support.

Devel versions mean unresolved bugs are expected, so installing one is not usually recommended for a Linux novice.

To see if we can help with what you already have installed, we need to see more information, such as output from the 'lspci' command as a start, and the content of either of the files /var/log/Xorg.0.log and /var/log/Xorg.1.log if either exists. (Newest versions have moved those logs elsewhere.) Very important is exactly which Ubuntu version you have installed now, which will be obvious to us *if* we can see either of those logs.

Places to start self help before accumulating more information to provide here, if you haven't seen them already, might be:
http://linux-rockchip.info/mw/index.php?title=Check_Linux_Framebuffer_Resolution
http://linux-rockchip.info/mw/index.php?title=Display
-- 
Felix Miata
Re: Latest Xorg from git breaks Mesa with VDPAU playback.
Boris composed on 2016-08-28 14:25 (UTC-0500):

> I am using the latest Xorg, Mesa from git and the Linux 4.7.2 kernel. I
> have an Nvidia NVA8 chipset [ion] with xorg set up to use the latest
> nouveau driver, and it also supports vdpau, and I have the firmware...

Have you considered trying another driver option? Anything as new as GT218 is liable to work as well with modesetting as with nouveau, if not better: http://www.phoronix.com/scan.php?page=news_item=Ubuntu-Debian-Abandon-Intel-DDX
-- 
Felix Miata
Re: display/GFX issue with Ubuntu 16.04 not seen in 14.04
Thomas Lübking composed on 2016-08-23 16:49 (UTC+0200):

> On Tue, Aug 23, 2016 at 06:53:13AM -0400, Jim Abernathy wrote:

>> ... I've tried this on different computers all with Intel GFX. same
>> results. ...

> Though this is very most likely a bug in the kernel and not in X11, check
> whether you're using the intel (likely) or the modesetting driver, then
> use the other one (uninstalling xf86-video-intel could be sufficient for
> this, but I never used ubuntu. Ask them.)

http://www.phoronix.com/scan.php?page=news_item=Ubuntu-Debian-Abandon-Intel-DDX seems a decent summary of the Intel vs. Modesetting driver situation. The Modesetting driver was moved from a standalone package into the server in 1.17.0, long before 16.04 with server 1.18.3 was released. So far, all my Intel gfx installations that work at all with the Modesetting driver work at least as well as they ever did with the Intel driver.

> Since it's also likely a bug in the kernel, check "dmesg | grep intel"
> before doing so. You might face the same issue as this guy:
> https://bbs.archlinux.org/viewtopic.php?id=216088
-- 
Felix Miata
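[Editor's sketch] A quick way to confirm which DDX actually loaded is to grep the server log for its LoadModule lines. The sketch below runs against two simulated log lines (so it needs no running X server); on a real system you would grep /var/log/Xorg.0.log (or the per-user log under ~/.local/share/xorg/ on newer servers) instead of the printf:

```shell
# Simulated Xorg.0.log excerpt; on a real system use:
#   grep 'LoadModule' /var/log/Xorg.0.log
printf '(II) LoadModule: "modesetting"\n(II) modeset(0): glamor X acceleration enabled\n' |
  grep -o 'LoadModule: "[a-z]*"'
```

If the output names "intel" rather than "modesetting", uninstalling xf86-video-intel as Thomas suggests should flip it on the next server start.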
Re: (WW) glamor requires at least 128 instructions (64 reported)
Dave Airlie composed on 2016-07-11 20:54 (UTC+1000):

> Felix Miata wrote:

>> More precisely:
>> (WW) glamor requires at least 128 instructions (64 reported)
>> (EE) modeset(0): Failed to initialize glamor at ScreenInit() time.
>> I've been seeing a lot of this over the past several months on various
>> hardware in both Rawhide, F24, various openSUSEs and Stretch, trying
>> intentionally to get the built-in modeset driver used instead of the
>> otherwise default intel or radeon driver. Instructions at what level? How
>> can one find out whether there is some bug causing it, or if the real
>> problem is a hardware limitation, or something else, maybe a broken
>> depends for a not installed package?

> This is fine, we don't want glamor to run on hardware that can't support
> it.

It would be finer to be able to compare lspci output to something that says yes or no for glamor on particular hardware, maybe a milestone like Cedar or Eaglelake if any such might be the case.
-- 
Felix Miata
(WW) glamor requires at least 128 instructions (64 reported)
More precisely:

(WW) glamor requires at least 128 instructions (64 reported)
(EE) modeset(0): Failed to initialize glamor at ScreenInit() time.

I've been seeing a lot of this over the past several months on various hardware in both Rawhide, F24, various openSUSEs and Stretch, trying intentionally to get the built-in modeset driver used instead of the otherwise default intel or radeon driver.

Instructions at what level? How can one find out whether there is some bug causing it, or if the real problem is a hardware limitation, or something else, maybe a broken depends for a not installed package?
-- 
Felix Miata
Re: wrong display size
Sebastian Gutzwiller composed on 2016-06-14 11:10 (UTC+0200):

> I have a LCD display (115 mm x 86 mm) with VGA resolution (640 x 480)
> connected over LVDS with an Intel Atom N450. After upgrading from Ubuntu
> 10.04 to 14.04 I only see the upper left detail of the whole screen (see
> attachment 'display_picture.jpg'). The Xorg log (see attachment
> 'Xubuntu.14.04.4.LTS.Xorg.0.log') reported a physical screen size of 270
> mm x 203 mm which is pretty much the size of the whole screen. Any
> suggestions?

Something to try (I don't have any Intel Atoms to test on):

1. login on a vtty
2. sudo apt-get purge xserver-xorg-video-intel
3. reboot (or restart the X server)

Reason: sometime after server 1.16.x, the generic modesetting driver was moved directly into the server itself. It's supposed to be competent for all non-ancient mainstream gfxchips, a substitute for chip-specific drivers. If it doesn't help, it's up to you whether to bother reinstalling the intel driver.
-- 
Felix Miata
Re: Flickering single display in multi-head XRandR setup
Gene Heskett composed on 2016-03-25 20:52 (UTC-0400):

>> Felix Miata wrote:
>> ... Looks like a 3-wire cord to me, and $14.99 in the current sale
>> catalog.

> By golly gee, I do believe you are correct, so they've "fixed" that
> problem. And 15 bucks is less than I paid for one 25 years ago when smd
> stuff was just coming on the scene. But I'm not on their spam list so I
> don't get the pdf version of the catalog...

MCM catalogs seem to show up in snail mail here every 3-6 weeks or so. I usually have at least 3 that haven't expired yet. I can't imagine how anyone who's ever ordered from them could not have catalogs around, unless they're being moved straight from mailbox to trash or recycle bin. :-p

Thanks for the tickle to make me go look at it again. :-)
-- 
Felix Miata
Re: Flickering single display in multi-head XRandR setup
Gene Heskett composed on 2016-03-25 15:55 (UTC-0400):

>> http://www.mcmelectronics.com/product/TENMA-21-8230-/21-8230 wouldn't do
>> it?

> Possibly, that looks a lot like the one I used (handles are identical)
> back in the later 90's but with different tips.

Looks like a 3-wire cord to me, and $14.99 in the current sale catalog.
-- 
Felix Miata
Re: setting primary in multihead xorg.conf causes disregard of DisplaySize
Thomas Lübking composed on 2016-02-10 09:49 (UTC+0100):

> Felix Miata wrote:

>> Sent too soon, and probably to the wrong place. I tried equivalent
>> xorg.conf files with Nvidia G84 and nouveau, and with ATI Cedar and
>> radeon. Neither have any such shortcoming, so I have to suspect this
>> should have gone to intel-gfx as a driver bug. Anyone here agree, or know
>> a config solution?

> tried
> Option "IgnoreEDID" "true"
> ?

NAICT, that was an NVidia proprietary driver exclusive (now deprecated?). Not so? I would think use of PreferredMode and/or DisplaySize would infer something equivalent. So, no, not tried. I don't know what section to try it in either. I'm also having a hard time guessing why it could help. EDID isn't taken into account by Xorg anyway WRT display density, by default always ignoring real size and ASSuming whatever's required to anachronistically make DPI 96. :-(

I already posted to intel-gfx: https://lists.freedesktop.org/archives/intel-gfx/2016-February/087333.html
-- 
Felix Miata
Re: is r128 supposed to be viable still with server 18.0 and 4.3.3 kernel?
Connor Behan composed on 2016-01-02 15:40 (UTC-0500):

> Felix Miata wrote:

>> I'm getting inexplicable web server errors trying to search for r128 bugs
>> on https://bugs.freedesktop.org/query.cgi advanced search, so don't see
>> if there's any bug on point.
>> I don't get why there's any reference to a DVI-0 output on an ancient
>> card that only has VGA output. Server 16.1/kernel 3.16.7 is OK, with no
>> sign of DVI-0 in its Xorg.0.log.
>> I don't see why no screens are being found, whether or not I have an
>> xorg.conf file. Logs with and without, on CRT and WLCD screens:
>> http://fm.no-ip.com/Tmp/Linux/Xorg/R128/xorg.0.log-ostw-r128-WOxconf-crt
>> http://fm.no-ip.com/Tmp/Linux/Xorg/R128/xorg.0.log-ostw-r128-WOxconf-wsxga
>> http://fm.no-ip.com/Tmp/Linux/Xorg/R128/xorg.0.log-ostw-r128-withxconf-wsxga
>> Anyone see what the problem might be, or know? Need I file a bug on
>> freedesktop? openSUSE?

> Bug 91358 is more or less about this.

Thank you! https://bugs.freedesktop.org/show_bug.cgi?id=91358

The R128 card in that bug's attached image is virtually identical to mine, which is also from a Dell (Optiplex GX240 tower), labeled Rage 128 Ultra on the backside. The white chip sticker has the same R128P 73108, and same stamped PN 109-73100-02. The differences are:

1: the four yellow chips in that image are black on mine;
2: where that image shows what look like two surface-mount polymer caps at lower right, mine has two chips like those that replace the four yellow ones;
3: the inked 56 in the upper missing RAM chip location on mine is S70.

> When the card worked on an older "server and kernel", that's probably
> because it was using a version of xf86-video-r128 less than 6.10.0. The
> change in 6.10.0 that seems to be causing a lot of grief is that the
> driver now gives up when it fails to detect a monitor. Older versions just
> assumed that there was a monitor plugged into a VGA port when this
> happened.
> Of the many cases for monitor detection, these three seem to work:
> * DVI cards.
> * Mobility cards.
> * VGA cards not marked as DFP capable on x86 platforms.
> So far I've met a PPC user with a non-DFP VGA card and he can't get
> monitor detection to work. Your situation seems to be that you have a VGA
> card that the original driver authors (correctly or incorrectly) marked as
> DFP capable. Therefore the driver assumes that it is DVI before it reads
> the BIOS connector table. And something is wrong with the BIOS connector
> tables or my understanding of them because no VGA section is being found.

In case it might be useful, hwinfo output on mine: http://fm.no-ip.com/Tmp/Linux/Xorg/R128/hwinfo-gfx-r128UTF.txt

> Because of all these cards doing undocumented things, it is probably best
> to go back to fallback code similar to what the old driver has. A patch I
> posted for this is https://bugs.freedesktop.org/attachment.cgi?id=117325.
> Let me know if you still get server errors and I can attach it.

Still get server errors after doing what? I'm not a programmer, so I don't build. I only install, and test, binary packages others build, if and when they do.
-- 
Felix Miata
___
xorg-driver-ati mailing list
xorg-driver-ati@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-driver-ati
is r128 supposed to be viable still with server 18.0 and 4.3.3 kernel?
I'm getting inexplicable web server errors trying to search for r128 bugs on https://bugs.freedesktop.org/query.cgi advanced search, so don't see if there's any bug on point.

I don't get why there's any reference to a DVI-0 output on an ancient card that only has VGA output. Server 16.1/kernel 3.16.7 is OK, with no sign of DVI-0 in its Xorg.0.log.

I don't see why no screens are being found, whether or not I have an xorg.conf file. Logs with and without, on CRT and WLCD screens:
http://fm.no-ip.com/Tmp/Linux/Xorg/R128/xorg.0.log-ostw-r128-WOxconf-crt
http://fm.no-ip.com/Tmp/Linux/Xorg/R128/xorg.0.log-ostw-r128-WOxconf-wsxga
http://fm.no-ip.com/Tmp/Linux/Xorg/R128/xorg.0.log-ostw-r128-withxconf-wsxga

Anyone see what the problem might be, or know? Need I file a bug on freedesktop? openSUSE?
-- 
Felix Miata
Re: New Monitor Weirdnesses or how do I get Xorg to pay attention to my xorg.conf file?
Robert Heller composed on 2015-12-03 16:00 (UTC-0500):

> I use a distro with long term support.

Not without a price. What you have is hardware technology that is more advanced than the foundation on which that support is built. A GeForce 8200 needs either the FOSS nouveau driver, which seems to be missing from CentOS 5, or the proprietary NVidia driver. VESA is a low technology fallback driver wholly incapable of properly supporting widescreen displays. AFAICT, there's no amount of xorg.conf or xrandr twiddling you can do to overcome the shortcomings of a fallback driver.

> *I* have better things to do than spend all of my time dealing with
> incompatible updates every few months.

You have 3 choices that I can see:

1 - upgrade software to the technology level of your hardware (nouveau driver, likely requiring a KMS kernel)
2 - backlevel your video hardware (either a supported gfxcard, or a supported 4:3 or 5:4 aspect display)
3 - suffer a standard aspect video mode on your widescreen display
-- 
Felix Miata
Re: unset Xft.dpi how?
Thomas Lübking composed on 2015-10-12 08:46 (UTC+0200):

Thanks again!

>> I tried this ~/.xinitrc first as a test:
>> #!/usr/bin/env bash
>> #xrdb -query | grep -v Xft.dpi | xrdb -load &
>> export LANG="en_US.UTF-8"
>> export LC_ALL="en_US.UTF-8"
>> export LANGUAGE="en_US.UTF-8"
>> export LC_CTYPE="en_US.UTF-8"
>> echo "Xft.dpi: 120" | xrdb -override
>> exec /usr/bin/xterm &
>> exec openbox
>> #exec cinnamon-session
>> That got an xterm open, from which I entered xrdb -query | grep Xft to
>> find 120, then strace cinnamon-session 2>&1 | grep open. Screen went
>> black, then I was returned to shell prompt from which I ran startx. So
>> next I tried this ~/.xinitrc:
>> #!/usr/bin/env bash
>> #xrdb -query | grep -v Xft.dpi | xrdb -load &
>> export LANG="en_US.UTF-8"
>> export LC_ALL="en_US.UTF-8"
>> export LANGUAGE="en_US.UTF-8"
>> export LC_CTYPE="en_US.UTF-8"
>> echo "Xft.dpi: 120" | xrdb -override
>> exec /usr/bin/xterm &
>> #exec openbox
>> exec strace cinnamon-session 2>&1 | grep open
>> That got both xterm and cinnamon session going, but left no output

> exec strace cinnamon-session 2>&1 | grep open > /tmp/cinnamon.touches
> it's however gonna generate a rather huge file.
> No idea whether it's still up to date, but
> https://forum.manjaro.org/index.php?topic=767.0
> suggests there's "gnome-session-properties" - it may also fetch this out
> of some dconf, but yes: asking cinnamon experts sounds like a good idea.

After reading http://forums.linuxmint.com/viewtopic.php?f=47=186189 and a fruitless full disk search for text files containing either Xft.dpi or gtk-xft-dpi, I abandoned the idea that Cinnamon might be different from Gnome, which hard codes 96 DPI, apparently through use of Xft.dpi. So, I reallocated the space on which I had installed Mint/Cinnamon to LMDE/Mate, which ATM just finished installing. If it turns out that Mate too is making obfuscated use of Xft.dpi, I'll try pursuing it via your suggestion.

Otherwise, this looks like a dead issue for me. There are enough other DEs that I don't need to waste more time on such hard core 96ers before such time as they learn what A11Y and U7Y are and create easy to discover and use scaling knobs for people who must have things big enough to see without having to use giant display screens.
-- 
Felix Miata
Re: unset Xft.dpi how?
Thomas Lübking composed on 2015-10-11 09:38 (UTC+0200):

Thank you, thank you, thank you!!!

> Felix Miata wrote:

>> Can anyone tell me how to globally unset Xft.dpi, particularly on an
>> installation that does not set Xft.dpi via /etc/X11/Xresources, e.g. on
>> Linuxmint Cinnamon?

> Override:
> echo "Xft.dpi: 123" | xrdb -override

This would/should substitute 123 DPI for 96 DPI, correct?

> Remove (you need to reload the entire database)
> xrdb -query | grep -v Xft.dpi | xrdb -load
> This reads out the current database, strips every line containing
> "Xft.dpi" and loads the result as new database.

This looks like it should be what I'm after, but putting it above '. /etc/X11/Xsession' in Mint 17.2's /etc/X11/xinit/xinitrc has no apparent effect on cinnamon-session. Using the following ~/.xinitrc doesn't produce any apparent effect either:

#!/usr/bin/env bash
xrdb -query | grep -v Xft.dpi | xrdb -load &
export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export LANGUAGE="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
exec cinnamon-session

> xrdb -remove wipes *everything*.

Also no effect on cinnamon via /etc/X11/xinit/xinitrc.

> Whether and where Xft.dpi is set (KDE font config kcm enforcing a
> resolution?) by your distro, i don't know either - sorry.

I've got resolution control under control. :-) My query didn't make clear that there were really two questions, and failed to mention that no version of Gnome is ever used here. The obvious question looks like it has what should be a good answer that doesn't as yet work for me in Mint, I'm guessing because it hasn't yet been put wherever it needs to be put. The less obvious question is precisely where to apply or include (aka "put") any of these xrdb commands. Distros don't all configure and start X in the same manner.

In openSUSE it's sufficient to use DisplaySize in xorg.conf* or run xrandr from some script in /etc/X11/xinit/xinitrc.d/ to achieve a desired DPI. In Mageia and Fedora, the additional step (using the xrandr method) of commenting away the Xft.dpi line in /etc/X11/Xresources is necessary (and in Mageia, the script goes in /etc/X11/xinit.d/). Wheezy and Utopic Kubuntu (with kscreen disabled) are much like openSUSE - Xft.dpi is not set, while the xrandr script works in /etc/X11/Xsession.d/ (/etc/X11/xinit/xinitrc.d/ doesn't exist in Wheezy), which doesn't exist in openSUSE.

Is it possible Cinnamon is like Gnome and carves Xft.dpi to 96 in stone, epoxy or .so?
-- 
Felix Miata
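[Editor's sketch] The DPI that DisplaySize or xrandr ends up producing follows directly from the mode's pixels and the declared millimeters: dpi = pixels * 25.4 / mm. A quick check with hypothetical numbers (a 1920-pixel-wide panel declared as 488 mm wide):

```shell
# dpi = horizontal pixels * 25.4 mm-per-inch / physical width in mm
# The px/mm values below are hypothetical, for illustration only.
px=1920
mm=488
awk -v px="$px" -v mm="$mm" 'BEGIN { printf "%.0f\n", px * 25.4 / mm }'
```

So a Monitor section carrying DisplaySize 488 275 would yield roughly 100 DPI for a 1920x1080 mode, and shrinking the declared millimeters raises the computed DPI proportionally.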
Re: unset Xft.dpi how?
Thomas Lübking composed on 2015-10-11 20:32 (UTC+0200):

> Felix Miata wrote:

>>> Remove (you need to reload the entire database)
>>> xrdb -query | grep -v Xft.dpi | xrdb -load
>>> This reads out the current database, strips every line containing
>>> "Xft.dpi" and loads the result as new database.

>> This looks like it should be what I'm after, but putting it above '.
>> /etc/X11/Xsession' in Mint 17.2's /etc/X11/xinit/xinitrc has no apparent
>> effect on cinnamon-session. Using the following ~/.xinitrc doesn't
>> produce any apparent effect either:

> Did you check whether those files are invoked at all by cinnamon?

I have no idea how to find out.

> xinitrc is related to xinit which is used by startx, but desktops tend to
> operate on their own stuff (eg. startkde doesn't read ~/.xinitrc at all,
> you would have to explicitly add it to its autostart stuff.
> I know nothing about cinnamon nor Mint, sorry, but if cinnamon-session is
> (typically) a script, you might find its inclusions there (otherwise best
> ask cinnamon developers) You could also start a bare X server, xterm and
> "strace cinnamon-session 2>&1 | grep open" to see what files it opens (iff
> cinnamon-session is an ELF binary!)

I tried this ~/.xinitrc first as a test:

#!/usr/bin/env bash
#xrdb -query | grep -v Xft.dpi | xrdb -load &
export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export LANGUAGE="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
echo "Xft.dpi: 120" | xrdb -override
exec /usr/bin/xterm &
exec openbox
#exec cinnamon-session

That got an xterm open, from which I entered xrdb -query | grep Xft to find 120, then strace cinnamon-session 2>&1 | grep open. Screen went black, then I was returned to shell prompt from which I ran startx.

So next I tried this ~/.xinitrc:

#!/usr/bin/env bash
#xrdb -query | grep -v Xft.dpi | xrdb -load &
export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export LANGUAGE="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
echo "Xft.dpi: 120" | xrdb -override
exec /usr/bin/xterm &
#exec openbox
exec strace cinnamon-session 2>&1 | grep open

That got both xterm and cinnamon session going, but left no output anywhere I could see or find, including not touching .xsession-errors. lsof | wc -l produces 12363. Whether I should be able to find a clue in lsof output I have no idea, so maybe it's time to find cinnamon people to ask how it's supposed to work.

>> xrdb -query | grep -v Xft.dpi | xrdb -load &
>> export LANG="en_US.UTF-8"
>> export LC_ALL="en_US.UTF-8"
>> export LANGUAGE="en_US.UTF-8"
>> export LC_CTYPE="en_US.UTF-8"
>> exec cinnamon-session

> I bet that cinnamon-session ultimately re-sets the xrdb (as does KDE on
> starting up)

That would surprise me none. Again, thank you!
-- 
Felix Miata
Re: unset Xft.dpi how?
Hacksign composed on 2015-10-12 10:03 (UTC+0800):

> If your distribution still use Xorg, then config it via /etc/X11/xorg.conf
> or any equal config file.
> The keyword is DisplaySize; this keyword should be supported by all video
> card drivers.

This is true at least for Intel, AMD and NVidia gfxchip drivers. xrandr can usually produce a comparable result. One or the other may be required instead to work around bugs[1]. But Xft.dpi overrides the Xorg computed result for all modern apps, and many that aren't so modern. My problem is how to unset Xft.dpi, allowing the X server calculation to affect all X apps. Apparently this may be impossible in Cinnamon, as it apparently is in Gnome.

[1] https://bugs.freedesktop.org/show_bug.cgi?id=39949
    https://bugs.freedesktop.org/show_bug.cgi?id=77321
-- 
Felix Miata
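[Editor's sketch] The removal recipe discussed in this thread (xrdb -query | grep -v Xft.dpi | xrdb -load) works by filtering the queried resource database before reloading it. Its filtering half can be dry-run without any X server by feeding grep a simulated xrdb dump; the sample resource names below are illustrative, not a real session's database:

```shell
# Simulated 'xrdb -query' output; grep -v drops the Xft.dpi line,
# leaving only what 'xrdb -load' would reload on a live session.
printf 'Xft.dpi:\t96\nXft.antialias:\t1\nXft.hinting:\t1\n' | grep -v Xft.dpi
```

Note that grep -v matches the pattern anywhere on the line, so any resource whose name merely contains "Xft.dpi" would also be stripped; anchoring the pattern (e.g. '^Xft.dpi:') would be stricter.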