Re: Can we rely on #pragma once ?

2024-02-19 Thread Mark Kettenis
> Date: Mon, 19 Feb 2024 12:13:34 +0100
> From: "Enrico Weigelt, metux IT consult" 
> 
> Hello folks,

Hi Enrico,

> we've got a lot of include files, therefore lots of guards.
> 
> Modern C compilers should understand #pragma once, but this isn't
> without problems (possibly problematic with symlinks ?), and I have
> to admit I don't know how well it is supported outside of GNU land.
> 
> Can we rely on it ?

IMHO, no.  There must be a reason why this hasn't been widely adopted,
despite having been around for the better part of two decades.  And the
public Xorg headers should work with a standard C99 compiler.

To be honest, some of the things you've raised in this thread really
worry me.  I don't think Xorg is in a position where it makes sense to
do a large overhaul.  The code is used by a lot of projects, but many
of those projects probably don't have a lot of manpower to spend on
testing Xorg patches.  So breakage may not be caught in a timely
fashion unless you commit yourself to testing a wide variety of
platforms.

Looking at this particular thing, what problem are you trying to
solve?  The existing header files already have the protection in place
where it matters.  And I don't see why software that's mostly just
in maintenance mode would see a lot of new header files being added.

Cheers,

Mark


> In fact, we already have one place using it:
> 
> include/xwayland-config.h.meson.in
> 
> But I doubt that xwayland has ever been compiled w/ non-gnu/non-clang
> compilers.


Re: tarball types (was: [ANNOUNCE] xf86-input-libinput 1.2.1)

2022-01-25 Thread Mark Kettenis
> Date: Mon, 24 Jan 2022 15:53:23 -0800
> From: Alan Coopersmith 
> 
> On 1/23/22 21:18, Peter Hutterer wrote:
> > xf86-input-libinput 1.2.1 is now available. Primarily a few typos and misc
> > minor fixes, the most visible change to distributions is that we now ship an
> > xz tarball instead of bz2. Due to a global shortage of flying cars, you will
> > have to accept that as your "welcome to the future" present. If you don't
> > like the future (and who can blame you!), we still have gz tarballs, simply
> > because I didn't realize we still generated those until I typed out this
> > email.
> 
> While I've been applying this change across the Xorg modules, I've followed
> the lead of those who came before me, and just replaced "dist-bz2" with
> "dist-xz".  To get rid of the gzip files we'd also have to add "no-dist-gzip"
> to our AM_INIT_AUTOMAKE() options.
> 
> Since it's been a decade since GNOME switched to exclusively releasing xz
> tarballs [1], I would expect there being no major headaches to us doing the
> same now, we just hadn't thought much about it. Is this something we want to
> do?  Does anyone have a reason we shouldn't stop making .gz files?
> (It looks like xwayland is already doing xz-only releases now.)
> 
> [1] 
> https://mail.gnome.org/archives/devel-announce-list/2011-September/msg3.html

OpenBSD's tar does not support xz, so yes, switching to xz-only
releases would be an inconvenience.

Is there a reason to stop making .gz files?  The amount of storage and
bandwidth required to make those available can't be significant in
today's world, and stopping making them is actual work ;).


Re: XChangeProperty accesses 32 bit values as 64 bit

2019-01-02 Thread Mark Kettenis
> Date: Wed, 2 Jan 2019 16:31:39 +0100
> From: Hanno Böck 
> 
> Hi,
> 
> Trying to debug a crash (in gajim) I discovered that it was due to a
> stack buffer overread in gtk+/libX11.
> 
> Digging down I am not entirely sure how to interpret it and whether
> it's libX11's or GTK's fault. Here's what's going on:
> 
> Gtk+ calls XChangeProperty where the second last parameters are a
> pointer to a pid, see e.g. [1]. The "format" parameter is "32", which
> is the bit size.
> 
> Now in libX11 it ends up crashing in the function _XData32, because it
> tries to access the variable as a long, which is 64 bit.
> 
> Now this is kinda documented [2], where it says:
> "If the specified format is 32, the property data must be a long array."
> 
> However that is - to put it mildly - unexpected and confusing. If I
> have a function that lets me tell I want to process a 32 bit value then
> learning that I have to pass that 32 bit value as a 64 bit value is
> surely the last thing I expected.

Yes, the X11 API dates from the time when 32-bit computing was
relatively new and short meant 16-bit and long meant 32-bit.  And then
64-bit computers came around some years later...

> Given this API this probably needs to be fixed in gtk by using long
> instead of pid_t for the pid, but I strongly recommend rethinking that
> design in libX11 and make it accept 32 bit values.

POSIX defines pid_t as "integer type" of "an appropriate length".
There is no guarantee that it is 32 bits, so GTK needs to be fixed
anyway.

Changing the libX11 API isn't possible at this point as software that
uses the interface correctly would break on a 64-bit system.  There is
of course libxcb which (as far as I can tell) doesn't have this
weirdness.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH xserver] glamor: Work around GEM usage

2018-05-23 Thread Mark Kettenis
> From: Thomas Hellstrom 
> Date: Wed, 23 May 2018 22:58:05 +0200
> 
> On 05/23/2018 08:00 PM, Adam Jackson wrote:
> > On Wed, 2018-05-23 at 11:14 +0200, Thomas Hellstrom wrote:
> >> KMS drivers are not required to support GEM. In particular, vmwgfx
> >> doesn't support flink and handles and names are identical.
> >> Getting a bo name should really be part of a lower level API, if needed,
> >> but in the mean time work around this by setting the name identical to
> >> the handle if GEM isn't supported.
> > I'd thought flink was old and busted anyway. Could this path just go
> > away?
> >
> > - ajax
> 
> I guess it's needed for dri2? If all drivers using glamor are OK with 
> dropping dri2, then that should be OK.

OpenBSD doesn't implement dri3 (yet) and relies on the modesetting
driver on the newest generations of Intel hardware.  And I believe for
AMD hardware we rely on glamor as well.  The kernel drivers for those
do support GEM though.

Re: [PATCH xserver 3/4] os: Make CLOCK_REALTIME the CLOCK_MONOTONIC fallback

2017-12-27 Thread Mark Kettenis
> From: Jeff Smith 
> Date: Tue, 26 Dec 2017 22:10:54 -0600
> 
> When clock_gettime(CLOCK_MONOTONIC) fails, xserver falls back on
> gettimeofday().  However, gettimeofday() is deprecated in favor of
> clock_gettime(CLOCK_REALTIME).
> 
> Fall back on CLOCK_REALTIME if CLOCK_MONOTONIC fails.  As long as
> clock_gettime() is available, passing CLOCK_REALTIME can be expected to
> work.  Leave the gettimeofday() implementation available for
> configurations that do not try to obtain a monotonic clock.

What problem are you trying to fix?  Are there any modern systems
where CLOCK_MONOTONIC isn't available?  Adding more "legacy" codepaths
isn't going to improve the quality of the code!  Especially if those codepaths 
are redundant.


> diff --git a/os/utils.c b/os/utils.c
> index 876841c..f3d0f71 100644
> --- a/os/utils.c
> +++ b/os/utils.c
> @@ -200,8 +200,8 @@ sig_atomic_t inSignalContext = FALSE;
>  #endif
>  
>  #ifdef MONOTONIC_CLOCK
> -static clockid_t clockid;
> -static clockid_t clockid_micro;
> +static clockid_t clockid = ~0L;
> +static clockid_t clockid_micro = ~0L;
>  #endif
>  
>  OsSigHandlerPtr
> @@ -427,8 +427,8 @@ ForceClockId(clockid_t forced_clockid)
>  {
>  struct timespec tp;
>  
> -BUG_RETURN(clockid);
> -BUG_RETURN(clockid_micro);
> +BUG_RETURN(clockid != ~0L);
> +BUG_RETURN(clockid_micro != ~0L);
>  
> >  if (clock_gettime(forced_clockid, &tp) != 0) {
>  FatalError("Forced clock id failed to retrieve current time: %s\n",
> @@ -452,16 +452,13 @@ GetTimeInMicros(void)
>  {
>  return (CARD64) GetTickCount() * 1000;
>  }
> -#else
> +#elif defined(MONOTONIC_CLOCK)
>  CARD32
>  GetTimeInMillis(void)
>  {
> -struct timeval tv;
> -
> -#ifdef MONOTONIC_CLOCK
>  struct timespec tp;
>  
> -if (!clockid) {
> +if (clockid == ~0L) {
>  #ifdef CLOCK_MONOTONIC_COARSE
> >  if (clock_getres(CLOCK_MONOTONIC_COARSE, &tp) == 0 &&
>  (tp.tv_nsec / 1000) <= 1000 &&
> @@ -472,32 +469,38 @@ GetTimeInMillis(void)
> >  if (clock_gettime(CLOCK_MONOTONIC, &tp) == 0)
>  clockid = CLOCK_MONOTONIC;
>  else
> -clockid = ~0L;
> +clockid = CLOCK_REALTIME;
>  }
> > -if (clockid != ~0L && clock_gettime(clockid, &tp) == 0)
> > -return (tp.tv_sec * 1000) + (tp.tv_nsec / 1000000L);
> > -#endif
> > -
> > -X_GETTIMEOFDAY(&tv);
> > -return (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
> > +clock_gettime(clockid, &tp);
> > +return (tp.tv_sec * 1000) + (tp.tv_nsec / 1000000L);
>  }
> -
>  CARD64
>  GetTimeInMicros(void)
>  {
> -struct timeval tv;
> -#ifdef MONOTONIC_CLOCK
>  struct timespec tp;
>  
> -if (!clockid_micro) {
> +if (clockid_micro == ~0L) {
> >  if (clock_gettime(CLOCK_MONOTONIC, &tp) == 0)
>  clockid_micro = CLOCK_MONOTONIC;
>  else
> -clockid_micro = ~0L;
> +clockid_micro = CLOCK_REALTIME;
>  }
> > -if (clockid_micro != ~0L && clock_gettime(clockid_micro, &tp) == 0)
> > -return (CARD64) tp.tv_sec * (CARD64)1000000 + tp.tv_nsec / 1000;
> > -#endif
> > +clock_gettime(clockid_micro, &tp);
> > +return (CARD64) tp.tv_sec * (CARD64)1000000 + tp.tv_nsec / 1000;
> +}
> +#else
> +CARD32
> +GetTimeInMillis(void)
> +{
> +struct timeval tv;
> +
> > +X_GETTIMEOFDAY(&tv);
> +return (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
> +}
> +CARD64
> +GetTimeInMicros(void)
> +{
> +struct timeval tv;
>  
> >  X_GETTIMEOFDAY(&tv);
> >  return (CARD64) tv.tv_sec * (CARD64)1000000 + (CARD64) tv.tv_usec;
> -- 
> 2.9.4
> 

Re: Severe performance regression was observed on X Server 1.18 with older graphics cards

2017-07-31 Thread Mark Kettenis
> From: "Kevin Brace" 
> Date: Mon, 31 Jul 2017 08:47:37 +0200
> 
> Hi,
> 
> I am hoping someone can track down a bad commit that led to severe
> performance regression likely caused by X Server 1.18.
> The other day, I was trying to figure out why a computer with
> Xubuntu 16.04.2 was running so slowly.
> The thing is, as long as I used Lubuntu 12.04 or Xubuntu 14.04, I
> was getting the performance I was expecting (fairly fast).
> Here is the spec of the computer.
> 
> CPU: AMD Opteron 165 (1.8 GHz, 1 MB L2 cache, dual core) 
> Mainboard: ASRock 939Dual-SATA2
> RAM: 2.25 GB
> Graphics: SiS 305 (AGP, 32 MB)
> 
> First, I noticed severe performance regression on Xubuntu 16.04.2.
> Xubuntu 16.04.2 uses Linux 4.10 and X Server 1.18.
> Xubuntu 14.04 (stock) uses Linux 3.13 and X Server 1.15.
> Lubuntu 12.04 (stock) uses Linux 3.2 and X Server 1.11.
> 
> Please note that all versions used xf86-video-sis (it is renamed as
> xserver-xorg-video-sis for Debian / Ubuntu), and they all used the
> version from the upstream freedesktop.org repository.

This is almost certainly because the XAA acceleration support was
removed from the X server.  The driver will now use EXA, but EXA never
really worked properly for a lot of the "legacy" drivers.

> For Xubuntu 16.04, Canonical no longer builds
> xserver-xorg-video-sis, so I compiled the upstream code and
> installed it.

Try compiling the driver without EXA support.  The easiest way is
probably to disable the SIS_USE_EXA #define in src/sis.h by changing
the line above from #if 1 into #if 0.  That way you'll still have the
userland modesetting support but the driver will use shadowfb for
rendering which is what the xf86-video-vesa driver uses as well.

Cheers,

Mark

Re: [PATCH libpciaccess] linux: support 32 bit PCI domains

2017-07-12 Thread Mark Kettenis
> Date: Tue, 11 Jul 2017 10:32:19 -0700
> From: Stephen Hemminger 
> 
> On Tue, 11 Jul 2017 10:33:56 +0100
> Emil Velikov  wrote:
> 
> > On 11 July 2017 at 05:39, Alan Coopersmith  
> > wrote:
> > > On 07/10/17 11:35 AM, Stephen Hemminger wrote:  
> > >>
> > >> The PCI domain may be larger than 16 bits on Microsoft Azure and other
> > >> virtual environments. PCI busses reported by ACPI are limited to
> > >> 16 bits, but in Azure the domain value for pass through devices is
> > >> intentionally larger than 16 bits to avoid clashing with local
> > >> devices. This is needed to support pass through of GPU devices.
> > >>
> > >> Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=101744
> > >> Signed-off-by: Stephen Hemminger 
> > >> ---
> > >>   include/pciaccess.h |  2 +-
> > >>   src/linux_sysfs.c   | 16 +++-
> > >>   2 files changed, 4 insertions(+), 14 deletions(-)
> > >>
> > >> diff --git a/include/pciaccess.h b/include/pciaccess.h
> > >> index 1d7aa4beabfd..93ed76f3cd25 100644
> > >> --- a/include/pciaccess.h
> > >> +++ b/include/pciaccess.h
> > >> @@ -321,7 +321,7 @@ struct pci_device {
> > >>* the domain will always be zero.
> > >>*/
> > >>   /*@{*/
> > >> -uint16_t    domain;
> > >> +uint32_t    domain;
> > >>   uint8_t bus;
> > >>   uint8_t dev;
> > >>   uint8_t func;  
> > >
> > >
> > > Isn't that going to break the ABI?
> > >  
> > Yes it will. There was a similar patch a year ago[1], which ultimately
> > led to trimming the upper bits of the domain.
> > 
> > Now that there's an actual need for said data, it might be worth
> > looking into the suggestions put in the earlier thread.
> > 
> > Stephen, can you give that a try?
> > 
> > -Emil
> > 
> > [1] 
> > https://lists.freedesktop.org/archives/xorg-devel/2016-August/050586.html
> 
> Yes, binary compatibility is a problem. Putting additional data at end
> of structure is possible, but as was raised in the thread, this won't work
> if some code puts the structure on the stack.
> 
> An alternative (like pci-utils) would be to have a sacrificial element.
> BSD PCI code would also have to change to do this.

As explained in the thread, this is still an ABI break.  There is not
much point in making these changes if you have to bump the shared
library major anyway.

> From 5d8b06d59561d834ca09fdaf1d5bba38adfa86ea Mon Sep 17 00:00:00 2001
> From: Stephen Hemminger 
> Date: Mon, 10 Jul 2017 11:18:35 -0700
> Subject: [PATCH libpciaccess] linux: support 32 bit PCI domains
> 
> The PCI domain may be larger than 16 bits on Microsoft Azure and other
> virtual environments. PCI busses reported by ACPI are limited to
> 16 bits, but in Azure the domain value for pass through devices is
> intentionally larger than 16 bits to avoid clashing with local
> devices. This is needed to support pass through of GPU devices.
> 
> Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=101744
> Signed-off-by: Stephen Hemminger 
> ---
>  include/pciaccess.h | 13 +++--
>  src/linux_sysfs.c   | 25 -
>  2 files changed, 23 insertions(+), 15 deletions(-)
> 
> diff --git a/include/pciaccess.h b/include/pciaccess.h
> index 1d7aa4beabfd..4e6b8b4a389b 100644
> --- a/include/pciaccess.h
> +++ b/include/pciaccess.h
> @@ -319,15 +319,18 @@ struct pci_device {
>   * Complete bus identification, including domain, of the device.  On
>   * platforms that do not support PCI domains (e.g., 32-bit x86 hardware),
>   * the domain will always be zero.
> + *
> > + * Domain is really 32 bits, but only expose the 16-bit version
> > + * for backward compatibility. 0xffff if the real domain doesn't
> > + * fit in 16 bits.
>   */
>  /*@{*/
> > -uint16_t    domain;
> > +uint16_t    domain_16;
>  uint8_t bus;
>  uint8_t dev;
>  uint8_t func;
>  /*@}*/
>  
> -
>  /**
>   * \name Vendor / device ID
>   *
> @@ -385,6 +388,12 @@ struct pci_device {
>* Used by the VGA arbiter. Type of resource decoded by the device and
>* the file descriptor (/dev/vga_arbiter). */
>  int vgaarb_rsrc;
> +
> +
> +/**
> + * PCI domain value (full 32 bits)
> + */
> > +uint32_t    domain;
>  };
>  
>  
> diff --git a/src/linux_sysfs.c b/src/linux_sysfs.c
> index dd8ef3e489b1..a8bc2e1977b0 100644
> --- a/src/linux_sysfs.c
> +++ b/src/linux_sysfs.c
> @@ -118,28 +118,18 @@ pci_system_linux_sysfs_create( void )
>  
>  
>  /**
> - * Filter out the names "." and ".." from the scanned sysfs entries, and
> - * domains requiring 32-bits.
> + * Filter out the names "." and ".." from the scanned sysfs entries.
>   *
>   * \param d  Directory entry being processed by \c scandir.
>   *
>   * \return
> - * Zero if the entry name matches either "." or "..", or the domain requires
> - * 32 bits, 

Re: [xproto] _X_NONNULL and C++ 11

2017-05-27 Thread Mark Kettenis
> Date: Sat, 27 May 2017 13:33:25 +0200
> From: walter harms 
> 
> On 27.05.2017 11:02, Matthieu Herrb wrote:
> > Hi,
> > 
> > Marc Espie recently found out that the X_NONNULL macro in Xfuncproto.h
> > is generating spurious warnings when included in C++ code build with
> > clang++ -std=c++11.
> > 
> > Other OpenBSD developper tried to find uses of the macro in the wild
> > and didn't find any, even in the X.Org lib app or xserver tree.
> > 
> > So, should this macro definition be removed altogether (acking that
> > no-one cares to use it) or just apply the patch below ?
> > 
> > From 6ae956660879d70e078025c3d8f1ac3fd438cad2 Mon Sep 17 00:00:00 2001
> > From: Marc Espie 
> > Date: Sat, 27 May 2017 10:55:04 +0200
> > Subject: [PATCH] Fix compiling any C++ code including Xfuncproto.h with
> >  clang++ -std=c++11
> > 
> > It shouldn't warn, but it will use the "legacy" varargs macros and whine.
> > ---
> >  Xfuncproto.h.in | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/Xfuncproto.h.in b/Xfuncproto.h.in
> > index b88493d..1be3f55 100644
> > --- a/Xfuncproto.h.in
> > +++ b/Xfuncproto.h.in
> > @@ -166,7 +166,8 @@ in this Software without prior written authorization from The Open Group.
> > argument macros, must be only used inside #ifdef _X_NONNULL guards, as
> > many legacy X clients are compiled in C89 mode still. */
> >  #if __has_attribute(nonnull) \
> > -&& defined(__STDC_VERSION__) && (__STDC_VERSION__ - 0 >= 199901L) /* C99 */
> > +&& (defined(__STDC_VERSION__) && (__STDC_VERSION__ - 0 >= 199901L) \
> > +|| (defined(__cplusplus) && (__cplusplus - 0 >= 201103L))) /* C99 C++11 */
> >  #define _X_NONNULL(...)  __attribute__((nonnull(__VA_ARGS__)))
> >  #elif __has_attribute(nonnull) \
> >  || defined(__GNUC__) &&  ((__GNUC__ * 100 + __GNUC_MINOR__) >= 303)
> > 
> > 
> > 
> 
> As far as I understand, this is a problem with clang++. "Spurious
> warning" sounds to me more like a compiler bug (assuming that gcc
> has no problems with that). So why not make that a NOOP until fixed
> in clang ?

GCC 4.9.4 warns as well:

$ g++ -pedantic -std=c++11 -I/usr/X11R6/include xfunc.cc
In file included from xfunc.cc:1:0:
/usr/X11R6/include/X11/Xfuncproto.h:173:24: warning: ISO C does not permit named variadic macros [-Wvariadic-macros]
 #define _X_NONNULL(args...)  __attribute__((nonnull(args)))


Re: [PATCH xserver] Makefile.am: add the meson files to the tarball

2017-04-28 Thread Mark Kettenis
> Date: Fri, 28 Apr 2017 16:09:55 +1000
> From: Peter Hutterer 
> 
> While we're providing both build systems, we'll likely have 'make dist'
> generated tarballs - those tarballs should be buildable with meson to
> have more exposure.
> 
> Signed-off-by: Peter Hutterer 
> ---
> triggered by:
> https://mail.gnome.org/archives/desktop-devel-list/2017-April/msg00091.html
> 
> I opted to keep all the meson-specifics in one location rather than
> sprinkling it across the various Makefile.am's.
> 
>  Makefile.am | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/Makefile.am b/Makefile.am
> index f0fa2d8..2c51f5e 100644
> --- a/Makefile.am
> +++ b/Makefile.am
> @@ -74,6 +74,12 @@ endif
>  
>  EXTRA_DIST = xorg-server.pc.in xorg-server.m4 autogen.sh
>  
> +meson_files = $(shell find . -type f -name '*meson.*' -print 2>/dev/null)

Unfortunately $(shell ...) is a GNU make extension.

Also, this will pick up random crap that happens to match the pattern,
such as editor backup files etc. etc.

> +EXTRA_DIST += meson_options.txt \
> +  hw/xfree86/loader/symbol-test.c \
> +  hw/xfree86/common/xf86Build.sh \
> +  $(meson_files)
> +
>  DISTCHECK_CONFIGURE_FLAGS=\
>   --with-xkb-path=$(XKB_BASE_DIRECTORY) \
>   --with-xkb-bin-directory=$(XKB_BIN_DIRECTORY) \
> -- 
> 2.9.3
> 

Re: [PATCH libXdmcp v2] Use getentropy() if arc4random_buf() is not available

2017-04-05 Thread Mark Kettenis
> From: Benjamin Tissoires <benjamin.tissoi...@gmail.com>
> Date: Tue,  4 Apr 2017 19:13:38 +0200
> 
> This allows to fix CVE-2017-2625 on Linux platforms without pulling in
> libbsd.
> The libc getentropy() is available since glibc 2.25 but also on OpenBSD.
> For Linux, we need at least a v3.17 kernel. If the recommended
> arc4random_buf() function is not available, emulate it by first trying
> to use getentropy() on a supported glibc and kernel. If the call fails,
> fall back to the current (vulnerable) code.
> 
> Signed-off-by: Benjamin Tissoires <benjamin.tissoi...@gmail.com>

Same comment as the other diff.

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
> 
> changes in v2:
> - use the getentropy() from glibc, not the plain syscall
> - make it clear that arc4random_buf() should be preferred and that we
>   are only adding band-aids on top of the missing function
> ---
>  Key.c| 31 ++-
>  configure.ac |  2 +-
>  2 files changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/Key.c b/Key.c
> index a09b316..70607d0 100644
> --- a/Key.c
> +++ b/Key.c
> @@ -62,10 +62,11 @@ getbits (long data, unsigned char *dst)
>  #define getpid(x) _getpid(x)
>  #endif
>  
> -void
> -XdmcpGenerateKey (XdmAuthKeyPtr key)
> -{
>  #ifndef HAVE_ARC4RANDOM_BUF
> +
> +static void
> +emulate_getrandom_buf (char *auth, int len)
> +{
> >  long    lowbits, highbits;
>  
>  srandom ((int)getpid() ^ time((Time_t *)0));
> @@ -73,9 +74,29 @@ XdmcpGenerateKey (XdmAuthKeyPtr key)
>  highbits = random ();
>  getbits (lowbits, key->data);
>  getbits (highbits, key->data + 4);
> -#else
> +}
> +
> +static void
> +arc4random_buf (void *auth, int len)
> +{
> +int  ret;
> +
> +#if HAVE_GETENTROPY
> +/* weak emulation of arc4random through the getentropy libc call */
> +ret = getentropy (auth, len);
> +if (ret == 0)
> + return;
> +#endif /* HAVE_GETENTROPY */
> +
> +emulate_getrandom_buf (auth, len);
> +}
> +
> +#endif /* !defined(HAVE_ARC4RANDOM_BUF) */
> +
> +void
> +XdmcpGenerateKey (XdmAuthKeyPtr key)
> +{
>  arc4random_buf(key->data, 8);
> -#endif
>  }
>  
>  int
> diff --git a/configure.ac b/configure.ac
> index 2288502..d2b045d 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -65,7 +65,7 @@ esac
>  
>  # Checks for library functions.
>  AC_CHECK_LIB([bsd], [arc4random_buf])
> -AC_CHECK_FUNCS([srand48 lrand48 arc4random_buf])
> +AC_CHECK_FUNCS([srand48 lrand48 arc4random_buf getentropy])
>  
>  # Obtain compiler/linker options for depedencies
>  PKG_CHECK_MODULES(XDMCP, xproto)
> -- 
> 2.9.3
> 
> 

Re: [PATCH libICE v2] Use getentropy() if arc4random_buf() is not available

2017-04-05 Thread Mark Kettenis
> From: Benjamin Tissoires <benjamin.tissoi...@gmail.com>
> Date: Tue,  4 Apr 2017 19:12:53 +0200
> 
> This allows to fix CVE-2017-2626 on Linux platforms without pulling in
> libbsd.
> The libc getentropy() is available since glibc 2.25 but also on OpenBSD.
> For Linux, we need at least a v3.17 kernel. If the recommended
> arc4random_buf() function is not available, emulate it by first trying
> to use getentropy() on a supported glibc and kernel. If the call fails,
> fall back to the current (partly vulnerable) code.
> 
> Signed-off-by: Benjamin Tissoires <benjamin.tissoi...@gmail.com>

The emulate_getrandom_buf() name sounds a bit weird.  Perhaps call it
bad_getrandom_buf() or dangerous_getrandom_buf() or
insecure_getrandom_buf() to make it obvious that the fallback code
really isn't sufficient?

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
> 
> changes in v2:
> - use the getentropy() from glibc, not the plain syscall
> - make it clear that arc4random_buf() should be preferred and that we
>   are only adding band-aids on top of the missing function
> ---
>  configure.ac  |  2 +-
>  src/iceauth.c | 65 ++-
>  2 files changed, 47 insertions(+), 20 deletions(-)
> 
> diff --git a/configure.ac b/configure.ac
> index 458882a..c971ab6 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -38,7 +38,7 @@ AC_DEFINE(ICE_t, 1, [Xtrans transport type])
>  
>  # Checks for library functions.
>  AC_CHECK_LIB([bsd], [arc4random_buf])
> -AC_CHECK_FUNCS([asprintf arc4random_buf])
> +AC_CHECK_FUNCS([asprintf arc4random_buf getentropy])
>  
>  # Allow checking code with lint, sparse, etc.
>  XORG_WITH_LINT
> diff --git a/src/iceauth.c b/src/iceauth.c
> index ed31683..de4785b 100644
> --- a/src/iceauth.c
> +++ b/src/iceauth.c
> @@ -44,31 +44,19 @@ Author: Ralph Mor, X Consortium
>  
>  static int was_called_state;
>  
> -/*
> - * MIT-MAGIC-COOKIE-1 is a sample authentication method implemented by
> - * the SI.  It is not part of standard ICElib.
> - */
> +#ifndef HAVE_ARC4RANDOM_BUF
>  
> -
> -char *
> -IceGenerateMagicCookie (
> +static void
> +emulate_getrandom_buf (
> + char *auth,
>   int len
>  )
>  {
> > -char    *auth;
> > -#ifndef HAVE_ARC4RANDOM_BUF
> >  long    ldata[2];
>  int  seed;
>  int  value;
>  int  i;
> -#endif
>  
> -if ((auth = malloc (len + 1)) == NULL)
> - return (NULL);
> -
> -#ifdef HAVE_ARC4RANDOM_BUF
> -arc4random_buf(auth, len);
> -#else
>  #ifdef ITIMER_REAL
>  {
>   struct timeval  now;
> @@ -76,13 +64,13 @@ IceGenerateMagicCookie (
>   ldata[0] = now.tv_sec;
>   ldata[1] = now.tv_usec;
>  }
> -#else
> +#else /* ITIMER_REAL */
>  {
> >   long    time ();
>   ldata[0] = time ((long *) 0);
>   ldata[1] = getpid ();
>  }
> -#endif
> +#endif /* ITIMER_REAL */
>  seed = (ldata[0]) + (ldata[1] << 16);
>  srand (seed);
>  for (i = 0; i < len; i++)
> @@ -90,7 +78,46 @@ IceGenerateMagicCookie (
>   value = rand ();
>   auth[i] = value & 0xff;
>  }
> -#endif
> +}
> +
> +static void
> +arc4random_buf (
> + char *auth,
> + int len
> +)
> +{
> +int  ret;
> +
> +#if HAVE_GETENTROPY
> +/* weak emulation of arc4random through the entropy libc */
> +ret = getentropy (auth, len);
> +if (ret == 0)
> + return;
> +#endif /* HAVE_GETENTROPY */
> +
> +emulate_getrandom_buf (auth, len);
> +}
> +
> +#endif /* !defined(HAVE_ARC4RANDOM_BUF) */
> +
> +/*
> + * MIT-MAGIC-COOKIE-1 is a sample authentication method implemented by
> + * the SI.  It is not part of standard ICElib.
> + */
> +
> +
> +char *
> +IceGenerateMagicCookie (
> + int len
> +)
> +{
> > +char    *auth;
> +
> +if ((auth = malloc (len + 1)) == NULL)
> + return (NULL);
> +
> +arc4random_buf (auth, len);
> +
>  auth[len] = '\0';
>  return (auth);
>  }
> -- 
> 2.9.3
> 
> 

Re: [PATCH libXdmcp] Use getrandom() syscall if available

2017-04-04 Thread Mark Kettenis
> From: Benjamin Tissoires <benjamin.tissoi...@gmail.com>
> Date: Tue, 4 Apr 2017 11:05:47 +0200
> 
> On Mon, Apr 3, 2017 at 9:17 PM, Mark Kettenis <mark.kette...@xs4all.nl> wrote:
> > OpenBSD does not have a getrandom(3) function in libc.  The OpenBSD
> > postion is that the arc4random(3) family of functions is always the
> > preferred way to get proper random numbers as all the other interfaces
> > have nasty corner cases that people shouldn't have to deal with.  In
> > your case for example, getrandom(2) may block and therefore could
> > return EINTR, a case which your patch doesn't handle.
> 
> I see. So if arc4random_buf(3) is available, it should always be
> preferred. When it is not available, I guess I should go for
> getrandom/getentropy and probably don't care about the syscall at all.

Right.  You probably shouldn't bother with getentropy() though, unless
you build a complete proper PRNG on top of it.

> > So I'd argue that from a Linux distro point of view, making sure that
> > a proper arc4random_buf(3) implementation is available would be the
> > best way to fix this vulnerability.  Either by improving the version
> > provided in libbsd (feel free to take the code from libressl) or by
> > lobbying for its inclusion in glibc.
> 
> TBH, the issue with the libbsd implementation is that it's not shipped
> with RHEL (where I try to provide a fix). For various reasons we do
> not ship libbsd and adding it there only for one PRNG is something
> that will be costly in term of maintenance.

More costly than pushing/maintaining patches for several other
projects?

> I don't think we ship libressl either, but the problem is not that
> much with RHEL (we can ship custom fixes if required), but more the
> fact that there is no true fix besides "use libbsd".
> 
> Which is why I wanted to provide an alternative for Linux that we can
> point customers at.
> 
> Regarding adding arc4random_buf(3) in glibc, I am not sure I
> personally want to go this path. It took a little bit more than 2
> years to have getrandom() in glibc, when it was pushed by far more
> knowledgeable people than me :(

I sympathize.

> >  Adding more #ifdef spaghetti
> > everywhere isn't going to help.
> 
> True. But the current situation where the code might be vulnerable or
> not is not a good situation either. Having a CVE out there without a
> fix is not good. Users will want it to be fixed and we can't guarantee
> that the code they are running is safe. I am not pointing fingers at
> anybody (I was not in the security discussion), I am just trying to
> justify the fact that we still need a patch that we can point people
> at.

The CVE is a bit of a joke though.  The original commit to use
arc4random_buf() in libICE was made in 2013, exactly to avoid the use
of poor quality random numbers in a security-sensitive bit of code
that the CVE is about.  A bit sad to see that four years later there
still isn't a one-stop solution for this on Linux.

> How about (in pseudo-c):
> 
> ---
> #ifndef HAVE_ARC4RANDOM_BUF
> void arc4random_buf(char *auth, int len)
> {
> #if HAVE_GET_ENTROPY
> getentropy(auth, len);
> if (failed)
> #endif
> {
>   /* use the older, unsafe code */
> }
> }
> #endif
> 
> void
> XdmcpGenerateKey (XdmAuthKeyPtr key)
> {
> arc4random_buf(key->data, 8)
> }
> 
> ---
> 
> This should make the code cleaner, remove a little bit of spaghetti,
> and will allow for future libc to implement arc4random_buf in the
> missing architectures.

Agreed.  That would be the best way to move forward.

Re: [PATCH libXdmcp] Use getrandom() syscall if available

2017-04-03 Thread Mark Kettenis
> From: Benjamin Tissoires 
> Date: Mon, 3 Apr 2017 17:52:32 +0200
> 
> On Mon, Apr 3, 2017 at 4:02 PM, Alan Coopersmith
>  wrote:
> > On 04/ 3/17 05:52 AM, Benjamin Tissoires wrote:
> >>
> >> This allows to fix CVE-2017-2625 on Linux platforms without pulling in
> >> libbsd.
> >> The syscall getrandom is available since kernel v3.17. The code first
> >> tries to use the syscall on a supported kernel. If the syscall fails,
> >> it falls back to the current (vulnerable) code.
> >> We do not implement the glibc getrandom() call given that it's only
> >> available in glibc 2.25, and the #if dance is already messy here.
> >>
> >> Signed-off-by: Benjamin Tissoires 
> >
> >
> > This is dangerous - Solaris  defines SYS_getrandom, but
> > I don't know if our syscall arguments/semantics are the same, and we
> > only support applications calling the libc getrandom() function, not the
> > raw syscall.

Doesn't Solaris have a proper arc4random_buf(3) in libc now though?

> I see. In that case, would it help to use unconditionally the libc
> getrandom() function, and in case it's not there, provide a stub for
> it through the syscall or other emulation layer?
> 
> Matthieu, I am not 100% sure, but do you have the getrandom() libc
> call in BSD, and would it be OK to use it instead of arc4random_buf()
> when available?

OpenBSD does not have a getrandom(3) function in libc.  The OpenBSD
postion is that the arc4random(3) family of functions is always the
preferred way to get proper random numbers as all the other interfaces
have nasty corner cases that people shouldn't have to deal with.  In
your case for example, getrandom(2) may block and therefore could
return EINTR, a case which your patch doesn't handle.

So I'd argue that from a Linux distro point of view, making sure that
a proper arc4random_buf(3) implementation is available would be the
best way to fix this vulnerability.  Either by improving the version
provided in libbsd (feel free to take the code from libressl) or by
lobbying for its inclusion in glibc.  Adding more #ifdef spaghetti
everywhere isn't going to help.

Cheers,

Mark

> >> diff --git a/configure.ac b/configure.ac
> >> index 2288502..d0d4d05 100644
> >> --- a/configure.ac
> >> +++ b/configure.ac
> >> @@ -63,6 +63,9 @@ case $host_os in
> >>  ;;
> >>  esac
> >>
> >> +# Checks for syscalls
> >> +AC_CHECK_DECLS([SYS_getrandom], [], [], [[#include ]])
> >> +
> >>  # Checks for library functions.
> >>  AC_CHECK_LIB([bsd], [arc4random_buf])
> >>  AC_CHECK_FUNCS([srand48 lrand48 arc4random_buf])
> >>
> >
> > Could you move that check up into the case $host_os section above it
> > under a new case for *linux* perhaps?
> 
> Sure.
> 
> Cheers,
> Benjamin
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
> 
> 
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH rendercheck 0/5] Convert to meson.

2017-03-24 Thread Mark Kettenis
> From: Eric Anholt 
> Date: Fri, 24 Mar 2017 13:17:45 -0700
> 
> Having bitten off a bit more than I can chew in 3 days with the X
> Server (hw/xfree86/sdksyms.c is the worst), I decided to take a quick
> pass at converting a project that's my own fault.

Seems I missed some discussion somewhere...

While I understand your frustrations with autoconf, moving to build
infrastructure that relies on tools like Python and Ninja would have
serious consequences for the way we integrate X into OpenBSD.  We
build the entire base OS, which includes X, with tools that are part
of the base OS.  That would become pretty much impossible if various X
projects go this way.

This is the first I've heard about Meson.  That in itself doesn't fill
me with confidence.  But the Meson documentation doesn't really
mentioned how you'd do configure-style checks like checking whether a
particular header file is present or whether a library defines a
certain function.  I suspect the answer is that people using Meson
don't do such tests and don't feel the need to do such tests because
they just check whether you're running on Windows, MacOS X or Linux
and make assumptions that things are present based on just that.  Your
new meson build system doesn't check whether the err.h file exists or
whether the C compiler supports the -Wshadow option for example.  It's
not really surprising that Meson is much faster if you cut out all
those checks.  Unfortunately this approach tends to yield rather
unportable software.

You can probably cut quite a bit of time from the autoconf configure
script run time by removing silly checks like checking for standard C
headers and such.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH 1/4] xfree86: Don't bother probing -nv on Linux

2017-03-16 Thread Mark Kettenis
> From: Timo Aaltonen <tjaal...@ubuntu.com>
> Date: Thu, 16 Mar 2017 12:29:41 +0200
> 
> From: Timo Aaltonen <tjaal...@debian.org>
> 
> For linux this driver is long obsolete now.  It may have some relevance
> on non-linux systems.

Still somewhat relevant on OpenBSD indeed.  Diff makes sense to me.

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> Signed-off-by: Timo Aaltonen <tjaal...@ubuntu.com>
> ---
>  hw/xfree86/common/xf86pciBus.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/xfree86/common/xf86pciBus.c b/hw/xfree86/common/xf86pciBus.c
> index 9adfee5..66e576e 100644
> --- a/hw/xfree86/common/xf86pciBus.c
> +++ b/hw/xfree86/common/xf86pciBus.c
> @@ -1195,8 +1195,9 @@ xf86VideoPtrToDriverList(struct pci_device *dev,
>  
>  #ifdef __linux__
>  driverList[idx++] = "nouveau";
> -#endif
> +#else
>  driverList[idx++] = "nv";
> +#endif
>  break;
>  }
>  case 0x1106:
> -- 
> 2.7.4
> 
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
> 
> 
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH libdrm v2 2/4] xf86drm: Add USB support

2017-01-13 Thread Mark Kettenis
> From: Thierry Reding 
> Date: Thu, 12 Jan 2017 23:04:27 +0100
> 
> Allow DRM/KMS devices hosted on USB to be detected by the drmDevice
> infrastructure.
> 
> v2:
> - make sysfs_uevent_get() more flexible using a format string
> 
> Signed-off-by: Thierry Reding 

All this sysfs parsing stuff is highly Linux-specific and should
probably be #ifdef __linux__.  Returning -EINVAL on non-Linux
platforms for usb and host1x should be fine.

Cheers,

Mark

> ---
>  xf86drm.c | 163 
> ++
>  xf86drm.h |  13 +
>  2 files changed, 176 insertions(+)
> 
> diff --git a/xf86drm.c b/xf86drm.c
> index c123650a1e23..27cd6eb5193e 100644
> --- a/xf86drm.c
> +++ b/xf86drm.c
> @@ -2906,6 +2906,9 @@ static int drmParseSubsystemType(int maj, int min)
>  if (strncmp(name, "/pci", 4) == 0)
>  return DRM_BUS_PCI;
>  
> +if (strncmp(name, "/usb", 4) == 0)
> +return DRM_BUS_USB;
> +
>  return -EINVAL;
>  #elif defined(__OpenBSD__)
>  return DRM_BUS_PCI;
> @@ -2992,6 +2995,10 @@ static int drmCompareBusInfo(drmDevicePtr a, 
> drmDevicePtr b)
>  switch (a->bustype) {
>  case DRM_BUS_PCI:
>  return memcmp(a->businfo.pci, b->businfo.pci, sizeof(drmPciBusInfo));
> +
> +case DRM_BUS_USB:
> +return memcmp(a->businfo.usb, b->businfo.usb, sizeof(drmUsbBusInfo));
> +
>  default:
>  break;
>  }
> @@ -3235,6 +3242,145 @@ free_device:
>  return ret;
>  }
>  
> +static char * DRM_PRINTFLIKE(2, 3)
> +sysfs_uevent_get(const char *path, const char *fmt, ...)
> +{
> +char filename[PATH_MAX + 1], *key, *line = NULL, *value = NULL;
> +size_t size = 0, len;
> +ssize_t num;
> +va_list ap;
> +FILE *fp;
> +
> +va_start(ap, fmt);
> +num = vasprintf(, fmt, ap);
> +va_end(ap);
> +len = num;
> +
> +snprintf(filename, sizeof(filename), "%s/uevent", path);
> +
> +fp = fopen(filename, "r");
> +if (!fp) {
> +free(key);
> +return NULL;
> +}
> +
> +while ((num = getline(, , fp)) >= 0) {
> +if ((strncmp(line, key, len) == 0) && (line[len] == '=')) {
> +char *start = line + len + 1, *end = line + num - 1;
> +
> +if (*end != '\n')
> +end++;
> +
> +value = strndup(start, end - start);
> +break;
> +}
> +}
> +
> +free(line);
> +fclose(fp);
> +
> +free(key);
> +
> +return value;
> +}
> +
> +static int drmParseUsbBusInfo(int maj, int min, drmUsbBusInfoPtr info)
> +{
> +char path[PATH_MAX + 1], *value;
> +unsigned int bus, dev;
> +int ret;
> +
> +snprintf(path, sizeof(path), "/sys/dev/char/%d:%d/device", maj, min);
> +
> +value = sysfs_uevent_get(path, "BUSNUM");
> +if (!value)
> +return -ENOENT;
> +
> +ret = sscanf(value, "%03u", );
> +free(value);
> +
> +if (ret <= 0)
> +return -errno;
> +
> +value = sysfs_uevent_get(path, "DEVNUM");
> +if (!value)
> +return -ENOENT;
> +
> +ret = sscanf(value, "%03u", );
> +free(value);
> +
> +if (ret <= 0)
> +return -errno;
> +
> +info->bus = bus;
> +info->dev = dev;
> +
> +return 0;
> +}
> +
> +static int drmParseUsbDeviceInfo(int maj, int min, drmUsbDeviceInfoPtr info)
> +{
> +char path[PATH_MAX + 1], *value;
> +unsigned int vendor, product;
> +int ret;
> +
> +snprintf(path, sizeof(path), "/sys/dev/char/%d:%d/device", maj, min);
> +
> +value = sysfs_uevent_get(path, "PRODUCT");
> +if (!value)
> +return -ENOENT;
> +
> +ret = sscanf(value, "%x/%x", , );
> +free(value);
> +
> +if (ret <= 0)
> +return -errno;
> +
> +info->vendor = vendor;
> +info->product = product;
> +
> +return 0;
> +}
> +
> +static int drmProcessUsbDevice(drmDevicePtr *device, const char *node,
> +   int node_type, int maj, int min,
> +   bool fetch_deviceinfo, uint32_t flags)
> +{
> +drmDevicePtr dev;
> +char *ptr;
> +int ret;
> +
> +dev = drmDeviceAlloc(node_type, node, sizeof(drmUsbBusInfo),
> + sizeof(drmUsbDeviceInfo), );
> +if (!dev)
> +return -ENOMEM;
> +
> +dev->bustype = DRM_BUS_USB;
> +
> +dev->businfo.usb = (drmUsbBusInfoPtr)ptr;
> +
> +ret = drmParseUsbBusInfo(maj, min, dev->businfo.usb);
> +if (ret < 0)
> +goto free_device;
> +
> +if (fetch_deviceinfo) {
> +ptr += sizeof(drmUsbBusInfo);
> +dev->deviceinfo.usb = (drmUsbDeviceInfoPtr)ptr;
> +
> +ret = drmParseUsbDeviceInfo(maj, min, dev->deviceinfo.usb);
> +if (ret < 0)
> +goto free_device;
> +}
> +
> +*device = dev;
> +
> +return 0;
> +
> +free_device:
> +free(dev);
> +return ret;
> +}
> +
>  /* Consider devices located on the same bus as duplicate and fold the 
> respective
>   * entries into 

Re: [PATCH xserver 0/5] Remove (most) 24bpp support

2016-12-08 Thread Mark Kettenis
> From: Keith Packard <kei...@keithp.com>
> Date: Thu, 08 Dec 2016 12:25:46 -0800
> 
> Mark Kettenis <mark.kette...@xs4all.nl> writes:
> 
> > It is also unfortunate that this means you won't be able to run X
> > directly on the framebuffer set up by UEFI firmware on such servers
> > and virtually all virtualization solutions.
> 
> That seems like a simple matter of fixing the fb driver to handle the
> 32->24 shadow setup automatically?

Ah right.  If the shadow module in the X server would do the
conversion, things would just work.  The current code doesn't support
that though isn't it?
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH xserver 0/5] Remove (most) 24bpp support

2016-12-08 Thread Mark Kettenis
> From: Adam Jackson 
> Date: Thu, 08 Dec 2016 13:34:11 -0500
> 
> On Thu, 2016-12-08 at 09:19 -0800, Keith Packard wrote:
> > > Adam Jackson  writes:
> > > Assuming those bugs are gone, the only configurations that would break
> > > are the two drivers mentioned in the commit message, and any manual
> > > configuration that had set 24bpp in order to fit in available VRAM.
> > 
> > At a minimum, forcing those to 16bpp would at least keep the screen lit
> > up.
> 
> It would, but xfree86 isn't set up to do that. The driver tells xfree86
> which depth/bpp it will use, then validates modes. Worse, the check
> about whether to fail because there are no modes is in the driver, so
> the best the server could do is notice that PreInit failed and retry at
> 16bpp, which seems... icky.
> 
> > > first two we can easily fix in code, the second is a little rougher.
> > > Personally, given how antique 24bpp-supporting hardware is at this
> > > point, I'm okay with just a release note saying that some
> > > configurations might break.
> > 
> > I thought there were a pile of 24bpp chips in recent server boxes?
> 
> Said devices have KMS drivers so modesetting gets this right already.
> Though the "antique" comment still kind of applies, the G200 core is
> old enough to vote, and the emulated cirrus in qemu is at least two
> years older than that.

The KMS drivers in question are unfortunately GPL'ed, which for some
non-Linux OSes is a bit of an issue.  It is also unfortunate that this
means you won't be able to run X directly on the framebuffer set up by
UEFI firmware on such servers and virtually all virtualization
solutions.

Not something I care very deeply about (I think interacting with a
server through a serial port is far superior).  But oters may beg to
differ.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH libpciaccess] Support for 32-bit domains

2016-08-12 Thread Mark Kettenis
> From: Eric Anholt 
> Date: Thu, 11 Aug 2016 23:32:04 -0700
> 
> Keith Busch  writes:
> 
> > On Thu, Aug 11, 2016 at 12:03:30PM -0700, Eric Anholt wrote:
> >> Given that libpciaccess allocates the struct, and the struct isn't
> >> embedded in any other public structs, I think you could just tack a
> >> "uint16_t domain_high;" at the end of the struct and fill that with the
> >> high bits.
> >
> > Hmm, then we have to change every usage of domain to combine the two,
> > and every usage thereafter must do the same.
> >
> > How about tacking 'uint32_t domain' to the end for the full domain,
> > rename the existing 'uint16_t domain' to 'domain_low', and then just
> > copy the lower bits from the full domain into there? That should satisfy
> > legacy usage, and everywhere else will automatically use the full value.
> >
> > Does something like this look more reasonable?
> 
> That looks even better to me!  Let's give people a few days to comment,
> but I like this solution.

'struct pci_device' isn't an opaque struct, and the "documentation"
never says that you can't build your own.  It isn't unreasonable to
expect to be able to fill in the domain, bus, device and function
yourself and be able to call pci_device_cfg_read().  It also means
that code can copy a 'struct pci_device' instance around.  Code using
the old ABI would lose the new 32-bit domain field in that case.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH libpciaccess] Support for 32-bit domains

2016-08-11 Thread Mark Kettenis
> Date: Wed, 10 Aug 2016 22:58:34 +
> From: Keith Busch <keith.bu...@intel.com>
> 
> On Thu, Aug 11, 2016 at 12:17:48AM +0200, Mark Kettenis wrote:
> > > From: Keith Busch <keith.bu...@intel.com>
> > > Date: Tue,  9 Aug 2016 15:39:35 -0600
> > > 
> > > A pci "domain" is a purely software construct, and need not be limited
> > > to the 16-bit ACPI defined segment. The Linux kernel currently supports
> > > 32-bit domains, so this patch matches up with those capabilities to make
> > > it usable on systems exporting such domains.
> > 
> > Well, yes, and no.  PCI domains really are a hardware property.  There
> > are systems out there that have multiple PCI host bridges, each with
> > their own separate config/mem/io address spaces and bus numbers
> > starting with 0 for the root of the PCI bus hierarchy.  Pretty much
> > any 64-bit SPARC system falls into this category, and I've seen
> > PA-RISC and POWER systems with such a hardware configuration as well.
> > And given that HP's Itanium line developed from their PA-RISC hardware
> > I expect them to be in the same boat.  There is no domain numering
> > scheme that is implied by the hardware though, do domain numbers are
> > indeed purely a software construct.  On OpenBSD we simply number the
> > domains sequentially.  So 16 bits are more than enough.
> > 
> > The Linux kernel could do the same with ACPI segments (which may or
> > may not map onto true PCI domains).  That would remove the need to
> > change te libpciaccess ABI.  Although I can see that having a 1:1
> > mapping of ACPI segments to domains is something that is nice to have.
> 
> I can give a little more background on where this is coming from. The
> Intel x86 Skylake E-Series has an option to provide a number of additional
> "host bridges". The "vmd" driver in the Linux mainline kernel supports
> this hardware.
> 
> For better or worse, Linux does match the segment number to the
> domain. The "vmd" hardware is not a segment though, and decoupling _SEG
> from domain numbers in the Linux kernel proved difficult and unpopular
> with the devs. To avoid the potential clash from letting vmd hardware
> occupy the same range that an ACPI _SEG could define, we let VMD start
> at 0x1.
> 
> I've already patched pci-utils (provides lspci, setpci) to allow
> this, but I missed this library at the time (my dev machines are all
> no-graphics). Right now, a system with VMD segfaults startx. I believe
> it's down to the error handling that frees the pci devices and sets
> pci_system->devices to NULL. It looks like this is dereferenced later,
> but I'm very unfamiliar with the code base and not sure which repo to
> look into.
> 
> If preserving libpciaccess ABI is of high importance, I think the only
> other option is to just ignore domains requiring 32-bits.  That should
> be okay for us since X should not need the devices in these domains
> anyway. I'll send a patch for consideration.

To be honest, bumping the shared library major is perfectly fine with
me.  The current "thou shalt never bump the shared library major"
mantra that seems to has taken hold of the Linux community makes no
sense.  Why have a shared library major at all if you can never bump
it?

In any case the impact of bumping the libpciaccess shared library
should be fairly limited as it's not widely used outside of X.  But I
fear it does affect the driver API.

> > However, I do believe that ACPI segments are actually encoded as
> > 64-bit integers.  So such a 1:1 mapping may not be achievable.
> 
> True, though only 16 bits are used (ACPI 6, section 6.5.6):
> 
>   "
>   The lower 16 bits of _SEG returned integer is the PCI Segment Group number.
>   Other bits are reserved.
>   "

At least whoever designed the libpciaccess interface had some reason
to pick a 16-bit integer for the domain.

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH libpciaccess] Support for 32-bit domains

2016-08-10 Thread Mark Kettenis
> From: Keith Busch 
> Date: Tue,  9 Aug 2016 15:39:35 -0600
> 
> A pci "domain" is a purely software construct, and need not be limited
> to the 16-bit ACPI defined segment. The Linux kernel currently supports
> 32-bit domains, so this patch matches up with those capabilities to make
> it usable on systems exporting such domains.

Well, yes, and no.  PCI domains really are a hardware property.  There
are systems out there that have multiple PCI host bridges, each with
their own separate config/mem/io address spaces and bus numbers
starting with 0 for the root of the PCI bus hierarchy.  Pretty much
any 64-bit SPARC system falls into this category, and I've seen
PA-RISC and POWER systems with such a hardware configuration as well.
And given that HP's Itanium line developed from their PA-RISC hardware
I expect them to be in the same boat.  There is no domain numering
scheme that is implied by the hardware though, do domain numbers are
indeed purely a software construct.  On OpenBSD we simply number the
domains sequentially.  So 16 bits are more than enough.

The Linux kernel could do the same with ACPI segments (which may or
may not map onto true PCI domains).  That would remove the need to
change te libpciaccess ABI.  Although I can see that having a 1:1
mapping of ACPI segments to domains is something that is nice to have.
However, I do believe that ACPI segments are actually encoded as
64-bit integers.  So such a 1:1 mapping may not be achievable.

> Reported-by: Pawel Baldysiak 
> Signed-off-by: Keith Busch 
> ---
>  include/pciaccess.h | 2 +-
>  src/linux_sysfs.c   | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/pciaccess.h b/include/pciaccess.h
> index 1d7aa4b..93ed76f 100644
> --- a/include/pciaccess.h
> +++ b/include/pciaccess.h
> @@ -321,7 +321,7 @@ struct pci_device {
>   * the domain will always be zero.
>   */
>  /*@{*/
> -uint16_tdomain;
> +uint32_tdomain;
>  uint8_t bus;
>  uint8_t dev;
>  uint8_t func;
> diff --git a/src/linux_sysfs.c b/src/linux_sysfs.c
> index 6367b11..c0c8f2a 100644
> --- a/src/linux_sysfs.c
> +++ b/src/linux_sysfs.c
> @@ -159,7 +159,7 @@ populate_entries( struct pci_system * p )
>   (struct pci_device_private *) >devices[i];
>  
>  
> - sscanf(devices[i]->d_name, "%04x:%02x:%02x.%1u",
> + sscanf(devices[i]->d_name, "%x:%02x:%02x.%1u",
>  & dom, & bus, & dev, & func);
>  
>   device->base.domain = dom;
> -- 
> 2.7.2
> 
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
> 
> 
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH] modesetting: fix build with glamor disabled.

2016-05-05 Thread Mark Kettenis
> From: Dave Airlie <airl...@gmail.com>
> Date: Fri,  6 May 2016 05:41:58 +1000
> 
> From: Dave Airlie <airl...@redhat.com>
> 
> Fix build without --enable-glamor.
> 
> Caught by the arm tinderbox.
> 
> Signed-off-by: Dave Airlie <airl...@redhat.com>

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
>  hw/xfree86/drivers/modesetting/drmmode_display.c | 4 
>  1 file changed, 4 insertions(+)
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH xserver 3/4] dix: Convert ResourceClientBits to a variable

2016-05-03 Thread Mark Kettenis
> From: Adam Jackson 
> Date: Tue,  3 May 2016 11:24:36 -0400
> 
> This turns out to be a remarkably hot function in XID lookup.  Consider
> the expansion of CLIENT_ID. We start with:
> 
>  #define RESOURCE_CLIENT_BITS ResourceClientBits()
>  #define RESOURCE_AND_CLIENT_COUNT 29
>  #define CLIENTOFFSET (RESOURCE_AND_CLIENT_COUNT - RESOURCE_CLIENT_BITS)
> 
> So that's effectively:
> 
>  #define CLIENTOFFSET (29 - RCB())
> 
> Then:
> 
>  #define RESOURCE_CLIENT_MASK \
> ((1 << RESOURCE_CLIENT_BITS) - 1) << CLIENTOFFSET
> 
> Is effectively:
> 
>  #define RESOURCE_CLIENT_MASK (((1 << RCB()) - 1) << (29 - RCB())
> 
> And:
> 
>  #define CLIENT_BITS(id) ((id) & RESOURCE_CLIENT_MASK)
>  #define CLIENT_ID(id) ((int)(CLIENT_BITS(id) >> CLIENTOFFSET))
> 
> Is:
> 
>  #define CLIENT_ID(id) \
> ((id) & (((1 << RCB()) - 1) << (29 - RCB( >> (29 - RCB())
> 
> Since ResourceClientBits isn't marked __attribute__((pure)), the
> optimizer isn't allowed to CSE those function calls, since it doesn't
> know they don't have side effects. And, worse, ResourceClientBits
> doesn't just return an integer, it computes the ilog2 afresh every time.
> 
> Instead we can just compute the value we need at argv parse.
> 
> v2: Ensure ResourceClientBits is initialized even when we don't pass
> -maxclients (Alan Coopersmith)

Hmm.  This is marked as _X_EXPORT, so presumably part of the driver
ABI.  Exporting variables like this is generally a bad idea, at least
for ELF DSOs.  Copy relocations and all that.

So isn't it better to keep this as a function call?  You'd still make
it return a precalculated value instead of doing the ilog2 on each
call.  Marking it as a "pure" function in addition to that should fine
too.

Cheers,

Mark


> Signed-off-by: Adam Jackson 
> ---
>  dix/resource.c | 23 ---
>  include/resource.h |  4 ++--
>  os/osinit.c|  1 +
>  os/utils.c | 13 +
>  4 files changed, 16 insertions(+), 25 deletions(-)
> 
> diff --git a/dix/resource.c b/dix/resource.c
> index ad71b24..f06bf58 100644
> --- a/dix/resource.c
> +++ b/dix/resource.c
> @@ -600,29 +600,6 @@ CreateNewResourceClass(void)
>  
>  static ClientResourceRec clientTable[MAXCLIENTS];
>  
> -static unsigned int
> -ilog2(int val)
> -{
> -int bits;
> -
> -if (val <= 0)
> - return 0;
> -for (bits = 0; val != 0; bits++)
> - val >>= 1;
> -return bits - 1;
> -}
> -
> -/*
> - * ResourceClientBits
> - *Returns the client bit offset in the client + resources ID field
> - */
> -
> -unsigned int
> -ResourceClientBits(void)
> -{
> -return (ilog2(LimitClients));
> -}
> -
>  /*
>   * InitClientResources
>   *When a new client is created, call this to allocate space
> diff --git a/include/resource.h b/include/resource.h
> index 5871a4c..3cce6c6 100644
> --- a/include/resource.h
> +++ b/include/resource.h
> @@ -85,10 +85,10 @@ typedef uint32_t RESTYPE;
>  #define RT_LASTPREDEF((RESTYPE)9)
>  #define RT_NONE  ((RESTYPE)0)
>  
> -extern _X_EXPORT unsigned int ResourceClientBits(void);
> +extern _X_EXPORT unsigned int ResourceClientBits;
>  /* bits and fields within a resource id */
>  #define RESOURCE_AND_CLIENT_COUNT   29  /* 29 bits for XIDs */
> -#define RESOURCE_CLIENT_BITSResourceClientBits() /* client field 
> offset */
> +#define RESOURCE_CLIENT_BITSResourceClientBits /* client field 
> offset */
>  #define CLIENTOFFSET (RESOURCE_AND_CLIENT_COUNT - RESOURCE_CLIENT_BITS)
>  /* resource field */
>  #define RESOURCE_ID_MASK ((1 << CLIENTOFFSET) - 1)
> diff --git a/os/osinit.c b/os/osinit.c
> index 54b39a0..92aced3 100644
> --- a/os/osinit.c
> +++ b/os/osinit.c
> @@ -88,6 +88,7 @@ int limitNoFile = -1;
>  
>  /* The actual user defined max number of clients */
>  int LimitClients = LIMITCLIENTS;
> +unsigned int ResourceClientBits;
>  
>  static OsSigWrapperPtr OsSigWrapper = NULL;
>  
> diff --git a/os/utils.c b/os/utils.c
> index e48d9f8..8b81d39 100644
> --- a/os/utils.c
> +++ b/os/utils.c
> @@ -513,6 +513,18 @@ AdjustWaitForDelay(void *waitTime, unsigned long 
> newdelay)
>  }
>  }
>  
> +static unsigned int
> +ilog2(int val)
> +{
> +int bits;
> +
> +if (val <= 0)
> + return 0;
> +for (bits = 0; val != 0; bits++)
> + val >>= 1;
> +return bits - 1;
> +}
> +
>  void
>  UseMsg(void)
>  {
> @@ -1061,6 +1073,7 @@ ProcessCommandLine(int argc, char *argv[])
>  FatalError("Unrecognized option: %s\n", argv[i]);
>  }
>  }
> +ResourceClientBits = ilog2(LimitClients);
>  }
>  
>  /* Implement a simple-minded font authorization scheme.  The authorization
> -- 
> 2.7.4
> 
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
> 
> 
___

Re: [PATCH xserver] ad hoc fix for mmap's truncated offset parameter on 32bit

2016-04-29 Thread Mark Kettenis
> Date: Fri, 29 Apr 2016 09:34:08 +0200
> From: Julien Cristau <jcris...@debian.org>
> 
> On Thu, Apr 28, 2016 at 16:10:01 +0200, Mark Kettenis wrote:
> 
> > > From: Stefan Dirsch <sndir...@suse.de>
> > > Date: Thu, 28 Apr 2016 14:35:00 +0200
> > > 
> > > Builtin modesetting driver didn't work on 32bit on cirrus KMS. See
> > > https://bugzilla.suse.com/show_bug.cgi?id=917385 for more details.
> > 
> > I think selectively compiling with -D_FILE_OFFSET_BITS=64 is a
> > seriously bad idea.  Either evrything should be compiled with that
> > switch, or nothing.  And adding a #define like that after including
> > some other header files is even worse.
> > 
> I think the right fix for the referenced bug was
> https://cgit.freedesktop.org/xorg/xserver/commit/?id=4962c8c08842d9d3ca66d254b1ce4cacc4fb3756
> so this patch should not be necessary anyway.

Yup.  That should do it.  I agree that this patch shouldn't be
necessary anymore.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH xserver] ad hoc fix for mmap's truncated offset parameter on 32bit

2016-04-28 Thread Mark Kettenis
> From: Stefan Dirsch 
> Date: Thu, 28 Apr 2016 14:35:00 +0200
> 
> Builtin modesetting driver didn't work on 32bit on cirrus KMS. See
> https://bugzilla.suse.com/show_bug.cgi?id=917385 for more details.

I think selectively compiling with -D_FILE_OFFSET_BITS=64 is a
seriously bad idea.  Either evrything should be compiled with that
switch, or nothing.  And adding a #define like that after including
some other header files is even worse.

> Signed-off-by: Stefan Dirsch 
> ---
>  hw/xfree86/drivers/modesetting/dumb_bo.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/hw/xfree86/drivers/modesetting/dumb_bo.c 
> b/hw/xfree86/drivers/modesetting/dumb_bo.c
> index cf13f0a..42abe8b 100644
> --- a/hw/xfree86/drivers/modesetting/dumb_bo.c
> +++ b/hw/xfree86/drivers/modesetting/dumb_bo.c
> @@ -29,6 +29,12 @@
>  #include "dix-config.h"
>  #endif
>  
> +/*
> + * ad hoc fix for mmap's truncated offset parameter on 32bit
> + * see also https://bugzilla.suse.com/show_bug.cgi?id=917385
> + */
> +#define _FILE_OFFSET_BITS 64
> +
>  #include "dumb_bo.h"
>  
>  #include 
> -- 
> 2.6.2
> 
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
> 
> 
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [xserver-xorg][PATCH 1/1] xwayland: ftruncate if posix_fallocate fails

2016-04-26 Thread Mark Kettenis
> Date: Tue, 26 Apr 2016 15:58:47 +0300
> From: Ian Ray 
> 
> On Mon, Apr 25, 2016 at 04:49:07PM +0300, Pekka Paalanen wrote:
> > On Mon, 25 Apr 2016 15:47:09 +0300
> > Ian Ray  wrote:
> > 
> > > On a slow system that is configured with SMART_SCHEDULE_POSSIBLE, large
> > > posix_fallocate() requests may be interrupted by the SmartScheduleTimer
> > > (SIGALRM) continuously. Fallback to ftruncate if posix_fallocate fails.
> > > 
> > > Signed-off-by: Ian Ray 
> > > ---
> > >  hw/xwayland/xwayland-shm.c | 9 ++---
> > >  1 file changed, 6 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/hw/xwayland/xwayland-shm.c b/hw/xwayland/xwayland-shm.c
> > > index e8545b3..342b723 100644
> > > --- a/hw/xwayland/xwayland-shm.c
> > > +++ b/hw/xwayland/xwayland-shm.c
> > > @@ -142,9 +142,12 @@ os_create_anonymous_file(off_t size)
> > >  #ifdef HAVE_POSIX_FALLOCATE
> > >  ret = posix_fallocate(fd, 0, size);
> > >  if (ret != 0) {
> > > -close(fd);
> > > -errno = ret;
> > > -return -1;
> > > +/* Fallback to ftruncate in case of failure. */
> > > +ret = ftruncate(fd, size);
> > > +if (ret < 0) {
> > > +close(fd);
> > > +return -1;
> > > +}
> > >  }
> > >  #else
> > >  ret = ftruncate(fd, size);
> > 
> > Hi Ian,
> > 
> > I indeed think this is a bit harsh. The first EINTR may happen on any
> > system, so the very least we should try twice before giving up.
> > 
> > I'd also like to see some comments on whether fallocate making no
> > progress when repeatedly restarted is normal or not. Maybe that should
> > be asked on a kernel list?
> > 
> > 
> > Thanks,
> > pq
> 
> Hi Pekka,
> 
> Referring to  http://lxr.free-electrons.com/source/mm/shmem.c#L2235 and 
> http://lkml.iu.edu/hypermail/linux/kernel/1603.0/03523.html,  fallocate
> is _not_ expected to make progress when repeatedly restarted.

The crucial bit being that it does an explicit rollback if it gets
EINTR.  Not very well thought out if you ask me (it is as if an
interrupted write call would return EINTR and not make progress), but
probably something that is cast into stone and cannot be fixed
anymore.

That last message you cite is interesting.  POSIX states that system
calls that would otherwise fail with EINTR should be automatically
restarted for signals that have the SA_RESTART flag set.  So that
means that the Linux kernel should indeed return -ERESTARTSYS here.  But since the Xorg smart
scheduler code sets SA_RESTART, that means you'd get an infinite loop
on the systems where (allegedly) posix_fallocate() never completes in
presence of the smart scheduler's repeated SIGALRM.

> Taking Yuriy's reply into account, there seem to be a couple
> of options:
> 
> 1. try fallocate() a couple of times and in case of EINTR fallback
>to ftruncate
> 
> 2. mask SIGALRM while calling fallocate

That probably is the right thing to do here.  Having the smart
scheduler miss a "tick" isn't a disaster, and the other suggestions
(looping while EINTR is returned, looping over smaller chunks) would
block the request just as well.

Or perhaps take a step back and ask yourselves if using
posix_fallocate() here is the right thing to do in the first place.
SIGBUS really can't be prevented as a malicious client can still
ftruncate() the file descriptor.  On OpenBSD I implemented a mmap
option that prevents SIGBUS on access of the memory even if the
request for backing store cannot be fulfilled.  But that isn't
portable of course.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: [PATCH] Avoid 10x7 heuristic, handled by server

2016-04-26 Thread Mark Kettenis
> From: Stefan Dirsch 
> Date: Tue, 26 Apr 2016 11:45:15 +0200
> 
> From: Frederic Crozat 
> 
> Remove the 10x7 heuristic, since the server has equivalent code now.
> Instead, disable "acceleration" under qemu, since taking the hypercall
> trap is really quite expensive and you're better off doing noaccel.
> (Fedora)

Same comment (about magic numbers) applies here.

A #define for the PCI subvendor is probably self-documenting.
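A sketch of that suggestion; the macro and function names below are illustrative, not taken from the driver source (0x1af4 is the Red Hat/Qumranet ID used by qemu's virtio devices, 0x5853 is XenSource, as used in the companion 16bpp patch):

```c
/* Hypothetical names -- self-documenting replacements for the
 * magic subvendor numbers in the patch below. */
#define PCI_SUBVENDOR_QUMRANET  0x1af4  /* qemu/KVM virtio devices */
#define PCI_SUBVENDOR_XENSOURCE 0x5853  /* XenSource emulated gfx  */

static int
is_virtualized(unsigned short subvendor_id)
{
    return subvendor_id == PCI_SUBVENDOR_QUMRANET ||
           subvendor_id == PCI_SUBVENDOR_XENSOURCE;
}
```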

> ---
>  src/alp_driver.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/src/alp_driver.c b/src/alp_driver.c
> index bd5e52f..da31321 100644
> --- a/src/alp_driver.c
> +++ b/src/alp_driver.c
> @@ -774,6 +774,13 @@ AlpPreInit(ScrnInfoPtr pScrn, int flags)
>   else
>   xf86SetDDCproperties(pScrn,xf86PrintEDID(
>xf86DoEDID_DDC2(XF86_SCRN_ARG(pScrn),pCir->I2CPtr1)));
> +
> +#ifdef XSERVER_LIBPCIACCESS
> + if (!pScrn->monitor->DDC &&
> + ((pCir->PciInfo->subvendor_id & 0x) == 0x1af4)) {
> + pCir->NoAccel = TRUE;
> + }
> +#endif
>   
>   /* Probe the possible LCD display */
>   AlpProbeLCD(pScrn);
> -- 
> 2.6.2
> 

Re: [PATCH] Use 16bpp when running in virt and on XenSource gfx

2016-04-26 Thread Mark Kettenis
> From: Stefan Dirsch 
> Date: Tue, 26 Apr 2016 11:45:38 +0200
> 
> From: Frederic Crozat 
> 
> Due to graphics corruption default to 16bpp in virt instead of 24 (Fedora).
> Do the same on XenSource gfx, which suffers from the same issue.

I think using magic numbers like that is unhelpful, especially without
comments.

> ---
>  src/alp_driver.c | 11 ++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/src/alp_driver.c b/src/alp_driver.c
> index da31321..6abb476 100644
> --- a/src/alp_driver.c
> +++ b/src/alp_driver.c
> @@ -466,6 +466,7 @@ AlpPreInit(ScrnInfoPtr pScrn, int flags)
>   vgaHWPtr hwp;
>   MessageType from, from1;
>   int i;
> + int defaultdepth;
>   int depth_flags;
>   ClockRangePtr clockRanges;
>   char *s;
> @@ -551,11 +552,19 @@ AlpPreInit(ScrnInfoPtr pScrn, int flags)
>   depth_flags |= Support32bppFb |
>  SupportConvert32to24 |
>  PreferConvert32to24;
> +
> + /* use 16bpp in virt */
> + if (((pCir->PciInfo->subvendor_id & 0x) == 0x1af4) ||
> + ((pCir->PciInfo->subvendor_id & 0x) == 0x5853))
> + defaultdepth = 16;
> + else
> + defaultdepth = 24;
> +
>   /*
>* The first thing we should figure out is the depth, bpp, etc.
>* We support both 24bpp and 32bpp layouts, so indicate that.
>*/
> - if (!xf86SetDepthBpp(pScrn, 0, 0, 24, depth_flags)) {
> + if (!xf86SetDepthBpp(pScrn, 0, 0, defaultdepth, depth_flags)) {
>   return FALSE;
>   } else {
>   /* Check that the returned depth is one we support */
> -- 
> 2.6.2
> 

Re: [PATCH xserver] os: Treat ssh as a non-local client (v3)

2016-01-12 Thread Mark Kettenis
> From: Michel Dänzer <mic...@daenzer.net>
> 
> On 17.12.2015 16:41, Michel Dänzer wrote:
> > From: Adam Jackson <a...@redhat.com>
> > 
> > By the time we get to ComputeLocalClient, we've already done
> > NextAvailableClient → ReserveClientIds → DetermineClientCmd (assuming
> > we're built with #define CLIENTIDS), so we can look up the name of the
> > client process and refuse to treat ssh's X forwarding as if it were
> > local.
> > 
> > v2: (Michel Dänzer)
> > * Only match "ssh" itself, not other executable names starting with
> >   that prefix.
> >     * Ignore executable path for the match.
> > v3: (Michel Dänzer)
> > * Use GetClientCmdName (Mark Kettenis)
> > * Perform check on Windows as well, but only ignore path on Cygwin
> >   (Martin Peres, Emil Velikov, Jon Turney)
> > 
> > Signed-off-by: Adam Jackson <a...@redhat.com>
> > Signed-off-by: Michel Dänzer <michel.daen...@amd.com>
> 
> Martin, Mark, Jon, any other objections? If not, can we get a Reviewed-by?

Must admit I'm not really thrilled by the diff.  So I'll abstain.

> > ---
> >  os/access.c | 38 +++---
> >  1 file changed, 35 insertions(+), 3 deletions(-)
> > 
> > diff --git a/os/access.c b/os/access.c
> > index 10a48c3..3ea2e21 100644
> > --- a/os/access.c
> > +++ b/os/access.c
> > @@ -173,6 +173,10 @@ SOFTWARE.
> >  
> >  #endif  /* WIN32 */
> >  
> > +#if !defined(WIN32) || defined(__CYGWIN__)
> > +#include 
> > +#endif
> > +
> >  #define X_INCLUDE_NETDB_H
> >  #include 
> >  
> > @@ -1081,9 +1085,8 @@ ResetHosts(const char *display)
> >  }
> >  }
> >  
> > -/* Is client on the local host */
> > -Bool
> > -ComputeLocalClient(ClientPtr client)
> > +static Bool
> > +xtransLocalClient(ClientPtr client)
> >  {
> >  int alen, family, notused;
> >  Xtransaddr *from = NULL;
> > @@ -1116,6 +1119,35 @@ ComputeLocalClient(ClientPtr client)
> >  return FALSE;
> >  }
> >  
> > +/* Is client on the local host */
> > +Bool
> > +ComputeLocalClient(ClientPtr client)
> > +{
> > +const char *cmdname = GetClientCmdName(client);
> > +
> > +if (!xtransLocalClient(client))
> > +return FALSE;
> > +
> > +/* If the executable name is "ssh", assume that this client connection
> > + * is forwarded from another host via SSH
> > + */
> > +if (cmdname) {
> > +Bool ret;
> > +
> > +#if !defined(WIN32) || defined(__CYGWIN__)
> > +char *cmd = strdup(cmdname);
> > +ret = strcmp(basename(cmd), "ssh") != 0;
> > +free(cmd);
> > +#else
> > +ret = strcmp(cmdname, "ssh") != 0;
> > +#endif
> > +
> > +return ret;
> > +}
> > +
> > +return TRUE;
> > +}
> > +
> >  /*
> >   * Return the uid and all gids of a connected local client
> >   * Allocates a LocalClientCredRec - caller must call FreeLocalClientCreds
> > 
> 
> 
> -- 
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer
> 

Re: [PATCH xserver 4/8] Create a threaded mechanism for input [v3]

2015-12-12 Thread Mark Kettenis
> From: Keith Packard <kei...@keithp.com>
> Date: Fri, 11 Dec 2015 15:37:24 -0800
> 
> Keith Packard <kei...@keithp.com> writes:
> 
> > Mark Kettenis <mark.kette...@xs4all.nl> writes:
> >
> >> However, is there a reason why you didn't use the
> >> PTHREAD_MUTEX_RECURSIVE mutex type that is standardized by POSIX?
> >
> > Sorry, my documentation only mentions a non-portable version of
> > recursive mutexes. I didn't even know that recursive mutexes had been
> > standardized. Looks like we've got options.
> 
> So, how about this preference plan, picking the first one which works:
> 
>  1) PTHREAD_MUTEX_RECURSIVE
>  2) PTHREAD_MUTEX_RECURSIVE_NP
>  3) PTHREAD_MUTEX_NORMAL + __thread variable
>  4) PTHREAD_MUTEX_NORMAL + thread specific variables

I'd say that would be overkill.  The use of recursive mutexes is
somewhat controversial, which is almost certainly why they weren't part
of POSIX initially.  But they were part of Unix98 and present as an
XSI extension in POSIX until they were moved to base in 2008.

So unless there is real evidence that there are still supported
systems out there that don't provide PTHREAD_MUTEX_RECURSIVE, I'd stop
at 1) and disable the input thread support on systems that don't
provide it.  Adding fallbacks just increases the maintenance burden.

Must admit that I have an agenda here.  I'd like to avoid 3) as this
might encourage people to use __thread in other places in the Xorg
codebase.

Re: [PATCH xserver] os: Treat ssh as a non-local client (v2)

2015-12-12 Thread Mark Kettenis
> From: Michel Dänzer 
> Date: Fri, 11 Dec 2015 11:28:51 +0900
> 
> By the time we get to ComputeLocalClient, we've already done
> NextAvailableClient → ReserveClientIds →
> DetermineClientCmd (assuming we're built with #define CLIENTIDS), so
> we can look up the name of the client process and refuse to treat
> ssh's X forwarding as if it were local.
> 
> v2: (Michel Dänzer)
> * Only match "ssh" itself, not other executable names starting with
>   that prefix.
> * Ignore executable path for the match.
> 
> Signed-off-by: Adam Jackson 
> Signed-off-by: Michel Dänzer 
> ---
>  os/access.c | 27 ---
>  1 file changed, 24 insertions(+), 3 deletions(-)
> 
> diff --git a/os/access.c b/os/access.c
> index 10a48c3..6ff6977 100644
> --- a/os/access.c
> +++ b/os/access.c
> @@ -101,6 +101,7 @@ SOFTWARE.
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #ifndef NO_LOCAL_CLIENT_CRED
>  #include 
> @@ -1081,9 +1082,8 @@ ResetHosts(const char *display)
>  }
>  }
>  
> -/* Is client on the local host */
> -Bool
> -ComputeLocalClient(ClientPtr client)
> +static Bool
> +xtransLocalClient(ClientPtr client)
>  {
>  int alen, family, notused;
>  Xtransaddr *from = NULL;
> @@ -1116,6 +1116,27 @@ ComputeLocalClient(ClientPtr client)
>  return FALSE;
>  }
>  
> +/* Is client on the local host */
> +Bool
> +ComputeLocalClient(ClientPtr client)
> +{
> +if (!xtransLocalClient(client))
> +return FALSE;
> +
> +#ifndef WIN32
> +/* If the executable name is "ssh", consider this client not local */
> +if (client->clientIds->cmdname) {

Isn't it better to use GetClientCmdName() here to avoid a
potential null-pointer dereference?  Actually include/client.h tells
you you should ;).

Re: [PATCH xserver 4/8] Create a threaded mechanism for input [v3]

2015-12-12 Thread Mark Kettenis
> From: Keith Packard <kei...@keithp.com>
> Date: Sat, 12 Dec 2015 13:19:59 -0800
> 
> Mark Kettenis <mark.kette...@xs4all.nl> writes:
> 
> > I'd say that would be overkill.  The use of recursive mutexes is
> > somewhat controversial, which is almost certainly why they weren't part
> > of POSIX initially.  But they were part of Unix98 and present as an
> > XSI extension in POSIX until they were moved to base in 2008.
> 
> I would argue that recursive mutexes aren't 'controversial', they're
> just a bad idea used by lazy programmers. However, given that the
> existign code uses what is effectively a recursive mutex, converting to
> threaded input *and* eliminating the mutex recursion at the same time
> seems like a worse idea.

You're excused ;).

> > So unless there is real evidence that there are still supported
> > systems out there that don't provide PTHREAD_MUTEX_RECURSIVE, I'd stop
> > at 1) and disable the input thread support on systems that don't
> > provide it.  Adding fallbacks just increases the maintenance burden.
> 
> Linux doesn't appear to provide PTHREAD_RECURSIVE_MUTEX_INITIALIZER,
> which seems rather odd to me.

It isn't odd, because PTHREAD_RECURSIVE_MUTEX_INITIALIZER isn't in the
standard.  You'll have to explicitly initialize the mutex with
pthread_mutex_init() to get a recursive mutex.

> > Must admit that I have an agenda here.  I'd like to avoid 3) as this
> > might encourage people to use __thread in other places in the Xorg
> > codebase.
> 
> Mesa already uses __thread extensively, and it looks to provide
> significant performance benefits. I did some simple benchmarking today
> with PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP vs __thread, and my test
> program ran twice as fast using __thread compared with recursive mutexes
> (see attached).

You can always prove a point with an appropriately constructed
microbenchmark ;).  Seriously though, I do hope that the overhead of
the recursive mutex isn't noticable in the Xorg input thread.

> So, what I suggest is that we use recursive mutexes where supported,
> falling back to __thread/__declspec(thread). Two paths seems like a fine
> plan to me, offering portability to a wider set of systems than either
> one alone, while preferring the POSIX standard where supported.

I still don't see the point as I expect all systems that support
__thread to support recursive mutexes as well.

Anyway, the diff below won't fly as
PTHREAD_RECURSIVE_MUTEX_INITIALIZER isn't in the standard and
therefore not widely available.  On OpenBSD we don't even have
PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP, and supporting such a thing
would be difficult.

Cheers,

Mark

> From 55f483cff8d660e444a65026e57f793c06a66ce3 Mon Sep 17 00:00:00 2001
> From: Keith Packard <kei...@keithp.com>
> Date: Thu, 10 Dec 2015 11:36:34 -0800
> Subject: [PATCH xserver 05/22] Disable input thread code with
>  --disable-input-thread. RECURSIVE_MUTEX support
> 
> This is a pair of changes to the threaded input code which should be
> merged with that patch; I'm publishing it separately for review before
> doing that.
> 
> The first change is to respect the --disable-input-thread
> configuration option and disable all of the threaded input code in
> that case. What remains are just stubs that get input drivers to work
> in this situation without changing the API or ABI.
> 
> The second is to add ax_tls.m4 which detects
> PTHREAD_RECURSIVE_MUTEX_INITIALIZER and either __thread or
> __declspec(thread) support in the system and to use that to figure out
> how to get recursive mutex support in the server. We prefer
> PTHREAD_RECURSIVE_MUTEX_SUPPORT where available, falling back to
> __thread/__declspec(thread) support if necessary.
> 
> Signed-off-by: Keith Packard <kei...@keithp.com>
> 
> fixup for thread variables
>
>  configure.ac| 21 ++---
>  hw/xfree86/sdksyms.sh   |  4 ---
>  include/dix-config.h.in |  9 ++
>  include/input.h | 23 ++
>  m4/ax_tls.m4| 78 ++++++++++++++++++++++++++++++++++++++++
>  os/inputthread.c| 80 ++++++++++++++++++++++++++++++++++++++++++
>  6 files changed, 187 insertions(+), 28 deletions(-)
>  create mode 100644 m4/ax_tls.m4

Re: [PATCH xserver 4/8] Create a threaded mechanism for input [v3]

2015-12-11 Thread Mark Kettenis
> From: Keith Packard <kei...@keithp.com>
> Date: Thu, 10 Dec 2015 07:18:23 -0800
> 
> Mark Kettenis <mark.kette...@xs4all.nl> writes:
> 
> > Ugh.  Exporting global variables as part of the ABI is generally not
> > such a good idea.  Perhaps it is better to use functions to acquire
> > and release the input mutex instead?
> 
> Yeah, it's not a performance sensitive operation, so hiding the globals
> in a function is a good idea. I'll leave the stub versions as static
> inline so I don't have to move them to the new file (inputthread.c)
> later, just redefine them in the header file.
> 
> > Also, using TLS (i.e. __thread variables) isn't portable.  That
> > mechanism certainly isn't supported by all platforms supported by
> > Xorg.
> 
> I can add some autoconf magic to autodetect whether this is supported
> and disable threaded input where it isn't.

Is that really necessary?  If I understand the code correctly, you're
using the __thread variable just to implement a recursive mutex.
There are ways to do that without using a per-thread counter, which
would avoid using the non-portable __thread storage class.

However, is there a reason why you didn't use the
PTHREAD_MUTEX_RECURSIVE mutex type that is standardized by POSIX?
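One way to build a recursive lock without a per-thread counter is to track the owning thread in the lock itself. An illustrative single-lock sketch, glossing over the memory-ordering subtleties a production version would need:

```c
#include <pthread.h>

/* Hypothetical recursive lock built from a plain mutex: the owner
 * field lets a thread that already holds the mutex re-enter by
 * bumping a depth counter instead of blocking on itself. */
struct rec_lock {
    pthread_mutex_t mutex;
    pthread_t owner;
    int count;
};

static struct rec_lock rl = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

static void
rec_lock_acquire(struct rec_lock *l)
{
    if (l->count > 0 && pthread_equal(l->owner, pthread_self())) {
        l->count++;             /* already held by us: just go deeper */
        return;
    }
    pthread_mutex_lock(&l->mutex);
    l->owner = pthread_self();
    l->count = 1;
}

static void
rec_lock_release(struct rec_lock *l)
{
    if (--l->count == 0)        /* only the outermost release unlocks */
        pthread_mutex_unlock(&l->mutex);
}
```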

Re: [PATCH 0/5] Remove DRI1

2015-12-09 Thread Mark Kettenis
> From: Adam Jackson 
> Date: Tue,  8 Dec 2015 17:41:35 -0500
> 
> DRI1 hasn't really been a supported option for a while now, Mesa hasn't
> supplied any drivers for it in years, and the design really isn't
> compatible with KMS-like drivers. Also, Keith has some upcoming work to
> thread input handling that will conflict with the SIGIO handling, so if
> it's not already broken it's soon to get worse.

FWIW, OpenBSD dropped support for DRI1 a couple of years ago.

Re: [PATCH xserver 0/6] Use a thread for input

2015-12-09 Thread Mark Kettenis
> From: Keith Packard 
> Date: Tue,  8 Dec 2015 15:44:48 -0800
> 
> Here's a series which revives Tiago's ancient threaded input
> patch. This has been cleaned up to make sure that it does locking
> correctly, and then patches for xf86 and kdrive/fbdev hook the code to
> a couple of X servers to show how it should work.

Most of my objections to Tiago's patch still apply.  Threads
severely impact debuggability and this needs a full audit of all the
drivers; not just the server.

Re: [PATCH xserver 4/8] Create a threaded mechanism for input [v3]

2015-12-09 Thread Mark Kettenis
> From: Keith Packard 
> Date: Wed, 09 Dec 2015 16:04:08 -0800
> 
> >> +extern _X_EXPORT pthread_mutex_t input_mutex;
> >> +extern _X_EXPORT __thread int input_mutex_count;
> >> +
> > Are these really meant to be exported - a wild guess will be that the
> > input drivers won't need access to these ?
> 
> Turns out they do - evdev needs to grab the input lock to handle button
> emulation via a timer.

Ugh.  Exporting global variables as part of the ABI is generally not
such a good idea.  Perhaps it is better to use functions to acquire
and release the input mutex instead?

Also, using TLS (i.e. __thread variables) isn't portable.  That
mechanism certainly isn't supported by all platforms supported by
Xorg.

Re: [PATCH:libxtrans] Remove support for SysV on x86 platforms other than Solaris & SCO

2015-11-29 Thread Mark Kettenis
> From: Alan Coopersmith <alan.coopersm...@oracle.com>
> Date: Sun, 29 Nov 2015 09:47:22 -0800
> 
> No other x86 SysV platforms have ever been supported in the modular
> build systems, so we don't need to keep carrying around a bunch of
> ifdef's for them.
> 
> Signed-off-by: Alan Coopersmith <alan.coopersm...@oracle.com>

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
>  Xtrans.c |7 ++-
>  Xtransint.h  |8 
>  Xtranssock.c |   12 
>  3 files changed, 6 insertions(+), 21 deletions(-)

Re: [PATCH:libX11] Remove unused definition of XCONN_CHECK_FREQ

2015-11-29 Thread Mark Kettenis
> From: Alan Coopersmith <alan.coopersm...@oracle.com>
> Date: Sun, 29 Nov 2015 09:48:09 -0800
> 
> The only use of XCONN_CHECK_FREQ was removed in commit 15e5eaf62897b3179
> when we dropped the old Xlib connection handling in favor of xcb's.
> 
> Signed-off-by: Alan Coopersmith <alan.coopersm...@oracle.com>

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
>  include/X11/Xlibint.h |8 
>  1 file changed, 8 deletions(-)

Re: [PATCH 1/2] modesetting: drop platform_dev pointer.

2015-11-18 Thread Mark Kettenis
> From: Dave Airlie <airl...@gmail.com>
> Date: Wed, 18 Nov 2015 09:51:04 +1000
> 
> This isn't used anywhere, so no point storing it until we need it.

Makes sense to me.

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> Signed-off-by: Dave Airlie <airl...@redhat.com>
> ---
>  hw/xfree86/drivers/modesetting/driver.c | 8 +---
>  hw/xfree86/drivers/modesetting/driver.h | 3 ---
>  2 files changed, 1 insertion(+), 10 deletions(-)

Re: [PATCH v2] glamor: Make glamor_name_from_pixmap work without DRI3

2015-11-18 Thread Mark Kettenis
> From: Eric Anholt <e...@anholt.net>
> Date: Wed, 18 Nov 2015 12:57:31 -0800
> 
> Mark Kettenis <kette...@openbsd.org> writes:
> 
> > This function is used by the modesetting driver to implement DRI2 and
> > shouldn't fail on systems that don't support DRI3.  Remove the check
> > for DRI3 and rename glamor_egl_dri3_fd_name_from_tex to
> > glamor_egl_fd_name_from_tex.
> 
> At the time you sent the patch, if you didn't have the dri3_capable ->
> glamor_enable_dri3() -> dri3_enable flag series set, then you wouldn't
> have had glamor_egl_create_argb_based_texture() getting called, so
> you'd make your EGL image to share out of whatever random format your GL
> chose.  That probably explains your byte swapping bugs.
> 
> Also, if you didn't have dri3_capable set, then you hadn't succeeded at
> the check for EGL_KHR_gl_texture_2D_image and
> EGL_EXT_image_dma_buf_import and GL_OES_EGL_image.  I expect on your
> platform you don't have EXT_image_dma_buf_import, but you still need to
> check for the others before trying it.
> 
> Now, post glamor-delay-shareable, we shouldn't have the issue with byte
> swapping since I always make the GBM bo-based EGLImage when exporting.
> So, could you respin in a way that just makes sure that we've checked
> for the appropriate extensions (see the calls in
> glamor_egl_create_textured_pixmap_from_gbm_bo() and
> glamor_create_texture_from_image())?

Thanks.  The current master with my diffs on top seems to work.
Haven't fully processed your comments about checking the appropriate
extensions.  Will try to do that tomorrow and respin the diff.

> Oh, also, make sure you rename the other callers of
> glamor_egl_dri3_fd_name_from_tex() in the tree.

Ah, I missed wayland and the stubs which aren't built on my system.

> > Signed-off-by: Mark Kettenis <kette...@openbsd.org>
> > ---
> >  glamor/glamor.c | 2 --
> >  glamor/glamor.h | 6 +++---
> >  glamor/glamor_egl.c | 8 
> >  3 files changed, 7 insertions(+), 9 deletions(-)
> >
> > diff --git a/glamor/glamor.c b/glamor/glamor.c
> > index d4a0236..110bdb8 100644
> > --- a/glamor/glamor.c
> > +++ b/glamor/glamor.c
> > @@ -776,8 +776,6 @@ glamor_name_from_pixmap(PixmapPtr pixmap, CARD16 *stride, CARD32 *size)
> >  glamor_screen_private *glamor_priv =
> >  glamor_get_screen_private(pixmap->drawable.pScreen);
> >  
> > -if (!glamor_priv->dri3_enabled)
> > -return -1;
> >  switch (pixmap_priv->type) {
> >  case GLAMOR_TEXTURE_DRM:
> >  case GLAMOR_TEXTURE_ONLY:
> > diff --git a/glamor/glamor.h b/glamor/glamor.h
> > index 4be8800..01b6e4c 100644
> > --- a/glamor/glamor.h
> > +++ b/glamor/glamor.h
> > @@ -144,9 +144,9 @@ extern _X_EXPORT unsigned int glamor_egl_create_argb8888_based_texture(ScreenPtr
> > int w,
> > int h,
> > Bool linear);
> > -extern _X_EXPORT int glamor_egl_dri3_fd_name_from_tex(ScreenPtr, PixmapPtr,
> > -  unsigned int, Bool,
> > -  CARD16 *, CARD32 *);
> > +extern _X_EXPORT int glamor_egl_fd_name_from_tex(ScreenPtr, PixmapPtr,
> > +unsigned int, Bool,
> > +CARD16 *, CARD32 *);
> >  
> >  extern void glamor_egl_destroy_pixmap_image(PixmapPtr pixmap);
> >  
> > diff --git a/glamor/glamor_egl.c b/glamor/glamor_egl.c
> > index 2e6c7bd..9b91147 100644
> > --- a/glamor/glamor_egl.c
> > +++ b/glamor/glamor_egl.c
> > @@ -471,10 +471,10 @@ glamor_gbm_bo_from_pixmap(ScreenPtr screen, PixmapPtr pixmap)
> >  }
> >  
> >  int
> > -glamor_egl_dri3_fd_name_from_tex(ScreenPtr screen,
> > - PixmapPtr pixmap,
> > - unsigned int tex,
> > - Bool want_name, CARD16 *stride, CARD32 *size)
> > +glamor_egl_fd_name_from_tex(ScreenPtr screen,
> > +   PixmapPtr pixmap,
> > +   unsigned int tex,
> > +   Bool want_name, CARD16 *stride, CARD32 *size)
> >  {
> >  #ifdef GLAMOR_HAS_GBM
> >  struct glamor_egl_screen_private *glamor_egl;
> > -- 
> > 2.6.3
> >

[PATCH v2] glamor: Make glamor_name_from_pixmap work without DRI3

2015-11-17 Thread Mark Kettenis
This function is used by the modesetting driver to implement DRI2 and
shouldn't fail on systems that don't support DRI3.  Remove the check
for DRI3 and rename glamor_egl_dri3_fd_name_from_tex to
glamor_egl_fd_name_from_tex.

Signed-off-by: Mark Kettenis <kette...@openbsd.org>
---
 glamor/glamor.c | 2 --
 glamor/glamor.h | 6 +++---
 glamor/glamor_egl.c | 8 
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/glamor/glamor.c b/glamor/glamor.c
index d4a0236..110bdb8 100644
--- a/glamor/glamor.c
+++ b/glamor/glamor.c
@@ -776,8 +776,6 @@ glamor_name_from_pixmap(PixmapPtr pixmap, CARD16 *stride, 
CARD32 *size)
 glamor_screen_private *glamor_priv =
 glamor_get_screen_private(pixmap->drawable.pScreen);
 
-if (!glamor_priv->dri3_enabled)
-return -1;
 switch (pixmap_priv->type) {
 case GLAMOR_TEXTURE_DRM:
 case GLAMOR_TEXTURE_ONLY:
diff --git a/glamor/glamor.h b/glamor/glamor.h
index 4be8800..01b6e4c 100644
--- a/glamor/glamor.h
+++ b/glamor/glamor.h
@@ -144,9 +144,9 @@ extern _X_EXPORT unsigned int 
glamor_egl_create_argb8888_based_texture(ScreenPtr
int w,
int h,
Bool 
linear);
-extern _X_EXPORT int glamor_egl_dri3_fd_name_from_tex(ScreenPtr, PixmapPtr,
-  unsigned int, Bool,
-  CARD16 *, CARD32 *);
+extern _X_EXPORT int glamor_egl_fd_name_from_tex(ScreenPtr, PixmapPtr,
+unsigned int, Bool,
+CARD16 *, CARD32 *);
 
 extern void glamor_egl_destroy_pixmap_image(PixmapPtr pixmap);
 
diff --git a/glamor/glamor_egl.c b/glamor/glamor_egl.c
index 2e6c7bd..9b91147 100644
--- a/glamor/glamor_egl.c
+++ b/glamor/glamor_egl.c
@@ -471,10 +471,10 @@ glamor_gbm_bo_from_pixmap(ScreenPtr screen, PixmapPtr 
pixmap)
 }
 
 int
-glamor_egl_dri3_fd_name_from_tex(ScreenPtr screen,
- PixmapPtr pixmap,
- unsigned int tex,
- Bool want_name, CARD16 *stride, CARD32 *size)
+glamor_egl_fd_name_from_tex(ScreenPtr screen,
+   PixmapPtr pixmap,
+   unsigned int tex,
+   Bool want_name, CARD16 *stride, CARD32 *size)
 {
 #ifdef GLAMOR_HAS_GBM
 struct glamor_egl_screen_private *glamor_egl;
-- 
2.6.3


[PATCH] glamor: Make glamor_sync_init work with --disable-xshmfence

2015-11-17 Thread Mark Kettenis
Signed-off-by: Mark Kettenis <kette...@openbsd.org>
---
 glamor/glamor_sync.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/glamor/glamor_sync.c b/glamor/glamor_sync.c
index fbc47d4..907e0c6 100644
--- a/glamor/glamor_sync.c
+++ b/glamor/glamor_sync.c
@@ -97,6 +97,9 @@ glamor_sync_init(ScreenPtr screen)
 #ifdef HAVE_XSHMFENCE
if (!miSyncShmScreenInit(screen))
return FALSE;
+#else
+   if (!miSyncSetup(screen))
+   return FALSE;
 #endif
 
screen_funcs = miSyncGetScreenFuncs(screen);
-- 
2.6.3


Re: [PATCH] glamor: Make glamor_name_from_pixmap work without DRI3

2015-11-16 Thread Mark Kettenis
> From: Keith Packard <kei...@keithp.com>
> Date: Sun, 15 Nov 2015 23:51:04 -0800
> 
> Mark Kettenis <kette...@openbsd.org> writes:
> 
> > This function is used by the modesetting driver to implement DRI2 and
> > shouldn't fail on systems that don't support DRI3.
> 
> If glamor_egl_dri3_fd_name_from_tex works on systems not using DRI3,
> then it better have its name changed to match reality.

Hi Keith,

It does seem to work... almost.  With this diff and the sync diff the
glamor-enabled modesetting driver works on OpenBSD, both on Intel and
AMD hradware, and provides 3D acceleration on both.  Without this
diff, glamor works, but 3D acceleration fails with a

  "Failed to get DRI2 name for pixmap"

message in the log.  I'll send an updated diff out.

There is one issue though.  The colors are wrong!  As far as I can
tell red and blue are reversed in the output for anything that uses 3D
acceleration (everything produced by the "2D" codepaths seems to be
fine).  I've been hunting this down yesterday, but so far not been
able to track this down.  Happens with both xserver 17.4 and 18.99.x,
and with both Mesa 10.2.9 and 11.0.3.  The glamor-enabled
xf86-video-ati driver on the AMD hardware doesn't have this issue.
Feels a bit like an endianness issue, but this is all on bog-standard
little-endian x86 hardware.

Any clues in tracking this down would be appreciated.

Re: zaphod support broke the modesetting driver

2015-11-16 Thread Mark Kettenis
> Date: Mon, 16 Nov 2015 09:05:53 +1000
> From: Dave Airlie <airl...@gmail.com>
> 
> On 16 November 2015 at 02:57, Mark Kettenis <mark.kette...@xs4all.nl> wrote:
> > Commit 19e1dc8f6ea6d7ff5ba4a5caa0e2f40a47879408 broke the modesetting
> > driver quite badly on systems without XSERVER_PLATFORM_BUS.  The
> > problem is that the "entity private" only gets set up when
> > ms_platform_probe() gets called, but is used unconditionally in the
> > driver.  So on systems where ms_platform_probe() doesn't get called,
> > this results in a segmentation fault.
> >
> > I tried moving the code that sets up the entity private around a bit,
> > but I couldn't get it to work as the entity private ends up being a
> > NULL pointer.
> 
> Can you test the two patches I've just sent, one is just a cleanup
> the second might fix this.

Unfortunately that doesn't seem to help.

The problem is that I end up in the "old-style" probe method:

(gdb) bt
#0  0x10e501c0e085 in ms_ent_priv (scrn=0x10e53ca47000)
at /home/kettenis/src/xserver/hw/xfree86/drivers/modesetting/driver.c:189
#1  0x10e501c0f2ee in ms_get_drm_master_fd (pScrn=0x10e53ca47000)
at /home/kettenis/src/xserver/hw/xfree86/drivers/modesetting/driver.c:722
#2  0x10e501c0f5dc in PreInit (pScrn=0x10e53ca47000, flags=0)
at /home/kettenis/src/xserver/hw/xfree86/drivers/modesetting/driver.c:823
#3  0x10e2943b47aa in InitOutput (pScreenInfo=0x10e2949e3a20, argc=1, 
argv=0x7f7dc498)
at /home/kettenis/src/xserver/hw/xfree86/common/xf86Init.c:581
#4  0x10e29435bb33 in dix_main (argc=1, argv=0x7f7dc498, 
envp=0x7f7dc4a8) at /home/kettenis/src/xserver/dix/main.c:204
#5  0x10e29433caa7 in main (argc=1, argv=0x7f7dc498, 
envp=0x7f7dc4a8) at /home/kettenis/src/xserver/dix/stubmain.c:34
(gdb) f 2
#2  0x10e501c0f5dc in PreInit (pScrn=0x10e53ca47000, flags=0)
at /home/kettenis/src/xserver/hw/xfree86/drivers/modesetting/driver.c:823
823 if (!ms_get_drm_master_fd(pScrn))
(gdb) p *pEnt
$6 = {index = 1, location = {type = BUS_NONE, id = {pci = 0x0, sbus = {
fbNum = 0}, plat = 0x0}}, chipset = 0, active = 1, 
  device = 0x10e4d4830400, driver = 0x10e56ec86000}

So the "private" still has not been set up.

[PATCH] glamor: Make glamor_name_from_pixmap work without DRI3

2015-11-15 Thread Mark Kettenis
This function is used by the modesetting driver to implement DRI2 and
shouldn't fail on systems that don't support DRI3.

Signed-off-by: Mark Kettenis <kette...@openbsd.org>
---
 glamor/glamor.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/glamor/glamor.c b/glamor/glamor.c
index d4a0236..110bdb8 100644
--- a/glamor/glamor.c
+++ b/glamor/glamor.c
@@ -776,8 +776,6 @@ glamor_name_from_pixmap(PixmapPtr pixmap, CARD16 *stride, 
CARD32 *size)
 glamor_screen_private *glamor_priv =
 glamor_get_screen_private(pixmap->drawable.pScreen);
 
-if (!glamor_priv->dri3_enabled)
-return -1;
 switch (pixmap_priv->type) {
 case GLAMOR_TEXTURE_DRM:
 case GLAMOR_TEXTURE_ONLY:
-- 
2.6.3


Re: glamor and the sync extension

2015-11-15 Thread Mark Kettenis
> Date: Sun, 15 Nov 2015 09:53:57 -0800
> From: "Jasper St. Pierre" <jstpie...@mecheye.net>
> 
> Should we just unconditionally enable xshmfence? Are there any OSes
> we care about that can't implement a fence primitive?

There currently is no implementation on OpenBSD, and that's the
platform I care about ;).

The futex implementation supports Linux and FreeBSD.  The pthreads
implementation should in theory support many other OSes.  But in
practice support for process-shared mutexes and condition variables
isn't very widespread.  It probably works on Solaris, but not much
else.

An important reason why I haven't pushed for an xshmfence
implementation on OpenBSD is that I have some security concerns.
Especially with the pthreads implementation.

It is trivial to block any application that calls xshmfence_await() by
simply never triggering the fence.  I guess that isn't a major concern
as long as the xserver never waits on a fence and only triggers them.
I'm not sure that's the case though with dri3-enabled glamor.

However, with the pthreads implementation, even triggering a fence
becomes dangerous since it needs to acquire a shared mutex before
broadcasting the condition.  So a client could trivially DOS the
server by locking the mutex and never unlocking it.  Doesn't even have
to be malicious.  A buggy client could simply crash after locking the
mutex and the server would be toast.  So in my view the pthreads
implementation is not a robust xshmfence implementation and should
probably be avoided.

> On Sun, Nov 15, 2015 at 8:59 AM, Mark Kettenis <mark.kette...@xs4all.nl> 
> wrote:
> > Currently glamor hits an assertion on systems that don't have
> > xshmfence.  This happens when the glamor code calls
> > miSyncGetScreenFuncs() because the miSyncScreenPrivateKey has not been
> > set up.  For systems with xshmfence, this happens when
> > miSyncShmScreenInit() gets called, but that code is wrapped within
> > #ifdef HAVE_XSHMFENCE.  The diff below simply calls miSyncSetup()
> > instead if HAVE_XSHMFENCE is not defined.  This makes things work, but
> > I'm not sure if this is the right approach.
> >
> > Thoughts?
> >
> >
> > Index: glamor/glamor_sync.c
> > ===
> > RCS file: /cvs/xenocara/xserver/glamor/glamor_sync.c,v
> > retrieving revision 1.1
> > diff -u -p -r1.1 glamor_sync.c
> > --- glamor/glamor_sync.c16 Sep 2015 19:10:21 -  1.1
> > +++ glamor/glamor_sync.c15 Nov 2015 13:02:31 -
> > @@ -97,6 +97,9 @@ glamor_sync_init(ScreenPtr screen)
> >  #ifdef HAVE_XSHMFENCE
> > if (!miSyncShmScreenInit(screen))
> > return FALSE;
> > +#else
> > +   if (!miSyncSetup(screen))
> > +   return FALSE;
> >  #endif
> >
> > screen_funcs = miSyncGetScreenFuncs(screen);
> >

glamor and the sync extension

2015-11-15 Thread Mark Kettenis
Currently glamor hits an assertion on systems that don't have
xshmfence.  This happens when the glamor code calls
miSyncGetScreenFuncs() because the miSyncScreenPrivateKey has not been
set up.  For systems with xshmfence, this happens when
miSyncShmScreenInit() gets called, but that code is wrapped within
#ifdef HAVE_XSHMFENCE.  The diff below simply calls miSyncSetup()
instead if HAVE_XSHMFENCE is not defined.  This makes things work, but
I'm not sure if this is the right approach.

Thoughts?


Index: glamor/glamor_sync.c
===
RCS file: /cvs/xenocara/xserver/glamor/glamor_sync.c,v
retrieving revision 1.1
diff -u -p -r1.1 glamor_sync.c
--- glamor/glamor_sync.c16 Sep 2015 19:10:21 -  1.1
+++ glamor/glamor_sync.c15 Nov 2015 13:02:31 -
@@ -97,6 +97,9 @@ glamor_sync_init(ScreenPtr screen)
 #ifdef HAVE_XSHMFENCE
if (!miSyncShmScreenInit(screen))
return FALSE;
+#else
+   if (!miSyncSetup(screen))
+   return FALSE;
 #endif
 
screen_funcs = miSyncGetScreenFuncs(screen);


Re: [PATCH xserver 23/24] Remove readmask from screen block/wakeup handler

2015-09-21 Thread Mark Kettenis
> From: Keith Packard <kei...@keithp.com>
> Date: Mon, 21 Sep 2015 12:06:22 +0100
> 
> Mark Kettenis <mark.kette...@xs4all.nl> writes:
> 
> > This, of course, has impact on pretty much all the drivers.  Many use
> > compat-api.h and could probably be fixed by adding an additional
> > stanza there.  But somebody still needs to do that work.
> 
> A few actually use the block and wakeup handlers to manage additional
> file descriptors, and those will take actual work to change them to use
> the new NotifyFd interface.
> 
> Most of the compat-api.h versions I see already contain
> BLOCKHANDLER_ARGS, and a few contain WAKEUPHANDLER_ARGS as well, so at
> least a few drivers will make this pretty easy to manage.

In the OpenBSD xenocara tree, only the xf86-video-ati and
xf86-video-intel drivers actually use the handlers to manage
additional file descriptors.  However there are 35 copies of
compat-api.h in our tree that would need to be updated.

As with all driver API changes I question whether the urge to clean up
APIs should win over keeping the API compatible in the xserver code
base.  One could for example simply pass NULL in the last argument of
the block and wakeup handlers.  That would weed out drivers that use
the argument quickly enough (but admittedly would not guarantee that
all such usage would be caught).

> I'm still fumbling around creating a suitable way for the OS layer to
> use epoll, kqueue or poll as an appropriate replacement for select. I
> hope I'm getting closer though; something much like the NotifyFd
> interfaces might work. With that, we'll be able to actually connect
> MAXCLIENTS to the server, even with input devices, system interface
> libraries and listening sockets occupying some of the file descriptor
> space.

Replacing select with poll seems a useful goal indeed.  I'm not so
sure about epoll and kqueue as they're not portable.  You'd end up
with separate code paths that would both have to be tested, and you'd
probably need even more to support Solaris and Windows.  And then
there are those damn corner cases where each system behaves in a
slightly different buggy way.  But perhaps a well-tested framework like
libevent could remove that headache.

Re: [PATCH xserver 23/24] Remove readmask from screen block/wakeup handler

2015-09-21 Thread Mark Kettenis
> From: Keith Packard 
> Date: Mon, 21 Sep 2015 07:16:34 +0100
> 
> With no users of the interface needing the readmask anymore, we can
> remove it from the argument passed to these functions.
> 
> Signed-off-by: Keith Packard 

This, of course, has impact on pretty much all the drivers.  Many use
compat-api.h and could probably be fixed by adding an additional
stanza there.  But somebody still needs to do that work.

Re: [xrandr] Only use the current information when setting modes

2015-09-13 Thread Mark Kettenis
> From: Chris Wilson 
> Date: Sun, 13 Sep 2015 11:40:37 +0100
> 
> Before we change the state (e.g. adding a mode or applying one to an
> output), we query the screen resources for the right identifiers. This
> should only use the current information rather than force a reprobe on
> all hardware - not only can that reprobe be very slow (e.g. EDID
> timeouts on the order of seconds), but it may perturb the setup that the
> user is trying to configure.

How do you guarantee that that cached information isn't stale?

Seems you already can get the behaviour you want by specifying
--current.  Whereas there is no convenient way to force a probe, other
than an explicit query?

> Signed-off-by: Chris Wilson 
> ---
>  xrandr.c | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xrandr.c b/xrandr.c
> index 181c76e..dcfdde0 100644
> --- a/xrandr.c
> +++ b/xrandr.c
> @@ -3325,7 +3325,7 @@ main (int argc, char **argv)
>  {
>   umode_t *m;
>  
> -get_screen (current);
> +get_screen (True);
>   get_crtcs();
>   get_outputs();
>   
> @@ -3374,7 +3374,7 @@ main (int argc, char **argv)
>  {
>   output_t *output;
>  
> -get_screen (current);
> +get_screen (True);
>   get_crtcs();
>   get_outputs();
>   
> @@ -3465,7 +3465,7 @@ main (int argc, char **argv)
>   if (!has_1_4)
>   fatal ("--setprovideroutputsource requires RandR 1.4\n");
>  
> - get_screen (current);
> + get_screen (True);
>   get_providers ();
>  
>   provider = find_provider (_name);
> @@ -3480,7 +3480,7 @@ main (int argc, char **argv)
>   if (!has_1_4)
>   fatal ("--setprovideroffloadsink requires RandR 1.4\n");
>  
> - get_screen (current);
> + get_screen (True);
>   get_providers ();
>  
>   provider = find_provider (_name);
> @@ -3490,7 +3490,7 @@ main (int argc, char **argv)
>  }
>  if (setit_1_2)
>  {
> - get_screen (current);
> + get_screen (True);
>   get_crtcs ();
>   get_outputs ();
>   set_positions ();
> @@ -3589,7 +3589,7 @@ main (int argc, char **argv)
>   exit(0);
>   }
>  
> - get_screen(current);
> + get_screen(True);
>   get_monitors(True);
>   get_crtcs();
>   get_outputs();
> -- 
> 2.5.1
> 

Re: [xrandr] Only use the current information when setting modes

2015-09-13 Thread Mark Kettenis
> Date: Sun, 13 Sep 2015 12:14:57 +0100
> From: Chris Wilson <ch...@chris-wilson.co.uk>
> 
> On Sun, Sep 13, 2015 at 01:08:11PM +0200, Mark Kettenis wrote:
> > > From: Chris Wilson <ch...@chris-wilson.co.uk>
> > > Date: Sun, 13 Sep 2015 11:40:37 +0100
> > > 
> > > Before we change the state (e.g. adding a mode or applying one to an
> > > output), we query the screen resources for the right identifiers. This
> > > should only use the current information rather than force a reprobe on
> > > all hardware - not only can that reprobe be very slow (e.g. EDID
> > > timeouts on the order of seconds), but it may perturb the setup that the
> > > user is trying to configure.
> > 
> > How do you guarantee that that cached information isn't stale?
> 
> There is no issue in that, since the user parameters are against the
> output from a previous run. If the configuration is stale then so are
> the xrandr parameters.

I bet tons of people have a script that runs xrandr to configure a
projector connected to the VGA port on their laptop.  Yes there is no
guarantee that the parameters I give to xrandr match the current
configuration.  But if I connect the same projector (or even a
different one) to the same VGA port I expect this script to work and
make the display of my laptop appear on the projector.  I expect this
to work even when I connected the VGA cable after I started X.

> > Seems you already can get the behaviour you want by specifying
> > --current.  Whereas there is no convenient way to force a probe, other
> > than an explicit query?
> 
> And there should not be. On KMS platforms regeneration of the RR
> information is driven by hotplug events (either generated by the hw
> itself or by a kernel worker emulating the notifications otherwise). An
> explicit probe by the user should be the means of last resort not the
> default.

Well, that only works on KMS with udev.  And only if the specific
driver you're using has udev support.

To me it seems that you're trying to solve this at the wrong level.
If a driver (kernel or Xorg) has reliable hotplug support, shouldn't
it be able to decide whether it needs to reread the EDID information
of some monitor or not?

Re: Fwd: [PATCH] Make configure test for LD_NO_UNDEFINED_FLAG on Solaris work w/autoconf 2.69

2015-09-12 Thread Mark Kettenis
> Date: Sat, 12 Sep 2015 09:40:18 -0700
> 
> Anyone want to review this 4 month old patch?

(void *)0 certainly is a valid null pointer constant.  So

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

>  Forwarded Message 
> Subject: [PATCH] Make configure test for LD_NO_UNDEFINED_FLAG on Solaris work 
> w/autoconf 2.69
> Date: Fri, 15 May 2015 21:27:26 -0700
> From: Alan Coopersmith <alan.coopersm...@oracle.com>
> To: xorg-devel@lists.x.org
> 
> After upgrading from autoconf 2.68 to 2.69, this test started failing with
> "conftest.c", line 149: undefined symbol: NULL
> so use a raw 0 pointer to avoid header dependencies in the autoconf
> generated test case.
> 
> Signed-off-by: Alan Coopersmith <alan.coopersm...@oracle.com>
> ---
>   configure.ac |2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/configure.ac b/configure.ac
> index f760730..4fce5b7 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -1740,7 +1740,7 @@ case "$host_os" in
>   #XORG_DRIVER_LIBS="-Wl,-z,defs -Wl,-z,parent=${bindir}/Xorg"
>],[],
>   [AC_LANG_SOURCE([extern int main(int argc, char **argv);
> - int call_main(void) { return main(0, NULL); }])])
> + int call_main(void) { return main(0, (void *)0); }])])
>rm -f conftest.parent
>   ])
>   ;;
> -- 
> 1.7.9.2
> 

Re: [PATCH xinit] remove bogus \/ escapes

2015-08-31 Thread Mark Kettenis
> From: Matthieu Herrb <matth...@herrb.eu>
> Date: Sun, 30 Aug 2015 15:19:38 +0200
> 
> From: Ingo Schwarze <schwa...@usta.de>
> 
> some X manuals use the escape sequence \/ when they want to render
> a slash.  That's bad because \/ is not a slash but an italic
> correction, never producing any output, having no effect at all in
> terminal output, and only changing spacing in a minor way in typeset
> output.
> 
> Signed-off-by: Matthieu Herrb <matth...@herrb.eu>

Verified that even man on Linux doesn't render \/ as a slash.

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
>  man/xinit.man | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git man/xinit.man man/xinit.man
> index f8005ce..de6e27c 100644
> --- man/xinit.man
> +++ man/xinit.man
> @@ -117,7 +117,7 @@ This will start up a server named \fIX\fP, and will 
> append the given
>  arguments to the default \fIxterm\fP command.  It will ignore 
> \fI\.xinitrc\fP.
>  .TP 8
>  .B "xinit \-e widgets \-\^\- ./Xorg \-l \-c"
> -This will use the command \fI\.\/Xorg \-l \-c\fP to start the server and will
> +This will use the command \fI\./Xorg \-l \-c\fP to start the server and will
>  append the arguments \fI\-e widgets\fP to the default \fIxterm\fP command.
>  .TP 8
>  .B "xinit /usr/ucb/rsh fasthost cpupig \-display ws:1 \-\^\-  :1 \-a 2 \-t 5"
> -- 
> 2.4.6
> 

Re: [PATCH] remove bogus \/ escapes

2015-08-31 Thread Mark Kettenis
> From: Matthieu Herrb <matth...@herrb.eu>
> Date: Sun, 30 Aug 2015 15:26:40 +0200
> 
> From: Ingo Schwarze <schwa...@usta.de>
> 
> some X manuals use the escape sequence \/ when they want to render
> a slash.  That's bad because \/ is not a slash but an italic
> correction, never producing any output, having no effect at all in
> terminal output, and only changing spacing in a minor way in typeset
> output.
> 
> Signed-off-by: Matthieu Herrb <matth...@herrb.eu>

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
>  hw/xfree86/man/Xorg.man | 2 +-
>  man/Xserver.man | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git hw/xfree86/man/Xorg.man hw/xfree86/man/Xorg.man
> index ddf1358..646a90c 100644
> --- hw/xfree86/man/Xorg.man
> +++ hw/xfree86/man/Xorg.man
> @@ -46,7 +46,7 @@ On most platforms, the "Local" connection type is a 
> UNIX-domain socket.
>  On some System V platforms, the "local" connection types also include
>  STREAMS pipes, named pipes, and some other mechanisms.
>  .TP 4
> -.I TCP\/IP
> +.I TCP/IP
>  .B Xorg
>  listens on port
>  .RI 6000+ n ,
> diff --git man/Xserver.man man/Xserver.man
> index ac410cd..1cf242d 100644
> --- man/Xserver.man
> +++ man/Xserver.man
> @@ -428,7 +428,7 @@ elapse between autorepeat-generated keystrokes).
>  loads keyboard description in \fIfilename\fP on server startup.
>  .SH "NETWORK CONNECTIONS"
>  The X server supports client connections via a platform-dependent subset of
> -the following transport types: TCP\/IP, Unix Domain sockets, DECnet,
> +the following transport types: TCP/IP, Unix Domain sockets, DECnet,
>  and several varieties of SVR4 local connections.  See the DISPLAY
>  NAMES section of the \fIX\fP(__miscmansuffix__) manual page to learn how to
>  specify which transport type clients should try to use.
> -- 
> 2.4.6

Re: [PATCH xorg-docs] remove bogus \/ escapes

2015-08-31 Thread Mark Kettenis
> From: Matthieu Herrb <matth...@herrb.eu>
> Date: Sun, 30 Aug 2015 15:24:13 +0200
> 
> From: Ingo Schwarze <schwa...@usta.de>
> 
> some X manuals use the escape sequence \/ when they want to render
> a slash.  That's bad because \/ is not a slash but an italic
> correction, never producing any output, having no effect at all in
> terminal output, and only changing spacing in a minor way in typeset
> output.
> 
> Signed-off-by: Matthieu Herrb <matth...@herrb.eu>

Reviewed-by: Mark Kettenis <kette...@openbsd.org>

> ---
>  man/X.man | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git man/X.man man/X.man
> index 16b4c96..bc9fc1a 100644
> --- man/X.man
> +++ man/X.man
> @@ -224,7 +224,7 @@ The hostname part of the display name should be the empty 
> string.
>  For example:  \fI:0\fP, \fI:1\fP, and \fI:0.1\fP.  The most efficient
>  local transport will be chosen.
>  .TP 8
> -.I TCP\/IP
> +.I TCP/IP
>  .br
>  The hostname part of the display name should be the server machine's
>  hostname or IP address.  Full Internet names, abbreviated names, IPv4
> @@ -487,7 +487,7 @@ implementation dependent.
>  If the name is not found, the color is looked up in the
>  X server's database.
>  The text form of this database is commonly stored in the file
> -\fI\__datadir__/X11/rgb.txt\fP.
> +\fI__datadir__/X11/rgb.txt\fP.
>  .PP
>  A numerical color specification
>  consists of a color space name and a set of values in the following syntax:
> @@ -1164,7 +1164,7 @@ A wide variety of error messages are generated from 
> various programs.
>  The default error handler in \fIXlib\fP (also used by many toolkits) uses
>  standard resources to construct diagnostic messages when errors occur.  The
>  defaults for these messages are usually stored in
> -\fI\__datadir__/X11/XErrorDB\fP.  If this file is not present,
> +\fI__datadir__/X11/XErrorDB\fP.  If this file is not present,
>  error messages will be rather terse and cryptic.
>  .PP
>  When the X Toolkit Intrinsics encounter errors converting resource strings to
> -- 
> 2.4.6

Re: [PATCH libXt] Fix builds on X32

2015-05-18 Thread Mark Kettenis
 From: Ross Burton ross.bur...@intel.com
 Date: Mon, 18 May 2015 17:05:46 +0100
 
 The x86 X32 ABI is a 32-bit environment on 64-bit processors, so __amd64__ is
 defined but pointers and longs are 32-bit. Handle this case by also checking
 __LP64__.

That's the wrong way to handle it.  Not all 64-bit amd64 systems
define __LP64__.  Take a look at x11proto/Xmd.h.  It checks __ILP32__
to decide between x32 and a proper amd64 system.

Cheers,

Mark

 Signed-off-by: Ross Burton ross.bur...@intel.com
 ---
  include/X11/Xtos.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/include/X11/Xtos.h b/include/X11/Xtos.h
 index 64b2da8..44f52c2 100644
 --- a/include/X11/Xtos.h
 +++ b/include/X11/Xtos.h
 @@ -59,7 +59,7 @@ SOFTWARE.
  defined(__sparc64__) || \
  defined(__s390x__) || \
  (defined(__hppa__) && defined(__LP64__)) || \
 -defined(__amd64__) || defined(amd64) || \
 +((defined(__amd64__) || defined(amd64)) && defined(__LP64__)) || \
  defined(__powerpc64__) || \
  (defined(sgi) && (_MIPS_SZLONG == 64))
  #define LONG64
 -- 
 2.1.4
 

Re: [PATCH] Allow system call restarts upon signal interruption

2015-05-13 Thread Mark Kettenis
 From: Michel Dänzer mic...@daenzer.net
 Date: Wed, 13 May 2015 11:18:45 +0900
 
 On 13.05.2015 07:39, Daniel Drake wrote:
  The X server frequently deals with SIGIO and SIGALRM interruptions.
  If process execution is inside certain blocking system calls
  when these signals arrive, e.g. with the kernel blocked on
  a contended semaphore, the system calls will be interrupted.
  
  Some system calls are automatically restartable (the kernel re-executes
  them with the same parameters once the signal handler returns) but
  only if the signal handler allows it.
  
  Set SA_RESTART on the signal handlers to enable this convenient
  behaviour.
 
 The discussion about this on IRC sounded to me like we don't want to do
 this for both signals, because at least one of them should interrupt
 select(). My guess would be that SIGIO should interrupt select() and
 thus shouldn't use SA_RESTART.

Doesn't the smart scheduler rely on the interrupt behaviour as well?

Re: [PATCH xinit] Remove workaround for xterm -L (#89653)

2015-03-24 Thread Mark Kettenis
 Date: Tue, 24 Mar 2015 11:33:28 +0100
 From: Hans de Goede hdego...@redhat.com
 
 Hi,
 
 On 24-03-15 05:49, Peter Hutterer wrote:
  The -L flag was removed in 1989.
 
  This enables the legacy keyboard driver again when the server is started
  with -keeptty (bd6cacdd3661)
 
  X.Org Bug 89653 http://bugs.freedesktop.org/show_bug.cgi?id=89653
 
  Signed-off-by: Peter Hutterer peter.hutte...@who-t.net
  ---
  No idea what exactly is going on here, I tried to bisect the server to get
  some hints what triggers this, but there are too many moving targets. Either
  way, with this patch applied, a basic xorg.conf that uses the kbd driver
  works again with startx/xinit (provided a suid Xorg binary of course)
 
  Thanks to alanc for digging up the commit that removed the -L option.
  http://cgit.freedesktop.org/~alanc/xc-historical/commit/xc/programs/xterm?id=46fc268c21d01cf0d664c84e5d03f785b2b2e5ce
 
 I'm afraid that this causes a regression, with this patch the Xserver no
 longer cleanly exits when the last client disconnects.
 
 Test-case:
 
 -Fully up2date 64 bit Fedora-22 system
 -xinit rpm package build with this patch
 -do: startx /usr/bin/xterm -title foo
 -exit the shell in the xterm
 -now Xorg hangs using approx 100% cpu, strace shows that it
   sits in a select() loop

I think that one is easy to understand.  When the client exits, xinit
will try to kill the X server.  It does this by calling killpg().  With
Peter's diff, xinit no longer creates a process group for the server.
So the X server will stay around.  Not sure why the server spins, but
that's probably an unrelated X server bug of some sorts.

Without -keeptty the X server itself will (on some platforms) put
itself in its own process group.  In that case the server will be
killed.  Relying on this isn't a good idea though.  With privsep on
OpenBSD for example the X server forks itself before dropping
privileges and if it is the child that calls setpgid(), xinit will use
the wrong pgid and killing the server will still fail.

So I think the setpgid() call should stay.  Perhaps it was added to
work around an xterm -L, but more changes have been made that depend
on it.  The comment should probably be removed as it is clearly
misleading.

I'm not really familiar with the Linux console driver and how it
interacts with the keyboard.  Perhaps the xf86-input-keyboard driver
depends on being part of the foreground process group, and the problem
can be fixed by making the Linux-specific X server init code call
tcsetpgrp()?

xinit.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
 
  diff --git a/xinit.c b/xinit.c
  index 1b04911..74fda74 100644
  --- a/xinit.c
  +++ b/xinit.c
  @@ -417,11 +417,7 @@ startServer(char *server_argv[])
 * at xinit when ready to accept connections
 */
signal(SIGUSR1, SIG_IGN);
  -/*
  - * prevent server from getting sighup from vhangup()
  - * if client is xterm -L
  - */
  -setpgid(0,getpid());
  +
Execute(server_argv);
 
Error("unable to run server \"%s\"", server_argv[0]);
 

Re: [PATCH:xf86-video-mga] Fix pointer cast warning.

2015-02-25 Thread Mark Kettenis
 Date: Wed, 25 Feb 2015 11:31:50 +0100
 From: Tormod Volden lists.tor...@gmail.com
 
 On Tue, Feb 24, 2015 at 7:36 AM, Mark Kettenis mark.kette...@xs4all.nl 
 wrote:
   cast to pointer from integer of different size
   [-Werror=int-to-pointer-cast]
 
   Signed-off-by: Thomas Klausner w...@netbsd.org
   ---
src/mga_exa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
 
  This code makes no sense to me.  exaGetPixmapFirstPixel() returns a pixel
  value.  Interpreting that as a pointer to the first pixel must be wrong.
 
 Are you sure about this? What would be the utility of a function
 returning the pixel /value/ of the first pixel in a pixmap?

From exa/exa_unaccel.c:

/**
 * Gets the 0,0 pixel of a pixmap.  Used for doing solid fills of tiled pixmaps
 * that happen to be 1x1.  Pixmap must be at least 8bpp.
 */

And yes, there really is little doubt that the implementation returns
the pixel value of the 0,0 pixel of the pixmap.

  Note that the actual usage of mgaDownloadFromScreen() is #if 0'ed out.
  Perhaps this code should just be removed from the driver?  Or perhaps the
  code was #if 0'ed out because the code didn't work?
 
 About the EXA code working or not, see also
 https://bugs.freedesktop.org/show_bug.cgi?id=53423

That pretty much confirms the code doesn't work (before or after your
change).  Probably the answer is to revert the
exaGetPixmapFirstPixel() changes you made (here and in other drivers).
And then disable the DownloadFromScreen() implementation, or perhaps
even disable EXA altogether.

Re: [PATCH:xf86-video-mga] Fix pointer cast warning.

2015-02-23 Thread Mark Kettenis
 Date: Mon, 23 Feb 2015 22:55:51 -0800
 From: Alan Coopersmith alan.coopersm...@oracle.com
 
 On 02/23/15 10:36 PM, Mark Kettenis wrote:
cast to pointer from integer of different size
[-Werror=int-to-pointer-cast]
 
Signed-off-by: Thomas Klausner w...@netbsd.org
---
 src/mga_exa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 
  This code makes no sense to me.  exaGetPixmapFirstPixel() returns a pixel
  value.  Interpreting that as a pointer to the first pixel must be wrong.
 
  Note that the actual usage of mgaDownloadFromScreen() is #if 0'ed out.
 
 Perhaps in your copy, but not upstream:
 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-mga/tree/src/mga_exa.c#n896

Heh, right.  Matthieu Herrb disabled this locally in OpenBSD's xenocara:

  revision 1.7
  date: 2013/06/14 21:21:54;  author: matthieu;  state: Exp;  lines: +2 -0;
  Disable broken EXA operations for now.

Breakage was probably introduced with commit
e9109a0b04695d6971c94abe271dda2dc1a5e886.  Not sure if simply
reverting that commit is an option.

[PATCH xinit] startx: Don't use GNU expr extensions

2015-01-29 Thread Mark Kettenis
Use the ':' operator instead of match and avoid the use of \+.  Neither
construction is specified by POSIX, and BSD expr supports neither.
Also drop the '^' from the regular expressions as it is implicit and
POSIX leaves its behaviour undefined.

Signed-off-by: Mark Kettenis kette...@openbsd.org
---
 startx.cpp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/startx.cpp b/startx.cpp
index 45d7bd9..ce4713f 100644
--- a/startx.cpp
+++ b/startx.cpp
@@ -193,7 +193,7 @@ if [ x$server = x ]; then
 XCOMM the startx session being seen as inactive:
 XCOMM https://bugzilla.redhat.com/show_bug.cgi?id=806491;
 tty=$(tty)
-if expr match "$tty" '^/dev/tty[0-9]\+$' > /dev/null; then
+if expr "$tty" : '/dev/tty[0-9][0-9]*$' > /dev/null; then
tty_num=$(echo "$tty" | grep -oE '[0-9]+$')
vtarg="vt$tty_num -keeptty"
 fi
@@ -217,7 +217,7 @@ fi
 XCOMM if no vt is specified add vtarg (which may be empty)
 have_vtarg=no
 for i in $serverargs; do
-if expr match "$i" '^vt[0-9]\+$' > /dev/null; then
+if expr "$i" : 'vt[0-9][0-9]*$' > /dev/null; then
 have_vtarg=yes
 fi
 done
-- 
2.2.2
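
For readers unfamiliar with the portable form, here is a small standalone
sketch (not part of the patch) of the POSIX expr idiom the patch switches
to: the ':' operator anchors at the start of the string implicitly, and
[0-9][0-9]* stands in for the GNU-only \+ quantifier.

```shell
#!/bin/sh
# Portable POSIX expr: ':' matches anchored at the start implicitly,
# and [0-9][0-9]* replaces the GNU-only \+ quantifier.
tty=/dev/tty3
if expr "$tty" : '/dev/tty[0-9][0-9]*$' > /dev/null; then
    echo "virtual console: $tty"
fi
# expr prints the number of characters matched; the redirect hides it,
# so only the exit status (0 on a non-empty match) is used.
```

Both the ':' operator and the doubled-bracket quantifier are specified by
POSIX, so this runs unchanged under BSD expr, GNU coreutils expr, and
busybox.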


Re: [PATCH:xauth v2] include POSIX-standard limits.h for PATH_MAX instead of sys/syslimits.h

2015-01-03 Thread Mark Kettenis
 From: Alan Coopersmith alan.coopersm...@oracle.com
 Date: Sat,  3 Jan 2015 10:52:28 -0800
 
 Signed-off-by: Alan Coopersmith alan.coopersm...@oracle.com

Reviewed-by: Mark Kettenis kette...@openbsd.org

 ---
  gethost.c  |2 +-
  parsedpy.c |2 +-
  2 files changed, 2 insertions(+), 2 deletions(-)

Re: [PATCH:xauth] include limits.h for PATH_MAX on OS'es without sys/syslimits.h

2015-01-02 Thread Mark Kettenis
 From: Alan Coopersmith alan.coopersm...@oracle.com
 Date: Fri,  2 Jan 2015 09:50:36 -0800

Are there any systems that have sys/syslimits.h but don't have a
limits.h that provides PATH_MAX?  POSIX requires that limits.h
provides PATH_MAX.  And as far as I can tell sys/syslimits.h is a
BSD-ism, and has always been included by limits.h.

So I think you can just replace sys/syslimits.h with limits.h and
drop the additional autoconf check.

 Signed-off-by: Alan Coopersmith alan.coopersm...@oracle.com
 ---
  configure.ac |2 +-
  gethost.c|4 
  parsedpy.c   |4 
  3 files changed, 9 insertions(+), 1 deletion(-)
 
 diff --git a/configure.ac b/configure.ac
 index 2d36ed5..5a3f190 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -41,7 +41,7 @@ m4_ifndef([XORG_MACROS_VERSION],
  XORG_MACROS_VERSION(1.8)
  XORG_DEFAULT_OPTIONS
  
 -AC_CHECK_HEADERS([net/errno.h])
 +AC_CHECK_HEADERS([net/errno.h sys/syslimits.h])
  
  AC_CHECK_FUNCS([strlcpy])
  
 diff --git a/gethost.c b/gethost.c
 index c75ae02..e8e332a 100644
 --- a/gethost.c
 +++ b/gethost.c
 @@ -58,7 +58,11 @@ in this Software without prior written authorization from 
 The Open Group.
   #include "xauth.h"
   
   #include <sys/stat.h>
  +#ifdef HAVE_SYS_SYSLIMITS_H
   #include <sys/syslimits.h>
  +#else
  +#include <limits.h>
  +#endif
   
   #ifndef WIN32
   #include <arpa/inet.h>
 diff --git a/parsedpy.c b/parsedpy.c
 index 7365224..f43d78d 100644
 --- a/parsedpy.c
 +++ b/parsedpy.c
 @@ -43,7 +43,11 @@ in this Software without prior written authorization from 
 The Open Group.
   #include <X11/Xmu/SysUtil.h>
   
   #include <sys/stat.h>
  +#ifdef HAVE_SYS_SYSLIMITS_H
   #include <sys/syslimits.h>
  +#else
  +#include <limits.h>
  +#endif
   
   #if defined(UNIXCONN) || defined(LOCALCONN)
   #define UNIX_CONNECTION "unix"
 -- 
 1.7.9.2
 

Re: [PATCH] Solaris: delete undocumented, unuseful -protect0 flag

2014-12-09 Thread Mark Kettenis
 From: Alan Coopersmith alan.coopersm...@oracle.com
 Date: Tue,  9 Dec 2014 10:15:08 -0800
 
 Solaris already makes the page at address 0 inaccessible by default to
 catch NULL pointer bugs, we don't need a double secret undocumented flag
 to try to make our own hacky attempt at it.
 
 As a bonus, deleting this code removes gcc warning of:
 
 sun_init.c: In function 'xf86OpenConsole':
 sun_init.c:103:17: warning: declaration of 'fd' shadows a previous local 
 [-Wshadow]
  int fd = -1;
  ^
 sun_init.c:89:9: warning: shadowed declaration is here [-Wshadow]
  int fd;
  ^
 
 Signed-off-by: Alan Coopersmith alan.coopersm...@oracle.com

Reviewed-by: Mark Kettenis kette...@openbsd.org in case you need one.

 ---
  hw/xfree86/os-support/solaris/sun_init.c |   31 
 --
  1 file changed, 31 deletions(-)
 

Re: [PATCH 4/5] dix: GetHosts bounds check using wrong pointer value

2014-12-09 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Tue,  9 Dec 2014 09:31:00 -0800
 
 GetHosts saves the pointer to allocated memory in *data, and then
 wants to bounds-check writes to that region, but was mistakenly using
 a bare 'data' instead of '*data'. Also, data is declared as void **,
 so we need a cast to turn it into a byte pointer so we can actually do
 pointer comparisons.

Not sure it is needed for pointer *comparisons*, but it certainly is
necessary to be able to do pointer arithmetic without relying on GCC
extensions.

 Signed-off-by: Keith Packard kei...@keithp.com

Reviewed by: Mark Kettenis kette...@openbsd.org

 ---
  os/access.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/os/access.c b/os/access.c
 index f393c8d..28f2d32 100644
 --- a/os/access.c
 +++ b/os/access.c
 @@ -1308,7 +1308,7 @@ GetHosts(void **data, int *pnHosts, int *pLen, BOOL * 
 pEnabled)
  }
   for (host = validhosts; host; host = host->next) {
   len = host->len;
  -if ((ptr + sizeof(xHostEntry) + len) > (data + n))
  +if ((ptr + sizeof(xHostEntry) + len) > ((unsigned char *) *data + n))
   break;
   ((xHostEntry *) ptr)->family = host->family;
   ((xHostEntry *) ptr)->length = len;
 -- 
 2.1.3
 

Re: [PATCH 2/4] xfree86: Remove implicit xf86EnableIO from BSD for PowerPC

2014-10-24 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Wed,  8 Oct 2014 17:38:57 +0200
 
 This shouldn't be necessary, we're doing this already at the DDX level
 when it's needed (and, more importantly, not when it's not).
 
 Signed-off-by: Adam Jackson a...@redhat.com

Tested (in 1.16.1) on a PowerMac G4 with an r128 on OpenBSD.  Although I
think that on OpenBSD we actually avoid accessing VGA registers.

Reviewed-by: Mark Kettenis kette...@openbsd.org

 ---
  hw/xfree86/os-support/bsd/ppc_video.c | 1 -
  1 file changed, 1 deletion(-)
 
 diff --git a/hw/xfree86/os-support/bsd/ppc_video.c 
 b/hw/xfree86/os-support/bsd/ppc_video.c
 index b1cf7eb..f017376 100644
 --- a/hw/xfree86/os-support/bsd/ppc_video.c
 +++ b/hw/xfree86/os-support/bsd/ppc_video.c
 @@ -51,7 +51,6 @@ void
  xf86OSInitVidMem(VidMemInfoPtr pVidMem)
  {
   pVidMem->initialised = TRUE;
 -xf86EnableIO();
  }
  
  volatile unsigned char *ioBase = MAP_FAILED;
 -- 
 1.9.3


Re: [PATCH 1/4] xfree86: Remove no-op /dev/mem checks from xf86OSInitVidMem

2014-10-15 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Wed,  8 Oct 2014 17:38:56 +0200
 
 pciaccess doesn't end up using the fd being inspected here, so there's
 not really any point.

We currently don't actually support legacy PCI video drivers on
OpenBSD/alpha and OpenBSD/arm.  We can add the necessary bits back if
we ever regain that support (which isn't all that likely).

Reviewed-by: Mark Kettenis kette...@openbsd.org

 Signed-off-by: Adam Jackson a...@redhat.com
 ---
  hw/xfree86/os-support/bsd/alpha_video.c | 122 
 
  hw/xfree86/os-support/bsd/arm_video.c   |  52 --
  hw/xfree86/os-support/hurd/hurd_video.c |   4 --
  3 files changed, 178 deletions(-)
 
 diff --git a/hw/xfree86/os-support/bsd/alpha_video.c 
 b/hw/xfree86/os-support/bsd/alpha_video.c
 index 236def6..fd8f823 100644
 --- a/hw/xfree86/os-support/bsd/alpha_video.c
 +++ b/hw/xfree86/os-support/bsd/alpha_video.c
 @@ -45,131 +45,9 @@
  #define MAP_FLAGS (MAP_FILE | MAP_SHARED)
  #endif
  
 -#ifndef __NetBSD__
 -extern unsigned long dense_base(void);
 -#else   /* __NetBSD__ */
 -static struct alpha_bus_window *abw;
 -static int abw_count = -1;
 -
 -static void
 -init_abw(void)
 -{
 -if (abw_count < 0) {
 -abw_count = alpha_bus_getwindows(ALPHA_BUS_TYPE_PCI_MEM, &abw);
 -if (abw_count <= 0)
 -FatalError("init_abw: alpha_bus_getwindows failed\n");
 -}
 -}
 -
 -static unsigned long
 -dense_base(void)
 -{
 -if (abw_count < 0)
 -init_abw();
 -
 -/* XXX check abst_flags for ABST_DENSE just to be safe? */
 -xf86Msg(X_INFO, "dense base = %#lx\n", abw[0].abw_abst.abst_sys_start);
 -return abw[0].abw_abst.abst_sys_start;
 -}
 -
 -#endif  /* __NetBSD__ */
 -
 -#define BUS_BASE dense_base()
 -
 -/***/
 -/* Video Memory Mapping section*/
 -/***/
 -
 -#ifdef __OpenBSD__
 -#define SYSCTL_MSG "\tCheck that you have set 'machdep.allowaperture=1'\n" \
 -  "\tin /etc/sysctl.conf and reboot your machine\n" \
 -  "\trefer to xf86(4) for details"
 -#endif
 -
 -static int devMemFd = -1;
 -
 -#ifdef HAS_APERTURE_DRV
 -#define DEV_APERTURE "/dev/xf86"
 -#endif
 -
 -/*
 - * Check if /dev/mem can be mmap'd.  If it can't print a warning when
 - * warn is TRUE.
 - */
 -static void
 -checkDevMem(Bool warn)
 -{
 -static Bool devMemChecked = FALSE;
 -int fd;
 -void *base;
 -
 -if (devMemChecked)
 -return;
 -devMemChecked = TRUE;
 -
 -#ifdef HAS_APERTURE_DRV
 -/* Try the aperture driver first */
 -if ((fd = open(DEV_APERTURE, O_RDWR)) >= 0) {
 -/* Try to map a page at the VGA address */
 -base = mmap((caddr_t) 0, 4096, PROT_READ | PROT_WRITE,
 -MAP_FLAGS, fd, (off_t) 0xA0000 + BUS_BASE);
 -
 -if (base != MAP_FAILED) {
 -munmap((caddr_t) base, 4096);
 -devMemFd = fd;
 -xf86Msg(X_INFO, "checkDevMem: using aperture driver %s\n",
 -DEV_APERTURE);
 -return;
 -}
 -else {
 -if (warn) {
 -xf86Msg(X_WARNING, "checkDevMem: failed to mmap %s (%s)\n",
 -DEV_APERTURE, strerror(errno));
 -}
 -}
 -}
 -#endif
 -if ((fd = open(DEV_MEM, O_RDWR)) >= 0) {
 -/* Try to map a page at the VGA address */
 -base = mmap((caddr_t) 0, 4096, PROT_READ | PROT_WRITE,
 -MAP_FLAGS, fd, (off_t) 0xA0000 + BUS_BASE);
 -
 -if (base != MAP_FAILED) {
 -munmap((caddr_t) base, 4096);
 -devMemFd = fd;
 -return;
 -}
 -else {
 -if (warn) {
 -xf86Msg(X_WARNING, "checkDevMem: failed to mmap %s (%s)\n",
 -DEV_MEM, strerror(errno));
 -}
 -}
 -}
 -if (warn) {
 -#ifndef HAS_APERTURE_DRV
 -xf86Msg(X_WARNING, "checkDevMem: failed to open/mmap %s (%s)\n",
 -DEV_MEM, strerror(errno));
 -#else
 -#ifndef __OpenBSD__
 -xf86Msg(X_WARNING, "checkDevMem: failed to open %s and %s\n"
 -"\t(%s)\n", DEV_APERTURE, DEV_MEM, strerror(errno));
 -#else   /* __OpenBSD__ */
 -xf86Msg(X_WARNING, "checkDevMem: failed to open %s and %s\n"
 -"\t(%s)\n%s", DEV_APERTURE, DEV_MEM, strerror(errno),
 -SYSCTL_MSG);
 -#endif  /* __OpenBSD__ */
 -#endif
 -xf86ErrorF("\tlinear framebuffer access unavailable\n");
 -}
 -return;
 -}
 -
  void
  xf86OSInitVidMem(VidMemInfoPtr pVidMem)
  {
 -checkDevMem(TRUE);
 -
   pVidMem->initialised = TRUE;
  }
  
 diff --git a/hw/xfree86/os-support/bsd/arm_video.c 
 b/hw/xfree86/os-support/bsd/arm_video.c
 index 3a639b8

Re: [PATCH 0/4] Finish killing VidMemInfo

2014-10-13 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Wed,  8 Oct 2014 17:38:55 +0200
 
 This finishes the job from:
 
 http://lists.freedesktop.org/archives/xorg-devel/2014-July/043343.html
 
 OpenBSD retains its special case for privsep setup, but otherwise this is
 the same as before.

Please give me a chance to properly review and test this.  Was
travelling last week.

  b/configure.ac  |9 -
  b/hw/xfree86/common/compiler.h  |   35 +-
  b/hw/xfree86/os-support/Makefile.am |2 
  b/hw/xfree86/os-support/bsd/Makefile.am |2 
  b/hw/xfree86/os-support/bsd/alpha_video.c   |  152 
 
  b/hw/xfree86/os-support/bsd/arm_video.c |   59 --
  b/hw/xfree86/os-support/bsd/bsd_ev56.c  |   23 +---
  b/hw/xfree86/os-support/bsd/i386_video.c|5 
  b/hw/xfree86/os-support/bsd/ppc_video.c |9 -
  b/hw/xfree86/os-support/bus/Makefile.am |4 
  b/hw/xfree86/os-support/bus/Pci.c   |   13 --
  b/hw/xfree86/os-support/bus/Pci.h   |   10 -
  b/hw/xfree86/os-support/hurd/Makefile.am|1 
  b/hw/xfree86/os-support/hurd/hurd_video.c   |   11 --
  b/hw/xfree86/os-support/linux/Makefile.am   |1 
  b/hw/xfree86/os-support/linux/lnx_agp.c |1 
  b/hw/xfree86/os-support/linux/lnx_ev56.c|   22 +---
  b/hw/xfree86/os-support/linux/lnx_video.c   |   38 ---
  b/hw/xfree86/os-support/shared/agp_noop.c   |1 
  b/hw/xfree86/os-support/solaris/Makefile.am |1 
  b/hw/xfree86/os-support/solaris/sun_vid.c   |   11 --
  b/hw/xfree86/os-support/stub/Makefile.am|4 
  b/hw/xfree86/os-support/xf86_OSproc.h   |3 
  hw/xfree86/os-support/bsd/sparc64_video.c   |   45 
  hw/xfree86/os-support/bus/bsd_pci.c |   55 --
  hw/xfree86/os-support/shared/vidmem.c   |   54 -
  hw/xfree86/os-support/stub/stub_video.c |   13 --
  hw/xfree86/os-support/xf86OSpriv.h  |   41 ---
  28 files changed, 31 insertions(+), 594 deletions(-)
 
 - ajax
 


[PATCH pciaccess] Use PCIOCREADMASK on OpenBSD.

2014-09-29 Thread Mark Kettenis
If the machdep.allowaperture sysctl(8) variable is set to 0, writing to PCI
config space is not allowed.  So instead of writing 0xffffffff to the BARs
in order to determine their size, use the PCIOCREADMASK ioctl(2) which
returns the mask of changeable bits that was saved by the kernel when the
device was initially probed.

Reviewed-by: Matthieu Herrb matth...@herbb.eu
Signed-off-by: Mark Kettenis kette...@openbsd.org
---

Given that Matthieu already reviewed this diff, I'll push it in a couple of
days unless somebody speaks up.

 src/openbsd_pci.c | 31 +--
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/src/openbsd_pci.c b/src/openbsd_pci.c
index fe034f3..4d1b5cd 100644
--- a/src/openbsd_pci.c
+++ b/src/openbsd_pci.c
@@ -81,6 +81,29 @@ pci_write(int domain, int bus, int dev, int func, uint32_t 
reg, uint32_t val)
	return ioctl(pcifd[domain], PCIOCWRITE, &io);
 }
 
+static int
+pci_readmask(int domain, int bus, int dev, int func, uint32_t reg,
+uint32_t *val)
+{
+   struct pci_io io;
+   int err;
+
+   bzero(&io, sizeof(io));
+   io.pi_sel.pc_bus = bus;
+   io.pi_sel.pc_dev = dev;
+   io.pi_sel.pc_func = func;
+   io.pi_reg = reg;
+   io.pi_width = 4;
+
+   err = ioctl(pcifd[domain], PCIOCREADMASK, &io);
+   if (err)
+   return (err);
+
+   *val = io.pi_data;
+
+   return 0;
+}
+
 /**
  * Read a VGA ROM
  *
@@ -328,11 +351,9 @@ pci_device_openbsd_probe(struct pci_device *device)
return err;
 
/* Probe the size of the region. */
-   err = pci_write(domain, bus, dev, func, bar, ~0);
+   err = pci_readmask(domain, bus, dev, func, bar, &size);
	if (err)
		return err;
-   pci_read(domain, bus, dev, func, bar, &size);
-   pci_write(domain, bus, dev, func, bar, reg);
 
	if (PCI_MAPREG_TYPE(reg) == PCI_MAPREG_TYPE_IO) {
		region->is_IO = 1;
@@ -360,11 +381,9 @@ pci_device_openbsd_probe(struct pci_device *device)
return err;
		reg64 |= (uint64_t)reg << 32;
 
-   err = pci_write(domain, bus, dev, func, bar, ~0);
+   err = pci_readmask(domain, bus, dev, func, bar, &size);
		if (err)
			return err;
-   pci_read(domain, bus, dev, func, bar, &size);
-   pci_write(domain, bus, dev, func, bar, reg64 >> 32);
		size64 |= (uint64_t)size << 32;
 
		region->base_addr = PCI_MAPREG_MEM64_ADDR(reg64);
-- 
2.1.0



Re: [PATCH 04/19] loader: Deobfuscate RTLD_* macro stuff

2014-09-25 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Thu, 25 Sep 2014 13:37:20 -0400
 
 POSIX requires that these be named correctly, no need to be clever.

None of the BSDs have needed those compatibility defines for quite
some time, and I doubt anything else still supported cares.

Reviewed-by: Mark Kettenis kette...@openbsd.org

 Signed-off-by: Adam Jackson a...@redhat.com
 ---
  hw/xfree86/loader/loader.c | 22 ++
  1 file changed, 2 insertions(+), 20 deletions(-)
 
 diff --git a/hw/xfree86/loader/loader.c b/hw/xfree86/loader/loader.c
 index cc41dcb..3132984 100644
 --- a/hw/xfree86/loader/loader.c
 +++ b/hw/xfree86/loader/loader.c
 @@ -71,24 +71,6 @@
  #include dlfcn.h
  #include X11/Xos.h
  
 -#if defined(DL_LAZY)
 -#define DLOPEN_LAZY DL_LAZY
 -#elif defined(RTLD_LAZY)
 -#define DLOPEN_LAZY RTLD_LAZY
 -#elif defined(__FreeBSD__)
 -#define DLOPEN_LAZY 1
 -#else
 -#define DLOPEN_LAZY 0
 -#endif
 -
 -#if defined(LD_GLOBAL)
 -#define DLOPEN_GLOBAL LD_GLOBAL
 -#elif defined(RTLD_GLOBAL)
 -#define DLOPEN_GLOBAL RTLD_GLOBAL
 -#else
 -#define DLOPEN_GLOBAL 0
 -#endif
 -
  #else
  #error i have no dynamic linker and i must scream
  #endif
 @@ -128,7 +110,7 @@ LoaderOpen(const char *module, int *errmaj, int *errmin)
  
   xf86Msg(X_INFO, "Loading %s\n", module);
  
 -if (!(ret = dlopen(module, DLOPEN_LAZY | DLOPEN_GLOBAL))) {
 +if (!(ret = dlopen(module, RTLD_LAZY | RTLD_GLOBAL))) {
   xf86Msg(X_ERROR, "Failed to load %s: %s\n", module, dlerror());
  if (errmaj)
  *errmaj = LDR_NOLOAD;
 @@ -151,7 +133,7 @@ LoaderSymbol(const char *name)
  return p;
  
  if (!global_scope)
 -global_scope = dlopen(NULL, DLOPEN_LAZY | DLOPEN_GLOBAL);
 +global_scope = dlopen(NULL, RTLD_LAZY | RTLD_GLOBAL);
  
  if (global_scope)
  return dlsym(global_scope, name);
 -- 
 1.9.3
 


Re: [PATCH 18/19] xfree86: Remove driver entity hooks and private

2014-09-25 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Thu, 25 Sep 2014 13:37:34 -0400
 
 No driver is using these, as far as I know.

The xf86-video-mga driver uses these, but only if DISABLE_VGA_IO is
defined, and by default it isn't, and there is no configure option to
turn it on.

I'm a bit worried about silently ignoring the arguments passed to
xf86ConfigFbEntity* and xf86ConfigPciEntity*.  Perhaps throw a fatal
error if any of the function pointers is non-NULL?

 Signed-off-by: Adam Jackson a...@redhat.com
 ---
  hw/xfree86/common/xf86.h   |  3 ---
  hw/xfree86/common/xf86Bus.c| 41 -
  hw/xfree86/common/xf86Bus.h|  4 
  hw/xfree86/common/xf86Events.c |  4 
  hw/xfree86/common/xf86Helper.c |  3 ---
  hw/xfree86/common/xf86Init.c   |  3 ---
  hw/xfree86/common/xf86PM.c |  3 ---
  hw/xfree86/common/xf86Priv.h   |  2 --
  hw/xfree86/common/xf86pciBus.c |  5 -
  hw/xfree86/doc/ddxDesign.xml   | 22 ++
  10 files changed, 2 insertions(+), 88 deletions(-)
 
 diff --git a/hw/xfree86/common/xf86.h b/hw/xfree86/common/xf86.h
 index 49ff35b..25e2190 100644
 --- a/hw/xfree86/common/xf86.h
 +++ b/hw/xfree86/common/xf86.h
 @@ -156,9 +156,6 @@ extern _X_EXPORT GDevPtr xf86GetDevFromEntity(int 
 entityIndex, int instance);
  extern _X_EXPORT void xf86RemoveEntityFromScreen(ScrnInfoPtr pScrn,
   int entityIndex);
  extern _X_EXPORT EntityInfoPtr xf86GetEntityInfo(int entityIndex);
 -extern _X_EXPORT Bool xf86SetEntityFuncs(int entityIndex, EntityProc init,
 - EntityProc enter, EntityProc leave,
 - void *);
  extern _X_EXPORT Bool xf86IsEntityPrimary(int entityIndex);
  extern _X_EXPORT ScrnInfoPtr xf86FindScreenForEntity(int entityIndex);
  
 diff --git a/hw/xfree86/common/xf86Bus.c b/hw/xfree86/common/xf86Bus.c
 index bd3e4e3..a520e12 100644
 --- a/hw/xfree86/common/xf86Bus.c
 +++ b/hw/xfree86/common/xf86Bus.c
 @@ -290,19 +290,6 @@ xf86IsEntityPrimary(int entityIndex)
  }
  
  Bool
 -xf86SetEntityFuncs(int entityIndex, EntityProc init, EntityProc enter,
 -   EntityProc leave, void *private)
 -{
 -if (entityIndex >= xf86NumEntities)
 -return FALSE;
 -xf86Entities[entityIndex]->entityInit = init;
 -xf86Entities[entityIndex]->entityEnter = enter;
 -xf86Entities[entityIndex]->entityLeave = leave;
 -xf86Entities[entityIndex]->private = private;
 -return TRUE;
 -}
 -
 -Bool
  xf86DriverHasEntities(DriverPtr drvp)
  {
  int i;
 @@ -519,30 +506,6 @@ xf86GetDevFromEntity(int entityIndex, int instance)
  }
  
  /*
 - * xf86AccessEnter() -- gets called to save the text mode VGA IO 
 - * resources when reentering the server after a VT switch.
 - */
 -void
 -xf86AccessEnter(void)
 -{
 -int i;
 -
 -for (i = 0; i < xf86NumEntities; i++)
 -if (xf86Entities[i]->entityEnter)
 -xf86Entities[i]->entityEnter(i, xf86Entities[i]->private);
 -}
 -
 -void
 -xf86AccessLeave(void)
 -{
 -int i;
 -
 -for (i = 0; i < xf86NumEntities; i++)
 -if (xf86Entities[i]->entityLeave)
 -xf86Entities[i]->entityLeave(i, xf86Entities[i]->private);
 -}
 -
 -/*
   * xf86PostProbe() -- Allocate all non conflicting resources
   * This function gets called by xf86Init().
   */
 @@ -566,10 +529,6 @@ xf86PostProbe(void)
  ))
   FatalError("Cannot run in framebuffer mode. Please specify busIDs for all framebuffer devices\n");
 -
 -for (i = 0; i < xf86NumEntities; i++)
 -if (xf86Entities[i]->entityInit)
 -xf86Entities[i]->entityInit(i, xf86Entities[i]->private);
  }
  
  int
 diff --git a/hw/xfree86/common/xf86Bus.h b/hw/xfree86/common/xf86Bus.h
 index c59625d..52b497a 100644
 --- a/hw/xfree86/common/xf86Bus.h
 +++ b/hw/xfree86/common/xf86Bus.h
 @@ -48,10 +48,6 @@ typedef struct {
  DriverPtr driver;
  int chipset;
  int entityProp;
 -EntityProc entityInit;
 -EntityProc entityEnter;
 -EntityProc entityLeave;
 -void *private;
  Bool active;
  Bool inUse;
  BusRec bus;
 diff --git a/hw/xfree86/common/xf86Events.c b/hw/xfree86/common/xf86Events.c
 index 16b3e28..27206d9 100644
 --- a/hw/xfree86/common/xf86Events.c
 +++ b/hw/xfree86/common/xf86Events.c
 @@ -492,8 +492,6 @@ xf86VTLeave(void)
   for (i = 0; i < xf86NumGPUScreens; i++)
   xf86GPUScreens[i]->LeaveVT(xf86GPUScreens[i]);
  
 -xf86AccessLeave();  /* We need this here, otherwise */
 -
  if (!xf86VTSwitchAway())
  goto switch_failed;
  
 @@ -519,7 +517,6 @@ xf86VTLeave(void)
  
  switch_failed:
  DebugF(xf86VTSwitch: Leave failed\n);
 -xf86AccessEnter();
   for (i = 0; i < xf86NumScreens; i++) {
   if (!xf86Screens[i]->EnterVT(xf86Screens[i]))
   FatalError("EnterVT failed for screen %d\n", i);
 @@ -564,7 +561,6 @@ xf86VTEnter(void)
  
  if (xorgHWAccess)
  

Re: [PATCH 19/19] x86emu: Undefine _NO_INLINE

2014-09-25 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Thu, 25 Sep 2014 13:37:35 -0400
 
 Never defined by the server.

Fairly certain it was never intended to be defined by the server, but
used as a compile-time option to make debugging easier.

 Signed-off-by: Adam Jackson a...@redhat.com
 ---
  hw/xfree86/x86emu/x86emu/x86emui.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/hw/xfree86/x86emu/x86emu/x86emui.h 
 b/hw/xfree86/x86emu/x86emu/x86emui.h
 index 5e20d97..01bd92a 100644
 --- a/hw/xfree86/x86emu/x86emu/x86emui.h
 +++ b/hw/xfree86/x86emu/x86emu/x86emui.h
 @@ -46,7 +46,7 @@
   * dramatically in this case).
   */
  
 -#if defined(__cplusplus) && !defined(_NO_INLINE)
 +#if defined(__cplusplus)
  #define  _INLINE inline
  #else
  #define  _INLINE static
 -- 
 1.9.3
 


Re: [PATCH] os: Don't listen to 'tcp' or 'unix' by default. Add '-listen' option.

2014-09-13 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Fri, 12 Sep 2014 15:19:53 -0700
 
 Mark Kettenis mark.kette...@xs4all.nl writes:
 
  Unconditionally disabling the unix listen ports by default might be
  a step too far.  Abstract sockets are only available on Linux.
 
 Yes, of course.
 
  So on other architectures (or at least the ones where LOCALCONN isn't
  defined) this would leave us with no listen ports at all.  I may have
  gotten lost in the #ifdef maze though...
 
 It's a terrible maze in there; I have no idea whether disabling unix
 will actually disable local connections on BSD or not; I fear the best
 way to find out is for you to just run the X server with --nolisten unix
 and see what happens...

Running with -nolisten unix on OpenBSD indeed disables local connections.


Re: [PATCH] os: Don't listen to 'tcp' or 'unix' by default. Add '-listen' option.

2014-09-13 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Fri, 12 Sep 2014 11:35:41 -0700
 
 This disables tcp and unix listen ports by default (the unix port is
 the non-abstract /tmp/.X11-unix port that xcb doesn't use). Then, it
 uses a new xtrans interface, TRANS(Listen), to provide a command line
 option to re-enable those if desired.

Just looked at OpenSSH, and it doesn't use the xtrans code to connect
to the server when doing X11 forwarding, but has its own
implementation instead.  This implementation only supports unix and
tcp and has no support for abstract sockets.  So I don't think you
want to disable unix listen ports by default even on Linux.


Re: [PATCH xinit 0/1] startx: Pass -nolisten tcp by default

2014-09-12 Thread Mark Kettenis
 From: Colin Harrison colin.harri...@virgin.net
 Date: Fri, 12 Sep 2014 09:37:05 +0100
 
 Hi,
 
 'this day and age'
 'this time and age'
 
 Time is good: it allows evolution (or erosion) to slowly happen.
 
 But I for one vote that X11 network transparency remains default.

In this day^H^H^Htime and age sane people would use ssh -X to
achieve X11 network transparency though.


Re: [PATCH xinit 0/1] startx: Pass -nolisten tcp by default

2014-09-12 Thread Mark Kettenis
 From: Hans de Goede hdego...@redhat.com
 Date: Fri, 12 Sep 2014 09:25:17 +0200
 
 Hi All,
 
 After doing the 1.3.4 release yesterday, I've started working on updating the
 Fedora packages to 1.3.4. While looking at our open bug list against xinit,
 I found one bug which is not yet resolved in 1.3.4 .
 
 This patch fixes this, I realize that this is a behavior change, and as such
 may be a bit controversial, but I really believe that in this day and age
 -nolisten tcp by default is the right thing to do.

You're probably right.  However instead of fixing this in each and
every bit of code that starts an X server, wouldn't it make more
sense to simply change the default in the X server itself and add the
-listen option there to override things?


Re: [PATCH xinit 0/1] startx: Pass -nolisten tcp by default

2014-09-12 Thread Mark Kettenis
 Date: Fri, 12 Sep 2014 12:17:24 +0200
 From: Hans de Goede hdego...@redhat.com
 
 Hi,
 
 On 09/12/2014 11:12 AM, Mark Kettenis wrote:
  From: Hans de Goede hdego...@redhat.com
  Date: Fri, 12 Sep 2014 09:25:17 +0200
 
  Hi All,
 
  After doing the 1.3.4 release yesterday, I've started working on updating 
  the
  Fedora packages to 1.3.4. While looking at our open bug list against xinit,
  I found one bug which is not yet resolved in 1.3.4 .
 
  This patch fixes this, I realize that this is a behavior change, and as 
  such
  may be a bit controversial, but I really believe that in this day and age
  -nolisten tcp by default is the right thing to do.
  
  You're probably right.  However instead of fixing this in each and
   every bit of code that starts an X server, wouldn't it make more
  sense to simply change the default in the X server itself and add the
  -listen option there to override things?
 
 I was thinking the same thing while working on this patch, the problem
 is that most bits of code starting the xserver have already been patched
 to start it with -nolisten tcp, and have their own config file options /
 cmdline options to override this.
 
 Changing the server would break all this, where as just changing startx
 keeps all of the existing other xserver starters working.

I don't see how this would break things.  Just make sure that
-nolisten tcp continues to be accepted by the xserver.  It will
become a no-op of course.  Unless perhaps you do something silly as
starting the xserver with -listen tcp -nolisten tcp.


Re: [PATCH] os: Don't listen to 'tcp' or 'unix' by default. Add '-listen' option.

2014-09-12 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Fri, 12 Sep 2014 11:35:41 -0700
 
 This disables tcp and unix listen ports by default (the unix port is
 the non-abstract /tmp/.X11-unix port that xcb doesn't use). Then, it
 uses a new xtrans interface, TRANS(Listen), to provide a command line
 option to re-enable those if desired.

Hi Keith,

Looks like you're on my side of the fence on this issue ;).

Unconditionally disabling the unix listen ports by default might be
a step too far.  Abstract sockets are only available on Linux.  So on
other architectures (or at least the ones where LOCALCONN isn't
defined) this would leave us with no listen ports at all.  I may have
gotten lost in the #ifdef maze though...

 Signed-off-by: Keith Packard kei...@keithp.com
 ---
  os/utils.c | 21 +
  1 file changed, 21 insertions(+)


Re: [PATCH 09/12] xfree86: Remove leaky /dev/mem checks from xf86OSInitVidMem

2014-07-29 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Tue, 29 Jul 2014 15:00:16 -0400
 
 This is mostly a no-op, the checks didn't have much effect since
 pciaccess didn't end up using the fd we opened.  Except for OpenBSD,
 where you have to pass that in from above, which is sort of a weird API
 decision.  So this will break there until libpciaccess is made less
 dumb.

Sorry, but there are some fundamental problems with libpciaccess:

1. It isn't possible to properly log messages from inside libpciaccess.

2. For our privsep'ed X server the devices that give access to the raw
   hardware need to be opened by the privileged process, but ideally,
   for security reasons, we only want to initialize libpciaccess in
   the unprivileged process.

3. On OpenBSD/sparc64 and OpenBSD/macppc, memory-mapped I/O is handled
   by the console driver.  But the console device can only be opened
   once by unprivileged processes if nobody else has it open.  So
   libpciaccess can't do this since the server already opens the
   device.

One way or another some way to pass a filedescriptor to use for memory
mapped io is needed, and the X server will need to do some initial setup.

We can certainly do some redesign of the interfaces here, but frankly
just keeping xf86OSInitVidMem() around for *BSD would not be such a bad
idea.  I believe os-support/bus/bsd_pci.c:osPciInit() could call
xf86OSInitVidMem() directly without problems.


 Signed-off-by: Adam Jackson a...@redhat.com
 ---
  hw/xfree86/os-support/bsd/alpha_video.c | 122 
 
  hw/xfree86/os-support/bsd/arm_video.c   |  52 --
  hw/xfree86/os-support/bsd/i386_video.c  | 104 ---
  hw/xfree86/os-support/hurd/hurd_video.c |   4 --
  4 files changed, 282 deletions(-)
 
 diff --git a/hw/xfree86/os-support/bsd/alpha_video.c 
 b/hw/xfree86/os-support/bsd/alpha_video.c
 index 236def6..fd8f823 100644
 --- a/hw/xfree86/os-support/bsd/alpha_video.c
 +++ b/hw/xfree86/os-support/bsd/alpha_video.c
 @@ -45,131 +45,9 @@
  #define MAP_FLAGS (MAP_FILE | MAP_SHARED)
  #endif
  
 -#ifndef __NetBSD__
 -extern unsigned long dense_base(void);
 -#else   /* __NetBSD__ */
 -static struct alpha_bus_window *abw;
 -static int abw_count = -1;
 -
 -static void
 -init_abw(void)
 -{
 -if (abw_count < 0) {
 -abw_count = alpha_bus_getwindows(ALPHA_BUS_TYPE_PCI_MEM, &abw);
 -if (abw_count <= 0)
 -FatalError("init_abw: alpha_bus_getwindows failed\n");
 -}
 -}
 -
 -static unsigned long
 -dense_base(void)
 -{
 -if (abw_count < 0)
 -init_abw();
 -
 -/* XXX check abst_flags for ABST_DENSE just to be safe? */
 -xf86Msg(X_INFO, "dense base = %#lx\n", abw[0].abw_abst.abst_sys_start);
 -return abw[0].abw_abst.abst_sys_start;
 -}
 -
 -#endif  /* __NetBSD__ */
 -
 -#define BUS_BASE dense_base()
 -
 -/***/
 -/* Video Memory Mapping section*/
 -/***/
 -
 -#ifdef __OpenBSD__
 -#define SYSCTL_MSG "\tCheck that you have set 'machdep.allowaperture=1'\n" \
 -  "\tin /etc/sysctl.conf and reboot your machine\n" \
 -  "\trefer to xf86(4) for details"
 -#endif
 -
 -static int devMemFd = -1;
 -
 -#ifdef HAS_APERTURE_DRV
 -#define DEV_APERTURE "/dev/xf86"
 -#endif
 -
 -/*
 - * Check if /dev/mem can be mmap'd.  If it can't print a warning when
 - * warn is TRUE.
 - */
 -static void
 -checkDevMem(Bool warn)
 -{
 -static Bool devMemChecked = FALSE;
 -int fd;
 -void *base;
 -
 -if (devMemChecked)
 -return;
 -devMemChecked = TRUE;
 -
 -#ifdef HAS_APERTURE_DRV
 -/* Try the aperture driver first */
 -if ((fd = open(DEV_APERTURE, O_RDWR)) >= 0) {
 -/* Try to map a page at the VGA address */
 -base = mmap((caddr_t) 0, 4096, PROT_READ | PROT_WRITE,
 -MAP_FLAGS, fd, (off_t) 0xA0000 + BUS_BASE);
 -
 -if (base != MAP_FAILED) {
 -munmap((caddr_t) base, 4096);
 -devMemFd = fd;
 -xf86Msg(X_INFO, "checkDevMem: using aperture driver %s\n",
 -DEV_APERTURE);
 -return;
 -}
 -else {
 -if (warn) {
 -xf86Msg(X_WARNING, "checkDevMem: failed to mmap %s (%s)\n",
 -DEV_APERTURE, strerror(errno));
 -}
 -}
 -}
 -#endif
 -if ((fd = open(DEV_MEM, O_RDWR)) >= 0) {
 -/* Try to map a page at the VGA address */
 -base = mmap((caddr_t) 0, 4096, PROT_READ | PROT_WRITE,
 -MAP_FLAGS, fd, (off_t) 0xA0000 + BUS_BASE);
 -
 -if (base != MAP_FAILED) {
 -munmap((caddr_t) base, 4096);
 -devMemFd = fd;
 -return;
 -}
 -else {
 -if (warn) {
 -

Re: [PATCH] modesetting: Support native primary plane rotation

2014-07-10 Thread Mark Kettenis
 From: Chris Wilson ch...@chris-wilson.co.uk
 Date: Wed,  9 Jul 2014 12:14:41 +0100
 
 With the advent of universal drm planes and the introduction of generic
 plane properties for rotations, we can query and program the hardware
 for native rotation support.
 
 NOTE: this depends upon the next release of libdrm to remove one
 opencoded define.
 
 v2: Use enum to determine primary plane, suggested by Pekka Paalanen.
 Use libobj for replacement ffs(), suggested by walter harms
 v3: Support combinations of rotations and reflections
 Eliminate use of ffs() and so remove need for libobj

Surely that means...
  
 +#ifndef HAVE_FFS
 +extern int ffs(unsigned int value);
 +#endif

This bit should be removed as well?


Re: [PATCH] modesetting: Support native primary plane rotation

2014-07-09 Thread Mark Kettenis
 Date: Wed, 09 Jul 2014 09:28:31 +0200
 From: walter harms wha...@bfs.de
 
 Am 09.07.2014 09:00, schrieb Chris Wilson:
  With the advent of universal drm planes and the introduction of generic
  plane properties for rotations, we can query and program the hardware
  for native rotation support.
  
  NOTE: this depends upon the next release of libdrm to remove some
  opencoded defines.
  
  Signed-off-by: Chris Wilson ch...@chris-wilson.co.uk
  ---
   configure.ac  |   2 +-
   src/drmmode_display.c | 223 
  +++---
   src/drmmode_display.h |   7 +-
   3 files changed, 199 insertions(+), 33 deletions(-)
  
  diff --git a/configure.ac b/configure.ac
  index 1c1a36d..0b4e857 100644
  --- a/configure.ac
  +++ b/configure.ac
  @@ -74,7 +74,7 @@ AM_CONDITIONAL(HAVE_XEXTPROTO_71, [ test 
  $HAVE_XEXTPROTO_71 = yes ])
   # Checks for header files.
   AC_HEADER_STDC
   
  -PKG_CHECK_MODULES(DRM, [libdrm >= 2.4.46])
  +PKG_CHECK_MODULES(DRM, [libdrm >= 2.4.54]) #.55 required for universal
  planes
   PKG_CHECK_MODULES([PCIACCESS], [pciaccess >= 0.10])
   AM_CONDITIONAL(DRM, test x$DRM = xyes)
   
  diff --git a/src/drmmode_display.c b/src/drmmode_display.c
  index c533324..aaeda39 100644
  --- a/src/drmmode_display.c
  +++ b/src/drmmode_display.c
  @@ -56,6 +56,11 @@
   
   #include driver.h
   
  +#define DRM_CLIENT_CAP_UNIVERSAL_PLANES 2
  +#define DRM_PLANE_TYPE_OVERLAY 0
  +#define DRM_PLANE_TYPE_PRIMARY 1
  +#define DRM_PLANE_TYPE_CURSOR  2
  +
   static struct dumb_bo *dumb_bo_create(int fd,
const unsigned width, const unsigned height,
const unsigned bpp)
  @@ -300,6 +305,136 @@ create_pixmap_for_fbcon(drmmode_ptr drmmode,
   
   #endif
   
  +static unsigned
  +rotation_index(unsigned rotation)
  +{
  +#if _SVID_SOURCE || _BSD_SOURCE || _POSIX_C_SOURCE >= 200809L ||
  _XOPEN_SOURCE >= 700
  +   return ffs(rotation) - 1;
  +#else
  +   int i;
  +
  +   for (i = 0; i < 32; i++) {
  +   if ((1 << i) == rotation)
  +   break;
  +   }
  +
  +   return i;
  +#endif
  +}
 
 
 perhaps it is better to provide an internal ffs() for systems that
 lack one?  Then you can use ffs() directly and leave the rest to
 configure.  That would result in a generic HAVE_FFS instead of a
 list of defines like this now.

Seriously though, what system that supports the modesetting interfaces
does actually lack ffs(3)?


Re: [PATCH] libx11:lcDefConv.c:fix use before check

2014-06-04 Thread Mark Kettenis
 Date: Wed, 04 Jun 2014 12:07:25 +0200
 From: walter harms wha...@bfs.de
 
 Do not use variables before checked for NULL.
 
 
 Signed-off-by: wharms wha...@bfs.de
 
 ---
  modules/lc/def/lcDefConv.c | 35 +--
  1 file changed, 25 insertions(+), 10 deletions(-)
 
 diff --git a/modules/lc/def/lcDefConv.c b/modules/lc/def/lcDefConv.c
 index 3cd5c22..2cae504 100644
 --- a/modules/lc/def/lcDefConv.c
 +++ b/modules/lc/def/lcDefConv.c
 @@ -181,14 +182,16 @@ def_wcstombs(
  XPointer *args,
  int num_args)
  {
 -const wchar_t *src = (const wchar_t *) * from;
 +const wchar_t *src;
  char  *dst = (char *) *to;
  State state = (State) conv->state;
  char ch[MB_LEN_MAX];
  int unconv = 0;
 
  if (from == NULL || *from == NULL)
 - return 0;
 +  return 0;
 +
 +src = (const wchar_t *) * from;

While there, you should probably remove the space after the '*'.
There are a couple more occurrences further down.


Re: [PATCH] libx11:lcDefConv.c:fix use before check

2014-06-04 Thread Mark Kettenis
 Date: Wed, 04 Jun 2014 13:28:35 +0200
 From: walter harms wha...@bfs.de
 
 Am 04.06.2014 12:23, schrieb Mark Kettenis:
  Date: Wed, 04 Jun 2014 12:07:25 +0200
  From: walter harms wha...@bfs.de
 
  Do not use variables before checked for NULL.
 
 
  Signed-off-by: wharms wha...@bfs.de
 
  ---
   modules/lc/def/lcDefConv.c | 35 +--
   1 file changed, 25 insertions(+), 10 deletions(-)
 
  diff --git a/modules/lc/def/lcDefConv.c b/modules/lc/def/lcDefConv.c
  index 3cd5c22..2cae504 100644
  --- a/modules/lc/def/lcDefConv.c
  +++ b/modules/lc/def/lcDefConv.c
  @@ -181,14 +182,16 @@ def_wcstombs(
   XPointer *args,
   int num_args)
   {
  -const wchar_t *src = (const wchar_t *) * from;
  +const wchar_t *src;
   char  *dst = (char *) *to;
   State state = (State) conv->state;
   char ch[MB_LEN_MAX];
   int unconv = 0;
 
   if (from == NULL || *from == NULL)
  -  return 0;
  +  return 0;
  +
  +src = (const wchar_t *) * from;
  
  While there, you should probably remove the space after the '*'.
  There are a couple more occurrences further down.
 
 you mean:
 src = (const wchar_t *) *from;  ?

yup


Re: [PATCH 0/9] Rewrite include/servermd.h

2014-05-22 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Wed, 21 May 2014 15:31:16 -0400
 
 We really don't want to be in the business of knowing about CPU
 architectures.  servermd.h is one of the worst offenders for that in
 the tree; this series attempts to clean out the gunk.  Probably the most
 contentious bit is assuming your compiler will actually define
 __BYTE_ORDER__, but gcc certainly does.

Unfortunately older versions of GCC don't define __BYTE_ORDER__.  In
particular 4.2.1 doesn't support this, and since this is the last
version released under GPLv2, it is still widely used.  Also,
__BYTE_ORDER__ was only recently added to clang.

Is providing a fallback based on AC_C_BIGENDIAN like Alan C. suggests
feasible?


Re: [PATCH 1/3] xfree86: Report DRI3 as a built-in module

2014-05-12 Thread Mark Kettenis
 From: Chris Wilson ch...@chris-wilson.co.uk
 Date: Mon, 12 May 2014 08:12:37 +0100
 
 This is so that drivers can do a runtime check that DRI3 is available,
 similar to existing runtime checks performed by the drivers for DRI and
 DRI2.

Does that run-time check actually work?  It seems any check based on
compiled_in_modules would detect DRI3 even on platforms that don't
support DRI3 or if the server was built using --disable-dri3.

 ---
  hw/xfree86/loader/loadmod.c | 1 +
  1 file changed, 1 insertion(+)
 
 diff --git a/hw/xfree86/loader/loadmod.c b/hw/xfree86/loader/loadmod.c
 index 092bf57..9f54929 100644
 --- a/hw/xfree86/loader/loadmod.c
 +++ b/hw/xfree86/loader/loadmod.c
 @@ -838,6 +838,7 @@ static const char *compiled_in_modules[] = {
  extmod,
  dri,
  dri2,
 +dri3,
  NULL
  };


Re: xserver ABI freeze policy (was: [PATCH] hw/xfree86: Restore API compatibility for cursor loading functions)

2014-04-29 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Mon, 28 Apr 2014 15:59:32 -0700
 
 Aaron Plattner aplatt...@nvidia.com writes:
 
  Changing the ABI at the last minute, even if you bump the ABI version 
  number, defeats the purpose of that.  In this case, I just added support 
  for ABI 17 so that driver that in theory will support the upcoming 
  xserver 1.16 can be released relatively soon.  If we bump the ABI to 18 
  now, then you'll make the changelog entry I added a lie.
 
 1.16 is scheduled to be released at mid-year, we've still got two months
 to go. The freeze process for the X server takes half of the development
 cycle, and we're now one month into that.

Right, we're not talking about changing the ABI because somebody is
trying to sneak in a new feature after feature freeze.  We're changing
the ABI to fix a mistake made earlier in the release cycle.  That
seems perfectly acceptable to me.  I'd even go as far as saying that
making such changes would be acceptable much closer to release time if
needed.

 I'm not willing to ship 1.16 with the API that was released at the
 feature freeze; it has a bug where existing drivers would
 build and mostly run, except that cursors would fail in mysterious ways.
 
 That's clearly not acceptable for the 1.16 release. We either need an
 API change that breaks compilation (so that users are forced to fix
 them), or a cross-version compatible API. A cross-version compatible API
 is obviously preferred as that will allow people to separately migrate
 driver packages to the new API as they please.

There are many open source drivers out there, and basically only one
binary driver.  And that binary driver is only provided for a subset
of the supported platforms.  As such API stability is much more
important than ABI stability.  But if Aaron/NVIDIA really wants to
stick to the current ABI, I'd say they should do the work of fixing
and testing all the open source drivers.


Re: [PATCH 0/4] Xorg.wrap: Misc cleanups and porting

2014-04-14 Thread Mark Kettenis
 From: Guillem Jover guil...@hadrons.org
 Date: Mon, 14 Apr 2014 18:13:21 +0200
 
 Hi!
 
 I've adapted the porting patches I submitted to the Debian Xorg wrapper
 long ago to the native one, and in the process fixed some other issues.
 
 I've only tested that it builds, and that it fails on a non-console as
 I've recently lost my GNU/kFreeBSD system.
 
 I guess porting this to OpenBSD/NetBSD might require using one of the
 wscons ioctls, and for Solaris KDGETMODE might work?

We (OpenBSD) don't need an Xorg wrapper, as we have a properly
privilege-separated Xorg that allows us to run X without root
privileges even on non-KMS hardware.


Re: libdrm: ERESTART undefined

2014-04-12 Thread Mark Kettenis
 Date: Sat, 12 Apr 2014 09:21:06 +0200
 From: Thomas Klausner w...@netbsd.org
 
 After updating libdrm in pkgsrc, I got a bug report from David Shao
 
 --- begin quote ---
 
 For current cvs pkgsrc on DragonFly 3.7-DEVELOPMENT and perhaps other
 BSDs, ERESTART is not defined for userland programs instead being
 defined in errno.h only for
 
 #if defined(_KERNEL) || defined(_KERNEL_STRUCTURES)
 
 In any case the DragonFly value of (-1) for ERESTART differs from
 Linux's. As DragonFly does not even implement KMS for VMWare vmx
 graphics its value can basically be anything as long as it is defined.
 Without some definition building libdrm 2.4.53 fails with
 
 Making all in libkms
   CC   linux.lo
   CC   intel.lo
   CC   dumb.lo
   CC   api.lo
   CC   vmwgfx.lo
 vmwgfx.c: In function 'vmwgfx_bo_create':
 vmwgfx.c:107:20: error: 'ERESTART' undeclared (first use in this function)
 vmwgfx.c:107:20: note: each undeclared identifier is reported only once for 
 each function it appears in
 
 --- end quote ---
  
 David suggests defining ERESTART to just any value if it's
 not defined, i.e.
 
 +#ifndef ERESTART
 +#define ERESTART 85
 +#endif
 
 Should I do that or are there better solutions?

Well, vmwgfx.c should almost certainly not be doing that ERESTART
check.  On *BSD and Linux, system calls that are interrupted by a
signal should be restarted automatically by the kernel or return
EINTR.  And libdrm already handles EINTR.

System V derived kernels behave a little bit different.  Here
interrupted system calls are restarted by the system call stubs in
libc.  So here the kernel does expose ERESTART and errno.h defines
it such that library code can use it to implement the system call
stubs.  The libkms code should still not see ERESTART, as the ioctl(2)
system call stub will hide it.

Now Linux itself doesn't actually use ERESTART for interrupted system
calls that need to be restarted.  Instead it has various ERESTART*
error codes.  But it does define ERESTART, probably for System V
compatibility.  The define is there in Linux 0.99.15, but not used
anywhere in the kernel code.  Unfortunately through the years some
misguided Linux kernel developers have started using ERESTART for
things that are unrelated to interrupted system calls.  For example
the lguest driver has:

/* Special case: Guest is 'dead' but wants a reboot. */

But as far as I can tell the vmwgfx driver never actually returns ERESTART.
Hence my claim that the right thing to do is to remove the check:

diff --git a/libkms/vmwgfx.c b/libkms/vmwgfx.c
index d594b3b..1667bd7 100644
--- a/libkms/vmwgfx.c
+++ b/libkms/vmwgfx.c
@@ -100,12 +100,9 @@ vmwgfx_bo_create(struct kms_driver *kms,
bo->base.pitch = width * 4;
bo->base.kms = kms;
 
-   do {
-   ret = drmCommandWriteRead(bo->base.kms->fd,
- DRM_VMW_ALLOC_DMABUF,
- &arg, sizeof(arg));
-   } while (ret == -ERESTART);
-
+   ret = drmCommandWriteRead(bo->base.kms->fd,
+ DRM_VMW_ALLOC_DMABUF,
+ &arg, sizeof(arg));
if (ret)
goto err_free;
 


Re: [PATCH libxtrans] Increase UNIX socket buffer size

2014-03-30 Thread Mark Kettenis
 Date: Sat, 29 Mar 2014 17:26:00 -0700
 From: Alan Coopersmith alan.coopersm...@oracle.com
 
 Do we need to do the same for the SO_RCVBUF as well?

Not really.  These connections are local by definition, so in principle
it doesn't really matter whether we queue on the sending side or on
the receiving side.  On Linux SO_RCVBUF on a UNIX socket is explicitly
documented as a no-op.  The BSD sockets implementation seems to use
the receive buffer limits only for control (ancillary) data, like file
descriptors.  Not 100% sure though; the code is a bit complex...

 Though looking at the pfiles output from Xorg on Solaris, it appears we
 default to mismatched sizes for some reason:
 
37: S_IFSOCK mode:0666 dev:621,0 ino:56871 uid:0 gid:0 size:0
O_RDWR|O_NONBLOCK
   SOCK_STREAM
   SO_SNDBUF(16384),SO_RCVBUF(5120)
   sockname: AF_UNIX /tmp/.X11-unix/X0
   peername: AF_UNIX
   peer: metacity[14049] zone: global[0]

Based on this asymmetry, it wouldn't surprise me if Solaris behaved
similarly to BSD.


Re: [PATCH] dix/dispatch: DoGetImage: Use a single buffer

2014-03-29 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Sat, 29 Mar 2014 10:38:01 -0700
 
 Daniel Kurtz djku...@chromium.org writes:
 
  DoGetImage chops up an image into IMAGE_BUFSIZE strips.
  This predates commit [0] (a pull from XFree86 to X.Org), at which time
  IMAGE_BUFSIZE was 8k.
 
 It's worked like this since before R1, which was before the OS layer
 resized output buffers.
 
  Since we are now allocating from the heap, there should no longer be
  necessary, or desirable to chop up the DoGetImage() buffer.
 
 Can you show us performance comparisons so that we can see if it
 helps/hurts? There are additional disasters in this code path which
 makes the answer less than obvious from the driver level...
 
 What I always wanted to do was get the OS layer to allocate space for
 the whole image and get the DDX to dump the data right into that, rather
 than allocating a temporary buffer and doing two copies. Something like
 
 void *
 ReserveOutputSpace(ClientPtr client, int size);
 
 However, if you look at the FlushClient function, it doesn't track the
 start of the output data to send to the client, so we're actually
 memmove'ing the data after each write.
 
 So, we should fix that as well; sticking an offset into the output
 buffer, and only memmove'ing the remaining data when the amount left in
 the buffer is a small amount.

Heh, I just happened to be in there yesterday.  The memmove'ing gets
incredibly insane if you're trying to move a large image through a
socket with a small socket buffer size (like the 4k that was passed
down to us by the CSRG).  A typical 8-megapixel image produced by a
digital camera takes tens of seconds when you send it in 4k chunks and
memmove the remainder of the image after each chunk, with CPU usage by
Xorg going through the roof.

I tried making the FlushClient() implementation a bit smarter by using
a circular buffer to avoid the memmove'ing.  Unfortunately that makes
resizing the output buffer more painful and introduces some additional
memcpy'ing there.  I managed to speed things up a bit, but things were
still too slow.  See the attached diff (against OpenBSD-current
xenocara, so basically Xorg 1.14.4).  Your ReserveOutputSpace() idea
would certainly help here as it would eliminate or at least reduce the
overhead of resizing the circular buffer.  Would Daniel's diff have
the same effect?

I ended up modifying libxtrans to set the socket buffer size to
something >= 64k to fix the performance problem I was investigating.
See the diff below.  I guess most Linux systems use a larger default
socket buffer size, in which case the performance problem might not be
so prominent.

 I guess my main question is whether you have a need to improve GetImage
 performance, or whether this is strictly a desire to clean up some
 code which is presumably sub-optimal?

In my case the problem shows up with Firefox, which apparently is
rendering images to the screen when downloading them, then grabs the
pixels with XGetImage() and re-renders using those grabbed pixels.
Seems utterly wasteful to me, and it is potentially an issue with the
old cairo embedded in Firefox.


Index: io.c
===
RCS file: /cvs/xenocara/xserver/os/io.c,v
retrieving revision 1.11
diff -u -p -r1.11 io.c
--- io.c24 Aug 2013 19:44:51 -  1.11
+++ io.c29 Mar 2014 18:24:23 -
@@ -96,9 +96,11 @@ typedef struct _connectionOutput {
 struct _connectionOutput *next;
 unsigned char *buf;
 int size;
+int start;
 int count;
 } ConnectionOutput;
 
+static void AppendToOutputBuffer(ConnectionOutputPtr, const char *, int);
 static ConnectionInputPtr AllocateInputBuffer(void);
 static ConnectionOutputPtr AllocateOutputBuffer(void);
 
@@ -705,6 +707,7 @@ WriteToClient(ClientPtr who, int count, 
 {
 OsCommPtr oc;
 ConnectionOutputPtr oco;
+static char padBuffer[3];
 int padBytes;
 const char *buf = __buf;
 
@@ -829,12 +832,9 @@ WriteToClient(ClientPtr who, int count, 
 
 NewOutputPending = TRUE;
 FD_SET(oc->fd, &OutputPending);
-memmove((char *) oco->buf + oco->count, buf, count);
-oco->count += count;
-if (padBytes) {
-memset(oco->buf + oco->count, '\0', padBytes);
-oco->count += padBytes;
-}
+AppendToOutputBuffer(oco, buf, count);
+if (padBytes)
+   AppendToOutputBuffer(oco, padBuffer, padBytes);
 return count;
 }
 
@@ -854,7 +854,7 @@ FlushClient(ClientPtr who, OsCommPtr oc,
 ConnectionOutputPtr oco = oc->output;
 int connection = oc->fd;
 XtransConnInfo trans_conn = oc->trans_conn;
-struct iovec iov[3];
+struct iovec iov[4];
 static char padBuffer[3];
 const char *extraBuf = __extraBuf;
 long written;
@@ -904,11 +904,17 @@ FlushClient(ClientPtr who, OsCommPtr oc,
before = 0; \
}
 
-InsertIOV((char *) oco->buf, oco->count)
-InsertIOV((char *) extraBuf, extraCount)
-

[PATCH libxtrans] Increase UNIX socket buffer size

2014-03-29 Thread Mark Kettenis
From: Mark Kettenis kette...@openbsd.org

Some systems provide a really small default buffer size for UNIX sockets.
Bump it up to 64k if necessary such that large transfers (such as
XGetImage() on a 8-megapixel image) don't take tens of seconds.
---
 Xtranssock.c | 21 +
 1 file changed, 21 insertions(+)

diff --git a/Xtranssock.c b/Xtranssock.c
index fdf1dd7..6cde146 100644
--- a/Xtranssock.c
+++ b/Xtranssock.c
@@ -445,6 +445,27 @@ TRANS(SocketOpen) (int i, int type)
 }
 #endif
 
+/*
+ * Some systems provide a really small default buffer size for
+ * UNIX sockets.  Bump it up a bit such that large transfers don't
+ * proceed at glacial speed.
+ */
+#ifdef SO_SNDBUF
+if (Sockettrans2devtab[i].family == AF_UNIX)
+{
+   SOCKLEN_T len = sizeof (int);
+   int val;
+
+   if (getsockopt (ciptr->fd, SOL_SOCKET, SO_SNDBUF,
+   (char *) &val, &len) == 0 && val < 64 * 1024)
+   {
+   val = 64 * 1024;
+   setsockopt (ciptr->fd, SOL_SOCKET, SO_SNDBUF,
+   (char *) &val, sizeof (int));
+   }
+}
+#endif
+
 return ciptr;
 }
 
-- 
1.9.0



Re: [PATCH] dix/dispatch: DoGetImage: Use a single buffer

2014-03-29 Thread Mark Kettenis
 From: Keith Packard kei...@keithp.com
 Date: Sat, 29 Mar 2014 12:40:07 -0700

  In my case the problem shows up with Firefox, which apparently is
  rendering images to the screen when downloading them, then grabs the
  pixels with XGetImage() and re-renders using those grabbed pixels.
  Seems utterly retarded to me, and it is potentially an issue with the
  old cairo embedded by Firefox.
 
 Sounds like we might actually want to fix this then.
 
 You might  try the combination of your changes with Daniel's patch and
 see what you get; doing the whole GetImage in a single WriteToClient
 should help a bunch, especially when mixed with your change.

Hmm, I don't think Daniel's message ever made it to the list though.

I did send the libxtrans diff as a formal patch (now against git HEAD)
to the list, as I'm sure it is an essential part of the puzzle.


Re: [PATCH xinit v2 1/3] Remove unixware / sco support

2014-03-27 Thread Mark Kettenis
 From: Hans de Goede hdego...@redhat.com
 Date: Thu, 27 Mar 2014 12:55:55 +0100
 
 We don't support SCO / Unixware anymore, so lets remove the SCO / Unixware
 specific bits from startx and xinitrc
 
 Signed-off-by: Hans de Goede hdego...@redhat.com

Reviewed-by: Mark Kettenis kette...@openbsd.org

 ---
  startx.cpp  | 53 +
  xinitrc.cpp | 43 ---
  2 files changed, 1 insertion(+), 95 deletions(-)
 
 diff --git a/startx.cpp b/startx.cpp
 index c595b84..b7a29f9 100644
 --- a/startx.cpp
 +++ b/startx.cpp
 @@ -14,7 +14,7 @@ XCOMM
  unset DBUS_SESSION_BUS_ADDRESS
  unset SESSION_MANAGER
  
 -#if defined(__SCO__) || defined(__UNIXWARE__) || defined(__APPLE__)
 +#ifdef __APPLE__
  
  XCOMM Check for /usr/bin/X11 and BINDIR in the path, if not add them.
  XCOMM This allows startx to be placed in a place like /usr/bin or 
 /usr/local/bin
 @@ -33,11 +33,7 @@ case $PATH in
  esac
  
  XCOMM Now the old compiled path
 -#ifdef __APPLE__
  oldbindir=/usr/X11R6/bin
 -#else
 -oldbindir=/usr/bin/X11
 -#endif
  
  if [ -d $oldbindir ] ; then
  case $PATH in
 @@ -51,24 +47,8 @@ XCOMM so export the new PATH just in case the user changes 
 the shell
  export PATH
  #endif
  
 -#if defined(__SCO__) || defined(__UNIXWARE__)
 -XCOMM Set up the XMERGE env var so that dos merge is happy under X
 -
 -if [ -f /usr/lib/merge/xmergeset.sh ]; then
 - . /usr/lib/merge/xmergeset.sh
 -elif [ -f /usr/lib/merge/console.disp ]; then
 - XMERGE=`cat /usr/lib/merge/console.disp`
 - export XMERGE
 -fi
 -
 -userclientrc=$HOME/.startxrc
 -sysclientrc=LIBDIR/sys.startxrc
 -scouserclientrc=$HOME/.xinitrc
 -scosysclientrc=XINITDIR/xinitrc
 -#else
  userclientrc=$HOME/.xinitrc
  sysclientrc=XINITDIR/xinitrc
 -#endif
  
  userserverrc=$HOME/.xserverrc
  sysserverrc=XINITDIR/xserverrc
 @@ -145,21 +125,6 @@ done
  defaultdisplay=:$d
  unset d
  
 -#if defined(__SCO__) || defined(__UNIXWARE__)
 -
 -XCOMM SCO -t option: do not start an X server
 -case $1 in
 -  -t)   if [ -n "$DISPLAY" ]; then
 -REMOTE_SERVER=TRUE
 -shift
 -else
 -echo "DISPLAY environment variable not set"
 -exit 1
 -fi
 -;;
 -esac
 -#endif
 -
  whoseargs=client
  while [ x$1 != x ]; do
  case $1 in
 @@ -209,12 +174,6 @@ if [ x$client = x ]; then
  client=$userclientrc
  elif [ -f $sysclientrc ]; then
  client=$sysclientrc
 -#if defined(__SCO__) || defined(__UNIXWARE__)
 -elif [ -f $scouserclientrc ]; then
 -client=$scouserclientrc
 -elif [ -f $scosysclientrc ]; then
 -client=$scosysclientrc
 -#endif
  fi
  fi
  fi
 @@ -319,21 +278,11 @@ EOF
  done
  fi
  
 -#if defined(__SCO__) || defined(__UNIXWARE__)
 -if [ $REMOTE_SERVER = TRUE ]; then
 -exec SHELL_CMD ${client}
 -else
 -XINIT $client $clientargs -- $server $display $serverargs
 -fi
 -#else
 -
  #if defined(__APPLE__) || defined(__CYGWIN__)
  eval XINIT \$client\ $clientargs -- \$server\ $display $serverargs
  #else
  XINIT $client $clientargs -- $server $display $serverargs
  #endif
 -
 -#endif
  retval=$?
  
  if [ x$enable_xauth = x1 ] ; then
 diff --git a/xinitrc.cpp b/xinitrc.cpp
 index 81c238b..14d3cbc 100644
 --- a/xinitrc.cpp
 +++ b/xinitrc.cpp
 @@ -41,49 +41,6 @@ fi
  
  XCOMM start some nice programs
  
 -#if defined(__SCO__) || defined(__UNIXWARE__)
 -if [ -r /etc/default/xdesktops ]; then
 -  . /etc/default/xdesktops
 -fi
 -
 -if [ -r $HOME/.x11rc ]; then
 -  . $HOME/.x11rc
 -else
 -  if [ -r /etc/default/X11 ]; then
 -  . /etc/default/X11
 -  fi
 -fi
 -
 -#if defined(__SCO__)
 -if [ -n $XSESSION ]; then
 -  case $XSESSION in
 -[Yy][Ee][Ss])
 -  [ -x /usr/bin/X11/scosession ] && exec /usr/bin/X11/scosession
 -  ;;
 -  esac
 -fi
 -
 -if [ -n $XDESKTOP ]; then
 -  exec `eval echo $$XDESKTOP`
 -else
 -  if [ -x /usr/bin/X11/pmwm -a -x /usr/bin/X11/scoterm ]; then
 -/usr/bin/X11/scoterm 2> /dev/null &
 -exec /usr/bin/X11/pmwm 2> /dev/null
 -  fi
 -fi
 -#elif defined(__UNIXWARE__)
 -if [ -n $XDESKTOP ]; then
 -  exec `eval echo $$XDESKTOP`
 -else
 -  if [ -x /usr/X/bin/pmwm ]; then
 -exec /usr/X/bin/pmwm 2> /dev/null
 -  fi
 -fi
 -#endif
 -
 -XCOMM This is the fallback case if nothing else is executed above
 -#endif /* !defined(__SCO__)   !defined(__UNIXWARE__) */
 -
  if [ -d XINITDIR/xinitrc.d ] ; then
   for f in XINITDIR/xinitrc.d/?*.sh ; do
   [ -x "$f" ] && . "$f"
 -- 
 1.9.0
 


Re: [PATCH xinit v2 3/3] startx: Under Linux start X on the current VT

2014-03-27 Thread Mark Kettenis
 From: Hans de Goede hdego...@redhat.com
 Date: Thu, 27 Mar 2014 12:55:57 +0100
 
 When we let X allocate a new VT, systemd-logind will not recognize any
 processes running on this VT as belonging to a valid session (since there
 was no pam session opened on that tty).
 
 This causes problems like PolicyKit denials for these processes.
 
 ConsoleKit under Linux has been deprecated for a few years now and is no
 longer being maintained, so simply make this the default under Linux.

What you're basically saying is that on a modern Linux systems, the
historic behaviour where X allocates a new VT and occupies it is no
longer wanted.  This raises several questions:

1. Is there consensus among the various Linux distros that this really
   is the desired default behaviour?

2. Doesn't this really mean that the default behaviour of the X server
   itself should be changed to take over the VT it is running on
   instead of allocating a new one?  An excellent opportunity to
   remove code instead of adding more hacks I'd say ;).

 Note we do not pass in the vt if the user has specified an alternative server
 to start, as the vtX argument is only valid for the Xorg server, likewise we
 omit it if the user has specified any other server arguments.

If I read the shell code correctly you're also not passing the vtX
argument if the user specifies any arguments to be used with the
standard server.  For example,

  startx -- -depth 16

will not pass the necessary vtX argument.
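For illustration, the tty-to-vt mapping such a patch needs amounts to something like the sketch below; `tty_to_vt` is a hypothetical helper, not code from the patch, which derives a vtN server argument only when the current tty is a real Linux console.

```shell
# Hypothetical helper: derive an Xorg vtN argument from a tty device
# path, succeeding only for a Linux console (tty1..ttyNN).
tty_to_vt() {
    case "$1" in
        /dev/tty[0-9] | /dev/tty[0-9][0-9])
            echo "vt${1#/dev/tty}"
            ;;
        *)
            return 1
            ;;
    esac
}

tty_to_vt /dev/tty3                           # prints: vt3
tty_to_vt /dev/pts/0 || echo "not a console"  # prints: not a console
```

In startx this would be fed from `tty` (or `fgconsole`), and the resulting argument appended only when the default server with no extra arguments is used.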

Also:

 ---
  startx.cpp | 10 ++
  1 file changed, 10 insertions(+)
 
 diff --git a/startx.cpp b/startx.cpp
 index b7a29f9..8426734 100644
 --- a/startx.cpp
 +++ b/startx.cpp
 @@ -187,6 +187,16 @@ XCOMM process server arguments
  if [ x$server = x ]; then
  server=$defaultserver
  
 +#ifdef __linux__
 +XCOMM When starting the defaultserver start X on the current tty to avoid
 +XCOMM the startx session being seen as inactive: RHBZ#820675

This sort of vendor-specific bug reference isn't very helpful, is it?


Re: [PATCH v2 2/2] xf86LogInit: log to XDG_DATA_HOME when not running as root

2014-03-26 Thread Mark Kettenis
 From: Hans de Goede hdego...@redhat.com
 Date: Wed, 26 Mar 2014 12:24:50 +0100
 
 When no logfile was specified (xf86LogFileFrom == X_DEFAULT) and
 we're not running as root, log to $XDG_DATA_HOME/xorg/Xorg.#.log, as
 Xorg won't be able to log to the default /var/log/... when it is not
 running as root.

Having log files in .../share/... feels wrong.
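For context, a small sketch of the path resolution the patch implies; per
the XDG Base Directory spec, $XDG_DATA_HOME defaults to $HOME/.local/share
when unset (the function name here is hypothetical):

```shell
# Sketch of the implied log directory resolution. Note XDG_DATA_HOME is
# specified for user data files, which is why logs there can feel wrong.
xorg_log_dir () {
    printf '%s/xorg\n' "${XDG_DATA_HOME:-$HOME/.local/share}"
}
```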


Re: [PATCH:libxtrans 1/2] Add missing headers for free() and strlen().

2014-03-26 Thread Mark Kettenis
 From: Thomas Klausner w...@netbsd.org
 Date: Wed, 26 Mar 2014 12:49:31 +0100
 
 Signed-off-by: Thomas Klausner w...@netbsd.org

Reviewed-by: Mark Kettenis kette...@openbsd.org

 ---
  Xtrans.c | 2 ++
  1 file changed, 2 insertions(+)
 
 diff --git a/Xtrans.c b/Xtrans.c
 index 735d7b8..9a6dfbc 100644
 --- a/Xtrans.c
 +++ b/Xtrans.c
 @@ -48,6 +48,8 @@ from The Open Group.
   */
  
  #include <ctype.h>
 +#include <stdlib.h>
 +#include <string.h>
  
  /*
   * The transport table contains a definition for every transport (protocol)
 -- 
 1.9.0
 


Re: [PATCH libxtrans v2] configure: Also add -D_DEFAULT_SOURCE to .pc cflags to shut up glibc warnings

2014-03-25 Thread Mark Kettenis
 From: Hans de Goede hdego...@redhat.com
 Date: Tue, 25 Mar 2014 15:16:57 +0100
 
 I've discussed this with the glibc developers, and the preferred way of fixing
 this is by also defining _DEFAULT_SOURCE, which is the new way of stating
 _BSD_SOURCE / _SVID_SOURCE.

Sounds like a reasonable compromise to me.  Still think that what the
glibc developers did is unhelpful though.

Reviewed-by: Mark Kettenis kette...@openbsd.org


Re: [PATCH xf86-video-ati] Add support for server managed fds

2014-03-15 Thread Mark Kettenis
 From: Hans de Goede hdego...@redhat.com
 Date: Tue, 11 Mar 2014 11:09:19 +0100
 
 Signed-off-by: Hans de Goede hdego...@redhat.com
 ---
  src/radeon_kms.c   | 60 
 +-
  src/radeon_probe.c |  5 +
  src/radeon_probe.h |  2 ++
  3 files changed, 57 insertions(+), 10 deletions(-)
 
 diff --git a/src/radeon_kms.c b/src/radeon_kms.c
 index 4a6c38e..6c5994c 100644
 --- a/src/radeon_kms.c
 +++ b/src/radeon_kms.c
 @@ -180,7 +180,11 @@ static void RADEONFreeRec(ScrnInfoPtr pScrn)
  pRADEONEnt = pPriv->ptr;
  pRADEONEnt->fd_ref--;
  if (!pRADEONEnt->fd_ref) {
 -drmClose(pRADEONEnt->fd);
 +#ifdef XF86_PDEV_SERVER_FD
 +if (!(pRADEONEnt->platform_dev &&
 +pRADEONEnt->platform_dev->flags & XF86_PDEV_SERVER_FD))
 +#endif
 +drmClose(pRADEONEnt->fd);
  pRADEONEnt->fd = 0;
  }
  }
 @@ -599,6 +603,15 @@ static Bool radeon_open_drm_master(ScrnInfoPtr pScrn)
   goto out;
  }
  
 +#if defined(ODEV_ATTRIB_FD)
 +if (pRADEONEnt->platform_dev) {
 +info->dri2.drm_fd = xf86_get_platform_device_int_attrib(
 +pRADEONEnt->platform_dev, ODEV_ATTRIB_FD, -1);
 +if (info->dri2.drm_fd != -1)
 +goto got_fd;

Sorry, but this is firmly above my spaghetti-style goto threshold.

Same issue with most/all of the other driver conversions you did.


Re: [PATCH 02/10] fb: Use #error instead of a syntax error

2014-03-10 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Mon, 10 Mar 2014 11:04:26 -0400
 
 Signed-off-by: Adam Jackson a...@redhat.com

Reviewed-by: Mark Kettenis kette...@openbsd.org

 ---
  fb/fb.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/fb/fb.h b/fb/fb.h
 index f63aa4f..206a3bc 100644
 --- a/fb/fb.h
 +++ b/fb/fb.h
 @@ -84,7 +84,7 @@
  #endif
  
  #if FB_SHIFT < LOG2_BITMAP_PAD
 -error FB_SHIFT must be >= LOG2_BITMAP_PAD
 +#error FB_SHIFT must be >= LOG2_BITMAP_PAD
  #endif
  #define FB_UNIT  (1 << FB_SHIFT)
  #define FB_HALFUNIT (1 << (FB_SHIFT-1))
 -- 
 1.8.5.3


Re: [PATCH 01/10] fb: Modernize the FbBits typedef for FB_SHIFT == 6

2014-03-10 Thread Mark Kettenis
 From: Adam Jackson a...@redhat.com
 Date: Mon, 10 Mar 2014 11:04:25 -0400
 
 Signed-off-by: Adam Jackson a...@redhat.com

Not immediately obvious to me how you get stdint.h.  Perhaps you
should explicitly include it?

 ---
  fb/fb.h | 15 +--
  1 file changed, 1 insertion(+), 14 deletions(-)
 
 diff --git a/fb/fb.h b/fb/fb.h
 index 9057767..f63aa4f 100644
 --- a/fb/fb.h
 +++ b/fb/fb.h
 @@ -106,20 +106,7 @@ error FB_SHIFT must be >= LOG2_BITMAP_PAD
  #define FbBitsStrideToStipStride(s) (((s) << (FB_SHIFT - FB_STIP_SHIFT)))
  #define FbFullMask(n)   ((n) == FB_UNIT ? FB_ALLONES : ((((FbBits) 1) << n) - 1))
  #if FB_SHIFT == 6
 -#ifdef WIN32
 -typedef unsigned __int64 FbBits;
 -#else
 -#if defined(__alpha__) || defined(__alpha) || \
 -  defined(ia64) || defined(__ia64__) || \
 -  defined(__sparc64__) || defined(_LP64) || \
 -  defined(__s390x__) || \
 -  defined(amd64) || defined (__amd64__) || \
 -  defined (__powerpc64__)
 -typedef unsigned long FbBits;
 -#else
 -typedef unsigned long long FbBits;
 -#endif
 -#endif
 +typedef uint64_t FbBits;
  #endif
  
  #if FB_SHIFT == 5
 -- 
 1.8.5.3