Re: Video standards

2024-05-03 Thread Pekka Paalanen
On Thu, 11 Apr 2024 22:22:01 -0300
salsaman  wrote:

> Sorry for the delay, there was a lot to respond to, most of it not relevant
> to the XDG discussion. I suggest we limit discussion on the mailing list to
> whatever is relevant. Pekka, if you want to continue this discussion, then
> we can chat in private off the mailing list.
> 

Sorry on my behalf as well; I've been spread too thin lately.

> 
> On Mon, 8 Apr 2024 at 08:42, Pekka Paalanen 
> wrote:
> 
> > On Fri, 5 Apr 2024 19:16:55 -0300
> > salsaman  wrote:
> >  
> > > On Fri, 5 Apr 2024 at 12:57, Pekka Paalanen <
> > pekka.paala...@haloniitty.fi>
> > > wrote:
> > >  
> > > > On Fri, 5 Apr 2024 08:28:27 -0300
> > > > salsaman  wrote:
> > > >  
> > > > > I don't think you are paying enough attention to the main points. It
> > > > > is not simply a case of extending the fourcc values to include more.
> > > > > If I didn't make it clear enough, the whole fourcc system is obscure,
> > > > > inadequate, ambiguous. The only reason ever to use it would be when
> > > > > you don't have metadata and you are forced to encode the format in
> > > > > the first 4 bytes.


> Now to get back to my proposal, all I am suggesting is an agreed-upon set
> of constants. It shouldn't stop anyone from doing anything they would
> normally do; what I am suggesting now is that projects that want to provide
> full interoperability provide an API to convert between project constants
> and XDG constants.  My preference would be to implement the conversions as
> macros in a compatibility header to avoid having to link with libraries.
> Then it is a simple matter of, if I wanted to send metadata to Wayland, I
> translate to XDG constants using my compatibility macro, then from XDG
> constants to Wayland values using the Wayland XDG compatibility API. libav
> can implement their own compatibility header, or someone could contribute a
> patch. Pipewire can make an XDG compat header and now suddenly all these
> apps and frameworks become compatible with each other! Due to the network
> effect, each new project which participates increases the number of
> possibilities exponentially.

...

> 
> None of that will change; all I am suggesting is adding glue to help the
> building blocks gel together better.
> All of this is optional; nobody will be forced to include this project <->
> XDG compatibility, but as more projects join, the advantages of
> participating increase.
> 
> Just to reiterate - nothing internal to a project need change in any way.
> Projects participate by publishing external APIs for two-way conversion.
> We are talking only about constant enumerations. The meanings will be
> defined with reference to existing standards, so XDG_VIDEO_GAMMA_BT709
> could have value 2, and it is defined as representing the gamma transfer
> function specified in the bt709 standard.
> 
> Then suppose in my project I have WEED_GAMMA_BT709 with value 3; then I
> make a compat macro for gamma, XDG_GAMMA_TO_PROJECT(gamma_type), which,
> given value 2, returns 3, and the corresponding PROJECT_GAMMA_TO_XDG
> converts 3 to 2.
> 
> It's that simple.
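For concreteness, a minimal C sketch of what such a compatibility header could look like; the XDG_* and WEED_* names and values here are the invented examples from this thread, not any published standard:

    /* weed_xdg_compat.h - hypothetical two-way conversion macros,
     * using the example values given above (XDG bt709 gamma = 2,
     * Weed bt709 gamma = 3). */
    #define XDG_VIDEO_GAMMA_LINEAR  1
    #define XDG_VIDEO_GAMMA_BT709   2

    #define WEED_GAMMA_LINEAR       1
    #define WEED_GAMMA_BT709        3

    /* Unknown values map to 0 so callers can detect a failed match. */
    #define XDG_GAMMA_TO_PROJECT(g) \
        ((g) == XDG_VIDEO_GAMMA_BT709  ? WEED_GAMMA_BT709  : \
         (g) == XDG_VIDEO_GAMMA_LINEAR ? WEED_GAMMA_LINEAR : 0)

    #define PROJECT_GAMMA_TO_XDG(g) \
        ((g) == WEED_GAMMA_BT709  ? XDG_VIDEO_GAMMA_BT709  : \
         (g) == WEED_GAMMA_LINEAR ? XDG_VIDEO_GAMMA_LINEAR : 0)

Being plain macros in a header, these conversions need no library to link against, which matches the preference stated above.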

Right, as long as the things are enumerated rather than numeric, and
the common enumerations are each for strictly orthogonal aspects of
color encoding. It gets complicated when one enumeration combines
things A and B, and another enumeration combines things B and C
instead.
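To illustrate that complication with invented C enums (none of these names are real): if one enumeration combines A and B while another combines B and C, a one-to-one conversion macro is impossible without a third input:

    /* Hypothetical enums, for illustration only. */
    enum foo_format {           /* combines layout (A) and matrix (B),
                                 * like the 'UYVY' vs 'HDYC' fourccs */
        FOO_UYVY_BT601,
        FOO_UYVY_BT709,
    };

    enum bar_encoding {         /* combines matrix (B) and range (C) */
        BAR_BT601_LIMITED,
        BAR_BT601_FULL,
        BAR_BT709_LIMITED,
        BAR_BT709_FULL,
    };

    /* A foo_format alone cannot select a bar_encoding: the range (C)
     * must come from somewhere else, and the layout (A) is lost. */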

What about all the practicalities of allowing lots of programs to
interoperate:

- Defining an API or IPC protocol between programs so that they can
  actually pass messages and images from one to another?

- How to find available sources and sinks in a system?

- How to decide where to connect, which program feeds into which program?

- How to negotiate a common set of image parameters so that both sides
  of a connection can handle them?


> > > >  
> > > > > Colorimetry is only relevant when displaying on a monitor. In the
> > > > > video world we just have red, green and blue (plus alpha, y, u and
> > > > > v). These are just labels for the colour channels, mapping them to
> > > > > bit formats.
> > > >
> > > > That is a very surprising opinion. Have you worked on HDR imagery?
> > > > Or wide color gamut?
> > > >  
> > >
> > > As I have mentioned several times, these are display output parameters.
> > > The only details which are relevant are the yuv/rgb conversion constants
> > > and the gamma transfer values, Wit

Re: Video standards

2024-04-08 Thread Pekka Paalanen
On Fri, 5 Apr 2024 19:16:55 -0300
salsaman  wrote:

> On Fri, 5 Apr 2024 at 12:57, Pekka Paalanen 
> wrote:
> 
> > On Fri, 5 Apr 2024 08:28:27 -0300
> > salsaman  wrote:
> >  
> > > I don't think you are paying enough attention to the main points. It is
> > > not simply a case of extending the fourcc values to include more. If I
> > > didn't make it clear enough, the whole fourcc system is obscure,
> > > inadequate, ambiguous. The only reason ever to use it would be when you
> > > don't have metadata and you are forced to encode the format in the
> > > first 4 bytes.
> >
> > Right. You must be talking about some other fourcc system. There are
> > many of them, and some combine multiple orthogonal things into a single
> > enumeration, which then becomes very difficult to extend and work with.
> >
> > drm_fourcc.h is not one of those.
> >  
> 
> I am talking about any system which tries to enumerate palettes (pixel
> formats) in four bytes in a non-sequential way.
> In my own system (Weed) for example, all RGB palettes are in the range 1 -
> 511, yuv palettes are 512 - 1023, alpha are 1024 +
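Sketched as C range checks, assuming the Weed ranges just described (these helper macros are illustrative, not Weed's actual API):

    /* Hypothetical classification by numeric range. */
    #define PALETTE_IS_RGB(p)    ((p) >= 1    && (p) <= 511)
    #define PALETTE_IS_YUV(p)    ((p) >= 512  && (p) <= 1023)
    #define PALETTE_IS_ALPHA(p)  ((p) >= 1024)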

Interesting use of the term "palette". I've never heard of it being
used synonymously with "pixel format". The only usage of "palette" I've
seen so far in the context of digital imagery is a mapping from
color-index values to color channel value tuples, a form of look-up
table.

With such profound disconnect on terminology, it is no wonder we
cannot seem to be able to communicate efficiently. Forgive me the
lengthy writings below, I'm just trying to avoid misunderstandings. You
and I obviously have very different underlying assumptions of what we
are even doing.

> In fact this header is enough to define every possible palette; there are
> standard enumerations for the most commonly used palettes, and
> advanced palettes allow for the composition of new ones. In there also I
> have symbolic names for gamma types and yuv details.
> 
> Interlacing and flags for pre/post alpha are kept in another header.
> 
> >
> > Metadata is always necessary anyway, either implied or explicit.
> >  
> 
> Exactly, so I don't know why you keep mentioning fourcc as if it were some
> kind of complete solution.

It's not complete. It's a building block.

drm_fourcc.h is a very widely used standard, so it would be better to
build on it than to replace it. drm_fourcc.h pixel format system does
not conflict with the addition of metadata.

It is very common to allocate image buffers using specific pixel format
and format modifier (and width, height, stride), because those are
necessary for computing the amount of memory needed for an image. Other
metadata does not affect the amount of memory, or the layout of the
memory, so it is natural to keep this and the other metadata
independent of each other.
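As a hedged illustration of that allocation arithmetic for a simple single-plane, linearly laid out format (multi-planar formats and format modifiers need more than this):

    #include <stddef.h>
    #include <stdint.h>

    /* Bytes needed for a single-plane linear image. The stride (bytes
     * from one row to the next) already encodes the pixel format's
     * bytes per pixel plus any row padding. */
    static size_t image_size_bytes(uint32_t height, uint32_t stride)
    {
        return (size_t)height * stride;
    }

    /* e.g. 1920x1080 XRGB8888, no row padding:
     * stride = 1920 * 4 = 7680; size = 1080 * 7680 = 8294400 bytes */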

It has also been very common to have all the other metadata implicit,
especially colorimetry. For computer graphics, that has been the sRGB
assumption. As long as the assumption was good enough, no other
metadata was needed, and as a result the ecosystem is well developed to
communicate and use pixel formats and more recently also format
modifiers which are crucial for hardware accelerated graphics and video.

Therefore, we have a huge amount of infrastructure that can handle
pixel formats, either drm_fourcc.h or equivalent. If we were to switch
to a fundamentally different pixel format system, all that
infrastructure would need to be replaced. That's unlikely to happen.
What would happen is that if something uses a different pixel format
system, it will just end up being converted to and from drm_fourcc.h.
That adds overhead and room for mistakes, and is possibly a
fundamentally imperfect translation, while I have not yet understood
the benefits.

This is why I am so keenly interested in what problem you have set out
to solve by introducing a new pixel format system. The benefits need to
be considerable to exceed the disadvantages.

I see the ability to combine independent building blocks to build a
complete image description as an advantage, because there will always
be something new in the future to add, that has previously been either
ignored, assumed, or not known of.

> >  
> > > Colorimetry is only relevant when displaying on a monitor. In the video
> > > world we just have red, green and blue (plus alpha, y, u and v). These
> > > are just labels for the colour channels, mapping them to bit formats.
> >
> > That is a very surprising opinion. Have you worked on HDR imagery?
> > Or wide color gamut?
> >  
> 
> As I have mentioned several times, these are display output parameters.
> The only details which are relevant are the yuv/

Re: Video standards

2024-04-05 Thread Pekka Paalanen
On Fri, 5 Apr 2024 08:28:27 -0300
salsaman  wrote:

> I don't think you are paying enough attention to the main points. It is not
> simply a case of extending the fourcc values to include more. If I didn't
> make it clear enough, the whole fourcc system is obscure, inadequate,
> ambiguous. The only reason ever to use it would be when you don't have
> metadata and you are forced to encode the format in the first 4 bytes.

Right. You must be talking about some other fourcc system. There are
many of them, and some combine multiple orthogonal things into a single
enumeration, which then becomes very difficult to extend and work with.

drm_fourcc.h is not one of those.

Metadata is always necessary anyway, either implied or explicit.

> Colorimetry is only relevant when displaying on a monitor. In the video
> world we just have red, green and blue (plus alpha, y, u and v). These are
> just labels for the colour channels, mapping them to bit formats.

That is a very surprising opinion. Have you worked on HDR imagery?
Or wide color gamut?

> The values I mentioned are all necessary if you want to convert from one
> colourspace to another. For example, if I decode a video frame and the pixel
> format is YUV420P, then to convert it to RGBA to display via openGL, I need
> to know the YUV subspace (bt709 or itu601) and whether the values are
> clamped or full range. Then I apply the standard conversion factors (Kr =
> 0.2126, Kb = 0.0722 for bt709). This cannot be derived from the fourcc
> (generally). No doubt there is a standard definition of the
> R,G,B primaries, but that isn't a concern. I just feed the values into an
> openGL texture buffer, an SDL buffer, a gdkpixbuf, QImage or whatever and
> ask for it to be displayed. Now in an application I may optionally offer
> the user filters to adjust the white balance, contrast, display gamma etc.,
> but that is outside of the scope of what I am proposing.
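For reference, a hedged C sketch of that conversion using the quoted BT.709 coefficients; the 8-bit limited/full range normalizations below follow the usual conventions, and real code should defer to the governing standard:

    #include <stdint.h>

    /* 8-bit Y'CbCr -> non-linear R'G'B' in [0,1], BT.709 matrix.
     * limited_range: Y' in [16,235], Cb/Cr in [16,240] (clamped/mpeg);
     * otherwise full range, all channels in [0,255] (jpeg). */
    static void ycbcr_to_rgb_bt709(uint8_t Yc, uint8_t Cbc, uint8_t Crc,
                                   int limited_range,
                                   double *r, double *g, double *b)
    {
        const double Kr = 0.2126, Kb = 0.0722, Kg = 1.0 - Kr - Kb;
        double y, pb, pr;

        if (limited_range) {
            y  = (Yc  -  16.0) / 219.0;
            pb = (Cbc - 128.0) / 224.0;
            pr = (Crc - 128.0) / 224.0;
        } else {
            y  = Yc / 255.0;
            pb = (Cbc - 128.0) / 255.0;
            pr = (Crc - 128.0) / 255.0;
        }

        *r = y + 2.0 * (1.0 - Kr) * pr;
        *b = y + 2.0 * (1.0 - Kb) * pb;
        *g = (y - Kr * *r - Kb * *b) / Kg;  /* may need clamping to [0,1] */
    }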

Yes, those are all important properties, and not enough.

> And no, it is not a case of "adding another standard" and confusing things,
> there is no standard.

There are standards. ITU-T H.273, coding-independent code points, for
example. That combines well with drm_fourcc.h. Also ICC combines well
with drm_fourcc.h. This works, because drm_fourcc.h does not attempt to
define anything outside of the memory layout and abstract channels.

> I just had a look at pipewire, there is nothing bad about it per se, they
> mention their palette values are based on gstreamer. So fine, we have yet
> another library specific set of definitions.
> 
> It's like I am trying to invent Esperanto, and all you can say is "oh,
> you don't like English, well have you considered speaking German instead?"

That does seem like an apt analogy.

> 
> Well that is it, I am done. I was asked how XDG video could be useful. I
> explained the shortcomings of what exists currently, and outlined various
> ways in which having a standard could be useful.

Sorry, but I haven't understood what gap there is that would need to be
filled with yet another pixel format enumeration. Or is it perhaps the
same gap that we are filling in Wayland?

We need to communicate a lot more about images than what pixel formats
ever could. We are building that definition based on existing standards
defining their own aspects: drm_fourcc.h, H.273 / ICC, and then adding
what's missing, like how, or whether, the alpha channel is baked into
RGB (the two ways to pre-multiply). Since these are all well-defined and
orthogonal, there is no problem combining them.

Wayland also already provides ways to handle some things, like pixel
aspect ratio, so we don't want to define another conflicting way to
define the same thing. That means the solution for Wayland is probably
not applicable somewhere else, and vice versa.


Thanks,
pq




Re: Video standards

2024-04-05 Thread Pekka Paalanen
On Thu, 4 Apr 2024 17:13:40 -0300
salsaman  wrote:

> Hi,
> the problem with the drm.h header is that it is complicated, still needs
> interpretation, and lacks some commonly used formats (e.g. YUVAp)

They accept additions, if the additions serve userspace
interoperability. There is no requirement to actually use the format in
the kernel.

Pixel formats are complicated, yes. There are too many pixel format
enumerations, every one differently defined, sure. I wouldn't add yet
another system of definitions.

> Also it doesn't address the gamma value (linear, sRGB, bt709), or the yuv
> subspace (e.g. Y'CbCr bt601 vs bt709), the yuv range (16 - 240 / 16 - 235 =
> clamped / mpeg; 0 - 255 = unclamped / full / jpeg range) or uv sampling
> position (e.g. center, top_left)

My opinion is that none of that is relevant to a pixel format.
These are additional pieces of information that must be decoupled from
the pixel format to avoid a combinatorial explosion of the format
enumeration, which is already massive even without them. A pixel format
only describes a part of the memory layout: which set of bits forms a
raw channel value of a pixel, and what the channel names are. Giving any
further meaning to those raw values is for other metadata.
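A hedged sketch of that separation in C; the struct and field names are invented for illustration, not taken from drm_fourcc.h or any other existing header:

    #include <stdint.h>

    /* Hypothetical side-band metadata, kept apart from the pixel format. */
    struct video_frame_info {
        uint32_t drm_format;      /* memory layout only (drm_fourcc.h) */
        uint64_t format_modifier; /* tiling/compression (drm_fourcc.h) */

        /* everything below gives meaning to the raw channel values */
        int matrix;         /* e.g. bt601 vs bt709 Y'CbCr matrix */
        int transfer;       /* e.g. linear, sRGB, bt709 */
        int range;          /* limited (16-235/240) vs full (0-255) */
        int chroma_siting;  /* e.g. center, top-left */
    };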

What about colorimetry? Primaries and white point, dynamic range, plus
the difference between encoding colorimetry (container color volume)
and the usable/used colorimetry (target color volume, which is present
in e.g. HDR static metadata typical for BT.2100/PQ signals in the form
of the Mastering Display Color Volume).

What about the assumed viewing environment, if we want to go from just
stimulus towards appearance?

> I can see that having some common definitions would be useful for
> exchanging data between applications. E.g. my app gets a frame buffer and
> metadata XDG_VIDEO_PALETTE_RGB24, XDG_VIDEO_GAMMA_LINEAR;
> then I know unambiguously that this is planar RGB 8:8:8 (so forget little /
> big endian) and that the values are encoded with linear (not sRGB) gamma.

> If you want to be more specific with palettes, then you could do so, but it
> might require defining metadata structs.

> I'll try to explain the rationale a bit. In the audio world it is quite
> common for apps to send audio from one to another. Generally speaking they
> would send or receive via an audio server, e.g pulseaudio, jack.
> Now imagine the same for video, 

This sounds like Pipewire. One would develop Pipewire API to carry the
necessary metadata. One could choose to follow something massive like
ITU-T H.274, or maybe follow what we are brewing for Wayland.

To my understanding, Pipewire is already becoming very common among
desktop environments for routing audio and video streams between
applications and system components and devices.


Thanks,
pq




Re: Video standards

2024-04-04 Thread Pekka Paalanen
On Wed, 3 Apr 2024 21:51:39 -0300
salsaman  wrote:

> Regarding my expertise, I was one of the developers most involved in
> developing the "livido" standard which was one of the main topics of the
> Piksel Festivals held in Bergen, Norway.
> In the early days (2004 - 2006) the focus of the annual event was precisely
> the formulation of free / open standards, in this case for video effects.
> Other contributors included:
>  Niels Elburg, Denis "Jaromil" Rojo, Tom Schouten, Andraz Tori, Kentaro
> Fukuchi and Carlo Prelz.
> I've also been involved with and put forward proposals for common command /
> query / reply actions (Open Media Control). To the extent that these
> proposals have not gained traction, I don't ascribe this to a failing in
> the proposals, but rather to a lack of developer awareness.
> 
> Now regarding specific areas, I went back and reviewed some of the
> available material at  https://www.freedesktop.org/wiki/Specifications/
> 
> free media player specifications
> https://www.freedesktop.org/wiki/Specifications/free-media-player-specs/
> metadata standards for things like comments and ratings - talks mainly
> about audio but describes video files also
> 
> I am not a big fan of dbus, but this looks fine; it could be used for video
> players. I'd be happier if it were a bit more abstracted and not tied to a
> specific implementation (dbus). I could suggest some enhancements, but I
> guess this is a dbus thing and not an xdg thing.

Thanks, these sound like they do not need to involve Wayland in any
way, so they are not on my plate.

> IMO what would be useful would be to define a common set of constants, most
> specifically related to frame pixel formats.
> The 2 most common in use are fourCC and avformat.

Wayland protocol extensions and I suppose also Wayland compositors
internally standardise on drm_fourcc.h formats. Their authoritative
definitions are in
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/uapi/drm/drm_fourcc.h
and they are not intentionally mirroring any other fourcc coding.

These are strictly pixel formats, and do not define anything about
colorimetry, interlacing, field order, frame rate, quantization range,
or anything else.

> Consider a frame in UYVY format
> 
> fourCC values:
> 
> #define MK_FOURCC(a, b, c, d) (((uint32_t)a) | (((uint32_t)b) << 8) | \
>                                (((uint32_t)c) << 16) | (((uint32_t)d) << 24))
> 
> MK_FOURCC('U', 'Y', 'V', 'Y')
> but also
> MK_FOURCC('I', 'U', 'Y', 'B')
> the same but with interlacing
> MK_FOURCC('H', 'D', 'Y', 'C')
> same but bt709 (hdtv) encoding
> 
> so this requires interpretation by sender / receiver - a simpler way could
> be with constants
> 
> - probably the nearest we have are the ffmpeg / libav definitions, but this
> is the wrong way around: a lib shouldn't define a global standard, the
> standard should come first and the lib should align to that.
> 
> We have AV_PIX_FMT_UYVY422, which was formerly PIX_FMT_UYVY422,
> and AVCOL_TRC_BT709, which is actually the gamma transfer function. There
> is no equivalent bt709 constant for bt709 yuv / rgb; instead this exists as
> a matrix.
> 
> Now consider how much easier it would be to share data if we had the
> following constants enumerated:
> 
> *XDG_VIDEO_PALETTE_UYVY*
> *XDG_VIDEO_INTERLACE_TOP_FIRST*
> *XDG_VIDEO_YUV_SUBSPACE_BT709*
> *XDG_VIDEO_GAMMA_SRGB*
> 
> (this is an invented example, not intended to be a real example).
> 
> There is a bit more to it, but that should be enough to give a general idea.
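Rendered as a hedged C header sketch, that invented example might look as follows (names and values illustrative only):

    /* xdg_video.h - hypothetical shared constants (invented example). */
    enum xdg_video_palette {
        XDG_VIDEO_PALETTE_UYVY = 1,
        /* ... */
    };

    enum xdg_video_interlace {
        XDG_VIDEO_INTERLACE_NONE = 0,
        XDG_VIDEO_INTERLACE_TOP_FIRST,
        XDG_VIDEO_INTERLACE_BOTTOM_FIRST,
    };

    enum xdg_video_yuv_subspace {
        XDG_VIDEO_YUV_SUBSPACE_BT601 = 1,
        XDG_VIDEO_YUV_SUBSPACE_BT709,
    };

    enum xdg_video_gamma {
        XDG_VIDEO_GAMMA_LINEAR = 1,
        XDG_VIDEO_GAMMA_SRGB,
        XDG_VIDEO_GAMMA_BT709,
    };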

Where should this be used?


Thanks,
pq




Re: Video standards

2024-04-03 Thread Pekka Paalanen
On Thu, 28 Mar 2024 19:19:33 -0300
salsaman  wrote:

> There are two hardware settings from the monitor that overlap video; these
> are:
> - monitor aspect ratio
> - monitor pixel aspect ratio
> These are both useful when rendering video. The first defines how much
> stretch or letterboxing to apply; the second defines non-square pixels,
> which is good to know if you want to render fixed-size objects (a circle
> for example). Knowing the monitor size in RGB or Y plane pixels can also be
> useful to define a max or min resize limit (whether it is min or max
> depends on the desired display quality level)

Thanks. I was trying to ask what kind of video standards you have
experience and expertise in?

I'm also interested in what kind of standards you see as missing. The
Wayland extension aims to cover everything display related. I'm sure
video file format specifications do their job.

What would be left to define?

What goals would there be?

I suppose individual APIs like Pipewire might be lacking something, but
that's a Pipewire API rather than an XDG standard. Or do we need an XDG
standard to be used as the design guide and reference for APIs?


Thanks,
pq

> On Thu, 28 Mar 2024 at 19:05, salsaman  wrote:
> 
> > colour management and hdr mostly intersect with three areas of video:
> > pixel formats, yuv <-> rgb conversions and gamma transfer functions.
> > For example
> > xdg_pixformat_yuv121010
> > xdg_subspace_bt2020
> > xdg_gamma_bt2020
> >
> > just off the top of my head, these aren't intended to be actual suggestions
> >
> >
> > On Thu, 28 Mar 2024 at 18:57, salsaman  wrote:
> >  
> >> In addition, I am not sure if there are xdg standards for audio, but I
> >> would suggest video and audio follow similar hierarchies, and that both could be
> >> classed under a more generic xdg multimedia standard.
> >>
> >>
> >> On Thu, 28 Mar 2024 at 18:48, salsaman  wrote:
> >>  
> >>> Hi, IMO hardware-related things would be more appropriate under display
> >>> standards.
> >>> Video standards could be more software-related, and provide common
> >>> definitions, for example, allowing exchange of information between
> >>> applications which produce or consume video frames or streams of frames.
> >>> Some examples I can think of might be
> >>>  xdg_colorspace_RGB,
> >>>  xdg_colorspace_YUV
> >>>
> >>> xdg_pixfmt_RGB24
> >>> xdg_pixfmt_YUV420p
> >>> etc
> >>>
> >>>  xdg_gamma_linear
> >>>  xdg_gamma_sRGB
> >>>
> >>> xdg_video_width
> >>> xdg_video_height
> >>>
> >>> I could provide a fuller list, but I think if it goes along this
> >>> route, the starting point has to be what we are setting out to achieve
> >>> with the standards / definitions, and provide a range of speculative
> >>> use cases.
> >>>
> >>> Gabriel (salsaman)
> >>>
> >>>
> >>> On Thu, 28 Mar 2024 at 06:07, Pekka Paalanen <  
> >>> pekka.paala...@haloniitty.fi> wrote:  
> >>>  
> >>>> On Wed, 27 Mar 2024 11:45:00 -0300
> >>>> salsaman  wrote:
> >>>>  
> >>>> > ISTR that the xdg video standards were never defined. If you need any
> >>>> > advice or assistance with this, I would be happy to act in an
> >>>> > advisory capacity if that is called for. I have over 20 years
> >>>> > experience of developing Free Software video and have been an active
> >>>> > participant in developing other video / effects standards. I have been
> >>>> > a bit out of the spotlight recently as I have been busy architecting
> >>>> > and implementing the core components of the upcoming next gen LiVES
> >>>> > 4.0 video application plus its accompanying state-of-the-art effects
> >>>> > standard)
> >>>>
> >>>> Hi,
> >>>>
> >>>> what kind of video standards would that be?
> >>>>
> >>>> I'm wondering if it would have anything to do with Wayland color
> >>>> management and HDR:
> >>>>
> >>>> https://gitlab.freedesktop.org/pq/color-and-hdr
> >>>>
> >>>> https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/183
> >>>>
> >>>> https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14
> >>>>
> >>>> Would there need to be any XDG standards to support color managed HDR
> >>>> desktops, or is the window system support enough?
> >>>>
> >>>> I don't have much in mind, but then I've been staring only at the
> >>>> window system interactions, and haven't seen what else the desktop
> >>>> ecosystem or applications might need.
> >>>>
> >>>> Recommended display calibration and measurement procedures maybe?
> >>>>
> >>>> Desktop viewing environment standards?
> >>>>
> >>>> Viewing environment measurement?
> >>>>
> >>>> They could be as straightforward as referring to freely available
> >>>> ITU-R or SMPTE papers or others, if there are suitable ones.
> >>>>
> >>>>
> >>>> Thanks,
> >>>> pq
> >>>>  
> >>>  





Video standards

2024-03-28 Thread Pekka Paalanen
On Wed, 27 Mar 2024 11:45:00 -0300
salsaman  wrote:

> ISTR that the xdg video standards were never defined. If you need any
> advice or assistance with this, I would be happy to act in an
> advisory capacity if that is called for. I have over 20 years experience of
> developing Free Software video and have been an active participant in
> developing other video / effects standards. I have been a bit out of the
> spotlight recently as I have been busy architecting and implementing the
> core components of the upcoming next gen LiVES 4.0 video application plus
> its accompanying state-of-the-art effects standard)

Hi,

what kind of video standards would that be?

I'm wondering if it would have anything to do with Wayland color
management and HDR:

https://gitlab.freedesktop.org/pq/color-and-hdr
https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/183
https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14

Would there need to be any XDG standards to support color managed HDR
desktops, or is the window system support enough?

I don't have much in mind, but then I've been staring only at the
window system interactions, and haven't seen what else the desktop
ecosystem or applications might need.

Recommended display calibration and measurement procedures maybe?

Desktop viewing environment standards?

Viewing environment measurement?

They could be as straightforward as referring to freely available
ITU-R or SMPTE papers or others, if there are suitable ones.


Thanks,
pq




Re: Specifying sensible device types to use an application on in the desktop file

2020-10-12 Thread Pekka Paalanen
On Fri, 9 Oct 2020 16:46:31 +0200
Marco Martin  wrote:

> On Fri, Oct 9, 2020 at 4:17 PM Matthias Klumpp  wrote:
> > So within AppStream, apps will highly likely in addition to defining
> > their user input methods also be able to set preferred screen sizes
> > with an upper and lower limit.  
> 
> pixels or millimeters? (would prefer the second)

Hi,

the thread referred to here might be interesting as well:
https://lists.freedesktop.org/archives/wayland-devel/2020-October/041637.html

"Re-thinking DPI and scaling (Re: Physical vs logical DPI on X)"

TL;DR: pixel resolution ignores the physical screen size but limits how
small details can be rendered; DPI ignores the viewing distance, meaning
that you don't actually know if something n millimeters high is legible
or not, even at infinite pixel resolution.

Then add systems with heterogeneous outputs. Also accessibility for
people with poor eye-sight (maybe somehow factor in user interface
scaling?).


Thanks,
pq




Re: Current state of DPI settings under X

2017-07-26 Thread Pekka Paalanen
On Tue, 25 Jul 2017 21:15:31 +0300
Vladimir Kudrya  wrote:

> Hi all!
> 
> I would like to know the current 'proper' way of setting DPI in X (if 
> there is any).
> I see conflicting information on the topic, and different applications 
> seem to have different sources for this setting. I currently count 3 of 
> them:
>- fontconfig
>- xrdb
>- randr
> 
> Different versions of gtk2 seem to either demand or ignore the dpi setting 
> in xrdb.
> And there is also this recent change that gives xdpyinfo the ability to 
> state different DPI for different outputs simultaneously: 
> https://lists.x.org/archives/xorg-devel/2017-April/053430.html
> 
> Is there any DE- and toolkit-agnostic approach for letting applications 
> know the proper DPI and/or notifying them that DPI has changed?

Hi,

rather than asking "how to set the DPI", why do you want to set the
DPI, what do you hope to accomplish?

Are you looking for making fonts physically the same size on different
monitors?

Are you trying to cope with HiDPI monitors?

Are you trying to make physical measurement units in applications
correspond to real physical dimensions on the screen?

Even these are fairly low-level questions and would need an explanation
of what you are really trying to make work. Some goals are false to
begin with, some are reasonable but technically hard, and some have
existing solutions depending on software.

In general, DPI is a mess, and very often the actual DPI number is
not even what one should be concerned with.


Thanks,
pq




Re: Bypass events and get the physical keyboard state directly

2017-02-21 Thread Pekka Paalanen
On Tue, 21 Feb 2017 12:19:27 +1300
Bryan Baldwin  wrote:

> Yup, the encryption was an automagickal client mistake.
> 
> And I understand all your points, and don't disagree with the
> security model. I just don't think that the people and the software
> that are assuming responsibility for system security should be. I'm
> not afraid to go and look, then tear it up and write my own thing if
> its a good use of time. I think its going to be a very good use of
> time these days ;)
> 
> For posterity, the input problem I had is a bug in SDL2. Pressing and
> holding down a key was not producing events with the repeat flag set.
> It produced pairs of keydown & pressed - keyup & released events
> without the repeat flag set. These events continued to be sent long
> after the key was physically released. It is an untenable expectation
> for the application to track key state with garbage input.
> 
> This is a known SDL2 bug affecting at least versions 2.0.4
> https://bugzilla.libsdl.org/show_bug.cgi?id=3472 and 2.0.5.
> 
> There is a patch I've tested locally on my own system, and it works.
> https://bugzilla-attachments.libsdl.org/attachment.cgi?id=2594

Hi Bryan,

I am glad you found the culprit and told us what it is. I would have
never guessed it to be an SDL bug caused by such subtle interactions.

Happy hacking!


Thanks,
pq




Re: Bypass events and get the physical keyboard state directly

2017-02-20 Thread Pekka Paalanen
(I think you just sent an encrypted email to a mailing list. I assume
this was an accident, since nothing indicates otherwise.)


On Mon, 20 Feb 2017 21:56:03 +1300
Bryan Baldwin <br...@katofiad.co.nz> wrote:

> I don't know why you mentioned Wayland. As a reference point? I'm not
> presently developing with a Wayland target.

Yes, as a reference point.

> I cannot track the state of the keyboard with the events I received,
> because, as described, they are garbage. Something between evdev and
> my code screws the input data. X11, GNOME, or SDL2. Out of that
> stack, X11 would have been my goto to look for a way to view input
> more directly, which is why I asked about your interfaces here, but
> even that was completely wrong.

There is probably a reason why something somewhere converts repeats
into up/down pairs, if that is the only problem you have. You could try
finding out which component is responsible for it, and ask on the
appropriate mailing list why it is so and how to work around it.

I would also hazard a guess that these up/down pairs come really close
to each other, so if you actually drained the event queue before
looking at the tracked keyboard state, maybe it would be what you need.
But that is just speculation from me.
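To make that speculation concrete, a minimal SDL2 sketch (an unverified idea, not a confirmed workaround for the bug being discussed):

    #include <SDL2/SDL.h>

    /* Drain the queue first, then read SDL's tracked keyboard state
     * instead of reacting to each individual up/down pair. */
    static int is_key_held(SDL_Scancode sc)
    {
        SDL_Event ev;

        while (SDL_PollEvent(&ev))
            ;  /* other event handling would go here */

        const Uint8 *state = SDL_GetKeyboardState(NULL);
        return state[sc] != 0;
    }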

FWIW, Wayland does not have repeat events itself. A toolkit may
manufacture those any way it wants based on keyboard state (the exact
state you seem to be wanting in the first place).

> Initial code tests I've made directly against evdev prove that the
> input seen is accurate and properly reported. Grabbing all the input
> devices is easy. Releasing and reacquiring all of them based on
> window focus is easy. Identifying what each device is and which to
> use is easy. I have no idea what you are talking about with regard to
> permissions and security. I'm not sure where in the original software
> stack the input code is being ruined, but to whomever that code
> belongs: if you cannot deliver accurate input from the kernel with
> 100% confidence, you cannot be trusted to decide permissions or
> security, either.

I'm not convinced, but I won't argue about the easiness.

The security/permissions problem is this: if you are allowed to open
the input devices, you can also trivially implement an invisible
keylogger, just "forget" to close the devices. Obviously people don't
like that idea, hence in a usual system the input device permissions
are restricted, probably as far as to the root user. Hence your game
needs to run as root, or ask the user to bypass the set permissions
e.g. by adding himself to the 'input' group (which now opens the door
for keyloggers). Display servers often use logind DBus API to open input
devices to avoid running as root, but I believe logind would refuse
your game if a display server was also active at the same time.


Thanks,
pq


> On 02/20/2017 09:20 PM, Pekka Paalanen wrote:
> > On Mon, 20 Feb 2017 12:37:36 +1300
> > Bryan Baldwin <br...@katofiad.co.nz> wrote:
> >  
> >> Okay, so further investigation has led me to test code against
> >> evdev directly. Nevermind ;)  
> > But if you run under any kind of display server (windowing system),
> > that won't usually work at all, or works wrong.
> >
> > That is also why any kind of "bypass the display server" will not
> > generally work. There is a myriad of reasons for that, including
> > permission and security issues, starting from how to even pick
> > the right devices.
> >
> > You really are expected to keep the keyboard state tracked in your
> > app, based on the events. Even evdev works like that. If you were
> > writing for Wayland, there would be libxkbcommon to do that, and I
> > believe SDL2 already uses libxkbcommon anyway (on Wayland).
> >
> > OTOH, if you were not running under any display server, then you'd
> > be fine with that approach.
> >
> > Your question would be better directed at SDL or the specific window
> > system fora (e.g. mailing lists).
> >
> >
> > Thanks,
> > pq  
> 
> 





Re: Bypass events and get the physical keyboard state directly

2017-02-20 Thread Pekka Paalanen
On Mon, 20 Feb 2017 12:37:36 +1300
Bryan Baldwin  wrote:

> Okay, so further investigation has led me to test code against evdev
> directly. Nevermind ;)

But if you run under any kind of display server (windowing system),
that won't usually work at all, or works wrong.

That is also why any kind of "bypass the display server" will not
generally work. There is a myriad of reasons for that, including
permission and security issues, starting from how to even pick the
right devices.

You really are expected to keep the keyboard state tracked in your app,
based on the events. Even evdev works like that. If you were writing
for Wayland, there would be libxkbcommon to do that, and I believe SDL2
already uses libxkbcommon anyway (on Wayland).

OTOH, if you were not running under any display server, then you'd be
fine with that approach.

Your question would be better directed at SDL or the specific window
system fora (e.g. mailing lists).


Thanks,
pq




Re: Pixels Per Inch needs to be standardized 

2016-05-06 Thread Pekka Paalanen
On Thu, 5 May 2016 15:33:48 +0200
Alberto Salvia Novella <es204904...@gmail.com> wrote:

> Pekka Paalanen:
>  > each wl_output (usually represents a single monitor) has an integer
>  > scale factor associated.  
> 
> Thank you, this seems what I was talking about.
> 
> Do you know if there is another way of figuring out the scale factor in 
> the xserver and mir display managers?

I do not, unfortunately.


Thanks,
pq




Re: Pixels Per Inch needs to be standardized 

2016-05-05 Thread Pekka Paalanen
On Thu, 5 May 2016 03:24:29 +0200
Alberto Salvia Novella <es204904...@gmail.com> wrote:

> Pekka Paalanen:
> > too bad the discussion does not explain why you need the ppi, or
> > what it would be used for.  
> 
> What I am asking for is a standard way to advertise the desktop scale 
> factor. And it does not necessarily need to be about pixel density; it 
> could just be a multiplier like x1.5 or x2.
> 
> 
> Pekka Paalanen:
>  > Or, how would a DE know which ppi it needs to advertise at a time?  
> 
> One scale factor per screen.

Oh scale factor! Yes, that is a completely different thing. Please
do talk about a scale factor instead of dpi or ppi. People will
respond much better, while dpi tends to raise hard prejudice (with
me too) due to its history of abuse and misconceptions.

So this is about HiDPI? That would be a good term to use too, as
HiDPI support also uses a scale factor, not dpi.

FWIW, Wayland offers it like this: each wl_output (usually
represents a single monitor) has an integer scale factor
associated. The client/app/toolkit gets told on which outputs a
window is shown on, and then the app can choose what size and
factor to draw in. The compositor automatically accounts for the
mismatch between the draw factor and output factor. (wl_outputs
also advertise resolution and physical size, if applicable, so it
is also possible to compute ppi.) This is built in the core of the
Wayland display protocol.
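As a small illustration, that ppi computation from a wl_output's advertised values could look like this in C (plain arithmetic only; the wl_output listener plumbing is omitted):

    /* Horizontal ppi from the values a wl_output advertises via its
     * geometry (physical size in mm) and mode (resolution) events.
     * Returns 0 when the output reports no meaningful physical size. */
    static double output_ppi(int width_px, int physical_width_mm)
    {
        if (physical_width_mm <= 0)
            return 0.0;
        return width_px / (physical_width_mm / 25.4);
    }

    /* e.g. output_ppi(1920, 310) is about 157 ppi */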


Thanks,
pq




Re: Proposal: clean way of animating windows to/from system tray icons

2015-09-09 Thread Pekka Paalanen
On Tue, 08 Sep 2015 13:51:48 -0400
Éric Tremblay  wrote:

> As for wayland, I don't know enough about it, but I would imagine it
> would be trivial to adapt this simple mechanism to wayland.

Hi,

well, there are a few fundamental issues:

1. Wayland/desktop does not have a global coordinate system visible to apps.

2. Wayland does not have global identifiers for any client resources.

3. Wayland does not have a generic client-to-client communication
   mechanism, that is, there are no generic properties or anything you
   could abuse as IPC.

The first issue is a feature designed on purpose which we've fought
hard to keep.

The second issue is also a deliberate design, though there are cases
where it will be useful to explicitly create a handle for a wl_surface
or such, pass it to another process, and let that new process use it as
a reference to e.g. tell the compositor who spawned it. This protocol
extension is still to be designed.

The third issue is on purpose too. Client-to-client communications that
do not *absolutely require* the display server specifically to enforce
anything should not go through the display server. Besides, due to issue
#2, there is no way a client could reference another client or its
resources at will to begin with. (A client actually means a connection,
that is, a wl_display instance; similar to Xlib's Display created by
XOpenDisplay.)

There is also no such concept as "grab the server". You have to design
protocol extensions to be race-free from the start.

Regardless of whether your idea is a good one or not, in Wayland you
would somehow let the compositor know of the relationship between the
objects you want to associate, and then the compositor will just do the
right thing. Passing coordinates will not work on Wayland/desktop,
because clients cannot know where things are.


Thanks,
pq

PS. display server = compositor = window manager, all in one.

