[PATCH] Fixes a typo in doc/publican/sources/Protocol.xml

2013-05-21 Thread Peng Wu
doc/publican/sources/Protocol.xml uses 'trasnfer'; fix it to 'transfer'.

Peng Wu (1):
  fixes trivial typo

 doc/publican/sources/Protocol.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
1.8.1.4

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread Pekka Paalanen
On Mon, 20 May 2013 17:58:30 -0700
Bill Spitzak spit...@gmail.com wrote:

 Pekka Paalanen wrote:
 
  This seems pretty limiting to me. What happens when *all* the outputs 
  are hi-res? You really think wayland clients should not be able to take 
  full advantage of this?
  
  Then the individual pixels are so small that it won't matter.
 
 It does not matter how tiny the pixels are. The step between possible 
 surface sizes and subsurface positions remains the size of a scale-1 
 unit. Or else I am seriously misunderstanding the proposal:
 
 Let's say the output is 10,000dpi and the compositor has set its scale 
 to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1 
 on the pixels of this output? It looks to me like only multiples of 100 
 are possible.

As far as I understand, that is correct.

But it does not matter. You cannot employ any widgets or widget parts
that would need a finer resolution than 100 px steps, because a) the
user cannot clearly see them, and b) the user cannot clearly poke them
with e.g. a pointer, since they are so small. So there is no need to
have window size in finer resolution either. Even a resize handle in a
window border would have to be at least 300 pixels thick to be usable.

The scale factor only allows specifying the image in finer resolution,
so it looks better, not jagged-edged for instance. There is no point in
having anything else in finer resolution, since everything else is
related to input.

To be precise, in that scenario a client should never even attempt to
make a buffer 10,050 px wide.

  If nothing else it makes it so that subsurfaces are
  always positioned on integer positions on non-scaled displays, which
  makes things easier when monitors of different scales are mixed.
  This is false if the subsurface is attached to a scaled parent surface.
  
  Huh?
 
 Parent surface uses the scaler api to change a buffer width of 100 to 
 150. The fullscreen and this hi-dpi interface can also produce similar 
 scales. The subsurface has a width of 51. Either the left or right edge 
 is going to land in the middle of an output pixel.

How can you say that? Where did you get the specification of how scaler
interacts with buffer_scale? We didn't write any yet.

And what is this talk about parent surfaces?

  The input rectangle to the scaler proposal is in the space between the 
  buffer transform and the scaling. Therefore there are *three* coordinate 
  spaces.
  
  Where did you get this? Where is this defined or proposed?
 
 The input rectangle is in the same direction as the output rectangle 
 even if the buffer is rotated 90 degrees by the buffer_transform.

Yeah. So how does that define anything about scaler and buffer_scale
interaction?

The only thing that could imply, is that buffer_scale and
buffer_transform are applied simultaneously (they are orthogonal
operations), so I can't understand how you arrive at your conclusion.

The scaler transformation was designed to change old surface
coordinates into new surface coordinates, anyway, except not in those
words, since it does not make sense in the spec.

  On a quick thought, that seems only a different way of doing it,
  without any benefits, and possibly having cons.
 
 Benefits: the buffer can be any integer number of pixels in size, 
 non-integer buffer sizes cannot be specified by the api, you can align 
 subsurfaces with pixels in the buffer (which means a precomposite of 
 subsurfaces into the main one before scaling is possible).

Any size for buffer, okay.

How could you ever arrive at non-integer buffer sizes in the earlier
proposal?

Aligning sub-surfaces is still possible if anyone cares about that, one
just has to take the scale into account. That's a drawing problem. If
you had a scale 1 output and buffers, you could not align to fractional
pixels, anyway.

Why would pre-compositing not be possible in some cases?

  Actually, it means that the surface coordinate system can change
  dramatically when a client sends a new buffer with a different scale,
  which then raises a bucketful of races: is an incoming event using new
  or old surface coordinates? That includes at least all input events
  with a surface position,
 
 This is a good point and the only counter argument that makes sense.
 
 All solutions I can think of are equivalent to reporting events in the 
 output space, the same as your proposal. However I still feel that the 
 surface size, input area, and other communication from client to server 
 should be specified in input space.

Urgh, so you specify input region in one coordinate system, and then
get events in a different coordinate system? Utter madness.

Let's keep everything in the surface coordinates (including client
toolkit widget layout, AFAIU), except client rendering which needs to
happen in buffer coordinates, obviously. That is logical, consistent,
and easy to understand. That forces the clients to deal with two
coordinate systems at most, and 

[PATCH] fixes trivial typo

2013-05-21 Thread Peng Wu
---
 doc/publican/sources/Protocol.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/publican/sources/Protocol.xml b/doc/publican/sources/Protocol.xml
index f576542..1a7a7da 100644
--- a/doc/publican/sources/Protocol.xml
+++ b/doc/publican/sources/Protocol.xml
@@ -453,7 +453,7 @@
	<para>
	  When the drag ends, the receiving client receives a
	  <function>wl_data_device.drop</function> event at which it is expect
-	  to trasnfer the data using the
+	  to transfer the data using the
	  <function>wl_data_offer.receive</function> request.
	</para>
    </section>
-- 
1.8.1.4



weston-launch logging and security (Re: [PATCH weston] weston-launch: Print explanation of why we failed to open the device)

2013-05-21 Thread Pekka Paalanen
On Mon, 20 May 2013 16:56:47 -0400
Kristian Høgsberg hoegsb...@gmail.com wrote:

 On Mon, May 20, 2013 at 04:55:10PM +0100, Rob Bradford wrote:
  From: Rob Bradford r...@linux.intel.com
 
 That's better, though I wonder if we should instead let weston log the
 error message using weston_log()... committed this for now.

That was my first thought, too. Just send errno from weston-launch,
and/or the message string?

Would be nice to have all that in weston's log.

What about syslog in addition? weston-launch is a suid-root program,
so it might be useful to log to the system log for security purposes, no?
PAM is writing its own stuff to the system log on behalf of
weston-launch already.

Btw. if we have a mechanism for weston to load custom plugins, then
those plugins could ask weston-launch to open any file with root
permissions, right? Do we have any restrictions in opening or plugin
loading yet?


Thanks,
pq


Re: [PATCH weston 4/8] shell: Use relative layers for lock/unlock

2013-05-21 Thread Quentin Glidic

This patch should be replaced with a more generic mechanism.
Proposal and patches will come soon as a new series.

--

Quentin “Sardem FF7” Glidic


Re: [PATCH weston] udev-seat: Fail seat setup only if the seat is incomplete

2013-05-21 Thread Rob Bradford
Hi Kristian,

I think I should split the patch in two. Firstly, not aborting the
compositor initialisation if we can't open a device or if there is a
problem with the device, and secondly, adding some kind of input
presence tests.

In terms of presence tests, what condition are we trying to mitigate?
I personally remember being caught out in the past where I
ended up starting weston with no input devices and thus couldn't kill
it / VT switch. From a developer experience perspective that's not
very nice.

But from a production perspective weston might be deployed in setups
that don't feature a keyboard and instead feature a touchscreen. I'm
thinking we could have a command line option and weston.ini entry
along the lines of:

--required-input=[touch,pointer,keyboard]

For which I think a sensible default is keyboard. And if you are in a
touchscreen-only environment you will need to configure your weston to
permit it to start up (giving some advice in the log on how to do
this). How this interacts in a multiple-seat environment is an open
problem :-)

In terms of the device-list check warning, I agree: I think we should
drop that if we go with a required-device check like the above.

Rob

On 20 May 2013 22:25, Kristian Høgsberg hoegsb...@gmail.com wrote:
 On Mon, May 20, 2013 at 05:55:03PM +0100, Rob Bradford wrote:
 From: Rob Bradford r...@linux.intel.com

 Rather than failing seat setup when we fail to open an input device,
 fail the seat setup only if we don't have a complete seat: both
 keyboard and pointer, or a touchscreen.

 https://bugs.freedesktop.org/show_bug.cgi?id=64506
 ---
  src/udev-seat.c | 10 --
  1 file changed, 8 insertions(+), 2 deletions(-)

 diff --git a/src/udev-seat.c b/src/udev-seat.c
 index 7e62429..3dd3438 100644
 --- a/src/udev-seat.c
 +++ b/src/udev-seat.c
 @@ -58,7 +58,7 @@ device_added(struct udev_device *udev_device, struct udev_seat *master)
 	fd = weston_launcher_open(c, devnode, O_RDWR | O_NONBLOCK);
 	if (fd < 0) {
 		weston_log("opening input device '%s' failed.\n", devnode);
 -		return -1;
 +		return 0;
 	}
 
 	device = evdev_device_create(&master->base, devnode, fd);
 @@ -69,7 +69,7 @@ device_added(struct udev_device *udev_device, struct udev_seat *master)
 	} else if (device == NULL) {
 		close(fd);
 		weston_log("failed to create input device '%s'.\n", devnode);
 -		return -1;
 +		return 0;
 	}
 
 	calibration_values =
 @@ -142,6 +142,12 @@ udev_seat_add_devices(struct udev_seat *seat, struct udev *udev)
 		   "udev device property ID_SEAT)\n");
 	}
 
 +	if (!(seat->base.touch || (seat->base.keyboard && seat->base.pointer))) {
 +		weston_log("seat not complete: no touchscreen or "
 +			   "no keyboard and pointer found.\n");
 +		return -1;
 +	}
 +

 I wonder if the previous check isn't good enough - I think requiring a
 keyboard and a mouse is a little restrictive, there are many cases
 where we only have a keyboard or only a mouse.  And if we do want this
 more specific check, at least drop the check for an empty
 devices_list.

   return 0;
  }

 --
 1.8.1.4



Re: [RFC] libinputmapper: Input device configuration for graphic-servers

2013-05-21 Thread David Herrmann
Hi Peter

On Tue, May 21, 2013 at 6:37 AM, Peter Hutterer
peter.hutte...@who-t.net wrote:
 On Thu, May 16, 2013 at 03:16:11PM +0200, David Herrmann wrote:
 Hi Peter

 On Thu, May 16, 2013 at 7:37 AM, Peter Hutterer
 peter.hutte...@who-t.net wrote:
  On Sun, May 12, 2013 at 04:20:59PM +0200, David Herrmann wrote:
 [..]
  So what is the proposed solution?
  My recommendation is that compositors still search for devices via
  udev and use device drivers like libxkbcommon. So linux evdev handling
  is still controlled by the compositor. However, I'd like to see
  something like my libinputmapper proposal being used for device
  detection and classification.
 
  libinputmapper provides an inmap_evdev object which reads device
  information from an evdev-fd or sysfs /sys/class/input/input<num>
  path, performs some heuristics to classify it and searches its global
  database for known fixups for broken devices.
  It then provides capabilities to the caller, which allow them to see
  what drivers to load on the device. And it provides a very simple
  mapping table that allows applying fixup mappings for broken devices.
  These mappings are simple 1-to-1 mappings that are supposed to be
  applied before drivers handle the input. This is to avoid
  device-specific fixup in the drivers and move all this to the
  inputmapper. An example would be a remapping for gamepads that report
  BTN_A instead of BTN_NORTH, but we cannot fix them in the kernel for
  backwards-compatibility reasons. The gamepad-driver can then assume
  that if it receives BTN_NORTH, it is guaranteed to be BTN_NORTH and
  doesn't need to special case xbox360/etc. controllers, because they're
  broken.
 
  I think evdev is exactly that interface and apparently it doesn't work.
 
  if you want a mapping table, you need a per-client table because sooner or
  later you have a client that needs BTN_FOO when the kernel gives you 
  BTN_BAR
  and you can't change the client to fix it.
 
  i.e. the same issue evdev has now, having a global remapping table just
  moves the problem down by 2 years.
 
  a mapping table is good, but you probably want two stages of mapping: one
  that's used in the compositor for truly broken devices that for some reason
  can't be fixed in the kernel, and one that's used on a per-client basis.
  And you'll likely want to be able to override the client-specific mapping
  from outside the client too.

 IMHO, the problem with evdev is that it doesn't provide device
 classes. The only class we have is "this is an input device". All
 other event-flags can be combined in whatever way we want.

 So like 10 years ago when the first gamepad driver was introduced, we
 added some mapping that was unique to this device (the device was
 probably unique, too). Some time later, we added some other
 gamepad-like driver with a different mapping (as it was probably a
 very different device-type back then, and we didn't see it coming
 that this would become a wide-spread device-type).
 However, today we notice that a GamePad is an established type of
 device (like a touchpad), but we have tons of different mappings in
 the kernel for backwards-compatibility reasons. I can see that this
 kind of development can happen again (and very likely it _will_ happen
 again) and it will happen for all kinds of devices.

 But that's why I designed the proposal from a compositor's view
 instead of from a kernel's view.

 A touchpad driver of the compositor needs to know exactly what kind of
 events it gets from the kernel. If it gets wrong events, it will
 misbehave. As we cannot guarantee that all kernel drivers behave the
 same way, the compositor's touchpad driver needs to work around all
 these little details on a per-device basis.
 To avoid this, I tried to abstract the touchpad-protocol and moved
 per-device handling into a separate library. It detects all devices
 that can serve as a touchpad and fixes trivial (1-to-1 mapping)
 incompatibilities. This removes all per-device handling from the
 touchpad driver and it can expect all input it gets to conform to
 a touchpad protocol.
 And in fact, it removes this from all the compositor's input drivers.
 So I think of it more like a lib-detect-and-make-compat.

 All devices that do not fall into one of the categories (I called them
 capabilities) will be handled as custom devices. So if we want an input
 driver for a new fancy device, then we need a custom driver, anyway
 (or adjust a generic driver to handle both). If at some point it turns
 out, that this kind of device becomes more established, we can add a
 new capability for it. Or we try extending an existing capability in a
 backwards-compatible way. We can then remove the custom-device
 handling from the input-driver and instead extend/write a generic
 driver for the new capability.


 So I cannot follow how you think this will have the same problems as
 evdev? Or, let's ask the inverse question: How does this differ from
 the X11 model where we move 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread Bill Spitzak

On 05/20/2013 11:46 PM, Pekka Paalanen wrote:


Let's say the output is 10,000dpi and the compositor has set its scale
to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1
on the pixels of this output? It looks to me like only multiples of 100
are possible.


As far as I understand, that is correct.

But it does not matter. You cannot employ any widgets or widget parts
that would need a finer resolution than 100 px steps, because a) the
user cannot clearly see them, and b) the user cannot clearly poke them
with e.g. a pointer, since they are so small. So there is no need to
have window size in finer resolution either. Even a resize handle in a
window border would have to be at least 300 pixels thick to be usable.


This proposal does not actually restrict widget positions or line sizes, 
since they are drawn by the client at buffer resolution. Although 
annoying, the outside buffer size is not that limiting. The client can 
just place a few transparent pixels along the edge to make it look like 
it is any size.


However it does restrict the positions of widgets that use subsurfaces.

I see this as a serious problem and I'm not sure why you don't think it 
is. It is an arbitrary artificial limit in the api that has nothing to 
do with any hardware limits.


The reason you want to position widgets at finer positions is so they 
can be positioned evenly, and so they can be moved smoothly, and so they 
can be perfectly aligned with hi-resolution graphics.



How can you say that? Where did you get the specification of how scaler
interacts with buffer_scale? We didn't write any yet.


It is pretty obvious that if the parent has a scale and the child has 
one, these scales are multiplied to get the transform from the child to 
the parent's parent.


It is true that the resulting scale when the hi-dpi and scaler are applied 
to the *SAME* surface is not yet written.



And what is this talk about parent surfaces?


The subsurfaces have a parent. For main surfaces the parent is the 
compositor coordinate space.



The input rectangle is in the same direction as the output rectangle
even if the buffer is rotated 90 degrees by the buffer_transform.


Yes exactly. Thus it is a different space than the buffer pixels, as 
there may be a 90 degree rotation / reflections, and translation to put 
the origin in different corners of the buffer.



How could you ever arrive at non-integer buffer sizes in the earlier
proposal?


If the scale is 3/2 then specifying the surface size as 33 means the 
buffer is 49.5 pixels wide. I guess this is a protocol error? Still 
seems really strange to design the api so this is possible at all.



Aligning sub-surfaces is still possible if anyone cares about that, one
just has to take the scale into account. That's a drawing problem. If
you had a scale 1 output and buffers, you could not align to fractional
pixels, anyway.


If there is a scale of 2 you cannot align to the odd pixels. And a 
scale of 3/2 means you *can* align to fractional pixels.



Why would pre-compositing not be possible in some cases?


Because it would require rendering a fractional-pixel aligned version of 
the subsurface and compositing that with the parent. This may make 
unwanted graphics leak through the anti-aliased edge. The most obvious 
example is if there are two subsurfaces and you try to make their edges 
touch.


However both proposals have this problem if pre-compositing is not done, 
and most practical shells I can figure out can't do pre-compositing 
because that requires another buffer for every parent, so maybe this is 
not a big deal.



Urgh, so you specify input region in one coordinate system, and then
get events in a different coordinate system? Utter madness.

Let's keep everything in the surface coordinates (including client
toolkit widget layout, AFAIU), except client rendering which needs to
happen in buffer coordinates, obviously.


Sounds like you have no problem with two coordinate spaces. I don't see 
any reason the size of windows and the positions of graphics should not 
be done in the same coordinates drawings are done in.



The x,y do not
describe how the surface moves, they describe how pixel rows and
columns are added or removed on the edges.


No, it is in the surface coordinate system, like written in the patch.


Then I would not describe it as pixel rows and columns added or removed 
on the edges. If the scaler is set to 70/50 then a delta of -1,0 is 
adding 1.4 pixels to the left edge of the buffer. I agree that having it 
in the parent coordinates works otherwise.




[RFC weston] compositor: Use ordered layers

2013-05-21 Thread Quentin Glidic
From: Quentin Glidic sardemff7+...@sardemff7.net

It allows a more generic layer management that several modules can use
at the same time without breaking each others’ layers.

Signed-off-by: Quentin Glidic sardemff7+...@sardemff7.net
---

This change is incomplete but the desktop shell works fine with it.

The idea is to allow other modules to use layers without breaking the shell.
Using relative layers like the old patch did is not flexible enough, since 
modules could want to put layers *between* the shell’s ones.

Comments?

 src/compositor.c| 39 +++---
 src/compositor.h| 18 --
 src/shell.c | 54 ++---
 src/tablet-shell.c  | 13 +
 tests/weston-test.c |  3 ++-
 5 files changed, 85 insertions(+), 42 deletions(-)

diff --git a/src/compositor.c b/src/compositor.c
index 9fefb77..46bd370 100644
--- a/src/compositor.c
+++ b/src/compositor.c
@@ -1318,11 +1318,33 @@ idle_repaint(void *data)
 }
 
 WL_EXPORT void
-weston_layer_init(struct weston_layer *layer, struct wl_list *below)
+weston_layer_init(struct weston_layer *layer, int32_t order)
 {
 	wl_list_init(&layer->surface_list);
-	if (below != NULL)
-		wl_list_insert(below, &layer->link);
+	layer->order = order;
+}
+
+WL_EXPORT void
+weston_layer_order(struct weston_layer *layer, struct weston_compositor *compositor, bool use_layer)
+{
+	if (!use_layer) {
+		wl_list_remove(&layer->link);
+		return;
+	}
+
+	if (wl_list_empty(&compositor->layer_list)) {
+		wl_list_insert(&compositor->layer_list, &layer->link);
+		return;
+	}
+
+	struct weston_layer *l;
+	wl_list_for_each_reverse(l, &compositor->layer_list, link) {
+		if (layer->order >= l->order) {
+			wl_list_insert(&l->link, &layer->link);
+			return;
+		}
+	}
+	wl_list_insert(&compositor->layer_list, &layer->link);
 }
 
 WL_EXPORT void
@@ -2779,8 +2801,11 @@ weston_compositor_init(struct weston_compositor *ec,
 
 	ec->input_loop = wl_event_loop_create();
 
-	weston_layer_init(&ec->fade_layer, &ec->layer_list);
-	weston_layer_init(&ec->cursor_layer, &ec->fade_layer.link);
+	weston_layer_init(&ec->fade_layer, WESTON_LAYER_ORDER_BASE);
+	weston_layer_init(&ec->cursor_layer, WESTON_LAYER_ORDER_BASE);
+
+	weston_layer_order(&ec->fade_layer, ec, true);
+	weston_layer_order(&ec->cursor_layer, ec, true);
 
 	weston_compositor_schedule_repaint(ec);
 
@@ -2878,7 +2903,7 @@ print_backtrace(void)
 		filename = dlinfo.dli_fname;
 	else
 		filename = "?";
-	
+
 	weston_log("%u: %s (%s%s+0x%x) [%p]\n", i++, filename, procname,
 		   ret == -UNW_ENOMEM ? "..." : "", (int)off, (void *)(pip.start_ip + off));
 
@@ -3151,7 +3176,7 @@ int main(int argc, char *argv[])
 	}
 
 	weston_log_file_open(log);
-	
+
 	weston_log("%s\n"
 		   STAMP_SPACE "%s\n"
 		   STAMP_SPACE "Bug reports to: %s\n"
diff --git a/src/compositor.h b/src/compositor.h
index 318fc0d..afdd9d8 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -28,6 +28,7 @@
 extern "C" {
 #endif
 
+#include <stdbool.h>
 #include <pixman.h>
 #include <xkbcommon/xkbcommon.h>
 #include <wayland-server.h>
@@ -183,7 +184,7 @@ struct weston_output {
 	char *make, *model, *serial_number;
 	uint32_t subpixel;
 	uint32_t transform;
-	
+
 	struct weston_mode *current;
 	struct weston_mode *origin;
 	struct wl_list mode_list;
@@ -456,8 +457,10 @@ enum {
 };
 
 struct weston_layer {
+	struct weston_compositor *compositor;
 	struct wl_list surface_list;
 	struct wl_list link;
+	int32_t order;
 };
 
 struct weston_plane {
@@ -835,8 +838,19 @@ void
 notify_touch(struct weston_seat *seat, uint32_t time, int touch_id,
 	     wl_fixed_t x, wl_fixed_t y, int touch_type);
 
+enum weston_layer_order {
+	WESTON_LAYER_ORDER_BASE = 0,
+	WESTON_LAYER_ORDER_LOCK = INT8_MAX >> 4,
+	WESTON_LAYER_ORDER_FULLSCREEN = INT8_MAX,
+	WESTON_LAYER_ORDER_UI = INT16_MAX,
+	WESTON_LAYER_ORDER_NORMAL = INT32_MAX >> 8, /* INT24 */
+	WESTON_LAYER_ORDER_BACKGROUND = INT32_MAX
+};
+
+void
+weston_layer_init(struct weston_layer *layer, int32_t order);
 void
-weston_layer_init(struct weston_layer *layer, struct wl_list *below);
+weston_layer_order(struct weston_layer *layer, struct weston_compositor *compositor, bool use_layer);
 
 void
 weston_plane_init(struct weston_plane *plane, int32_t x, int32_t y);
diff --git a/src/shell.c b/src/shell.c
index f5d5bff..c2ac8e6 100644
--- a/src/shell.c
+++ b/src/shell.c
@@ -573,7 +573,7 @@ workspace_create(void)
 	if (ws == NULL)
 		return NULL;
 
-	weston_layer_init(&ws->layer, NULL);
+	

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread John Kåre Alsaker
On Tue, May 21, 2013 at 5:35 PM, Bill Spitzak spit...@gmail.com wrote:
 However both proposals have this problem if pre-compositing is not done,
and most practical shells I can figure out can't do pre-compositing because
that requires another buffer for every parent, so maybe this is not a big
deal.
Pre-compositing, or compositing of individual windows into buffers, will
be required for transparent subsurfaces which overlap another subsurface
if the compositor wants to change the opacity of the window (a common
effect).

On Mon, May 20, 2013 at 11:23 AM, Pekka Paalanen ppaala...@gmail.comwrote:

 Actually, it means that the surface coordinate system can change
 dramatically when a client sends a new buffer with a different scale,
 which then raises a bucketful of races: is an incoming event using new
 or old surface coordinates? That includes at least all input events
 with a surface position, and the shell geometry event.

This is not a new race. Resizing and surface content changing have the same
problem. Changing the scaling factor would be a relatively rare event too.
I believe I was told that the frame callback was usable as a separator of
events for frames. That could allow clients which are changing scaling
factors to translate old input correctly or simply ignore it.


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread Pekka Paalanen
On Tue, 21 May 2013 08:35:53 -0700
Bill Spitzak spit...@gmail.com wrote:

 On 05/20/2013 11:46 PM, Pekka Paalanen wrote:
 
  Let's say the output is 10,000dpi and the compositor has set its scale
  to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1
  on the pixels of this output? It looks to me like only multiples of 100
  are possible.
 
  As far as I understand, that is correct.
 
  But it does not matter. You cannot employ any widgets or widget parts
  that would need a finer resolution than 100 px steps, because a) the
  user cannot clearly see them, and b) the user cannot clearly poke them
  with e.g. a pointer, since they are so small. So there is no need to
  have window size in finer resolution either. Even a resize handle in a
  window border would have to be at least 300 pixels thick to be usable.
 
 This proposal does not actually restrict widget positions or line sizes, 
 since they are drawn by the client at buffer resolution. Although 

No, but I expect the toolkits may.

 annoying, the outside buffer size is not that limiting. The client can 
 just place a few transparent pixels along the edge to make it look like 
 it is any size.
 
 However it does restrict the positions of widgets that use subsurfaces.
 
 I see this as a serious problem and I'm not sure why you don't think it 
 is. It is an arbitrary artificial limit in the api that has nothing to 
 do with any hardware limits.

It is a design decision with the least negative impact, and it is
not serious. Sub-surfaces will not be that common, and they
certainly will not be used for common widgets like buttons.

 The reason you want to position widgets at finer positions is so they 
 can be positioned evenly, and so they can be moved smoothly, and so they 
 can be perfectly aligned with hi-resolution graphics.

But why? You have a real, compelling use case? Otherwise it just
complicates things.

Remember, sub-surfaces are not supposed to be just any widgets.
They are video and openGL canvases, and such.

  How can you say that? Where did you get the specification of how scaler
  interacts with buffer_scale? We didn't write any yet.
 
 It is pretty obvious that if the parent has a scale and the child has 
 one, these scales are multiplied to get the transform from the child to 
 the parent's parent.

A what? No way, buffer_scale is private to a surface, and does not
affect any other surface, not even sub-surfaces. It is not
inherited, that would be insane.

The same goes with the scaler proposal, it is private to a surface,
and not inherited. They affect the contents, not the surface.

 It is true that the resulting scale if the hi-dpi and scaler are applied 
 to the *SAME* surface is not yet written.
 
  And what is this talk about parent surfaces?
 
 The subsurfaces have a parent. For main surfaces the parent is the 
 compositor coordinate space.

There is no compositor coordinate space in the protocol. There
are only surface coordinates, and now to a small extent we are
getting buffer coordinates.

Still, this parent reference made no sense in the context you used it.

  The input rectangle is in the same direction as the output rectangle
  even if the buffer is rotated 90 degrees by the buffer_transform.
 
 Yes exactly. Thus it is a different space than the buffer pixels, as 
 there may be a 90 degree rotation / reflections, and translation to put 
 the origin in different corners of the buffer.

Glad to see you agree with yourself.

  How could you ever arrive at non-integer buffer sizes in the earlier
  proposal?
 
 If the scale is 3/2 then specifying the surface size as 33 means the 
 buffer is 49.5 pixels wide. I guess this is a protocol error? Still 
 seems really strange to design the api so this is possible at all.

We have one scale factor which is integer. How can you come up with 3/2?

Even if you took the scaler extension into play, that will only
produce integers, no matter at which point of coordinate
transformations it is applied at.

  Aligning sub-surfaces is still possible if anyone cares about that, one
  just has to take the scale into account. That's a drawing problem. If
  you had a scale 1 output and buffers, you could not align to fractional
  pixels, anyway.
 
 If there is a scale of 2 you cannot align to the odd pixels. And  a 
 scale of 3/2 means you *can* align to fractional pixels.
 
  Why would pre-compositing not be possible in some cases?
 
 Because it would require rendering a fractional-pixel aligned version of 
 the subsurface and compositing that with the parent. This may make 
 unwanted graphics leak through the anti-aliased edge. The most obvious 
 example is if there are two subsurfaces and you try to make their edges 
 touch.

Umm, but since sub-surface positions and sizes are always integers
in the surface coordinate system, the edges will always align
perfectly, regardless of the individual buffer_scales.

> However both proposals have this problem if pre-compositing is not

[PATCH] Last updates for the RDP compositor

2013-05-21 Thread Hardening
This patch fixes the compilation of the RDP compositor with the head of the
FreeRDP project. It also brings the following improvements/fixes:
* the fake seat has been dropped, as a compositor can now be safely started
without any seat
* fixed a wrong initialisation of the NSC encoder context
* the first screen update is now sent on postConnect, not on Synchronize packets,
as not all clients send them (this is the case for the latest version of FreeRDP).
In the specs, Synchronize packets are described as old artifacts that SHOULD be
ignored.
* send frame markers when using raw surfaces
* reworked raw surface sending so that the sub-tile extraction and the image flip
are done in one step (instead of computing the sub-tile and then flipping it)
* we now send all the sub-rectangles instead of sending the full bounding box
* the negotiated size for the fragmentation buffer is honored when sending raw
surface PDUs
* always send using the preferred codec without caring about the size
---
 src/compositor-rdp.c | 184 ++-
 1 file changed, 108 insertions(+), 76 deletions(-)

diff --git a/src/compositor-rdp.c b/src/compositor-rdp.c
index 0dae963..7eec273 100644
--- a/src/compositor-rdp.c
+++ b/src/compositor-rdp.c
@@ -59,7 +59,6 @@ struct rdp_output;
 
 struct rdp_compositor {
struct weston_compositor base;
-   struct weston_seat main_seat;
 
freerdp_listener *listener;
struct wl_event_source *listener_events[MAX_FREERDP_FDS];
@@ -133,8 +132,8 @@ rdp_peer_refresh_rfx(pixman_region32_t *damage, pixman_image_t *image, freerdp_p
SURFACE_BITS_COMMAND *cmd = &update->surface_bits_command;
RdpPeerContext *context = (RdpPeerContext *)peer->context;
 
-   stream_clear(context->encode_stream);
-   stream_set_pos(context->encode_stream, 0);
+   Stream_Clear(context->encode_stream);
+   Stream_SetPosition(context->encode_stream, 0);
 
width = (damage->extents.x2 - damage->extents.x1);
height = (damage->extents.y2 - damage->extents.y1);
@@ -169,8 +168,8 @@ rdp_peer_refresh_rfx(pixman_region32_t *damage, pixman_image_t *image, freerdp_p
pixman_image_get_stride(image)
);
 
-   cmd->bitmapDataLength = stream_get_length(context->encode_stream);
-   cmd->bitmapData = stream_get_head(context->encode_stream);
+   cmd->bitmapDataLength = Stream_GetPosition(context->encode_stream);
+   cmd->bitmapData = Stream_Buffer(context->encode_stream);
 
update->SurfaceBits(update->context, cmd);
 }
@@ -185,8 +184,8 @@ rdp_peer_refresh_nsc(pixman_region32_t *damage, pixman_image_t *image, freerdp_p
SURFACE_BITS_COMMAND *cmd = &update->surface_bits_command;
RdpPeerContext *context = (RdpPeerContext *)peer->context;
 
-   stream_clear(context->encode_stream);
-   stream_set_pos(context->encode_stream, 0);
+   Stream_Clear(context->encode_stream);
+   Stream_SetPosition(context->encode_stream, 0);
 
width = (damage->extents.x2 - damage->extents.x1);
height = (damage->extents.y2 - damage->extents.y1);
@@ -206,42 +205,79 @@ rdp_peer_refresh_nsc(pixman_region32_t *damage, pixman_image_t *image, freerdp_p
nsc_compose_message(context->nsc_context, context->encode_stream, (BYTE *)ptr,
cmd->width, cmd->height,
pixman_image_get_stride(image));
-   cmd->bitmapDataLength = stream_get_length(context->encode_stream);
-   cmd->bitmapData = stream_get_head(context->encode_stream);
+   cmd->bitmapDataLength = Stream_GetPosition(context->encode_stream);
+   cmd->bitmapData = Stream_Buffer(context->encode_stream);
update->SurfaceBits(update->context, cmd);
 }
 
 static void
+pixman_image_flipped_subrect(const pixman_box32_t *rect, pixman_image_t *img, BYTE *dest) {
+   int stride = pixman_image_get_stride(img);
+   int h;
+   int toCopy = (rect->x2 - rect->x1) * 4;
+   int height = (rect->y2 - rect->y1);
+   const BYTE *src = (const BYTE *)pixman_image_get_data(img);
+   src += ((rect->y2-1) * stride) + (rect->x1 * 4);
+
+   for(h = 0; h < height; h++, src -= stride, dest += toCopy)
+   memcpy(dest, src, toCopy);
+}
+
+static void
rdp_peer_refresh_raw(pixman_region32_t *region, pixman_image_t *image, freerdp_peer *peer)
 {
-   pixman_image_t *tile;
rdpUpdate *update = peer->update;
SURFACE_BITS_COMMAND *cmd = &update->surface_bits_command;
-   pixman_box32_t *extends = pixman_region32_extents(region);
+   SURFACE_FRAME_MARKER *marker = &update->surface_frame_marker;
+   pixman_box32_t *rect, subrect;
+   int nrects, i;
+   int heightIncrement, remainingHeight, top;
+
+   rect = pixman_region32_rectangles(region, &nrects);
+   if(!nrects)
+   return;
+
+   marker->frameId++;
+   marker->frameAction = SURFACECMD_FRAMEACTION_BEGIN;
+   update->SurfaceFrameMarker(peer->context, marker);
 
cmd->bpp = 32;
cmd->codecID 

Documentation? - a non-x Linux base for wayland/weston to build on?

2013-05-21 Thread scsijon
I thought I would build a minimal Linux 3.8.x system .iso as a
NON-X base for wayland/weston to reside on. However, I can't find any
form of a minimal package list anywhere on the net for what is
required and what can be left out. I am considering something like
Landley's Aboriginal Linux, but without running as an emulator, as the start.


Does anyone have a list 'hiding' anywhere please, or alternately know of
a URL where I can find such a list.


Please be aware, I'm not looking for a Linux xorg distribution to build
it on. I'm wanting to build it totally non-X and with minimal packages
from near scratch.


Also, if I get enough feedback from people to put a doc together,
I'm quite happy to do so, so that others can work from a known base
rather than 'reinvent the wheel' again and again, as often seems to
happen. You can use wayl...@lamiaworks.com.au to send any suitable
docs, rather than filling up this mailing list. Be aware that it is set
with a high 'auto-delete junkscore', bounces junk back to the sender,
and only allows text-format messages, and ASCII-text and PDF formatted
attachments.


And yes, when the iso is built, it will be made available.

Of course, this could be a waste of time because someone is already
doing it... If so, a reply to that effect would be helpful.


thanks
scsijon
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Documentation? - a non-x Linux base for wayland/weston to build on?

2013-05-21 Thread darxus
Somebody documented getting wayland to work without X on Arch here:
https://wiki.archlinux.org/index.php/Wayland#Pure_Wayland

I'd love to see more / better documentation on the subject.

On 05/22, scsijon wrote:
 I thought I would build a minimum linux 3.8.x system.iso to build a
 NON-X base for wayland/weston to reside on. However I can't find any
 form of a minimal packagelist anywhere around on the net for what is
 required and what can be left out. I am considering something like
 Landley's Aboriginal Linux but without running as an emulator as the
 start.
 
 Does anyone have a list 'hiding' anywhere please, or alternately
 know of  a url where I can find such a list.
 
 Please be aware, I'm not looking for a linux xorg distribution to
 build it on. I'm wanting to build it totally non-x and with minimal
 packages from near scratch.
 
 Also, if I get enough feedback info from people to put a doc
 together, I'm quite happy to do so, so that any others can work from
 a known base rather than 'reinvent the wheel' again and again as
 often seems to happen. You can use wayl...@lamiaworks.com.au to
 send any suitable docs, rather than filling up this mailsystem. Be
 aware that it is set with a high 'auto-delete junkscore', bounces
 junk back to the sender, and only allows text format messages, and
 ascii-text and pdf formatted attachments.

Sounds pretty obnoxious.

 And yes, when the iso is built, it will be made available.
 
 Of course, this could be a waste of time because someone is already
 doing it... If so, a message reply to that fact, would be
 helpfull.
 
 thanks
 scsijon
 ___
 wayland-devel mailing list
 wayland-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/wayland-devel
 

-- 
Eh, wisdom's overrated. I prefer beatings and snacks.
- Unity, Skin Horse
http://www.ChaosReigns.com
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [RFC] libinputmapper: Input device configuration for graphic-servers

2013-05-21 Thread Rick Yorgason

On 2013-05-20 23:56, Peter Hutterer wrote:

what I am wondering is whether that difference matters to the outside
observer (i.e. the compositor). a gamepad and a joystick are both gaming
devices, and with the exception of the odd need to control the pointer it
doesn't matter much which type they are.

as for a game that would access the device - does it matter if the device is
labelled gamepad or joystick? if it's a gaming device you have to look at
the physical properties anyway and decide which you use in what manner.


That would seem to unnecessarily complicate things. Let's say you want 
to do something simple, like display a list of the devices plugged into 
the computer, with a graphic showing the device type.


If you don't have separate gamepad and joystick types, you would need to 
use some heuristic to decide whether to show the gamepad or joystick 
graphic. Really, the device driver should be able to tell us what kind 
of device it considers itself to be.


Granted, we can't do that right now, because the kernel doesn't expose 
device types, which is something that should probably be resolved 
regardless of what happens with libinputmapper.


In the meantime, we can put the heuristic in libinputmapper, and if the 
kernel ever grows the ability to expose device types, we can remove the 
heuristic altogether (or maybe demote it to a backup strategy).


-Rick-
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel