On 02/15/2017 11:33 PM, Konrad Rzeszutek Wilk wrote:
..snip..
I will define 2 sections:
    *------------------ Connector Request Transport Parameters -------------------
    *
    * ctrl-event-channel
    * ctrl-ring-ref
    *
    *------------------- Connector Event Transport Parameters --------------------
    *
    * event-channel
    * event-ring-ref

Or is the other ring buffer the one that is created via 'gref_directory'?
no
At the bottom:
    * In order to deliver asynchronous events from back to front a shared page is
    * allocated by front and its gref propagated to back via XenStore entries
    * (event-XXX).
And you may want to say this is guarded by the REQ_ALLOC feature, right?
Not sure I understood you. The event path is totally independent
of any feature, e.g. REQ_ALLOC.
It just provides a means to send async events
from back to front, "page flip done" in my case.
<scratches his head> Why do you need a separate ring to send
responses back? Why not use the same ring on which requests
were sent?
Ok, it seems we are not on the same page for rings/channels usage.
Let me describe how those are used:

1. Command/control event channel and its corresponding ring are used
to pass requests from front to back (XENDISPL_OP_XXX) and to get responses
from the back. These are synchronous and use the macros from ring.h:
ctrl-event-channel + ctrl-ring-ref (a sketch follows below).
I call them "ctrl-" because this way front controls back, or sends commands
if you will. Maybe "cmd-" would fit better here?
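
For illustration, a minimal frontend-side sketch of this synchronous path,
assuming the ring types generated by DEFINE_RING_TYPES(xen_displif, ...)
from ring.h; kick_backend() is a hypothetical helper which notifies the
event channel advertised in ctrl-event-channel:

    /* Queue one request on the control ring and notify the backend
     * if it expects a kick. */
    static void ctrl_send_request(struct xen_displif_front_ring *ring,
                                  const struct xendispl_req *req)
    {
        int notify;

        *RING_GET_REQUEST(ring, ring->req_prod_pvt) = *req;
        ring->req_prod_pvt++;
        RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
        if (notify)
            kick_backend();
    }

The response is then picked up via RING_HAS_UNCONSUMED_RESPONSES() and
RING_GET_RESPONSE() when the backend signals the same channel back.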

2. Event channel - an asynchronous path for the backend to signal activity
to the frontend, currently used for the "page flip done" event, which is sent
at some point in time after back has actually completed the requested page
flip (so, before that the corresponding request was sent and its response
received, but the operation hadn't completed yet, it was merely scheduled).
No macros exist for this use-case in ring.h (kbdif and fbif implement
this on their own, and so do I).
These are: event-channel + event-ring-ref (see the consumer sketch below).
Probably this is where the confusion comes from: naming.
We can have something like "be-to-fe-event-channel" or anything else
more cute and descriptive.
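
For illustration, a minimal frontend-side consumer sketch, assuming the
event page layout from the attached patch (an in_cons/in_prod header in
struct xendispl_event_page, kbdif/fbif style), a XENDISPL_EVT_PG_FLIP
event type and a hypothetical handle_pg_flip() handler; barrier names
follow Xen's conventions:

    static void evt_ring_consume(struct xendispl_event_page *page)
    {
        uint32_t cons = page->in_cons;
        uint32_t prod = page->in_prod;

        xen_rmb(); /* see events the backend has already produced */
        while (cons != prod) {
            struct xendispl_evt *evt = &XENDISPL_IN_RING_REF(page, cons);

            if (evt->type == XENDISPL_EVT_PG_FLIP)
                handle_pg_flip(evt); /* hypothetical "page flip done" handler */
            cons++;
        }
        xen_mb(); /* publish the updated consumer index */
        page->in_cons = cons;
    }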

Hope this explains the need for the 2 paths.
Aha!

So this is like the network where there is an 'rx' and 'tx'!
kind of

Now I get it.
sorry, I was probably not clear

In that case why not just prefix it with 'in' and 'out'? Such as:

'out-ring-ref' and 'out-event-channel' and 'in-ring-ref' along
with 'in-event-channel'.
hmmm, it may confuse, because you must know from which POV "out"
is meant, e.g. the frontend's or the backend's.
What is more, these "out-" and "in-" are... nameless?
Can we still have something like "ctrl-"/"cmd-"/"req-"
for the req/resp path and probably "evt-" for
events from back to front?
Or perhaps better - borrow the same idea that Stefano came up with for
9pfs and PV calls - where his ring does both.

Then you just need 'ring-ref', 'event-channel', 'max-page-ring-order'
(which must be 1 or larger).

And you split the ring-ref in two - one for 'in' events and the other
part for 'out' events?
yes, I saw the current implementations (kbdif, fbif) and
what Stefano did, but I would rather stick to what is currently
defined (I believe it is optimal as is).
And hope that maybe someone will put new functionality into ring.h
to serve async events one day :)
..giant snip..

Thus, I was thinking of the XenbusStateReconfiguring state as appropriate
in this case.
Right, but somebody has to move to this state. Who would do it?
when backend dies its state changes to "closed".
At this moment front tries to remove the virtualized device
and if that is possible/done, then it goes into the "initialized"
state. If not - "reconfiguring".
So, you would ask: how does the front go from "reconfiguring"
into the "initialized" state? This is OS/front specific, but:
1. the underlying framework, e.g. DRM/KMS, ALSA, provide
a callback(s) to signal that the last client to the
virtualized device has gone and the driver can be removed
(equivalent to module's usage counter 0)
2. one can schedule a delayed work (timer/tasklet/workqueue)
to periodically check if this is the right time to re-try
the removal and remove

In both cases, after the virtualized device has been removed we move
into the "initialized" state again and are ready for new connections
with the backend (if it arose from the dead :). A sketch of option 2
follows below.
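
For illustration, a Linux-flavored sketch of option 2, where
device_can_be_removed(), remove_vdevice() and vdispl_dev are hypothetical
stand-ins for the OS/framework specific parts:

    #include <linux/workqueue.h>
    #include <xen/xenbus.h>

    static struct delayed_work removal_work;
    static struct xenbus_device *vdispl_dev; /* saved at probe time */

    static void removal_work_handler(struct work_struct *work)
    {
        if (!device_can_be_removed()) {
            /* still in use: re-try a bit later */
            schedule_delayed_work(&removal_work, msecs_to_jiffies(500));
            return;
        }
        remove_vdevice();
        /* signal readiness for a new connection with the backend */
        xenbus_switch_state(vdispl_dev, XenbusStateInitialising);
    }

The work is set up once with INIT_DELAYED_WORK(&removal_work,
removal_work_handler) and first scheduled when the frontend enters the
"reconfiguring" state.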
Would the
frontend have some form of timer to make sure that the backend is still
alive? And if it had died then move to Reconfiguring?
There are at least 2 ways to understand if back is dead:
1. XenBus state change (back is closed)
.. If the backend does a nice shutdown..
hm, on Linux I can kill -9 the backend and the XenBus driver seems
to be able to turn back's state into "closed".
Isn't this the expected behavior?
That is the expected behavior. I was thinking more of a backend
being a guest - and the guest completely going away and nobody
clearing its XenStore keys.

In which case your second option of doing a timeout will work.
But you may need a 'PING' type request to figure this out?
no ping, just the usual calls, one of which will fail
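
To make the flow below concrete, here is a sketch of how a Linux frontend
could react to the backend's state change (the .otherend_changed callback
of a xenbus_driver); try_remove_vdevice() is a hypothetical helper:

    static void backend_changed(struct xenbus_device *dev,
                                enum xenbus_state backend_state)
    {
        switch (backend_state) {
        case XenbusStateClosed:
            /* back is dead: try to remove the virtualized device */
            if (try_remove_vdevice())
                xenbus_switch_state(dev, XenbusStateInitialising);
            else
                xenbus_switch_state(dev, XenbusStateReconfiguring);
            break;
        default:
            break;
        }
    }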
ok, so this is what I have for the recovery flow now:

*------------------------------- Recovery flow -------------------------------
 *
 * In case of frontend unrecoverable errors backend handles that as
 * if frontend goes into the XenbusStateClosed state.
 *
 * In case of backend unrecoverable errors frontend tries removing
 * the virtualized device. If this is possible at the moment of error,
 * then frontend goes into the XenbusStateInitialising state and is ready for
 * new connection with backend. If the virtualized device is still in use and
 * cannot be removed, then frontend goes into the XenbusStateReconfiguring state
 * until either the virtualized device is removed or backend initiates a new
 * connection. On the virtualized device removal frontend goes into the
 * XenbusStateInitialising state.
 *
 * Note on XenbusStateReconfiguring state of the frontend: if backend has
 * unrecoverable errors then frontend cannot send requests to the backend
 * and thus cannot provide functionality of the virtualized device anymore.
 * After backend is back to normal the virtualized device may still hold some
 * state: configuration in use, allocated buffers, client application state etc.
 * So, in most cases, this will require frontend to implement complex recovery
 * reconnect logic. Instead, by going into XenbusStateReconfiguring state,
 * frontend will make sure no new clients of the virtualized device are
 * accepted, allow existing client(s) to exit gracefully by signaling error
 * state etc.
 * Once all the clients are gone frontend can reinitialize the virtualized
 * device and get into XenbusStateInitialising state again signaling the
 * backend that a new connection can be made.
 *
 * There are multiple conditions possible under which frontend will go from
 * XenbusStateReconfiguring into XenbusStateInitialising, some of them are OS
 * specific. For example:
 * 1. The underlying OS framework may provide callbacks to signal that the last
 *    client of the virtualized device has gone and the device can be removed
 * 2. Frontend can schedule a deferred work (timer/tasklet/workqueue)
 *    to periodically check if this is the right time to re-try removal of
 *    the virtualized device.
 * 3. By any other means.

I would also like to re-use this as is for sndif; for that reason I do not use
"virtualized display" here, but keep it nameless, e.g. "virtualized device".

I am attaching the diff between v3 and v4 for your convenience

Thank you,
Oleksandr
diff --git a/xen/include/public/io/displif.h b/xen/include/public/io/displif.h
index 849f27fe5f1d..21bafc9f9a19 100644
--- a/xen/include/public/io/displif.h
+++ b/xen/include/public/io/displif.h
@@ -48,6 +48,13 @@
  * Note: existing fbif can be used together with displif running at the
  * same time, e.g. on Linux one provides framebuffer and another DRM/KMS
  *
+ * Note: display resolution (XenStore's "resolution" property) defines
+ * visible area of the virtual display. At the same time resolution of
+ * the display and frame buffers may differ: buffers can be smaller, equal
+ * or bigger than the visible area. This is to enable use-cases, where backend
+ * may do some post-processing of the display and frame buffers supplied,
+ * e.g. those buffers can be just a part of the final composition.
+ *
  ******************************************************************************
  *                        Direction of improvements
  ******************************************************************************
@@ -110,7 +117,7 @@
  *
  * /local/domain/1/device/vdispl/0/0/resolution = "1920x1080"
  * /local/domain/1/device/vdispl/0/0/ctrl-ring-ref = "2832"
- * /local/domain/1/device/vdispl/0/0/ctrl-channel = "15"
+ * /local/domain/1/device/vdispl/0/0/ctrl-event-channel = "15"
  * /local/domain/1/device/vdispl/0/0/event-ring-ref = "387"
  * /local/domain/1/device/vdispl/0/0/event-channel = "16"
  *
@@ -118,7 +125,7 @@
  *
  * /local/domain/1/device/vdispl/0/1/resolution = "800x600"
  * /local/domain/1/device/vdispl/0/1/ctrl-ring-ref = "2833"
- * /local/domain/1/device/vdispl/0/1/ctrl-channel = "17"
+ * /local/domain/1/device/vdispl/0/1/ctrl-event-channel = "17"
  * /local/domain/1/device/vdispl/0/1/event-ring-ref = "388"
  * /local/domain/1/device/vdispl/0/1/event-channel = "18"
  *
@@ -179,11 +186,16 @@
  *      Values:         <width, uint32_t>x<height, uint32_t>
  *
  *      Width and height of the connector in pixels separated by
- *      XENDISPL_RESOLUTION_SEPARATOR.
+ *      XENDISPL_RESOLUTION_SEPARATOR. This defines visible area of the
+ *      display.
  *
  *------------------ Connector Request Transport Parameters -------------------
  *
- * ctrl-channel
+ * This communication path is used to deliver requests from frontend to backend
+ * and get the corresponding responses from backend to frontend,
+ * set up per connector.
+ *
+ * ctrl-event-channel
  *      Values:         <uint32_t>
  *
  *      The identifier of the Xen connector's control event channel
@@ -195,6 +207,11 @@
  *      The Xen grant reference granting permission for the backend to map
  *      a sole page in a single page sized connector's control ring buffer.
  *
+ *------------------- Connector Event Transport Parameters --------------------
+ *
+ * This communication path is used to deliver asynchronous events from backend
+ * to frontend, set up per connector.
+ *
  * event-channel
  *      Values:         <uint32_t>
  *
@@ -274,24 +291,50 @@
  * if frontend goes into the XenbusStateClosed state.
  *
  * In case of backend unrecoverable errors frontend tries removing
- * the emulated device. If this is possible at the moment of error,
+ * the virtualized device. If this is possible at the moment of error,
  * then frontend goes into the XenbusStateInitialising state and is ready for
- * new connection with backend. If the emulated device is still in use and
+ * new connection with backend. If the virtualized device is still in use and
  * cannot be removed, then frontend goes into the XenbusStateReconfiguring state
- * until either the emulated device removed or backend initiates a new
- * connection. On the emulated device removal frontend goes into the
+ * until either the virtualized device is removed or backend initiates a new
+ * connection. On the virtualized device removal frontend goes into the
  * XenbusStateInitialising state.
  *
+ * Note on XenbusStateReconfiguring state of the frontend: if backend has
+ * unrecoverable errors then frontend cannot send requests to the backend
+ * and thus cannot provide functionality of the virtualized device anymore.
+ * After backend is back to normal the virtualized device may still hold some
+ * state: configuration in use, allocated buffers, client application state etc.
+ * So, in most cases, this will require frontend to implement complex recovery
+ * reconnect logic. Instead, by going into XenbusStateReconfiguring state,
+ * frontend will make sure no new clients of the virtualized device are
+ * accepted, allow existing client(s) to exit gracefully by signaling error
+ * state etc.
+ * Once all the clients are gone frontend can reinitialize the virtualized
+ * device and get into XenbusStateInitialising state again signaling the
+ * backend that a new connection can be made.
+ *
+ * There are multiple conditions possible under which frontend will go from
+ * XenbusStateReconfiguring into XenbusStateInitialising, some of them are OS
+ * specific. For example:
+ * 1. The underlying OS framework may provide callbacks to signal that the last
+ *    client of the virtualized device has gone and the device can be removed
+ * 2. Frontend can schedule a deferred work (timer/tasklet/workqueue)
+ *    to periodically check if this is the right time to re-try removal of
+ *    the virtualized device.
+ * 3. By any other means.
+ *
  ******************************************************************************
  *                             REQUEST CODES
  ******************************************************************************
+ * Request codes [0; 15] are reserved and must not be used
  */
-#define XENDISPL_OP_DBUF_CREATE       0
-#define XENDISPL_OP_DBUF_DESTROY      1
-#define XENDISPL_OP_FB_ATTACH         2
-#define XENDISPL_OP_FB_DETACH         3
-#define XENDISPL_OP_SET_CONFIG        4
-#define XENDISPL_OP_PG_FLIP           5
+
+#define XENDISPL_OP_DBUF_CREATE       0x10
+#define XENDISPL_OP_DBUF_DESTROY      0x11
+#define XENDISPL_OP_FB_ATTACH         0x12
+#define XENDISPL_OP_FB_DETACH         0x13
+#define XENDISPL_OP_SET_CONFIG        0x14
+#define XENDISPL_OP_PG_FLIP           0x15
 
 /*
  ******************************************************************************
@@ -314,7 +357,7 @@
 #define XENDISPL_FIELD_FE_VERSION     "version"
 #define XENDISPL_FIELD_FEATURES       "features"
 #define XENDISPL_FIELD_CTRL_RING_REF  "ctrl-ring-ref"
-#define XENDISPL_FIELD_CTRL_CHANNEL   "ctrl-channel"
+#define XENDISPL_FIELD_CTRL_CHANNEL   "ctrl-event-channel"
 #define XENDISPL_FIELD_EVT_RING_REF   "event-ring-ref"
 #define XENDISPL_FIELD_EVT_CHANNEL    "event-channel"
 #define XENDISPL_FIELD_RESOLUTION     "resolution"
@@ -349,17 +392,17 @@
  * Display buffers's cookie of value 0 treated as invalid.
  * Framebuffer's cookie of value 0 treated as invalid.
  *
- *---------------------------------- Requests ---------------------------------
- *
- * All requests/responses, which are not connector specific, must be sent over
- * control ring of the connector with index 0.
- *
  * For all request/response/event packets that use cookies:
  *   dbuf_cookie - uint64_t, unique to guest domain value used by the backend
  *     to map remote display buffer to its local one
  *   fb_cookie - uint64_t, unique to guest domain value used by the backend
  *     to map remote framebuffer to its local one
  *
+ *---------------------------------- Requests ---------------------------------
+ *
+ * All requests/responses, which are not connector specific, must be sent over
+ * control ring of the connector with index 0.
+ *
  * All request packets have the same length (64 octets)
  * All request packets have common header:
  *         0                1                 2               3        octet
@@ -402,6 +445,14 @@
  * +----------------+----------------+----------------+----------------+
  *
  * Must be sent over control ring of the connector with index 0.
+ * All unused bits in flags field must be set to 0.
+ *
+ * An attempt to create multiple display buffers with the same dbuf_cookie is
+ * an error. dbuf_cookie can be re-used after destroying the corresponding
+ * display buffer.
+ *
+ * Width and height can be smaller, equal or bigger than the connector's
+ * resolution.
  *
  * width - uint32_t, width in pixels
  * height - uint32_t, height in pixels
@@ -412,20 +463,22 @@
  *     to allocate the buffer with the parameters provided in this request.
  *     Page directory is handled as follows:
  *       Frontend on request:
- *         o allocates pages for the directory
+ *         o allocates pages for the directory (gref_directory,
+ *           gref_dir_next_page(s))
  *         o grants permissions for the pages of the directory
  *         o sets gref_dir_next_page fields
  *       Backend on response:
  *         o grants permissions for the pages of the buffer allocated
  *         o fills in page directory with grant references
+ *           (gref[] in struct xendispl_page_directory)
  * gref_directory - grant_ref_t, a reference to the first shared page
  *   describing shared buffer references. At least one page exists. If shared
- *   buffer size  (buffer_sz) exceeds what can be addressed by this single page,
+ *   buffer size (buffer_sz) exceeds what can be addressed by this single page,
  *   then reference to the next page must be supplied (see gref_dir_next_page
  *   below)
  */
 
-#define XENDISPL_DBUF_FLG_REQ_ALLOC       0x0001
+#define XENDISPL_DBUF_FLG_REQ_ALLOC       (1 << 0)
 
 struct xendispl_dbuf_create_req {
     uint64_t dbuf_cookie;
@@ -529,6 +582,12 @@ struct xendispl_dbuf_destroy_req {
  * +----------------+----------------+----------------+----------------+
  *
  * Must be sent over control ring of the connector with index 0.
+ * Width and height can be smaller, equal or bigger than the connector's
+ * resolution.
+ *
+ * An attempt to create multiple frame buffers with the same fb_cookie is
+ * an error. fb_cookie can be re-used after destroying the corresponding
+ * frame buffer.
  *
  * width - uint32_t, width in pixels
  * height - uint32_t, height in pixels
@@ -602,9 +661,10 @@ struct xendispl_fb_detach_req {
  *
  * Pass all zeros to reset, otherwise command is treated as
  * configuration set.
- * If this is a set configuration request then framebuffer's cookie tells
- * the display which framebuffer/dbuf must be displayed while enabling display
- * (applying configuration).
+ * Framebuffer's cookie defines which framebuffer/dbuf must be
+ * displayed while enabling display (applying configuration).
+ * x, y, width and height are bound by the connector's resolution and must not
+ * exceed it.
  *
  * x - uint32_t, starting position in pixels by X axis
  * y - uint32_t, starting position in pixels by Y axis
@@ -749,8 +809,8 @@ DEFINE_RING_TYPES(xen_displif, struct xendispl_req, struct xendispl_resp);
  *                        Back to front events delivery
  ******************************************************************************
  * In order to deliver asynchronous events from back to front a shared page is
- * allocated by front and its grefs propagated to back via XenStore entries
- * (event-XXX).
+ * allocated by front and its granted reference propagated to back via
+ * XenStore entries (event-XXX).
  * This page has a common header used by both front and back to synchronize
  * access and control event's ring buffer, while back being a producer of the
  * events and front being a consumer. The rest of the page after the header
@@ -776,3 +836,13 @@ struct xendispl_event_page {
 	(XENDISPL_IN_RING((page))[(idx) % XENDISPL_IN_RING_LEN])
 
 #endif /* __XEN_PUBLIC_IO_DISPLIF_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */