On 10/01/2019 15:46, Paul Durrant wrote:
>> -----Original Message-----
>> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
>> Sent: 10 January 2019 15:31
>> To: Paul Durrant <paul.durr...@citrix.com>; xen-devel@lists.xenproject.org
>> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
>> <wei.l...@citrix.com>; Razvan Cojocaru <rcojoc...@bitdefender.com>; Konrad
>> Rzeszutek Wilk <konrad.w...@oracle.com>; George Dunlap
>> <george.dun...@citrix.com>; Andrew Cooper <andrew.coop...@citrix.com>; Ian
>> Jackson <ian.jack...@citrix.com>; Tim (Xen.org) <t...@xen.org>; Julien
>> Grall <julien.gr...@arm.com>; Tamas K Lengyel <ta...@tklengyel.com>; Jan
>> Beulich <jbeul...@suse.com>; Roger Pau Monne <roger....@citrix.com>
>> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
>> for sync requests.
>>
>> On Thu, 2018-12-20 at 12:05 +0000, Paul Durrant wrote:
>>>> -----Original Message-----
>>>>
>>>> The memory for the asynchronous ring and the synchronous channels
>>>> will
>>>> be allocated from domheap and mapped to the controlling domain
>>>> using the
>>>> foreignmemory_map_resource interface. Unlike the current
>>>> implementation,
>>>> the allocated pages are not part of the target DomU, so they will
>>>> not be
>>>> reclaimed when the vm_event domain is disabled.
>>> Why re-invent the wheel here? The ioreq infrastructure already does
>>> pretty much everything you need AFAICT.
>>>
>>>   Paul
>>>
>> Hi Paul,
>>
>> I'm still struggling to understand how the vm_event subsystem could be
>> integrated with an IOREQ server.
>>
>> An IOREQ server shares two pages with the emulator: one for ioreqs and
>> one for buffered ioreqs. For vm_event we also need to share one or more
>> pages for the async ring and a few pages for the slotted synchronous
>> vm_events.
>> So, as I understand it, your idea to use the ioreq infrastructure for
>> vm_events is basically to replace the custom signalling (event channels
>> + ring / custom states) with ioreqs. Since the
>> vm_event_request/response structures are larger than 8 bytes, the
>> "data_is_ptr" flag would be used in conjunction with the addresses
>> (indexes) into the shared vm_event buffers.
>>
>> Is this the mechanism you had in mind?
>>
> Yes, that's roughly what I hoped might be possible. If that is too cumbersome, 
> though, then it should at least be feasible to mimic the ioreq code's page 
> allocation functions and code up the vm_event buffers as another type of mappable 
> resource.
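
For concreteness, the scheme Petre describes above might look roughly like
this on the consumer side. This is purely a hypothetical sketch:
IOREQ_TYPE_VM_EVENT, the per-vCPU slot layout, vm_event_slots and
process_vm_event() are all made up; only ioreq_t, STATE_IORESP_READY and
vm_event_request_t are the existing public definitions.

/* Hypothetical sketch: vm_event delivery re-using the ioreq page. */
#define __XEN_TOOLS__
#include <xen/hvm/ioreq.h>      /* ioreq_t, STATE_IORESP_READY */
#include <xen/vm_event.h>       /* vm_event_request_t */

#define IOREQ_TYPE_VM_EVENT 0x20               /* made-up type value */

static vm_event_request_t *vm_event_slots;     /* shared buffer, one slot per vCPU */

/* Consumer-specific handler; hypothetical. */
extern void process_vm_event(vm_event_request_t *req);

static void handle_vm_event_ioreq(ioreq_t *ioreq)
{
    if ( ioreq->type != IOREQ_TYPE_VM_EVENT )
        return;                                /* MMIO/PIO/CFG handled elsewhere */

    /* The request doesn't fit in ioreq->data, so data_is_ptr is set and
     * ioreq->data carries the slot index into the shared buffer. */
    if ( ioreq->data_is_ptr )
        process_vm_event(&vm_event_slots[ioreq->data]);

    ioreq->state = STATE_IORESP_READY;         /* signal completion back to Xen */
}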

So, I've finally realised what has been subtly nagging at me for a while
about the suggestion to use ioreqs.  vm_event and ioreq have completely
different operations and semantics as far as the code in Xen is concerned.

The semantics for ioreq servers are "given a specific MMIO/PIO/CFG
action, which one of $N emulators should handle it".

vm_event on the other hand behaves just like the VT-x/SVM vmexit
intercepts.  It is "tell me when the guest does $X".  There isn't a
sensible case for having multiple vm_event consumers for a domain.
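
(For comparison, today's consumer-side flow already has exactly that shape:
enable monitoring, subscribe to the events of interest, then consume requests
from the shared ring.  A rough sketch using the existing libxc calls, with
error handling and the ring/evtchn plumbing omitted:

/* "Tell me when the guest does $X" with the current monitor interface. */
#include <xenctrl.h>

void monitor_example(xc_interface *xch, uint32_t domid)
{
    uint32_t remote_port;
    void *ring_page;

    /* Set up the vm_event ring (currently backed by an HVMPARAM). */
    ring_page = xc_monitor_enable(xch, domid, &remote_port);

    /* "Tell me when the guest hits a software breakpoint". */
    xc_monitor_software_breakpoint(xch, domid, true);

    /* The consumer then binds remote_port via evtchn and consumes
     * vm_event_request_t entries from ring_page. */
}

There is no $N-emulators dispatch step anywhere in that flow.)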

There is no overlap in the format of data used, nor in the cases where an
event would be sent.  Therefore, I think trying to implement vm_event in
terms of the ioreq server infrastructure is a short-sighted move.

Beyond that, the only similarity is the slotted ring setup, which can be
entirely abstracted away behind resource mapping.  This actually comes
with a bonus in that vm_event will no longer strictly be tied to HVM
guests by virtue of its ring living in an HVMPARAM.
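
To illustrate, the consumer side could then be as simple as the sketch
below.  XENMEM_resource_vm_event, VM_EVENT_NR_FRAMES and
map_vm_event_buffers() are made-up names; the libxenforeignmemory call
itself is the existing interface already used for ioreq server pages and
the grant table.

/* Hypothetical sketch: vm_event buffers exposed as a new mappable resource. */
#include <stdio.h>
#include <sys/mman.h>            /* PROT_READ, PROT_WRITE */
#include <xenforeignmemory.h>

#define XENMEM_resource_vm_event  3   /* made-up resource type */
#define VM_EVENT_NR_FRAMES        2   /* async ring + sync slots (assumption) */

static xenforeignmemory_resource_handle *
map_vm_event_buffers(xenforeignmemory_handle *fmem, domid_t domid, void **addr)
{
    xenforeignmemory_resource_handle *res;

    *addr = NULL;

    /* The frames are domheap pages owned by Xen rather than by the target
     * DomU, so nothing has to be reclaimed from the guest on teardown. */
    res = xenforeignmemory_map_resource(fmem, domid,
                                        XENMEM_resource_vm_event,
                                        0 /* id */, 0 /* frame */,
                                        VM_EVENT_NR_FRAMES,
                                        addr, PROT_READ | PROT_WRITE, 0);
    if ( !res )
        perror("xenforeignmemory_map_resource");

    /* Unmap later with xenforeignmemory_unmap_resource(fmem, res). */
    return res;
}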

~Andrew
