On 6/24/2016 4:01 PM, Jan Beulich wrote:
>>> On 24.06.16 at 09:12, wrote:
> On 6/24/2016 2:12 PM, Jan Beulich wrote:
>> In any event, I think log-dirty shouldn't be disabled when an
>> ioreq server binds the type, but as long as there are outstanding
>> entries of that type. That way, the "cannot be migrated"
On 6/24/2016 2:12 PM, Jan Beulich wrote:
>>> On 24.06.16 at 06:16, wrote:
> I'm now willing to take your suggestions:
> a) we still need the p2m resetting when an ioreq server is unbound;
> b) disable the log-dirty feature if an ioreq server is bound.
>
> Does anyone else have different opinions? Thanks!
Hmm, in
>>> On 22.06.16 at 12:15, wrote:
> On 22/06/16 11:10, Jan Beulich wrote:
>> On 22.06.16 at 11:47, wrote:
>>> So you're afraid of this sequence of events?
>>> 1) Server A de-registered, triggering an ioreq_server -> ram_rw type change
>>> 2)
On 21/06/16 10:47, Jan Beulich wrote:
> And then - didn't we mean to disable that part of XenGT during
> migration, i.e. temporarily accept the higher performance
> overhead without the p2m_ioreq_server entries? In which case
> flipping everything back to p2m_ram_rw after
On 6/21/2016 4:22 PM, Jan Beulich wrote:
> Above modification would convert _all_ p2m_ioreq_server entries into
> p2m_ram_rw, irrespective of log-dirty mode being active. Which
> I don't think is what you want.
Well, this is another situation I found very
>>> On 21.06.16 at 09:45, wrote:
> On 6/20/2016 9:38 PM, Jan Beulich wrote:
>> On 20.06.16 at 14:06, wrote:
>>> However, if live migration is started (all pte entries invalidated
>>> again), resolve_misconfig() would
>>> change both gfn
>>> On 20.06.16 at 14:06, wrote:
> Suppose resolve_misconfig() is modified to change all p2m_ioreq_server
> entries (which also have the e.recalc flag turned on) back to p2m_ram_rw.
> And suppose we have ioreq server 1, which emulates gfn A, and ioreq
> server 2 which
> So one solution is to disallow
On 6/20/2016 6:45 PM, Jan Beulich wrote:
On 20.06.16 at 12:30, wrote:
On 6/20/2016 6:10 PM, George Dunlap wrote:
On 20/06/16 10:03, Yu Zhang wrote:
So one solution is to disallow the log dirty feature in XenGT, i.e. just
return failure when enable_logdirty()
is
>>> On 20.06.16 at 12:32, wrote:
> On 20/06/16 11:25, Jan Beulich wrote:
> On 20.06.16 at 12:10, wrote:
>>> On 20/06/16 10:03, Yu Zhang wrote:
However, there are conflicts if we take live migration into account,
i.e. if the live
>>> On 20.06.16 at 12:30, wrote:
> On 6/20/2016 6:10 PM, George Dunlap wrote:
>> On 20/06/16 10:03, Yu Zhang wrote:
>>> So one solution is to disallow the log-dirty feature in XenGT, i.e.
>>> just return failure when enable_logdirty() is called in toolstack. But
>>> On 20.06.16 at 12:10, wrote:
> On 20/06/16 10:03, Yu Zhang wrote:
>> However, there are conflicts if we take live migration into account,
>> i.e. if the live migration is triggered by the user (unintentionally
>> maybe) during the gpu emulation process,
On 16/06/16 10:55, Jan Beulich wrote:
>> Previously in the 2nd version, I used p2m_change_entry_type_global() to
>> reset the outstanding p2m_ioreq_server entries back to p2m_ram_rw
>> asynchronously after the de-registration. But we realized later that
>> this approach means we can not
>>> On 16.06.16 at 13:18, wrote:
> On 6/16/2016 6:02 PM, Jan Beulich wrote:
>>> +struct xen_hvm_map_mem_type_to_ioreq_server {
>>> +    domid_t domid;      /* IN - domain to be serviced */
>>> +    ioservid_t id;      /* IN - ioreq server id */
>>> +
On 6/16/2016 6:02 PM, Jan Beulich wrote:
@@ -94,8 +96,16 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS |
>>> On 16.06.16 at 11:32, wrote:
>> On 6/14/2016 6:45 PM, Jan Beulich wrote:
>>> On 19.05.16 at 11:05, wrote:
> @@ -914,6 +916,45 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
On 6/14/2016 6:45 PM, Jan Beulich wrote:
>>> On 19.05.16 at 11:05, wrote:
> A new HVMOP - HVMOP_map_mem_type_to_ioreq_server, is added to
> let one ioreq server claim/disclaim its responsibility for the
> handling of guest pages with p2m type p2m_ioreq_server. Users
> of this HVMOP can specify which kind of operation is supposed to
On 15/06/16 11:21, Jan Beulich wrote:
>> I think you've tripped over "changing coding styles" in unfamiliar code
>> before too, so you know how frustrating it is to try to follow the
>> existing coding style only to be told that you did it wrong. :-)
>
> Agreed, you caught me on this one. Albeit
>>> On 15.06.16 at 11:50, wrote:
> On 14/06/16 14:31, Jan Beulich wrote:
>> On 14.06.16 at 15:13, wrote:
>>> On 14/06/16 11:45, Jan Beulich wrote:
>>>> Locking is somewhat strange here: You protect against the "set"
>>>> counterpart altering
On 14/06/16 11:45, Jan Beulich wrote:
>> +                                         struct hvm_ioreq_server *s)
>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    int rc;
>> +
>> +    spin_lock(&p2m->ioreq.lock);
>> +
>> +    if ( flags == 0 )
>> +    {
>> +        rc = -EINVAL;
>> +        if (