Signed-off-by: Yi Zhang <yi.z.zh...@intel.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Jan Beulich <jbeul...@suse.com>
Please try to have a cover letter in the future when you have multiple
patches. This will make it easier to give comments/release-acks for
all the patches.
to reference a superpage.
Therefore the logic to enumerate the L1/L2 page table and to
reset the corresponding L2/L3 PTE needs to be protected with
the spinlock, and the _PAGE_PRESENT and _PAGE_PSE flags need
to be checked after the lock is obtained.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
spinlock is obtained,
for the corresponding L2/L3 entry.
Signed-off-by: Min He <min...@intel.com>
Signed-off-by: Yi Zhang <yi.z.zh...@intel.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
On 11/13/2017 5:31 PM, Jan Beulich wrote:
On 10.11.17 at 15:05, wrote:
On 11/10/2017 5:49 PM, Jan Beulich wrote:
I'm not certain this is important enough a fix to consider for 4.10,
and you seem to think it's good enough if this gets applied only
after the tree
On 11/10/2017 5:49 PM, Jan Beulich wrote:
On 10.11.17 at 08:18, wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4844,9 +4844,19 @@ int map_pages_to_xen(
{
unsigned long base_mfn;
-pl1e =
On 11/10/2017 5:57 PM, Jan Beulich wrote:
On 10.11.17 at 08:18, wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5097,6 +5097,17 @@ int modify_xen_mappings(unsigned long s, unsigned long
e, unsigned int nf)
*/
if ( (nf &
Signed-off-by: Yi Zhang <yi.z.zh...@intel.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Changes in v2:
According to comments from Jan Beulich,
- check PSE of pl2e and pl3e, and skip the
.
Otherwise, the paging structure may be freed more than once, if
the same routine is invoked simultaneously on different CPUs.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
---
xen/
normal page.
Protecting the `pl1e` with the lock will fix this race condition.
Signed-off-by: Min He <min...@intel.com>
Signed-off-by: Yi Zhang <yi.z.zh...@intel.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Oh, one more thing: Is it really the case that all
On 11/9/2017 5:19 PM, Jan Beulich wrote:
On 09.11.17 at 16:29, wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4844,9 +4844,10 @@ int map_pages_to_xen(
{
unsigned long base_mfn;
-pl1e =
Signed-off-by: Min He <min...@intel.com>
Signed-off-by: Yi Zhang <yi.z.zh...@intel.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
---
xen/arch/x86/mm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
On 8/22/2017 8:44 PM, Julien Grall wrote:
Hi,
On 22/08/17 11:22, Yu Zhang wrote:
On 8/21/2017 6:15 PM, Julien Grall wrote:
Hi Paul,
On 21/08/17 11:11, Paul Durrant wrote:
-Original Message-
From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
Julien Grall
Sent
XenGT (v7)
- XEN-43
- Yu Zhang
- Paul Durrant
I think this is either done or obsolete now. Not sure which.
CCed Yu Zhang to tell which one.
Thanks, Julien. This is done now. :)
Yu
Cheers,
On 8/15/2017 6:28 PM, Andrew Cooper wrote:
On 15/08/17 04:18, Boqun Feng (Intel) wrote:
Add a "umip" test for User-Mode Instruction Prevention. The test
simply tries to run sgdt/sidt/sldt/str/smsw in guest user mode with
CR4_UMIP = 1.
Signed-off-by: Boqun Feng (Intel)
On 7/20/2017 7:24 PM, Andrew Cooper wrote:
On 20/07/17 11:36, Yu Zhang wrote:
On 7/20/2017 6:42 PM, Andrew Cooper wrote:
On 20/07/17 11:10, Yu Zhang wrote:
On 7/17/2017 6:53 PM, Juergen Gross wrote:
Hey,
I took a few notes at the 5-level-paging session at the summit.
I hope
On 7/20/2017 6:42 PM, Andrew Cooper wrote:
On 20/07/17 11:10, Yu Zhang wrote:
On 7/17/2017 6:53 PM, Juergen Gross wrote:
Hey,
I took a few notes at the 5-level-paging session at the summit.
I hope there isn't any major stuff missing...
Participants (at least naming the active ones
On 7/17/2017 6:53 PM, Juergen Gross wrote:
Hey,
I took a few notes at the 5-level-paging session at the summit.
I hope there isn't any major stuff missing...
Participants (at least naming the active ones): Andrew Cooper,
Jan Beulich, Yu Zhang and myself (the list is just from my memory
On 5/10/2017 12:29 AM, Jan Beulich wrote:
On 05.04.17 at 10:59, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -411,14 +411,17 @@ static int dm_op(domid_t domid,
while ( read_atomic(&p2m->ioreq.entry_count) &&
On 5/8/2017 7:12 PM, George Dunlap wrote:
On 08/05/17 11:52, Zhang, Xiong Y wrote:
On 06.05.17 at 03:51, wrote:
On 05.05.17 at 05:52, wrote:
'commit 1679e0df3df6 ("x86/ioreq server: asynchronously reset
outstanding p2m_ioreq_server
On 4/28/2017 3:45 PM, Zhang, Xiong Y wrote:
I found this patch couldn't work; the reason is explained inline, and a
fix needs to be proposed.
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 7e0da81..d72b7bd 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -384,15
On 4/20/2017 6:23 PM, Andrew Cooper wrote:
On 20/04/17 11:10, Yu Zhang wrote:
On 4/20/2017 6:01 PM, Jan Beulich wrote:
On 20.04.17 at 11:53, <yu.c.zh...@linux.intel.com> wrote:
On 4/20/2017 5:47 PM, Jan Beulich wrote:
On 20.04.17 at 09:15, <yu.c.zh...@linux.intel.com> wrote:
On 4/20/2017 6:01 PM, Jan Beulich wrote:
On 20.04.17 at 11:53, wrote:
On 4/20/2017 5:47 PM, Jan Beulich wrote:
On 20.04.17 at 09:15, wrote:
And back to the schedule of this feature, are you working on it? Or any
specific plan?
Well,
On 4/20/2017 5:47 PM, Jan Beulich wrote:
On 20.04.17 at 09:15, wrote:
And back to the schedule of this feature, are you working on it? Or any
specific plan?
Well, the HVM side is basically ready (as said, the single hunk needed
to support UMIP when hardware
On 4/19/2017 10:09 PM, Andrew Cooper wrote:
On 19/04/17 15:07, Jan Beulich wrote:
On 19.04.17 at 15:58, <andrew.coop...@citrix.com> wrote:
On 19/04/17 14:50, Yu Zhang wrote:
On 4/19/2017 9:34 PM, Jan Beulich wrote:
On 19.04.17 at 13:44, <yu.c.zh...@linux.intel.com> wrote:
On
On 4/19/2017 9:34 PM, Jan Beulich wrote:
On 19.04.17 at 13:44, wrote:
On 4/19/2017 7:19 PM, Jan Beulich wrote:
On 19.04.17 at 11:48, wrote:
Does hypervisor need to differentiate dom0 kernel and its
user space?
If we want to
On 4/19/2017 7:19 PM, Jan Beulich wrote:
On 19.04.17 at 11:48, wrote:
On 4/19/2017 5:18 PM, Jan Beulich wrote:
On 19.04.17 at 10:48, wrote:
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But
On 4/19/2017 5:59 PM, Andrew Cooper wrote:
On 19/04/17 10:48, Yu Zhang wrote:
On 4/19/2017 5:18 PM, Jan Beulich wrote:
On 19.04.17 at 10:48, <yu.c.zh...@linux.intel.com> wrote:
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But realized that xe
On 4/19/2017 5:18 PM, Jan Beulich wrote:
On 19.04.17 at 10:48, wrote:
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But realized that xen does not have logic to expose UMIP
feature to guests - you have sent out one in
Hi Jan,
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But realized that xen does not have logic to expose UMIP
feature to guests - you have sent out one in
https://lists.xenproject.org/archives/html/xen-devel/2016-12/msg00552.html
to emulate the cpuid leaf,
t.c).
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Signed-off-by: George Dunlap <george.dun...@citrix.com>
Reviewed-by: Paul Durrant <paul.durr...@citrix.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
to see if p2m->ioreq.server is valid or
not. If it is, we leave it as type p2m_ioreq_server; if not, we reset
it to p2m_ram as appropriate.
To avoid code duplication, lift recalc_type() out of p2m-pt.c and use
it for all type recalculations (both in p2m-pt.c and p2m-ept.c).
Signed-off-by: Yu Zha
On 4/7/2017 7:28 PM, Jan Beulich wrote:
On 07.04.17 at 12:50, <yu.c.zh...@linux.intel.com> wrote:
On 4/7/2017 6:28 PM, George Dunlap wrote:
On 07/04/17 11:14, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, <yu.c.zh...@linux.intel.com> wrote:
--
On 4/7/2017 6:26 PM, Jan Beulich wrote:
On 07.04.17 at 11:53, wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int
On 4/7/2017 6:28 PM, George Dunlap wrote:
On 07/04/17 11:14, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, <yu.c.zh...@linux.intel.com> wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_mis
On 4/7/2017 6:22 PM, George Dunlap wrote:
On 07/04/17 10:53, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, <yu.c.zh...@linux.intel.com> wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_mis
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m,
unsigned long gfn)
e.ipat = ipat;
is mapped. And
since the sweeping of p2m table could be time consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Paul Durrant <paul.durr...@citrix.com>
Reviewed-by: Jan Beulich <jbeul...@suse.com>
Reviewed-by: George
. The core reason is our current
implementation of p2m_change_entry_type_global() lacks information
to resync p2m_ioreq_server entries correctly if global_logdirty is
on.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Paul Durrant <paul.durr...@citrix.com>
---
Cc:
because both reads and writes will go to the device mode.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Jan Beulich <jbeul...@suse.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
; only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Acked-by: Tim Deegan <t...@xen.org>
Reviewe
server X. This wrapper shall be updated when such change
is made.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Paul Durrant <paul.durr...@citrix.com>
Acked-by: Wei Liu <wei.l...@citrix.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Jan Beulich <jbeul...@suse.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim its ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (6):
x86/ioreq server
On 4/6/2017 10:25 PM, George Dunlap wrote:
On 06/04/17 14:19, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the current p2m_change_entry_type_global()
interface.
New
Sorry, forgot cc.
Please ignore this thread.
Yu
On 4/6/2017 9:18 PM, Yu Zhang wrote:
A new device model wrapper is added for the newly introduced
DMOP - XEN_DMOP_map_mem_type_to_ioreq_server.
Since currently this DMOP only supports the emulation of write
operations, attempts to trigger
server X, This wrapper shall be updated when such change
is made.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Paul Durrant <paul.durr...@citrix.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Jan Beulich <jbeul...@suse.com>
---
xen/arch/x86/hvm/hvm.c | 7 +--
1 fil
On 4/6/2017 3:48 PM, Jan Beulich wrote:
On 05.04.17 at 20:04, wrote:
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
put_gfn(d, gmfn);
return 1;
}
+
On 4/6/2017 2:02 AM, Yu Zhang wrote:
On 4/6/2017 1:28 AM, Yu Zhang wrote:
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10
On 4/6/2017 1:28 AM, Yu Zhang wrote:
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
<yu.c
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang <yu.c.zh...@linux.intel.com>
wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang <yu.c.zh...@linux.intel.com>
wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be
On 4/5/2017 11:11 PM, George Dunlap wrote:
On 05/04/17 16:10, George Dunlap wrote:
On 05/04/17 09:59, Yu Zhang wrote:
Previously, p2m_finish_type_change() is triggered to iterate and
clean up the p2m table when an ioreq server unmaps from memory type
HVMMEM_ioreq_server. And the current
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang <yu.c.zh...@linux.intel.com> wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the c
On 4/5/2017 6:46 PM, Jan Beulich wrote:
On 05.04.17 at 12:26, <yu.c.zh...@linux.intel.com> wrote:
On 4/5/2017 6:33 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 06:21:16PM +0800, Yu Zhang wrote:
So this series is OK for merge. And with compat wrapper dropped while
committing,
we do no
On 4/5/2017 6:33 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 06:21:16PM +0800, Yu Zhang wrote:
On 4/5/2017 6:08 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote
On 4/5/2017 6:20 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 11:08:46AM +0100, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu
On 4/5/2017 6:08 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 02 April 2017
On 4/5/2017 5:21 PM, Jan Beulich wrote:
On 05.04.17 at 08:53, wrote:
Or with other patches received "Reviewed by", we can just drop the
useless code of this patch.
Any suggestions?
Without the libxc wrapper, the new DMOP is effectively dead code
too. All or
change is performed for the just finished
iterations, which means p2m_finish_type_change() will return quite
soon. So in such scenario, we can allow the p2m iteration to continue,
without checking the hypercall pre-emption.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Note: this patch
ronous resetting is necessary because we need to guarantee
the p2m table is clean before another ioreq server is mapped. And
since the sweeping of p2m table could be time consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
Reviewed-by: Jan Be
On 4/3/2017 10:36 PM, Jan Beulich wrote:
So this produces the same -EINVAL as the earlier check in context
above. I think it would be nice if neither did - -EINUSE for the first
(which we don't have, so -EOPNOTSUPP would seem the second
bets option there) and -EBUSY for the second would seem
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 02 April 2017 13:24
To: xen-devel@lists.xen.org
Cc: zhiyuan...@intel.com; Paul Durrant <paul.durr...@citrix.com>
is mapped. And
since the sweeping of p2m table could be time consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
; only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
this patch shall be accepted together with the following ones in
this series.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
Signed-off-by: Yu
server X. This wrapper shall be updated when such change
is made.
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
---
tools/libs/devicemodel/core.
On 3/24/2017 5:37 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Wednesday, March 22, 2017 6:12 PM
On 3/22/2017 4:10 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
After an ioreq server has
On 3/24/2017 6:37 PM, Jan Beulich wrote:
On 24.03.17 at 10:05, wrote:
On 3/23/2017 5:00 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:29 PM, Jan Beulich wrote:
On 21.03.17 at 03:52,
On 3/24/2017 6:19 PM, Jan Beulich wrote:
On 24.03.17 at 10:05, wrote:
On 3/23/2017 4:57 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:21 PM, Jan Beulich wrote:
On 21.03.17 at 03:52,
On 3/24/2017 5:26 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Wednesday, March 22, 2017 6:13 PM
On 3/22/2017 3:49 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
A new DMOP
On 3/23/2017 5:02 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:39 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -385,16 +385,51 @@ static int
On 3/23/2017 5:00 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:29 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -949,6 +949,14 @@ int
On 3/23/2017 4:57 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:21 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
---
xen/arch/x86/hvm/dm.c| 37 ++--
On 3/22/2017 10:21 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
---
xen/arch/x86/hvm/dm.c| 37 ++--
xen/arch/x86/hvm/emulate.c | 65 ---
xen/arch/x86/hvm/ioreq.c | 38
On 3/22/2017 10:39 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
case XEN_DMOP_map_mem_type_to_ioreq_server:
{
-const struct
On 3/22/2017 10:29 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -949,6 +949,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d,
ioservid_t id,
On 3/22/2017 3:49 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
A new DMOP - XEN_DMOP_map_mem_type_to_ioreq_server, is added to let
one ioreq server claim/disclaim its responsibility for the handling of guest
pages with p2m
On 3/22/2017 4:10 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously
On 3/21/2017 9:49 PM, Paul Durrant wrote:
-Original Message-
[snip]
+if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+while ( read_atomic(&p2m->ioreq.entry_count) &&
+
On 3/21/2017 6:00 PM, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 21 March 2017 02:53
To: xen-devel@lists.xen.org
Cc: zhiyuan...@intel.com; Paul Durrant <paul.durr...@citrix.com>; Jan
Beulich <jbeul...@suse.com>; A
XEN_DMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
are only supported for HVMs with HAP enabled.
Also note that only after one ioreq server claims its ownership
of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
be allowed.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
Signed-off-by: Yu
because both reads and writes will go to the device mode.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
-by: Yu Zhang <yu.c.zh...@linux.intel.com>
---
Cc: Paul Durrant <paul.durr...@citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Jun Nakajima <jun.nakaj...@intel.com>
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim its ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (5):
x86/ioreq server