By default the suppress-#VE (sve) bits are not set.
This patch adds the option of setting the sve bits upon creating a new
altp2m view.
Signed-off-by: Alexandru Isaila
---
tools/libxc/include/xenctrl.h | 3 +++
tools/libxc/xc_altp2m.c | 28
xen/arch/x86/hvm/hvm.c
On 29.08.2019 18:04, Jan Beulich wrote:
> On 22.08.2019 16:02, Alexandru Stefan ISAILA wrote:
>> This patch adds access control for NPT mode.
>>
>> The access rights are stored in bits 56:53 of the NPT p2m table.
>
> Why starting from bit 53? I can't seem to find any us
On 27.08.2019 11:26, Jan Beulich wrote:
> On 20.08.2019 22:11, Andrew Cooper wrote:
>> On 30/07/2019 15:54, Jan Beulich wrote:
@@ -622,14 +622,22 @@ static void *hvmemul_map_linear_addr(
}
if ( p2mt == p2m_ioreq_server )
- {
A/D bit writes (on page walks) can be considered benign by an introspection
agent, so receiving vm_events for them is a pessimization. We try here to
optimize by filtering these events out.
Currently, we are fully emulating the instruction at RIP when the hardware sees
an EPT fault with npfec.kind
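The filtering idea in this cover letter can be illustrated with a small, self-contained sketch. Here `struct npfec` is reduced to the two fields the description mentions, and `should_send_vm_event()` is a hypothetical name, not Xen's actual predicate:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical mirror of the relevant npfec fields; the real Xen
 * struct npfec carries more members (read_access, insn_fetch, ...). */
enum npfec_kind {
    npfec_kind_unknown,  /* cause unknown */
    npfec_kind_in_gpt,   /* access hit a guest page table (e.g. A/D bit write) */
    npfec_kind_with_gla, /* ordinary access with a guest linear address */
};

struct npfec {
    bool write_access;
    enum npfec_kind kind;
};

/*
 * Sketch of the optimization described above: a write fault raised while
 * the hardware walks the guest page tables (an A/D bit update) is treated
 * as benign, so no vm_event is sent for it; everything else still is.
 */
static bool should_send_vm_event(const struct npfec *npfec)
{
    if ( npfec->write_access && npfec->kind == npfec_kind_in_gpt )
        return false; /* benign A/D bit write during a page walk */

    return true;
}
```

This only models the decision itself; in the real series the check sits on the EPT-violation vm_event path, under the introspection agent's configuration.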
On 03.09.2019 18:52, Jan Beulich wrote:
> On 02.09.2019 10:11, Alexandru Stefan ISAILA wrote:
>> @@ -1355,6 +1355,23 @@ void p2m_init_altp2m_ept(struct domain *d, unsigned
>> int i)
>> ept = &p2m->ept;
>> ept->mfn = pagetable_get_pf
On 04.09.2019 15:14, Jan Beulich wrote:
> On 04.09.2019 13:51, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 03.09.2019 18:52, Jan Beulich wrote:
>>> On 02.09.2019 10:11, Alexandru Stefan ISAILA wrote:
>>>> @@ -1355,6 +1355,23 @@ void p2m_init_altp2m_
On 04.09.2019 16:17, Jan Beulich wrote:
> On 04.09.2019 15:04, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 04.09.2019 15:14, Jan Beulich wrote:
>>> On 04.09.2019 13:51, Alexandru Stefan ISAILA wrote:
>>>>
>>>>
>>>> On 03.09.2
On 06.09.2019 18:46, Jan Beulich wrote:
> On 03.09.2019 16:01, Alexandru Stefan ISAILA wrote:
>> A/D bit writes (on page walks) can be considered benign by an introspection
>> agent, so receiving vm_events for them is a pessimization. We try here to
>> optimize by filte
On 09.09.2019 13:49, Jan Beulich wrote:
> On 09.09.2019 12:01, Alexandru Stefan ISAILA wrote:
>> On 06.09.2019 18:46, Jan Beulich wrote:
>>> On 03.09.2019 16:01, Alexandru Stefan ISAILA wrote:
>>>>}
>>>> +/* Check if any vm_event was sent
On 09.09.2019 14:15, Jan Beulich wrote:
> On 09.09.2019 13:03, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 09.09.2019 13:49, Jan Beulich wrote:
>>> On 09.09.2019 12:01, Alexandru Stefan ISAILA wrote:
>>>> On 06.09.2019 18:46, Jan Beulich wrote:
>>
On 11.09.2019 12:57, Jan Beulich wrote:
> On 09.09.2019 17:35, Alexandru Stefan ISAILA wrote:
>> A/D bit writes (on page walks) can be considered benign by an introspection
>> agent, so receiving vm_events for them is a pessimization. We try here to
>> optimize by filte
On 16.09.2019 18:58, Jan Beulich wrote:
> On 16.09.2019 10:10, Alexandru Stefan ISAILA wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3224,6 +3224,14 @@ static enum hvm_translation_result __hvm_copy(
>> ret
On 17.09.2019 11:09, Jan Beulich wrote:
> On 17.09.2019 09:52, Alexandru Stefan ISAILA wrote:
>> On 16.09.2019 18:58, Jan Beulich wrote:
>>> On 16.09.2019 10:10, Alexandru Stefan ISAILA wrote:
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hv
On 17.09.2019 17:32, Jan Beulich wrote:
> On 17.09.2019 16:11, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 17.09.2019 11:09, Jan Beulich wrote:
>>> On 17.09.2019 09:52, Alexandru Stefan ISAILA wrote:
>>>> On 16.09.2019 18:58, Jan Beulich wrote:
>>
On 17.09.2019 18:04, Jan Beulich wrote:
> On 17.09.2019 17:00, Alexandru Stefan ISAILA wrote:
>> There is no problem, I understand the risk of having suspicious return
>> values. I am not hung up on having this return. I used this to skip
>> adding a new return value. I can d
On 18.09.2019 12:47, Jan Beulich wrote:
> On 17.09.2019 17:09, Tamas K Lengyel wrote:
>> On Tue, Sep 17, 2019 at 8:24 AM Razvan Cojocaru
>> wrote:
>>>
>>> On 9/17/19 5:11 PM, Alexandru Stefan ISAILA wrote:
>>>>>>>> +bool hvm
On 19.09.2019 16:59, Jan Beulich wrote:
> On 19.09.2019 15:03, Alexandru Stefan ISAILA wrote:
>> @@ -601,6 +602,7 @@ static void *hvmemul_map_linear_addr(
>>
>> case HVMTRANS_gfn_paged_out:
>> case HVMTRANS_gfn_shared:
>> +
On 19.09.2019 17:09, Paul Durrant wrote:
>> -Original Message-
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index fdb1e17f59..4cc077bb3f 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3236,6 +3236,19 @@ static enum hvm_translation_resul
On 20.09.2019 11:24, Jan Beulich wrote:
> On 20.09.2019 10:06, Alexandru Stefan ISAILA wrote:
>> On 19.09.2019 16:59, Jan Beulich wrote:
>>> Furthermore while you now restrict the check to linear address
>>> based accesses, other than the description says (or at le
On 19.09.2019 13:37, Jan Beulich wrote:
> hvm_monitor_cpuid() expects the input registers, not two of the outputs.
>
> However, once having made the necessary adjustment, the SVM and VMX
> functions are so similar that they should be folded (thus avoiding
> further similar asymmetries to get int
On 20.09.2019 17:22, Jan Beulich wrote:
> On 20.09.2019 14:16, Alexandru Stefan ISAILA wrote:
>> In order to have __hvm_copy() issue X86EMUL_RETRY, a new return type,
>> HVMTRANS_need_retry, was added and all the places that consume HVMTRANS*
>> and needed adjustment wher
On 20.09.2019 18:20, Jan Beulich wrote:
> On 20.09.2019 16:59, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 20.09.2019 17:22, Jan Beulich wrote:
>>> On 20.09.2019 14:16, Alexandru Stefan ISAILA wrote:
>>>> In order to have __hvm_cop
On 23.09.2019 16:43, Jan Beulich wrote:
> On 23.09.2019 14:05, Alexandru Stefan ISAILA wrote:
>> @@ -599,8 +600,15 @@ static void *hvmemul_map_linear_addr(
>> err = NULL;
>> goto out;
>>
>> -case HVMTRANS_gfn_paged_out:
>
On 23.09.2019 16:05, Paul Durrant wrote:
>> -Original Message-
>> From: Alexandru Stefan ISAILA
>> Sent: 23 September 2019 13:06
>> To: xen-devel@lists.xenproject.org
>> Cc: Paul Durrant ; jbeul...@suse.com; Andrew Cooper
>> ; w...@xen.org; Roger Pau
On 25.04.2019 15:54, Jan Beulich wrote:
>>>> On 24.04.19 at 16:46, wrote:
>> On Wed, Apr 24, 2019 at 02:27:32PM +, Alexandru Stefan ISAILA wrote:
>>> @@ -1053,15 +1053,11 @@ static void change_type_range(struct p2m_domain
>>> *p2m,
>>>
On 08.04.2019 18:32, Jan Beulich wrote:
On 06.02.19 at 13:53, wrote:
>> This patch aims to have mem access vm events sent from the emulator.
>> This is useful in the case of page-walks that have to emulate
>> instructions in access denied pages.
>
> I'm afraid that I can't make sense of th
>> @@ -530,6 +532,55 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
>> return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
>> }
>>
>> +static bool hvmemul_send_vm_event(paddr_t gpa, unsigned long gla, gfn_t gfn,
>> + uint32_t pfec
On 15.05.2019 11:23, Jan Beulich wrote:
> Their pre-AVX512 incarnations have clearly been overlooked during much
> earlier work. Their memory access pattern is entirely standard, so no
> specific tests get added to the harness.
>
> Reported-by: Razvan Cojocaru
> Signed-off-by: Jan Beulich
Tes
This is done so hvmemul_linear_to_phys() can be called from
hvmemul_send_vm_event().
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/hvm/emulate.c | 181 ++---
1 file changed, 90 insertions(+), 91 deletions(-)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/
This patch aims to have mem access vm events sent from the emulator.
This is useful in the case of emulated instructions that cause
page-walks on access protected pages.
We use hvmemul_map_linear_addr() to intercept r/w access and
hvmemul_insn_fetch() to intercept exec access.
First we try to sen
Hi George,
Did you have time to look at this patch?
Regards,
Alex
On 03.05.2019 11:04, Jan Beulich wrote:
On 03.05.19 at 09:53, wrote:
>> On 25.04.2019 15:54, Jan Beulich wrote:
>>> It is an issue anyway that a change is
>>> made without saying why the new behavior preferable over
>>> the
On 22.05.2019 12:56, Jan Beulich wrote:
On 20.05.19 at 14:55, wrote:
>> This patch aims to have mem access vm events sent from the emulator.
>> This is useful in the case of emulated instructions that cause
>> page-walks on access protected pages.
>>
>> We use hvmemul_map_linear_addr() to i
>>> Despite what was said before you're still doing things a 2nd time
>>> here just because of hvmemul_send_vm_event()'s needs, even
>>> if that function ends up bailing right away.
>>
>> I don't understand what things are done 2 times. Can you please explain?
>
> You add code above that exists a
>
>> +return false;
>> +
>> +rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, &ctxt);
>
> As said before - I don't think it's a good idea to do the page walk
> twice: This and the pre-existing one can easily return different
> results.
What preexisting page walk are you tal
This patch aims to have mem access vm events sent from the emulator.
This is useful where we want to only emulate a page walk without
checking the EPT, but we still want to check the EPT when emulating
the instruction that caused the page walk. In this case, the original
EPT fault is caused by the
This new function returns the active altp2m index from a given vcpu.
Signed-off-by: Alexandru Isaila
---
tools/libxc/include/xenctrl.h | 2 ++
tools/libxc/xc_altp2m.c | 25 +
xen/arch/x86/hvm/hvm.c | 24
xen/include/public/hvm/h
On 06.06.2019 15:25, Jan Beulich wrote:
On 06.06.19 at 14:16, wrote:
>> @@ -4735,6 +4736,29 @@ static int do_altp2m_op(
>> _gfn(a.u.change_gfn.old_gfn),
>> _gfn(a.u.change_gfn.new_gfn));
>> break;
>> +
>> +case HVMOP_altp2m_get_p2m_i
This new function returns the active altp2m index from a given vcpu.
Signed-off-by: Alexandru Isaila
---
Changes since V3:
- Use domain_vcpu()
- Drop xen_hvm_altp2m_get_vcpu_p2m_idx_t.
---
tools/libxc/include/xenctrl.h | 2 ++
tools/libxc/xc_altp2m.c | 25
The patch adds a new libxc function (xc_altp2m_get_vcpu_p2m_idx) that
uses a new hvmop (HVMOP_altp2m_get_p2m_idx) to get the active altp2m
index from a given vcpu.
Signed-off-by: Alexandru Isaila
---
Changes since V2:
- Update comment and title
- Remove redundant max_vcpu check.
Hi all,
Any remarks on the patch at hand are appreciated.
Thanks,
Alex
On 04.06.2019 14:49, Alexandru Stefan ISAILA wrote:
> This patch aims to have mem access vm events sent from the emulator.
> This is useful where we want to only emulate a page walk without
> checking the EPT, but
This patch aims to have mem access vm events sent from the emulator.
This is useful in the case of page-walks that have to emulate
instructions in access denied pages.
We use hvmemul_map_linear_addr() to intercept r/w access and
hvmemul_insn_fetch() to intercept exec access.
First we try to send
Ping
Suravee / Brian / Boris any ideas on this topic are appreciated.
Regards,
Alex
On 27.09.2018 13:37, George Dunlap wrote:
> On 09/26/2018 06:22 PM, Andrew Cooper wrote:
>> On 26/09/18 17:47, George Dunlap wrote:
>>> From: Isaila Alexandru
>>>
>>> This patch adds access control for NPT mode.
>> +if ( altp2m_active(current->domain) )
>> +p2m = p2m_get_altp2m(current);
>> +if ( !p2m )
>> +p2m = p2m_get_hostp2m(current->domain);
>> +
>> +gfn_lock(p2m, gfn, 0);
>> +mfn = p2m->get_entry(p2m, gfn, &p2mt, &access, 0, NULL, NULL);
>> +gfn_unlock(p2m, gfn, 0
>>>
>>> Newline.
>>>
+default:
+return false;
+}
>>>
>>> I'm not sure the switch is needed, you can't have a PFEC_write_access
>>> for example if the p2m type is p2m_access_w or p2m_access_rw, hence it
>>> seems like it could be simplified to only take the pfec into
This is done so hvmemul_linear_to_phys() can be called from
hvmemul_map_linear_addr()
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/hvm/emulate.c | 181 ++---
1 file changed, 90 insertions(+), 91 deletions(-)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/
Changed the return value of 1 to 0 so now p2m_finish_type_change returns
0 for success or <0 for error.
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/mm/p2m.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index d
On 16.01.2019 17:39, Jan Beulich wrote:
On 16.01.19 at 16:13, wrote:
>> Changed the return value of 1 to 0 so now p2m_finish_type_change returns
>> 0 for success or <0 for error.
>
> This is a valid alternative return value model. Both have their merits.
> Hence if you want to change from
Changed the return value of 1 to 0 so now p2m_finish_type_change returns
0 for success or <0 for error.
The “root” caller of p2m_finish_type_change() is
XEN_DMOP_map_mem_type_to_ioreq_server and this does nothing useful with
positive values.
Suggested-by: George Dunlap
Signed-off-by: Alexandru Is
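The convention change above can be illustrated with a tiny, hypothetical adapter: positive "work done" values collapse to 0 (success), while negative errors propagate unchanged, which is exactly the shape a hypercall path can return as-is:

```c
#include <assert.h>

/*
 * Illustrative only, not Xen code: p2m_finish_type_change() used to
 * return 1 when it changed an entry; after the patch it returns 0 for
 * success and <0 for error, so callers such as the
 * XEN_DMOP_map_mem_type_to_ioreq_server path can propagate the value
 * without mapping positive results back to success.
 */
static int old_to_new_rc(int old_rc)
{
    return old_rc > 0 ? 0 : old_rc;
}
```

The design point is that no caller consumed the positive count, so folding it away loses nothing while simplifying error propagation.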
Ping, any thoughts on this are appreciated.
Regards,
Alex
On 11.01.2019 17:37, Alexandru Stefan ISAILA wrote:
> This is done so hvmemul_linear_to_phys() can be called from
> hvmemul_map_linear_addr()
>
> Signed-off-by: Alexandru Isaila
> ---
> xen/arch/x86/h
On 19.07.2019 17:23, Razvan Cojocaru wrote:
> On 7/19/19 4:38 PM, Jan Beulich wrote:
>> On 19.07.2019 15:30, Razvan Cojocaru wrote:
>>> On 7/19/19 4:18 PM, Jan Beulich wrote:
>>>> On 19.07.2019 14:34, Alexandru Stefan ISAILA wrote:
>>>>> On 1
> @@ -629,6 +697,14 @@ static void *hvmemul_map_linear_addr(
>
> ASSERT(p2mt == p2m_ram_logdirty ||
> !p2m_is_readonly(p2mt));
> }
> +
> +if ( curr->arch.vm_event &&
> +curr->arch.vm_event->send_event &&
>>
On 30.07.2019 16:27, Jan Beulich wrote:
> On 30.07.2019 14:21, Alexandru Stefan ISAILA wrote:
>>
>>>>>>> @@ -629,6 +697,14 @@ static void *hvmemul_map_linear_addr(
>>>>>>>
>>>>>>>
On 30.07.2019 17:54, Jan Beulich wrote:
> On 30.07.2019 16:12, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 30.07.2019 16:27, Jan Beulich wrote:
>>> On 30.07.2019 14:21, Alexandru Stefan ISAILA wrote:
>>>>
>>>>>>
On 13.09.2018 13:12, Jan Beulich wrote:
> The function does two translations in one go for a single guest access.
> Any failure of the first translation step (guest linear -> guest
> physical), resulting in #PF, ought to take precedence over any failure
> of the second step (guest physical -> hos
On 30.07.2019 16:44, Paul Durrant wrote:
> Now that there is a per-domain IOMMU enable flag, which should be enabled if
> any device is going to be passed through, stop deferring page table
> construction until the assignment is done. Also don't tear down the tables
> again when the last device i
Hi George,
Did you get a chance to look at this clean-up?
Thanks,
Alex
On 16.07.2019 15:01, Alexandru Stefan ISAILA wrote:
> At this moment IOMMU pt sharing is disabled by commit [1].
>
> This patch aims to clear the IOMMU hap share support as it will not be
> used in the future. B
This patch adds access control for NPT mode.
The access rights are stored in bits 56:53 of the NPT p2m table entries.
The bits are free after clearing the IOMMU flags [1].
Modify p2m_type_to_flags() to accept and interpret an access value,
parallel to the ept code.
Add a set_default_access() method to the p2m-p
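The bit layout described above can be sketched as a pair of pack/extract helpers. The mask and shift names here are hypothetical, not Xen's actual macros; only the "4-bit access value in PTE bits 56:53" layout comes from the patch description:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical names: a 4-bit access value kept in entry bits 56:53,
 * as the patch description states these bits are free after clearing
 * the IOMMU flags. */
#define NPT_ACCESS_SHIFT 53
#define NPT_ACCESS_MASK  (0xfULL << NPT_ACCESS_SHIFT)

/* Store an access value into an entry, preserving all other flags. */
static uint64_t npt_set_access(uint64_t entry, unsigned int access)
{
    entry &= ~NPT_ACCESS_MASK;
    entry |= ((uint64_t)access << NPT_ACCESS_SHIFT) & NPT_ACCESS_MASK;
    return entry;
}

/* Read the access value back out of an entry. */
static unsigned int npt_get_access(uint64_t entry)
{
    return (entry & NPT_ACCESS_MASK) >> NPT_ACCESS_SHIFT;
}
```

In the series itself this packing would live in p2m_type_to_flags(), parallel to how the EPT code carries the access value.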
On 28.08.2019 16:32, Roger Pau Monne wrote:
> This partially reverts commit
> 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> propagates changes to the domain physmap done by p2m_pt_set_entry into
> the iommu page tables. Without this logic changes to the guest physmap
> ar
tables is based on the p2m type and the mfn.
>
> Fixes: 854a49a7486a02 ('x86/mm: Clean IOMMU flags from p2m-pt code')
> Signed-off-by: Roger Pau Monné
> ---
> Cc: Alexandru Stefan ISAILA
> ---
> Changes since v1:
> - Remove the share-pt branch, the
>>
>> /* FPU sub-types which may be requested via ->get_fpu(). */
>> enum x86_emulate_fpu_type {
>> diff --git a/xen/include/asm-x86/hvm/emulate.h
>> b/xen/include/asm-x86/hvm/emulate.h
>> index 26a01e83a4..721e175b04 100644
>> --- a/xen/include/asm-x86/hvm/emulate.h
>> +++ b/xen/include/a
This is done so hvmemul_linear_to_phys() can be called from
hvmemul_map_linear_addr().
There is no functional change.
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/hvm/emulate.c | 181 ++---
1 file changed, 90 insertions(+), 91 deletions(-)
diff --git a/xen/a
Ping. Is this ok with you, George?
Regards,
Alex
On 17.01.2019 11:06, Alexandru Stefan ISAILA wrote:
> Changed the return value of 1 to 0 so now p2m_finish_type_change returns
> 0 for success or <0 for error.
> The “root” caller of p2m_finish_type
In the case of any errors, finish_type_change() passes values returned
from p2m->recalc() up the stack (with some exceptions in the case where
an error is expected); this eventually ends up being returned to the
XEN_DMOP_map_mem_type_to_ioreq_server hypercall.
However, on Intel processors (but no
On 27.03.2019 18:07, Jan Beulich wrote:
On 27.03.19 at 16:21, wrote:
>> @@ -621,7 +623,7 @@ bool_t ept_handle_misconfig(uint64_t gpa)
>>
>> p2m_unlock(p2m);
>>
>> -return spurious ? (rc >= 0) : (rc > 0);
>> +return spurious && !rc;
>> }
>
> I think you've gone too far
On a new altp2m view, p2m_set_suppress_ve() will fail with an invalid mfn
from p2m->get_entry() if p2m->set_entry() was not called before.
This patch solves the problem by getting the mfn from __get_gfn_type_access()
and then returning an error if the mfn is invalid.
Signed-off-by: Alexandr
Hi all,
I came across some code in p2m_set_altp2m_mem_access() that does not
seem right. On the invalid-mfn branch there is an attempt to set_entry(),
and if that is not successful the function does not return but calls
set_entry() again for PAGE_ORDER_4K, even though there was a check that
the page o
This patch moves common code from p2m_set_altp2m_mem_access() and
p2m_change_altp2m_gfn() into one function
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/mm/mem_access.c | 7 ++-
xen/arch/x86/mm/p2m.c| 12 ++--
xen/include/asm-x86/p2m.h| 11 +++
3 files change
This patch moves common code from p2m_set_altp2m_mem_access() and
p2m_change_altp2m_gfn() into one function
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/mm/mem_access.c | 30 +++--
xen/arch/x86/mm/p2m.c| 37 ++--
xen/include/asm
On a new altp2m view, p2m_set_suppress_ve() will fail with an invalid mfn
from p2m->get_entry() if p2m->set_entry() was not called before.
This patch solves the problem by getting the mfn from the hostp2m.
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/mm/p2m.c | 3 ++-
1 file changed, 2 ins
On 05.04.2019 18:04, Tamas K Lengyel wrote:
> On Fri, Apr 5, 2019 at 7:25 AM Alexandru Stefan ISAILA
> wrote:
>>
>> This patch moves common code from p2m_set_altp2m_mem_access() and
>> p2m_change_altp2m_gfn() into one function
>>
>> Signed-off-by: Alexandru
This patch moves common code from p2m_set_altp2m_mem_access() and
p2m_change_altp2m_gfn() into one function
Signed-off-by: Alexandru Isaila
---
xen/arch/x86/mm/mem_access.c | 2 +-
xen/include/asm-x86/p2m.h| 11 +++
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/xen/
This patch moves common code from p2m_set_altp2m_mem_access() and
p2m_change_altp2m_gfn() into one function
Signed-off-by: Alexandru Isaila
---
Changes since V2:
- Change var name from found_in_hostp2m to copied_from_hostp2m
- Move the type check from altp2m_get_gfn_type_access()
On 09.04.2019 16:48, Tamas K Lengyel wrote:
> On Tue, Apr 9, 2019 at 6:04 AM Alexandru Stefan ISAILA
> wrote:
>>
>> This patch moves common code from p2m_set_altp2m_mem_access() and
>> p2m_change_altp2m_gfn() into one function
>>
>> Signed-off-by: Alexandru I
On 09.04.2019 17:37, Tamas K Lengyel wrote:
> On Tue, Apr 9, 2019 at 8:03 AM Alexandru Stefan ISAILA
> wrote:
>>
>>
>>
>> On 09.04.2019 16:48, Tamas K Lengyel wrote:
>>> On Tue, Apr 9, 2019 at 6:04 AM Alexandru Stefan ISAILA
>>> wr
On 09.04.2019 18:26, Tamas K Lengyel wrote:
> On Tue, Apr 9, 2019 at 8:48 AM Alexandru Stefan ISAILA
> wrote:
>>
>>
>>
>> On 09.04.2019 17:37, Tamas K Lengyel wrote:
>>> On Tue, Apr 9, 2019 at 8:03 AM Alexandru Stefan ISAILA
>>> wrote:
>>
Roger/Paul are you ok with the latest changes? Can this go in?
Regards,
Alex
On 29.03.2019 14:50, Alexandru Stefan ISAILA wrote:
> In the case of any errors, finish_type_change() passes values returned
> from p2m->recalc() up the stack (with some exceptions in the case where
>
On 10.04.2019 17:18, George Dunlap wrote:
> On 4/9/19 1:03 PM, Alexandru Stefan ISAILA wrote:
>> This patch moves common code from p2m_set_altp2m_mem_access() and
>> p2m_change_altp2m_gfn() into one function
>>
>> Signed-off-by: Alexandru Isaila
>> ---
>&
Dunlap wrote:
> On 4/9/19 1:03 PM, Alexandru Stefan ISAILA wrote:
>> This patch moves common code from p2m_set_altp2m_mem_access() and
>> p2m_change_altp2m_gfn() into one function
>>
>> Signed-off-by: Alexandru Isaila
>
> This patch contains a lot of behaviora
On 11.04.2019 16:28, Tamas K Lengyel wrote:
> On Thu, Apr 11, 2019 at 6:50 AM George Dunlap
> wrote:
>>
>> On 4/11/19 1:17 PM, Alexandru Stefan ISAILA wrote:
>>>>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>>>>> index b9bbb8f48
The code for getting the entry and then populating was repeated in
p2m_change_altp2m_gfn() and in p2m_set_altp2m_mem_access().
The code is now in one place with a bool param that lets the caller choose
if it populates after get_entry().
If remapping is being done then both the old and new gfn's s
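The factored-out helper described above (look the entry up, optionally populate a missing one from the host p2m) can be sketched with toy types. Everything here, `toy_p2m`, `toy_get_gfn()`, the single-slot "table", is hypothetical scaffolding, not Xen's real p2m interface:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define INVALID_MFN ((uint64_t)~0ULL)

/* Toy p2m with a single-slot "table"; purely illustrative. */
struct toy_p2m { uint64_t mfn; };

static uint64_t toy_get_entry(const struct toy_p2m *p2m) { return p2m->mfn; }
static void toy_set_entry(struct toy_p2m *p2m, uint64_t mfn) { p2m->mfn = mfn; }

/*
 * Sketch of the common helper: get the entry from the altp2m and, when
 * the caller asks for it via the bool parameter, populate a missing
 * entry from the host p2m before returning.
 */
static uint64_t toy_get_gfn(struct toy_p2m *ap2m, const struct toy_p2m *hostp2m,
                            bool populate)
{
    uint64_t mfn = toy_get_entry(ap2m);

    if ( mfn == INVALID_MFN && populate )
    {
        /* Copy the entry from the host p2m into the altp2m view. */
        toy_set_entry(ap2m, toy_get_entry(hostp2m));
        mfn = toy_get_entry(ap2m);
    }

    return mfn;
}
```

The bool parameter is what lets both original call sites (mem-access setting and gfn remapping) share one lookup path while keeping their differing populate behavior.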
On 15.04.2019 18:37, Jan Beulich wrote:
>>>> Alexandru Stefan ISAILA 04/15/19 11:23 AM >>>
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -514,6 +514,23 @@ static inline unsigned long mfn_to_gfn(struct domain
>>
On 16.04.2019 18:07, George Dunlap wrote:
> On 4/16/19 3:19 PM, Tamas K Lengyel wrote:
>> On Tue, Apr 16, 2019 at 8:02 AM George Dunlap
>> wrote:
>>>
>>> On 4/16/19 2:44 PM, Tamas K Lengyel wrote:
>>>> On Tue, Apr 16, 2019 at 2:45 AM Alexandru St
9 7:22 PM, Tamas K Lengyel wrote:
>>>>> On Wed, Apr 17, 2019 at 1:15 AM Alexandru Stefan ISAILA
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 16.04.2019 18:07, George Dunlap wrote:
>>>>
Ping!
Hi George,
How do we proceed with the function naming?
Regards,
Alex
On 19.04.2019 11:32, Alexandru Stefan ISAILA wrote:
>
>
> On 18.04.2019 21:42, Tamas K Lengyel wrote:
>> On Thu, Apr 18, 2019 at 11:02 AM George Dunlap
>> wrote:
>>>
>>> On
At this moment change_type_range() prints a warning in case end >
host_max_pfn. While this is unlikely to happen, the function should
return an error and propagate it to its caller, hap_track_dirty_vram().
This patch makes change_type_range() return -EINVAL on error or 0 if all is ok.
Signed-off-by: Alexandr
On 12.11.2019 14:02, Jan Beulich wrote:
> On 06.11.2019 16:35, Alexandru Stefan ISAILA wrote:
>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -1345,13 +1345,14 @@ void setup_ept_dump(void)
>> register_keyhandler('D', ept_