On 06/04/2020 16:14, Marc Zyngier wrote:
Hi Julien,
Hi Marc,
Thanks for the heads up.
On 2020-04-06 14:16, Julien Grall wrote:
Hi,
The Xen community is currently reviewing a new implementation for reading
I{S,C}ACTIVER registers (see [1]).
The implementation is based
the task on
vCPU A be blocked for an arbitrarily long time?
Cheers,
[1]
https://lists.xenproject.org/archives/html/xen-devel/2020-03/msg01844.html
--
Julien Grall
f-by: Andre Przywara
Reported-by: Dave Martin
I have tested with combinations of GICv2/GICv3 and kvmtool/QEMU. I can
confirm the UBSAN warning is not present anymore. Feel free to add my Tested-by:
Tested-by: Julien Grall
Cheers,
--
Julien Grall
ssue, as ->mpidr
is just used for the debugfs output and the IROUTER MMIO register, which
does not exist in redistributors (dealing with SGIs and PPIs).
Signed-off-by: Andre Przywara
Reported-by: Dave Martin
Tested-by: Julien Grall
Cheers,
---
Hi,
this came up here again, I think it fell
Hi Sebastian,
On 19/08/2019 08:33, Sebastian Andrzej Siewior wrote:
On 2019-08-16 17:32:38 [+0100], Julien Grall wrote:
Hi Sebastian,
Hi Julien,
hrtimer_callback_running() will return true as the callback is
running somewhere else. This means hrtimer_try_to_cancel()
would return -1
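A minimal sketch (mine, not from the thread) of the semantics being discussed:
hrtimer_try_to_cancel() returns 1 if the timer was queued and removed, 0 if it
was not queued, and -1 if the callback is currently running and cannot be
stopped without waiting for it.

#include <linux/hrtimer.h>

static int cancel_or_note_running(struct hrtimer *hrt)
{
	int ret = hrtimer_try_to_cancel(hrt);

	if (ret == -1) {
		/*
		 * Callback is running on another CPU. hrtimer_cancel() would
		 * wait for it to finish, which is exactly the problematic
		 * case on RT discussed in this thread.
		 */
	}
	return ret;
}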
Hi Sebastian,
On 16/08/2019 16:23, Sebastian Andrzej Siewior wrote:
> On 2019-08-16 16:18:20 [+0100], Julien Grall wrote:
>> Sadly, I managed to hit the same BUG_ON() today with this patch
>> applied on top of v5.2-rt1-rebase. :/ Although, it is more difficult
>> t
Hi all,
On 13/08/2019 17:24, Marc Zyngier wrote:
> On Tue, 13 Aug 2019 16:44:21 +0100,
> Julien Grall wrote:
>>
>> Hi Sebastian,
>>
>> On 8/13/19 1:58 PM, bige...@linutronix.de wrote:
>>> On 2019-07-27 14:37:11 [+0100], Julien Grall wrote:
>>
Hi Sebastian,
On 8/13/19 1:58 PM, bige...@linutronix.de wrote:
On 2019-07-27 14:37:11 [+0100], Julien Grall wrote:
8<
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -80,7 +80,7 @@ static inline bool userspace_irqchip(str
static void soft_timer_start(str
Hi,
On 7/27/19 12:13 PM, Marc Zyngier wrote:
On Fri, 26 Jul 2019 23:58:38 +0100,
Thomas Gleixner wrote:
On Wed, 24 Jul 2019, Marc Zyngier wrote:
On 23/07/2019 18:58, Julien Grall wrote:
It really feels like a change in hrtimer_cancel semantics. From what I
understand, this is used to avoid
-by: Julien Grall
Looking at __kvm_flush_vm_context, it might be possible to
reduce the overhead further by removing the I-Cache flush for
caches other than VIPT. This has been left aside for now.
Changes in v3:
- Free resource if initialization failed
- s
headers.
Signed-off-by: Julien Grall
Cc: Russell King
---
I hit a warning when compiling the ASID code:
linux/arch/arm/kvm/../../arm64/lib/asid.c:17: warning: "ASID_MASK" redefined
#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0))
In file included from linux/
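Illustrative only (not part of the series): the warning means the generic
allocator's parameterised ASID_MASK(info) collides with an existing arm
definition pulled in by the headers. One sketch of a workaround is simply to
give the library macro a distinct name, e.g.:

#define LIB_ASID_MASK(info)	(~GENMASK((info)->bits - 1, 0))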
-off-by: Julien Grall
---
This code will be used in the virt code for allocating VMID. I am not
entirely sure where to place it. Lib could potentially be a good place but I
am not entirely convinced the algo as it is could be used by other
architectures.
Looking at x86, it seems
Some users of the ASID allocator (e.g. VMID) may need to free resources
if the initialization fails. So introduce a function that allows freeing
any memory allocated by the ASID allocator.
Signed-off-by: Julien Grall
---
Changes in v3:
- Patch added
---
arch/arm64/include/asm
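A minimal sketch of the helper described above; the name follows the
description and the exact prototype is an assumption:

void asid_allocator_free(struct asid_info *info)
{
	kfree(info->map);
}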
the function to be called at every
context switch, so we want the function to be more efficient.
A new capability is introduced to tell whether 16-bit VMID is
available.
Signed-off-by: Julien Grall
---
Changes in v3:
- Patch added
---
arch/arm64/include/asm/cpucaps.h | 3 ++-
arch/arm64
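Sketch only (the capability name below is an assumption, not necessarily the
one added by the patch): since the check runs on every context switch, a
constant-folded capability check keeps it cheaper than re-reading the
sanitised ID register each time.

static inline unsigned int vmid_bits(void)
{
	return cpus_have_const_cap(ARM64_HAS_16BIT_VMID) ? 16 : 8;
}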
context.
This is stored in terms of a shift amount to avoid division in the code.
This means the number of ASIDs allocated per context should be a power of
two.
At the same time, rename NUM_USERS_ASIDS to NUM_CTXT_ASIDS to make the
name more generic.
Signed-off-by: Julien Grall
---
arch/arm64/mm
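Illustrative macros, assuming a ctxt_shift field as described above: shifting
by the stored amount replaces a divide/multiply by the per-context ASID count.

#define NUM_CTXT_ASIDS(info)	(1UL << (info)->ctxt_shift)
#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))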
Move out the common initialization of the ASID allocator into a separate
function.
Signed-off-by: Julien Grall
---
Changes in v3:
- Allow bisection (asid_allocator_init() returns 0 on success, not an
error!).
---
arch/arm64/mm/context.c | 43
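A minimal sketch of such a common init helper, returning 0 on success as the
changelog notes; field names and the exact shape are assumptions:

static int asid_allocator_init(struct asid_info *info,
			       u32 bits, unsigned int asid_per_ctxt)
{
	info->bits = bits;
	info->ctxt_shift = ilog2(asid_per_ctxt);
	/* Start at the first generation; generation 0 stays invalid. */
	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
			    sizeof(*info->map), GFP_KERNEL);
	return info->map ? 0 : -ENOMEM;
}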
Flushing the local context will vary depending on the actual user of the ASID
allocator. Introduce a new callback to flush the local context and move
the call to flush the local TLB into it.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 16 +---
1 file changed, 13 insertions
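Sketch of the split described above (names assumed): the generic code invokes
a per-user hook instead of calling the TLB flush directly, so the MM user can
register local_flush_tlb_all() and a future VMID user its own invalidation.

static void flush_context_if_pending(struct asid_info *info, unsigned int cpu)
{
	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
		info->flush_cpu_ctxt_cb();
}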
in a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter
when the code is moved to a separate file later on, as 1) will reside
in the header as a static inline function.
Signed-off-by: Julien Grall
---
Will wants to avoid adding
The variable bits hold information for a given ASID allocator. So move
it to the asid_info structure.
Because most of the macros were relying on bits, they are now taking an
extra parameter that is a pointer to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c
At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator into a separate file, it
would be better to use a different name for external users.
This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.
Signed-off-by: Julien
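A sketch matching that description (the exact definitions are assumptions):

#define NUM_ASIDS(info)			(1UL << (info)->bits)
#define ASID_FIRST_VERSION(info)	NUM_ASIDS(info)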
The variables lock and tlb_flush_pending hold information for a given
ASID allocator. So move them to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/context.c b
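An illustrative shape of asid_info once the moves described in this series
are applied; field names follow the commit messages, the exact layout is an
assumption:

struct asid_info {
	atomic64_t		generation;
	unsigned long		*map;
	atomic64_t __percpu	*active;
	u64 __percpu		*reserved;
	u32			bits;
	unsigned int		ctxt_shift;	/* log2(ASIDs per context) */
	raw_spinlock_t		lock;
	cpumask_t		flush_pending;	/* drained by the flush callback */
	void			(*flush_cpu_ctxt_cb)(void);
};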
The variables active_asids and reserved_asids hold information for a
given ASID allocator. So move them to the structure asid_info.
At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c
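Illustrative wrappers, assuming the per-CPU arrays now live in asid_info:

#define active_asid(info, cpu)		(*per_cpu_ptr((info)->active, cpu))
#define reserved_asid(info, cpu)	(*per_cpu_ptr((info)->reserved, cpu))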
The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is only used to fetch the ASID.
To remove the dependency on MM, it is possible to just pass a pointer to
the current ASID.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 6 +++---
1
: Russell King
Julien Grall (15):
arm64/mm: Introduce asid_info structure and move
asid_generation/asid_map to it
arm64/mm: Move active_asids and reserved_asids to asid_info
arm64/mm: Move bits to asid_info
arm64/mm: Move the variable lock and tlb_flush_pending to asid_info
arm64/mm
renaming afterwards, a local variable 'info' has been
created and is a pointer to the ASID allocator structure.
Signed-off-by: Julien Grall
---
Changes in v2:
- Turn asid_info into a static variable
---
arch/arm64/mm/context.c | 46 ++
1 file
to dereference an invalid value. I need to
investigate how this can happen.
Looking at the other RT tree, I think 5.0 RT now has the same problem.
Cheers,
--
Julien Grall
On 03/07/2019 18:36, James Morse wrote:
Hi Julien,
Hi James,
On 20/06/2019 14:06, Julien Grall wrote:
At the moment, the VMID algorithm will send an SGI to all the CPUs to
force an exit and then broadcast a full TLB flush and I-Cache
invalidation.
This patch re-uses the new ASID allocator
On 04/07/2019 15:56, James Morse wrote:
Hi Julien,
Hi James,
Thank you for the review.
On 20/06/2019 14:06, Julien Grall wrote:
We will want to re-use the ASID allocator in a separate context (e.g.
allocating VMID). So move the code to a new file.
The function asid_check_context has been
introduces a new callback
that will be called when updating the context.
Signed-off-by: Julien Grall
---
arch/arm64/include/asm/lib_asid.h | 12
arch/arm64/lib/asid.c | 10 --
arch/arm64/mm/context.c | 11 ---
3 files changed, 24 insertions(+), 9
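A minimal sketch of that callback (names are assumptions based on the
description): once the allocator has settled on an ASID, it invokes a per-user
hook so the MM code and a future VMID user can each do their own switch work.

static void check_and_switch_context_sketch(struct asid_info *info,
					    atomic64_t *pasid, void *ctxt)
{
	unsigned int cpu = smp_processor_id();

	asid_check_context(info, pasid, cpu);	/* allocate/refresh the ASID */
	if (info->update_ctxt_cb)
		info->update_ctxt_cb(ctxt);	/* user-specific update work */
}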
headers.
Signed-off-by: Julien Grall
Cc: Russell King
---
I hit a warning when compiling the ASID code:
linux/arch/arm/kvm/../../arm64/lib/asid.c:17: warning: "ASID_MASK" redefined
#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0))
In file included from linux/
Flushing the local context will vary depending on the actual user of the ASID
allocator. Introduce a new callback to flush the local context and move
the call to flush the local TLB into it.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 16 +---
1 file changed, 13 insertions
Move out the common initialization of the ASID allocator into a separate
function.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 43 +++
1 file changed, 31 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm
-off-by: Julien Grall
---
This code will be used in the virt code for allocating VMID. I am not
entirely sure where to place it. Lib could potentially be a good place but I
am not entirely convinced the algo as it is could be used by other
architectures.
Looking at x86, it seems
.
The performance differences between the current algo and the new one are:
- 2.5% fewer exits from the guest
- 22.4% more flushes, although they are now local rather than
broadcast
- 0.11% faster (just for the record)
Signed-off-by: Julien Grall
Looking at the __kvm_flush_vm_context
The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is only used to fetch the ASID.
To remove the dependency on MM, it is possible to just pass a pointer to
the current ASID.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 6 +++---
1
The variables active_asids and reserved_asids hold information for a
given ASID allocator. So move them to the structure asid_info.
At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c
in a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter
when the code is moved to a separate file later on, as 1) will reside
in the header as a static inline function.
Signed-off-by: Julien Grall
---
Will wants to avoid adding
The variable bits hold information for a given ASID allocator. So move
it to the asid_info structure.
Because most of the macros were relying on bits, they are now taking an
extra parameter that is a pointer to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c
At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator into a separate file, it
would be better to use a different name for external users.
This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.
Signed-off-by: Julien
The variables lock and tlb_flush_pending hold information for a given
ASID allocator. So move them to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/context.c b
context.
This is stored in terms of a shift amount to avoid division in the code.
This means the number of ASIDs allocated per context should be a power of
two.
At the same time, rename NUM_USERS_ASIDS to NUM_CTXT_ASIDS to make the
name more generic.
Signed-off-by: Julien Grall
---
arch/arm64/mm
renaming afterwards, a local variable 'info' has been
created and is a pointer to the ASID allocator structure.
Signed-off-by: Julien Grall
---
Changes in v2:
- Turn asid_info into a static variable
---
arch/arm64/mm/context.c | 46 ++
1 file
=people/julieng/linux-arm.git;a=shortlog;h=refs/heads/vmid-rework/rfc-v2
Best regards,
Cc: Russell King
Julien Grall (14):
arm64/mm: Introduce asid_info structure and move
asid_generation/asid_map to it
arm64/mm: Move active_asids and reserved_asids to asid_info
arm64/mm: Move bits
Hi Guo,
On 19/06/2019 12:51, Guo Ren wrote:
On Wed, Jun 19, 2019 at 4:54 PM Julien Grall wrote:
On 6/19/19 9:07 AM, Guo Ren wrote:
Hi Julien,
Hi,
You forgot CCing C-SKY folks :P
I wasn't aware you could be interested :).
Moving the arm ASID allocator code into a generic one is a good
-csky/1560930553-26502-1-git-send-email-guo...@kernel.org/
If you plan to seperate it into generic one, I could co-work with you.
Did the ASID allocator work out of the box on C-SKY? If so, I can easily
move the code to a generic place (maybe lib/asid.c).
Cheers,
--
Julien Grall
valid.
Signed-off-by: Julien Grall
---
This code will be used in the virt code for allocating VMID. I am not
entirely sure where to place it. Lib could potentially be a good place but I
am not entirely convinced the algo as it is could be used by other
architectures.
Looking at x86, it seems
ll take a spinlock.
The spinlock is from the waitqueue, so using a raw_spin_lock cannot
even be considered.
Do you have any input on how this could be solved?
Cheers,
--
Julien Grall
Hi Catalin,
On 6/3/19 10:21 PM, Catalin Marinas wrote:
On Mon, Jun 03, 2019 at 05:25:34PM +0100, Catalin Marinas wrote:
On Tue, May 21, 2019 at 06:21:39PM +0100, Julien Grall wrote:
Since a softirq is supposed to check may_use_simd() anyway before
attempting to use FPSIMD/SVE
On ThunderX 2:
* hackbench 1000 process 1000 (20 times)
* 3.4% quicker
Signed-off-by: Julien Grall
Reviewed-by: Dave Martin
---
Changes in v5:
- Update commit message
- Add Dave's reviewed-by
Changes in v4:
- Clarify the comment on top
:
* hackbench 1000 process 1000 (20 times)
* 3.4% quicker
Note that while the benchmark has been done on 5.1-rc4, the patch series is
based on 5.2-rc1.
Cheers,
Julien Grall (3):
arm64/fpsimd: Remove the prototype for sve_flush_cpu_state()
arch/arm64: fpsimd: Introduce
-off-by: Julien Grall
Reviewed-by: Dave Martin
---
kernel_neon_begin() does not use fpsimd_save_and_flush_cpu_state()
because the next patch will modify the function to also grab the
FPSIMD/SVE context.
Changes in v4:
- Remove newline before the new prototype
- Add
The function sve_flush_cpu_state() has been removed in commit 21cdd7fd76e3
("KVM: arm64: Remove eager host SVE state saving").
So remove the associated prototype in asm/fpsimd.h.
Signed-off-by: Julien Grall
Reviewed-by: Dave Martin
---
Changes in v3:
- Add Dave'
On 3/21/19 5:03 PM, Suzuki K Poulose wrote:
Hi Julien,
Hi Suzuki,
On 21/03/2019 16:36, Julien Grall wrote:
In an attempt to make the ASID allocator generic, create a new structure
asid_info to store all the information necessary for the allocator.
For now, move the variables
Flushing the local context will vary depending on the actual user of the ASID
allocator. Introduce a new callback to flush the local context and move
the call to flush the local TLB into it.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 16 +---
1 file changed, 13 insertions
in a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter
when the code is moved to a separate file later on, as 1) will reside
in the header as a static inline function.
Signed-off-by: Julien Grall
---
Will wants to avoid adding
The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is only used to fetch the ASID.
To remove the dependency on MM, it is possible to just pass a pointer to
the current ASID.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 6 +++---
1
The variable bits hold information for a given ASID allocator. So move
it to the asid_info structure.
Because most of the macros were relying on bits, they are now taking an
extra parameter that is a pointer to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c
At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator into a separate file, it
would be better to use a different name for external users.
This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.
Signed-off-by: Julien
Move out the common initialization of the ASID allocator into a separate
function.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 43 +++
1 file changed, 31 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm
.
The performance differences between the current algo and the new one are:
- 2.5% fewer exits from the guest
- 22.4% more flushes, although they are now local rather than
broadcast
- 0.11% faster (just for the record)
Signed-off-by: Julien Grall
Looking at the __kvm_flush_vm_context
The variables active_asids and reserved_asids hold information for a
given ASID allocator. So move them to the structure asid_info.
At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c
introduces a new callback
that will be called when updating the context.
Signed-off-by: Julien Grall
---
arch/arm64/include/asm/asid.h | 12
arch/arm64/lib/asid.c | 10 --
arch/arm64/mm/context.c | 11 ---
3 files changed, 24 insertions(+), 9 deletions
The variables lock and tlb_flush_pending hold information for a given
ASID allocator. So move them to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/context.c b
A follow-up patch will replace the KVM VMID allocator with the arm64 ASID
allocator. It is not yet clear how the code can be shared between arm
and arm64, so this is a verbatim copy of arch/arm64/lib/asid.c.
Signed-off-by: Julien Grall
---
arch/arm/include/asm/kvm_asid.h | 81
-off-by: Julien Grall
---
This code will be used in the virt code for allocating VMID. I am not
entirely sure where to place it. Lib could potentially be a good place but I
am not entirely convinced the algo as it is could be used by other
architectures.
Looking at x86, it seems
context.
This is stored in terms of a shift amount to avoid division in the code.
This means the number of ASIDs allocated per context should be a power of
two.
At the same time, rename NUM_USERS_ASIDS to NUM_CTXT_ASIDS to make the
name more generic.
Signed-off-by: Julien Grall
---
arch/arm64/mm
been divided into multiple
patches to make the review easier.
A branch with the patch based on 5.1-rc1 can be found:
http://xenbits.xen.org/gitweb/?p=people/julieng/linux-arm.git;a=shortlog;h=refs/heads/vmid-rework/rfc
Cheers,
Julien Grall (14):
arm64/mm: Introduce asid_info structure and move
renaming afterwards, a local variable 'info' has been
created and is a pointer to the ASID allocator structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 46 ++
1 file changed, 26 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/mm
Hi,
On 04/03/2019 17:06, Marc Zyngier wrote:
On 04/03/2019 16:30, Julien Grall wrote:
Hi,
I noticed some issues with this patch when rebooting a guest after using perf.
[ 577.513447] BUG: sleeping function called from invalid context at
kernel/locking/mutex.c:908
[ 577.521926] in_atomic
set(vcpu);
+out:
+ if (loaded)
+ kvm_arch_vcpu_load(vcpu, smp_processor_id());
+ preempt_enable();
+ return ret;
}
void kvm_set_ipa_limit(void)
--
Julien Grall
Hi Dave,
On 2/27/19 1:50 PM, Dave Martin wrote:
On Wed, Feb 27, 2019 at 12:02:46PM +, Julien Grall wrote:
Hi Dave,
On 2/26/19 5:01 PM, Dave Martin wrote:
On Tue, Feb 26, 2019 at 04:32:30PM +, Julien Grall wrote:
On 18/02/2019 19:52, Dave Martin wrote:
We seem to already have code
Hi Dave,
On 2/26/19 5:01 PM, Dave Martin wrote:
On Tue, Feb 26, 2019 at 04:32:30PM +, Julien Grall wrote:
On 18/02/2019 19:52, Dave Martin wrote:
We seem to already have code for handling invariant registers as well as
reading ID register. I guess the only reason you can't use them
ld be more consistent if you use "vcpu" over "guest". After all
ZCR_EL2.LEN is per vCPU.
Cheers,
--
Julien Grall
. So the check should not be reachable.
Did I miss anything?
Cheers,
--
Julien Grall
Hi Dave,
On 26/02/2019 15:58, Dave Martin wrote:
On Tue, Feb 26, 2019 at 03:49:00PM +, Julien Grall wrote:
Hi Dave,
On 26/02/2019 12:07, Dave Martin wrote:
On Fri, Feb 22, 2019 at 03:26:51PM +, Julien Grall wrote:
Hi Dave,
On 18/02/2019 19:52, Dave Martin wrote:
The current FPSIMD
Hi Dave,
On 26/02/2019 12:07, Dave Martin wrote:
On Fri, Feb 22, 2019 at 03:26:51PM +, Julien Grall wrote:
Hi Dave,
On 18/02/2019 19:52, Dave Martin wrote:
The current FPSIMD/SVE context handling support for non-task (i.e.,
KVM vcpu) contexts does not take SVE into account. This means
Hi Dave,
On 26/02/2019 12:06, Dave Martin wrote:
On Thu, Feb 21, 2019 at 01:36:26PM +, Julien Grall wrote:
Hi Dave,
On 18/02/2019 19:52, Dave Martin wrote:
+ /*
+* Mismatches above sve_max_virtualisable_vl are fine, since
+* no guest is allowed to configure ZCR_EL2
On 26/02/2019 12:06, Dave Martin wrote:
On Thu, Feb 21, 2019 at 12:39:39PM +, Julien Grall wrote:
Hi Dave,
On 18/02/2019 19:52, Dave Martin wrote:
This patch updates fpsimd_flush_task_state() to mirror the new
semantics of fpsimd_flush_cpu_state() introduced by commit
d8ad71fa38a9
in all bits being made UNKNOWN by this function: thus,
this patch makes no functional change for currently defined
registers.
Future patches will make use of non-zero val.
Signed-off-by: Dave Martin
Reviewed-by: Julien Grall
Cheers,
---
arch/arm64/kvm/sys_regs.h | 11 +--
1 file
ill be set here as appropriate, and the appropriate maximum vector
length for the vcpu will be passed when binding.
Cheers,
--
Julien Grall
Hi Marc,
On 22/02/2019 09:18, Marc Zyngier wrote:
On Thu, 21 Feb 2019 11:02:56 +
Julien Grall wrote:
Hi Julien,
Hi Christoffer,
On 24/01/2019 14:00, Christoffer Dall wrote:
Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
simply forego the masking of the vmid value
a full understanding of the cpufeatures code, this
patch adds comments to make the functions' roles clearer.
No functional change.
Signed-off-by: Dave Martin
Reviewed-by: Julien Grall
Cheers,
--
Julien Grall
_vl) {
+ pr_warn("SVE: cpu%d: Unsupported vector length(s) present\n",
+ smp_processor_id());
Would it be worth printing the unsupported vector length?
Cheers,
--
Julien Grall
malous cases are reordered appropriately in order
NIT: Double-space before "Anomalous".
to make the code more consistent, although there should be no
functional difference since these cases are protected by
local_bh_disable() anyway.
Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
.
Signed-off-by: Dave Martin
Reviewed-by: Julien Grall
Cheers,
--
Julien Grall
(1 << kvm_get_vmid_bits()) - 1;
The arm64 version of kvm_get_vmid_bits does not look cheap. Indeed it requires
reading the sanitized value of SYS_ID_AA64MMFR1_EL1, which is implemented using
bsearch.
So wouldn't it be better to keep the kvm_vmid_bits variable for use in
update_vttbr(
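A sketch of that suggestion (mine, not from the series): read the sanitised
register once at init and cache the result, so the hot update_vttbr() path
avoids the bsearch-backed lookup.

static unsigned int kvm_vmid_bits __read_mostly;

void kvm_init_vmid_bits(void)
{
	kvm_vmid_bits = kvm_get_vmid_bits();	/* expensive read done once */
}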
Hi Julia,
On 01/02/2019 17:36, Julia Cartwright wrote:
On Fri, Feb 01, 2019 at 03:30:58PM +, Julien Grall wrote:
Hi Julien,
On 07/01/2019 15:06, Julien Thierry wrote:
vgic_irq->irq_lock must always be taken with interrupts disabled as
it is used in interrupt context.
I am a
(!list_is_last(&irq->ap_list,
@@ -921,11 +921,11 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 	spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
 	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
-		spin_lock(&irq->irq_lock);
+		raw_spin_lock(&irq->irq_lock);
 		pending = irq_is_pending(irq) && irq->enabled &&
 			  !irq->active &&
 			  irq->priority < vmcr.pmr;
-		spin_unlock(&irq->irq_lock);
+		raw_spin_unlock(&irq->irq_lock);
 		if (pending)
 			break;
@@ -963,11 +963,10 @@ bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int vintid)
 		return false;
 	irq = vgic_get_irq(vcpu->kvm, vcpu, vintid);
-	spin_lock_irqsave(&irq->irq_lock, flags);
+	raw_spin_lock_irqsave(&irq->irq_lock, flags);
 	map_is_active = irq->hw && irq->active;
-	spin_unlock_irqrestore(&irq->irq_lock, flags);
+	raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 	vgic_put_irq(vcpu->kvm, irq);
 	return map_is_active;
 }
-
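Context for the conversion above: with PREEMPT_RT, spinlock_t becomes a
sleeping lock, so a lock taken from hard interrupt context has to be a
raw_spinlock_t. Declaration-level sketch (illustrative, not the full patch):

struct vgic_irq {
	raw_spinlock_t		irq_lock;	/* was: spinlock_t irq_lock; */
	/* ... other fields unchanged ... */
};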
--
Julien Grall
r1p0, r2p0) could end-up with
+ corrupted TLBs by speculating an AT instruction during a guest
+ context switch.
+
+ If unsure, say Y.
Most of the code in the patch is not guarded by #ifdef ARM64_*. So are there
any benefits to adding a Kconfig option for this?
Cheers,
Hi Will,
On 04/07/18 16:52, Will Deacon wrote:
On Wed, Jul 04, 2018 at 04:00:11PM +0100, Julien Grall wrote:
On 04/07/18 15:09, Will Deacon wrote:
On Fri, Jun 29, 2018 at 12:15:42PM +0100, Suzuki K Poulose wrote:
Add an option to specify the physical address size used by this
VM.
Signed-off
o specify the position
of the RAM [1]. With that series in mind, I think the user would not
really need to specify the maximum physical shift. Instead we could
automatically find it.
Cheers,
[1]
http://archive.armlinux.org.uk/lurker/message/20180510.140428.1c295b5b.en.html
Will
--
Julie
ine option,
let's enforce it by calling into the firmware again to disable it.
Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
---
arch/arm64/include/asm/cpufeature.h | 6 ++
arch/arm64/kernel/cpu_errata.c | 8
mitigation.
Think of it as a poor man's static key...
Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
---
arch/arm64/kernel/cpu_errata.c | 14 ++
arch/arm64/kernel/entry.S | 3 +++
2 files
<marc.zyng...@arm.com>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
---
arch/arm64/include/asm/cpufeature.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h
b/arch/arm64/include/asm/cpufeature.h
index 9bc548e22784..
permanently
on or off instead of switching it on exception entry/exit.
In any case, default to the mitigation being enabled.
Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
---
Documentation/admin-guide/kernel-paramete
29
+#define ARM64_SSBD 30
NIT: Could you indent 30 the same way as the other number?
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
--
Julien Grall
Hi Marc,
On 05/22/2018 04:06 PM, Marc Zyngier wrote:
In a heterogeneous system, we can end up with both affected and
unaffected CPUs. Let's check their status before calling into the
firmware.
Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Reviewed-by: Julien Grall <julien.gr..
NIT: Shouldn't you use /* ... */ for multi-line comments?
Regardless that:
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
--
Julien Grall
)
+{
+ unsigned max_ipa;
+
+ max_ipa = ioctl(kvm->sys_fd, KVM_ARM_GET_MAX_VM_PHYS_SHIFT);
+ if (max_ipa < 0)
Another issue spotted while doing some testing. This will always be
false because max_ipa is unsigned.
I think we want to turn max_ipa into a signed type.
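A diff-style sketch of that fix (illustrative; exact error handling is up to
kvmtool):

-	unsigned max_ipa;
+	int max_ipa;

 	max_ipa = ioctl(kvm->sys_fd, KVM_ARM_GET_MAX_VM_PHYS_SHIFT);
 	if (max_ipa < 0)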
Cheers,
--
Julien
Hi Suzuki,
On 27/04/18 16:58, Suzuki K Poulose wrote:
On 27/04/18 16:22, Suzuki K Poulose wrote:
On 26/04/18 14:35, Julien Grall wrote:
Hi Suzuki,
On 27/03/18 14:15, Suzuki K Poulose wrote:
Right now the stage2 page table for a VM is hard coded, assuming
an IPA of 40bits. As we are about