Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
On Thursday 15 Apr 2021 at 16:32:31 (+0100), Lukasz Luba wrote:
> Are you sure that the 'policy' can be accessed from compute_energy()?
> It can be from the schedutil freq switch path, but I'm not sure about our
> feec()..

Right, I was just looking at cpufreq_cpu_get() and we'll have a locking
issue in the wake-up path :/ So maybe making feec() aware of policy caps
is for later ...
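
For reference, a rough sketch of what cpufreq_cpu_get() does (paraphrased
from drivers/cpufreq/cpufreq.c, not the exact upstream code) shows why
calling it from the wake-up hot path is unattractive: it takes
cpufreq_driver_lock and a kobject reference on the policy.

/*
 * Rough paraphrase of cpufreq_cpu_get() for illustration only -- see
 * drivers/cpufreq/cpufreq.c for the real thing.
 */
struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
{
	struct cpufreq_policy *policy = NULL;
	unsigned long flags;

	/* A lock plus a reference-count bump: fine from slow paths, not
	 * something we want in feec()/compute_energy(). */
	read_lock_irqsave(&cpufreq_driver_lock, flags);
	policy = cpufreq_cpu_get_raw(cpu);
	if (policy)
		kobject_get(&policy->kobj);
	read_unlock_irqrestore(&cpufreq_driver_lock, flags);

	return policy;
}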

> For me this cpufreq_driver_resolve_freq sounds a bit outside the subject of
> this patch.

Not sure I agree -- if we're going to index the EM table from schedutil
it should be integrated nicely if possible.

Thanks


Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
On Thursday 15 Apr 2021 at 16:14:46 (+0100), Vincent Donnefort wrote:
> On Thu, Apr 15, 2021 at 02:59:54PM +0000, Quentin Perret wrote:
> > On Thursday 15 Apr 2021 at 15:34:53 (+0100), Vincent Donnefort wrote:
> > > On Thu, Apr 15, 2021 at 01:16:35PM +0000, Quentin Perret wrote:
> > > > On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > > > > --- a/kernel/sched/cpufreq_schedutil.c
> > > > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > > > @@ -10,6 +10,7 @@
> > > > >  
> > > > >  #include "sched.h"
> > > > >  
> > > > > +#include <linux/energy_model.h>
> > > > >  #include <linux/sched/cpufreq.h>
> > > > >  #include <trace/events/power.h>
> > > > >  
> > > > > @@ -164,6 +165,9 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> > > > >  
> > > > >   freq = map_util_freq(util, freq, max);
> > > > >  
> > > > > + /* Avoid inefficient performance states */
> > > > > + freq = em_pd_get_efficient_freq(em_cpu_get(policy->cpu), freq);
> > > > 
> > > > I remember this was discussed when Douglas sent his patches some time
> > > > ago, but I still find it sad we index the EM table here but still
> > > > re-index the cpufreq frequency table later :/
> > > > 
> > > > Yes in your case this lookup is very inexpensive, but still. EAS relies
> > > > on the EM's table matching cpufreq's accurately, so this second lookup
> > > > still feels rather unnecessary ...
> > > 
> > > To get only a single lookup, we would need to bring the inefficiency
> > > knowledge directly into the cpufreq framework. But that has its own
> > > limitations:
> > > 
> > >   The cpufreq driver can have its own resolve_freq() callback, which
> > >   means that not all drivers would benefit from the feature.
> > > 
> > >   The cpufreq_table can be ordered and accessed in several ways, which
> > >   brings many combinations that would need to be supported, ending up
> > >   with something much more intrusive. (We could, though, decide to limit
> > >   the feature to the low-to-high access that schedutil needs.)
> > > 
> > > As the EM needs schedutil to exist anyway, it seemed to be the right
> > > place for this code. It allows any cpufreq driver to benefit from the
> > > feature, simplifies a potential extension for use by devfreq devices
> > > and, as a bonus, speeds up the energy computation, allowing a more
> > > complex Energy Model.
> > 
> > I was thinking of something a bit simpler. cpufreq_driver_resolve_freq
> > appears to be used only from schedutil (why is it even then?), so we
> > could just pull it into cpufreq_schedutil.c and just plain skip the call
> > to cpufreq_frequency_table_target if the target freq has been indexed in
> > the EM table -- it should already be matching a real OPP.
> > 
> > Thoughts?
> > Quentin
> 
> Can try that for a V2. That means em_pd_get_efficient_freq() would have to
> know about policy clamping (but I don't think that's an issue)

Indeed, and I think we can even see this as an improvement as EAS will
now see policy clamps as well in compute_energy().

> and probably
> we still have to do the frequency resolution if the driver declared the
> resolve_freq callback?

Yep, looks like this is unavoidable.
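
As a strawman, the combination above could look something like the sketch
below (hypothetical helper name for the resolve_freq test, not the posted
patch): honour the policy clamps first, keep the driver's own
->resolve_freq() path when it exists, and otherwise trust the EM lookup to
return a real OPP so the frequency-table walk can be skipped.

/*
 * Hypothetical sketch only, not the posted patch. Assumes that
 * em_pd_get_efficient_freq() always returns a frequency matching a real
 * OPP, so cpufreq_frequency_table_target() can be skipped in that case.
 */
static unsigned int sugov_resolve_freq(struct sugov_policy *sg_policy,
				       unsigned int freq)
{
	struct cpufreq_policy *policy = sg_policy->policy;

	/* Make policy clamps visible to the EM lookup (and hence to EAS). */
	freq = clamp_val(freq, policy->min, policy->max);

	/* Drivers with their own resolution logic still need it. */
	if (cpufreq_driver_has_resolve_freq(policy))	/* hypothetical helper */
		return cpufreq_driver_resolve_freq(policy, freq);

	/* The EM lookup already snapped @freq to a known performance state. */
	return em_pd_get_efficient_freq(em_cpu_get(policy->cpu), freq);
}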

Thanks,
Quentin


Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
On Thursday 15 Apr 2021 at 14:59:54 (+), Quentin Perret wrote:
> On Thursday 15 Apr 2021 at 15:34:53 (+0100), Vincent Donnefort wrote:
> > On Thu, Apr 15, 2021 at 01:16:35PM +, Quentin Perret wrote:
> > > On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > > > --- a/kernel/sched/cpufreq_schedutil.c
> > > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > > @@ -10,6 +10,7 @@
> > > >  
> > > >  #include "sched.h"
> > > >  
> > > > +#include <linux/energy_model.h>
> > > >  #include <linux/sched/cpufreq.h>
> > > >  #include <trace/events/power.h>
> > > >  
> > > > @@ -164,6 +165,9 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> > > >  
> > > > freq = map_util_freq(util, freq, max);
> > > >  
> > > > +   /* Avoid inefficient performance states */
> > > > +   freq = em_pd_get_efficient_freq(em_cpu_get(policy->cpu), freq);
> > > 
> > > I remember this was discussed when Douglas sent his patches some time
> > > ago, but I still find it sad we index the EM table here but still
> > > re-index the cpufreq frequency table later :/
> > > 
> > > Yes in your case this lookup is very inexpensive, but still. EAS relies
> > > on the EM's table matching cpufreq's accurately, so this second lookup
> > > still feels rather unnecessary ...
> > 
> > To get only a single lookup, we would need to bring the inefficiency
> > knowledge directly into the cpufreq framework. But that has its own
> > limitations:
> > 
> >   The cpufreq driver can have its own resolve_freq() callback, which
> >   means that not all drivers would benefit from the feature.
> > 
> >   The cpufreq_table can be ordered and accessed in several ways, which
> >   brings many combinations that would need to be supported, ending up
> >   with something much more intrusive. (We could, though, decide to limit
> >   the feature to the low-to-high access that schedutil needs.)
> > 
> > As the EM needs schedutil to exist anyway, it seemed to be the right
> > place for this code. It allows any cpufreq driver to benefit from the
> > feature, simplifies a potential extension for use by devfreq devices
> > and, as a bonus, speeds up the energy computation, allowing a more
> > complex Energy Model.
> 
> I was thinking of something a bit simpler. cpufreq_driver_resolve_freq
> appears to be used only from schedutil (why is it even then?), so we

why is it even *exported* then ...

> could just pull it into cpufreq_schedutil.c and just plain skip the call
> to cpufreq_frequency_table_target if the target freq has been indexed in
> the EM table -- it should already be matching a real OPP.
> 
> Thoughts?
> Quentin


Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
On Thursday 15 Apr 2021 at 15:12:08 (+0100), Vincent Donnefort wrote:
> On Thu, Apr 15, 2021 at 01:12:05PM +0000, Quentin Perret wrote:
> > Hi Vincent,
> > 
> > On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > > Some SoCs, such as the sd855, have OPPs within the same performance domain
> > > whose cost is higher than that of others with a higher frequency. Even though
> > > those OPPs are interesting from a cooling perspective, it makes no sense
> > > to use them when the device can run at full capacity. Those OPPs handicap
> > > the performance domain when choosing the most energy-efficient CPU, and
> > > waste energy. They are inefficient.
> > > 
> > > Hence, add support for such OPPs to the Energy Model, which creates for
> > > each OPP a performance state. The Energy Model can now be read using the
> > > regular table, which contains all performance states available, or using
> > > an efficient table, where inefficient performance states (and by
> > > extension, inefficient OPPs) have been removed.
> > > 
> > > Currently, the efficient table is used in two paths. Schedutil, and
> > > find_energy_efficient_cpu(). We have to modify both paths in the same
> > > patch so they stay synchronized. The thermal framework still relies on
> > > the original table and hence, DevFreq devices won't create the efficient
> > > table.
> > > 
> > > As used in the hot-path, the efficient table is a lookup table, generated
> > > dynamically when the perf domain is created. The complexity of searching
> > > a performance state is hence changed from O(n) to O(1). This also
> > > speeds-up em_cpu_energy() even if no inefficient OPPs have been found.
> > 
> > Interesting. Do you have measurements showing the benefits on wake-up
> > duration? I remember doing so by hacking the wake-up path to force tasks
> > into feec()/compute_energy() even when overutilized, and then running
> > hackbench. Maybe something like that would work for you?
> 
> I'll give a try and see if I get improved numbers.
> 
> > 
> > Just want to make sure we actually need all that complexity -- while
> > it's good to reduce the asymptotic complexity, we're looking at a rather
> > small problem (max 30 OPPs or so I expect?), so other effects may be
> > dominating. Simply skipping inefficient OPPs could be implemented in a
> > much simpler way I think.
> 
> I could indeed just skip the perf state if marked as inefficient. But the idea
> was to avoid bringing another for loop into this hot path.

Right, though it would just extend the existing loop a little bit, so
the overhead is unlikely to be noticeable.
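
For reference, that simpler alternative could look roughly like the sketch
below. The 'inefficient' flag on struct em_perf_state is hypothetical (the
posted patch builds a separate lookup table instead); this only illustrates
extending the existing low-to-high walk.

/*
 * Illustrative sketch only: skip inefficient states inside the existing
 * linear walk. The 'inefficient' field is hypothetical, not an upstream
 * member of struct em_perf_state.
 */
static unsigned long em_pd_get_efficient_freq(struct em_perf_domain *pd,
					      unsigned long freq)
{
	int i;

	/* Walk low to high, as schedutil does, and return the first
	 * efficient state able to satisfy the request. */
	for (i = 0; i < pd->nr_perf_states; i++) {
		struct em_perf_state *ps = &pd->table[i];

		if (ps->inefficient)		/* hypothetical flag */
			continue;
		if (ps->frequency >= freq)
			return ps->frequency;
	}

	/* No efficient state is high enough: return the highest one. */
	return pd->table[pd->nr_perf_states - 1].frequency;
}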

> Also, not covered by this patch but probably we could get rid of the EM
> complexity limit as the table resolution is way faster with this change.

Probably yeah. I was considering removing it since eb92692b2544
("sched/fair: Speed-up energy-aware wake-ups") but ended up keeping it
as it's entirely untested on large systems. But maybe we can reconsider.

Thanks,
Quentin


Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
On Thursday 15 Apr 2021 at 15:34:53 (+0100), Vincent Donnefort wrote:
> On Thu, Apr 15, 2021 at 01:16:35PM +0000, Quentin Perret wrote:
> > On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > > --- a/kernel/sched/cpufreq_schedutil.c
> > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > @@ -10,6 +10,7 @@
> > >  
> > >  #include "sched.h"
> > >  
> > > +#include <linux/energy_model.h>
> > >  #include <linux/sched/cpufreq.h>
> > >  #include <trace/events/power.h>
> > >  
> > > @@ -164,6 +165,9 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> > >  
> > >   freq = map_util_freq(util, freq, max);
> > >  
> > > + /* Avoid inefficient performance states */
> > > + freq = em_pd_get_efficient_freq(em_cpu_get(policy->cpu), freq);
> > 
> > I remember this was discussed when Douglas sent his patches some time
> > ago, but I still find it sad we index the EM table here but still
> > re-index the cpufreq frequency table later :/
> > 
> > Yes in your case this lookup is very inexpensive, but still. EAS relies
> > on the EM's table matching cpufreq's accurately, so this second lookup
> > still feels rather unnecessary ...
> 
> To get only a single lookup, we would need to bring the inefficiency
> knowledge directly into the cpufreq framework. But that has its own
> limitations:
> 
>   The cpufreq driver can have its own resolve_freq() callback, which
>   means that not all drivers would benefit from the feature.
> 
>   The cpufreq_table can be ordered and accessed in several ways, which
>   brings many combinations that would need to be supported, ending up
>   with something much more intrusive. (We could, though, decide to limit
>   the feature to the low-to-high access that schedutil needs.)
> 
> As the EM needs schedutil to exist anyway, it seemed to be the right
> place for this code. It allows any cpufreq driver to benefit from the
> feature, simplifies a potential extension for use by devfreq devices
> and, as a bonus, speeds up the energy computation, allowing a more
> complex Energy Model.

I was thinking of something a bit simpler. cpufreq_driver_resolve_freq
appears to be used only from schedutil (why is it even then?), so we
could just pull it into cpufreq_schedutil.c and just plain skip the call
to cpufreq_frequency_table_target if the target freq has been indexed in
the EM table -- it should already be matching a real OPP.

Thoughts?
Quentin


Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -10,6 +10,7 @@
>  
>  #include "sched.h"
>  
> +#include <linux/energy_model.h>
>  #include <linux/sched/cpufreq.h>
>  #include <trace/events/power.h>
>  
> @@ -164,6 +165,9 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  
>   freq = map_util_freq(util, freq, max);
>  
> + /* Avoid inefficient performance states */
> + freq = em_pd_get_efficient_freq(em_cpu_get(policy->cpu), freq);

I remember this was discussed when Douglas sent his patches some time
ago, but I still find it sad we index the EM table here but still
re-index the cpufreq frequency table later :/

Yes in your case this lookup is very inexpensive, but still. EAS relies
on the EM's table matching cpufreq's accurately, so this second lookup
still feels rather unnecessary ...

>   if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
>   return sg_policy->next_freq;
>  
> -- 
> 2.7.4
> 


Re: [PATCH] PM / EM: Inefficient OPPs detection

2021-04-15 Thread Quentin Perret
Hi Vincent,

On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> Some SoCs, such as the sd855, have OPPs within the same performance domain
> whose cost is higher than that of others with a higher frequency. Even though
> those OPPs are interesting from a cooling perspective, it makes no sense
> to use them when the device can run at full capacity. Those OPPs handicap
> the performance domain when choosing the most energy-efficient CPU, and
> waste energy. They are inefficient.
> 
> Hence, add support for such OPPs to the Energy Model, which creates for
> each OPP a performance state. The Energy Model can now be read using the
> regular table, which contains all performance states available, or using
> an efficient table, where inefficient performance states (and by
> extension, inefficient OPPs) have been removed.
> 
> Currently, the efficient table is used in two paths. Schedutil, and
> find_energy_efficient_cpu(). We have to modify both paths in the same
> patch so they stay synchronized. The thermal framework still relies on
> the original table and hence, DevFreq devices won't create the efficient
> table.
> 
> As used in the hot-path, the efficient table is a lookup table, generated
> dynamically when the perf domain is created. The complexity of searching
> a performance state is hence changed from O(n) to O(1). This also
> speeds-up em_cpu_energy() even if no inefficient OPPs have been found.

Interesting. Do you have measurements showing the benefits on wake-up
duration? I remember doing so by hacking the wake-up path to force tasks
into feec()/compute_energy() even when overutilized, and then running
hackbench. Maybe something like that would work for you?
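
For reference, the hack in question is roughly a one-line change along these
lines (a sketch against the fair.c of that era, not an exact diff): drop the
overutilized bail-out so every wake-up goes through feec()/compute_energy(),
then run hackbench and compare wake-up durations.

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	rcu_read_lock();
 	pd = rcu_dereference(rd->pd);
-	if (!pd || READ_ONCE(rd->overutilized))
+	if (!pd)	/* test hack: keep EAS active even when overutilized */
 		goto fail;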

Just want to make sure we actually need all that complexity -- while
it's good to reduce the asymptotic complexity, we're looking at a rather
small problem (max 30 OPPs or so I expect?), so other effects may be
dominating. Simply skipping inefficient OPPs could be implemented in a
much simpler way I think.

Thanks,
Quentin


Re: [PATCH -next] sched/topology: Make some symbols static

2021-04-13 Thread Quentin Perret
On Thursday 08 Apr 2021 at 21:12:17 (+0800), Peng Wu wrote:
> The sparse tool complains as follows:
> 
> kernel/sched/topology.c:211:1: warning:
>  symbol 'sched_energy_mutex' was not declared. Should it be static?
> kernel/sched/topology.c:212:6: warning:
>  symbol 'sched_energy_update' was not declared. Should it be static?
> 
> These symbols are not used outside of topology.c, so this
> commit marks them static.
> 
> Reported-by: Hulk Robot 
> Signed-off-by: Peng Wu 
> ---
>  kernel/sched/topology.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index d1aec244c027..25c3f88d43cd 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -208,8 +208,8 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
>  #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
>  DEFINE_STATIC_KEY_FALSE(sched_energy_present);
>  unsigned int sysctl_sched_energy_aware = 1;
> -DEFINE_MUTEX(sched_energy_mutex);
> -bool sched_energy_update;
> +static DEFINE_MUTEX(sched_energy_mutex);
> +static bool sched_energy_update;
>  
>  void rebuild_sched_domains_energy(void)
>  {
>

FWIW, that has been reported some time ago:

https://lore.kernel.org/lkml/1606218731-3999-1-git-send-email-zou_...@huawei.com/
https://lore.kernel.org/lkml/1606271447-74720-1-git-send-email-zou_...@huawei.com/

But otherwise, this looks OK to me.

Thanks,
Quentin


Re: [PATCH v4 1/2] KVM: arm64: Move CMOs from user_mem_abort to the fault handlers

2021-04-09 Thread Quentin Perret
Hi Yanan,

On Friday 09 Apr 2021 at 11:36:51 (+0800), Yanan Wang wrote:
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> +static void stage2_invalidate_icache(void *addr, u64 size)
> +{
> + if (icache_is_aliasing()) {
> + /* Flush any kind of VIPT icache */
> + __flush_icache_all();
> + } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> + /* PIPT or VPIPT at EL2 */
> + invalidate_icache_range((unsigned long)addr,
> + (unsigned long)addr + size);
> + }
> +}
> +

I would recommend to try and rebase this patch on kvmarm/next because
we've made a few changes in pgtable.c recently. It is now linked into
the EL2 NVHE code which means there are constraints on what can be used
from there -- you'll need a bit of extra work to make some of these
functions available to EL2.

Thanks,
Quentin


[PATCH] export: Make CRCs robust to symbol trimming

2021-04-08 Thread Quentin Perret
The CRC calculation done by genksyms is triggered when the parser hits
EXPORT_SYMBOL*() macros. At this point, genksyms recursively expands the
types, and uses that as the input for the CRC calculation. In the case
of forward-declared structs, the type expands to 'UNKNOWN'. Next, the
result of the expansion of each type is cached, and is re-used when/if
the same type is seen again for another exported symbol in the file.

Unfortunately, this can cause CRC 'stability' issues when a struct
definition becomes visible in the middle of a C file. For example, let's
assume code with the following pattern:

struct foo;

int bar(struct foo *arg)
{
/* Do work ... */
}
EXPORT_SYMBOL_GPL(bar);

/* This contains struct foo's definition */
#include "foo.h"

int baz(struct foo *arg)
{
/* Do more work ... */
}
EXPORT_SYMBOL_GPL(baz);

Here, baz's CRC will be computed using the expansion of struct foo that
was cached after bar's CRC calculation ('UNKNOWN' here). But if
EXPORT_SYMBOL_GPL(bar) is removed from the file (because of e.g. symbol
trimming using CONFIG_TRIM_UNUSED_KSYMS), struct foo will be expanded
late, during baz's CRC calculation, which now has visibility over the
full struct definition, hence resulting in a different CRC for baz.

This can cause annoying issues for distro kernels (such as the Android
Generic Kernel Image) which use CONFIG_UNUSED_KSYMS_WHITELIST. Indeed,
as per the above, adding a symbol to the whitelist can change the CRC of
symbols that are already kept exported. As such, modules built against a
kernel with a trimmed ABI may not load against the same kernel built
with an extended whitelist, even though they are still strictly binary
compatible. While rebuilding the modules would obviously solve the
issue, I believe this classifies as an odd genksyms corner case, and it
gets in the way of kernel updates in the GKI context.

To work around the issue, make sure to keep issuing the
__GENKSYMS_EXPORT_SYMBOL macros for all trimmed symbols, hence making
the genksyms parsing insensitive to symbol trimming.

Signed-off-by: Quentin Perret 
---
 include/linux/export.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/include/linux/export.h b/include/linux/export.h
index 6271a5d9c988..27d848712b90 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -140,7 +140,12 @@ struct kernel_symbol {
 #define ___cond_export_sym(sym, sec, ns, enabled)  \
__cond_export_sym_##enabled(sym, sec, ns)
 #define __cond_export_sym_1(sym, sec, ns) ___EXPORT_SYMBOL(sym, sec, ns)
+
+#ifdef __GENKSYMS__
+#define __cond_export_sym_0(sym, sec, ns) __GENKSYMS_EXPORT_SYMBOL(sym)
+#else
 #define __cond_export_sym_0(sym, sec, ns) /* nothing */
+#endif
 
 #else
 
-- 
2.31.0.208.g409f899ff0-goog



Re: [PATCH] cgroup: Relax restrictions on kernel threads moving out of root cpu cgroup

2021-04-06 Thread Quentin Perret
Hi Pavan,

On Tuesday 06 Apr 2021 at 16:29:13 (+0530), Pavankumar Kondeti wrote:
> In Android GKI, CONFIG_FAIR_GROUP_SCHED is enabled [1] to help prioritize
> important work. Given that the CPU shares of the root cgroup can't be changed,
> leaving tasks inside the root cgroup will give them a higher share
> compared to the other tasks inside important cgroups. This is mitigated
> by moving all tasks inside the root cgroup to a different cgroup after
> Android has booted. However, there are many kernel tasks stuck in the
> root cgroup after boot.
> 
> We see that all kworker threads are in the root cpu cgroup. This is because
> tasks with the PF_NO_SETAFFINITY flag set are forbidden from cgroup migration.
> This restriction is in place to avoid kworkers getting moved to a cpuset
> which conflicts with kworker affinity. Relax this restriction by explicitly
> checking if the task is moving out of a cpuset cgroup. This allows kworkers
> to be moved out of the root cpu cgroup.
> 
> We also see kthreadd_task and any kernel thread created after the Android boot
> stuck in the root cgroup. The current code prevents kthreadd_task from moving
> out of the root cgroup to avoid the possibility of creating new RT kernel
> threads inside a cgroup with no RT runtime allocated. Apply this restriction
> only when tasks are moving out of a cpu cgroup under CONFIG_RT_GROUP_SCHED.
> This allows all kernel threads to be moved out of the root cpu cgroup if the
> kernel does not enable RT group scheduling.

OK, so IIUC this only works with cgroup v1 -- the unified hierarchy in
v2 forces you to keep cpu and cpuset in 'sync'. But that should be fine,
so this looks like a nice improvement to me.

> [1] 
> https://android.googlesource.com/kernel/common/+/f08f049de11c15a4251cb1db08cf0bee20bd9b59
> 
> Signed-off-by: Pavankumar Kondeti 
> ---
>  kernel/cgroup/cgroup-internal.h |  3 ++-
>  kernel/cgroup/cgroup-v1.c   |  2 +-
>  kernel/cgroup/cgroup.c  | 24 +++-
>  3 files changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
> index bfbeabc..a96ed9a 100644
> --- a/kernel/cgroup/cgroup-internal.h
> +++ b/kernel/cgroup/cgroup-internal.h
> @@ -232,7 +232,8 @@ int cgroup_migrate(struct task_struct *leader, bool threadgroup,
>  int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
>  bool threadgroup);
>  struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
> -  bool *locked)
> +  bool *locked,
> +  struct cgroup *dst_cgrp)
>   __acquires(&cgroup_threadgroup_rwsem);
>  void cgroup_procs_write_finish(struct task_struct *task, bool locked)
>   __releases(&cgroup_threadgroup_rwsem);
> diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
> index a575178..d674a6c 100644
> --- a/kernel/cgroup/cgroup-v1.c
> +++ b/kernel/cgroup/cgroup-v1.c
> @@ -497,7 +497,7 @@ static ssize_t __cgroup1_procs_write(struct kernfs_open_file *of,
>   if (!cgrp)
>   return -ENODEV;
>  
> - task = cgroup_procs_write_start(buf, threadgroup, &locked);
> + task = cgroup_procs_write_start(buf, threadgroup, &locked, cgrp);
>   ret = PTR_ERR_OR_ZERO(task);
>   if (ret)
>   goto out_unlock;
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 9153b20..41864a8 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -2744,7 +2744,8 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
>  }
>  
>  struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
> -  bool *locked)
> +  bool *locked,
> +  struct cgroup *dst_cgrp)
>   __acquires(&cgroup_threadgroup_rwsem)
>  {
>   struct task_struct *tsk;
> @@ -2784,15 +2785,28 @@ struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
>   tsk = tsk->group_leader;
>  
>   /*
> +  * RT kthreads may be born in a cgroup with no rt_runtime allocated.
> +  * Just say no.
> +  */
> +#ifdef CONFIG_RT_GROUP_SCHED
> + if (tsk->no_cgroup_migration && (dst_cgrp->root->subsys_mask & (1U << cpu_cgrp_id))) {
> + tsk = ERR_PTR(-EINVAL);
> + goto out_unlock_threadgroup;
> + }
> +#endif
> +
> + /*
>* kthreads may acquire PF_NO_SETAFFINITY during initialization.
>* If userland migrates such a kthread to a non-root cgroup, it can
> -  * become trapped in a cpuset, or RT kthread may be born in a
> -  * cgroup with no rt_runtime allocated.  Just say no.
> +  * become trapped in a cpuset. Just say no.
>*/
> - if (tsk->no_cgroup_migration || (tsk->flags & PF_NO_SETAFFINITY)) {
> +#ifdef CONFIG_CPUSETS
> + if ((tsk->no_

Re: [PATCH] sched/fair: use signed long when compute energy delta in eas

2021-03-30 Thread Quentin Perret
Hi,

On Tuesday 30 Mar 2021 at 13:21:54 (+0800), Xuewen Yan wrote:
> From: Xuewen Yan 
> 
> Currently the energy delta is computed as follows:
> 
> base_energy_pd = compute_energy(p, -1, pd);            \
>     ---> Traverse all CPUs in pd                        \
>     ---> em_pd_energy()                                  \
>                                                           } search time
> search for the max_spare_cap_cpu                         /
>                                                          /
> cur_delta = compute_energy(p, max_spare_cap_cpu, pd);   /
>     ---> Traverse all CPUs in pd
>     ---> em_pd_energy()
> cur_delta -= base_energy_pd;
> 
> During the search time, or when computing cpu_util in compute_energy(),
> a task dequeue or a cpu_util change may occur. This can make
> cur_energy < base_energy_pd, so cur_delta would be negative. But
> cur_delta is an unsigned long, so in that case cur_delta will always be
> bigger than the best_delta of the last pd.
> 
> Change the vars to signed long.

Is that really helping though?

Yes, you will not overflow, but the decision is still 'wrong' if the util
values are not stable for the entire wake-up. I think folks on the Arm
side had patches to try and cache the util values upfront, and then use
them throughout feec() and compute_energy(), which I think would be a
better fix.
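
To make the wrap-around concrete, here is a minimal standalone illustration
(hypothetical numbers, plain userspace C, unrelated to the actual scheduler
code):

/* Standalone demo of the wrap-around described above (hypothetical values). */
#include <stdio.h>

int main(void)
{
	unsigned long base_energy_pd = 1000;	/* energy of the pd without the task */
	unsigned long cur_energy = 990;		/* util dropped during the search */

	unsigned long cur_delta = cur_energy - base_energy_pd;
	long signed_delta = (long)cur_energy - (long)base_energy_pd;

	/* The unsigned delta wraps to a huge value, so this pd can never
	 * beat the best_delta found for a previous pd. */
	printf("unsigned delta: %lu\n", cur_delta);
	printf("signed delta:   %ld\n", signed_delta);

	return 0;
}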

Dietmar, wdyt?

Thanks,
Quentin


Re: [PATCH v2 3/3] KVM: arm64: Drop the CPU_FTR_REG_HYP_COPY infrastructure

2021-03-23 Thread Quentin Perret
On Monday 22 Mar 2021 at 17:56:39 (+), Marc Zyngier wrote:
> Now that the read_ctr macro has been specialised for nVHE,
> the whole CPU_FTR_REG_HYP_COPY infrastructure looks completely
> overengineered.
> 
> Simplify it by populating the two u64 quantities (MMFR0 and 1)
> that the hypervisor needs.
> 
> Signed-off-by: Marc Zyngier 

Reviewed-by: Quentin Perret 

Thanks,
Quentin


Re: [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode

2021-03-23 Thread Quentin Perret
Hi Marc,

On Monday 22 Mar 2021 at 18:37:14 (+), Marc Zyngier wrote:
> Can't say I'm keen on the yucky bit, but here's an alternative (ha!)
> for you:
> 
> diff --git a/arch/arm64/include/asm/assembler.h 
> b/arch/arm64/include/asm/assembler.h
> index 1a4cee7eb3c9..7582c3bd2f05 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -278,6 +278,9 @@ alternative_else
>   ldr_l   \reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
>  alternative_endif
>  #else
> +alternative_if_not ARM64_KVM_PROTECTED_MODE
> + ASM_BUG()
> +alternative_else_nop_endif
>  alternative_cb kvm_compute_final_ctr_el0
>   movz\reg, #0
>   movk\reg, #0, lsl #16
> 
> Yes, it is one more instruction, but it is cleaner and allows us to
> from the first patch of the series.
> 
> What do you think?

Yes, I think having the ASM_BUG() in this macro is a bit nicer and I doubt
the additional nop will make any difference, so this is looking good to
me!

Thanks,
Quentin


Re: [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode

2021-03-22 Thread Quentin Perret
Hey Marc,

On Monday 22 Mar 2021 at 16:48:27 (+), Marc Zyngier wrote:
> In protected mode, late CPUs are not allowed to boot (enforced by
> the PSCI relay). We can thus specialise the read_ctr macro to
> always return a pre-computed, sanitised value.
> 
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/include/asm/assembler.h | 9 +
>  arch/arm64/kernel/image-vars.h | 1 +
>  arch/arm64/kvm/va_layout.c | 7 +++
>  3 files changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h 
> b/arch/arm64/include/asm/assembler.h
> index fb651c1f26e9..1a4cee7eb3c9 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -270,12 +270,21 @@ alternative_endif
>   * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
>   */
>   .macro  read_ctr, reg
> +#ifndef __KVM_NVHE_HYPERVISOR__
>  alternative_if_not ARM64_MISMATCHED_CACHE_TYPE
>   mrs \reg, ctr_el0   // read CTR
>   nop
>  alternative_else
>   ldr_l   \reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
>  alternative_endif
> +#else
> +alternative_cb kvm_compute_final_ctr_el0
> + movz\reg, #0
> + movk\reg, #0, lsl #16
> + movk\reg, #0, lsl #32
> + movk\reg, #0, lsl #48
> +alternative_cb_end
> +#endif
>   .endm

So, FWIW, if we wanted to make _this_ macro BUG in non-protected mode
(and drop patch 01), I think we could do something like:

alternative_cb kvm_compute_final_ctr_el0
movz\reg, #0
ASM_BUG()
nop
nop
alternative_cb_end

and then make kvm_compute_final_ctr_el0() check that we're in protected
mode before patching. That would be marginally better as that would
cover _all_ users of read_ctr and not just __flush_dcache_area, but that
first movz is a bit yuck (but necessary to keep generate_mov_q() happy I
think?), so I'll leave the decision to you.

No objection from me for the current implementation, and if you decide to
go with it:

Reviewed-by: Quentin Perret 

Thanks,
Quentin


Re: [PATCH v6 13/38] KVM: arm64: Enable access to sanitized CPU features at EL2

2021-03-22 Thread Quentin Perret
Hey Marc,

On Monday 22 Mar 2021 at 13:44:38 (+), Marc Zyngier wrote:
> I can't say I'm thrilled with this. Actually, it is fair to say that I
> don't like it at all! ;-)

:-)

> Copying whole structures with pointers that
> make no sense at EL2 feels... wrong.

And I don't disagree at all. I tried to keep this as small as possible
as the series is already quite intrusive, but I certainly understand the
concern.

> As we discussed offline, the main reason for this infrastructure is
> that the read_ctr macro directly uses arm64_ftr_reg_ctrel0.sys_val
> when ARM64_MISMATCHED_CACHE_TYPE is set.

Indeed that is the only reason.

> One thing to realise is that with the protected mode, we can rely on
> patching as there is no such thing as a "late" CPU. So by specialising
> read_ctr when compiled for nVHE, we can just make it give us the final
> value, provided that KVM's own __flush_dcache_area() is limited to
> protected mode.
> 
> Once this problem is solved, this whole patch can mostly go, as we are
> left with exactly *two* u64 quantities to be populated, something that
> we can probably do in kvm_sys_reg_table_init().
> 
> I'll post some patches later today to try and explain what I have in
> mind.

Sounds great, thank you very much for the help!
Quentin


[PATCH v6 37/38] KVM: arm64: Disable PMU support in protected mode

2021-03-19 Thread Quentin Perret
The host currently writes directly in EL2 per-CPU data sections from
the PMU code when running in nVHE. In preparation for unmapping the EL2
sections from the host stage 2, disable PMU support in protected mode as
we currently do not have a use-case for it.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/perf.c | 3 ++-
 arch/arm64/kvm/pmu.c  | 8 
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
index 739164324afe..8f860ae56bb7 100644
--- a/arch/arm64/kvm/perf.c
+++ b/arch/arm64/kvm/perf.c
@@ -55,7 +55,8 @@ int kvm_perf_init(void)
 * hardware performance counters. This could ensure the presence of
 * a physical PMU and CONFIG_PERF_EVENT is selected.
 */
-   if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
+   if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0
+  && !is_protected_kvm_enabled())
static_branch_enable(&kvm_arm_pmu_available);
 
return perf_register_guest_info_callbacks(&kvm_guest_cbs);
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index faf32a44ba04..03a6c1f4a09a 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -33,7 +33,7 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
 
-   if (!ctx || !kvm_pmu_switch_needed(attr))
+   if (!kvm_arm_support_pmu_v3() || !ctx || !kvm_pmu_switch_needed(attr))
return;
 
if (!attr->exclude_host)
@@ -49,7 +49,7 @@ void kvm_clr_pmu_events(u32 clr)
 {
struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
 
-   if (!ctx)
+   if (!kvm_arm_support_pmu_v3() || !ctx)
return;
 
ctx->pmu_events.events_host &= ~clr;
@@ -172,7 +172,7 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
struct kvm_host_data *host;
u32 events_guest, events_host;
 
-   if (!has_vhe())
+   if (!kvm_arm_support_pmu_v3() || !has_vhe())
return;
 
preempt_disable();
@@ -193,7 +193,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
struct kvm_host_data *host;
u32 events_guest, events_host;
 
-   if (!has_vhe())
+   if (!kvm_arm_support_pmu_v3() || !has_vhe())
return;
 
host = this_cpu_ptr_hyp_sym(kvm_host_data);
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 38/38] KVM: arm64: Protect the .hyp sections from the host

2021-03-19 Thread Quentin Perret
When KVM runs in nVHE protected mode, use the host stage 2 to unmap the
hypervisor sections by marking them as owned by the hypervisor itself.
The long-term goal is to ensure the EL2 code can remain robust
regardless of the host's state, so this starts by making sure the host
cannot e.g. write to the .hyp sections directly.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h  |  1 +
 arch/arm64/kvm/arm.c  | 46 +++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c|  9 
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 33 +
 5 files changed, 91 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4149283b4cd1..cf8df032b9c3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -62,6 +62,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping17
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector18
 #define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize 19
+#define __KVM_HOST_SMCCC_FUNC___pkvm_mark_hyp  20
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d237c378e6fb..368159021dee 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1899,11 +1899,57 @@ void _kvm_host_prot_finalize(void *discard)
WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
 }
 
+static inline int pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
+{
+   return kvm_call_hyp_nvhe(__pkvm_mark_hyp, start, end);
+}
+
+#define pkvm_mark_hyp_section(__section)   \
+   pkvm_mark_hyp(__pa_symbol(__section##_start),   \
+   __pa_symbol(__section##_end))
+
 static int finalize_hyp_mode(void)
 {
+   int cpu, ret;
+
if (!is_protected_kvm_enabled())
return 0;
 
+   ret = pkvm_mark_hyp_section(__hyp_idmap_text);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp_section(__hyp_text);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp_section(__hyp_rodata);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp_section(__hyp_bss);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp(hyp_mem_base, hyp_mem_base + hyp_mem_size);
+   if (ret)
+   return ret;
+
+   for_each_possible_cpu(cpu) {
+   phys_addr_t start = virt_to_phys((void 
*)kvm_arm_hyp_percpu_base[cpu]);
+   phys_addr_t end = start + (PAGE_SIZE << nvhe_percpu_order());
+
+   ret = pkvm_mark_hyp(start, end);
+   if (ret)
+   return ret;
+
+   start = virt_to_phys((void *)per_cpu(kvm_arm_hyp_stack_page, 
cpu));
+   end = start + PAGE_SIZE;
+   ret = pkvm_mark_hyp(start, end);
+   if (ret)
+   return ret;
+   }
+
/*
 * Flip the static key upfront as that may no longer be possible
 * once the host stage 2 is installed.
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h 
b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d293cb328cc4..42d81ec739fa 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -21,6 +21,8 @@ struct host_kvm {
 extern struct host_kvm host_kvm;
 
 int __pkvm_prot_finalize(void);
+int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end);
+
 int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c 
b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 69163f2cbb63..b4eaa7ef13e0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -156,6 +156,14 @@ static void handle___pkvm_prot_finalize(struct 
kvm_cpu_context *host_ctxt)
 {
cpu_reg(host_ctxt, 1) = __pkvm_prot_finalize();
 }
+
+static void handle___pkvm_mark_hyp(struct kvm_cpu_context *host_ctxt)
+{
+   DECLARE_REG(phys_addr_t, start, host_ctxt, 1);
+   DECLARE_REG(phys_addr_t, end, host_ctxt, 2);
+
+   cpu_reg(host_ctxt, 1) = __pkvm_mark_hyp(start, end);
+}
 typedef void (*hcall_t)(struct kvm_cpu_context *);
 
 #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -180,6 +188,7 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__pkvm_create_mappings),
HANDLE_FUNC(__pkvm_create_private_mapping),
HANDLE_FUNC(__pkvm_prot_finalize),
+   HANDLE_FUNC(__pkvm_mark_hyp),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c 
b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 77b48c47344d..808e2471091b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/me

[PATCH v6 36/38] KVM: arm64: Page-align the .hyp sections

2021-03-19 Thread Quentin Perret
We will soon unmap the .hyp sections from the host stage 2 in Protected
nVHE mode, which obviously works with at least page granularity, so make
sure to align them correctly.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kernel/vmlinux.lds.S | 22 +-
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index e96173ce211b..709d2c433c5e 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -15,9 +15,11 @@
 
 #define HYPERVISOR_DATA_SECTIONS   \
HYP_SECTION_NAME(.rodata) : {   \
+   . = ALIGN(PAGE_SIZE);   \
__hyp_rodata_start = .; \
*(HYP_SECTION_NAME(.data..ro_after_init))   \
*(HYP_SECTION_NAME(.rodata))\
+   . = ALIGN(PAGE_SIZE);   \
__hyp_rodata_end = .;   \
}
 
@@ -72,21 +74,14 @@ ENTRY(_text)
 jiffies = jiffies_64;
 
 #define HYPERVISOR_TEXT\
-   /*  \
-* Align to 4 KB so that\
-* a) the HYP vector table is at its minimum\
-*alignment of 2048 bytes   \
-* b) the HYP init code will not cross a page   \
-*boundary if its size does not exceed  \
-*4 KB (see related ASSERT() below) \
-*/ \
-   . = ALIGN(SZ_4K);   \
+   . = ALIGN(PAGE_SIZE);   \
__hyp_idmap_text_start = .; \
*(.hyp.idmap.text)  \
__hyp_idmap_text_end = .;   \
__hyp_text_start = .;   \
*(.hyp.text)\
HYPERVISOR_EXTABLE  \
+   . = ALIGN(PAGE_SIZE);   \
__hyp_text_end = .;
 
 #define IDMAP_TEXT \
@@ -322,11 +317,12 @@ SECTIONS
 #include "image-vars.h"
 
 /*
- * The HYP init code and ID map text can't be longer than a page each,
- * and should not cross a page boundary.
+ * The HYP init code and ID map text can't be longer than a page each. The
+ * former is page-aligned, but the latter may not be with 16K or 64K pages, so
+ * it should also not cross a page boundary.
  */
-ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
-   "HYP init code too big or misaligned")
+ASSERT(__hyp_idmap_text_end - __hyp_idmap_text_start <= PAGE_SIZE,
+   "HYP init code too big")
 ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
"ID map text too big or misaligned")
 #ifdef CONFIG_HIBERNATION
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 35/38] KVM: arm64: Wrap the host with a stage 2

2021-03-19 Thread Quentin Perret
When KVM runs in protected nVHE mode, make use of a stage 2 page-table
to give the hypervisor some control over the host memory accesses. The
host stage 2 is created lazily using large block mappings if possible,
and will default to page mappings in the absence of a better solution.

From this point on, memory accesses from the host to protected memory
regions (e.g. not 'owned' by the host) are fatal and lead to hyp_panic().

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h  |   1 +
 arch/arm64/kernel/image-vars.h|   3 +
 arch/arm64/kvm/arm.c  |  10 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  34 +++
 arch/arm64/kvm/hyp/nvhe/Makefile  |   2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S|   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c|  10 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 246 ++
 arch/arm64/kvm/hyp/nvhe/setup.c   |   5 +
 arch/arm64/kvm/hyp/nvhe/switch.c  |   7 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c |   4 +-
 11 files changed, 316 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mem_protect.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 08f63c09cd11..4149283b4cd1 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -61,6 +61,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_mappings   16
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping17
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector18
+#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize 19
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 940c378fa837..d5dc2b792651 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -131,6 +131,9 @@ KVM_NVHE_ALIAS(__hyp_bss_end);
 KVM_NVHE_ALIAS(__hyp_rodata_start);
 KVM_NVHE_ALIAS(__hyp_rodata_end);
 
+/* pKVM static key */
+KVM_NVHE_ALIAS(kvm_protected_mode_initialized);
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a6b5ba195ca9..d237c378e6fb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1894,12 +1894,22 @@ static int init_hyp_mode(void)
return err;
 }
 
+void _kvm_host_prot_finalize(void *discard)
+{
+   WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
+}
+
 static int finalize_hyp_mode(void)
 {
if (!is_protected_kvm_enabled())
return 0;
 
+   /*
+* Flip the static key upfront as that may no longer be possible
+* once the host stage 2 is installed.
+*/
static_branch_enable(&kvm_protected_mode_initialized);
+   on_each_cpu(_kvm_host_prot_finalize, NULL, 1);
 
return 0;
 }
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h 
b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
new file mode 100644
index ..d293cb328cc4
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret 
+ */
+
+#ifndef __KVM_NVHE_MEM_PROTECT__
+#define __KVM_NVHE_MEM_PROTECT__
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct host_kvm {
+   struct kvm_arch arch;
+   struct kvm_pgtable pgt;
+   struct kvm_pgtable_mm_ops mm_ops;
+   hyp_spinlock_t lock;
+};
+extern struct host_kvm host_kvm;
+
+int __pkvm_prot_finalize(void);
+int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
+
+static __always_inline void __load_host_stage2(void)
+{
+   if (static_branch_likely(&kvm_protected_mode_initialized))
+   __load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+   else
+   write_sysreg(0, vttbr_el2);
+}
+#endif /* __KVM_NVHE_MEM_PROTECT__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index b334354b8dd0..f55201a7ff33 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-cache.o setup.o mm.o
+cache.o setup.o mm.o mem_protect.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S 
b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 60d51515153e..c953fb4b9a13 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S

[PATCH v6 31/38] KVM: arm64: Add kvm_pgtable_stage2_find_range()

2021-03-19 Thread Quentin Perret
Since the host stage 2 will be identity mapped, and since it will own
most of memory, it would be preferable for performance to try and use
large block mappings whenever possible. To ease this, introduce a new
helper in the KVM page-table code which allows us to search for large
ranges of available IPA space. This will be used in the host memory
abort path to greedily idmap large portions of the PA space.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 29 +
 arch/arm64/kvm/hyp/pgtable.c | 89 ++--
 2 files changed, 114 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index eea2e2b0acaa..e1fed14aee17 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -94,6 +94,16 @@ enum kvm_pgtable_prot {
 #define PAGE_HYP_RO(KVM_PGTABLE_PROT_R)
 #define PAGE_HYP_DEVICE(PAGE_HYP | KVM_PGTABLE_PROT_DEVICE)
 
+/**
+ * struct kvm_mem_range - Range of Intermediate Physical Addresses
+ * @start: Start of the range.
+ * @end:   End of the range.
+ */
+struct kvm_mem_range {
+   u64 start;
+   u64 end;
+};
+
 /**
  * enum kvm_pgtable_walk_flags - Flags to control a depth-first page-table 
walk.
  * @KVM_PGTABLE_WALK_LEAF: Visit leaf entries, including invalid
@@ -397,4 +407,23 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 
addr, u64 size);
 int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
 struct kvm_pgtable_walker *walker);
 
+/**
+ * kvm_pgtable_stage2_find_range() - Find a range of Intermediate Physical
+ *  Addresses with compatible permission
+ *  attributes.
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:  Address that must be covered by the range.
+ * @prot:  Protection attributes that the range must be compatible with.
+ * @range: Range structure used to limit the search space at call time and
+ * that will hold the result.
+ *
+ * The offset of @addr within a page is ignored. An IPA is compatible with 
@prot
+ * iff its corresponding stage-2 page-table entry has default ownership and, if
+ * valid, is mapped with protection attributes identical to @prot.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_find_range(struct kvm_pgtable *pgt, u64 addr,
+ enum kvm_pgtable_prot prot,
+ struct kvm_mem_range *range);
 #endif /* __ARM64_KVM_PGTABLE_H__ */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index f4a514a2e7ae..dc6ef2cfe3eb 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -48,6 +48,8 @@
 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
 KVM_PTE_LEAF_ATTR_HI_S2_XN)
 
+#define KVM_PTE_LEAF_ATTR_S2_IGNORED   GENMASK(58, 55)
+
 #define KVM_INVALID_PTE_OWNER_MASK GENMASK(63, 56)
 #define KVM_MAX_OWNER_ID   1
 
@@ -77,15 +79,20 @@ static bool kvm_phys_is_valid(u64 phys)
return phys < 
BIT(id_aa64mmfr0_parange_to_phys_shift(ID_AA64MMFR0_PARANGE_MAX));
 }
 
-static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
+static bool kvm_level_supports_block_mapping(u32 level)
 {
-   u64 granule = kvm_granule_size(level);
-
/*
 * Reject invalid block mappings and don't bother with 4TB mappings for
 * 52-bit PAs.
 */
-   if (level == 0 || (PAGE_SIZE != SZ_4K && level == 1))
+   return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
+}
+
+static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
+{
+   u64 granule = kvm_granule_size(level);
+
+   if (!kvm_level_supports_block_mapping(level))
return false;
 
if (granule > (end - addr))
@@ -1053,3 +1060,77 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_sz);
pgt->pgd = NULL;
 }
+
+#define KVM_PTE_LEAF_S2_COMPAT_MASK(KVM_PTE_LEAF_ATTR_S2_PERMS | \
+KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR | \
+KVM_PTE_LEAF_ATTR_S2_IGNORED)
+
+static int stage2_check_permission_walker(u64 addr, u64 end, u32 level,
+ kvm_pte_t *ptep,
+ enum kvm_pgtable_walk_flags flag,
+ void * const arg)
+{
+   kvm_pte_t old_attr, pte = *ptep, *new_attr = arg;
+
+   /*
+* Compatible mappings are either invalid and owned by the page-table
+* owner (whose id is 0), or va

[PATCH v6 32/38] KVM: arm64: Introduce KVM_PGTABLE_S2_NOFWB stage 2 flag

2021-03-19 Thread Quentin Perret
In order to further configure stage 2 page-tables, pass flags to the
init function using a new enum.

The first of these flags allows us to disable FWB even if the hardware
supports it, as we will need to do so for the host stage 2.

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h  | 43 +---
 arch/arm64/include/asm/pgtable-prot.h |  4 +-
 arch/arm64/kvm/hyp/pgtable.c  | 56 +++
 3 files changed, 62 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index e1fed14aee17..55452f4831d2 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -56,6 +56,15 @@ struct kvm_pgtable_mm_ops {
phys_addr_t (*virt_to_phys)(void *addr);
 };
 
+/**
+ * enum kvm_pgtable_stage2_flags - Stage-2 page-table flags.
+ * @KVM_PGTABLE_S2_NOFWB:  Don't enforce Normal-WB even if the CPUs have
+ * ARM64_HAS_STAGE2_FWB.
+ */
+enum kvm_pgtable_stage2_flags {
+   KVM_PGTABLE_S2_NOFWB= BIT(0),
+};
+
 /**
  * struct kvm_pgtable - KVM page-table.
  * @ia_bits:   Maximum input address size, in bits.
@@ -72,6 +81,7 @@ struct kvm_pgtable {
 
/* Stage-2 only */
struct kvm_s2_mmu   *mmu;
+   enum kvm_pgtable_stage2_flags   flags;
 };
 
 /**
@@ -196,20 +206,25 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 
addr, u64 size, u64 phys,
 u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift);
 
 /**
- * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
+ * kvm_pgtable_stage2_init_flags() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
  * @arch:  Arch-specific KVM structure representing the guest virtual
  * machine.
  * @mm_ops:Memory management callbacks.
+ * @flags: Stage-2 configuration flags.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
-   struct kvm_pgtable_mm_ops *mm_ops);
+int kvm_pgtable_stage2_init_flags(struct kvm_pgtable *pgt, struct kvm_arch 
*arch,
+ struct kvm_pgtable_mm_ops *mm_ops,
+ enum kvm_pgtable_stage2_flags flags);
+
+#define kvm_pgtable_stage2_init(pgt, arch, mm_ops) \
+   kvm_pgtable_stage2_init_flags(pgt, arch, mm_ops, 0)
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
- * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init*().
  *
  * The page-table is assumed to be unreachable by any hardware walkers prior
  * to freeing and therefore no TLB invalidation is performed.
@@ -218,7 +233,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
 
 /**
  * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
- * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:  Intermediate physical address at which to place the mapping.
  * @size:  Size of the mapping.
  * @phys:  Physical address of the memory to map.
@@ -251,7 +266,7 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 
addr, u64 size,
 /**
  * kvm_pgtable_stage2_set_owner() - Unmap and annotate pages in the IPA space 
to
  * track ownership.
- * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:  Base intermediate physical address to annotate.
  * @size:  Size of the annotated range.
  * @mc:Cache of pre-allocated and zeroed memory from which to 
allocate
@@ -270,7 +285,7 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, 
u64 addr, u64 size,
 
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 
page-table.
- * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:  Intermediate physical address from which to remove the mapping.
  * @size:  Size of the mapping.
  *
@@ -290,7 +305,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 
addr, u64 size);
 /**
  * kvm_pgtable_stage2_wrprotect() - Write-protect guest stage-2 address range
  *  without TLB invalidation.
- * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:  Intermediate physical address from which to write-protect,
  * @size:  Size of the 

[PATCH v6 33/38] KVM: arm64: Introduce KVM_PGTABLE_S2_IDMAP stage 2 flag

2021-03-19 Thread Quentin Perret
Introduce a new stage 2 configuration flag to specify that all mappings
in a given page-table will be identity-mapped, as will be the case for
the host. This allows us to introduce sanity checks in the map path and to
avoid programming errors.

Suggested-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 2 ++
 arch/arm64/kvm/hyp/pgtable.c | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 55452f4831d2..c3674c47d48c 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -60,9 +60,11 @@ struct kvm_pgtable_mm_ops {
  * enum kvm_pgtable_stage2_flags - Stage-2 page-table flags.
  * @KVM_PGTABLE_S2_NOFWB:  Don't enforce Normal-WB even if the CPUs have
  * ARM64_HAS_STAGE2_FWB.
+ * @KVM_PGTABLE_S2_IDMAP:  Only use identity mappings.
  */
 enum kvm_pgtable_stage2_flags {
KVM_PGTABLE_S2_NOFWB= BIT(0),
+   KVM_PGTABLE_S2_IDMAP= BIT(1),
 };
 
 /**
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b22b4860630c..c37c1dc4feaf 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -760,6 +760,9 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 
addr, u64 size,
.arg= &map_data,
};
 
+   if (WARN_ON((pgt->flags & KVM_PGTABLE_S2_IDMAP) && (addr != phys)))
+   return -EINVAL;
+
ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
if (ret)
return ret;
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 34/38] KVM: arm64: Provide sanitized mmfr* registers at EL2

2021-03-19 Thread Quentin Perret
We will need to read sanitized values of mmfr{0,1}_el1 at EL2 soon, so
add them to the list of copied variables.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_cpufeature.h | 2 ++
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c   | 2 ++
 arch/arm64/kvm/sys_regs.c   | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
b/arch/arm64/include/asm/kvm_cpufeature.h
index c2e7735f502b..ff302d15e840 100644
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -20,5 +20,7 @@
 #endif
 
 DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
+DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr0_el1);
+DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr1_el1);
 
 #endif
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c 
b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
index 71f00aca90e7..17ad1b3a9530 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
@@ -13,6 +13,8 @@
  * Copies of the host's CPU features registers holding sanitized values.
  */
 DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
+DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr0_el1);
+DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr1_el1);
 
 /*
  * nVHE copy of data structures tracking available CPU cores.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3ec34c25e877..dfb3b4f9ca84 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2784,6 +2784,8 @@ struct __ftr_reg_copy_entry {
struct arm64_ftr_reg*dst;
 } hyp_ftr_regs[] __initdata = {
CPU_FTR_REG_HYP_COPY(SYS_CTR_EL0, arm64_ftr_reg_ctrel0),
+   CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR0_EL1, 
arm64_ftr_reg_id_aa64mmfr0_el1),
+   CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR1_EL1, 
arm64_ftr_reg_id_aa64mmfr1_el1),
 };
 
 void __init setup_kvm_el2_caps(void)
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 30/38] KVM: arm64: Refactor the *_map_set_prot_attr() helpers

2021-03-19 Thread Quentin Perret
In order to ease their re-use in other code paths, refactor the
*_map_set_prot_attr() helpers to not depend on a map_data struct.
No functional change intended.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/pgtable.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index eefb226e1d1e..f4a514a2e7ae 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -325,8 +325,7 @@ struct hyp_map_data {
struct kvm_pgtable_mm_ops   *mm_ops;
 };
 
-static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
-struct hyp_map_data *data)
+static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 {
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
u32 mtype = device ? MT_DEVICE_nGnRE : MT_NORMAL;
@@ -351,7 +350,8 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_AP, ap);
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_SH, sh);
attr |= KVM_PTE_LEAF_ATTR_LO_S1_AF;
-   data->attr = attr;
+   *ptep = attr;
+
return 0;
 }
 
@@ -408,7 +408,7 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, 
u64 size, u64 phys,
.arg= &map_data,
};
 
-   ret = hyp_map_set_prot_attr(prot, &map_data);
+   ret = hyp_set_prot_attr(prot, &map_data.attr);
if (ret)
return ret;
 
@@ -501,8 +501,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
return vtcr;
 }
 
-static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
-   struct stage2_map_data *data)
+static int stage2_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 {
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
kvm_pte_t attr = device ? PAGE_S2_MEMATTR(DEVICE_nGnRE) :
@@ -522,7 +521,8 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot 
prot,
 
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S2_SH, sh);
attr |= KVM_PTE_LEAF_ATTR_LO_S2_AF;
-   data->attr = attr;
+   *ptep = attr;
+
return 0;
 }
 
@@ -742,7 +742,7 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 
addr, u64 size,
.arg= &map_data,
};
 
-   ret = stage2_map_set_prot_attr(prot, &map_data);
+   ret = stage2_set_prot_attr(prot, &map_data.attr);
if (ret)
return ret;
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 25/38] KVM: arm64: Make memcache anonymous in pgtable allocator

2021-03-19 Thread Quentin Perret
The current stage2 page-table allocator uses a memcache to get
pre-allocated pages when it needs any. To allow re-using this code at
EL2, which uses a concept of memory pools, make the memcache argument of
kvm_pgtable_stage2_map() anonymous, and let the mm_ops zalloc_page()
callbacks use it the way they need to.
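
For instance (a hedged sketch, not taken from this patch; the callback
names are illustrative), an EL2 implementation of zalloc_page() can
interpret the opaque pointer as one of the hypervisor's memory pools,
while the EL1 implementation keeps treating it as a memory cache:

/* EL2: the cookie is assumed to be a struct hyp_pool. */
static void *host_s2_zalloc_page(void *mc)
{
        return hyp_alloc_pages(mc, 0);
}

/* EL1: the cookie is still the usual KVM memory cache. */
static void *kvm_s2_zalloc_page(void *mc)
{
        return kvm_mmu_memory_cache_alloc(mc);
}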

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 6 +++---
 arch/arm64/kvm/hyp/pgtable.c | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 9cdc198ea6b4..4ae19247837b 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -213,8 +213,8 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  * @size:  Size of the mapping.
  * @phys:  Physical address of the memory to map.
  * @prot:  Permissions and attributes for the mapping.
- * @mc:Cache of pre-allocated GFP_PGTABLE_USER memory from 
which to
- * allocate page-table pages.
+ * @mc:Cache of pre-allocated and zeroed memory from which to 
allocate
+ * page-table pages.
  *
  * The offset of @addr within a page is ignored, @size is rounded-up to
  * the next page boundary and @phys is rounded-down to the previous page
@@ -236,7 +236,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  */
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
   u64 phys, enum kvm_pgtable_prot prot,
-  struct kvm_mmu_memory_cache *mc);
+  void *mc);
 
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 
page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4e15ccafd640..15de1708cfcd 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -446,7 +446,7 @@ struct stage2_map_data {
kvm_pte_t   *anchor;
 
struct kvm_s2_mmu   *mmu;
-   struct kvm_mmu_memory_cache *memcache;
+   void*memcache;
 
struct kvm_pgtable_mm_ops   *mm_ops;
 };
@@ -670,7 +670,7 @@ static int stage2_map_walker(u64 addr, u64 end, u32 level, 
kvm_pte_t *ptep,
 
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
   u64 phys, enum kvm_pgtable_prot prot,
-  struct kvm_mmu_memory_cache *mc)
+  void *mc)
 {
int ret;
struct stage2_map_data map_data = {
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 24/38] KVM: arm64: Refactor __populate_fault_info()

2021-03-19 Thread Quentin Perret
Refactor __populate_fault_info() to introduce __get_fault_info() which
will be used once the host is wrapped in a stage 2.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 28 +++--
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 1073f176e92c..cdf42e347d3f 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -160,18 +160,10 @@ static inline bool __translate_far_to_hpfar(u64 far, u64 
*hpfar)
return true;
 }
 
-static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+static inline bool __get_fault_info(u64 esr, struct kvm_vcpu_fault_info *fault)
 {
-   u8 ec;
-   u64 esr;
u64 hpfar, far;
 
-   esr = vcpu->arch.fault.esr_el2;
-   ec = ESR_ELx_EC(esr);
-
-   if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
-   return true;
-
far = read_sysreg_el2(SYS_FAR);
 
/*
@@ -194,11 +186,25 @@ static inline bool __populate_fault_info(struct kvm_vcpu 
*vcpu)
hpfar = read_sysreg(hpfar_el2);
}
 
-   vcpu->arch.fault.far_el2 = far;
-   vcpu->arch.fault.hpfar_el2 = hpfar;
+   fault->far_el2 = far;
+   fault->hpfar_el2 = hpfar;
return true;
 }
 
+static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+   u8 ec;
+   u64 esr;
+
+   esr = vcpu->arch.fault.esr_el2;
+   ec = ESR_ELx_EC(esr);
+
+   if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
+   return true;
+
+   return __get_fault_info(esr, &vcpu->arch.fault);
+}
+
 static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
 {
struct thread_struct *thread;
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 27/38] KVM: arm64: Sort the hypervisor memblocks

2021-03-19 Thread Quentin Perret
We will soon need to check if a Physical Address belongs to a memblock
at EL2, so make sure to sort them so that this check can be done efficiently.
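
The pay-off comes on the EL2 side (a hedged sketch of a later consumer,
not part of this patch; the function and counter names are illustrative):
once the array is sorted, a physical address can be located with a binary
search instead of a linear scan.

static bool addr_is_memory(phys_addr_t phys)
{
        unsigned int lo = 0, hi = hyp_memblock_nr;

        while (lo < hi) {
                unsigned int mid = lo + (hi - lo) / 2;
                const struct memblock_region *reg = &hyp_memory[mid];

                if (phys < reg->base)
                        hi = mid;
                else if (phys >= reg->base + reg->size)
                        lo = mid + 1;
                else
                        return true;    /* phys falls inside hyp_memory[mid] */
        }

        return false;
}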

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/reserved_mem.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm64/kvm/hyp/reserved_mem.c 
b/arch/arm64/kvm/hyp/reserved_mem.c
index fd42705a3c26..83ca23ac259b 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -6,6 +6,7 @@
 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -18,6 +19,23 @@ static unsigned int *hyp_memblock_nr_ptr = 
&kvm_nvhe_sym(hyp_memblock_nr);
 phys_addr_t hyp_mem_base;
 phys_addr_t hyp_mem_size;
 
+static int cmp_hyp_memblock(const void *p1, const void *p2)
+{
+   const struct memblock_region *r1 = p1;
+   const struct memblock_region *r2 = p2;
+
+   return r1->base < r2->base ? -1 : (r1->base > r2->base);
+}
+
+static void __init sort_memblock_regions(void)
+{
+   sort(hyp_memory,
+*hyp_memblock_nr_ptr,
+sizeof(struct memblock_region),
+cmp_hyp_memblock,
+NULL);
+}
+
 static int __init register_memblock_regions(void)
 {
struct memblock_region *reg;
@@ -29,6 +47,7 @@ static int __init register_memblock_regions(void)
hyp_memory[*hyp_memblock_nr_ptr] = *reg;
(*hyp_memblock_nr_ptr)++;
}
+   sort_memblock_regions();
 
return 0;
 }
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 26/38] KVM: arm64: Reserve memory for host stage 2

2021-03-19 Thread Quentin Perret
Extend the memory pool allocated for the hypervisor to include enough
pages to map all of memory at page granularity for the host stage 2.
While at it, also reserve some memory for device mappings.
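
To get a feel for the sizing (a hedged back-of-the-envelope sketch, not
the exact helpers added below; it assumes a 4KiB granule with 512 entries
per table), mapping N pages at page granularity needs roughly N/512
last-level tables, N/512^2 tables at the level above, and so on, plus the
16 extra pages reserved for concatenated pgds:

static unsigned long worst_case_host_s2_pages(unsigned long nr_pages)
{
        unsigned long total = 0;
        int level;

        /* Provision a table at every level for the worst case. */
        for (level = 0; level < KVM_PGTABLE_MAX_LEVELS; level++) {
                nr_pages = DIV_ROUND_UP(nr_pages, 512);
                total += nr_pages;
        }

        return total + 16;
}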

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 27 ++-
 arch/arm64/kvm/hyp/nvhe/setup.c  | 12 
 arch/arm64/kvm/hyp/reserved_mem.c|  2 ++
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h 
b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index ac0f7fcffd08..0095f6289742 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -53,7 +53,7 @@ static inline unsigned long __hyp_pgtable_max_pages(unsigned 
long nr_pages)
return total;
 }
 
-static inline unsigned long hyp_s1_pgtable_pages(void)
+static inline unsigned long __hyp_pgtable_total_pages(void)
 {
unsigned long res = 0, i;
 
@@ -63,9 +63,34 @@ static inline unsigned long hyp_s1_pgtable_pages(void)
res += __hyp_pgtable_max_pages(reg->size >> PAGE_SHIFT);
}
 
+   return res;
+}
+
+static inline unsigned long hyp_s1_pgtable_pages(void)
+{
+   unsigned long res;
+
+   res = __hyp_pgtable_total_pages();
+
/* Allow 1 GiB for private mappings */
res += __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
 
return res;
 }
+
+static inline unsigned long host_s2_mem_pgtable_pages(void)
+{
+   /*
+* Include an extra 16 pages to safely upper-bound the worst case of
+* concatenated pgds.
+*/
+   return __hyp_pgtable_total_pages() + 16;
+}
+
+static inline unsigned long host_s2_dev_pgtable_pages(void)
+{
+   /* Allow 1 GiB for MMIO mappings */
+   return __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+}
+
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 1e8bcd8b0299..c1a3e7e0ebbc 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -24,6 +24,8 @@ unsigned long hyp_nr_cpus;
 
 static void *vmemmap_base;
 static void *hyp_pgt_base;
+static void *host_s2_mem_pgt_base;
+static void *host_s2_dev_pgt_base;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
@@ -42,6 +44,16 @@ static int divide_memory_pool(void *virt, unsigned long size)
if (!hyp_pgt_base)
return -ENOMEM;
 
+   nr_pages = host_s2_mem_pgtable_pages();
+   host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
+   if (!host_s2_mem_pgt_base)
+   return -ENOMEM;
+
+   nr_pages = host_s2_dev_pgtable_pages();
+   host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
+   if (!host_s2_dev_pgt_base)
+   return -ENOMEM;
+
return 0;
 }
 
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c 
b/arch/arm64/kvm/hyp/reserved_mem.c
index 9bc6a6d27904..fd42705a3c26 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -52,6 +52,8 @@ void __init kvm_hyp_reserve(void)
}
 
hyp_mem_pages += hyp_s1_pgtable_pages();
+   hyp_mem_pages += host_s2_mem_pgtable_pages();
+   hyp_mem_pages += host_s2_dev_pgtable_pages();
 
/*
 * The hyp_vmemmap needs to be backed by pages, but these pages
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 23/38] KVM: arm64: Refactor __load_guest_stage2()

2021-03-19 Thread Quentin Perret
Refactor __load_guest_stage2() to introduce __load_stage2() which will
be re-used when loading the host stage 2.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_mmu.h | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6f743e20cb06..9d64fa73ee67 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -270,9 +270,9 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu 
*mmu)
  * Must be called from hyp code running at EL2 with an updated VTTBR
  * and interrupts disabled.
  */
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned 
long vtcr)
 {
-   write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
+   write_sysreg(vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
/*
@@ -283,6 +283,11 @@ static __always_inline void __load_guest_stage2(struct 
kvm_s2_mmu *mmu)
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+{
+   __load_stage2(mmu, kern_hyp_va(mmu->arch)->vtcr);
+}
+
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
return container_of(mmu->arch, struct kvm, arch);
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 28/38] KVM: arm64: Always zero invalid PTEs

2021-03-19 Thread Quentin Perret
kvm_set_invalid_pte() currently only clears bit 0 from a PTE because
stage2_map_walk_table_post() needs to be able to follow the anchor. In
preparation for re-using bits [63:1] of invalid PTEs, make sure to zero
the PTE entirely, by caching the anchor's child upfront.

Acked-by: Will Deacon 
Suggested-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/pgtable.c | 26 --
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 15de1708cfcd..0a674010afb6 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -156,10 +156,9 @@ static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte, struct 
kvm_pgtable_mm_ops *mm_op
return mm_ops->phys_to_virt(kvm_pte_to_phys(pte));
 }
 
-static void kvm_set_invalid_pte(kvm_pte_t *ptep)
+static void kvm_clear_pte(kvm_pte_t *ptep)
 {
-   kvm_pte_t pte = *ptep;
-   WRITE_ONCE(*ptep, pte & ~KVM_PTE_VALID);
+   WRITE_ONCE(*ptep, 0);
 }
 
 static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp,
@@ -444,6 +443,7 @@ struct stage2_map_data {
kvm_pte_t   attr;
 
kvm_pte_t   *anchor;
+   kvm_pte_t   *childp;
 
struct kvm_s2_mmu   *mmu;
void*memcache;
@@ -533,7 +533,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, 
u32 level,
 * There's an existing different valid leaf entry, so perform
 * break-before-make.
 */
-   kvm_set_invalid_pte(ptep);
+   kvm_clear_pte(ptep);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
mm_ops->put_page(ptep);
}
@@ -554,7 +554,8 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 
level,
if (!kvm_block_mapping_supported(addr, end, data->phys, level))
return 0;
 
-   kvm_set_invalid_pte(ptep);
+   data->childp = kvm_pte_follow(*ptep, data->mm_ops);
+   kvm_clear_pte(ptep);
 
/*
 * Invalidate the whole stage-2, as we may have numerous leaf
@@ -600,7 +601,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 
level, kvm_pte_t *ptep,
 * will be mapped lazily.
 */
if (kvm_pte_valid(pte)) {
-   kvm_set_invalid_pte(ptep);
+   kvm_clear_pte(ptep);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
mm_ops->put_page(ptep);
}
@@ -616,19 +617,24 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, 
u32 level,
  struct stage2_map_data *data)
 {
struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+   kvm_pte_t *childp;
int ret = 0;
 
if (!data->anchor)
return 0;
 
-   mm_ops->put_page(kvm_pte_follow(*ptep, mm_ops));
-   mm_ops->put_page(ptep);
-
if (data->anchor == ptep) {
+   childp = data->childp;
data->anchor = NULL;
+   data->childp = NULL;
ret = stage2_map_walk_leaf(addr, end, level, ptep, data);
+   } else {
+   childp = kvm_pte_follow(*ptep, mm_ops);
}
 
+   mm_ops->put_page(childp);
+   mm_ops->put_page(ptep);
+
return ret;
 }
 
@@ -737,7 +743,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 
level, kvm_pte_t *ptep,
 * block entry and rely on the remaining portions being faulted
 * back lazily.
 */
-   kvm_set_invalid_pte(ptep);
+   kvm_clear_pte(ptep);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, addr, level);
mm_ops->put_page(ptep);
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 29/38] KVM: arm64: Use page-table to track page ownership

2021-03-19 Thread Quentin Perret
As the host stage 2 will be identity mapped, all the .hyp memory regions
and/or memory pages donated to protected guests will have to be marked
invalid in the host stage 2 page-table. At the same time, the hypervisor
will need a way to track the ownership of each physical page to ensure
memory sharing or donation between entities (host, guests, hypervisor) is
legal.

In order to enable this tracking at EL2, let's use the host stage 2
page-table itself. The idea is to use the top bits of invalid mappings
to store the unique identifier of the page owner. The page-table owner
(the host) gets identifier 0 such that, at boot time, it owns the entire
IPA space as the pgd starts zeroed.

Provide kvm_pgtable_stage2_set_owner(), which allows modifying the
ownership of pages in the host stage 2. It re-uses most of the map()
logic, but ends up creating invalid mappings instead. This impacts
how we do refcounting, as we now need to count invalid mappings when
they are used for ownership tracking.
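
Concretely (a hedged sketch: the encoder below is lifted from the diff,
the decoder is only illustrative), the owner identifier lives in the top
bits of an invalid PTE:

/* Build an invalid leaf PTE that records its owner. */
static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
{
        return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
}

/* Recover the owner; only meaningful when the PTE is invalid (bit 0 clear). */
static u8 kvm_invalid_pte_owner(kvm_pte_t pte)
{
        return FIELD_GET(KVM_INVALID_PTE_OWNER_MASK, pte);
}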

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h |  20 +
 arch/arm64/kvm/hyp/pgtable.c | 126 ++-
 2 files changed, 122 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 4ae19247837b..eea2e2b0acaa 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -238,6 +238,26 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 
addr, u64 size,
   u64 phys, enum kvm_pgtable_prot prot,
   void *mc);
 
+/**
+ * kvm_pgtable_stage2_set_owner() - Unmap and annotate pages in the IPA space 
to
+ * track ownership.
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:  Base intermediate physical address to annotate.
+ * @size:  Size of the annotated range.
+ * @mc:Cache of pre-allocated and zeroed memory from which to 
allocate
+ * page-table pages.
+ * @owner_id:  Unique identifier for the owner of the page.
+ *
+ * By default, all page-tables are owned by identifier 0. This function can be
+ * used to mark portions of the IPA space as owned by other entities. When a
+ * stage 2 is used with identity-mappings, these annotations allow to use the
+ * page-table data structure as a simple rmap.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
+void *mc, u8 owner_id);
+
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 
page-table.
  * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0a674010afb6..eefb226e1d1e 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -48,6 +48,9 @@
 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
 KVM_PTE_LEAF_ATTR_HI_S2_XN)
 
+#define KVM_INVALID_PTE_OWNER_MASK GENMASK(63, 56)
+#define KVM_MAX_OWNER_ID   1
+
 struct kvm_pgtable_walk_data {
struct kvm_pgtable  *pgt;
struct kvm_pgtable_walker   *walker;
@@ -67,6 +70,13 @@ static u64 kvm_granule_size(u32 level)
return BIT(kvm_granule_shift(level));
 }
 
+#define KVM_PHYS_INVALID (-1ULL)
+
+static bool kvm_phys_is_valid(u64 phys)
+{
+   return phys < 
BIT(id_aa64mmfr0_parange_to_phys_shift(ID_AA64MMFR0_PARANGE_MAX));
+}
+
 static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
 {
u64 granule = kvm_granule_size(level);
@@ -81,7 +91,10 @@ static bool kvm_block_mapping_supported(u64 addr, u64 end, 
u64 phys, u32 level)
if (granule > (end - addr))
return false;
 
-   return IS_ALIGNED(addr, granule) && IS_ALIGNED(phys, granule);
+   if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
+   return false;
+
+   return IS_ALIGNED(addr, granule);
 }
 
 static u32 kvm_pgtable_idx(struct kvm_pgtable_walk_data *data, u32 level)
@@ -186,6 +199,11 @@ static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t 
attr, u32 level)
return pte;
 }
 
+static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
+{
+   return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
+}
+
 static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
  u32 level, kvm_pte_t *ptep,
  enum kvm_pgtable_walk_flags flag)
@@ -441,6 +459,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 struct stage2_map_data {
u64 phys;
kvm_pte_t   attr;
+   u8  ow

[PATCH v6 21/38] KVM: arm64: Set host stage 2 using kvm_nvhe_init_params

2021-03-19 Thread Quentin Perret
Move the registers relevant to host stage 2 enablement to
kvm_nvhe_init_params to prepare the ground for enabling it in later
patches.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kernel/asm-offsets.c|  3 +++
 arch/arm64/kvm/arm.c   |  5 +
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 14 +-
 arch/arm64/kvm/hyp/nvhe/switch.c   |  5 +
 5 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ebe7007b820b..08f63c09cd11 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -158,6 +158,9 @@ struct kvm_nvhe_init_params {
unsigned long tpidr_el2;
unsigned long stack_hyp_va;
phys_addr_t pgd_pa;
+   unsigned long hcr_el2;
+   unsigned long vttbr;
+   unsigned long vtcr;
 };
 
 /* Translate a kernel address @ptr into its equivalent linear mapping */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a36e2fc330d4..8930b42f6418 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -120,6 +120,9 @@ int main(void)
   DEFINE(NVHE_INIT_TPIDR_EL2,  offsetof(struct kvm_nvhe_init_params, 
tpidr_el2));
   DEFINE(NVHE_INIT_STACK_HYP_VA,   offsetof(struct kvm_nvhe_init_params, 
stack_hyp_va));
   DEFINE(NVHE_INIT_PGD_PA, offsetof(struct kvm_nvhe_init_params, pgd_pa));
+  DEFINE(NVHE_INIT_HCR_EL2,offsetof(struct kvm_nvhe_init_params, hcr_el2));
+  DEFINE(NVHE_INIT_VTTBR,  offsetof(struct kvm_nvhe_init_params, vttbr));
+  DEFINE(NVHE_INIT_VTCR,   offsetof(struct kvm_nvhe_init_params, vtcr));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_CTX_SP,   offsetof(struct cpu_suspend_ctx, sp));
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d93ea0b82491..a6b5ba195ca9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1418,6 +1418,11 @@ static void cpu_prepare_hyp_mode(int cpu)
 
params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) 
+ PAGE_SIZE);
params->pgd_pa = kvm_mmu_get_httbr();
+   if (is_protected_kvm_enabled())
+   params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
+   else
+   params->hcr_el2 = HCR_HOST_NVHE_FLAGS;
+   params->vttbr = params->vtcr = 0;
 
/*
 * Flush the init params from the data cache because the struct will
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S 
b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 557fb79b29cf..60d51515153e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -83,11 +83,6 @@ SYM_CODE_END(__kvm_hyp_init)
  * x0: struct kvm_nvhe_init_params PA
  */
 SYM_CODE_START_LOCAL(___kvm_hyp_init)
-alternative_if ARM64_KVM_PROTECTED_MODE
-   mov_q   x1, HCR_HOST_NVHE_PROTECTED_FLAGS
-   msr hcr_el2, x1
-alternative_else_nop_endif
-
ldr x1, [x0, #NVHE_INIT_TPIDR_EL2]
msr tpidr_el2, x1
 
@@ -97,6 +92,15 @@ alternative_else_nop_endif
ldr x1, [x0, #NVHE_INIT_MAIR_EL2]
msr mair_el2, x1
 
+   ldr x1, [x0, #NVHE_INIT_HCR_EL2]
+   msr hcr_el2, x1
+
+   ldr x1, [x0, #NVHE_INIT_VTTBR]
+   msr vttbr_el2, x1
+
+   ldr x1, [x0, #NVHE_INIT_VTCR]
+   msr vtcr_el2, x1
+
ldr x1, [x0, #NVHE_INIT_PGD_PA]
phys_to_ttbr x2, x1
 alternative_if ARM64_HAS_CNP
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f6d542ecf6a7..99323563022a 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -97,10 +97,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
write_sysreg(mdcr_el2, mdcr_el2);
-   if (is_protected_kvm_enabled())
-   write_sysreg(HCR_HOST_NVHE_PROTECTED_FLAGS, hcr_el2);
-   else
-   write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+   write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
cptr = CPTR_EL2_DEFAULT;
if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 20/38] KVM: arm64: Use kvm_arch in kvm_s2_mmu

2021-03-19 Thread Quentin Perret
In order to make use of the stage 2 pgtable code for the host stage 2,
change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer,
as the host will have the former but not the latter.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/asm/kvm_mmu.h  | 6 +-
 arch/arm64/kvm/mmu.c  | 8 
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index f813e1191027..4859c9de75d7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -94,7 +94,7 @@ struct kvm_s2_mmu {
/* The last vcpu id that ran on each physical CPU */
int __percpu *last_vcpu_ran;
 
-   struct kvm *kvm;
+   struct kvm_arch *arch;
 };
 
 struct kvm_arch_memory_slot {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index ce02a4052dcf..6f743e20cb06 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -272,7 +272,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu 
*mmu)
  */
 static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 {
-   write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2);
+   write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
/*
@@ -283,5 +283,9 @@ static __always_inline void __load_guest_stage2(struct 
kvm_s2_mmu *mmu)
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
+static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
+{
+   return container_of(mmu->arch, struct kvm, arch);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d6eb1fb21232..0f16b70befa8 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -165,7 +165,7 @@ static void *kvm_host_va(phys_addr_t phys)
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, 
u64 size,
 bool may_block)
 {
-   struct kvm *kvm = mmu->kvm;
+   struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
phys_addr_t end = start + size;
 
assert_spin_locked(&kvm->mmu_lock);
@@ -470,7 +470,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu 
*mmu)
for_each_possible_cpu(cpu)
*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
 
-   mmu->kvm = kvm;
+   mmu->arch = &kvm->arch;
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
mmu->vmid.vmid_gen = 0;
@@ -552,7 +552,7 @@ void stage2_unmap_vm(struct kvm *kvm)
 
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 {
-   struct kvm *kvm = mmu->kvm;
+   struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
struct kvm_pgtable *pgt = NULL;
 
spin_lock(&kvm->mmu_lock);
@@ -621,7 +621,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t 
guest_ipa,
  */
 static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
phys_addr_t end)
 {
-   struct kvm *kvm = mmu->kvm;
+   struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
stage2_apply_range_resched(kvm, addr, end, 
kvm_pgtable_stage2_wrprotect);
 }
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 22/38] KVM: arm64: Refactor kvm_arm_setup_stage2()

2021-03-19 Thread Quentin Perret
In order to re-use some of the stage 2 setup code at EL2, factor parts
of kvm_arm_setup_stage2() out into separate functions.

No functional change intended.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 26 +
 arch/arm64/kvm/hyp/pgtable.c | 32 +
 arch/arm64/kvm/reset.c   | 42 +++-
 3 files changed, 62 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 7945ec87eaec..9cdc198ea6b4 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -13,6 +13,16 @@
 
 #define KVM_PGTABLE_MAX_LEVELS 4U
 
+static inline u64 kvm_get_parange(u64 mmfr0)
+{
+   u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
+   ID_AA64MMFR0_PARANGE_SHIFT);
+   if (parange > ID_AA64MMFR0_PARANGE_MAX)
+   parange = ID_AA64MMFR0_PARANGE_MAX;
+
+   return parange;
+}
+
 typedef u64 kvm_pte_t;
 
 /**
@@ -159,6 +169,22 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_get_vtcr() - Helper to construct VTCR_EL2
+ * @mmfr0: Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
+ * @mmfr1: Sanitized value of SYS_ID_AA64MMFR1_EL1 register.
 * @phys_shift:Value to set in VTCR_EL2.T0SZ.
+ *
+ * The VTCR value is common across all the physical CPUs on the system.
+ * We use system wide sanitised values to fill in different fields,
+ * except for Hardware Management of Access Flags. HA Flag is set
+ * unconditionally on all CPUs, as it is safe to run with or without
+ * the feature and the bit is RES0 on CPUs that don't support it.
+ *
+ * Return: VTCR_EL2 value
+ */
+u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift);
+
 /**
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index ea95bbc6ba80..4e15ccafd640 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -9,6 +9,7 @@
 
 #include 
 #include 
+#include 
 
 #define KVM_PTE_VALID  BIT(0)
 
@@ -450,6 +451,37 @@ struct stage2_map_data {
struct kvm_pgtable_mm_ops   *mm_ops;
 };
 
+u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
+{
+   u64 vtcr = VTCR_EL2_FLAGS;
+   u8 lvls;
+
+   vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
+   vtcr |= VTCR_EL2_T0SZ(phys_shift);
+   /*
+* Use a minimum 2 level page table to prevent splitting
+* host PMD huge pages at stage2.
+*/
+   lvls = stage2_pgtable_levels(phys_shift);
+   if (lvls < 2)
+   lvls = 2;
+   vtcr |= VTCR_EL2_LVLS_TO_SL0(lvls);
+
+   /*
+* Enable the Hardware Access Flag management, unconditionally
+* on all CPUs. The features is RES0 on CPUs without the support
+* and must be ignored by the CPUs.
+*/
+   vtcr |= VTCR_EL2_HA;
+
+   /* Set the vmid bits */
+   vtcr |= (get_vmid_bits(mmfr1) == 16) ?
+   VTCR_EL2_VS_16BIT :
+   VTCR_EL2_VS_8BIT;
+
+   return vtcr;
+}
+
 static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
struct stage2_map_data *data)
 {
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 67f30953d6d0..86d94f616a1e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -329,19 +329,10 @@ int kvm_set_ipa_limit(void)
return 0;
 }
 
-/*
- * Configure the VTCR_EL2 for this VM. The VTCR value is common
- * across all the physical CPUs on the system. We use system wide
- * sanitised values to fill in different fields, except for Hardware
- * Management of Access Flags. HA Flag is set unconditionally on
- * all CPUs, as it is safe to run with or without the feature and
- * the bit is RES0 on CPUs that don't support it.
- */
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
-   u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
-   u32 parange, phys_shift;
-   u8 lvls;
+   u64 mmfr0, mmfr1;
+   u32 phys_shift;
 
if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
return -EINVAL;
@@ -361,33 +352,8 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long 
type)
}
 
mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-   parange = cpuid_feature_extract_unsigned_field(mmfr0,
-   ID_AA64MMFR0_PARANGE_SHIFT);
-   if (parange > ID_AA64MMFR0_PARANGE_MAX)
-   parange = ID_AA64MMFR0_PARANGE_MAX;
-   vtcr |= parange << VTCR_EL2_PS_SHIFT;
-
-   vtcr |= VTCR_EL2_T0SZ(phys_shift);
-   /*
- 

[PATCH v6 19/38] KVM: arm64: Use kvm_arch for stage 2 pgtable

2021-03-19 Thread Quentin Perret
In order to make use of the stage 2 pgtable code for the host stage 2,
use struct kvm_arch in lieu of struct kvm as the host will have the
former but not the latter.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 5 +++--
 arch/arm64/kvm/hyp/pgtable.c | 6 +++---
 arch/arm64/kvm/mmu.c | 2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index bf7a3cc49420..7945ec87eaec 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -162,12 +162,13 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 
addr, u64 size, u64 phys,
 /**
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
- * @kvm:   KVM structure representing the guest virtual machine.
+ * @arch:  Arch-specific KVM structure representing the guest virtual
+ * machine.
  * @mm_ops:Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
struct kvm_pgtable_mm_ops *mm_ops);
 
 /**
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 82aca35a22f6..ea95bbc6ba80 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -880,11 +880,11 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 
addr, u64 size)
return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
struct kvm_pgtable_mm_ops *mm_ops)
 {
size_t pgd_sz;
-   u64 vtcr = kvm->arch.vtcr;
+   u64 vtcr = arch->vtcr;
u32 ia_bits = VTCR_EL2_IPA(vtcr);
u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
@@ -897,7 +897,7 @@ int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct 
kvm *kvm,
pgt->ia_bits= ia_bits;
pgt->start_level= start_level;
pgt->mm_ops = mm_ops;
-   pgt->mmu= &kvm->arch.mmu;
+   pgt->mmu= &arch->mmu;
 
/* Ensure zeroed PGD pages are visible to the hardware walker */
dsb(ishst);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index de0ad79d2c90..d6eb1fb21232 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -457,7 +457,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu 
*mmu)
if (!pgt)
return -ENOMEM;
 
-   err = kvm_pgtable_stage2_init(pgt, kvm, &kvm_s2_mm_ops);
+   err = kvm_pgtable_stage2_init(pgt, &kvm->arch, &kvm_s2_mm_ops);
if (err)
goto out_free_pgtable;
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 18/38] KVM: arm64: Elevate hypervisor mappings creation at EL2

2021-03-19 Thread Quentin Perret
Previous commits have introduced infrastructure to enable the EL2 code
to manage its own stage 1 mappings. However, this was preliminary work,
and none of it is currently in use.

Put all of this together by elevating the mapping creation at EL2 when
memory protection is enabled. In this case, the host kernel running
at EL1 still creates _temporary_ EL2 mappings, only used while
initializing the hypervisor, but frees them right after.

As such, all calls to create_hyp_mappings() after kvm init has finished
turn into hypercalls, as the host now has no 'legal' way to modify the
hypervisor page tables directly.
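
The shape of that dispatch is roughly the following (a hedged sketch, not
the literal mmu.c hunk, which is elided from this archive; hyp_pgtable
stands for the host-owned copy of the EL2 stage-1 table):

static int hyp_map_range(void *from, void *to, enum kvm_pgtable_prot prot)
{
        unsigned long start = kern_hyp_va((unsigned long)from);
        unsigned long size = PAGE_ALIGN((unsigned long)to - (unsigned long)from);
        phys_addr_t phys = __pa(from);

        /*
         * Without pKVM, the host owns the EL2 stage-1 table and can modify
         * it directly (the real code also has to allow the temporary window
         * used while initializing the hypervisor).
         */
        if (!is_protected_kvm_enabled())
                return kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);

        /* With pKVM, EL1 has to ask EL2 to create the mapping. */
        return kvm_call_hyp_nvhe(__pkvm_create_mappings, start, size, phys, prot);
}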

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_mmu.h |  2 +-
 arch/arm64/kvm/arm.c | 87 +---
 arch/arm64/kvm/mmu.c | 43 ++--
 3 files changed, 120 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5c42ec023cc7..ce02a4052dcf 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -166,7 +166,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
 
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
-int kvm_mmu_init(void);
+int kvm_mmu_init(u32 *hyp_va_bits);
 
 static inline void *__kvm_vector_slot2addr(void *base,
   enum arm64_hyp_spectre_vector slot)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e2c471117bff..d93ea0b82491 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1426,7 +1426,7 @@ static void cpu_prepare_hyp_mode(int cpu)
kvm_flush_dcache_to_poc(params, sizeof(*params));
 }
 
-static void cpu_init_hyp_mode(void)
+static void hyp_install_host_vector(void)
 {
struct kvm_nvhe_init_params *params;
struct arm_smccc_res res;
@@ -1444,6 +1444,11 @@ static void cpu_init_hyp_mode(void)
params = this_cpu_ptr_nvhe_sym(kvm_init_params);
arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init), 
virt_to_phys(params), &res);
WARN_ON(res.a0 != SMCCC_RET_SUCCESS);
+}
+
+static void cpu_init_hyp_mode(void)
+{
+   hyp_install_host_vector();
 
/*
 * Disabling SSBD on a non-VHE system requires us to enable SSBS
@@ -1486,7 +1491,10 @@ static void cpu_set_hyp_vector(void)
struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
void *vector = hyp_spectre_vector_selector[data->slot];
 
-   *this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector;
+   if (!is_protected_kvm_enabled())
+   *this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector;
+   else
+   kvm_call_hyp_nvhe(__pkvm_cpu_set_vector, data->slot);
 }
 
 static void cpu_hyp_reinit(void)
@@ -1494,13 +1502,14 @@ static void cpu_hyp_reinit(void)

kvm_init_host_cpu_context(&this_cpu_ptr_hyp_sym(kvm_host_data)->host_ctxt);
 
cpu_hyp_reset();
-   cpu_set_hyp_vector();
 
if (is_kernel_in_hyp_mode())
kvm_timer_init_vhe();
else
cpu_init_hyp_mode();
 
+   cpu_set_hyp_vector();
+
kvm_arm_init_debug();
 
if (vgic_present)
@@ -1696,18 +1705,59 @@ static void teardown_hyp_mode(void)
}
 }
 
+static int do_pkvm_init(u32 hyp_va_bits)
+{
+   void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base);
+   int ret;
+
+   preempt_disable();
+   hyp_install_host_vector();
+   ret = kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
+   num_possible_cpus(), kern_hyp_va(per_cpu_base),
+   hyp_va_bits);
+   preempt_enable();
+
+   return ret;
+}
+
+static int kvm_hyp_init_protection(u32 hyp_va_bits)
+{
+   void *addr = phys_to_virt(hyp_mem_base);
+   int ret;
+
+   ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
+   if (ret)
+   return ret;
+
+   ret = do_pkvm_init(hyp_va_bits);
+   if (ret)
+   return ret;
+
+   free_hyp_pgds();
+
+   return 0;
+}
+
 /**
  * Inits Hyp-mode on all online CPUs
  */
 static int init_hyp_mode(void)
 {
+   u32 hyp_va_bits;
int cpu;
-   int err = 0;
+   int err = -ENOMEM;
+
+   /*
+* The protected Hyp-mode cannot be initialized if the memory pool
+* allocation has failed.
+*/
+   if (is_protected_kvm_enabled() && !hyp_mem_base)
+   goto out_err;
 
/*
 * Allocate Hyp PGD and setup Hyp identity mapping
 */
-   err = kvm_mmu_init();
+   err = kvm_mmu_init(&hyp_va_bits);
if (err)
goto out_err;
 
@@ -1823,6 +1873,14 @@ static int init_hyp_mode(void)
goto out_err;
}
 
+   if (is_protected_kvm_enabled()) {
+   err = kvm_hyp_init_prote

[PATCH v6 17/38] KVM: arm64: Prepare the creation of s1 mappings at EL2

2021-03-19 Thread Quentin Perret
When memory protection is enabled, the EL2 code needs the ability to
create and manage its own page-table. To do so, introduce a new set of
hypercalls to bootstrap a memory management system at EL2.

This leads to the following boot flow in nVHE Protected mode:

 1. the host allocates memory for the hypervisor very early on, using
the memblock API;

 2. the host creates a set of stage 1 page-table for EL2, installs the
EL2 vectors, and issues the __pkvm_init hypercall;

 3. during __pkvm_init, the hypervisor re-creates its stage 1 page-table
and stores it in the memory pool provided by the host;

 4. the hypervisor then extends its stage 1 mappings to include a
vmemmap in the EL2 VA space, hence allowing the use of the buddy
allocator introduced in a previous patch;

 5. the hypervisor jumps back in the idmap page, switches from the
host-provided page-table to the new one, and wraps up its
initialization by enabling the new allocator, before returning to
the host.

 6. the host can free the now unused page-table created for EL2, and
will now need to issue hypercalls to make changes to the EL2 stage 1
mappings instead of modifying them directly.

Note that for the sake of simplifying the review, this patch focuses on
the hypervisor side of things. In other words, this only implements the
new hypercalls, but does not make use of them from the host yet. The
host-side changes will follow in a subsequent patch.
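
As a preview of steps 2-3 from the host's point of view (a hedged sketch
mirroring the follow-up patch), the hand-over boils down to a single
hypercall once the temporary EL2 stage-1 mappings cover the reserved pool:

        ret = kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
                                num_possible_cpus(), kern_hyp_va(per_cpu_base),
                                hyp_va_bits);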

Credits to Will for __pkvm_init_switch_pgd.

Acked-by: Will Deacon 
Co-authored-by: Will Deacon 
Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h |   4 +
 arch/arm64/include/asm/kvm_host.h|   7 +
 arch/arm64/include/asm/kvm_hyp.h |   8 ++
 arch/arm64/include/asm/kvm_pgtable.h |   2 +
 arch/arm64/kernel/image-vars.h   |  16 +++
 arch/arm64/kvm/hyp/Makefile  |   2 +-
 arch/arm64/kvm/hyp/include/nvhe/mm.h |  71 ++
 arch/arm64/kvm/hyp/nvhe/Makefile |   4 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S   |  27 
 arch/arm64/kvm/hyp/nvhe/hyp-main.c   |  49 +++
 arch/arm64/kvm/hyp/nvhe/mm.c | 173 +++
 arch/arm64/kvm/hyp/nvhe/setup.c  | 197 +++
 arch/arm64/kvm/hyp/pgtable.c |   2 -
 arch/arm64/kvm/hyp/reserved_mem.c|  92 +
 arch/arm64/mm/init.c |   3 +
 15 files changed, 652 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mm.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mm.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/setup.c
 create mode 100644 arch/arm64/kvm/hyp/reserved_mem.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a7ab84f781f7..ebe7007b820b 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,10 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2   12
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs  13
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs   14
+#define __KVM_HOST_SMCCC_FUNC___pkvm_init  15
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_mappings   16
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping17
+#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector18
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 02e172dc5087..f813e1191027 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -770,5 +770,12 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
 
 int kvm_trng_call(struct kvm_vcpu *vcpu);
+#ifdef CONFIG_KVM
+extern phys_addr_t hyp_mem_base;
+extern phys_addr_t hyp_mem_size;
+void __init kvm_hyp_reserve(void);
+#else
+static inline void kvm_hyp_reserve(void) { }
+#endif
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 8b6c3a7aac51..de40a565d7e5 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -108,4 +108,12 @@ void __noreturn __hyp_do_panic(struct kvm_cpu_context 
*host_ctxt, u64 spsr,
   u64 elr, u64 par);
 #endif
 
+#ifdef __KVM_NVHE_HYPERVISOR__
+void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
+   phys_addr_t pgd, void *sp, void *cont_fn);
+int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
+   unsigned long *per_cpu_base, u32 hyp_va_bits);
+void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
+#endif
+
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index bbe840e430cb..bf7a3cc49420 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -11,6 +1

[PATCH v6 13/38] KVM: arm64: Enable access to sanitized CPU features at EL2

2021-03-19 Thread Quentin Perret
Introduce the infrastructure in KVM to copy CPU feature registers into
EL2-owned data-structures, to allow reading sanitised values directly at
EL2 in nVHE.

Given that only a subset of these features are being read by the
hypervisor, the ones that need to be copied are to be listed under
<asm/kvm_cpufeature.h> together with the name of the nVHE variable that
will hold the copy. This introduces only the infrastructure enabling
this copy. The first users will follow shortly.
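
For example (a hedged sketch mirroring what the next patches in this
archive do for CTR_EL0; the register is only illustrative), wiring up a
new copy takes three small additions:

        /* 1) Declare the EL2 copy in <asm/kvm_cpufeature.h>: */
        DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);

        /* 2) Define the backing variable in nVHE code (e.g. hyp/nvhe/hyp-smp.c): */
        DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);

        /* 3) List it in hyp_ftr_regs[] so setup_kvm_el2_caps() copies it at boot: */
        CPU_FTR_REG_HYP_COPY(SYS_CTR_EL0, arm64_ftr_reg_ctrel0),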

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/cpufeature.h |  1 +
 arch/arm64/include/asm/kvm_cpufeature.h | 22 ++
 arch/arm64/include/asm/kvm_host.h   |  4 
 arch/arm64/kernel/cpufeature.c  | 13 +
 arch/arm64/kvm/sys_regs.c   | 19 +++
 5 files changed, 59 insertions(+)
 create mode 100644 arch/arm64/include/asm/kvm_cpufeature.h

diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index 61177bac49fa..a85cea2cac57 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -607,6 +607,7 @@ void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
 u64 __read_sysreg_by_encoding(u32 sys_id);
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {
diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
b/arch/arm64/include/asm/kvm_cpufeature.h
new file mode 100644
index ..3d245f96a9fe
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 - Google LLC
+ * Author: Quentin Perret 
+ */
+
+#ifndef __ARM64_KVM_CPUFEATURE_H__
+#define __ARM64_KVM_CPUFEATURE_H__
+
+#include 
+
+#include 
+
+#if defined(__KVM_NVHE_HYPERVISOR__)
+#define DECLARE_KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg name
+#define DEFINE_KVM_HYP_CPU_FTR_REG(name) struct arm64_ftr_reg name
+#else
+#define DECLARE_KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg 
kvm_nvhe_sym(name)
+#define DEFINE_KVM_HYP_CPU_FTR_REG(name) BUILD_BUG()
+#endif
+
+#endif
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 6a2031af9562..02e172dc5087 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -740,9 +740,13 @@ void kvm_clr_pmu_events(u32 clr);
 
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
+
+void setup_kvm_el2_caps(void);
 #else
 static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u32 clr) {}
+
+static inline void setup_kvm_el2_caps(void) {}
 #endif
 
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 066030717a4c..6252476e4e73 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1154,6 +1154,18 @@ u64 read_sanitised_ftr_reg(u32 id)
 }
 EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst)
+{
+   struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
+
+   if (!regp)
+   return -EINVAL;
+
+   *dst = *regp;
+
+   return 0;
+}
+
 #define read_sysreg_case(r)\
case r: val = read_sysreg_s(r); break;
 
@@ -2773,6 +2785,7 @@ void __init setup_cpu_features(void)
 
setup_system_capabilities();
setup_elf_hwcaps(arm64_elf_hwcaps);
+   setup_kvm_el2_caps();
 
if (system_supports_32bit_el0())
setup_elf_hwcaps(compat_elf_hwcaps);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4f2f1e3145de..6c5d133689ae 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -2775,3 +2776,21 @@ void kvm_sys_reg_table_init(void)
/* Clear all higher bits. */
cache_levels &= (1 << (i*3))-1;
 }
+
+#define CPU_FTR_REG_HYP_COPY(id, name) \
+   { .sys_id = id, .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(name) }
+struct __ftr_reg_copy_entry {
+   u32 sys_id;
+   struct arm64_ftr_reg*dst;
+} hyp_ftr_regs[] __initdata = {
+};
+
+void __init setup_kvm_el2_caps(void)
+{
+   int i;
+
+   for (i = 0; i < ARRAY_SIZE(hyp_ftr_regs); i++) {
+   WARN(copy_ftr_reg(hyp_ftr_regs[i].sys_id, hyp_ftr_regs[i].dst),
+"%u feature register not found\n", hyp_ftr_regs[i].sys_id);
+   }
+}
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 15/38] KVM: arm64: Factor out vector address calculation

2021-03-19 Thread Quentin Perret
In order to re-map the guest vectors at EL2 when pKVM is enabled,
refactor __kvm_vector_slot2idx() and kvm_init_vector_slot() to move all
the address calculation logic into a static inline function.
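
Since the slot-to-offset arithmetic is easy to misread, here is a worked
example (hedged: the enum ordering is assumed, it is not shown in this
patch):

/*
 * Assuming HYP_VECTOR_DIRECT = 0, HYP_VECTOR_SPECTRE_DIRECT = 1,
 * HYP_VECTOR_INDIRECT = 2, HYP_VECTOR_SPECTRE_INDIRECT = 3:
 *
 *   slot 0 (DIRECT):           idx = 0 - 0 = 0  ->  base + 0
 *   slot 1 (SPECTRE_DIRECT):   idx = 1 - 1 = 0  ->  base + 0
 *   slot 2 (INDIRECT):         idx = 2 - 1 = 1  ->  base + 2K
 *   slot 3 (SPECTRE_INDIRECT): idx = 3 - 1 = 2  ->  base + 4K
 *
 * i.e. the two "direct" slots resolve to the start of whichever base they
 * are given, and each slot after that is another SZ_2K into its base.
 */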

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_mmu.h | 8 
 arch/arm64/kvm/arm.c | 9 +
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 90873851f677..5c42ec023cc7 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -168,6 +168,14 @@ phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
 int kvm_mmu_init(void);
 
+static inline void *__kvm_vector_slot2addr(void *base,
+  enum arm64_hyp_spectre_vector slot)
+{
+   int idx = slot - (slot != HYP_VECTOR_DIRECT);
+
+   return base + (idx * SZ_2K);
+}
+
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)   __flush_dcache_area((a), (l))
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 22d6df525254..e2c471117bff 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1350,16 +1350,9 @@ static unsigned long nvhe_percpu_order(void)
 /* A lookup table holding the hypervisor VA for each vector slot */
 static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
 
-static int __kvm_vector_slot2idx(enum arm64_hyp_spectre_vector slot)
-{
-   return slot - (slot != HYP_VECTOR_DIRECT);
-}
-
 static void kvm_init_vector_slot(void *base, enum arm64_hyp_spectre_vector 
slot)
 {
-   int idx = __kvm_vector_slot2idx(slot);
-
-   hyp_spectre_vector_selector[slot] = base + (idx * SZ_2K);
+   hyp_spectre_vector_selector[slot] = __kvm_vector_slot2addr(base, slot);
 }
 
 static int kvm_init_vector_slots(void)
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 14/38] KVM: arm64: Provide __flush_dcache_area at EL2

2021-03-19 Thread Quentin Perret
We will need to do cache maintenance at EL2 soon, so compile a copy of
__flush_dcache_area at EL2, and provide a copy of arm64_ftr_reg_ctrel0
as it is needed by the read_ctr macro.

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_cpufeature.h |  2 ++
 arch/arm64/kvm/hyp/nvhe/Makefile|  3 ++-
 arch/arm64/kvm/hyp/nvhe/cache.S | 13 +
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c   |  6 ++
 arch/arm64/kvm/sys_regs.c   |  1 +
 5 files changed, 24 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/cache.S

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
b/arch/arm64/include/asm/kvm_cpufeature.h
index 3d245f96a9fe..c2e7735f502b 100644
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -19,4 +19,6 @@
 #define DEFINE_KVM_HYP_CPU_FTR_REG(name) BUILD_BUG()
 #endif
 
+DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
+
 #endif
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 6894a917f290..42dde4bb80b1 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -13,7 +13,8 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o
+hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
+cache.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
new file mode 100644
index ..36cef6915428
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Code copied from arch/arm64/mm/cache.S.
+ */
+
+#include 
+#include 
+#include 
+
+SYM_FUNC_START_PI(__flush_dcache_area)
+   dcache_by_line_op civac, sy, x0, x1, x2, x3
+   ret
+SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c 
b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
index 879559057dee..71f00aca90e7 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
@@ -5,9 +5,15 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 
+/*
+ * Copies of the host's CPU features registers holding sanitized values.
+ */
+DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
+
 /*
  * nVHE copy of data structures tracking available CPU cores.
  * Only entries for CPUs that were online at KVM init are populated.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6c5d133689ae..3ec34c25e877 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2783,6 +2783,7 @@ struct __ftr_reg_copy_entry {
u32 sys_id;
struct arm64_ftr_reg*dst;
 } hyp_ftr_regs[] __initdata = {
+   CPU_FTR_REG_HYP_COPY(SYS_CTR_EL0, arm64_ftr_reg_ctrel0),
 };
 
 void __init setup_kvm_el2_caps(void)
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 12/38] KVM: arm64: Introduce a Hyp buddy page allocator

2021-03-19 Thread Quentin Perret
When memory protection is enabled, the hyp code will require a basic
form of memory management in order to allocate and free memory pages at
EL2. This is needed for various use-cases, including the creation of hyp
mappings or the allocation of stage 2 page tables.

To address these use-cases, introduce a simple memory allocator in the
hyp code. The allocator is designed as a conventional 'buddy allocator',
working at page granularity. It allows allocating and freeing
physically contiguous pages from memory 'pools', with a guaranteed order
alignment in the PA space. Each page in a memory pool is associated
with a struct hyp_page which holds the page's metadata, including its
refcount, as well as its current order, hence mimicking the kernel's
buddy system in the GFP infrastructure. The hyp_page metadata are made
accessible through a hyp_vmemmap, following the concept of
SPARSE_VMEMMAP in the kernel.
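
A hedged usage sketch, relying only on the declarations added in gfp.h
below (the function itself is illustrative, not part of the patch):

static int hyp_pool_smoke_test(struct hyp_pool *pool, u64 pfn,
                               unsigned int nr_pages)
{
        void *p;
        int ret;

        /* No reserved (already used) pages at the start of the range. */
        ret = hyp_pool_init(pool, pfn, nr_pages, 0);
        if (ret)
                return ret;

        /* Order 1: two physically contiguous, order-aligned pages. */
        p = hyp_alloc_pages(pool, 1);
        if (!p)
                return -ENOMEM;

        hyp_get_page(p);        /* take an extra reference */
        hyp_put_page(p);        /* drop it */
        hyp_put_page(p);        /* last put returns the pages to the free lists */

        return 0;
}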

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h|  68 
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  28 
 arch/arm64/kvm/hyp/nvhe/Makefile |   2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 195 +++
 4 files changed, 292 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/gfp.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/page_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h 
b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
new file mode 100644
index ..55b3f0ce5bc8
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_GFP_H
+#define __KVM_HYP_GFP_H
+
+#include 
+
+#include 
+#include 
+
+#define HYP_NO_ORDER   UINT_MAX
+
+struct hyp_pool {
+   /*
+* Spinlock protecting concurrent changes to the memory pool as well as
+* the struct hyp_page of the pool's pages until we have a proper atomic
+* API at EL2.
+*/
+   hyp_spinlock_t lock;
+   struct list_head free_area[MAX_ORDER];
+   phys_addr_t range_start;
+   phys_addr_t range_end;
+   unsigned int max_order;
+};
+
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+   struct hyp_pool *pool = hyp_page_to_pool(p);
+
+   hyp_spin_lock(&pool->lock);
+   p->refcount++;
+   hyp_spin_unlock(&pool->lock);
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+   struct hyp_pool *pool = hyp_page_to_pool(p);
+   int ret;
+
+   hyp_spin_lock(&pool->lock);
+   p->refcount--;
+   ret = (p->refcount == 0);
+   hyp_spin_unlock(&pool->lock);
+
+   return ret;
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+   struct hyp_pool *pool = hyp_page_to_pool(p);
+
+   hyp_spin_lock(&pool->lock);
+   if (p->refcount) {
+   hyp_spin_unlock(&pool->lock);
+   hyp_panic();
+   }
+   p->refcount = 1;
+   hyp_spin_unlock(&pool->lock);
+}
+
+/* Allocation */
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
+void hyp_get_page(void *addr);
+void hyp_put_page(void *addr);
+
+/* Used pages cannot be freed */
+int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
+ unsigned int reserved_pages);
+#endif /* __KVM_HYP_GFP_H */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h 
b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 3e49eaa7e682..d2fb307c5952 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -6,7 +6,17 @@
 
 #include 
 
+struct hyp_pool;
+struct hyp_page {
+   unsigned int refcount;
+   unsigned int order;
+   struct hyp_pool *pool;
+   struct list_head node;
+};
+
 extern s64 hyp_physvirt_offset;
+extern u64 __hyp_vmemmap;
+#define hyp_vmemmap ((struct hyp_page *)__hyp_vmemmap)
 
 #define __hyp_pa(virt) ((phys_addr_t)(virt) + hyp_physvirt_offset)
 #define __hyp_va(phys) ((void *)((phys_addr_t)(phys) - hyp_physvirt_offset))
@@ -21,4 +31,22 @@ static inline phys_addr_t hyp_virt_to_phys(void *addr)
return __hyp_pa(addr);
 }
 
+#define hyp_phys_to_pfn(phys)  ((phys) >> PAGE_SHIFT)
+#define hyp_pfn_to_phys(pfn)   ((phys_addr_t)((pfn) << PAGE_SHIFT))
+#define hyp_phys_to_page(phys) (&hyp_vmemmap[hyp_phys_to_pfn(phys)])
+#define hyp_virt_to_page(virt) hyp_phys_to_page(__hyp_pa(virt))
+#define hyp_virt_to_pfn(virt)  hyp_phys_to_pfn(__hyp_pa(virt))
+
+#define hyp_page_to_pfn(page)  ((struct hyp_page *)(page) - hyp_vmemmap)
+#define hyp_page_to_phys(page)  hyp_pfn_to_phys((hyp_page_to_pfn(page)))
+#define hyp_page_to_virt(page) __hyp_va(hyp_page_to_phys(page))
+#define hyp_page_to_pool(page) (((struct hyp_page *)page)->pool)
+
+static inline int hyp_page_count(void *addr)
+{
+   struct hyp_page *p = hyp_virt_to_page(addr);
+
+   re

[PATCH v6 16/38] arm64: asm: Provide set_sctlr_el2 macro

2021-03-19 Thread Quentin Perret
We will soon need to turn the EL2 stage 1 MMU on and off in nVHE
protected mode, so refactor the set_sctlr_el1 macro to make it usable
for that purpose.

Acked-by: Will Deacon 
Suggested-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/assembler.h | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h 
b/arch/arm64/include/asm/assembler.h
index ca31594d3d6c..fb651c1f26e9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -676,11 +676,11 @@ USER(\label, ic   ivau, \tmp2)// 
invalidate I line PoU
.endm
 
 /*
- * Set SCTLR_EL1 to the passed value, and invalidate the local icache
+ * Set SCTLR_ELx to the @reg value, and invalidate the local icache
  * in the process. This is called when setting the MMU on.
  */
-.macro set_sctlr_el1, reg
-   msr sctlr_el1, \reg
+.macro set_sctlr, sreg, reg
+   msr \sreg, \reg
isb
/*
 * Invalidate the local I-cache so that any instructions fetched
@@ -692,6 +692,14 @@ USER(\label, icivau, \tmp2)// 
invalidate I line PoU
isb
 .endm
 
+.macro set_sctlr_el1, reg
+   set_sctlr sctlr_el1, \reg
+.endm
+
+.macro set_sctlr_el2, reg
+   set_sctlr sctlr_el2, \reg
+.endm
+
 /*
  * Check whether to yield to another runnable task from kernel mode NEON code
  * (which runs with preemption disabled).
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 09/38] KVM: arm64: Allow using kvm_nvhe_sym() in hyp code

2021-03-19 Thread Quentin Perret
In order to allow code shared by the host and the hyp to be used from
static inline library functions, make kvm_nvhe_sym() usable at EL2 by
defaulting to the raw symbol name.
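
As an illustration (not part of the patch), this lets a header shared
between the kernel and the hyp object refer to one symbol under both
namespaces; the symbol name below is hypothetical:

	/* Resolves to __kvm_nvhe_hyp_example_count in the kernel build, and
	 * to plain hyp_example_count when __KVM_NVHE_HYPERVISOR__ is set. */
	extern unsigned long kvm_nvhe_sym(hyp_example_count);

	static inline unsigned long hyp_example_count_get(void)
	{
		return kvm_nvhe_sym(hyp_example_count);
	}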

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/hyp_image.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/hyp_image.h 
b/arch/arm64/include/asm/hyp_image.h
index 78cd77990c9c..b4b3076a76fb 100644
--- a/arch/arm64/include/asm/hyp_image.h
+++ b/arch/arm64/include/asm/hyp_image.h
@@ -10,11 +10,15 @@
 #define __HYP_CONCAT(a, b) a ## b
 #define HYP_CONCAT(a, b)   __HYP_CONCAT(a, b)
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 /*
  * KVM nVHE code has its own symbol namespace prefixed with __kvm_nvhe_,
  * to separate it from the kernel proper.
  */
 #define kvm_nvhe_sym(sym)  __kvm_nvhe_##sym
+#else
+#define kvm_nvhe_sym(sym)  sym
+#endif
 
 #ifdef LINKER_SCRIPT
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 10/38] KVM: arm64: Introduce an early Hyp page allocator

2021-03-19 Thread Quentin Perret
With nVHE, the host currently creates all stage 1 hypervisor mappings at
EL1 during boot, installs them at EL2, and extends them as required
(e.g. when creating a new VM). But in a world where the host is no
longer trusted, it cannot have full control over the code mapped in the
hypervisor.

In preparation for enabling the hypervisor to create its own stage 1
mappings during boot, introduce an early page allocator, with minimal
functionality. This allocator is designed to be used only during the
early bootstrap of the hyp code when memory protection is enabled; the
hyp code will then switch to a full-fledged page allocator after init.
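
For illustration only (not part of the patch), usage of the interface
added below boils down to pointing the allocator at a donated region
and bumping a cursor; the function and variable names in this sketch
are hypothetical:

	/* Sketch: carve early pages out of a donated, hyp-mapped region. */
	static int hyp_early_setup(void *reserved_virt, unsigned long size)
	{
		void *pgd, *pte_page;

		hyp_early_alloc_init(reserved_virt, size);
		pgd = hyp_early_alloc_contig(4);	/* 4 contiguous zeroed pages */
		pte_page = hyp_early_alloc_page(NULL);	/* the argument is ignored */

		return (pgd && pte_page) ? 0 : -ENOMEM;
	}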

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h | 14 +
 arch/arm64/kvm/hyp/include/nvhe/memory.h  | 24 +
 arch/arm64/kvm/hyp/nvhe/Makefile  |  2 +-
 arch/arm64/kvm/hyp/nvhe/early_alloc.c | 54 +++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c  |  4 +-
 5 files changed, 94 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/memory.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/early_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h 
b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
new file mode 100644
index ..dc61aaa56f31
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_EARLY_ALLOC_H
+#define __KVM_HYP_EARLY_ALLOC_H
+
+#include 
+
+void hyp_early_alloc_init(void *virt, unsigned long size);
+unsigned long hyp_early_alloc_nr_used_pages(void);
+void *hyp_early_alloc_page(void *arg);
+void *hyp_early_alloc_contig(unsigned int nr_pages);
+
+extern struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+
+#endif /* __KVM_HYP_EARLY_ALLOC_H */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h 
b/arch/arm64/kvm/hyp/include/nvhe/memory.h
new file mode 100644
index ..3e49eaa7e682
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_MEMORY_H
+#define __KVM_HYP_MEMORY_H
+
+#include 
+
+#include 
+
+extern s64 hyp_physvirt_offset;
+
+#define __hyp_pa(virt) ((phys_addr_t)(virt) + hyp_physvirt_offset)
+#define __hyp_va(phys) ((void *)((phys_addr_t)(phys) - hyp_physvirt_offset))
+
+static inline void *hyp_phys_to_virt(phys_addr_t phys)
+{
+   return __hyp_va(phys);
+}
+
+static inline phys_addr_t hyp_virt_to_phys(void *addr)
+{
+   return __hyp_pa(addr);
+}
+
+#endif /* __KVM_HYP_MEMORY_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index bc98f8e3d1da..24ff99e2eac5 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -13,7 +13,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-hyp-main.o hyp-smp.o psci-relay.o
+hyp-main.o hyp-smp.o psci-relay.o early_alloc.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/early_alloc.c 
b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
new file mode 100644
index ..1306c430ab87
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret 
+ */
+
+#include 
+
+#include 
+#include 
+
+struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+s64 __ro_after_init hyp_physvirt_offset;
+
+static unsigned long base;
+static unsigned long end;
+static unsigned long cur;
+
+unsigned long hyp_early_alloc_nr_used_pages(void)
+{
+   return (cur - base) >> PAGE_SHIFT;
+}
+
+void *hyp_early_alloc_contig(unsigned int nr_pages)
+{
+   unsigned long size = (nr_pages << PAGE_SHIFT);
+   void *ret = (void *)cur;
+
+   if (!nr_pages)
+   return NULL;
+
+   if (end - cur < size)
+   return NULL;
+
+   cur += size;
+   memset(ret, 0, size);
+
+   return ret;
+}
+
+void *hyp_early_alloc_page(void *arg)
+{
+   return hyp_early_alloc_contig(1);
+}
+
+void hyp_early_alloc_init(void *virt, unsigned long size)
+{
+   base = cur = (unsigned long)virt;
+   end = base + size;
+
+   hyp_early_alloc_mm_ops.zalloc_page = hyp_early_alloc_page;
+   hyp_early_alloc_mm_ops.phys_to_virt = hyp_phys_to_virt;
+   hyp_early_alloc_mm_ops.virt_to_phys = hyp_virt_to_phys;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c 
b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
index 63de71c0481e..08508783ec3d 100644
--- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
+++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
@@ -11,6 +11

[PATCH v6 11/38] KVM: arm64: Stub CONFIG_DEBUG_LIST at Hyp

2021-03-19 Thread Quentin Perret
In order to use the kernel list library at EL2, introduce stubs for the
CONFIG_DEBUG_LIST out-of-lines calls.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/nvhe/Makefile |  2 +-
 arch/arm64/kvm/hyp/nvhe/stub.c   | 22 ++
 2 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/stub.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 24ff99e2eac5..144da72ad510 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -13,7 +13,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-hyp-main.o hyp-smp.o psci-relay.o early_alloc.o
+hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/stub.c b/arch/arm64/kvm/hyp/nvhe/stub.c
new file mode 100644
index ..c0aa6bbfd79d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/stub.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Stubs for out-of-line function calls caused by re-using kernel
+ * infrastructure at EL2.
+ *
+ * Copyright (C) 2020 - Google LLC
+ */
+
+#include 
+
+#ifdef CONFIG_DEBUG_LIST
+bool __list_add_valid(struct list_head *new, struct list_head *prev,
+ struct list_head *next)
+{
+   return true;
+}
+
+bool __list_del_entry_valid(struct list_head *entry)
+{
+   return true;
+}
+#endif
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 07/38] KVM: arm64: Introduce a BSS section for use at Hyp

2021-03-19 Thread Quentin Perret
Currently, the hyp code cannot make full use of a BSS, as the kernel's
BSS section is mapped read-only at EL2.

While this mapping could simply be changed to read-write, it would
intermingle the hyp and kernel state even more than they currently are.
Instead, introduce a __hyp_bss section, that uses reserved pages, and
create the appropriate RW hyp mappings during KVM init.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/vmlinux.lds.S   | 52 ---
 arch/arm64/kvm/arm.c  | 14 -
 arch/arm64/kvm/hyp/nvhe/hyp.lds.S |  1 +
 4 files changed, 49 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/sections.h 
b/arch/arm64/include/asm/sections.h
index 2f36b16a5b5d..e4ad9db53af1 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -13,6 +13,7 @@ extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 extern char __hyp_text_start[], __hyp_text_end[];
 extern char __hyp_rodata_start[], __hyp_rodata_end[];
 extern char __hyp_reloc_begin[], __hyp_reloc_end[];
+extern char __hyp_bss_start[], __hyp_bss_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
 extern char __initdata_begin[], __initdata_end[];
 extern char __inittext_begin[], __inittext_end[];
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..e96173ce211b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -5,24 +5,7 @@
  * Written by Martin Mares 
  */
 
-#define RO_EXCEPTION_TABLE_ALIGN   8
-#define RUNTIME_DISCARD_EXIT
-
-#include 
-#include 
 #include 
-#include 
-#include 
-#include 
-
-#include "image.h"
-
-OUTPUT_ARCH(aarch64)
-ENTRY(_text)
-
-jiffies = jiffies_64;
-
-
 #ifdef CONFIG_KVM
 #define HYPERVISOR_EXTABLE \
. = ALIGN(SZ_8);\
@@ -51,13 +34,43 @@ jiffies = jiffies_64;
__hyp_reloc_end = .;\
}
 
+#define BSS_FIRST_SECTIONS \
+   __hyp_bss_start = .;\
+   *(HYP_SECTION_NAME(.bss))   \
+   . = ALIGN(PAGE_SIZE);   \
+   __hyp_bss_end = .;
+
+/*
+ * We require that __hyp_bss_start and __bss_start are aligned, and enforce it
+ * with an assertion. But the BSS_SECTION macro places an empty .sbss section
+ * between them, which can in some cases cause the linker to misalign them. To
+ * work around the issue, force a page alignment for __bss_start.
+ */
+#define SBSS_ALIGN PAGE_SIZE
 #else /* CONFIG_KVM */
 #define HYPERVISOR_EXTABLE
 #define HYPERVISOR_DATA_SECTIONS
 #define HYPERVISOR_PERCPU_SECTION
 #define HYPERVISOR_RELOC_SECTION
+#define SBSS_ALIGN 0
 #endif
 
+#define RO_EXCEPTION_TABLE_ALIGN   8
+#define RUNTIME_DISCARD_EXIT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "image.h"
+
+OUTPUT_ARCH(aarch64)
+ENTRY(_text)
+
+jiffies = jiffies_64;
+
 #define HYPERVISOR_TEXT\
/*  \
 * Align to 4 KB so that\
@@ -276,7 +289,7 @@ SECTIONS
__pecoff_data_rawsize = ABSOLUTE(. - __initdata_begin);
_edata = .;
 
-   BSS_SECTION(0, 0, 0)
+   BSS_SECTION(SBSS_ALIGN, 0, 0)
 
. = ALIGN(PAGE_SIZE);
init_pg_dir = .;
@@ -324,6 +337,9 @@ ASSERT(__hibernate_exit_text_end - 
(__hibernate_exit_text_start & ~(SZ_4K - 1))
 ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
"Entry trampoline text too big")
 #endif
+#ifdef CONFIG_KVM
+ASSERT(__hyp_bss_start == __bss_start, "HYP and Host BSS are misaligned")
+#endif
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2adb8d878bb9..22d6df525254 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1775,7 +1775,19 @@ static int init_hyp_mode(void)
goto out_err;
}
 
-   err = create_hyp_mappings(kvm_ksym_ref(__bss_start),
+   /*
+* .hyp.bss is guaranteed to be placed at the beginning of the .bss
+* section thanks to an assertion in the linker script. Map it RW and
+* the rest of .bss RO.
+*/
+   err = create_hyp_mappings(kvm_ksym_ref(__hyp_bss_start),
+ kvm_ksym_ref(__hyp_bss_end), PAGE_HYP);
+   if (err) {
+   kvm_err("Cannot map hyp bss section: %d\n", err);
+   goto out_err;
+   }
+
+   err = create_hyp_mappings(kvm_ksym_ref(__hyp_bss_end),
  kvm_ksym_ref(__bss_stop), PAGE_HYP_RO);
if (err) {
   

[PATCH v6 08/38] KVM: arm64: Make kvm_call_hyp() a function call at Hyp

2021-03-19 Thread Quentin Perret
kvm_call_hyp() has some logic to issue a function call or a hypercall
depending on the EL at which the kernel is running. However, all the
code compiled under __KVM_NVHE_HYPERVISOR__ is guaranteed to only run
at EL2, which allows us to simplify.

Add ifdefery to kvm_host.h to simplify kvm_call_hyp() in .hyp.text.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_host.h | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 08f500b2551a..6a2031af9562 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -593,6 +593,7 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 #define kvm_call_hyp_nvhe(f, ...)  
\
({  \
struct arm_smccc_res res;   \
@@ -632,6 +633,11 @@ void kvm_arm_resume_guest(struct kvm *kvm);
\
ret;\
})
+#else /* __KVM_NVHE_HYPERVISOR__ */
+#define kvm_call_hyp(f, ...) f(__VA_ARGS__)
+#define kvm_call_hyp_ret(f, ...) f(__VA_ARGS__)
+#define kvm_call_hyp_nvhe(f, ...) f(__VA_ARGS__)
+#endif /* __KVM_NVHE_HYPERVISOR__ */
 
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 05/38] KVM: arm64: Avoid free_page() in page-table allocator

2021-03-19 Thread Quentin Perret
Currently, the KVM page-table allocator uses a mix of put_page() and
free_page() calls depending on the context even though page-allocation
is always achieved using variants of __get_free_page().

Make the code consistent by using put_page() throughout, and reduce the
memory management API surface used by the page-table code. This will
ease factoring out page-allocation from pgtable.c, which is a
pre-requisite to creating page-tables at EL2.
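
The substitution is safe because these pages come straight from
__get_free_page() and so hold a single reference; dropping it frees the
page. A quick illustration (not from the patch, function name made up):

	/* put_page() on a freshly allocated page behaves like free_page(). */
	static void put_page_example(void)
	{
		void *va = (void *)__get_free_page(GFP_KERNEL);

		if (!va)
			return;

		WARN_ON(page_count(virt_to_page(va)) != 1);
		put_page(virt_to_page(va));	/* refcount 1 -> 0: page is freed */
	}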

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/pgtable.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 926fc07074f5..0990fda19198 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -414,7 +414,7 @@ int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 
va_bits)
 static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
   enum kvm_pgtable_walk_flags flag, void * const arg)
 {
-   free_page((unsigned long)kvm_pte_follow(*ptep));
+   put_page(virt_to_page(kvm_pte_follow(*ptep)));
return 0;
 }
 
@@ -426,7 +426,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
};
 
WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-   free_page((unsigned long)pgt->pgd);
+   put_page(virt_to_page(pgt->pgd));
pgt->pgd = NULL;
 }
 
@@ -578,7 +578,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, 
u32 level,
if (!data->anchor)
return 0;
 
-   free_page((unsigned long)kvm_pte_follow(*ptep));
+   put_page(virt_to_page(kvm_pte_follow(*ptep)));
put_page(virt_to_page(ptep));
 
if (data->anchor == ptep) {
@@ -701,7 +701,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 
level, kvm_pte_t *ptep,
}
 
if (childp)
-   free_page((unsigned long)childp);
+   put_page(virt_to_page(childp));
 
return 0;
 }
@@ -898,7 +898,7 @@ static int stage2_free_walker(u64 addr, u64 end, u32 level, 
kvm_pte_t *ptep,
put_page(virt_to_page(ptep));
 
if (kvm_pte_table(pte, level))
-   free_page((unsigned long)kvm_pte_follow(pte));
+   put_page(virt_to_page(kvm_pte_follow(pte)));
 
return 0;
 }
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 06/38] KVM: arm64: Factor memory allocation out of pgtable.c

2021-03-19 Thread Quentin Perret
In preparation for enabling the creation of page-tables at EL2, factor
all memory allocation out of the page-table code, hence making it
re-usable with any compatible memory allocator.

No functional changes intended.
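
For example (a sketch only, not taken from this series; the helper
names are hypothetical), a host-side user could wire the callbacks to
the kernel's allocator along these lines:

	static void *host_zalloc_page(void *arg)
	{
		return (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	}

	static void *host_zalloc_pages_exact(size_t size)
	{
		return alloc_pages_exact(size, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	}

	static struct kvm_pgtable_mm_ops host_mm_ops = {
		.zalloc_page		= host_zalloc_page,
		.zalloc_pages_exact	= host_zalloc_pages_exact,
		.free_pages_exact	= free_pages_exact,
		/* remaining callbacks omitted for brevity */
	};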

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 41 +++-
 arch/arm64/kvm/hyp/pgtable.c | 98 +---
 arch/arm64/kvm/mmu.c | 66 ++-
 3 files changed, 163 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 8886d43cfb11..bbe840e430cb 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -13,17 +13,50 @@
 
 typedef u64 kvm_pte_t;
 
+/**
+ * struct kvm_pgtable_mm_ops - Memory management callbacks.
+ * @zalloc_page:   Allocate a single zeroed memory page. The @arg parameter
+ * can be used by the walker to pass a memcache. The
+ * initial refcount of the page is 1.
+ * @zalloc_pages_exact:Allocate an exact number of zeroed memory 
pages. The
+ * @size parameter is in bytes, and is rounded-up to the
+ * next page boundary. The resulting allocation is
+ * physically contiguous.
+ * @free_pages_exact:  Free an exact number of memory pages previously
+ * allocated by zalloc_pages_exact.
+ * @get_page:  Increment the refcount on a page.
+ * @put_page:  Decrement the refcount on a page. When the refcount
+ * reaches 0 the page is automatically freed.
+ * @page_count:Return the refcount of a page.
+ * @phys_to_virt:  Convert a physical address into a virtual address mapped
+ * in the current context.
+ * @virt_to_phys:  Convert a virtual address mapped in the current context
+ * into a physical address.
+ */
+struct kvm_pgtable_mm_ops {
+   void*   (*zalloc_page)(void *arg);
+   void*   (*zalloc_pages_exact)(size_t size);
+   void(*free_pages_exact)(void *addr, size_t size);
+   void(*get_page)(void *addr);
+   void(*put_page)(void *addr);
+   int (*page_count)(void *addr);
+   void*   (*phys_to_virt)(phys_addr_t phys);
+   phys_addr_t (*virt_to_phys)(void *addr);
+};
+
 /**
  * struct kvm_pgtable - KVM page-table.
  * @ia_bits:   Maximum input address size, in bits.
  * @start_level:   Level at which the page-table walk starts.
  * @pgd:   Pointer to the first top-level entry of the page-table.
+ * @mm_ops:Memory management callbacks.
  * @mmu:   Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
  */
 struct kvm_pgtable {
u32 ia_bits;
u32 start_level;
kvm_pte_t   *pgd;
+   struct kvm_pgtable_mm_ops   *mm_ops;
 
/* Stage-2 only */
struct kvm_s2_mmu   *mmu;
@@ -86,10 +119,12 @@ struct kvm_pgtable_walker {
  * kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
  * @va_bits:   Maximum virtual address bits.
+ * @mm_ops:Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits);
+int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
+struct kvm_pgtable_mm_ops *mm_ops);
 
 /**
  * kvm_pgtable_hyp_destroy() - Destroy an unused hypervisor stage-1 page-table.
@@ -126,10 +161,12 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 
addr, u64 size, u64 phys,
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
  * @kvm:   KVM structure representing the guest virtual machine.
+ * @mm_ops:Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm);
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+   struct kvm_pgtable_mm_ops *mm_ops);
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0990fda19198..ff478a576f4d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -152,9 +152,9 @@ static kvm_pte_t kvm_phys_to_pte(u64 pa)
return pte;
 }
 
-static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte)
+static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte, struct kvm_pgtable_mm_ops 
*mm_ops)
 {
-   return __va(kvm_pte_to_phys(pte

[PATCH v6 02/38] KVM: arm64: Link position-independent string routines into .hyp.text

2021-03-19 Thread Quentin Perret
From: Will Deacon 

Pull clear_page(), copy_page(), memcpy() and memset() into the nVHE hyp
code and ensure that we always execute the '__pi_' entry point on the
off chance that it changes in the future.

[ qperret: Commit title nits and added linker script alias ]

Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/hyp_image.h |  3 +++
 arch/arm64/kernel/image-vars.h | 11 +++
 arch/arm64/kvm/hyp/nvhe/Makefile   |  4 
 3 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/hyp_image.h 
b/arch/arm64/include/asm/hyp_image.h
index 737ded6b6d0d..78cd77990c9c 100644
--- a/arch/arm64/include/asm/hyp_image.h
+++ b/arch/arm64/include/asm/hyp_image.h
@@ -56,6 +56,9 @@
  */
 #define KVM_NVHE_ALIAS(sym)kvm_nvhe_sym(sym) = sym;
 
+/* Defines a linker script alias for KVM nVHE hyp symbols */
+#define KVM_NVHE_ALIAS_HYP(first, sec) kvm_nvhe_sym(first) = kvm_nvhe_sym(sec);
+
 #endif /* LINKER_SCRIPT */
 
 #endif /* __ARM64_HYP_IMAGE_H__ */
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 5aa9ed1e9ec6..4eb7a15c8b60 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -104,6 +104,17 @@ KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
 /* PMU available static key */
 KVM_NVHE_ALIAS(kvm_arm_pmu_available);
 
+/* Position-independent library routines */
+KVM_NVHE_ALIAS_HYP(clear_page, __pi_clear_page);
+KVM_NVHE_ALIAS_HYP(copy_page, __pi_copy_page);
+KVM_NVHE_ALIAS_HYP(memcpy, __pi_memcpy);
+KVM_NVHE_ALIAS_HYP(memset, __pi_memset);
+
+#ifdef CONFIG_KASAN
+KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
+KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
+#endif
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index a6707df4f6c0..bc98f8e3d1da 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -9,10 +9,14 @@ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
 hostprogs := gen-hyprel
 HOST_EXTRACFLAGS += -I$(objtree)/include
 
+lib-objs := clear_page.o copy_page.o memcpy.o memset.o
+lib-objs := $(addprefix ../../../lib/, $(lib-objs))
+
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 hyp-main.o hyp-smp.o psci-relay.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
+obj-y += $(lib-objs)
 
 ##
 ## Build rules for compiling nVHE hyp code
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 04/38] KVM: arm64: Initialize kvm_nvhe_init_params early

2021-03-19 Thread Quentin Perret
Move the initialization of kvm_nvhe_init_params into a dedicated function
that is run early, and only once during KVM init, rather than every time
the KVM vectors are set and reset.

This also opens the opportunity for the hypervisor to change the init
structs during boot, hence simplifying the replacement of the
host-provided page-table with the one the hypervisor will create for
itself.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/arm.c | 30 ++
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c2df58be5b0c..2adb8d878bb9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1388,22 +1388,18 @@ static int kvm_init_vector_slots(void)
return 0;
 }
 
-static void cpu_init_hyp_mode(void)
+static void cpu_prepare_hyp_mode(int cpu)
 {
-   struct kvm_nvhe_init_params *params = 
this_cpu_ptr_nvhe_sym(kvm_init_params);
-   struct arm_smccc_res res;
+   struct kvm_nvhe_init_params *params = 
per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
unsigned long tcr;
 
-   /* Switch from the HYP stub to our own HYP init vector */
-   __hyp_set_vectors(kvm_get_idmap_vector());
-
/*
 * Calculate the raw per-cpu offset without a translation from the
 * kernel's mapping to the linear mapping, and store it in tpidr_el2
 * so that we can use adr_l to access per-cpu variables in EL2.
 * Also drop the KASAN tag which gets in the way...
 */
-   params->tpidr_el2 = (unsigned 
long)kasan_reset_tag(this_cpu_ptr_nvhe_sym(__per_cpu_start)) -
+   params->tpidr_el2 = (unsigned 
long)kasan_reset_tag(per_cpu_ptr_nvhe_sym(__per_cpu_start, cpu)) -
(unsigned 
long)kvm_ksym_ref(CHOOSE_NVHE_SYM(__per_cpu_start));
 
params->mair_el2 = read_sysreg(mair_el1);
@@ -1427,7 +1423,7 @@ static void cpu_init_hyp_mode(void)
tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
params->tcr_el2 = tcr;
 
-   params->stack_hyp_va = 
kern_hyp_va(__this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE);
+   params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) 
+ PAGE_SIZE);
params->pgd_pa = kvm_mmu_get_httbr();
 
/*
@@ -1435,6 +1431,15 @@ static void cpu_init_hyp_mode(void)
 * be read while the MMU is off.
 */
kvm_flush_dcache_to_poc(params, sizeof(*params));
+}
+
+static void cpu_init_hyp_mode(void)
+{
+   struct kvm_nvhe_init_params *params;
+   struct arm_smccc_res res;
+
+   /* Switch from the HYP stub to our own HYP init vector */
+   __hyp_set_vectors(kvm_get_idmap_vector());
 
/*
 * Call initialization code, and switch to the full blown HYP code.
@@ -1443,6 +1448,7 @@ static void cpu_init_hyp_mode(void)
 * cpus_have_const_cap() wrapper.
 */
BUG_ON(!system_capabilities_finalized());
+   params = this_cpu_ptr_nvhe_sym(kvm_init_params);
arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init), 
virt_to_phys(params), &res);
WARN_ON(res.a0 != SMCCC_RET_SUCCESS);
 
@@ -1790,19 +1796,19 @@ static int init_hyp_mode(void)
}
}
 
-   /*
-* Map Hyp percpu pages
-*/
for_each_possible_cpu(cpu) {
char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu];
char *percpu_end = percpu_begin + nvhe_percpu_size();
 
+   /* Map Hyp percpu pages */
err = create_hyp_mappings(percpu_begin, percpu_end, PAGE_HYP);
-
if (err) {
kvm_err("Cannot map hyp percpu region\n");
goto out_err;
}
+
+   /* Prepare the CPU initialization parameters */
+   cpu_prepare_hyp_mode(cpu);
}
 
if (is_protected_kvm_enabled()) {
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 03/38] arm64: kvm: Add standalone ticket spinlock implementation for use at hyp

2021-03-19 Thread Quentin Perret
From: Will Deacon 

We will soon need to synchronise multiple CPUs in the hyp text at EL2.
The qspinlock-based locking used by the host is overkill for this purpose
and relies on the kernel's "percpu" implementation for the MCS nodes.

Implement a simple ticket locking scheme based heavily on the code removed
by commit c11090474d70 ("arm64: locking: Replace ticket lock implementation
with qspinlock").
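
In C terms, the semantics are the textbook ticket lock; the model below
is for illustration only (the real implementation has to use the inline
asm further down, with LL/SC and LSE variants, to get atomicity right):

	struct ticket_lock_model {
		u16 owner;	/* ticket currently being served */
		u16 next;	/* next ticket to hand out */
	};

	static void ticket_lock(struct ticket_lock_model *l)
	{
		u16 ticket = l->next++;		/* an atomic fetch-add in practice */

		while (READ_ONCE(l->owner) != ticket)
			cpu_relax();		/* sevl/wfe in the asm version */
	}

	static void ticket_unlock(struct ticket_lock_model *l)
	{
		WRITE_ONCE(l->owner, l->owner + 1);	/* stlrh in the asm version */
	}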

Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 92 ++
 1 file changed, 92 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/spinlock.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h 
b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
new file mode 100644
index ..76b537f8d1c6
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * A stand-alone ticket spinlock implementation for use by the non-VHE
+ * KVM hypervisor code running at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon 
+ *
+ * Heavily based on the implementation removed by c11090474d70 which was:
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ARM64_KVM_NVHE_SPINLOCK_H__
+#define __ARM64_KVM_NVHE_SPINLOCK_H__
+
+#include 
+#include 
+
+typedef union hyp_spinlock {
+   u32 __val;
+   struct {
+#ifdef __AARCH64EB__
+   u16 next, owner;
+#else
+   u16 owner, next;
+#endif
+   };
+} hyp_spinlock_t;
+
+#define hyp_spin_lock_init(l)  \
+do {   \
+   *(l) = (hyp_spinlock_t){ .__val = 0 };  \
+} while (0)
+
+static inline void hyp_spin_lock(hyp_spinlock_t *lock)
+{
+   u32 tmp;
+   hyp_spinlock_t lockval, newval;
+
+   asm volatile(
+   /* Atomically increment the next ticket. */
+   ARM64_LSE_ATOMIC_INSN(
+   /* LL/SC */
+"  prfmpstl1strm, %3\n"
+"1:ldaxr   %w0, %3\n"
+"  add %w1, %w0, #(1 << 16)\n"
+"  stxr%w2, %w1, %3\n"
+"  cbnz%w2, 1b\n",
+   /* LSE atomics */
+"  mov %w2, #(1 << 16)\n"
+"  ldadda  %w2, %w0, %3\n"
+   __nops(3))
+
+   /* Did we get the lock? */
+"  eor %w1, %w0, %w0, ror #16\n"
+"  cbz %w1, 3f\n"
+   /*
+* No: spin on the owner. Send a local event to avoid missing an
+* unlock before the exclusive load.
+*/
+"  sevl\n"
+"2:wfe\n"
+"  ldaxrh  %w2, %4\n"
+"  eor %w1, %w2, %w0, lsr #16\n"
+"  cbnz%w1, 2b\n"
+   /* We got the lock. Critical section starts here. */
+"3:"
+   : "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+   : "Q" (lock->owner)
+   : "memory");
+}
+
+static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
+{
+   u64 tmp;
+
+   asm volatile(
+   ARM64_LSE_ATOMIC_INSN(
+   /* LL/SC */
+   "   ldrh%w1, %0\n"
+   "   add %w1, %w1, #1\n"
+   "   stlrh   %w1, %0",
+   /* LSE atomics */
+   "   mov %w1, #1\n"
+   "   staddlh %w1, %0\n"
+   __nops(1))
+   : "=Q" (lock->owner), "=&r" (tmp)
+   :
+   : "memory");
+}
+
+#endif /* __ARM64_KVM_NVHE_SPINLOCK_H__ */
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v6 00/38] KVM: arm64: Stage-2 for the host

2021-03-19 Thread Quentin Perret
Hi all,

This is the v6 of the series previously posted here:

  https://lore.kernel.org/r/20210315143536.214621-1-qper...@google.com/

This basically allows us to wrap the host with a stage 2 when running in
nVHE, hence paving the way for protecting guest memory from the host in
the future (among other use-cases). For more details about the
motivation and the design angle taken here, I would recommend to have a
look at the cover letter of v1, and/or to watch these presentations at
LPC [1] and KVM forum 2020 [2].

Changes since v5:

 - disabled FWB for the host even when the CPUs support it using stage-2
   config flags;

 - added a stage-2 config flag to enforce identity mappings for the host;

 - refactored/simplified the cpu feature register copy;

 - removed unnecessary ISB() from the set_ownership() path, and improved
   kerneldoc;

 - rebased on kvmarm/next to fix (trivial) conflicts with Marc's SVE
   series [3].

And as usual, there is a branch available here:

  https://android-kvm.googlesource.com/linux qperret/host-stage2-v6

Thanks,
Quentin

[1] https://youtu.be/54q6RzS9BpQ?t=10859
[2] https://youtu.be/wY-u6n75iXc
[3] https://lore.kernel.org/r/20210318122532.505263-1-...@kernel.org/

Quentin Perret (35):
  KVM: arm64: Initialize kvm_nvhe_init_params early
  KVM: arm64: Avoid free_page() in page-table allocator
  KVM: arm64: Factor memory allocation out of pgtable.c
  KVM: arm64: Introduce a BSS section for use at Hyp
  KVM: arm64: Make kvm_call_hyp() a function call at Hyp
  KVM: arm64: Allow using kvm_nvhe_sym() in hyp code
  KVM: arm64: Introduce an early Hyp page allocator
  KVM: arm64: Stub CONFIG_DEBUG_LIST at Hyp
  KVM: arm64: Introduce a Hyp buddy page allocator
  KVM: arm64: Enable access to sanitized CPU features at EL2
  KVM: arm64: Provide __flush_dcache_area at EL2
  KVM: arm64: Factor out vector address calculation
  arm64: asm: Provide set_sctlr_el2 macro
  KVM: arm64: Prepare the creation of s1 mappings at EL2
  KVM: arm64: Elevate hypervisor mappings creation at EL2
  KVM: arm64: Use kvm_arch for stage 2 pgtable
  KVM: arm64: Use kvm_arch in kvm_s2_mmu
  KVM: arm64: Set host stage 2 using kvm_nvhe_init_params
  KVM: arm64: Refactor kvm_arm_setup_stage2()
  KVM: arm64: Refactor __load_guest_stage2()
  KVM: arm64: Refactor __populate_fault_info()
  KVM: arm64: Make memcache anonymous in pgtable allocator
  KVM: arm64: Reserve memory for host stage 2
  KVM: arm64: Sort the hypervisor memblocks
  KVM: arm64: Always zero invalid PTEs
  KVM: arm64: Use page-table to track page ownership
  KVM: arm64: Refactor the *_map_set_prot_attr() helpers
  KVM: arm64: Add kvm_pgtable_stage2_find_range()
  KVM: arm64: Introduce KVM_PGTABLE_S2_NOFWB stage 2 flag
  KVM: arm64: Introduce KVM_PGTABLE_S2_IDMAP stage 2 flag
  KVM: arm64: Provide sanitized mmfr* registers at EL2
  KVM: arm64: Wrap the host with a stage 2
  KVM: arm64: Page-align the .hyp sections
  KVM: arm64: Disable PMU support in protected mode
  KVM: arm64: Protect the .hyp sections from the host

Will Deacon (3):
  arm64: lib: Annotate {clear,copy}_page() as position-independent
  KVM: arm64: Link position-independent string routines into .hyp.text
  arm64: kvm: Add standalone ticket spinlock implementation for use at
hyp

 arch/arm64/include/asm/assembler.h|  14 +-
 arch/arm64/include/asm/cpufeature.h   |   1 +
 arch/arm64/include/asm/hyp_image.h|   7 +
 arch/arm64/include/asm/kvm_asm.h  |   9 +
 arch/arm64/include/asm/kvm_cpufeature.h   |  26 ++
 arch/arm64/include/asm/kvm_host.h |  19 +-
 arch/arm64/include/asm/kvm_hyp.h  |   8 +
 arch/arm64/include/asm/kvm_mmu.h  |  23 +-
 arch/arm64/include/asm/kvm_pgtable.h  | 164 ++-
 arch/arm64/include/asm/pgtable-prot.h |   4 +-
 arch/arm64/include/asm/sections.h |   1 +
 arch/arm64/kernel/asm-offsets.c   |   3 +
 arch/arm64/kernel/cpufeature.c|  13 +
 arch/arm64/kernel/image-vars.h|  30 ++
 arch/arm64/kernel/vmlinux.lds.S   |  74 ++--
 arch/arm64/kvm/arm.c  | 199 +++--
 arch/arm64/kvm/hyp/Makefile   |   2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h   |  28 +-
 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h |  14 +
 arch/arm64/kvm/hyp/include/nvhe/gfp.h |  68 +++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  36 ++
 arch/arm64/kvm/hyp/include/nvhe/memory.h  |  52 +++
 arch/arm64/kvm/hyp/include/nvhe/mm.h  |  96 
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h|  92 
 arch/arm64/kvm/hyp/nvhe/Makefile  |   9 +-
 arch/arm64/kvm/hyp/nvhe/cache.S   |  13 +
 arch/arm64/kvm/hyp/nvhe/early_alloc.c |  54 +++
 arch/arm64/kvm/hyp/nvhe/hyp-init.S|  42 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c|  68 +++
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c |   8 +
 arch/arm64/kvm/hyp/nvhe/hyp.

[PATCH v6 01/38] arm64: lib: Annotate {clear,copy}_page() as position-independent

2021-03-19 Thread Quentin Perret
From: Will Deacon 

clear_page() and copy_page() are suitable for use outside of the kernel
address space, so annotate them as position-independent code.

Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/lib/clear_page.S | 4 ++--
 arch/arm64/lib/copy_page.S  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
index 073acbf02a7c..b84b179edba3 100644
--- a/arch/arm64/lib/clear_page.S
+++ b/arch/arm64/lib/clear_page.S
@@ -14,7 +14,7 @@
  * Parameters:
  * x0 - dest
  */
-SYM_FUNC_START(clear_page)
+SYM_FUNC_START_PI(clear_page)
mrs x1, dczid_el0
and w1, w1, #0xf
mov x2, #4
@@ -25,5 +25,5 @@ SYM_FUNC_START(clear_page)
tst x0, #(PAGE_SIZE - 1)
b.ne1b
ret
-SYM_FUNC_END(clear_page)
+SYM_FUNC_END_PI(clear_page)
 EXPORT_SYMBOL(clear_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index e7a793961408..29144f4cd449 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,7 +17,7 @@
  * x0 - dest
  * x1 - src
  */
-SYM_FUNC_START(copy_page)
+SYM_FUNC_START_PI(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
// Prefetch three cache lines ahead.
prfmpldl1strm, [x1, #128]
@@ -75,5 +75,5 @@ alternative_else_nop_endif
stnpx16, x17, [x0, #112 - 256]
 
ret
-SYM_FUNC_END(copy_page)
+SYM_FUNC_END_PI(copy_page)
 EXPORT_SYMBOL(copy_page)
-- 
2.31.0.rc2.261.g7f71774620-goog



Re: [PATCH 1/2] KVM: arm64: Introduce KVM_PGTABLE_S2_NOFWB Stage-2 flag

2021-03-17 Thread Quentin Perret
On Wednesday 17 Mar 2021 at 14:42:46 (+), Will Deacon wrote:
> On Wed, Mar 17, 2021 at 02:17:13PM +0000, Quentin Perret wrote:
> > In order to further configure stage-2 page-tables, pass flags to the
> > init function using a new enum.
> > 
> > The first of these flags allows to disable FWB even if the hardware
> > supports it as we will need to do so for the host stage-2.
> > 
> > Signed-off-by: Quentin Perret 
> > 
> > ---
> > 
> > One question is, do we want to use stage2_has_fwb() everywhere, including
> > guest-specific paths (e.g. kvm_arch_prepare_memory_region(), ...) ?
> > 
> > That'd make this patch more intrusive, but would make the whole codebase
> > work with FWB enabled on a guest by guest basis. I don't see us use that
> > anytime soon (other than maybe debug of some sort?) but it'd be good to
> > have an agreement.
> 
> I don't see the value in spreading this everywhere for now.

Good. Sounds like we're all in agreement.

> >  arch/arm64/include/asm/kvm_pgtable.h  | 19 +--
> >  arch/arm64/include/asm/pgtable-prot.h |  4 +--
> >  arch/arm64/kvm/hyp/pgtable.c  | 49 +--
> >  3 files changed, 50 insertions(+), 22 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
> > b/arch/arm64/include/asm/kvm_pgtable.h
> > index b93a2a3526ab..7382bdfb6284 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -56,6 +56,15 @@ struct kvm_pgtable_mm_ops {
> > phys_addr_t (*virt_to_phys)(void *addr);
> >  };
> >  
> > +/**
> > + * enum kvm_pgtable_stage2_flags - Stage-2 page-table flags.
> > + * @KVM_PGTABLE_S2_NOFWB:  Don't enforce Normal-WB even if the CPUs have
> > + * ARM64_HAS_STAGE2_FWB.
> > + */
> > +enum kvm_pgtable_stage2_flags {
> > +   KVM_PGTABLE_S2_NOFWB= BIT(0),
> > +};
> > +
> >  /**
> >   * struct kvm_pgtable - KVM page-table.
> >   * @ia_bits:   Maximum input address size, in bits.
> > @@ -72,6 +81,7 @@ struct kvm_pgtable {
> >  
> > /* Stage-2 only */
> > struct kvm_s2_mmu   *mmu;
> > +   enum kvm_pgtable_stage2_flags   flags;
> >  };
> >  
> >  /**
> > @@ -201,11 +211,16 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 
> > phys_shift);
> >   * @arch:  Arch-specific KVM structure representing the guest virtual
> >   * machine.
> >   * @mm_ops:Memory management callbacks.
> > + * @flags: Stage-2 configuration flags.
> >   *
> >   * Return: 0 on success, negative error code on failure.
> >   */
> > -int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
> > -   struct kvm_pgtable_mm_ops *mm_ops);
> > +int kvm_pgtable_stage2_init_flags(struct kvm_pgtable *pgt, struct kvm_arch 
> > *arch,
> > + struct kvm_pgtable_mm_ops *mm_ops,
> > + enum kvm_pgtable_stage2_flags flags);
> > +
> > +#define kvm_pgtable_stage2_init(pgt, arch, mm_ops) \
> > +   kvm_pgtable_stage2_init_flags(pgt, arch, mm_ops, 0)
> 
> nit: I think some of the kerneldoc refers to "kvm_pgtable_stage_init()"
> so that needs a trivial update to e.g. "kvm_pgtable_stage_init*()".

Will do.

> > diff --git a/arch/arm64/include/asm/pgtable-prot.h 
> > b/arch/arm64/include/asm/pgtable-prot.h
> > index 046be789fbb4..beeb722a82d3 100644
> > --- a/arch/arm64/include/asm/pgtable-prot.h
> > +++ b/arch/arm64/include/asm/pgtable-prot.h
> > @@ -72,10 +72,10 @@ extern bool arm64_use_ng_mappings;
> >  #define PAGE_KERNEL_EXEC   __pgprot(PROT_NORMAL & ~PTE_PXN)
> >  #define PAGE_KERNEL_EXEC_CONT  __pgprot((PROT_NORMAL & ~PTE_PXN) | 
> > PTE_CONT)
> >  
> > -#define PAGE_S2_MEMATTR(attr)  
> > \
> > +#define PAGE_S2_MEMATTR(attr, has_fwb) 
> > \
> > ({  \
> > u64 __val;  \
> > -   if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))  \
> > +   if (has_fwb)\
> > __val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr); \
> > else\
> > __val = PTE_S2_MEMATTR(MT_S2_ ## attr); \
> 
> Can you take the pgt structure instead of a bool here, or does it end up
> being really ugly?

It means I need to expose the stage2_has_fwb() helper in pgtable.h so I
can use it here. But Marc suggested that I introduce another macro along
the lines of

#define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))

which can be defined in pgtable.c and keep everything neatly contained
in there. So I think I'll go ahead with that unless you feel strongly
about it.

Cheers,
Quentin


Re: [PATCH 1/2] KVM: arm64: Introduce KVM_PGTABLE_S2_NOFWB Stage-2 flag

2021-03-17 Thread Quentin Perret
On Wednesday 17 Mar 2021 at 14:41:31 (+), Marc Zyngier wrote:
> Hi Quentin,
> 
> On Wed, 17 Mar 2021 14:17:13 +,
> Quentin Perret  wrote:
> > 
> > In order to further configure stage-2 page-tables, pass flags to the
> > init function using a new enum.
> > 
> > The first of these flags allows to disable FWB even if the hardware
> > supports it as we will need to do so for the host stage-2.
> > 
> > Signed-off-by: Quentin Perret 
> > 
> > ---
> > 
> > One question is, do we want to use stage2_has_fwb() everywhere, including
> > guest-specific paths (e.g. kvm_arch_prepare_memory_region(), ...) ?
> > 
> > That'd make this patch more intrusive, but would make the whole codebase
> > work with FWB enabled on a guest by guest basis. I don't see us use that
> > anytime soon (other than maybe debug of some sort?) but it'd be good to
> > have an agreement.
> 
> I'm not sure how useful that would be. We fought long and hard to get
> FWB, and I can't see a good reason to disable it for guests unless the
> HW was buggy (but in which case that'd be for everyone). I'd rather
> keep the changes small for now (this whole series is invasive
> enough!).

OK, that works for me.

> As for this patch, I only have a few cosmetic comments:

Happy with the suggestions, I'll fold that in v6.

Cheers,
Quentin


[PATCH 2/2] KVM: arm64: Disable FWB in host stage-2

2021-03-17 Thread Quentin Perret
We need the host to be in control of cacheability of its own mappings,
so let's disable FWB altogether in its stage 2.

Signed-off-by: Quentin Perret 

---

Obviously this will have to be folded in the relevant patch for v6, but
I kept it separate for the sake of review.
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c 
b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index dd03252b9574..c472c3becf40 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -94,8 +94,8 @@ int kvm_host_prepare_stage2(void *mem_pgt_pool, void 
*dev_pgt_pool)
if (ret)
return ret;
 
-   ret = kvm_pgtable_stage2_init(&host_kvm.pgt, &host_kvm.arch,
- &host_kvm.mm_ops);
+   ret = kvm_pgtable_stage2_init_flags(&host_kvm.pgt, &host_kvm.arch,
+   &host_kvm.mm_ops, 
KVM_PGTABLE_S2_NOFWB);
if (ret)
return ret;
 
@@ -116,8 +116,6 @@ int __pkvm_prot_finalize(void)
params->vttbr = kvm_get_vttbr(mmu);
params->vtcr = host_kvm.arch.vtcr;
params->hcr_el2 |= HCR_VM;
-   if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-   params->hcr_el2 |= HCR_FWB;
kvm_flush_dcache_to_poc(params, sizeof(*params));
 
write_sysreg(params->hcr_el2, hcr_el2);
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH 0/2] Fixes for FWB

2021-03-17 Thread Quentin Perret
Hi folks,

This is an alternative solution to the KVM_PGTABLE_PROT_S2_NOFWB patch I
shared earlier (and which is a bit of a hack).

With this series we basically force FWB off for the host stage-2, even
when the CPUs support it. This is done by passing flags to the pgtable
init function, and propagating them down where needed. It's a bit more
intrusive, but cleaner conceptually.

Thoughts?

Thanks,
Quentin

Quentin Perret (2):
  KVM: arm64: Introduce KVM_PGTABLE_S2_NOFWB Stage-2 flag
  KVM: arm64: Disable FWB in host stage-2

 arch/arm64/include/asm/kvm_pgtable.h  | 19 +--
 arch/arm64/include/asm/pgtable-prot.h |  4 +--
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  6 ++--
 arch/arm64/kvm/hyp/pgtable.c  | 49 +--
 4 files changed, 52 insertions(+), 26 deletions(-)

-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH 1/2] KVM: arm64: Introduce KVM_PGTABLE_S2_NOFWB Stage-2 flag

2021-03-17 Thread Quentin Perret
In order to further configure stage-2 page-tables, pass flags to the
init function using a new enum.

The first of these flags allows disabling FWB even if the hardware
supports it, as we will need to do so for the host stage-2.

Signed-off-by: Quentin Perret 

---

One question is, do we want to use stage2_has_fwb() everywhere, including
guest-specific paths (e.g. kvm_arch_prepare_memory_region(), ...) ?

That'd make this patch more intrusive, but would make the whole codebase
work with FWB enabled on a guest by guest basis. I don't see us use that
anytime soon (other than maybe debug of some sort?) but it'd be good to
have an agreement.
---
 arch/arm64/include/asm/kvm_pgtable.h  | 19 +--
 arch/arm64/include/asm/pgtable-prot.h |  4 +--
 arch/arm64/kvm/hyp/pgtable.c  | 49 +--
 3 files changed, 50 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index b93a2a3526ab..7382bdfb6284 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -56,6 +56,15 @@ struct kvm_pgtable_mm_ops {
phys_addr_t (*virt_to_phys)(void *addr);
 };
 
+/**
+ * enum kvm_pgtable_stage2_flags - Stage-2 page-table flags.
+ * @KVM_PGTABLE_S2_NOFWB:  Don't enforce Normal-WB even if the CPUs have
+ * ARM64_HAS_STAGE2_FWB.
+ */
+enum kvm_pgtable_stage2_flags {
+   KVM_PGTABLE_S2_NOFWB= BIT(0),
+};
+
 /**
  * struct kvm_pgtable - KVM page-table.
  * @ia_bits:   Maximum input address size, in bits.
@@ -72,6 +81,7 @@ struct kvm_pgtable {
 
/* Stage-2 only */
struct kvm_s2_mmu   *mmu;
+   enum kvm_pgtable_stage2_flags   flags;
 };
 
 /**
@@ -201,11 +211,16 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift);
  * @arch:  Arch-specific KVM structure representing the guest virtual
  * machine.
  * @mm_ops:Memory management callbacks.
+ * @flags: Stage-2 configuration flags.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
-   struct kvm_pgtable_mm_ops *mm_ops);
+int kvm_pgtable_stage2_init_flags(struct kvm_pgtable *pgt, struct kvm_arch 
*arch,
+ struct kvm_pgtable_mm_ops *mm_ops,
+ enum kvm_pgtable_stage2_flags flags);
+
+#define kvm_pgtable_stage2_init(pgt, arch, mm_ops) \
+   kvm_pgtable_stage2_init_flags(pgt, arch, mm_ops, 0)
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
diff --git a/arch/arm64/include/asm/pgtable-prot.h 
b/arch/arm64/include/asm/pgtable-prot.h
index 046be789fbb4..beeb722a82d3 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -72,10 +72,10 @@ extern bool arm64_use_ng_mappings;
 #define PAGE_KERNEL_EXEC   __pgprot(PROT_NORMAL & ~PTE_PXN)
 #define PAGE_KERNEL_EXEC_CONT  __pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
 
-#define PAGE_S2_MEMATTR(attr)  \
+#define PAGE_S2_MEMATTR(attr, has_fwb) \
({  \
u64 __val;  \
-   if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))  \
+   if (has_fwb)\
__val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr); \
else\
__val = PTE_S2_MEMATTR(MT_S2_ ## attr); \
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3a971df278bd..dee8aaeaf13e 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -507,12 +507,25 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
return vtcr;
 }
 
-static int stage2_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
+static bool stage2_has_fwb(struct kvm_pgtable *pgt)
+{
+   if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+   return false;
+
+   return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
+}
+
+static int stage2_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep,
+   struct kvm_pgtable *pgt)
 {
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
-   kvm_pte_t attr = device ? PAGE_S2_MEMATTR(DEVICE_nGnRE) :
-   PAGE_S2_MEMATTR(NORMAL);
u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
+   kvm_pte_t attr;
+
+   if (device)
+   attr = PAGE_S2_MEMATTR(DEVICE_nGnRE, stage2_has_fwb(pgt));
+   else
+   attr = PAGE_S2_MEMATTR(NORMAL, stage2_has_fwb(pgt));
 
if (!(prot &

Re: [PATCH v5 33/36] KVM: arm64: Wrap the host with a stage 2

2021-03-17 Thread Quentin Perret
On Wednesday 17 Mar 2021 at 09:41:09 (+0100), Mate Toth-Pal wrote:
> On 2021-03-16 18:46, Quentin Perret wrote:
> > On Tuesday 16 Mar 2021 at 16:16:18 (+0100), Mate Toth-Pal wrote:
> > > On 2021-03-16 15:29, Quentin Perret wrote:
> > > > On Tuesday 16 Mar 2021 at 12:53:53 (+), Quentin Perret wrote:
> > > > > On Tuesday 16 Mar 2021 at 13:28:42 (+0100), Mate Toth-Pal wrote:
> > > > > > Changing the value of MT_S2_FWB_NORMAL to 7 would change this 
> > > > > > behavior, and
> > > > > > the resulting memory type would be device.
> > > > > 
> > > > > Sounds like the correct fix here -- see below.
> > > > 
> > > > Just to clarify this, I meant this should be the configuration for the
> > > > host stage-2. We'll want to keep the existing behaviour for guests I
> > > > believe.
> > > 
> > > I Agree.
> > 
> > OK, so the below seems to boot on my non-FWB-capable hardware and should
> > fix the issue. Could you by any chance give it a spin?
> > 
> 
> Sure, I can give it a go. I was trying to apply the patch on top of 
> https://android-kvm.googlesource.com/linux/+/refs/heads/qperret/host-stage2-v5
> but it seems that your base is significantly different. Can you give some
> hints what should I use as base?

Oh interesting, it _should_ apply on v5. I just pushed a branch with
everything applied if that helps:

  https://android-kvm.googlesource.com/linux qperret/wip/fix-fwb-host-stage2

Thanks again!
Quentin


Re: [PATCH v5 33/36] KVM: arm64: Wrap the host with a stage 2

2021-03-16 Thread Quentin Perret
On Tuesday 16 Mar 2021 at 16:16:18 (+0100), Mate Toth-Pal wrote:
> On 2021-03-16 15:29, Quentin Perret wrote:
> > On Tuesday 16 Mar 2021 at 12:53:53 (+), Quentin Perret wrote:
> > > On Tuesday 16 Mar 2021 at 13:28:42 (+0100), Mate Toth-Pal wrote:
> > > > Changing the value of MT_S2_FWB_NORMAL to 7 would change this behavior, 
> > > > and
> > > > the resulting memory type would be device.
> > > 
> > > Sounds like the correct fix here -- see below.
> > 
> > Just to clarify this, I meant this should be the configuration for the
> > host stage-2. We'll want to keep the existing behaviour for guests I
> > believe.
> 
> I Agree.

OK, so the below seems to boot on my non-FWB-capable hardware and should
fix the issue. Could you by any chance give it a spin?

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index b93a2a3526ab..b2066bd03ca2 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -76,10 +76,11 @@ struct kvm_pgtable {

 /**
  * enum kvm_pgtable_prot - Page-table permissions and attributes.
- * @KVM_PGTABLE_PROT_X:Execute permission.
- * @KVM_PGTABLE_PROT_W:Write permission.
- * @KVM_PGTABLE_PROT_R:Read permission.
- * @KVM_PGTABLE_PROT_DEVICE:   Device attributes.
+ * @KVM_PGTABLE_PROT_X:Execute permission.
+ * @KVM_PGTABLE_PROT_W:Write permission.
+ * @KVM_PGTABLE_PROT_R:Read permission.
+ * @KVM_PGTABLE_PROT_DEVICE:   Device attributes.
+ * @KVM_PGTABLE_PROT_S2_NOFWB: Don't enforce Normal-WB with FWB.
  */
 enum kvm_pgtable_prot {
KVM_PGTABLE_PROT_X  = BIT(0),
@@ -87,6 +88,8 @@ enum kvm_pgtable_prot {
KVM_PGTABLE_PROT_R  = BIT(2),

KVM_PGTABLE_PROT_DEVICE = BIT(3),
+
+   KVM_PGTABLE_PROT_S2_NOFWB   = BIT(4),
 };

 #define PAGE_HYP   (KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c759faf7a1ff..e695d2e1839d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -144,13 +144,16 @@
  * Memory types for Stage-2 translation
  */
 #define MT_S2_NORMAL   0xf
+#define MT_S2_WEAK MT_S2_NORMAL
 #define MT_S2_DEVICE_nGnRE 0x1

 /*
  * Memory types for Stage-2 translation when ID_AA64MMFR2_EL1.FWB is 0001
- * Stage-2 enforces Normal-WB and Device-nGnRE
+ * Stage-2 enforces Normal-WB and Device-nGnRE by default. The 'weak' mode
+ * honors Stage-1 attributes.
  */
 #define MT_S2_FWB_NORMAL   6
+#define MT_S2_FWB_WEAK 7
 #define MT_S2_FWB_DEVICE_nGnRE 1

 #ifdef CONFIG_ARM64_4K_PAGES
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c 
b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index dd03252b9574..1ff72babe565 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -214,6 +214,8 @@ static int host_stage2_idmap(u64 addr)

if (is_memory)
prot |= KVM_PGTABLE_PROT_X;
+   else
+   prot |= KVM_PGTABLE_PROT_S2_NOFWB;

hyp_spin_lock(&host_kvm.lock);
ret = kvm_pgtable_stage2_find_range(&host_kvm.pgt, addr, prot, &range);
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3a971df278bd..bd1b8464a537 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -343,6 +343,9 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, 
kvm_pte_t *ptep)
if (!(prot & KVM_PGTABLE_PROT_R))
return -EINVAL;

+   if (prot & KVM_PGTABLE_PROT_S2_NOFWB)
+   return -EINVAL;
+
if (prot & KVM_PGTABLE_PROT_X) {
if (prot & KVM_PGTABLE_PROT_W)
return -EINVAL;
@@ -510,9 +513,18 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
 static int stage2_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 {
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
-   kvm_pte_t attr = device ? PAGE_S2_MEMATTR(DEVICE_nGnRE) :
-   PAGE_S2_MEMATTR(NORMAL);
+   bool nofwb = prot & KVM_PGTABLE_PROT_S2_NOFWB;
u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
+   kvm_pte_t attr;
+
+   WARN_ON(nofwb && device);
+
+   if (device)
+   attr = PAGE_S2_MEMATTR(DEVICE_nGnRE);
+   else if (nofwb)
+   attr = PAGE_S2_MEMATTR(WEAK);
+   else
+   attr = PAGE_S2_MEMATTR(NORMAL);

if (!(prot & KVM_PGTABLE_PROT_X))
attr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;


Re: [PATCH v5 33/36] KVM: arm64: Wrap the host with a stage 2

2021-03-16 Thread Quentin Perret
On Tuesday 16 Mar 2021 at 12:53:53 (+), Quentin Perret wrote:
> On Tuesday 16 Mar 2021 at 13:28:42 (+0100), Mate Toth-Pal wrote:
> > Changing the value of MT_S2_FWB_NORMAL to 7 would change this behavior, and
> > the resulting memory type would be device.
> 
> Sounds like the correct fix here -- see below.

Just to clarify this, I meant this should be the configuration for the
host stage-2. We'll want to keep the existing behaviour for guests I
believe.

Thanks,
Quentin


Re: [PATCH v5 33/36] KVM: arm64: Wrap the host with a stage 2

2021-03-16 Thread Quentin Perret
On Tuesday 16 Mar 2021 at 13:28:42 (+0100), Mate Toth-Pal wrote:
> Testing the latest version of the patchset, we seem to have found another
> thing related to FEAT_S2FWB.

Argh! I wish I could put my hands on hardware with FWB. Thanks again for
the report.

> This function always sets Normal memory in the stage 2 table, even if the
> address in stage 1 was mapped as device memory. However, with the current
> settings for normal memory (i.e. MT_S2_FWB_NORMAL being defined to 6)
> according to the architecture (See Arm ARM, 'D5.5.5 Stage 2 memory region
> type and cacheability attributes when FEAT_S2FWB is implemented') the
> resulting attributes will be 'Normal Write-Back' even if the stage 1 mapping
> sets device memory. Accessing device memory mapped like this causes an
> SError on some platforms with FEAT_S2FWB being implemented.

Right.

> Changing the value of MT_S2_FWB_NORMAL to 7 would change this behavior, and
> the resulting memory type would be device.

Sounds like the correct fix here -- see below.

> Another solution would be to add an else branch to the last 'if' above like
> this:
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index fffa432ce3eb..54e5d3b0b2e1 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -214,6 +214,8 @@ static int host_stage2_idmap(u64 addr)
> 
>     if (is_memory)
>     prot |= KVM_PGTABLE_PROT_X;
> +   else
> +   prot |= KVM_PGTABLE_PROT_DEVICE;
> 
>     hyp_spin_lock(&host_kvm.lock);
>     ret = kvm_pgtable_stage2_find_range(&host_kvm.pgt, addr, prot,
> &range);

While this would work in this particular case, I don't think we should
force all non-RAM accesses as device as the host may have reasons not to
want this (e.g. accessing SRAM). Your first suggestion allows us to do
just that, so it is preferable I think.

Thanks,
Quentin


Re: [PATCH v5 14/36] KVM: arm64: Provide __flush_dcache_area at EL2

2021-03-15 Thread Quentin Perret
On Monday 15 Mar 2021 at 16:33:23 (+), Will Deacon wrote:
> On Mon, Mar 15, 2021 at 02:35:14PM +0000, Quentin Perret wrote:
> > We will need to do cache maintenance at EL2 soon, so compile a copy of
> > __flush_dcache_area at EL2, and provide a copy of arm64_ftr_reg_ctrel0
> > as it is needed by the read_ctr macro.
> > 
> > Signed-off-by: Quentin Perret 
> > ---
> >  arch/arm64/include/asm/kvm_cpufeature.h |  2 ++
> >  arch/arm64/kvm/hyp/nvhe/Makefile|  3 ++-
> >  arch/arm64/kvm/hyp/nvhe/cache.S | 13 +
> >  arch/arm64/kvm/sys_regs.c   |  1 +
> >  4 files changed, 18 insertions(+), 1 deletion(-)
> >  create mode 100644 arch/arm64/kvm/hyp/nvhe/cache.S
> > 
> > diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
> > b/arch/arm64/include/asm/kvm_cpufeature.h
> > index 3fd9f60d2180..efba1b89b8a4 100644
> > --- a/arch/arm64/include/asm/kvm_cpufeature.h
> > +++ b/arch/arm64/include/asm/kvm_cpufeature.h
> > @@ -13,3 +13,5 @@
> >  #define KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg 
> > kvm_nvhe_sym(name)
> >  #endif
> >  #endif
> > +
> > +KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
> 
> I still think this is a bit weird. If you really want to macro-ise stuff,
> then why not follow the sort of thing we do for e.g. per-cpu variables and
> have separate DECLARE_HYP_CPU_FTR_REG and DEFINE_HYP_CPU_FTR_REG macros.
> 
> That way kvm_cpufeature.h can have header guards like a normal header and
> we can drop the '#ifndef KVM_HYP_CPU_FTR_REG' altogether. I don't think
> the duplication of the symbol name really matters -- it should fail at
> build time if something is missing.

I just tend to hate unnecessary boilerplate, but if you feel strongly
about it, happy to change :)
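
For reference, a minimal sketch of the DECLARE/DEFINE split being suggested
(macro names are placeholders, not taken from the series):

#define DECLARE_KVM_HYP_CPU_FTR_REG(name) \
	extern struct arm64_ftr_reg kvm_nvhe_sym(name)

#define DEFINE_KVM_HYP_CPU_FTR_REG(name) \
	struct arm64_ftr_reg kvm_nvhe_sym(name)

/* kvm_cpufeature.h could then keep normal header guards and simply do: */
DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);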

Cheers,
Quentin


Re: [PATCH v5 29/36] KVM: arm64: Use page-table to track page ownership

2021-03-15 Thread Quentin Perret
On Monday 15 Mar 2021 at 16:36:19 (+), Will Deacon wrote:
> On Mon, Mar 15, 2021 at 02:35:29PM +0000, Quentin Perret wrote:
> > As the host stage 2 will be identity mapped, all the .hyp memory regions
> > and/or memory pages donated to protected guestis will have to marked
> > invalid in the host stage 2 page-table. At the same time, the hypervisor
> > will need a way to track the ownership of each physical page to ensure
> > memory sharing or donation between entities (host, guests, hypervisor) is
> > legal.
> > 
> > In order to enable this tracking at EL2, let's use the host stage 2
> > page-table itself. The idea is to use the top bits of invalid mappings
> > to store the unique identifier of the page owner. The page-table owner
> > (the host) gets identifier 0 such that, at boot time, it owns the entire
> > IPA space as the pgd starts zeroed.
> > 
> > Provide kvm_pgtable_stage2_set_owner() which allows modifying the
> > ownership of pages in the host stage 2. It re-uses most of the map()
> > logic, but ends up creating invalid mappings instead. This impacts
> > how we do refcounting, as we now need to count invalid mappings when they
> > are used for ownership tracking.
> > 
> > Signed-off-by: Quentin Perret 
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h |  21 +
> >  arch/arm64/kvm/hyp/pgtable.c | 127 ++-
> >  2 files changed, 124 insertions(+), 24 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
> > b/arch/arm64/include/asm/kvm_pgtable.h
> > index 4ae19247837b..683e96abdc24 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -238,6 +238,27 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, 
> > u64 addr, u64 size,
> >u64 phys, enum kvm_pgtable_prot prot,
> >void *mc);
> >  
> > +/**
> > + * kvm_pgtable_stage2_set_owner() - Annotate invalid mappings with metadata
> > + * encoding the ownership of a page in the
> > + * IPA space.
> 
> The function does more than this, though, as it will also go ahead and unmap
> existing valid mappings which I think should be mentioned here, no?

Right, I see what you mean. How about:

'Unmap and annotate pages in the IPA space to track ownership'

> > +int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 
> > size,
> > +void *mc, u8 owner_id)
> > +{
> > +   int ret;
> > +   struct stage2_map_data map_data = {
> > +   .phys   = KVM_PHYS_INVALID,
> > +   .mmu= pgt->mmu,
> > +   .memcache   = mc,
> > +   .mm_ops = pgt->mm_ops,
> > +   .owner_id   = owner_id,
> > +   };
> > +   struct kvm_pgtable_walker walker = {
> > +   .cb = stage2_map_walker,
> > +   .flags  = KVM_PGTABLE_WALK_TABLE_PRE |
> > + KVM_PGTABLE_WALK_LEAF |
> > + KVM_PGTABLE_WALK_TABLE_POST,
> > +   .arg= &map_data,
> > +   };
> > +
> > +   if (owner_id > KVM_MAX_OWNER_ID)
> > +   return -EINVAL;
> > +
> > +   ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> > +   dsb(ishst);
> 
> Why is the DSB needed here? afaict, we only ever unmap a valid entry (which
> will have a DSB as part of the TLBI sequence) or we update the owner for an
> existing invalid entry, in which case the walker doesn't care.

Indeed, that is now unnecessary. I'll remove it.

Thanks,
Quentin


[PATCH v5 32/36] KVM: arm64: Provide sanitized mmfr* registers at EL2

2021-03-15 Thread Quentin Perret
We will need to read sanitized values of mmfr{0,1}_el1 at EL2 soon, so
add them to the list of copied variables.

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_cpufeature.h | 2 ++
 arch/arm64/kvm/sys_regs.c   | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
b/arch/arm64/include/asm/kvm_cpufeature.h
index efba1b89b8a4..48cba6cecd71 100644
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -15,3 +15,5 @@
 #endif
 
 KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
+KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr0_el1);
+KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr1_el1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3ec34c25e877..dfb3b4f9ca84 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2784,6 +2784,8 @@ struct __ftr_reg_copy_entry {
struct arm64_ftr_reg*dst;
 } hyp_ftr_regs[] __initdata = {
CPU_FTR_REG_HYP_COPY(SYS_CTR_EL0, arm64_ftr_reg_ctrel0),
+   CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR0_EL1, 
arm64_ftr_reg_id_aa64mmfr0_el1),
+   CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR1_EL1, 
arm64_ftr_reg_id_aa64mmfr1_el1),
 };
 
 void __init setup_kvm_el2_caps(void)
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 07/36] KVM: arm64: Introduce a BSS section for use at Hyp

2021-03-15 Thread Quentin Perret
Currently, the hyp code cannot make full use of a bss, as the kernel
section is mapped read-only.

While this mapping could simply be changed to read-write, it would
intermingle the hyp and kernel state even more than they currently are.
Instead, introduce a __hyp_bss section, that uses reserved pages, and
create the appropriate RW hyp mappings during KVM init.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/vmlinux.lds.S   | 52 ---
 arch/arm64/kvm/arm.c  | 14 -
 arch/arm64/kvm/hyp/nvhe/hyp.lds.S |  1 +
 4 files changed, 49 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/sections.h 
b/arch/arm64/include/asm/sections.h
index 2f36b16a5b5d..e4ad9db53af1 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -13,6 +13,7 @@ extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 extern char __hyp_text_start[], __hyp_text_end[];
 extern char __hyp_rodata_start[], __hyp_rodata_end[];
 extern char __hyp_reloc_begin[], __hyp_reloc_end[];
+extern char __hyp_bss_start[], __hyp_bss_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
 extern char __initdata_begin[], __initdata_end[];
 extern char __inittext_begin[], __inittext_end[];
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..e96173ce211b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -5,24 +5,7 @@
  * Written by Martin Mares 
  */
 
-#define RO_EXCEPTION_TABLE_ALIGN   8
-#define RUNTIME_DISCARD_EXIT
-
-#include 
-#include 
 #include 
-#include 
-#include 
-#include 
-
-#include "image.h"
-
-OUTPUT_ARCH(aarch64)
-ENTRY(_text)
-
-jiffies = jiffies_64;
-
-
 #ifdef CONFIG_KVM
 #define HYPERVISOR_EXTABLE \
. = ALIGN(SZ_8);\
@@ -51,13 +34,43 @@ jiffies = jiffies_64;
__hyp_reloc_end = .;\
}
 
+#define BSS_FIRST_SECTIONS \
+   __hyp_bss_start = .;\
+   *(HYP_SECTION_NAME(.bss))   \
+   . = ALIGN(PAGE_SIZE);   \
+   __hyp_bss_end = .;
+
+/*
+ * We require that __hyp_bss_start and __bss_start are aligned, and enforce it
+ * with an assertion. But the BSS_SECTION macro places an empty .sbss section
+ * between them, which can in some cases cause the linker to misalign them. To
+ * work around the issue, force a page alignment for __bss_start.
+ */
+#define SBSS_ALIGN PAGE_SIZE
 #else /* CONFIG_KVM */
 #define HYPERVISOR_EXTABLE
 #define HYPERVISOR_DATA_SECTIONS
 #define HYPERVISOR_PERCPU_SECTION
 #define HYPERVISOR_RELOC_SECTION
+#define SBSS_ALIGN 0
 #endif
 
+#define RO_EXCEPTION_TABLE_ALIGN   8
+#define RUNTIME_DISCARD_EXIT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "image.h"
+
+OUTPUT_ARCH(aarch64)
+ENTRY(_text)
+
+jiffies = jiffies_64;
+
 #define HYPERVISOR_TEXT\
/*  \
 * Align to 4 KB so that\
@@ -276,7 +289,7 @@ SECTIONS
__pecoff_data_rawsize = ABSOLUTE(. - __initdata_begin);
_edata = .;
 
-   BSS_SECTION(0, 0, 0)
+   BSS_SECTION(SBSS_ALIGN, 0, 0)
 
. = ALIGN(PAGE_SIZE);
init_pg_dir = .;
@@ -324,6 +337,9 @@ ASSERT(__hibernate_exit_text_end - 
(__hibernate_exit_text_start & ~(SZ_4K - 1))
 ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
"Entry trampoline text too big")
 #endif
+#ifdef CONFIG_KVM
+ASSERT(__hyp_bss_start == __bss_start, "HYP and Host BSS are misaligned")
+#endif
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2d1e7ef69c04..3f8bcf8db036 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1770,7 +1770,19 @@ static int init_hyp_mode(void)
goto out_err;
}
 
-   err = create_hyp_mappings(kvm_ksym_ref(__bss_start),
+   /*
+* .hyp.bss is guaranteed to be placed at the beginning of the .bss
+* section thanks to an assertion in the linker script. Map it RW and
+* the rest of .bss RO.
+*/
+   err = create_hyp_mappings(kvm_ksym_ref(__hyp_bss_start),
+ kvm_ksym_ref(__hyp_bss_end), PAGE_HYP);
+   if (err) {
+   kvm_err("Cannot map hyp bss section: %d\n", err);
+   goto out_err;
+   }
+
+   err = create_hyp_mappings(kvm_ksym_ref(__hyp_bss_end),
  kvm_ksym_ref(__bss_stop), PAGE_HYP_RO);
if (err) {
   

[PATCH v5 06/36] KVM: arm64: Factor memory allocation out of pgtable.c

2021-03-15 Thread Quentin Perret
In preparation for enabling the creation of page-tables at EL2, factor
all memory allocation out of the page-table code, hence making it
re-usable with any compatible memory allocator.

No functional changes intended.
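
As a rough illustration of the resulting API (not part of the patch, and the
helper names below are made up), a kernel-side user could wire the callbacks
to the regular page allocator along these lines:

static void *example_zalloc_page(void *arg)
{
	return (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
}

static void example_get_page(void *addr)
{
	get_page(virt_to_page(addr));
}

static void example_put_page(void *addr)
{
	put_page(virt_to_page(addr));
}

static struct kvm_pgtable_mm_ops example_mm_ops = {
	.zalloc_page	= example_zalloc_page,
	.get_page	= example_get_page,
	.put_page	= example_put_page,
	/* remaining callbacks omitted for brevity */
};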

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 41 +++-
 arch/arm64/kvm/hyp/pgtable.c | 98 +---
 arch/arm64/kvm/mmu.c | 66 ++-
 3 files changed, 163 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 8886d43cfb11..bbe840e430cb 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -13,17 +13,50 @@
 
 typedef u64 kvm_pte_t;
 
+/**
+ * struct kvm_pgtable_mm_ops - Memory management callbacks.
+ * @zalloc_page:   Allocate a single zeroed memory page. The @arg parameter
+ * can be used by the walker to pass a memcache. The
+ * initial refcount of the page is 1.
+ * @zalloc_pages_exact:Allocate an exact number of zeroed memory 
pages. The
+ * @size parameter is in bytes, and is rounded-up to the
+ * next page boundary. The resulting allocation is
+ * physically contiguous.
+ * @free_pages_exact:  Free an exact number of memory pages previously
+ * allocated by zalloc_pages_exact.
+ * @get_page:  Increment the refcount on a page.
+ * @put_page:  Decrement the refcount on a page. When the refcount
+ * reaches 0 the page is automatically freed.
+ * @page_count:Return the refcount of a page.
+ * @phys_to_virt:  Convert a physical address into a virtual address mapped
+ * in the current context.
+ * @virt_to_phys:  Convert a virtual address mapped in the current context
+ * into a physical address.
+ */
+struct kvm_pgtable_mm_ops {
+   void*   (*zalloc_page)(void *arg);
+   void*   (*zalloc_pages_exact)(size_t size);
+   void(*free_pages_exact)(void *addr, size_t size);
+   void(*get_page)(void *addr);
+   void(*put_page)(void *addr);
+   int (*page_count)(void *addr);
+   void*   (*phys_to_virt)(phys_addr_t phys);
+   phys_addr_t (*virt_to_phys)(void *addr);
+};
+
 /**
  * struct kvm_pgtable - KVM page-table.
  * @ia_bits:   Maximum input address size, in bits.
  * @start_level:   Level at which the page-table walk starts.
  * @pgd:   Pointer to the first top-level entry of the page-table.
+ * @mm_ops:Memory management callbacks.
  * @mmu:   Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
  */
 struct kvm_pgtable {
u32 ia_bits;
u32 start_level;
kvm_pte_t   *pgd;
+   struct kvm_pgtable_mm_ops   *mm_ops;
 
/* Stage-2 only */
struct kvm_s2_mmu   *mmu;
@@ -86,10 +119,12 @@ struct kvm_pgtable_walker {
  * kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
  * @va_bits:   Maximum virtual address bits.
+ * @mm_ops:Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits);
+int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
+struct kvm_pgtable_mm_ops *mm_ops);
 
 /**
  * kvm_pgtable_hyp_destroy() - Destroy an unused hypervisor stage-1 page-table.
@@ -126,10 +161,12 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 
addr, u64 size, u64 phys,
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
  * @kvm:   KVM structure representing the guest virtual machine.
+ * @mm_ops:Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm);
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+   struct kvm_pgtable_mm_ops *mm_ops);
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 81fe032f34d1..b975a67d1f85 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -152,9 +152,9 @@ static kvm_pte_t kvm_phys_to_pte(u64 pa)
return pte;
 }
 
-static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte)
+static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte, struct kvm_pgtable_mm_ops 
*mm_ops)
 {
-   return __va(kvm_pte_to_phys(pte

[PATCH v5 30/36] KVM: arm64: Refactor the *_map_set_prot_attr() helpers

2021-03-15 Thread Quentin Perret
In order to ease their re-use in other code paths, refactor the
*_map_set_prot_attr() helpers to not depend on a map_data struct.
No functional change intended.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/pgtable.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bd44e84dedc4..a5347d78293f 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -324,8 +324,7 @@ struct hyp_map_data {
struct kvm_pgtable_mm_ops   *mm_ops;
 };
 
-static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
-struct hyp_map_data *data)
+static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 {
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
u32 mtype = device ? MT_DEVICE_nGnRE : MT_NORMAL;
@@ -350,7 +349,8 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_AP, ap);
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_SH, sh);
attr |= KVM_PTE_LEAF_ATTR_LO_S1_AF;
-   data->attr = attr;
+   *ptep = attr;
+
return 0;
 }
 
@@ -407,7 +407,7 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, 
u64 size, u64 phys,
.arg= &map_data,
};
 
-   ret = hyp_map_set_prot_attr(prot, &map_data);
+   ret = hyp_set_prot_attr(prot, &map_data.attr);
if (ret)
return ret;
 
@@ -500,8 +500,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
return vtcr;
 }
 
-static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
-   struct stage2_map_data *data)
+static int stage2_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 {
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
kvm_pte_t attr = device ? PAGE_S2_MEMATTR(DEVICE_nGnRE) :
@@ -521,7 +520,8 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot 
prot,
 
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S2_SH, sh);
attr |= KVM_PTE_LEAF_ATTR_LO_S2_AF;
-   data->attr = attr;
+   *ptep = attr;
+
return 0;
 }
 
@@ -741,7 +741,7 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 
addr, u64 size,
.arg= &map_data,
};
 
-   ret = stage2_map_set_prot_attr(prot, &map_data);
+   ret = stage2_set_prot_attr(prot, &map_data.attr);
if (ret)
return ret;
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 29/36] KVM: arm64: Use page-table to track page ownership

2021-03-15 Thread Quentin Perret
As the host stage 2 will be identity mapped, all the .hyp memory regions
and/or memory pages donated to protected guests will have to be marked
invalid in the host stage 2 page-table. At the same time, the hypervisor
will need a way to track the ownership of each physical page to ensure
memory sharing or donation between entities (host, guests, hypervisor) is
legal.

In order to enable this tracking at EL2, let's use the host stage 2
page-table itself. The idea is to use the top bits of invalid mappings
to store the unique identifier of the page owner. The page-table owner
(the host) gets identifier 0 such that, at boot time, it owns the entire
IPA space as the pgd starts zeroed.

Provide kvm_pgtable_stage2_set_owner() which allows modifying the
ownership of pages in the host stage 2. It re-uses most of the map()
logic, but ends up creating invalid mappings instead. This impacts
how we do refcounting, as we now need to count invalid mappings when they
are used for ownership tracking.
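
For illustration, the encoding side below is what the patch adds as
kvm_init_invalid_leaf_owner(); the matching decode helper is a hypothetical
sketch, not part of the patch:

static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
{
	/* The valid bit (bit 0) stays clear, the owner id lives in the top bits. */
	return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
}

/* Hypothetical helper: read the owner id back from an invalid PTE. */
static u8 kvm_invalid_pte_owner(kvm_pte_t pte)
{
	return FIELD_GET(KVM_INVALID_PTE_OWNER_MASK, pte);
}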

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h |  21 +
 arch/arm64/kvm/hyp/pgtable.c | 127 ++-
 2 files changed, 124 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 4ae19247837b..683e96abdc24 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -238,6 +238,27 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 
addr, u64 size,
   u64 phys, enum kvm_pgtable_prot prot,
   void *mc);
 
+/**
+ * kvm_pgtable_stage2_set_owner() - Annotate invalid mappings with metadata
+ * encoding the ownership of a page in the
+ * IPA space.
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:  Base intermediate physical address to annotate.
+ * @size:  Size of the annotated range.
+ * @mc:Cache of pre-allocated and zeroed memory from which to 
allocate
+ * page-table pages.
+ * @owner_id:  Unique identifier for the owner of the page.
+ *
+ * By default, all page-tables are owned by identifier 0. This function can be
+ * used to mark portions of the IPA space as owned by other entities. When a
+ * stage 2 is used with identity-mappings, these annotations allow to use the
+ * page-table data structure as a simple rmap.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
+void *mc, u8 owner_id);
+
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 
page-table.
  * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index f37b4179b880..bd44e84dedc4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -48,6 +48,9 @@
 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
 KVM_PTE_LEAF_ATTR_HI_S2_XN)
 
+#define KVM_INVALID_PTE_OWNER_MASK GENMASK(63, 56)
+#define KVM_MAX_OWNER_ID   1
+
 struct kvm_pgtable_walk_data {
struct kvm_pgtable  *pgt;
struct kvm_pgtable_walker   *walker;
@@ -67,6 +70,13 @@ static u64 kvm_granule_size(u32 level)
return BIT(kvm_granule_shift(level));
 }
 
+#define KVM_PHYS_INVALID (-1ULL)
+
+static bool kvm_phys_is_valid(u64 phys)
+{
+   return phys < 
BIT(id_aa64mmfr0_parange_to_phys_shift(ID_AA64MMFR0_PARANGE_MAX));
+}
+
 static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
 {
u64 granule = kvm_granule_size(level);
@@ -81,7 +91,10 @@ static bool kvm_block_mapping_supported(u64 addr, u64 end, 
u64 phys, u32 level)
if (granule > (end - addr))
return false;
 
-   return IS_ALIGNED(addr, granule) && IS_ALIGNED(phys, granule);
+   if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
+   return false;
+
+   return IS_ALIGNED(addr, granule);
 }
 
 static u32 kvm_pgtable_idx(struct kvm_pgtable_walk_data *data, u32 level)
@@ -186,6 +199,11 @@ static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t 
attr, u32 level)
return pte;
 }
 
+static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
+{
+   return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
+}
+
 static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
  u32 level, kvm_pte_t *ptep,
  enum kvm_pgtable_walk_flags flag)
@@ -440,6 +458,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 struct stage2_map_data {
u64 phys;
kvm_pte_t

[PATCH v5 12/36] KVM: arm64: Introduce a Hyp buddy page allocator

2021-03-15 Thread Quentin Perret
When memory protection is enabled, the hyp code will require a basic
form of memory management in order to allocate and free memory pages at
EL2. This is needed for various use-cases, including the creation of hyp
mappings or the allocation of stage 2 page tables.

To address these use-cases, introduce a simple memory allocator in the
hyp code. The allocator is designed as a conventional 'buddy allocator',
working at page granularity. It allows allocating and freeing
physically contiguous pages from memory 'pools', with a guaranteed order
alignment in the PA space. Each page in a memory pool is associated
with a struct hyp_page which holds the page's metadata, including its
refcount, as well as its current order, hence mimicking the kernel's
buddy system in the GFP infrastructure. The hyp_page metadata are made
accessible through a hyp_vmemmap, following the concept of
SPARSE_VMEMMAP in the kernel.
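
As a rough sketch of the buddy arithmetic this enables (hyp_buddy_of() is a
made-up name; the hyp_page_to_phys()/hyp_phys_to_page() accessors come from
the hyp_vmemmap introduced below):

static struct hyp_page *hyp_buddy_of(struct hyp_page *p, unsigned int order)
{
	/* The buddy of an order-aligned block is found by flipping the order bit. */
	phys_addr_t addr = hyp_page_to_phys(p) ^ (PAGE_SIZE << order);

	return hyp_phys_to_page(addr);
}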

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h|  68 
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  28 
 arch/arm64/kvm/hyp/nvhe/Makefile |   2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 195 +++
 4 files changed, 292 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/gfp.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/page_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h 
b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
new file mode 100644
index ..55b3f0ce5bc8
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_GFP_H
+#define __KVM_HYP_GFP_H
+
+#include 
+
+#include 
+#include 
+
+#define HYP_NO_ORDER   UINT_MAX
+
+struct hyp_pool {
+   /*
+* Spinlock protecting concurrent changes to the memory pool as well as
+* the struct hyp_page of the pool's pages until we have a proper atomic
+* API at EL2.
+*/
+   hyp_spinlock_t lock;
+   struct list_head free_area[MAX_ORDER];
+   phys_addr_t range_start;
+   phys_addr_t range_end;
+   unsigned int max_order;
+};
+
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+   struct hyp_pool *pool = hyp_page_to_pool(p);
+
+   hyp_spin_lock(&pool->lock);
+   p->refcount++;
+   hyp_spin_unlock(&pool->lock);
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+   struct hyp_pool *pool = hyp_page_to_pool(p);
+   int ret;
+
+   hyp_spin_lock(&pool->lock);
+   p->refcount--;
+   ret = (p->refcount == 0);
+   hyp_spin_unlock(&pool->lock);
+
+   return ret;
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+   struct hyp_pool *pool = hyp_page_to_pool(p);
+
+   hyp_spin_lock(&pool->lock);
+   if (p->refcount) {
+   hyp_spin_unlock(&pool->lock);
+   hyp_panic();
+   }
+   p->refcount = 1;
+   hyp_spin_unlock(&pool->lock);
+}
+
+/* Allocation */
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
+void hyp_get_page(void *addr);
+void hyp_put_page(void *addr);
+
+/* Used pages cannot be freed */
+int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
+ unsigned int reserved_pages);
+#endif /* __KVM_HYP_GFP_H */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h 
b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 3e49eaa7e682..d2fb307c5952 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -6,7 +6,17 @@
 
 #include 
 
+struct hyp_pool;
+struct hyp_page {
+   unsigned int refcount;
+   unsigned int order;
+   struct hyp_pool *pool;
+   struct list_head node;
+};
+
 extern s64 hyp_physvirt_offset;
+extern u64 __hyp_vmemmap;
+#define hyp_vmemmap ((struct hyp_page *)__hyp_vmemmap)
 
 #define __hyp_pa(virt) ((phys_addr_t)(virt) + hyp_physvirt_offset)
 #define __hyp_va(phys) ((void *)((phys_addr_t)(phys) - hyp_physvirt_offset))
@@ -21,4 +31,22 @@ static inline phys_addr_t hyp_virt_to_phys(void *addr)
return __hyp_pa(addr);
 }
 
+#define hyp_phys_to_pfn(phys)  ((phys) >> PAGE_SHIFT)
+#define hyp_pfn_to_phys(pfn)   ((phys_addr_t)((pfn) << PAGE_SHIFT))
+#define hyp_phys_to_page(phys) (&hyp_vmemmap[hyp_phys_to_pfn(phys)])
+#define hyp_virt_to_page(virt) hyp_phys_to_page(__hyp_pa(virt))
+#define hyp_virt_to_pfn(virt)  hyp_phys_to_pfn(__hyp_pa(virt))
+
+#define hyp_page_to_pfn(page)  ((struct hyp_page *)(page) - hyp_vmemmap)
+#define hyp_page_to_phys(page)  hyp_pfn_to_phys((hyp_page_to_pfn(page)))
+#define hyp_page_to_virt(page) __hyp_va(hyp_page_to_phys(page))
+#define hyp_page_to_pool(page) (((struct hyp_page *)page)->pool)
+
+static inline int hyp_page_count(void *addr)
+{
+   struct hyp_page *p = hyp_virt_to_page(addr);
+
+   re

[PATCH v5 31/36] KVM: arm64: Add kvm_pgtable_stage2_find_range()

2021-03-15 Thread Quentin Perret
Since the host stage 2 will be identity mapped, and since it will own
most of memory, it would be preferable for performance to try and use large
block mappings whenever possible. To ease this, introduce a new
helper in the KVM page-table code which allows searching for large
ranges of available IPA space. This will be used in the host memory
abort path to greedily idmap large portions of the PA space.
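
A simplified sketch of that intended use (host_idmap_around() is a made-up
name, and memcache handling is omitted, so this is purely illustrative):

static int host_idmap_around(struct kvm_pgtable *pgt, u64 addr,
			     enum kvm_pgtable_prot prot)
{
	struct kvm_mem_range range = {
		.start	= 0,
		.end	= BIT(pgt->ia_bits),
	};
	int ret;

	ret = kvm_pgtable_stage2_find_range(pgt, addr, prot, &range);
	if (ret)
		return ret;

	/* Idmap the whole compatible range around the faulting address. */
	return kvm_pgtable_stage2_map(pgt, range.start, range.end - range.start,
				      range.start, prot, NULL);
}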

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 29 +
 arch/arm64/kvm/hyp/pgtable.c | 89 ++--
 2 files changed, 114 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 683e96abdc24..b93a2a3526ab 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -94,6 +94,16 @@ enum kvm_pgtable_prot {
 #define PAGE_HYP_RO(KVM_PGTABLE_PROT_R)
 #define PAGE_HYP_DEVICE(PAGE_HYP | KVM_PGTABLE_PROT_DEVICE)
 
+/**
+ * struct kvm_mem_range - Range of Intermediate Physical Addresses
+ * @start: Start of the range.
+ * @end:   End of the range.
+ */
+struct kvm_mem_range {
+   u64 start;
+   u64 end;
+};
+
 /**
  * enum kvm_pgtable_walk_flags - Flags to control a depth-first page-table 
walk.
  * @KVM_PGTABLE_WALK_LEAF: Visit leaf entries, including invalid
@@ -398,4 +408,23 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 
addr, u64 size);
 int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
 struct kvm_pgtable_walker *walker);
 
+/**
+ * kvm_pgtable_stage2_find_range() - Find a range of Intermediate Physical
+ *  Addresses with compatible permission
+ *  attributes.
+ * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:  Address that must be covered by the range.
+ * @prot:  Protection attributes that the range must be compatible with.
+ * @range: Range structure used to limit the search space at call time and
+ * that will hold the result.
+ *
+ * The offset of @addr within a page is ignored. An IPA is compatible with 
@prot
+ * iff its corresponding stage-2 page-table entry has default ownership and, if
+ * valid, is mapped with protection attributes identical to @prot.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_find_range(struct kvm_pgtable *pgt, u64 addr,
+ enum kvm_pgtable_prot prot,
+ struct kvm_mem_range *range);
 #endif /* __ARM64_KVM_PGTABLE_H__ */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a5347d78293f..3a971df278bd 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -48,6 +48,8 @@
 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
 KVM_PTE_LEAF_ATTR_HI_S2_XN)
 
+#define KVM_PTE_LEAF_ATTR_S2_IGNORED   GENMASK(58, 55)
+
 #define KVM_INVALID_PTE_OWNER_MASK GENMASK(63, 56)
 #define KVM_MAX_OWNER_ID   1
 
@@ -77,15 +79,20 @@ static bool kvm_phys_is_valid(u64 phys)
return phys < 
BIT(id_aa64mmfr0_parange_to_phys_shift(ID_AA64MMFR0_PARANGE_MAX));
 }
 
-static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
+static bool kvm_level_supports_block_mapping(u32 level)
 {
-   u64 granule = kvm_granule_size(level);
-
/*
 * Reject invalid block mappings and don't bother with 4TB mappings for
 * 52-bit PAs.
 */
-   if (level == 0 || (PAGE_SIZE != SZ_4K && level == 1))
+   return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
+}
+
+static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
+{
+   u64 granule = kvm_granule_size(level);
+
+   if (!kvm_level_supports_block_mapping(level))
return false;
 
if (granule > (end - addr))
@@ -1053,3 +1060,77 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_sz);
pgt->pgd = NULL;
 }
+
+#define KVM_PTE_LEAF_S2_COMPAT_MASK(KVM_PTE_LEAF_ATTR_S2_PERMS | \
+KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR | \
+KVM_PTE_LEAF_ATTR_S2_IGNORED)
+
+static int stage2_check_permission_walker(u64 addr, u64 end, u32 level,
+ kvm_pte_t *ptep,
+ enum kvm_pgtable_walk_flags flag,
+ void * const arg)
+{
+   kvm_pte_t old_attr, pte = *ptep, *new_attr = arg;
+
+   /*
+* Compatible mappings are either invalid and owned by the page-table
+* owner (whose id is 0), or valid with matching permission attributes.
+ 

[PATCH v5 14/36] KVM: arm64: Provide __flush_dcache_area at EL2

2021-03-15 Thread Quentin Perret
We will need to do cache maintenance at EL2 soon, so compile a copy of
__flush_dcache_area at EL2, and provide a copy of arm64_ftr_reg_ctrel0
as it is needed by the read_ctr macro.

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_cpufeature.h |  2 ++
 arch/arm64/kvm/hyp/nvhe/Makefile|  3 ++-
 arch/arm64/kvm/hyp/nvhe/cache.S | 13 +
 arch/arm64/kvm/sys_regs.c   |  1 +
 4 files changed, 18 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/cache.S

diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
b/arch/arm64/include/asm/kvm_cpufeature.h
index 3fd9f60d2180..efba1b89b8a4 100644
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -13,3 +13,5 @@
 #define KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg 
kvm_nvhe_sym(name)
 #endif
 #endif
+
+KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 6894a917f290..42dde4bb80b1 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -13,7 +13,8 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o
+hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
+cache.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
new file mode 100644
index ..36cef6915428
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Code copied from arch/arm64/mm/cache.S.
+ */
+
+#include 
+#include 
+#include 
+
+SYM_FUNC_START_PI(__flush_dcache_area)
+   dcache_by_line_op civac, sy, x0, x1, x2, x3
+   ret
+SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6c5d133689ae..3ec34c25e877 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2783,6 +2783,7 @@ struct __ftr_reg_copy_entry {
u32 sys_id;
struct arm64_ftr_reg*dst;
 } hyp_ftr_regs[] __initdata = {
+   CPU_FTR_REG_HYP_COPY(SYS_CTR_EL0, arm64_ftr_reg_ctrel0),
 };
 
 void __init setup_kvm_el2_caps(void)
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 33/36] KVM: arm64: Wrap the host with a stage 2

2021-03-15 Thread Quentin Perret
When KVM runs in protected nVHE mode, make use of a stage 2 page-table
to give the hypervisor some control over the host memory accesses. The
host stage 2 is created lazily using large block mappings if possible,
and will default to page mappings in the absence of a better solution.

From this point on, memory accesses from the host to protected memory
regions (e.g. not 'owned' by the host) are fatal and lead to hyp_panic().

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h  |   1 +
 arch/arm64/kernel/image-vars.h|   3 +
 arch/arm64/kvm/arm.c  |  10 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  34 +++
 arch/arm64/kvm/hyp/nvhe/Makefile  |   2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S|   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c|  11 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 246 ++
 arch/arm64/kvm/hyp/nvhe/setup.c   |   5 +
 arch/arm64/kvm/hyp/nvhe/switch.c  |   7 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c |   4 +-
 11 files changed, 317 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mem_protect.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6dce860f8bca..b127af02bd45 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -61,6 +61,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_mappings   16
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping17
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector18
+#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize 19
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 940c378fa837..d5dc2b792651 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -131,6 +131,9 @@ KVM_NVHE_ALIAS(__hyp_bss_end);
 KVM_NVHE_ALIAS(__hyp_rodata_start);
 KVM_NVHE_ALIAS(__hyp_rodata_end);
 
+/* pKVM static key */
+KVM_NVHE_ALIAS(kvm_protected_mode_initialized);
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d474eec606a3..7e6a81079652 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1889,12 +1889,22 @@ static int init_hyp_mode(void)
return err;
 }
 
+void _kvm_host_prot_finalize(void *discard)
+{
+   WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
+}
+
 static int finalize_hyp_mode(void)
 {
if (!is_protected_kvm_enabled())
return 0;
 
+   /*
+* Flip the static key upfront as that may no longer be possible
+* once the host stage 2 is installed.
+*/
static_branch_enable(&kvm_protected_mode_initialized);
+   on_each_cpu(_kvm_host_prot_finalize, NULL, 1);
 
return 0;
 }
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h 
b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
new file mode 100644
index ..d293cb328cc4
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret 
+ */
+
+#ifndef __KVM_NVHE_MEM_PROTECT__
+#define __KVM_NVHE_MEM_PROTECT__
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct host_kvm {
+   struct kvm_arch arch;
+   struct kvm_pgtable pgt;
+   struct kvm_pgtable_mm_ops mm_ops;
+   hyp_spinlock_t lock;
+};
+extern struct host_kvm host_kvm;
+
+int __pkvm_prot_finalize(void);
+int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
+
+static __always_inline void __load_host_stage2(void)
+{
+   if (static_branch_likely(&kvm_protected_mode_initialized))
+   __load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+   else
+   write_sysreg(0, vttbr_el2);
+}
+#endif /* __KVM_NVHE_MEM_PROTECT__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index b334354b8dd0..f55201a7ff33 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-cache.o setup.o mm.o
+cache.o setup.o mm.o mem_protect.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S 
b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index a50ad9e9fc05..c164045af238 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S

[PATCH v5 05/36] KVM: arm64: Avoid free_page() in page-table allocator

2021-03-15 Thread Quentin Perret
Currently, the KVM page-table allocator uses a mix of put_page() and
free_page() calls depending on the context even though page-allocation
is always achieved using variants of __get_free_page().

Make the code consistent by using put_page() throughout, and reduce the
memory management API surface used by the page-table code. This will
ease factoring out page-allocation from pgtable.c, which is a
pre-requisite to creating page-tables at EL2.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/pgtable.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4d177ce1d536..81fe032f34d1 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -413,7 +413,7 @@ int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 
va_bits)
 static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
   enum kvm_pgtable_walk_flags flag, void * const arg)
 {
-   free_page((unsigned long)kvm_pte_follow(*ptep));
+   put_page(virt_to_page(kvm_pte_follow(*ptep)));
return 0;
 }
 
@@ -425,7 +425,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
};
 
WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-   free_page((unsigned long)pgt->pgd);
+   put_page(virt_to_page(pgt->pgd));
pgt->pgd = NULL;
 }
 
@@ -577,7 +577,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, 
u32 level,
if (!data->anchor)
return 0;
 
-   free_page((unsigned long)kvm_pte_follow(*ptep));
+   put_page(virt_to_page(kvm_pte_follow(*ptep)));
put_page(virt_to_page(ptep));
 
if (data->anchor == ptep) {
@@ -700,7 +700,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 
level, kvm_pte_t *ptep,
}
 
if (childp)
-   free_page((unsigned long)childp);
+   put_page(virt_to_page(childp));
 
return 0;
 }
@@ -897,7 +897,7 @@ static int stage2_free_walker(u64 addr, u64 end, u32 level, 
kvm_pte_t *ptep,
put_page(virt_to_page(ptep));
 
if (kvm_pte_table(pte, level))
-   free_page((unsigned long)kvm_pte_follow(pte));
+   put_page(virt_to_page(kvm_pte_follow(pte)));
 
return 0;
 }
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 11/36] KVM: arm64: Stub CONFIG_DEBUG_LIST at Hyp

2021-03-15 Thread Quentin Perret
In order to use the kernel list library at EL2, introduce stubs for the
CONFIG_DEBUG_LIST out-of-line calls.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/nvhe/Makefile |  2 +-
 arch/arm64/kvm/hyp/nvhe/stub.c   | 22 ++
 2 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/stub.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 24ff99e2eac5..144da72ad510 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -13,7 +13,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-hyp-main.o hyp-smp.o psci-relay.o early_alloc.o
+hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/stub.c b/arch/arm64/kvm/hyp/nvhe/stub.c
new file mode 100644
index ..c0aa6bbfd79d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/stub.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Stubs for out-of-line function calls caused by re-using kernel
+ * infrastructure at EL2.
+ *
+ * Copyright (C) 2020 - Google LLC
+ */
+
+#include 
+
+#ifdef CONFIG_DEBUG_LIST
+bool __list_add_valid(struct list_head *new, struct list_head *prev,
+ struct list_head *next)
+{
+   return true;
+}
+
+bool __list_del_entry_valid(struct list_head *entry)
+{
+   return true;
+}
+#endif
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 34/36] KVM: arm64: Page-align the .hyp sections

2021-03-15 Thread Quentin Perret
We will soon unmap the .hyp sections from the host stage 2 in Protected
nVHE mode, which obviously works with at least page granularity, so make
sure to align them correctly.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kernel/vmlinux.lds.S | 22 +-
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index e96173ce211b..709d2c433c5e 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -15,9 +15,11 @@
 
 #define HYPERVISOR_DATA_SECTIONS   \
HYP_SECTION_NAME(.rodata) : {   \
+   . = ALIGN(PAGE_SIZE);   \
__hyp_rodata_start = .; \
*(HYP_SECTION_NAME(.data..ro_after_init))   \
*(HYP_SECTION_NAME(.rodata))\
+   . = ALIGN(PAGE_SIZE);   \
__hyp_rodata_end = .;   \
}
 
@@ -72,21 +74,14 @@ ENTRY(_text)
 jiffies = jiffies_64;
 
 #define HYPERVISOR_TEXT\
-   /*  \
-* Align to 4 KB so that\
-* a) the HYP vector table is at its minimum\
-*alignment of 2048 bytes   \
-* b) the HYP init code will not cross a page   \
-*boundary if its size does not exceed  \
-*4 KB (see related ASSERT() below) \
-*/ \
-   . = ALIGN(SZ_4K);   \
+   . = ALIGN(PAGE_SIZE);   \
__hyp_idmap_text_start = .; \
*(.hyp.idmap.text)  \
__hyp_idmap_text_end = .;   \
__hyp_text_start = .;   \
*(.hyp.text)\
HYPERVISOR_EXTABLE  \
+   . = ALIGN(PAGE_SIZE);   \
__hyp_text_end = .;
 
 #define IDMAP_TEXT \
@@ -322,11 +317,12 @@ SECTIONS
 #include "image-vars.h"
 
 /*
- * The HYP init code and ID map text can't be longer than a page each,
- * and should not cross a page boundary.
+ * The HYP init code and ID map text can't be longer than a page each. The
+ * former is page-aligned, but the latter may not be with 16K or 64K pages, so
+ * it should also not cross a page boundary.
  */
-ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
-   "HYP init code too big or misaligned")
+ASSERT(__hyp_idmap_text_end - __hyp_idmap_text_start <= PAGE_SIZE,
+   "HYP init code too big")
 ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
"ID map text too big or misaligned")
 #ifdef CONFIG_HIBERNATION
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 35/36] KVM: arm64: Disable PMU support in protected mode

2021-03-15 Thread Quentin Perret
The host currently writes directly into the EL2 per-CPU data sections from
the PMU code when running in nVHE. In preparation for unmapping the EL2
sections from the host stage 2, disable PMU support in protected mode as
we currently do not have a use-case for it.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/perf.c | 3 ++-
 arch/arm64/kvm/pmu.c  | 8 
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
index 739164324afe..8f860ae56bb7 100644
--- a/arch/arm64/kvm/perf.c
+++ b/arch/arm64/kvm/perf.c
@@ -55,7 +55,8 @@ int kvm_perf_init(void)
 * hardware performance counters. This could ensure the presence of
 * a physical PMU and CONFIG_PERF_EVENT is selected.
 */
-   if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
+   if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0
+  && !is_protected_kvm_enabled())
static_branch_enable(&kvm_arm_pmu_available);
 
return perf_register_guest_info_callbacks(&kvm_guest_cbs);
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index faf32a44ba04..03a6c1f4a09a 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -33,7 +33,7 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
 
-   if (!ctx || !kvm_pmu_switch_needed(attr))
+   if (!kvm_arm_support_pmu_v3() || !ctx || !kvm_pmu_switch_needed(attr))
return;
 
if (!attr->exclude_host)
@@ -49,7 +49,7 @@ void kvm_clr_pmu_events(u32 clr)
 {
struct kvm_host_data *ctx = this_cpu_ptr_hyp_sym(kvm_host_data);
 
-   if (!ctx)
+   if (!kvm_arm_support_pmu_v3() || !ctx)
return;
 
ctx->pmu_events.events_host &= ~clr;
@@ -172,7 +172,7 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
struct kvm_host_data *host;
u32 events_guest, events_host;
 
-   if (!has_vhe())
+   if (!kvm_arm_support_pmu_v3() || !has_vhe())
return;
 
preempt_disable();
@@ -193,7 +193,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
struct kvm_host_data *host;
u32 events_guest, events_host;
 
-   if (!has_vhe())
+   if (!kvm_arm_support_pmu_v3() || !has_vhe())
return;
 
host = this_cpu_ptr_hyp_sym(kvm_host_data);
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 09/36] KVM: arm64: Allow using kvm_nvhe_sym() in hyp code

2021-03-15 Thread Quentin Perret
In order to allow the usage of code shared by the host and the hyp in
static inline library functions, allow the usage of kvm_nvhe_sym() at
EL2 by defaulting to the raw symbol name.
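
A tiny hypothetical example of what this enables (hyp_counter is a made-up
symbol, not from the series): the same inline helper can now be compiled on
both sides of the EL1/EL2 boundary.

extern unsigned long kvm_nvhe_sym(hyp_counter);

static inline unsigned long get_hyp_counter(void)
{
	/* Resolves to __kvm_nvhe_hyp_counter in the kernel, hyp_counter at EL2. */
	return kvm_nvhe_sym(hyp_counter);
}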

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/hyp_image.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/hyp_image.h 
b/arch/arm64/include/asm/hyp_image.h
index 78cd77990c9c..b4b3076a76fb 100644
--- a/arch/arm64/include/asm/hyp_image.h
+++ b/arch/arm64/include/asm/hyp_image.h
@@ -10,11 +10,15 @@
 #define __HYP_CONCAT(a, b) a ## b
 #define HYP_CONCAT(a, b)   __HYP_CONCAT(a, b)
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 /*
  * KVM nVHE code has its own symbol namespace prefixed with __kvm_nvhe_,
  * to separate it from the kernel proper.
  */
 #define kvm_nvhe_sym(sym)  __kvm_nvhe_##sym
+#else
+#define kvm_nvhe_sym(sym)  sym
+#endif
 
 #ifdef LINKER_SCRIPT
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 10/36] KVM: arm64: Introduce an early Hyp page allocator

2021-03-15 Thread Quentin Perret
With nVHE, the host currently creates all stage 1 hypervisor mappings at
EL1 during boot, installs them at EL2, and extends them as required
(e.g. when creating a new VM). But in a world where the host is no
longer trusted, it cannot have full control over the code mapped in the
hypervisor.

In preparation for enabling the hypervisor to create its own stage 1
mappings during boot, introduce an early page allocator, with minimal
functionality. This allocator is designed to be used only during early
bootstrap of the hyp code when memory protection is enabled; the hyp code
will then switch to a full-fledged page allocator after init.
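
A minimal usage sketch (not from the patch; the carve-out base and size are
assumed to come from the hyp bootstrap code):

static void *early_alloc_example(void *base, unsigned long size)
{
	hyp_early_alloc_init(base, size);

	/* Pages come back zeroed; this bump allocator never frees them. */
	return hyp_early_alloc_page(NULL);
}

Page-table code can also allocate through hyp_early_alloc_mm_ops until the
full-fledged allocator takes over after init.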

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h | 14 +
 arch/arm64/kvm/hyp/include/nvhe/memory.h  | 24 +
 arch/arm64/kvm/hyp/nvhe/Makefile  |  2 +-
 arch/arm64/kvm/hyp/nvhe/early_alloc.c | 54 +++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c  |  4 +-
 5 files changed, 94 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/memory.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/early_alloc.c

diff --git a/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h 
b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
new file mode 100644
index ..dc61aaa56f31
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_EARLY_ALLOC_H
+#define __KVM_HYP_EARLY_ALLOC_H
+
+#include 
+
+void hyp_early_alloc_init(void *virt, unsigned long size);
+unsigned long hyp_early_alloc_nr_used_pages(void);
+void *hyp_early_alloc_page(void *arg);
+void *hyp_early_alloc_contig(unsigned int nr_pages);
+
+extern struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+
+#endif /* __KVM_HYP_EARLY_ALLOC_H */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h 
b/arch/arm64/kvm/hyp/include/nvhe/memory.h
new file mode 100644
index ..3e49eaa7e682
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __KVM_HYP_MEMORY_H
+#define __KVM_HYP_MEMORY_H
+
+#include 
+
+#include 
+
+extern s64 hyp_physvirt_offset;
+
+#define __hyp_pa(virt) ((phys_addr_t)(virt) + hyp_physvirt_offset)
+#define __hyp_va(phys) ((void *)((phys_addr_t)(phys) - hyp_physvirt_offset))
+
+static inline void *hyp_phys_to_virt(phys_addr_t phys)
+{
+   return __hyp_va(phys);
+}
+
+static inline phys_addr_t hyp_virt_to_phys(void *addr)
+{
+   return __hyp_pa(addr);
+}
+
+#endif /* __KVM_HYP_MEMORY_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index bc98f8e3d1da..24ff99e2eac5 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -13,7 +13,7 @@ lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
-hyp-main.o hyp-smp.o psci-relay.o
+hyp-main.o hyp-smp.o psci-relay.o early_alloc.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/early_alloc.c 
b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
new file mode 100644
index ..1306c430ab87
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret 
+ */
+
+#include 
+
+#include 
+#include 
+
+struct kvm_pgtable_mm_ops hyp_early_alloc_mm_ops;
+s64 __ro_after_init hyp_physvirt_offset;
+
+static unsigned long base;
+static unsigned long end;
+static unsigned long cur;
+
+unsigned long hyp_early_alloc_nr_used_pages(void)
+{
+   return (cur - base) >> PAGE_SHIFT;
+}
+
+void *hyp_early_alloc_contig(unsigned int nr_pages)
+{
+   unsigned long size = (nr_pages << PAGE_SHIFT);
+   void *ret = (void *)cur;
+
+   if (!nr_pages)
+   return NULL;
+
+   if (end - cur < size)
+   return NULL;
+
+   cur += size;
+   memset(ret, 0, size);
+
+   return ret;
+}
+
+void *hyp_early_alloc_page(void *arg)
+{
+   return hyp_early_alloc_contig(1);
+}
+
+void hyp_early_alloc_init(void *virt, unsigned long size)
+{
+   base = cur = (unsigned long)virt;
+   end = base + size;
+
+   hyp_early_alloc_mm_ops.zalloc_page = hyp_early_alloc_page;
+   hyp_early_alloc_mm_ops.phys_to_virt = hyp_phys_to_virt;
+   hyp_early_alloc_mm_ops.virt_to_phys = hyp_virt_to_phys;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c 
b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
index 63de71c0481e..08508783ec3d 100644
--- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
+++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
@@ -11,6 +11

[PATCH v5 28/36] KVM: arm64: Always zero invalid PTEs

2021-03-15 Thread Quentin Perret
kvm_set_invalid_pte() currently only clears bit 0 from a PTE because
stage2_map_walk_table_post() needs to be able to follow the anchor. In
preparation for re-using bits 63-01 from invalid PTEs, make sure to zero
the PTE entirely, which requires caching the anchor's child upfront.

Acked-by: Will Deacon 
Suggested-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/pgtable.c | 26 --
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdd6e3d4eeb6..f37b4179b880 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -156,10 +156,9 @@ static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte, struct 
kvm_pgtable_mm_ops *mm_op
return mm_ops->phys_to_virt(kvm_pte_to_phys(pte));
 }
 
-static void kvm_set_invalid_pte(kvm_pte_t *ptep)
+static void kvm_clear_pte(kvm_pte_t *ptep)
 {
-   kvm_pte_t pte = *ptep;
-   WRITE_ONCE(*ptep, pte & ~KVM_PTE_VALID);
+   WRITE_ONCE(*ptep, 0);
 }
 
 static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp,
@@ -443,6 +442,7 @@ struct stage2_map_data {
kvm_pte_t   attr;
 
kvm_pte_t   *anchor;
+   kvm_pte_t   *childp;
 
struct kvm_s2_mmu   *mmu;
void*memcache;
@@ -532,7 +532,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, 
u32 level,
 * There's an existing different valid leaf entry, so perform
 * break-before-make.
 */
-   kvm_set_invalid_pte(ptep);
+   kvm_clear_pte(ptep);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
mm_ops->put_page(ptep);
}
@@ -553,7 +553,8 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 
level,
if (!kvm_block_mapping_supported(addr, end, data->phys, level))
return 0;
 
-   kvm_set_invalid_pte(ptep);
+   data->childp = kvm_pte_follow(*ptep, data->mm_ops);
+   kvm_clear_pte(ptep);
 
/*
 * Invalidate the whole stage-2, as we may have numerous leaf
@@ -599,7 +600,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 
level, kvm_pte_t *ptep,
 * will be mapped lazily.
 */
if (kvm_pte_valid(pte)) {
-   kvm_set_invalid_pte(ptep);
+   kvm_clear_pte(ptep);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
mm_ops->put_page(ptep);
}
@@ -615,19 +616,24 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, 
u32 level,
  struct stage2_map_data *data)
 {
struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+   kvm_pte_t *childp;
int ret = 0;
 
if (!data->anchor)
return 0;
 
-   mm_ops->put_page(kvm_pte_follow(*ptep, mm_ops));
-   mm_ops->put_page(ptep);
-
if (data->anchor == ptep) {
+   childp = data->childp;
data->anchor = NULL;
+   data->childp = NULL;
ret = stage2_map_walk_leaf(addr, end, level, ptep, data);
+   } else {
+   childp = kvm_pte_follow(*ptep, mm_ops);
}
 
+   mm_ops->put_page(childp);
+   mm_ops->put_page(ptep);
+
return ret;
 }
 
@@ -736,7 +742,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 
level, kvm_pte_t *ptep,
 * block entry and rely on the remaining portions being faulted
 * back lazily.
 */
-   kvm_set_invalid_pte(ptep);
+   kvm_clear_pte(ptep);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, addr, level);
mm_ops->put_page(ptep);
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 24/36] KVM: arm64: Refactor __populate_fault_info()

2021-03-15 Thread Quentin Perret
Refactor __populate_fault_info() to introduce __get_fault_info() which
will be used once the host is wrapped in a stage 2.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 34 +
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 6c1f51f25eb3..40c274da5a7c 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -160,19 +160,9 @@ static inline bool __translate_far_to_hpfar(u64 far, u64 
*hpfar)
return true;
 }
 
-static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+static inline bool __get_fault_info(u64 esr, struct kvm_vcpu_fault_info *fault)
 {
-   u8 ec;
-   u64 esr;
-   u64 hpfar, far;
-
-   esr = vcpu->arch.fault.esr_el2;
-   ec = ESR_ELx_EC(esr);
-
-   if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
-   return true;
-
-   far = read_sysreg_el2(SYS_FAR);
+   fault->far_el2 = read_sysreg_el2(SYS_FAR);
 
/*
 * The HPFAR can be invalid if the stage 2 fault did not
@@ -188,17 +178,29 @@ static inline bool __populate_fault_info(struct kvm_vcpu 
*vcpu)
if (!(esr & ESR_ELx_S1PTW) &&
(cpus_have_final_cap(ARM64_WORKAROUND_834220) ||
 (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
-   if (!__translate_far_to_hpfar(far, &hpfar))
+   if (!__translate_far_to_hpfar(fault->far_el2, 
&fault->hpfar_el2))
return false;
} else {
-   hpfar = read_sysreg(hpfar_el2);
+   fault->hpfar_el2 = read_sysreg(hpfar_el2);
}
 
-   vcpu->arch.fault.far_el2 = far;
-   vcpu->arch.fault.hpfar_el2 = hpfar;
return true;
 }
 
+static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+   u8 ec;
+   u64 esr;
+
+   esr = vcpu->arch.fault.esr_el2;
+   ec = ESR_ELx_EC(esr);
+
+   if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
+   return true;
+
+   return __get_fault_info(esr, &vcpu->arch.fault);
+}
+
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
 static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 36/36] KVM: arm64: Protect the .hyp sections from the host

2021-03-15 Thread Quentin Perret
When KVM runs in nVHE protected mode, use the host stage 2 to unmap the
hypervisor sections by marking them as owned by the hypervisor itself.
The long-term goal is to ensure the EL2 code can remain robust
regardless of the host's state, so this starts by making sure the host
cannot e.g. write to the .hyp sections directly.
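
The mem_protect.c hunk is cut off in this archive. As a rough idea of the
shape of the EL2 handler, here is a minimal sketch (illustrative only: it
assumes a stage-2 ownership helper, a hyp-side page-table pool and a hyp
owner id -- kvm_pgtable_stage2_set_owner(), host_s2_mem and pkvm_hyp_id
below -- none of which are quoted in this mail):

  int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
  {
        int ret;

        /* Serialise against other updates of the host stage-2. */
        hyp_spin_lock(&host_kvm.lock);
        /* Assumed helper: mark [start, end) as owned by the hypervisor,
         * which also unmaps the range from the host stage-2. */
        ret = kvm_pgtable_stage2_set_owner(&host_kvm.pgt, start, end - start,
                                           &host_s2_mem, pkvm_hyp_id);
        hyp_spin_unlock(&host_kvm.lock);

        return ret;
  }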

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h  |  1 +
 arch/arm64/kvm/arm.c  | 46 +++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c|  9 
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 33 +
 5 files changed, 91 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index b127af02bd45..d468c4b37190 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -62,6 +62,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping17
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector18
 #define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize 19
+#define __KVM_HOST_SMCCC_FUNC___pkvm_mark_hyp  20
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7e6a81079652..d6baf76d4747 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1894,11 +1894,57 @@ void _kvm_host_prot_finalize(void *discard)
WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
 }
 
+static inline int pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
+{
+   return kvm_call_hyp_nvhe(__pkvm_mark_hyp, start, end);
+}
+
+#define pkvm_mark_hyp_section(__section)   \
+   pkvm_mark_hyp(__pa_symbol(__section##_start),   \
+   __pa_symbol(__section##_end))
+
 static int finalize_hyp_mode(void)
 {
+   int cpu, ret;
+
if (!is_protected_kvm_enabled())
return 0;
 
+   ret = pkvm_mark_hyp_section(__hyp_idmap_text);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp_section(__hyp_text);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp_section(__hyp_rodata);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp_section(__hyp_bss);
+   if (ret)
+   return ret;
+
+   ret = pkvm_mark_hyp(hyp_mem_base, hyp_mem_base + hyp_mem_size);
+   if (ret)
+   return ret;
+
+   for_each_possible_cpu(cpu) {
+   phys_addr_t start = virt_to_phys((void 
*)kvm_arm_hyp_percpu_base[cpu]);
+   phys_addr_t end = start + (PAGE_SIZE << nvhe_percpu_order());
+
+   ret = pkvm_mark_hyp(start, end);
+   if (ret)
+   return ret;
+
+   start = virt_to_phys((void *)per_cpu(kvm_arm_hyp_stack_page, 
cpu));
+   end = start + PAGE_SIZE;
+   ret = pkvm_mark_hyp(start, end);
+   if (ret)
+   return ret;
+   }
+
/*
 * Flip the static key upfront as that may no longer be possible
 * once the host stage 2 is installed.
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h 
b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d293cb328cc4..42d81ec739fa 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -21,6 +21,8 @@ struct host_kvm {
 extern struct host_kvm host_kvm;
 
 int __pkvm_prot_finalize(void);
+int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end);
+
 int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c 
b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f47028d3fd0a..3df33d4de4a1 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -156,6 +156,14 @@ static void handle___pkvm_prot_finalize(struct 
kvm_cpu_context *host_ctxt)
 {
cpu_reg(host_ctxt, 1) = __pkvm_prot_finalize();
 }
+
+static void handle___pkvm_mark_hyp(struct kvm_cpu_context *host_ctxt)
+{
+   DECLARE_REG(phys_addr_t, start, host_ctxt, 1);
+   DECLARE_REG(phys_addr_t, end, host_ctxt, 2);
+
+   cpu_reg(host_ctxt, 1) = __pkvm_mark_hyp(start, end);
+}
 typedef void (*hcall_t)(struct kvm_cpu_context *);
 
 #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -180,6 +188,7 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__pkvm_create_mappings),
HANDLE_FUNC(__pkvm_create_private_mapping),
HANDLE_FUNC(__pkvm_prot_finalize),
+   HANDLE_FUNC(__pkvm_mark_hyp),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c 
b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 5c88a325e6fc..dd03252b9574 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/me

[PATCH v5 23/36] KVM: arm64: Refactor __load_guest_stage2()

2021-03-15 Thread Quentin Perret
Refactor __load_guest_stage2() to introduce __load_stage2() which will
be re-used when loading the host stage 2.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_mmu.h | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6f743e20cb06..9d64fa73ee67 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -270,9 +270,9 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu 
*mmu)
  * Must be called from hyp code running at EL2 with an updated VTTBR
  * and interrupts disabled.
  */
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned 
long vtcr)
 {
-   write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
+   write_sysreg(vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
/*
@@ -283,6 +283,11 @@ static __always_inline void __load_guest_stage2(struct 
kvm_s2_mmu *mmu)
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+{
+   __load_stage2(mmu, kern_hyp_va(mmu->arch)->vtcr);
+}
+
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
return container_of(mmu->arch, struct kvm, arch);
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 27/36] KVM: arm64: Sort the hypervisor memblocks

2021-03-15 Thread Quentin Perret
We will soon need to check whether a physical address belongs to a memblock
at EL2, so sort the memblocks to make that lookup efficient.
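
For illustration, sorting makes a lookup like the one below possible at
EL2 (a sketch only; the helper name and the way the hyp copy of the
memblock array is accessed are assumptions, not part of this patch):

  /* Illustrative: binary search over the sorted memblock array. */
  static bool range_is_memory(phys_addr_t phys)
  {
        int lo = 0, hi = *hyp_memblock_nr_ptr - 1;

        while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                struct memblock_region *reg = &hyp_memory[mid];

                if (phys < reg->base)
                        hi = mid - 1;
                else if (phys >= reg->base + reg->size)
                        lo = mid + 1;
                else
                        return true;    /* phys falls inside reg */
        }

        return false;
  }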

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/reserved_mem.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm64/kvm/hyp/reserved_mem.c 
b/arch/arm64/kvm/hyp/reserved_mem.c
index fd42705a3c26..83ca23ac259b 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -6,6 +6,7 @@
 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -18,6 +19,23 @@ static unsigned int *hyp_memblock_nr_ptr = 
&kvm_nvhe_sym(hyp_memblock_nr);
 phys_addr_t hyp_mem_base;
 phys_addr_t hyp_mem_size;
 
+static int cmp_hyp_memblock(const void *p1, const void *p2)
+{
+   const struct memblock_region *r1 = p1;
+   const struct memblock_region *r2 = p2;
+
+   return r1->base < r2->base ? -1 : (r1->base > r2->base);
+}
+
+static void __init sort_memblock_regions(void)
+{
+   sort(hyp_memory,
+*hyp_memblock_nr_ptr,
+sizeof(struct memblock_region),
+cmp_hyp_memblock,
+NULL);
+}
+
 static int __init register_memblock_regions(void)
 {
struct memblock_region *reg;
@@ -29,6 +47,7 @@ static int __init register_memblock_regions(void)
hyp_memory[*hyp_memblock_nr_ptr] = *reg;
(*hyp_memblock_nr_ptr)++;
}
+   sort_memblock_regions();
 
return 0;
 }
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 25/36] KVM: arm64: Make memcache anonymous in pgtable allocator

2021-03-15 Thread Quentin Perret
The current stage2 page-table allocator uses a memcache to get
pre-allocated pages when it needs any. To allow re-using this code at
EL2, which uses memory pools instead, make the memcache argument of
kvm_pgtable_stage2_map() anonymous, and let the mm_ops zalloc_page()
callbacks use it the way they need to.
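
Concretely, this lets each side provide a callback that interprets the
anonymous pointer however it likes, along the lines of the sketch below
(the callback names and the EL2-side pool allocator are illustrative
assumptions, not something defined by this patch):

  /* Host side: the opaque argument is a struct kvm_mmu_memory_cache. */
  static void *stage2_memcache_zalloc_page(void *arg)
  {
        /* Pages come pre-zeroed when the cache is topped up with __GFP_ZERO. */
        return kvm_mmu_memory_cache_alloc(arg);
  }

  /* Hypothetical EL2 side: the opaque argument is a hyp memory pool. */
  static void *host_s2_zalloc_page(void *pool)
  {
        return hyp_alloc_pages(pool, 0);
  }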

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 6 +++---
 arch/arm64/kvm/hyp/pgtable.c | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 9cdc198ea6b4..4ae19247837b 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -213,8 +213,8 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  * @size:  Size of the mapping.
  * @phys:  Physical address of the memory to map.
  * @prot:  Permissions and attributes for the mapping.
- * @mc:Cache of pre-allocated GFP_PGTABLE_USER memory from 
which to
- * allocate page-table pages.
+ * @mc:Cache of pre-allocated and zeroed memory from which to 
allocate
+ * page-table pages.
  *
  * The offset of @addr within a page is ignored, @size is rounded-up to
  * the next page boundary and @phys is rounded-down to the previous page
@@ -236,7 +236,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  */
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
   u64 phys, enum kvm_pgtable_prot prot,
-  struct kvm_mmu_memory_cache *mc);
+  void *mc);
 
 /**
  * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 
page-table.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 296675e5600d..bdd6e3d4eeb6 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -445,7 +445,7 @@ struct stage2_map_data {
kvm_pte_t   *anchor;
 
struct kvm_s2_mmu   *mmu;
-   struct kvm_mmu_memory_cache *memcache;
+   void*memcache;
 
struct kvm_pgtable_mm_ops   *mm_ops;
 };
@@ -669,7 +669,7 @@ static int stage2_map_walker(u64 addr, u64 end, u32 level, 
kvm_pte_t *ptep,
 
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
   u64 phys, enum kvm_pgtable_prot prot,
-  struct kvm_mmu_memory_cache *mc)
+  void *mc)
 {
int ret;
struct stage2_map_data map_data = {
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 26/36] KVM: arm64: Reserve memory for host stage 2

2021-03-15 Thread Quentin Perret
Extend the memory pool allocated for the hypervisor to include enough
pages to map all of memory at page granularity for the host stage 2.
While at it, also reserve some memory for device mappings.
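
To give an order of magnitude (a back-of-the-envelope example, not taken
from this mail): with a 4KiB granule and 512 entries per table, mapping
4GiB of memory at page granularity needs about 2^20 / 512 = 2048 level-3
tables, 4 level-2 tables and one table at each higher level, i.e. roughly
2054 pages (a little over 8MiB). On top of that the pool adds 16 pages to
cover concatenated PGDs, plus about 515 more for the 1GiB allowed for
MMIO mappings.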

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 27 ++-
 arch/arm64/kvm/hyp/nvhe/setup.c  | 12 
 arch/arm64/kvm/hyp/reserved_mem.c|  2 ++
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h 
b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index ac0f7fcffd08..0095f6289742 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -53,7 +53,7 @@ static inline unsigned long __hyp_pgtable_max_pages(unsigned 
long nr_pages)
return total;
 }
 
-static inline unsigned long hyp_s1_pgtable_pages(void)
+static inline unsigned long __hyp_pgtable_total_pages(void)
 {
unsigned long res = 0, i;
 
@@ -63,9 +63,34 @@ static inline unsigned long hyp_s1_pgtable_pages(void)
res += __hyp_pgtable_max_pages(reg->size >> PAGE_SHIFT);
}
 
+   return res;
+}
+
+static inline unsigned long hyp_s1_pgtable_pages(void)
+{
+   unsigned long res;
+
+   res = __hyp_pgtable_total_pages();
+
/* Allow 1 GiB for private mappings */
res += __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
 
return res;
 }
+
+static inline unsigned long host_s2_mem_pgtable_pages(void)
+{
+   /*
+* Include an extra 16 pages to safely upper-bound the worst case of
+* concatenated pgds.
+*/
+   return __hyp_pgtable_total_pages() + 16;
+}
+
+static inline unsigned long host_s2_dev_pgtable_pages(void)
+{
+   /* Allow 1 GiB for MMIO mappings */
+   return __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+}
+
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 1e8bcd8b0299..c1a3e7e0ebbc 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -24,6 +24,8 @@ unsigned long hyp_nr_cpus;
 
 static void *vmemmap_base;
 static void *hyp_pgt_base;
+static void *host_s2_mem_pgt_base;
+static void *host_s2_dev_pgt_base;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
@@ -42,6 +44,16 @@ static int divide_memory_pool(void *virt, unsigned long size)
if (!hyp_pgt_base)
return -ENOMEM;
 
+   nr_pages = host_s2_mem_pgtable_pages();
+   host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
+   if (!host_s2_mem_pgt_base)
+   return -ENOMEM;
+
+   nr_pages = host_s2_dev_pgtable_pages();
+   host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
+   if (!host_s2_dev_pgt_base)
+   return -ENOMEM;
+
return 0;
 }
 
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c 
b/arch/arm64/kvm/hyp/reserved_mem.c
index 9bc6a6d27904..fd42705a3c26 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -52,6 +52,8 @@ void __init kvm_hyp_reserve(void)
}
 
hyp_mem_pages += hyp_s1_pgtable_pages();
+   hyp_mem_pages += host_s2_mem_pgtable_pages();
+   hyp_mem_pages += host_s2_dev_pgtable_pages();
 
/*
 * The hyp_vmemmap needs to be backed by pages, but these pages
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 18/36] KVM: arm64: Elevate hypervisor mappings creation at EL2

2021-03-15 Thread Quentin Perret
Previous commits have introduced infrastructure to enable the EL2 code
to manage its own stage 1 mappings. However, this was preliminary work,
and none of it is currently in use.

Put all of this together by elevating the mapping creation at EL2 when
memory protection is enabled. In this case, the host kernel running
at EL1 still creates _temporary_ EL2 mappings, only used while
initializing the hypervisor, but frees them right after.

As such, all calls to create_hyp_mappings() after kvm init has finished
turn into hypercalls, as the host now has no 'legal' way to modify the
hypervisor page tables directly.
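
The mmu.c side of this (not quoted in full here) boils down to routing the
request depending on who owns the EL2 page-table. Roughly, as a simplified
sketch and not the actual hunk (kvm_host_owns_hyp_mappings() and the exact
argument list are assumptions here):

  int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
  {
        unsigned long start = kern_hyp_va((unsigned long)from);
        unsigned long end = kern_hyp_va((unsigned long)to);

        if (is_kernel_in_hyp_mode())
                return 0;

        /* Post-init in protected mode: ask EL2 to create the mapping. */
        if (!kvm_host_owns_hyp_mappings())
                return kvm_call_hyp_nvhe(__pkvm_create_mappings,
                                         start, end - start, __pa(from), prot);

        /* Early boot: the host still edits the EL2 stage 1 directly. */
        return __create_hyp_mappings(start, end - start, __pa(from), prot);
  }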

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_mmu.h |  2 +-
 arch/arm64/kvm/arm.c | 87 +---
 arch/arm64/kvm/mmu.c | 43 ++--
 3 files changed, 120 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5c42ec023cc7..ce02a4052dcf 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -166,7 +166,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
 
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
-int kvm_mmu_init(void);
+int kvm_mmu_init(u32 *hyp_va_bits);
 
 static inline void *__kvm_vector_slot2addr(void *base,
   enum arm64_hyp_spectre_vector slot)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 26e573cdede3..7d62211109d9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1421,7 +1421,7 @@ static void cpu_prepare_hyp_mode(int cpu)
kvm_flush_dcache_to_poc(params, sizeof(*params));
 }
 
-static void cpu_init_hyp_mode(void)
+static void hyp_install_host_vector(void)
 {
struct kvm_nvhe_init_params *params;
struct arm_smccc_res res;
@@ -1439,6 +1439,11 @@ static void cpu_init_hyp_mode(void)
params = this_cpu_ptr_nvhe_sym(kvm_init_params);
arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init), 
virt_to_phys(params), &res);
WARN_ON(res.a0 != SMCCC_RET_SUCCESS);
+}
+
+static void cpu_init_hyp_mode(void)
+{
+   hyp_install_host_vector();
 
/*
 * Disabling SSBD on a non-VHE system requires us to enable SSBS
@@ -1481,7 +1486,10 @@ static void cpu_set_hyp_vector(void)
struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
void *vector = hyp_spectre_vector_selector[data->slot];
 
-   *this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector;
+   if (!is_protected_kvm_enabled())
+   *this_cpu_ptr_hyp_sym(kvm_hyp_vector) = (unsigned long)vector;
+   else
+   kvm_call_hyp_nvhe(__pkvm_cpu_set_vector, data->slot);
 }
 
 static void cpu_hyp_reinit(void)
@@ -1489,13 +1497,14 @@ static void cpu_hyp_reinit(void)

kvm_init_host_cpu_context(&this_cpu_ptr_hyp_sym(kvm_host_data)->host_ctxt);
 
cpu_hyp_reset();
-   cpu_set_hyp_vector();
 
if (is_kernel_in_hyp_mode())
kvm_timer_init_vhe();
else
cpu_init_hyp_mode();
 
+   cpu_set_hyp_vector();
+
kvm_arm_init_debug();
 
if (vgic_present)
@@ -1691,18 +1700,59 @@ static void teardown_hyp_mode(void)
}
 }
 
+static int do_pkvm_init(u32 hyp_va_bits)
+{
+   void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base);
+   int ret;
+
+   preempt_disable();
+   hyp_install_host_vector();
+   ret = kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
+   num_possible_cpus(), kern_hyp_va(per_cpu_base),
+   hyp_va_bits);
+   preempt_enable();
+
+   return ret;
+}
+
+static int kvm_hyp_init_protection(u32 hyp_va_bits)
+{
+   void *addr = phys_to_virt(hyp_mem_base);
+   int ret;
+
+   ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
+   if (ret)
+   return ret;
+
+   ret = do_pkvm_init(hyp_va_bits);
+   if (ret)
+   return ret;
+
+   free_hyp_pgds();
+
+   return 0;
+}
+
 /**
  * Inits Hyp-mode on all online CPUs
  */
 static int init_hyp_mode(void)
 {
+   u32 hyp_va_bits;
int cpu;
-   int err = 0;
+   int err = -ENOMEM;
+
+   /*
+* The protected Hyp-mode cannot be initialized if the memory pool
+* allocation has failed.
+*/
+   if (is_protected_kvm_enabled() && !hyp_mem_base)
+   goto out_err;
 
/*
 * Allocate Hyp PGD and setup Hyp identity mapping
 */
-   err = kvm_mmu_init();
+   err = kvm_mmu_init(&hyp_va_bits);
if (err)
goto out_err;
 
@@ -1818,6 +1868,14 @@ static int init_hyp_mode(void)
goto out_err;
}
 
+   if (is_protected_kvm_enabled()) {
+   err = kvm_hyp_init_prote

[PATCH v5 17/36] KVM: arm64: Prepare the creation of s1 mappings at EL2

2021-03-15 Thread Quentin Perret
When memory protection is enabled, the EL2 code needs the ability to
create and manage its own page-table. To do so, introduce a new set of
hypercalls to bootstrap a memory management system at EL2.

This leads to the following boot flow in nVHE Protected mode:

 1. the host allocates memory for the hypervisor very early on, using
the memblock API;

 2. the host creates a set of stage 1 page-table for EL2, installs the
EL2 vectors, and issues the __pkvm_init hypercall;

 3. during __pkvm_init, the hypervisor re-creates its stage 1 page-table
and stores it in the memory pool provided by the host;

 4. the hypervisor then extends its stage 1 mappings to include a
vmemmap in the EL2 VA space, hence allowing the use of the buddy
allocator introduced in a previous patch;

 5. the hypervisor jumps back in the idmap page, switches from the
host-provided page-table to the new one, and wraps up its
initialization by enabling the new allocator, before returning to
the host.

 6. the host can free the now unused page-table created for EL2, and
will now need to issue hypercalls to make changes to the EL2 stage 1
mappings instead of modifying them directly.

Note that for the sake of simplifying the review, this patch focuses on
the hypervisor side of things. In other words, this only implements the
new hypercalls, but does not make use of them from the host yet. The
host-side changes will follow in a subsequent patch.

Credits to Will for __pkvm_init_switch_pgd.

Acked-by: Will Deacon 
Co-authored-by: Will Deacon 
Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h |   4 +
 arch/arm64/include/asm/kvm_host.h|   7 +
 arch/arm64/include/asm/kvm_hyp.h |   8 ++
 arch/arm64/include/asm/kvm_pgtable.h |   2 +
 arch/arm64/kernel/image-vars.h   |  16 +++
 arch/arm64/kvm/hyp/Makefile  |   2 +-
 arch/arm64/kvm/hyp/include/nvhe/mm.h |  71 ++
 arch/arm64/kvm/hyp/nvhe/Makefile |   4 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S   |  27 
 arch/arm64/kvm/hyp/nvhe/hyp-main.c   |  49 +++
 arch/arm64/kvm/hyp/nvhe/mm.c | 173 +++
 arch/arm64/kvm/hyp/nvhe/setup.c  | 197 +++
 arch/arm64/kvm/hyp/pgtable.c |   2 -
 arch/arm64/kvm/hyp/reserved_mem.c|  92 +
 arch/arm64/mm/init.c |   3 +
 15 files changed, 652 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mm.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mm.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/setup.c
 create mode 100644 arch/arm64/kvm/hyp/reserved_mem.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 22d933e9b59e..db20a9477870 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,10 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2   12
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs  13
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs   14
+#define __KVM_HOST_SMCCC_FUNC___pkvm_init  15
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_mappings   16
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping17
+#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector18
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 459ee557f87c..b9d45a1f8538 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -781,5 +781,12 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
 
 int kvm_trng_call(struct kvm_vcpu *vcpu);
+#ifdef CONFIG_KVM
+extern phys_addr_t hyp_mem_base;
+extern phys_addr_t hyp_mem_size;
+void __init kvm_hyp_reserve(void);
+#else
+static inline void kvm_hyp_reserve(void) { }
+#endif
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index c0450828378b..ae55351b99a4 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -100,4 +100,12 @@ void __noreturn hyp_panic(void);
 void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
 #endif
 
+#ifdef __KVM_NVHE_HYPERVISOR__
+void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
+   phys_addr_t pgd, void *sp, void *cont_fn);
+int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
+   unsigned long *per_cpu_base, u32 hyp_va_bits);
+void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
+#endif
+
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index bbe840e430cb..bf7a3cc49420 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -11,6 +11,8 @@
 #incl

[PATCH v5 15/36] KVM: arm64: Factor out vector address calculation

2021-03-15 Thread Quentin Perret
In order to re-map the guest vectors at EL2 when pKVM is enabled,
refactor __kvm_vector_slot2idx() and kvm_init_vector_slot() to move all
the address calculation logic into a static inline function.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_mmu.h | 8 
 arch/arm64/kvm/arm.c | 9 +
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 90873851f677..5c42ec023cc7 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -168,6 +168,14 @@ phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
 int kvm_mmu_init(void);
 
+static inline void *__kvm_vector_slot2addr(void *base,
+  enum arm64_hyp_spectre_vector slot)
+{
+   int idx = slot - (slot != HYP_VECTOR_DIRECT);
+
+   return base + (idx * SZ_2K);
+}
+
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)   __flush_dcache_area((a), (l))
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3f8bcf8db036..26e573cdede3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1345,16 +1345,9 @@ static unsigned long nvhe_percpu_order(void)
 /* A lookup table holding the hypervisor VA for each vector slot */
 static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
 
-static int __kvm_vector_slot2idx(enum arm64_hyp_spectre_vector slot)
-{
-   return slot - (slot != HYP_VECTOR_DIRECT);
-}
-
 static void kvm_init_vector_slot(void *base, enum arm64_hyp_spectre_vector 
slot)
 {
-   int idx = __kvm_vector_slot2idx(slot);
-
-   hyp_spectre_vector_selector[slot] = base + (idx * SZ_2K);
+   hyp_spectre_vector_selector[slot] = __kvm_vector_slot2addr(base, slot);
 }
 
 static int kvm_init_vector_slots(void)
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 19/36] KVM: arm64: Use kvm_arch for stage 2 pgtable

2021-03-15 Thread Quentin Perret
In order to make use of the stage 2 pgtable code for the host stage 2,
use struct kvm_arch in lieu of struct kvm as the host will have the
former but not the latter.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 5 +++--
 arch/arm64/kvm/hyp/pgtable.c | 6 +++---
 arch/arm64/kvm/mmu.c | 2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index bf7a3cc49420..7945ec87eaec 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -162,12 +162,13 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 
addr, u64 size, u64 phys,
 /**
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
- * @kvm:   KVM structure representing the guest virtual machine.
+ * @arch:  Arch-specific KVM structure representing the guest virtual
+ * machine.
  * @mm_ops:Memory management callbacks.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
struct kvm_pgtable_mm_ops *mm_ops);
 
 /**
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 7ce0969203e8..3d79c8094cdd 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -879,11 +879,11 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 
addr, u64 size)
return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm *kvm,
+int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_arch *arch,
struct kvm_pgtable_mm_ops *mm_ops)
 {
size_t pgd_sz;
-   u64 vtcr = kvm->arch.vtcr;
+   u64 vtcr = arch->vtcr;
u32 ia_bits = VTCR_EL2_IPA(vtcr);
u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
@@ -896,7 +896,7 @@ int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct 
kvm *kvm,
pgt->ia_bits= ia_bits;
pgt->start_level= start_level;
pgt->mm_ops = mm_ops;
-   pgt->mmu= &kvm->arch.mmu;
+   pgt->mmu= &arch->mmu;
 
/* Ensure zeroed PGD pages are visible to the hardware walker */
dsb(ishst);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9d331bf262d2..41f9c03cbcc3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -457,7 +457,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu 
*mmu)
if (!pgt)
return -ENOMEM;
 
-   err = kvm_pgtable_stage2_init(pgt, kvm, &kvm_s2_mm_ops);
+   err = kvm_pgtable_stage2_init(pgt, &kvm->arch, &kvm_s2_mm_ops);
if (err)
goto out_free_pgtable;
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 20/36] KVM: arm64: Use kvm_arch in kvm_s2_mmu

2021-03-15 Thread Quentin Perret
In order to make use of the stage 2 pgtable code for the host stage 2,
change kvm_s2_mmu to use a kvm_arch pointer in lieu of the kvm pointer,
as the host will have the former but not the latter.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/asm/kvm_mmu.h  | 6 +-
 arch/arm64/kvm/mmu.c  | 8 
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index b9d45a1f8538..90565782ce3e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -94,7 +94,7 @@ struct kvm_s2_mmu {
/* The last vcpu id that ran on each physical CPU */
int __percpu *last_vcpu_ran;
 
-   struct kvm *kvm;
+   struct kvm_arch *arch;
 };
 
 struct kvm_arch_memory_slot {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index ce02a4052dcf..6f743e20cb06 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -272,7 +272,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu 
*mmu)
  */
 static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 {
-   write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2);
+   write_sysreg(kern_hyp_va(mmu->arch)->vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
/*
@@ -283,5 +283,9 @@ static __always_inline void __load_guest_stage2(struct 
kvm_s2_mmu *mmu)
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
+static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
+{
+   return container_of(mmu->arch, struct kvm, arch);
+}
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 41f9c03cbcc3..3257cadfab24 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -165,7 +165,7 @@ static void *kvm_host_va(phys_addr_t phys)
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, 
u64 size,
 bool may_block)
 {
-   struct kvm *kvm = mmu->kvm;
+   struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
phys_addr_t end = start + size;
 
assert_spin_locked(&kvm->mmu_lock);
@@ -470,7 +470,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu 
*mmu)
for_each_possible_cpu(cpu)
*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
 
-   mmu->kvm = kvm;
+   mmu->arch = &kvm->arch;
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
mmu->vmid.vmid_gen = 0;
@@ -552,7 +552,7 @@ void stage2_unmap_vm(struct kvm *kvm)
 
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 {
-   struct kvm *kvm = mmu->kvm;
+   struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
struct kvm_pgtable *pgt = NULL;
 
spin_lock(&kvm->mmu_lock);
@@ -621,7 +621,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t 
guest_ipa,
  */
 static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
phys_addr_t end)
 {
-   struct kvm *kvm = mmu->kvm;
+   struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
stage2_apply_range_resched(kvm, addr, end, 
kvm_pgtable_stage2_wrprotect);
 }
 
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 16/36] arm64: asm: Provide set_sctlr_el2 macro

2021-03-15 Thread Quentin Perret
We will soon need to turn the EL2 stage 1 MMU on and off in nVHE
protected mode, so refactor the set_sctlr_el1 macro to make it usable
for that purpose.

Acked-by: Will Deacon 
Suggested-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/assembler.h | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h 
b/arch/arm64/include/asm/assembler.h
index ca31594d3d6c..fb651c1f26e9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -676,11 +676,11 @@ USER(\label, ic   ivau, \tmp2)// 
invalidate I line PoU
.endm
 
 /*
- * Set SCTLR_EL1 to the passed value, and invalidate the local icache
+ * Set SCTLR_ELx to the @reg value, and invalidate the local icache
  * in the process. This is called when setting the MMU on.
  */
-.macro set_sctlr_el1, reg
-   msr sctlr_el1, \reg
+.macro set_sctlr, sreg, reg
+   msr \sreg, \reg
isb
/*
 * Invalidate the local I-cache so that any instructions fetched
@@ -692,6 +692,14 @@ USER(\label, icivau, \tmp2)// 
invalidate I line PoU
isb
 .endm
 
+.macro set_sctlr_el1, reg
+   set_sctlr sctlr_el1, \reg
+.endm
+
+.macro set_sctlr_el2, reg
+   set_sctlr sctlr_el2, \reg
+.endm
+
 /*
  * Check whether to yield to another runnable task from kernel mode NEON code
  * (which runs with preemption disabled).
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 22/36] KVM: arm64: Refactor kvm_arm_setup_stage2()

2021-03-15 Thread Quentin Perret
In order to re-use some of the stage 2 setup code at EL2, factor parts
of kvm_arm_setup_stage2() out into separate functions.

No functional change intended.
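
The reset.c hunk below is truncated in this archive; after the refactoring
the caller is expected to reduce to something like this (a sketch, not the
literal hunk):

  mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
  mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
  kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);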

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_pgtable.h | 26 +
 arch/arm64/kvm/hyp/pgtable.c | 32 +
 arch/arm64/kvm/reset.c   | 42 +++-
 3 files changed, 62 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h 
b/arch/arm64/include/asm/kvm_pgtable.h
index 7945ec87eaec..9cdc198ea6b4 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -13,6 +13,16 @@
 
 #define KVM_PGTABLE_MAX_LEVELS 4U
 
+static inline u64 kvm_get_parange(u64 mmfr0)
+{
+   u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
+   ID_AA64MMFR0_PARANGE_SHIFT);
+   if (parange > ID_AA64MMFR0_PARANGE_MAX)
+   parange = ID_AA64MMFR0_PARANGE_MAX;
+
+   return parange;
+}
+
 typedef u64 kvm_pte_t;
 
 /**
@@ -159,6 +169,22 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_get_vtcr() - Helper to construct VTCR_EL2
+ * @mmfr0: Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
+ * @mmfr1: Sanitized value of SYS_ID_AA64MMFR1_EL1 register.
+ * @phys_shift:   Value to set in VTCR_EL2.T0SZ.
+ *
+ * The VTCR value is common across all the physical CPUs on the system.
+ * We use system wide sanitised values to fill in different fields,
+ * except for Hardware Management of Access Flags. HA Flag is set
+ * unconditionally on all CPUs, as it is safe to run with or without
+ * the feature and the bit is RES0 on CPUs that don't support it.
+ *
+ * Return: VTCR_EL2 value
+ */
+u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift);
+
 /**
  * kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
  * @pgt:   Uninitialised page-table structure to initialise.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d79c8094cdd..296675e5600d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -9,6 +9,7 @@
 
 #include 
 #include 
+#include 
 
 #define KVM_PTE_VALID  BIT(0)
 
@@ -449,6 +450,37 @@ struct stage2_map_data {
struct kvm_pgtable_mm_ops   *mm_ops;
 };
 
+u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
+{
+   u64 vtcr = VTCR_EL2_FLAGS;
+   u8 lvls;
+
+   vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
+   vtcr |= VTCR_EL2_T0SZ(phys_shift);
+   /*
+* Use a minimum 2 level page table to prevent splitting
+* host PMD huge pages at stage2.
+*/
+   lvls = stage2_pgtable_levels(phys_shift);
+   if (lvls < 2)
+   lvls = 2;
+   vtcr |= VTCR_EL2_LVLS_TO_SL0(lvls);
+
+   /*
+* Enable the Hardware Access Flag management, unconditionally
+* on all CPUs. The feature is RES0 on CPUs without the support
+* and must be ignored by the CPUs.
+*/
+   vtcr |= VTCR_EL2_HA;
+
+   /* Set the vmid bits */
+   vtcr |= (get_vmid_bits(mmfr1) == 16) ?
+   VTCR_EL2_VS_16BIT :
+   VTCR_EL2_VS_8BIT;
+
+   return vtcr;
+}
+
 static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
struct stage2_map_data *data)
 {
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 47f3f035f3ea..6aae118c960a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -332,19 +332,10 @@ int kvm_set_ipa_limit(void)
return 0;
 }
 
-/*
- * Configure the VTCR_EL2 for this VM. The VTCR value is common
- * across all the physical CPUs on the system. We use system wide
- * sanitised values to fill in different fields, except for Hardware
- * Management of Access Flags. HA Flag is set unconditionally on
- * all CPUs, as it is safe to run with or without the feature and
- * the bit is RES0 on CPUs that don't support it.
- */
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
-   u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
-   u32 parange, phys_shift;
-   u8 lvls;
+   u64 mmfr0, mmfr1;
+   u32 phys_shift;
 
if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
return -EINVAL;
@@ -359,33 +350,8 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long 
type)
}
 
mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-   parange = cpuid_feature_extract_unsigned_field(mmfr0,
-   ID_AA64MMFR0_PARANGE_SHIFT);
-   if (parange > ID_AA64MMFR0_PARANGE_MAX)
-   parange = ID_AA64MMFR0_PARANGE_MAX;
-   vtcr |= parange << VTCR_EL2_PS_SHIFT;
-
-   vtcr |= VTCR_EL2_T0SZ(phys_shift);
-   /*
- 

[PATCH v5 13/36] KVM: arm64: Enable access to sanitized CPU features at EL2

2021-03-15 Thread Quentin Perret
Introduce the infrastructure in KVM to copy CPU feature registers into
EL2-owned data structures, so that sanitised values can be read directly
at EL2 in nVHE.

Given that only a subset of these feature registers is read by the
hypervisor, the ones that need to be copied are to be listed under
<asm/kvm_cpufeature.h> together with the name of the nVHE variable that
will hold the copy. This patch introduces only the infrastructure
enabling this copy; the first users will follow shortly.
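
Concretely, a register would be listed and consumed along these lines
(illustrative example only: ID_AA64MMFR0_EL1 is just a plausible first
user, it is not added by this patch):

  /* In <asm/kvm_cpufeature.h>: declare the EL2-side copy. */
  KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr0_el1);

  /* In sys_regs.c: have setup_kvm_el2_caps() populate it at boot. */
  struct __ftr_reg_copy_entry hyp_ftr_regs[] __initdata = {
        CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR0_EL1,
                             arm64_ftr_reg_id_aa64mmfr0_el1),
  };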

Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/cpufeature.h |  1 +
 arch/arm64/include/asm/kvm_cpufeature.h | 15 +++
 arch/arm64/include/asm/kvm_host.h   |  4 
 arch/arm64/kernel/cpufeature.c  | 13 +
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c   |  7 +++
 arch/arm64/kvm/sys_regs.c   | 19 +++
 6 files changed, 59 insertions(+)
 create mode 100644 arch/arm64/include/asm/kvm_cpufeature.h

diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index 61177bac49fa..a85cea2cac57 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -607,6 +607,7 @@ void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
 u64 __read_sysreg_by_encoding(u32 sys_id);
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {
diff --git a/arch/arm64/include/asm/kvm_cpufeature.h 
b/arch/arm64/include/asm/kvm_cpufeature.h
new file mode 100644
index ..3fd9f60d2180
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 - Google LLC
+ * Author: Quentin Perret 
+ */
+
+#include 
+
+#ifndef KVM_HYP_CPU_FTR_REG
+#if defined(__KVM_NVHE_HYPERVISOR__)
+#define KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg name
+#else
+#define KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg 
kvm_nvhe_sym(name)
+#endif
+#endif
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 06ca4828005f..459ee557f87c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -751,9 +751,13 @@ void kvm_clr_pmu_events(u32 clr);
 
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
+
+void setup_kvm_el2_caps(void);
 #else
 static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u32 clr) {}
+
+static inline void setup_kvm_el2_caps(void) {}
 #endif
 
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 066030717a4c..6252476e4e73 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1154,6 +1154,18 @@ u64 read_sanitised_ftr_reg(u32 id)
 }
 EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
+int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst)
+{
+   struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
+
+   if (!regp)
+   return -EINVAL;
+
+   *dst = *regp;
+
+   return 0;
+}
+
 #define read_sysreg_case(r)\
case r: val = read_sysreg_s(r); break;
 
@@ -2773,6 +2785,7 @@ void __init setup_cpu_features(void)
 
setup_system_capabilities();
setup_elf_hwcaps(arm64_elf_hwcaps);
+   setup_kvm_el2_caps();
 
if (system_supports_32bit_el0())
setup_elf_hwcaps(compat_elf_hwcaps);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c 
b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
index 879559057dee..cc829b9db0da 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
@@ -38,3 +38,10 @@ unsigned long __hyp_per_cpu_offset(unsigned int cpu)
elf_base = (unsigned long)&__per_cpu_start;
return this_cpu_base - elf_base;
 }
+
+/*
+ * Define the CPU feature registers variables that will hold the copies of
+ * the host's sanitized values.
+ */
+#define KVM_HYP_CPU_FTR_REG(name) struct arm64_ftr_reg name
+#include 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4f2f1e3145de..6c5d133689ae 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -2775,3 +2776,21 @@ void kvm_sys_reg_table_init(void)
/* Clear all higher bits. */
cache_levels &= (1 << (i*3))-1;
 }
+
+#define CPU_FTR_REG_HYP_COPY(id, name) \
+   { .sys_id = id, .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(name) }
+struct __ftr_reg_copy_entry {
+   u32 sys_id;
+   struct arm64_ftr_reg*dst;
+} hyp_ftr_regs[] __initdata = {
+};
+
+void __init setup_kvm_el2_caps(void)
+{
+   int i;
+
+   for (i = 0; i < ARRAY_SIZE(hyp_ftr_regs); i++) {
+   WARN(copy_ftr_reg(hyp_ftr_regs[i].s

[PATCH v5 21/36] KVM: arm64: Set host stage 2 using kvm_nvhe_init_params

2021-03-15 Thread Quentin Perret
Move the registers relevant to host stage 2 enablement to
kvm_nvhe_init_params to prepare the ground for enabling it in later
patches.

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kernel/asm-offsets.c|  3 +++
 arch/arm64/kvm/arm.c   |  5 +
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 14 +-
 arch/arm64/kvm/hyp/nvhe/switch.c   |  5 +
 5 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index db20a9477870..6dce860f8bca 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -158,6 +158,9 @@ struct kvm_nvhe_init_params {
unsigned long tpidr_el2;
unsigned long stack_hyp_va;
phys_addr_t pgd_pa;
+   unsigned long hcr_el2;
+   unsigned long vttbr;
+   unsigned long vtcr;
 };
 
 /* Translate a kernel address @ptr into its equivalent linear mapping */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a36e2fc330d4..8930b42f6418 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -120,6 +120,9 @@ int main(void)
   DEFINE(NVHE_INIT_TPIDR_EL2,  offsetof(struct kvm_nvhe_init_params, 
tpidr_el2));
   DEFINE(NVHE_INIT_STACK_HYP_VA,   offsetof(struct kvm_nvhe_init_params, 
stack_hyp_va));
   DEFINE(NVHE_INIT_PGD_PA, offsetof(struct kvm_nvhe_init_params, pgd_pa));
+  DEFINE(NVHE_INIT_HCR_EL2,offsetof(struct kvm_nvhe_init_params, hcr_el2));
+  DEFINE(NVHE_INIT_VTTBR,  offsetof(struct kvm_nvhe_init_params, vttbr));
+  DEFINE(NVHE_INIT_VTCR,   offsetof(struct kvm_nvhe_init_params, vtcr));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_CTX_SP,   offsetof(struct cpu_suspend_ctx, sp));
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7d62211109d9..d474eec606a3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1413,6 +1413,11 @@ static void cpu_prepare_hyp_mode(int cpu)
 
params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) 
+ PAGE_SIZE);
params->pgd_pa = kvm_mmu_get_httbr();
+   if (is_protected_kvm_enabled())
+   params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
+   else
+   params->hcr_el2 = HCR_HOST_NVHE_FLAGS;
+   params->vttbr = params->vtcr = 0;
 
/*
 * Flush the init params from the data cache because the struct will
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S 
b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index a2b8b6a84cbd..a50ad9e9fc05 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -83,11 +83,6 @@ SYM_CODE_END(__kvm_hyp_init)
  * x0: struct kvm_nvhe_init_params PA
  */
 SYM_CODE_START_LOCAL(___kvm_hyp_init)
-alternative_if ARM64_KVM_PROTECTED_MODE
-   mov_q   x1, HCR_HOST_NVHE_PROTECTED_FLAGS
-   msr hcr_el2, x1
-alternative_else_nop_endif
-
ldr x1, [x0, #NVHE_INIT_TPIDR_EL2]
msr tpidr_el2, x1
 
@@ -97,6 +92,15 @@ alternative_else_nop_endif
ldr x1, [x0, #NVHE_INIT_MAIR_EL2]
msr mair_el2, x1
 
+   ldr x1, [x0, #NVHE_INIT_HCR_EL2]
+   msr hcr_el2, x1
+
+   ldr x1, [x0, #NVHE_INIT_VTTBR]
+   msr vttbr_el2, x1
+
+   ldr x1, [x0, #NVHE_INIT_VTCR]
+   msr vtcr_el2, x1
+
ldr x1, [x0, #NVHE_INIT_PGD_PA]
phys_to_ttbr x2, x1
 alternative_if ARM64_HAS_CNP
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f3d0e9eca56c..979a76cdf9fb 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -97,10 +97,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
write_sysreg(mdcr_el2, mdcr_el2);
-   if (is_protected_kvm_enabled())
-   write_sysreg(HCR_HOST_NVHE_PROTECTED_FLAGS, hcr_el2);
-   else
-   write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+   write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 02/36] KVM: arm64: Link position-independent string routines into .hyp.text

2021-03-15 Thread Quentin Perret
From: Will Deacon 

Pull clear_page(), copy_page(), memcpy() and memset() into the nVHE hyp
code and ensure that we always execute the '__pi_' entry point on the
off chance that it changes in future.

[ qperret: Commit title nits and added linker script alias ]

Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/hyp_image.h |  3 +++
 arch/arm64/kernel/image-vars.h | 11 +++
 arch/arm64/kvm/hyp/nvhe/Makefile   |  4 
 3 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/hyp_image.h 
b/arch/arm64/include/asm/hyp_image.h
index 737ded6b6d0d..78cd77990c9c 100644
--- a/arch/arm64/include/asm/hyp_image.h
+++ b/arch/arm64/include/asm/hyp_image.h
@@ -56,6 +56,9 @@
  */
 #define KVM_NVHE_ALIAS(sym)kvm_nvhe_sym(sym) = sym;
 
+/* Defines a linker script alias for KVM nVHE hyp symbols */
+#define KVM_NVHE_ALIAS_HYP(first, sec) kvm_nvhe_sym(first) = kvm_nvhe_sym(sec);
+
 #endif /* LINKER_SCRIPT */
 
 #endif /* __ARM64_HYP_IMAGE_H__ */
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 5aa9ed1e9ec6..4eb7a15c8b60 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -104,6 +104,17 @@ KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
 /* PMU available static key */
 KVM_NVHE_ALIAS(kvm_arm_pmu_available);
 
+/* Position-independent library routines */
+KVM_NVHE_ALIAS_HYP(clear_page, __pi_clear_page);
+KVM_NVHE_ALIAS_HYP(copy_page, __pi_copy_page);
+KVM_NVHE_ALIAS_HYP(memcpy, __pi_memcpy);
+KVM_NVHE_ALIAS_HYP(memset, __pi_memset);
+
+#ifdef CONFIG_KASAN
+KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
+KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
+#endif
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index a6707df4f6c0..bc98f8e3d1da 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -9,10 +9,14 @@ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
 hostprogs := gen-hyprel
 HOST_EXTRACFLAGS += -I$(objtree)/include
 
+lib-objs := clear_page.o copy_page.o memcpy.o memset.o
+lib-objs := $(addprefix ../../../lib/, $(lib-objs))
+
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 hyp-main.o hyp-smp.o psci-relay.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 ../fpsimd.o ../hyp-entry.o ../exception.o
+obj-y += $(lib-objs)
 
 ##
 ## Build rules for compiling nVHE hyp code
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 03/36] arm64: kvm: Add standalone ticket spinlock implementation for use at hyp

2021-03-15 Thread Quentin Perret
From: Will Deacon 

We will soon need to synchronise multiple CPUs in the hyp text at EL2.
The qspinlock-based locking used by the host is overkill for this purpose
and relies on the kernel's "percpu" implementation for the MCS nodes.

Implement a simple ticket locking scheme based heavily on the code removed
by commit c11090474d70 ("arm64: locking: Replace ticket lock implementation
with qspinlock").

Signed-off-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 92 ++
 1 file changed, 92 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/spinlock.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h 
b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
new file mode 100644
index ..76b537f8d1c6
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * A stand-alone ticket spinlock implementation for use by the non-VHE
+ * KVM hypervisor code running at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon 
+ *
+ * Heavily based on the implementation removed by c11090474d70 which was:
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ARM64_KVM_NVHE_SPINLOCK_H__
+#define __ARM64_KVM_NVHE_SPINLOCK_H__
+
+#include 
+#include 
+
+typedef union hyp_spinlock {
+   u32 __val;
+   struct {
+#ifdef __AARCH64EB__
+   u16 next, owner;
+#else
+   u16 owner, next;
+#endif
+   };
+} hyp_spinlock_t;
+
+#define hyp_spin_lock_init(l)  \
+do {   \
+   *(l) = (hyp_spinlock_t){ .__val = 0 };  \
+} while (0)
+
+static inline void hyp_spin_lock(hyp_spinlock_t *lock)
+{
+   u32 tmp;
+   hyp_spinlock_t lockval, newval;
+
+   asm volatile(
+   /* Atomically increment the next ticket. */
+   ARM64_LSE_ATOMIC_INSN(
+   /* LL/SC */
+"  prfmpstl1strm, %3\n"
+"1:ldaxr   %w0, %3\n"
+"  add %w1, %w0, #(1 << 16)\n"
+"  stxr%w2, %w1, %3\n"
+"  cbnz%w2, 1b\n",
+   /* LSE atomics */
+"  mov %w2, #(1 << 16)\n"
+"  ldadda  %w2, %w0, %3\n"
+   __nops(3))
+
+   /* Did we get the lock? */
+"  eor %w1, %w0, %w0, ror #16\n"
+"  cbz %w1, 3f\n"
+   /*
+* No: spin on the owner. Send a local event to avoid missing an
+* unlock before the exclusive load.
+*/
+"  sevl\n"
+"2:wfe\n"
+"  ldaxrh  %w2, %4\n"
+"  eor %w1, %w2, %w0, lsr #16\n"
+"  cbnz%w1, 2b\n"
+   /* We got the lock. Critical section starts here. */
+"3:"
+   : "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+   : "Q" (lock->owner)
+   : "memory");
+}
+
+static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
+{
+   u64 tmp;
+
+   asm volatile(
+   ARM64_LSE_ATOMIC_INSN(
+   /* LL/SC */
+   "   ldrh%w1, %0\n"
+   "   add %w1, %w1, #1\n"
+   "   stlrh   %w1, %0",
+   /* LSE atomics */
+   "   mov %w1, #1\n"
+   "   staddlh %w1, %0\n"
+   __nops(1))
+   : "=Q" (lock->owner), "=&r" (tmp)
+   :
+   : "memory");
+}
+
+#endif /* __ARM64_KVM_NVHE_SPINLOCK_H__ */
-- 
2.31.0.rc2.261.g7f71774620-goog



[PATCH v5 08/36] KVM: arm64: Make kvm_call_hyp() a function call at Hyp

2021-03-15 Thread Quentin Perret
kvm_call_hyp() has some logic to issue a function call or a hypercall
depending on the EL at which the kernel is running. However, all the
code compiled under __KVM_NVHE_HYPERVISOR__ is guaranteed to only run
at EL2, which allows us to simplify.

Add ifdefery to kvm_host.h to simplify kvm_call_hyp() in .hyp.text.
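
In other words, the same call site now expands differently depending on
where it is built, e.g. (illustration):

  kvm_call_hyp(__kvm_flush_cpu_context, mmu);
        /* EL1 host build: SMCCC hypercall via kvm_call_hyp_nvhe()    */
        /* EL2 nVHE build: direct call, __kvm_flush_cpu_context(mmu)  */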

Acked-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_host.h | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 3d10e6527f7d..06ca4828005f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -591,6 +591,7 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
 
+#ifndef __KVM_NVHE_HYPERVISOR__
 #define kvm_call_hyp_nvhe(f, ...)  
\
({  \
struct arm_smccc_res res;   \
@@ -630,6 +631,11 @@ void kvm_arm_resume_guest(struct kvm *kvm);
\
ret;\
})
+#else /* __KVM_NVHE_HYPERVISOR__ */
+#define kvm_call_hyp(f, ...) f(__VA_ARGS__)
+#define kvm_call_hyp_ret(f, ...) f(__VA_ARGS__)
+#define kvm_call_hyp_nvhe(f, ...) f(__VA_ARGS__)
+#endif /* __KVM_NVHE_HYPERVISOR__ */
 
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
-- 
2.31.0.rc2.261.g7f71774620-goog


