Hi Julien,

On Fri, 2020-06-12 at 13:45 +0100, Julien Grall wrote:
> Hi Volodymyr,
> 
> On 12/06/2020 12:44, Volodymyr Babchuk wrote:
> > On Fri, 2020-06-12 at 06:57 +0200, Jürgen Groß wrote:
> > > On 12.06.20 02:22, Volodymyr Babchuk wrote:
> > > > As the scheduler code now collects the time spent in IRQ handlers and
> > > > in do_softirq(), we can present those values to userspace tools like
> > > > xentop, so a system administrator can see how the system behaves.
> > > > 
> > > > We update the counters only in sched_get_time_correction() to minimize
> > > > the number of spinlock acquisitions. As atomic_t is only 32 bits wide,
> > > > it cannot hold times with nanosecond precision, so we need to use
> > > > 64-bit variables and protect them with a spinlock.
> > > > 
> > > > Signed-off-by: Volodymyr Babchuk <volodymyr_babc...@epam.com>
> > > > ---
> > > >    xen/common/sched/core.c     | 17 +++++++++++++++++
> > > >    xen/common/sysctl.c         |  1 +
> > > >    xen/include/public/sysctl.h |  4 +++-
> > > >    xen/include/xen/sched.h     |  2 ++
> > > >    4 files changed, 23 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> > > > index a7294ff5c3..ee6b1d9161 100644
> > > > --- a/xen/common/sched/core.c
> > > > +++ b/xen/common/sched/core.c
> > > > @@ -95,6 +95,10 @@ static struct scheduler __read_mostly ops;
> > > >    
> > > >    static bool scheduler_active;
> > > >    
> > > > +static DEFINE_SPINLOCK(sched_stat_lock);
> > > > +s_time_t sched_stat_irq_time;
> > > > +s_time_t sched_stat_hyp_time;
> > > > +
> > > >    static void sched_set_affinity(
> > > >        struct sched_unit *unit, const cpumask_t *hard, const cpumask_t 
> > > > *soft);
> > > >    
> > > > @@ -994,9 +998,22 @@ s_time_t sched_get_time_correction(struct 
> > > > sched_unit *u)
> > > >                break;
> > > >        }
> > > >    
> > > > +    spin_lock_irqsave(&sched_stat_lock, flags);
> > > > +    sched_stat_irq_time += irq;
> > > > +    sched_stat_hyp_time += hyp;
> > > > +    spin_unlock_irqrestore(&sched_stat_lock, flags);
> > > 
> > > Please don't use a lock. Just use add_sized() instead, which will add
> > > atomically.
> > 
> > Looks like Arm does not support 64-bit variables there.
> >
> > Julien, I believe this is an Armv7 limitation? Should Armv8 work with
> > 64-bit atomics?
> 
> 64-bit atomics can work on both Armv7 and Armv8 :). They just haven't been
> plumbed through yet.

Wow, I didn't know that Armv7 is capable of that.
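
Presumably that is the LDREXD/STREXD exclusive pair on Armv7 (and LDXR/STXR or
the LSE atomics on Armv8)? As a standalone C illustration, using GCC's
__atomic builtins rather than anything in Xen's API, the kind of 64-bit add
that would need wrapping is roughly:

    #include <stdint.h>

    /*
     * Illustration only, not Xen code: built for armv7-a, GCC lowers this
     * 64-bit atomic add to an LDREXD/STREXD retry loop; on Armv8 it becomes
     * LDXR/STXR or an LSE STADD. A Xen atomic64_t or 64-bit add_sized()
     * would wrap an equivalent sequence.
     */
    static inline void stat_add_ns(uint64_t *counter, uint64_t delta)
    {
        __atomic_fetch_add(counter, delta, __ATOMIC_RELAXED);
    }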

> I am happy to write a patch if you need atomic64_t or even a 64-bit 
> add_sized().

That would be cool, certainly. But it looks like the x86 code does not have an
atomic64_t implementation either, so there would be lots of changes just for
one use case. I don't know whether it is worth it.
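
Just to make the alternative concrete, what Jürgen suggests would reduce the
update to something like the sketch below (the variables are the ones already
declared in the patch; this should work on x86 today, but on Arm it depends on
the 64-bit add_sized()/atomic support that has not been plumbed through yet):

    /*
     * Lock-free variant suggested by Jürgen, replacing the
     * spin_lock_irqsave()/spin_unlock_irqrestore() section in the patch.
     * Relies on add_sized() handling 8-byte operands, which Arm does not
     * provide yet.
     */
    add_sized(&sched_stat_irq_time, irq);
    add_sized(&sched_stat_hyp_time, hyp);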

Let's finish discussing the other parts of the series first. If it turns out
that atomic64_t is absolutely necessary, I'll come back to you.
Thanks for the offer anyway.
