Hi Dan,
On 02/23/2017 10:54 AM, Dan Zach wrote:
> So /proc/timer_list is below.
> In the dts I configured both the ARM internal timer and the SoC Tegra
> timer (I don't think the latter actually works, though):
>
> timer@0,60005000 {
>         compatible = "nvidia,tegra124-timer",
>                      "nvidia,tegra20-timer";
>         reg = <0x60005000 0x400>;
>         interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
>                      <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
>                      <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
>                      <GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>,
>                      <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
>                      <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
>         clocks = <&tegra_car 5>;
> };
Don't use this one. It requires its clock to be ungated when it is
enabled, so you would also need to share the whole Tegra CAR with the
root cell (which doesn't work). Partitioning of clock controllers is
currently not supported by Jailhouse. I've developed a paravirtual
clock-and-reset (CAR) controller for Jailhouse, and we already had some
off-list discussions, as we will definitely have to address these issues
in the future; for the moment, however, clock gating is not supported.
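If the Tegra timer node is still present in the device tree of your
non-root cell, it should be enough to mark it disabled there and rely on
the ARM architected timer alone. An untested sketch, to be adapted to
your actual DT:

```dts
timer@0,60005000 {
        compatible = "nvidia,tegra124-timer",
                     "nvidia,tegra20-timer";
        reg = <0x60005000 0x400>;
        /* keep the non-root kernel from probing this timer */
        status = "disabled";
};
```

With status = "disabled", the kernel will not bind a driver to the node,
so no clock from the Tegra CAR is requested.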
>
>
> timer {
>         compatible = "arm,armv7-timer";
>         interrupts = <GIC_PPI 13
>                       (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
>                      <GIC_PPI 14
>                       (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
>                      <GIC_PPI 11
>                       (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
>                      <GIC_PPI 10
>                       (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
> };
Yep, use this timer. Should be absolutely sufficient.
>
>
> Can you tell if below looks ok?
Ok, so this is the timer list of the non-root RT Linux, right?
Does your non-root RT Linux contain this patch [1]?
Ralf
[1]
http://git.kiszka.org/?p=linux.git;a=commitdiff;h=70671bb3cd47fe70b9a7076625e09d0411d58de9
>
> Timer List Version: v0.8
> HRTIMER_MAX_CLOCK_BASES: 4
> now at 145981795048 nsecs
>
> cpu: 0
> clock 0:
> .base: ceec8980
> .index: 0
> .resolution: 1 nsecs
> .get_time: ktime_get
> .offset: 0 nsecs
> active timers:
> #0: <ceec8c50>, tick_sched_timer, S:01, tick_nohz_restart, swapper/0/0
> # expires at 145982000000-145982000000 nsecs [in 204952 to 204952 nsecs]
> #1: <cdc25e90>, hrtimer_wakeup, S:01, schedule_hrtimeout_range_clock,
> bus_10ms/40
> # expires at 145982000000-145982000001 nsecs [in 204952 to 204953 nsecs]
> #2: def_rt_bandwidth, sched_rt_period_timer, S:01, enqueue_task_rt,
> ktimersoftd/0/4
> # expires at 146026000000-146026000000 nsecs [in 44204952 to 44204952
> nsecs]
> #3: <ceec8d70>, watchdog_timer_fn, S:01, watchdog_enable, watchdog/0/14
> # expires at 148032000000-148032000000 nsecs [in 2050204952 to
> 2050204952 nsecs]
> #4: sched_clock_timer, sched_clock_poll, S:01, sched_clock_postinit,
> swapper/0/0
> # expires at 4398046511096-4398046511096 nsecs [in 4252064716048 to
> 4252064716048 nsecs]
> clock 1:
> .base: ceec89c0
> .index: 1
> .resolution: 1 nsecs
> .get_time: ktime_get_real
> .offset: 0 nsecs
> active timers:
> clock 2:
> .base: ceec8a00
> .index: 2
> .resolution: 1 nsecs
> .get_time: ktime_get_boottime
> .offset: 0 nsecs
> active timers:
> clock 3:
> .base: ceec8a40
> .index: 3
> .resolution: 1 nsecs
> .get_time: ktime_get_clocktai
> .offset: 0 nsecs
> active timers:
> .expires_next : 145985000000 nsecs
> .hres_active : 1
> .idle_jiffies : 4294667614
> .idle_calls : 4
> .idle_sleeps : 2
> .idle_entrytime : 319965499 nsecs
> .idle_waketime : 319965499 nsecs
> .idle_exittime : 319985583 nsecs
> .idle_sleeptime : 1838332 nsecs
> .iowait_sleeptime: 0 nsecs
> .last_jiffies : 4294667615
> .next_timer : 329000000
> .idle_expires : 329000000 nsecs
> jiffies: 4294813280
>
> Tick Device: mode: 1
> Broadcast device
> Clock Event Device: timer0
> max_delta_ns: 536870948001
> min_delta_ns: 1001
> mult: 4294967
> shift: 32
> mode: 1
> next_event: 9223372036854775807 nsecs
> set_next_event: tegra_timer_set_next_event
> shutdown: tegra_timer_shutdown
> periodic: tegra_timer_set_periodic
> oneshot: tegra_timer_shutdown
> resume: tegra_timer_shutdown
> event_handler: tick_handle_oneshot_broadcast
> retries: 0
>
> tick_broadcast_mask: 0
> tick_broadcast_oneshot_mask: 0
>
> Tick Device: mode: 1
> Per CPU device: 0
> Clock Event Device: arch_sys_timer
> max_delta_ns: 178956969028
> min_delta_ns: 1250
> mult: 51539608
> shift: 32
> mode: 3
> next_event: 145985000000 nsecs
> set_next_event: arch_timer_set_next_event_virt
> shutdown: arch_timer_shutdown_virt
> oneshot stopped: arch_timer_shutdown_virt
> event_handler: hrtimer_interrupt
> retries: 4
>
>
> .nr_events : 145926
> .nr_retries : 1
> .nr_hangs : 0
> .max_hang_time : 0
> .nohz_mode : 2
> .last_tick : 319000000 nsecs
> .tick_stopped : 0
>
>
> On 23 February 2017 at 19:22, Ralf Ramsauer
> <[email protected]> wrote:
>
> On 02/23/2017 08:17 AM, Henning Schild wrote:
> > On Thu, 23 Feb 2017 06:00:15 -0800
> > Dan Zach <[email protected]> wrote:
> >
> >> Dear forum,
> >>
> >> On the Jetson TK1 inmate I use Linux 4.8.2 with the PREEMPT-RT patch.
> >> I run a 1 kHz high-priority task, driven by hrtimer events, that
> >> measures its own jitter.
> Run jailhouse cell stats for your RT cell and watch the MMIO traps. I
> guess it will trap heavily under RT: RT Linux accesses the GICD a lot,
> and the GICD has to be emulated by the hypervisor.
> >>
> >> In the worst case I get 400 µs of jitter on a 1 ms tick - not so bad, but the
> Uh, okay? With the root cell idling? CONFIG_HZ_1000?
>
> Uhm -- What's your clocksource? Could you please post /proc/timer_list?
> >> interesting thing is that the jitter stays as low as 50 µs until I
> >> start activity on the root cell, especially the GUI.
> >>
> >> So the question is: if the non-root cell is completely isolated from
> >> the root - separate physical memory, PPI from its local timer - where
> >> can this cross-influence come from?
> >
> > Well, the cells are not completely isolated; some shared resources
> > remain. The jitter you are seeing is caused by those, i.e. caches and
> > buses. And GPU workloads would likely stress exactly those.
> Yep, mainly the shared system bus and caches. Other mechanisms, like
> MMIO dispatching and IRQ reinjection, also cause (at least measurable)
> latencies. I already did some measurements to get worst-case latencies,
> but interestingly I never hit anything like that.
>
> Do those latencies also occur with a PREEMPT-RT-patched kernel without
> Jailhouse when the GPU gets stressed?
>
> Ralf
> >
> > Henning
> >
> >> Thanks
> >> Dan
> >>
> >
>
> --
> You received this message because you are subscribed to a topic in
> the Google Groups "Jailhouse" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/jailhouse-dev/RfucfkcbNQU/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> [email protected].
> For more options, visit https://groups.google.com/d/optout.
>