On Friday, November 1, 2019 at 16:36:16 UTC+9, Jan Kiszka wrote:
>
> On 31.10.19 06:57, Chung-Fan Yang wrote: 
> > 
> >     Interesting findings already, but I'm afraid we will need to dig 
> >     deeper: 
> >     Can you describe what all is part of your measured latency path? 
> > 
> > 
> > I measured using an oscilloscope and a function generator. 
> > I am using MMIO GPIOs. The application calls a system call and waits 
> > for an interrupt on a certain GPIO. 
> > When I send a pulse to the GPIO, the IRQ handler releases a semaphore, 
> > which in turn triggers the scheduler and wakes up the application, 
> > which sends another pulse to another GPIO using MMIO. 
> > 
> > FG -> Serial -> APIC -> RTOS IRQ Hnd -> Scheduler -> Application -> 
> > Serial -> OSC 
> > 
> > The timing difference between these two pulses is measured. 
> > 
> > Because of the waiting mechanism used, receiving the pulse involves 
> > the system call / semaphore / interrupt handling of the RTOS. 
> > On the other hand, sending doesn't use any RTOS features. 
> > 
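(Side note for clarity: the wait path above is roughly the following.
This is a minimal sketch assuming a FreeRTOS-style semaphore API; the
gpio_write()/GPIO_OUT names are illustrative helpers, not the actual
driver code.)

#include "FreeRTOS.h"
#include "semphr.h"

#define GPIO_OUT 1                       /* illustrative pin id */
extern void gpio_write(int gpio, int level); /* MMIO store, no RTOS */

static SemaphoreHandle_t gpio_sem;

/* IRQ handler: fires on the input pulse, releases the semaphore. */
void gpio_irq_handler(void)
{
        BaseType_t woken = pdFALSE;

        xSemaphoreGiveFromISR(gpio_sem, &woken);
        portYIELD_FROM_ISR(woken);       /* kicks the scheduler */
}

/* Application task: the system-call side of the measured path. */
void app_task(void *arg)
{
        (void)arg;
        for (;;) {
                /* Blocks until the IRQ handler gives the semaphore. */
                xSemaphoreTake(gpio_sem, portMAX_DELAY);
                /* Reply pulse goes out via plain MMIO. */
                gpio_write(GPIO_OUT, 1);
                gpio_write(GPIO_OUT, 0);
        }
}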
> >     Do you just run code in guest mode or do you also trigger VM exits, 
> >     e.g. to 
> >     issue ivshmem interrupts to a remote side? 
> > 
> > 
> > I tried to instrument the system. 
> > So far, no additional interrupts are sent or received during the 
> > whole process. 
> > VM exits do occur for EOI (systick and serial IRQ) and when I toggle 
> > the TSC-deadline timer enable/disable bit of the APIC MSR. 
> > The whole process is not related to any ivshmem operations. 
>
> Use x2APIC in your guest, and you will get rid of those VM exits (due to 
> xAPIC MMIO interception). But that's an unrelated optimization. 
>
>
Thanks for the hint.
I overrode the BIOS setting and enabled it.
There are no VM exits now.
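For reference, the switch boils down to setting the EXTD bit in
IA32_APIC_BASE, after which APIC accesses become MSR reads/writes
instead of intercepted xAPIC MMIO. A minimal sketch in C (MSR numbers
and bits are from the Intel SDM; rdmsr()/wrmsr() are the usual
inline-asm helpers):

#include <stdint.h>

#define MSR_IA32_APIC_BASE  0x1b
#define APIC_BASE_EN        (1ULL << 11)  /* APIC global enable */
#define APIC_BASE_EXTD      (1ULL << 10)  /* x2APIC mode */
#define MSR_X2APIC_EOI      0x80b

static inline uint64_t rdmsr(uint32_t msr)
{
        uint32_t lo, hi;

        asm volatile("rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr));
        return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
        asm volatile("wrmsr" : : "c" (msr), "a" ((uint32_t)val),
                     "d" ((uint32_t)(val >> 32)));
}

void enable_x2apic(void)
{
        /* xAPIC -> x2APIC is a legal transition while EN is set. */
        wrmsr(MSR_IA32_APIC_BASE,
              rdmsr(MSR_IA32_APIC_BASE) | APIC_BASE_EN | APIC_BASE_EXTD);
}

static inline void apic_eoi(void)
{
        /* Plain MSR write; no MMIO trap into the hypervisor. */
        wrmsr(MSR_X2APIC_EOI, 0);
}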
 

> > 
> >     Maybe you can sample some latencies along the critical path so 
> >     that we have a better picture about where we lose time, overall 
> >     or rather on specific actions. 
> > 
> > 
> > Basically, it is an overall slowdown. 
> > But code in the scheduler and the application slows down more than in 
> > other places. 
> > 
> > BTW, I tested again with a partially working setup of <kernel 
> > 4.19/Jailhouse v0.11/old ivshmem2>. 
> > Currently, I cannot get my application running, due to some mystery, but 
> > I am observing some slowdown. 
> > Pinging the RTOS over ivshmem-net, the RTT shows about 2x the latency: 
> >  * <kernel 4.19/Jailhouse v0.11/old ivshmem2>: ~0.060ms 
> >  * <kernel 4.19/Jailhouse v0.11/new ivshmem2>: ~0.130ms 
> > 
>
> Sounds as if we have some caching-related problem. You could enable 
> access to the perf MSRs (small patch to the MSR bitmap in vmx.c) and 
> check if you see excessive cache misses in the counters. 
>
>
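For anyone picking this up later: checking the counters could look
roughly like the sketch below, once the guest is let through the MSR
bitmap as described. This is a hypothetical sketch using the
architectural "LLC Misses" event (event 0x2E, umask 0x41, per the Intel
SDM); rdmsr()/wrmsr() are the same inline-asm helpers as in the x2APIC
sketch above:

#define MSR_IA32_PMC0             0x0c1
#define MSR_IA32_PERFEVTSEL0      0x186
#define MSR_IA32_PERF_GLOBAL_CTRL 0x38f

#define EVTSEL_LLC_MISSES  (0x2e | (0x41 << 8)) /* event | umask */
#define EVTSEL_USR         (1U << 16)           /* count in ring 3 */
#define EVTSEL_OS          (1U << 17)           /* count in ring 0 */
#define EVTSEL_EN          (1U << 22)           /* enable counter */

void start_llc_miss_counter(void)
{
        wrmsr(MSR_IA32_PMC0, 0);
        wrmsr(MSR_IA32_PERFEVTSEL0,
              EVTSEL_LLC_MISSES | EVTSEL_USR | EVTSEL_OS | EVTSEL_EN);
        wrmsr(MSR_IA32_PERF_GLOBAL_CTRL, 1);    /* enable PMC0 */
}

uint64_t read_llc_misses(void)
{
        return rdmsr(MSR_IA32_PMC0);
}

Sampling the counter before and after the critical path should show
whether the ~2x latency correlates with excessive misses.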
I have been quite busy lately, so I might leave this problem as is and 
revisit it later.

I will update the thread when I make new discoveries.

Yang.
