Hi Dario,

Thank you very much for your reply! It is very kind of you :)

Yes, I have read the blog entry; that is actually where I started!

To define my "scheduling overhead", I should first explain what I am
trying to accomplish:

I am currently using Xen 4.5 to help us understand more about real-time
virtualization. While experimenting with Xen 4.5, I observed that the
guest VM tends to perform worse when there are a lot of small jobs in a
given period.

I think the reason is that, even though each job is small, there are so
many of them that the accumulated per-job scheduling overhead degrades
performance.
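
To give you a better idea, the workload looks roughly like the minimal
sketch below. This is not my actual benchmark, and the period and job
length are made-up numbers; the point is just that the task wakes up very
often and does only a tiny amount of work each time, so every job implies
one more scheduling decision:

#include <time.h>

#define PERIOD_NS 1000000L   /* one small job released every 1 ms (made-up number) */
#define JOB_NS      50000L   /* each job is ~50 us of busy work (made-up number)   */

/* Busy-loop for roughly 'ns' nanoseconds, to emulate one small job. */
static void small_job(long ns)
{
    struct timespec start, now;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000000000L
             + (now.tv_nsec - start.tv_nsec) < ns);
}

int main(void)
{
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        small_job(JOB_NS);

        /* Sleep until the next release: every one of these wakeups
         * forces another scheduling decision. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec++;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}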

So I want to measure the scheduling overhead for the tasks.

So I guess my definition of "scheduling overhead" in this case is the time
interval between the clock interrupt that triggers scheduling for a job
and the moment that job actually starts running.
Basically, I want to see how long it takes to schedule a job versus how
long it takes to actually run the job.
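
From inside the guest, the closest approximation I can think of is to
compare the time each job was supposed to be released with the time it
actually starts running. Here is a minimal sketch of what I mean (again,
the numbers are made up, and this captures the wakeup-plus-scheduling
latency as seen by the guest, not the hypervisor-side cost by itself):

#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000L   /* release one job every 1 ms (made-up number) */

static long ns_diff(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000000L
         + (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
    struct timespec release, actual;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &release);
    for (i = 0; i < 1000; i++) {
        /* Compute the next intended release time of the job. */
        release.tv_nsec += PERIOD_NS;
        if (release.tv_nsec >= 1000000000L) {
            release.tv_sec++;
            release.tv_nsec -= 1000000000L;
        }

        /* Sleep until the release time, then check when we really woke up. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &release, NULL);
        clock_gettime(CLOCK_MONOTONIC, &actual);

        /* actual - release ~= wakeup + scheduling latency for this job,
         * as observed from inside the guest. */
        printf("job %d: start latency = %ld ns\n", i, ns_diff(&release, &actual));

        /* ... the actual (small) job would run here ... */
    }
    return 0;
}

If this per-job latency grows as I pack more jobs into a period, that
would match what I am seeing.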

So, in that case, is just looking at how the vcpus are scheduled not
enough? If you think this concept is incorrect, please correct me, thanks!


Oh, and yes, I can get a similar output file by running:

 xentrace -D -e 0x0002f000 trace.bin
 xenalyze --dump-all trace.bin
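
In case it helps clarify what I am after: once I know which events to
look at, my plan is simply to parse the timestamps in the first column of
the dump and compute the deltas between consecutive records, along the
lines of the hypothetical sketch below (it assumes the first column is a
time in seconds, as it appears to be in your excerpt):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical post-processing sketch: read a "xenalyze --dump-all" file
 * on stdin and print the delta between consecutive record timestamps.
 * It assumes the first column is a time in seconds, possibly preceded
 * by a "]" marker. */
int main(void)
{
    char line[512];
    double prev = -1.0;

    while (fgets(line, sizeof(line), stdin)) {
        char *p = line;

        /* Skip the "]" marker and leading spaces, if present. */
        while (*p == ']' || *p == ' ')
            p++;

        double t = strtod(p, NULL);
        if (t <= 0.0)
            continue;   /* not a record line with a timestamp */

        if (prev >= 0.0)
            printf("delta = %.9f s\n", t - prev);
        prev = t;
    }
    return 0;
}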

That brings me to two more questions about how to read this dump-all output:

First question: how do I read those lines? For example:

 0.000019969 x----------  -- d32767v0   22805(2:2:805) 5 [ 7fff 0 0 0 0 ]

I can only tell that the first number (0.000019969) is the time, and that
d32767v0 means "domain 32767, vcpu 0".
What do 22805(2:2:805), the 5, and [ 7fff 0 0 0 0 ] each stand for?
Hopefully they contain the information I need.


Second question: why are there domains 32768 and 32767? I never created
those domains, and they take up most of the dump-all file.


Again, thank you very much for your reply. Sorry for the long message, and
apologies if my reply format is wrong; I am new to mailing lists too.



Victor





On Wed, Oct 7, 2015 at 3:32 AM, Dario Faggioli <raist...@linux.it> wrote:

> On Tue, 2015-10-06 at 20:46 -0700, Yu-An(Victor) Chen wrote:
> > Hi,
> >
> Hi,
>
> > I am new to xen environment
> >
> Welcome :-)
>
> > and I am wondering how to trace scheduling overhead of guest vm using
> > xentrace/xenalyze ?
> >
> Have you had a look at this blog entry? It's from some time ago, but
> the basic concepts should still hold.
>
>  https://blog.xenproject.org/2012/09/27/tracing-with-xentrace-and-xenalyze/
>
> Of course, that tells how to use the tools (xentrace and xenalyze), not
> how to "trace scheduling overhead". For doing so, I think you should
> first define what it is really that you want to measure (many
> definitions of scheduling overhead are possible) and, only after that,
> check whether it is possible to do that with existing tools.
>
> > I have tried using $xentrace -D -e all -S 256 {filename}
> >
> > and then use various xenalyze options, but most of them gave me empty
> > results, and I don't really see where I can get scheduling overhead so
>
> >
> Mmm... The command above works for me (I just tried). However, '-e all'
> produces a lot of data, and it may actually take xenalyze a while to
> parse it.
>
> Maybe, if you are interested in tracing scheduling-related events, use
> the appropriate event mask?
>
> > I can see how the jobs are scheduled, their execution times, and so
> > on. Please point me in a direction. Thank you!
> >
> Again, you should detail more what you think 'scheduling overhead' is.
> If you are interested in seeing how and where the vcpus are scheduled,
> you want a dump.
>
> With this:
>
>  xentrace -D -e 0x0002f000 trace.bin
>
> and then this:
>
>  xenalyze --dump-all trace.bin
>
> Here's an excerpt of what I get:
>
> ]  0.000019647 x----------  -- d32767v0   22802(2:2:802) 0 [ ]
> ]  0.000019969 x----------  -- d32767v0   22805(2:2:805) 5 [ 7fff 0 0 0 0 ]
>    0.000020474 x----------  -- d32767v0 runstate_continue d32767v0 running->running
> ]  0.000021170 ----x------  -- d32767v4   22802(2:2:802) 0 [ ]
> ]  0.000021370 ----x------  -- d32767v4   22805(2:2:805) 5 [ 47fff 0 0 0 0 ]
>    0.000021817 ----x------  -- d32767v4 runstate_continue d32767v4 running->running
> ]  0.000022235 ---------x-  -- d32767v9   22802(2:2:802) 0 [ ]
> ]  0.000022467 ---------x-  -- d32767v9   22805(2:2:805) 5 [ 97fff 0 0 0 0 ]
>    0.000022983 ---------x-  -- d32767v9 runstate_continue d32767v9 running->running
> ]  0.000023438 --------x--  -- d32767v8   22802(2:2:802) 0 [ ]
> ]  0.000023638 --------x--  -- d32767v8   22805(2:2:805) 5 [ 87fff 0 0 0 0 ]
>
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
