Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Daniel P. Berrange
On Fri, Jan 20, 2012 at 11:22:27AM +0100, Jan Kiszka wrote:
 On 2012-01-20 11:14, Marcelo Tosatti wrote:
  On Thu, Jan 19, 2012 at 07:01:44PM +0100, Jan Kiszka wrote:
  On 2012-01-19 18:53, Marcelo Tosatti wrote:
  What problems does it cause, and in which scenarios? Can't they be
  fixed?
 
  If the guest compensates for lost ticks, and KVM reinjects them, guest
  time advances faster than it should, to the extent that NTP fails to
  correct it. This is the case with RHEL4.
 
  But, for example, a v2.4 kernel (or Windows with a non-ACPI HAL) does not
  compensate. In that case you want KVM to reinject.
 
  I don't know of any other way to fix this.
 
  OK, I see. The old unsolved problem of guessing what is being executed.
 
  Then the next question is how and where to control this. Conceptually,
  there should rather be a global switch saying "compensate for lost ticks
  of periodic timers: yes/no" - instead of a per-timer knob. Didn't we
  discuss something like this before?
  
  I don't see the advantage of a global control versus per device
  control (in fact it lowers flexibility).
 
 Usability. Users should not have to care about individual tick-based
 clocks. They care about "my OS requires lost-tick compensation", yes or no.

FYI, at the libvirt level we model policy against individual timers, for
example:

  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup' track='guest'/>
    <timer name='pit' tickpolicy='delay'/>
  </clock>


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Jamie Lokier
Jan Kiszka wrote:
 Usability. Users should not have to care about individual tick-based
 clocks. They care about "my OS requires lost-tick compensation", yes or no.

Conceivably, an OS may require lost-tick compensation depending on the
boot options given to the OS telling it which clock sources to use.

However, I like the idea of a global default which you can set and
which all devices inherit unless overridden per device.

-- Jamie


Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Jan Kiszka
On 2012-01-20 11:39, Jamie Lokier wrote:
 Jan Kiszka wrote:
 Usability. Users should not have to care about individual tick-based
 clocks. They care about "my OS requires lost-tick compensation", yes or no.
 
 Conceivably an OS may require lost ticks compensation depending on
 boot options given to the OS telling it which clock sources to use.
 
 However I like the idea of a global default, which you can set and all
 the devices inherit it unless overridden in each device.

OK, this sounds like a good option: add per-device control but also
introduce a global default. The latter can still be added later on.

The only problem is that we should already come up with the right,
generic control-switch template. reinject=on|off, as I did for now
for the PIT, is definitely not optimal.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Jan Kiszka
On 2012-01-20 11:25, Daniel P. Berrange wrote:
 
 FYI, at the libvirt level we model policy against individual timers, for
 example:
 
   <clock offset='localtime'>
     <timer name='rtc' tickpolicy='catchup' track='guest'/>
     <timer name='pit' tickpolicy='delay'/>
   </clock>

Are the various modes of tickpolicy fully specified somewhere?

Jan



Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Daniel P. Berrange
On Fri, Jan 20, 2012 at 12:13:48PM +0100, Jan Kiszka wrote:
 On 2012-01-20 11:25, Daniel P. Berrange wrote:
  On Fri, Jan 20, 2012 at 11:22:27AM +0100, Jan Kiszka wrote:
  On 2012-01-20 11:14, Marcelo Tosatti wrote:
  On Thu, Jan 19, 2012 at 07:01:44PM +0100, Jan Kiszka wrote:
  On 2012-01-19 18:53, Marcelo Tosatti wrote:
  What problems does it cause, and in which scenarios? Can't they be
  fixed?
 
  If the guest compensates for lost ticks, and KVM reinjects them, guest
  time advances faster then it should, to the extent where NTP fails to
  correct it. This is the case with RHEL4.
 
  But for example v2.4 kernel (or Windows with non-acpi HAL) do not
  compensate. In that case you want KVM to reinject.
 
  I don't know of any other way to fix this.
 
  OK, i see. The old unsolved problem of guessing what is being executed.
 
  Then the next question is how and where to control this. Conceptually,
  there should rather be a global switch say compensate for lost ticks of
  periodic timers: yes/no - instead of a per-timer knob. Didn't we
  discussed something like this before?
 
  I don't see the advantage of a global control versus per device
  control (in fact it lowers flexibility).
 
  Usability. Users should not have to care about individual tick-based
  clocks. They care about my OS requires lost ticks compensation, yes or 
  no.
  
  FYI, at the libvirt level we model policy against individual timers, for
  example:
  
clock offset=localtime
  timer name=rtc tickpolicy=catchup track=guest/
  timer name=pit tickpolicy=delay/
/clock
 
 Are the various modes of tickpolicy fully specified somewhere?

There are some (not all that great) docs here:

  http://libvirt.org/formatdomain.html#elementsTime

The meanings of the 4 policies are:

  delay: continue to deliver at the normal rate
catchup: deliver at a higher rate to catch up
  merge: missed ticks are merged into one single tick
discard: all missed ticks are discarded


The original design rationale was here, though beware that some things
changed between this design & the actual implementation libvirt has:

  https://www.redhat.com/archives/libvir-list/2010-March/msg00304.html

Regards,
Daniel


Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Jan Kiszka
On 2012-01-20 12:45, Daniel P. Berrange wrote:

 Are the various modes of tickpolicy fully specified somewhere?
 
 There are some (not all that great) docs here:
 
   http://libvirt.org/formatdomain.html#elementsTime
 
 The meaning of the 4 policies are:
 
   delay: continue to deliver at normal rate

What does this mean? The timer stops ticking until the guest accepts its
ticks again?

 catchup: deliver at higher rate to catchup
   merge: ticks merged into 1 single tick
 discard: all missed ticks are discarded

But those interpretations aren't stated in the docs. That makes it hard
to map them onto individual hypervisors - or to model proper new
hypervisor interfaces accordingly.

 
 
 The original design rationale was here, though beware that some things
 changed between this design & the actual implementation libvirt has:
 
   https://www.redhat.com/archives/libvir-list/2010-March/msg00304.html
 

Given that there is almost no tick compensation in QEMU yet (ignoring
the awful RTC hack for now), this is a good time to establish a useful
generic interface with the advent of the KVM device models.

Jan



Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Paolo Bonzini

On 01/20/2012 12:13 PM, Jan Kiszka wrote:

OK, this sounds like a good option: add per-device control but also
introduce global default. The latter can still be done later on.

The only problem is that we should already come up with the right,
generic control switch template. reinject=on|off, as I did it for now
for the PIT, is definitely not optimal.


What about adding suboptions to -clock (like the driftfix suboption we already have for -rtc)?
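For reference, the -rtc suboption Paolo alludes to already exists in QEMU; a sketch of an invocation (guest.img is a placeholder disk image, and exact suboption spellings may vary by QEMU version):

```shell
# driftfix=slew asks the emulated mc146818 RTC to re-inject missed
# periodic ticks at a higher rate (catchup-style compensation);
# driftfix=none leaves lost ticks uncompensated.
qemu-system-x86_64 -rtc base=utc,driftfix=slew -hda guest.img
```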



Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Daniel P. Berrange
On Fri, Jan 20, 2012 at 01:00:06PM +0100, Jan Kiszka wrote:
  The meaning of the 4 policies are:
  
delay: continue to deliver at normal rate
 
 What does this mean? The timer stops ticking until the guest accepts its
 ticks again?

It means that the hypervisor will not attempt to do any compensation,
so the guest will see delays in its ticks being delivered & gradually
drift over time.

  catchup: deliver at higher rate to catchup
merge: ticks merged into 1 single tick
  discard: all missed ticks are discarded
 
 But those interpretations aren't stated in the docs. That makes it hard
 to map them on individual hypervisors - or model proper new hypervisor
 interfaces accordingly.

That's not a real problem; now that I notice they are missing from the
docs, I can just add them in.


Daniel


Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Jan Kiszka
On 2012-01-20 13:42, Daniel P. Berrange wrote:
   delay: continue to deliver at normal rate

 What does this mean? The timer stops ticking until the guest accepts its
 ticks again?
 
 It means that the hypervisor will not attempt to do any compensation,
 so the guest will see delays in its ticks being delivered  gradually
 drift over time.

Still, is the logic as I described? Otherwise, what is the difference from discard?

 
 catchup: deliver at higher rate to catchup
   merge: ticks merged into 1 single tick
 discard: all missed ticks are discarded

 But those interpretations aren't stated in the docs. That makes it hard
 to map them on individual hypervisors - or model proper new hypervisor
 interfaces accordingly.
 
 That's not a real problem, now I notice they are missing the docs, I
 can just add them in.

TIA, but please be more verbose. The above descriptions only help if
you take real hypervisor implementations as a reference.

Jan



Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Daniel P. Berrange
On Fri, Jan 20, 2012 at 01:51:20PM +0100, Jan Kiszka wrote:
 
 Still, is the logic as I described? Or what is the difference to discard.

With 'discard', the delayed tick will be thrown away. With 'delay', the
delayed tick will still be injected into the guest, though possibly well
after the intended injection time, and there will be no attempt to
compensate by speeding up delivery of later ticks.


Regards,
Daniel


Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Jan Kiszka
On 2012-01-20 13:54, Daniel P. Berrange wrote:
 
 With 'discard', the delayed tick will be thrown away. In 'delay', the
 delayed tick will still be injected to the guest, possibly well after
 the intended injection time though, and there will be no attempt to
 compensate by speeding up delivery of later ticks.

OK, let's see if I got it:

delay   - all lost ticks are replayed in a row once the guest accepts
          them again
catchup - lost ticks are gradually replayed at a higher frequency than
          the original tick rate
merge   - at most one tick is replayed once the guest accepts ticks again
discard - no lost-tick compensation
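The four-way summary above can be sketched as a toy model (illustrative only; the deliver helper and its behavior are a hypothetical abstraction, not any hypervisor's actual code):

```python
# Toy model of the four libvirt tickpolicy modes. A "lost" tick is one
# that fired while the guest could not accept timer interrupts.

def deliver(policy: str, lost: int) -> int:
    """Return how many compensation ticks the guest sees once it
    resumes accepting interrupts, given `lost` missed ticks."""
    if policy == "discard":
        return 0        # missed ticks are simply dropped
    if policy == "merge":
        return 1        # all missed ticks collapse into a single tick
    if policy in ("delay", "catchup"):
        return lost     # every missed tick is eventually replayed
    raise ValueError(f"unknown policy: {policy}")

# delay replays the backlog in a row at the nominal rate; catchup
# replays it at a higher-than-nominal rate so guest wall-clock time
# converges gradually instead of jumping.
for p in ("delay", "catchup", "merge", "discard"):
    print(p, deliver(p, lost=5))
```

Note that delay and catchup deliver the same total tick count; they differ only in the rate at which the backlog is replayed, which this sketch does not model.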

Jan



Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?

2012-01-20 Thread Daniel P. Berrange
On Fri, Jan 20, 2012 at 02:02:03PM +0100, Jan Kiszka wrote:
 
 OK, let's see if I got it:
 
 delay   - all lost ticks are replayed in a row once the guest accepts
   them again
 catchup - lost ticks are gradually replayed at a higher frequency than
   the original tick
 merge   - at most one tick is replayed once the guest accepts it again
 discard - no lost ticks compensation

Yes, I think that is a good understanding.

Daniel