On Mon, Mar 26, 2012 at 10:11:43PM +0200, Vadim Rozenfeld wrote:
> On Monday, March 26, 2012 08:54:50 PM Peter Lieven wrote:
> > On 26.03.2012 20:36, Vadim Rozenfeld wrote:
> > > On Monday, March 26, 2012 07:52:49 PM Gleb Natapov wrote:
> > >> On Mon, Mar 26, 2012 at 07:46:03PM +0200, Vadim Rozenfeld wrote:
> > >>> On Monday, March 26, 2012 07:00:32 PM Peter Lieven wrote:
> > >>>> On 22.03.2012 10:38, Vadim Rozenfeld wrote:
> > >>>>> On Thursday, March 22, 2012 10:52:42 AM Peter Lieven wrote:
> > >>>>>> On 22.03.2012 09:48, Vadim Rozenfeld wrote:
> > >>>>>>> On Thursday, March 22, 2012 09:53:45 AM Gleb Natapov wrote:
> > >>>>>>>> On Wed, Mar 21, 2012 at 06:31:02PM +0100, Peter Lieven wrote:
> > >>>>>>>>> On 21.03.2012 12:10, David Cure wrote:
> > >>>>>>>>>>          hello,
> > >>>>>>>>>> 
> > >>>>>>>>>> On Tue, Mar 20, 2012 at 02:38:22PM +0200, Gleb Natapov wrote:
> > >>>>>>>>>>> Try to add <feature policy='disable' name='hypervisor'/> to the
> > >>>>>>>>>>> cpu definition in the XML and check the command line.
> > >>>>>>>>>>> 
> > >>>>>>>>>>  ok, I tried this, but I can't use <cpu model> to map the host
> > >>>>>>>>>>  cpu (my libvirt is 0.9.8), so I use:
> > >>>>>>>>>>      <cpu match='exact'>
> > >>>>>>>>>>        <model>Opteron_G3</model>
> > >>>>>>>>>>        <feature policy='disable' name='hypervisor'/>
> > >>>>>>>>>>      </cpu>
> > >>>>>>>>>>  
> > >>>>>>>>>>  (the physical server uses an Opteron CPU).
> > >>>>>>>>>> 
> > >>>>>>>>>>  The log is here:
> > >>>>>>>>>> http://www.roullier.net/Report/report-3.2-vhost-net-1vcpu-cpu.txt.gz
> > >>>>>>>>>> 
> > >>>>>>>>>>  And now, with only 1 vcpu, the response time is 8.5s, a great
> > >>>>>>>>>> improvement. We will keep this configuration for production and
> > >>>>>>>>>> check the response time when some other users are connected.
> > >>>>>>>>> 
> > >>>>>>>>> please keep in mind that setting -hypervisor, disabling hpet and
> > >>>>>>>>> using only one vcpu makes windows use the tsc as its clocksource.
> > >>>>>>>>> you have to make sure that your vm is not switching between
> > >>>>>>>>> physical sockets on your system and that you have the constant_tsc
> > >>>>>>>>> feature, so that the tsc stays stable between the cores in the same
> > >>>>>>>>> socket. it's also likely that the vm will crash when live migrated.
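
For reference, the constant_tsc requirement can be checked on the host: it
corresponds to the invariant-TSC capability, CPUID leaf 0x80000007, EDX bit 8,
which Linux surfaces as the constant_tsc/nonstop_tsc flags in /proc/cpuinfo.
A minimal, purely illustrative check in C (not part of any patch in this
thread):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* Extended leaf 0x80000007, EDX bit 8: invariant TSC. */
        if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 0x80000007 not available\n");
            return 1;
        }
        printf("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
        return 0;
    }

If the bit is not set, pinning the vcpu to a single core (or at least a single
socket) is probably the only way to keep the guest-visible tsc monotonic.
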
> > >>>>>>>> 
> > >>>>>>>> All true. I asked to try -hypervisor only to verify where we lose
> > >>>>>>>> performance. Since you get a good result with it, frequent access
> > >>>>>>>> to the PM timer is probably the reason. I do not recommend using
> > >>>>>>>> -hypervisor for production!
> > >>>>>>>> 
> > >>>>>>>>> @gleb: do you know what's the state of the in-kernel hyper-v timers?
> > >>>>>>>> 
> > >>>>>>>> Vadim is working on it. I'll let him answer.
> > >>>>>>> 
> > >>>>>>> It would be nice to have synthetic timers supported, but at the
> > >>>>>>> moment I'm only researching this feature.
> > >>>>>> 
> > >>>>>> So it will take months at least?
> > >>>>> 
> > >>>>> I would say weeks.
> > >>>> 
> > >>>> Is there a way we could contribute and help you with this?
> > >>> 
> > >>> Hi Peter,
> > >>> You are welcome to add an appropriate handler.
> > >> 
> > >> I think Vadim refers to this HV MSR:
> > >> http://msdn.microsoft.com/en-us/library/windows/hardware/ff542633%28v=vs.85%29.aspx
> > > 
> > > This one is pretty simple to support. Please see attachments for more
> > > details. I was thinking about synthetic timers:
> > > http://msdn.microsoft.com/en-us/library/windows/hardware/ff542758(v=vs.85).aspx
> > 
> > is this what microsoft qpc (QueryPerformanceCounter) uses as its clocksource under hyper-v?
> Yes, it should be enough for Win7 / W2K8R2.  
To clarify: the thing that Microsoft's QPC uses is what is implemented by
the patch Vadim attached to his previous email. But I believe that an
additional qemu patch is needed for Windows to actually use it.
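
For anyone following along: if I read the MSDN link right, the MSR in question
is the partition reference counter, HV_X64_MSR_TIME_REF_COUNT (0x40000020).
A read returns the number of 100ns units elapsed since the partition (the
guest) was created, which is what Windows can then use as its QPC source.
A rough userspace illustration of the semantics only (this is not Vadim's
patch, and all names below are made up for the example):

    /* build: gcc refcount.c -lrt (older glibc needs -lrt for clock_gettime) */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* hypothetical: host CLOCK_MONOTONIC time at which the guest was started */
    static struct timespec guest_boot;

    static uint64_t timespec_to_ns(const struct timespec *ts)
    {
        return (uint64_t)ts->tv_sec * 1000000000ull + (uint64_t)ts->tv_nsec;
    }

    /* what a read of HV_X64_MSR_TIME_REF_COUNT boils down to:
     * 100ns ticks elapsed since the partition was created */
    static uint64_t hv_time_ref_count(void)
    {
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        return (timespec_to_ns(&now) - timespec_to_ns(&guest_boot)) / 100;
    }

    int main(void)
    {
        clock_gettime(CLOCK_MONOTONIC, &guest_boot); /* pretend the guest boots now */
        printf("reference counter: %llu (100ns units)\n",
               (unsigned long long)hv_time_ref_count());
        return 0;
    }

The additional qemu-side change would be advertising the counter through the
Hyper-V CPUID leaves (the reference-counter access bit in leaf 0x40000003), so
that Windows actually switches its QPC source over to it; at least that is my
reading of the spec.
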

--
                        Gleb.