* Paolo Bonzini (pbonz...@redhat.com) wrote:
>
>
> On 04/03/2016 15:26, Li, Liang Z wrote:
> >> >
> >> > The memory usage will keep increasing due to ever growing caches, etc, so
> >> > you'll be left with very little free memory fairly soon.
> >> >
> > I don't think so.
> >
>
> Roman is right. For example, here I am looking at a 64 GB (physical)
> machine
On Tue, Feb 16, 2016 at 02:09:44PM +, Carlos Palminha wrote:
> This patch set nukes all the dummy crtc mode_fixup implementations.
> (made on top of Daniel topic/drm-misc branch)
>
> Carlos Palminha (16):
> drm: fixes crtc set_mode when crtc mode_fixup is null.
> drm/cirrus: removed
On 04/03/2016 15:26, Li, Liang Z wrote:
>> >
>> > The memory usage will keep increasing due to ever growing caches, etc, so
>> > you'll be left with very little free memory fairly soon.
>> >
> I don't think so.
>
Roman is right. For example, here I am looking at a 64 GB (physical)
machine
> > > > > > Only detecting the unmapped/zero-mapped pages is not enough.
> > > > > > Consider the situation like case 2: it can't achieve the same result.
> > > > >
> > > > > Your case 2 doesn't exist in the real world. If people could
> > > > > stop their main memory consumer in the
> > Maybe I am not clear enough.
> >
> > I mean if we inflate the balloon before live migration, for an 8GB guest, it
> > takes about 5 seconds for the inflating operation to finish.
>
> And these 5 seconds are spent where?
>
The time is spent on allocating the pages and sending the allocated pages
On Fri, Mar 04, 2016 at 02:26:49PM +, Li, Liang Z wrote:
> > Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration
> > optimization
> >
> > On Fri, Mar 04, 2016 at 09:08:44AM +, Li, Liang Z wrote:
> > > > On Fri, Mar 04, 2016 at 01:52:53AM +, Li, Liang Z wrote:
> >
This patch introduces a helper which will return true if we're sure
that the available ring is empty for a specific vq. When we're not
sure, e.g. on vq access failure, return false instead. This could be used
for busy polling code to exit the busy loop.
Signed-off-by: Jason Wang
This patch tries to poll for newly added tx buffers or the socket receive
queue for a while at the end of tx/rx processing. The maximum time
spent on polling is specified through a new kind of vring ioctl.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c| 78
This series tries to add basic busy polling for vhost net. The idea is
simple: at the end of tx/rx processing, busy poll for newly added tx
descriptors and the rx receive socket for a while. The maximum amount of
time (in us) that could be spent on busy polling is specified via ioctl.
Tests were done
This patch introduces a helper which can give a hint about whether
there is work queued in the work list. This could be used by busy
polling code to exit the busy loop.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 7 +++
drivers/vhost/vhost.h | 1 +
2
On Fri, Mar 04, 2016 at 10:11:00AM +, Li, Liang Z wrote:
> > On Fri, Mar 04, 2016 at 09:12:12AM +, Li, Liang Z wrote:
> > > > Although I wonder which is cheaper; that would be fairly expensive
> > > > for the guest, wouldn't it? And you'd somehow have to kick the guest
> > > > before
On Fri, Mar 04, 2016 at 09:08:44AM +, Li, Liang Z wrote:
> > On Fri, Mar 04, 2016 at 01:52:53AM +, Li, Liang Z wrote:
> > > > I wonder if it would be possible to avoid the kernel changes by
> > > > parsing /proc/self/pagemap - if that can be used to detect
> > > > unmapped/zero mapped
Hi Daniel,
On Wed, 17 Feb 2016 14:08:01 +0100
Daniel Vetter wrote:
> On Wed, Feb 17, 2016 at 09:02:44AM +, Carlos Palminha wrote:
> > Thanks Boris.
> >
> > @Daniel, do you want me to resend this patch or will you fix it directly in
> > mode-fixup git branch?
>
> I can
On Fri, Mar 04, 2016 at 09:12:12AM +, Li, Liang Z wrote:
> > Although I wonder which is cheaper; that would be fairly expensive for the
> > guest, wouldn't it? And you'd somehow have to kick the guest before
> > migration to do the ballooning - and how long would you wait for it to
> > finish?
> > > * Liang Li (liang.z...@intel.com) wrote:
> > > > The current QEMU live migration implementation marks all the
> > > > guest's RAM pages as dirty in the ram bulk stage; all these
> > > > pages will be processed, and that takes quite a lot of CPU cycles.
> > > >
> > > > From guest's point
On Fri, Mar 04, 2016 at 09:08:20AM +, Dr. David Alan Gilbert wrote:
> * Roman Kagan (rka...@virtuozzo.com) wrote:
> > On Fri, Mar 04, 2016 at 08:23:09AM +, Li, Liang Z wrote:
> > > The unmapped/zero mapped pages can be detected by parsing
> > > /proc/self/pagemap,
> > > but the free pages
> * Roman Kagan (rka...@virtuozzo.com) wrote:
> > On Fri, Mar 04, 2016 at 08:23:09AM +, Li, Liang Z wrote:
> > > > On Thu, Mar 03, 2016 at 05:46:15PM +, Dr. David Alan Gilbert wrote:
> > > > > * Liang Li (liang.z...@intel.com) wrote:
> > > > > > The current QEMU live migration
> On Fri, Mar 04, 2016 at 01:52:53AM +, Li, Liang Z wrote:
> > > I wonder if it would be possible to avoid the kernel changes by
> > > parsing /proc/self/pagemap - if that can be used to detect
> > > unmapped/zero mapped pages in the guest ram, would it achieve the
> > > same result?
> >
> >