* Michael S. Tsirkin (m...@redhat.com) wrote:
> On Wed, Apr 13, 2016 at 04:24:55PM +0530, Jitendra Kolhe wrote:
> > Can we extend support for post-copy in a different patch set?
>
> If the optimization does not *help* on some paths,
> that's fine. The issue is with adding extra code
> special-casing protocols:
>
> +if (migrate_postcopy_ram()) {
> +
On 3/28/2016 7:41 PM, Eric Blake wrote:
> On 03/27/2016 10:16 PM, Jitendra Kolhe wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages which are released by the guest balloon driver and

On 3/28/2016 4:06 PM, Denis V. Lunev wrote:
> On 03/28/2016 07:16 AM, Jitendra Kolhe wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages which are released by the guest balloon driver
On 29/03/2016 12:47, Jitendra Kolhe wrote:
> > Indeed. It is correct for the main system RAM, but hot-plugged RAM
> > would also have a zero-based section.offset_within_region. You need to
> > add memory_region_get_ram_addr(section.mr), just like the call to
> > balloon_page adds
On 28/03/2016 08:59, Michael S. Tsirkin wrote:
>> > +    qemu_mutex_lock_balloon_bitmap();
>> >      for (;;) {
>> >          size_t offset = 0;
>> >          uint32_t pfn;
>> >          elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
>> >          if (!elem) {
>> > +
On Mon, Mar 28, 2016 at 09:46:05AM +0530, Jitendra Kolhe wrote:
> While measuring live migration performance for a qemu/kvm guest, it
> was observed that qemu doesn’t maintain any intelligence for the
> guest ram pages which are released by the guest balloon driver and
> treats such pages as any other normal guest ram pages. This has a
> direct impact on overall