Re: [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-04-09 Thread Serge Hallyn
Quoting Chris Cormier (ccorm...@gmail.com):
 @serge-hallyn
 I'm using a NetApp filer in c-mode for NFS storage; mount options are: 
 (rw,nosuid,nodev,noatime,hard,nfsvers=3,tcp,intr,rsize=32768,wsize=32768,addr=x.x.x.x).
 
 However, I can reproduce this on a host with or without NFS, using
 local disk, qcow2, or raw images, and the OP was using an FC SAN.
 
 I've tried to reproduce your success by building a new VM with the latest
 precise 12.04.2 plus updates, but I have been unsuccessful. It happens
 immediately after the restore, 100% of the time.
 
 The quickest test I've found is the following; it should aid your 
 reproduction, since it gives an immediate indication.
 - Get and build lmbench
 - run:
 # touch tmpfile
 # for i in null read write open ; do sleep 1; ./lat_syscall -N 1 $i tmpfile; done
 - the latencies for read/write generally increase up to 2x, and 
 open/close by about 30%, post-restore.

I did this and got the attached results (.1 before save, .2 after
restore).  This is with a precise guest on a precise host over NFS (from
a raring NFS server).

results.1 (before save)
Simple syscall: 0.1468 microseconds
Simple read: 0.3555 microseconds
Simple write: 0.2785 microseconds
Simple open/close: 3.3682 microseconds
serge@p1:~/lmbench3$ ./sergetest 
Simple syscall: 0.1466 microseconds
Simple read: 0.3412 microseconds
Simple write: 0.2813 microseconds
Simple open/close: 3.3175 microseconds
serge@p1:~/lmbench3$ ./sergetest 
Simple syscall: 0.1582 microseconds
Simple read: 0.3403 microseconds
Simple write: 0.2871 microseconds
Simple open/close: 2.7587 microseconds
serge@p1:~/lmbench3$ ./sergetest 
Simple syscall: 0.1453 microseconds
Simple read: 0.3371 microseconds
Simple write: 0.2790 microseconds
Simple open/close: 3.3391 microseconds


results.2 (after restore)
Simple syscall: 0.1457 microseconds
Simple read: 0.3370 microseconds
Simple write: 0.2832 microseconds
Simple open/close: 3.1675 microseconds
Simple syscall: 0.1470 microseconds
Simple read: 0.3436 microseconds
Simple write: 0.2812 microseconds
Simple open/close: 2.8002 microseconds
Simple syscall: 0.1452 microseconds
Simple read: 0.3428 microseconds
Simple write: 0.2817 microseconds
Simple open/close: 2.7974 microseconds
Simple syscall: 0.1456 microseconds
Simple read: 0.3722 microseconds
Simple write: 0.2798 microseconds
Simple open/close: 2.7494 microseconds
Simple syscall: 0.1470 microseconds
Simple read: 0.3362 microseconds
Simple write: 0.2830 microseconds
Simple open/close: 2.7640 microseconds
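
So read/write latencies look essentially unchanged here before and after
the restore; I'm not seeing the 2x regression you describe.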

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


Re: [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-04-09 Thread Serge Hallyn
Quoting C Cormier (1100...@bugs.launchpad.net):
 Could you confirm that your .1 tests were on a freshly booted Guest OS?

Yup, it was a fresh boot.

Since the VMs are on shared storage, do you have a box running raring
you could wire up, to test a more recent qemu?
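
(For comparison, the save/restore cycle I'm exercising is the plain
libvirt one, something like the following, with DOMAIN standing in for
the guest's name:

# virsh save DOMAIN /tmp/state.sav
# virsh restore /tmp/state.sav
)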


