[PATCH] [Autotest PATCH v2] KVM-test: Add a subtest 'qemu_img'

2010-03-31 Thread Yolkfull Chow
This is designed to test all subcommands of 'qemu-img'; however,
so far 'commit' is not implemented.

* For the 'check' subcommand test, it will 'dd' to create a file of the specified
size and see whether it can be checked. It then converts the file to the
supported formats ('qcow2' and 'raw' so far) to see whether any errors appear
after conversion.

* For the 'convert' subcommand test, it will convert from the format specified in
the config file to both 'qcow2' and 'raw', and only check the 'qcow2' result
after conversion.

* For the 'snapshot' subcommand test, it will create two snapshots and list them,
finally deleting them if no errors are found.

* For the 'info' subcommand test, it will check the image format & size according to
the output of the 'info' subcommand on the specified image file.

* For the 'rebase' subcommand test, it will create a first snapshot 'sn1' based on
the original base_img and a second snapshot 'sn2' based on sn1, then rebase sn2
onto base_img.
After the rebase it checks the backing_file of sn2.

The test supports two rebase modes, unsafe mode and safe mode:

Unsafe mode:
With -u, an unsafe mode is enabled that doesn't require the backing files to
exist. It merely changes the backing file reference in the COW image. This is
useful for renaming or moving the backing file. The user is responsible for
making sure that the new backing file has no changes compared to the old one,
or corruption may occur.

Safe mode:
Both the current and the new backing file need to exist, and after the rebase
the COW image is guaranteed to have the same guest-visible content as before.
To achieve this, the old and new backing files are compared and, if necessary,
data is copied from the old backing file into the COW image.
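
As a rough sketch (not the test code itself; the image names and the 10G size
are illustrative), the rebase flow described above boils down to the following
qemu-img calls:

    import commands

    def rebase_flow_sketch(qemu_img="/usr/bin/qemu-img", unsafe=True):
        # sn1 is backed by base.qcow2 and sn2 is backed by sn1; sn2 is then
        # rebased directly onto base.qcow2 and its backing file is verified.
        commands.getstatusoutput("%s create -f qcow2 base.qcow2 10G" % qemu_img)
        commands.getstatusoutput("%s create -b base.qcow2 -f qcow2 sn1.qcow2" % qemu_img)
        commands.getstatusoutput("%s create -b sn1.qcow2 -f qcow2 sn2.qcow2" % qemu_img)
        # '-u' selects unsafe mode; safe mode (no '-u') needs both backing files
        mode = "-u" if unsafe else ""
        s, o = commands.getstatusoutput("%s rebase %s -b base.qcow2 sn2.qcow2"
                                        % (qemu_img, mode))
        if s != 0:
            raise RuntimeError("rebase failed: %s" % o)
        # after the rebase, 'qemu-img info' should report base.qcow2 as the
        # backing file of sn2
        return "base.qcow2" in commands.getoutput("%s info sn2.qcow2" % qemu_img)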

Improvements over v1:
* Added an underscore (_) at the beginning of all the auxiliary function names.

Results in:

# ./scan_results.py 
Test                                          Status   Seconds  Info
----                                          ------   -------  ----
(Result file: ../../results/default/status)
smp2.RHEL.5.4.i386.qemu_img.check             GOOD     132      completed successfully
smp2.RHEL.5.4.i386.qemu_img.create            GOOD     144      completed successfully
smp2.RHEL.5.4.i386.qemu_img.convert.to_qcow2  GOOD     251      completed successfully
smp2.RHEL.5.4.i386.qemu_img.convert.to_raw    GOOD     245      completed successfully
smp2.RHEL.5.4.i386.qemu_img.snapshot          GOOD     140      completed successfully
smp2.RHEL.5.4.i386.qemu_img.commit            GOOD     146      completed successfully
smp2.RHEL.5.4.i386.qemu_img.info              GOOD     133      completed successfully
smp2.RHEL.5.4.i386.qemu_img.rebase            TEST_NA  137      Current kvm user space version does not support 'rebase' subcommand
                                              GOOD     1392
[r...@afu kvm]# 

This shows that only the 'rebase' subtest is not supported currently.
The others run fine on my side.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/qemu_img.py |  280 
 client/tests/kvm/tests_base.cfg.sample |   40 +
 2 files changed, 320 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/qemu_img.py

diff --git a/client/tests/kvm/tests/qemu_img.py 
b/client/tests/kvm/tests/qemu_img.py
new file mode 100644
index 000..7f786c5
--- /dev/null
+++ b/client/tests/kvm/tests/qemu_img.py
@@ -0,0 +1,280 @@
+import re, os, logging, commands
+from autotest_lib.client.common_lib import utils, error
+import kvm_vm, kvm_utils
+
+
+def run_qemu_img(test, params, env):
+    """
+    `qemu-img' functions test:
+    1) Judge what subcommand is going to be tested
+    2) Run subcommand test
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    cmd = kvm_utils.get_path(test.bindir, params.get("qemu_img_binary"))
+    if not os.path.exists(cmd):
+        raise error.TestError("Binary of 'qemu-img' not found")
+    image_format = params.get("image_format")
+    image_size = params.get("image_size", "10G")
+    image_name = kvm_vm.get_image_filename(params, test.bindir)
+
+    def _check(cmd, img):
+        """
+        Simple 'qemu-img check' function implementation.
+
+        @param cmd: binary of 'qemu_img'
+        @param img: image to be checked
+        """
+        cmd += " check %s" % img
+        logging.info("Checking image '%s'..." % img)
+        o = commands.getoutput(cmd)
+        if "does not support checks" in o or "No errors" in o:
+            return (True, "")
+        return (False, o)
+
+    # Subcommand 'qemu-img check' test
+    # This test will 'dd' to create a file of the specified size and check it.
+    # Then convert it to each supported image_format in a loop and check again.
+    def check_test(cmd):
+        test_image = params.get

[PATCH] KVM test: Put os.kill in kvm_stat into try block to avoid traceback

2010-03-29 Thread Yolkfull Chow
Sometimes it tries to kill an already terminated process, which can cause
a traceback. This patch fixes the problem.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/profilers/kvm_stat/kvm_stat.py |5 -
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/client/profilers/kvm_stat/kvm_stat.py 
b/client/profilers/kvm_stat/kvm_stat.py
index 7568a03..59d6ff6 100644
--- a/client/profilers/kvm_stat/kvm_stat.py
+++ b/client/profilers/kvm_stat/kvm_stat.py
@@ -51,7 +51,10 @@ class kvm_stat(profiler.profiler):
 
         @param test: Autotest test on which this profiler will operate on.
         """
-        os.kill(self.pid, 15)
+        try:
+            os.kill(self.pid, 15)
+        except OSError:
+            pass
 
 
     def report(self, test):
-- 
1.7.0.1



Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-03-29 Thread Yolkfull Chow
On Wed, Mar 17, 2010 at 10:38:58AM -0300, Lucas Meneghel Rodrigues wrote:
 Copying Michael on the message.
 
 Hi Yolkfull, I have reviewed this patch and I have some comments to
 make on it, similar to the ones I made on an earlier version of it:
 
 One of the things that I noticed is that this patch doesn't work very
 well out of the box:
 
 [...@freedom kvm]$ ./scan_results.py
 Test                                         Status   Seconds  Info
 ----                                         ------   -------  ----
 (Result file: ../../results/default/status)
 smp2.Fedora.11.64.qemu_img.check             GOOD     47       completed successfully
 smp2.Fedora.11.64.qemu_img.create            GOOD     44       completed successfully
 smp2.Fedora.11.64.qemu_img.convert.to_qcow2  FAIL     45       Image converted failed; Command: /usr/bin/qemu-img convert -f qcow2 -O qcow2 /tmp/kvm_autotest_root/images/fc11-64.qcow2 /tmp/kvm_autotest_root/images/fc11-64.qcow2.converted_qcow2;Output is: qemu-img: Could not open '/tmp/kvm_autotest_root/images/fc11-64.qcow2'
 smp2.Fedora.11.64.qemu_img.convert.to_raw    FAIL     46       Image converted failed; Command: /usr/bin/qemu-img convert -f qcow2 -O raw /tmp/kvm_autotest_root/images/fc11-64.qcow2 /tmp/kvm_autotest_root/images/fc11-64.qcow2.converted_raw;Output is: qemu-img: Could not open '/tmp/kvm_autotest_root/images/fc11-64.qcow2'
 smp2.Fedora.11.64.qemu_img.snapshot          FAIL     44       Create snapshot failed via command: /usr/bin/qemu-img snapshot -c snapshot0 /tmp/kvm_autotest_root/images/fc11-64.qcow2;Output is: qemu-img: Could not open '/tmp/kvm_autotest_root/images/fc11-64.qcow2'
 smp2.Fedora.11.64.qemu_img.commit            GOOD     44       completed successfully
 smp2.Fedora.11.64.qemu_img.info              FAIL     44       Unhandled str: Unhandled TypeError: argument of type 'NoneType' is not iterable
 smp2.Fedora.11.64.qemu_img.rebase            TEST_NA  43       Current kvm user space version does not support 'rebase' subcommand
                                              GOOD     412
 
 We need to fix that before upstream inclusion.

Hi Lucas, did you run the test on a Fedora box or some other distro? I ran this
test on my Fedora 13 box several times, and it worked fine:

# ./scan_results.py 
Test                                          Status   Seconds  Info
----                                          ------   -------  ----
(Result file: ../../results/default/status)
smp2.RHEL.5.4.i386.qemu_img.check             GOOD     132      completed successfully
smp2.RHEL.5.4.i386.qemu_img.create            GOOD     144      completed successfully
smp2.RHEL.5.4.i386.qemu_img.convert.to_qcow2  GOOD     251      completed successfully
smp2.RHEL.5.4.i386.qemu_img.convert.to_raw    GOOD     245      completed successfully
smp2.RHEL.5.4.i386.qemu_img.snapshot          GOOD     140      completed successfully
smp2.RHEL.5.4.i386.qemu_img.commit            GOOD     146      completed successfully
smp2.RHEL.5.4.i386.qemu_img.info              GOOD     133      completed successfully
smp2.RHEL.5.4.i386.qemu_img.rebase            TEST_NA  137      Current kvm user space version does not support 'rebase' subcommand
                                              GOOD     1392
[r...@afu kvm]# 

It's weird that some cases failed...
Please test again with the new patch I will send later.

 
 Also, one thing that I've noticed is that this test doesn't depend of
 any other variants, so we don't need to repeat it to every combination
 of guest and qemu command line options. Michael, does it occur to you
 a way to get this test out of the variants block, so it gets executed
 only once per job and not every combination of guest and other qemu
 options?

Lucas and Michael, maybe we could add a parameter, say 'ignore_vm_config = yes',
to the config file which lets a test ignore all configuration combinations.
Another, uglier method is adding the following block to the config file:

---
qemu_img:
    only ide
    only qcow2
    only up
    only ...
    (use 'only' to filter out the unwanted configuration combinations)
---

But I don't think it's a good idea. What do you think?

 
 On Fri, Jan 29, 2010 at 4:00 AM, Yolkfull Chow yz...@redhat.com wrote:
  This is designed to test all subcommands of 'qemu-img' however
  so far 'commit' is not implemented.
 
  * For 'check' subcommand test, it will 'dd' to create a file with specified
  size and see whether it's supported to be checked. Then convert it to be
  supported formats ('qcow2' and 'raw' so far) to see whether there's error
  after convertion

[PATCH] KVM-test: SR-IOV: Fix a bug that wrongly check VFs count

2010-03-10 Thread Yolkfull Chow
The parameter 'devices_requested' is unrelated to the driver option 'max_vfs'
of 'igb'.

The 82576 NIC has two network interfaces, and each can be virtualized into
up to 7 virtual functions; therefore we multiply the value of driver_option
'max_vfs' by two to get the total number of VFs.
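
For illustration (the driver_option value below is hypothetical), the expected
VF count is derived like this:

    import re
    driver_option = "max_vfs=7"                            # hypothetical config value
    max_vfs = int(re.findall(r"(\d+)", driver_option)[0])
    expected_count = max_vfs * 2                           # two ports per 82576 card => 14 VFs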

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_utils.py |   19 +--
 1 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 4565dc1..1813ed1 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -1012,17 +1012,22 @@ class PciAssignable(object):
         """
         Get VFs count number according to lspci.
         """
+        # FIXME: Need to think out a method of identifying which
+        # 'virtual function' belongs to which physical card, considering
+        # that the host may have more than one 82576 card. PCI_ID?
         cmd = "lspci | grep 'Virtual Function' | wc -l"
-        # For each VF we'll see 2 prints of 'Virtual Function', so let's
-        # divide the result per 2
-        return int(commands.getoutput(cmd)) / 2
+        return int(commands.getoutput(cmd))
 
 
     def check_vfs_count(self):
         """
         Check VFs count number according to the parameter driver_options.
         """
-        return (self.get_vfs_count == self.devices_requested)
+        # Network card 82576 has two network interfaces and each can be
+        # virtualized up to 7 virtual functions, therefore we multiply
+        # two for the value of driver_option 'max_vfs'.
+        expected_count = int((re.findall("(\d)", self.driver_option)[0])) * 2
+        return (self.get_vfs_count == expected_count)
 
 
     def is_binded_to_stub(self, full_id):
@@ -1054,15 +1059,17 @@ class PciAssignable(object):
             elif not self.check_vfs_count():
                 os.system("modprobe -r %s" % self.driver)
                 re_probe = True
+            else:
+                return True
 
             # Re-probe driver with proper number of VFs
             if re_probe:
                 cmd = "modprobe %s %s" % (self.driver, self.driver_option)
+                logging.info("Loading the driver '%s' with option '%s'" %
+                             (self.driver, self.driver_option))
                 s, o = commands.getstatusoutput(cmd)
                 if s:
                     return False
-                if not self.check_vfs_count():
-                    return False
                 return True
 
 
-- 
1.7.0.1



Re: How to disable KVM start at boot

2010-02-25 Thread Yolkfull Chow
On Thu, Feb 25, 2010 at 05:23:15PM +0800, sati...@pacific.net.hk wrote:
 Quoting Yolkfull Chow yz...@redhat.com:
 
 
 $ lsmod | grep kvm
 kvm_amd38452  0
 kvm   163952  1 kvm_amd
 
 
 # chkconfig --level 35 kvm off
 error reading information on service kvm: No such file or directory
 
 
 I tried to disable qemu on Gnome;
 System -> Administration -> Services -> highlight qemu -> Disable
 
 (kvm is NOT there)
 
 
 Next boot
 
 $ lsmod | grep kvm
 kvm_amd38452  0
 kvm   163952  1 kvm_amd
 
 still there
 
 
 If you want the kvm-related kernel modules not to be loaded at the next boot,
 you can add them to the blacklist: /etc/modprobe.d/blacklist.conf
 
 
 Hi Yolkfull,
 
 Thanks for your advice.  Could you please explain in more detail.
 
 $ cat /etc/modprobe.d/blacklist.conf
 # watchdog drivers
 blacklist i8xx_tco
 
 # framebuffer drivers
 blacklist aty128fb
 blacklist atyfb
 blacklist radeonfb
 blacklist i810fb
 blacklist cirrusfb
 blacklist intelfb
 blacklist kyrofb
 blacklist i2c-matroxfb
 blacklist hgafb
 blacklist nvidiafb
 blacklist rivafb
 blacklist savagefb
 blacklist sstfb
 blacklist neofb
 blacklist tridentfb
 blacklist tdfxfb
 blacklist virgefb
 blacklist vga16fb
 
 # ISDN - see bugs 154799, 159068
 blacklist hisax
 blacklist hisax_fcpcipnp
 
 # sound drivers
 blacklist snd-pcsp
 
 
 What shall I add on this config file?
 
 How can I start KVM after booting if needed.  TIA

Hi,

You can comment out all the content of this file: /etc/sysconfig/modules/kvm.modules,
and run 'modprobe kvm && modprobe kvm_amd' (or kvm_intel on Intel hosts) if you need KVM again.
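
For the blacklist question quoted above, the lines to append to
/etc/modprobe.d/blacklist.conf would presumably just name the modules shown by
lsmod, e.g. on this AMD host:

    blacklist kvm_amd
    blacklist kvm

(kvm_intel instead of kvm_amd on Intel hosts.) Note that blacklisting only
prevents automatic loading; anything that modprobes the modules explicitly,
such as the kvm.modules file above, still needs to be commented out.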

Cheers,

 
 
 B.R.
 Stephen L
 


Re: How to disable KVM start at boot

2010-02-24 Thread Yolkfull Chow
On Thu, Feb 25, 2010 at 12:25:07PM +0800, sati...@pacific.net.hk wrote:
 Quoting Hao, Xudong xudong@intel.com:
 
 That's right.
 I run it on RHEL system, F12 should be same, you can have a try.
 
 Hi Hao,
 
 $ lsmod | grep kvm
 kvm_amd38452  0
 kvm   163952  1 kvm_amd
 
 
 # chkconfig --level 35 kvm off
 error reading information on service kvm: No such file or directory
 
 
 I tried to disable qemu on Gnome;
 System -> Administration -> Services -> highlight qemu -> Disable
 
 (kvm is NOT there)
 
 
 Next boot
 
 $ lsmod | grep kvm
 kvm_amd38452  0
 kvm   163952  1 kvm_amd
 
 still there

If you want the kvm-related kernel modules not to be loaded at the next boot,
you can add them to the blacklist: /etc/modprobe.d/blacklist.conf

 
 
 B.R.
 Stephen L
 
 
 


[PATCH] KVM-test: SR-IOV: fix a bug that misplaced parameters location

2010-02-08 Thread Yolkfull Chow
Swap the two parameters of utils.open_write_close(), which were misplaced.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_utils.py |5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 08af99b..cdf2a00 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -1089,15 +1089,14 @@ class PciAssignable(object):
 
             for content, file in info_write_to_files:
                 try:
-                    utils.open_write_close(content, file)
+                    utils.open_write_close(file, content)
                 except IOError:
                     logging.debug("Failed to write %s to file %s" %
                                   (content, file))
                     continue
 
             if not self.is_binded_to_stub(full_id):
-                logging.error("Binding device %s to stub failed" %
-                              pci_id)
+                logging.error("Binding device %s to stub failed" % pci_id)
                 continue
         else:
             logging.debug("Device %s already binded to stub" % pci_id)
-- 
1.6.6



[PATCH] KVM-test: Log the output of failed executed command 'pre/post-command'

2010-02-03 Thread Yolkfull Chow
Sometimes we need the output of a failed 'pre/post-command' when raising
error.TestError. For example, when using a post-command to check qcow2 images,
the test currently just reports that the command failed rather than showing
'%d errors were found on the image'.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_preprocessing.py |6 --
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/client/tests/kvm/kvm_preprocessing.py 
b/client/tests/kvm/kvm_preprocessing.py
index 4e45c76..53fa2cb 100644
--- a/client/tests/kvm/kvm_preprocessing.py
+++ b/client/tests/kvm/kvm_preprocessing.py
@@ -148,9 +148,11 @@ def process_command(test, params, env, command, command_timeout,
                              logging.debug, "(command) ",
                              timeout=command_timeout)
     if status != 0:
-        logging.warn("Custom processing command failed: '%s'" % command)
+        logging.warn("Custom processing command failed: '%s'; Output is: %s" %
+                     (command, output))
         if not command_noncritical:
-            raise error.TestError("Custom processing command failed")
+            raise error.TestError("Custom processing command failed: %s" %
+                                  output)
 
 
 def process(test, params, env, image_func, vm_func):
-- 
1.6.6



Re: [KVM-AUTOTEST PATCH 1/1] KVM test: kvm_vm.py: shorten VM.destroy()

2010-02-03 Thread Yolkfull Chow
On Wed, Feb 03, 2010 at 01:21:12PM +0200, Michael Goldish wrote:
 Call self.pci_assignable.release_devs() in the finally block.

Looks good to me. Thanks, Michael, for this cleanup.

 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm_vm.py |   11 ++-
  1 files changed, 2 insertions(+), 9 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
 index 6731927..db903a0 100755
 --- a/client/tests/kvm/kvm_vm.py
 +++ b/client/tests/kvm/kvm_vm.py
 @@ -598,8 +598,6 @@ class VM:
  # Is it already dead?
  if self.is_dead():
   logging.debug("VM is already down")
 -if self.pci_assignable:
 -self.pci_assignable.release_devs()
  return
  
   logging.debug("Destroying VM with PID %d..." %
 @@ -620,9 +618,6 @@ class VM:
  return
  finally:
  session.close()
 -if self.pci_assignable:
 -self.pci_assignable.release_devs()
 -
  
  # Try to destroy with a monitor command
   logging.debug("Trying to kill VM with monitor command...")
 @@ -632,8 +627,6 @@ class VM:
  # Wait for the VM to be really dead
  if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
   logging.debug("VM is down")
 -if self.pci_assignable:
 -self.pci_assignable.release_devs()
  return
  
  # If the VM isn't dead yet...
 @@ -643,13 +636,13 @@ class VM:
  # Wait for the VM to be really dead
  if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
   logging.debug("VM is down")
 -if self.pci_assignable:
 -self.pci_assignable.release_devs()
  return
  
   logging.error("Process %s is a zombie!" % self.process.get_pid())
  
  finally:
 +if self.pci_assignable:
 +self.pci_assignable.release_devs()
  if self.process:
  self.process.close()
  try:
 -- 
 1.5.4.1
 


Re: [Autotest] [RFC] KVM test: Ship rss.exe and finish.exe binaries with KVM test

2010-02-02 Thread Yolkfull Chow
On Tue, Feb 02, 2010 at 09:48:34AM -0200, Lucas Meneghel Rodrigues wrote:
 Hi folks:
 
 We're on an effort of streamlining the KVM test experience, by choosing
 sane defaults and helper scripts that can overcome the initial barrier
 with getting the KVM test running. On one of the conversations I've had
 today, we came up with the idea of shipping the compiled windows
 programs rss.exe and finish.exe, needed for windows hosts testing.
 
 Even though rss.exe and finish.exe can be compiled in a fairly
 straightforward way using the awesome cross compiling environment with
 mingw, there are some obvious limitations to it:
 
 1) The cross compiling environment is only available for fedora >= 11.
 No other distros I know have it.
 
 2) Sometimes it might take time for the user to realize he/she has to
 compile the source code under unattended/ folder, and how to do it.
 
 That person would take a couple of failed attempts scratching his/her
 head thinking what the heck is this deps/finish.exe they're talking
 about?. Surely documentation can help, and I am looking at making the
 documentation on how to do it more easily discoverable.
 
 That said, shipping the binaries would make the life of those people
 easier, and anyway the binaries work pretty well across all versions of
 windows from winxp to win7, they are self contained, with no external
 dependencies (they all use the standard win32 API).
 
 3) That said we also need a script that can build the entire
 winutils.iso without making the user to spend way too much time figuring
 out how to do it. I want to work on such a script on the next days.
 
 So, what are your opinions? Should we ship the binaries or pursue a
 script that can build those for the user as soon as the (yet to be
 integrated) get_started.py script runs? Remember that the latter might
 mean users of RHEL <= 5.X and debian-like distros will be left out in the cold.
 
 Looking forward hearing your input,
 
 Lucas

Hi Lucas,

I believe it's a straightforward way for a kvm-autotest newbie to get started
quickly.

I have compiled the source code twice, and each time I had to find the email
that has the exact command and options.

Thanks for working on this. :-)

 


Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-28 Thread Yolkfull Chow
On Tue, Jan 26, 2010 at 03:11:34PM -0200, Lucas Meneghel Rodrigues wrote:
 On Tue, 2010-01-26 at 11:25 +0800, Yolkfull Chow wrote:
  This is designed to test all subcommands of 'qemu-img' however
  so far 'commit' is not implemented.
 
 Hi Yolkful, this is very good! Seeing this test made me think about that
 stand alone autotest module we commited a while ago, that does
 qemu_iotests testsuite on the host.
 
 Perhaps we could 'port' this module to the kvm test, since it is more
 convenient to execute it inside a kvm test job (in a job where we test
 more than 2 build types, for example, we need to execute qemu_img and
 qemu_io_tests for every qemu-img built).
 
 Could you look at implementing this?
 
  * For 'check' subcommand test, it will 'dd' to create a file with specified
  size and see whether it's supported to be checked. Then convert it to be
  supported formats (qcow2 and raw so far) to see whether there's error
  after convertion.
  
  * For 'convert' subcommand test, it will convert both to 'qcow2' and 'raw' 
  from
  the format specified in config file. And only check 'qcow2' after 
  convertion.
  
  * For 'snapshot' subcommand test, it will create two snapshots and list 
  them.
  Finally delete them if no errors found.
  
  * For 'info' subcommand test, it simply get output from specified image 
  file.
  
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/tests/qemu_img.py |  155 
  
   client/tests/kvm/tests_base.cfg.sample |   36 
   2 files changed, 191 insertions(+), 0 deletions(-)
   create mode 100644 client/tests/kvm/tests/qemu_img.py
  
  diff --git a/client/tests/kvm/tests/qemu_img.py 
  b/client/tests/kvm/tests/qemu_img.py
  new file mode 100644
  index 000..1ae04f0
  --- /dev/null
  +++ b/client/tests/kvm/tests/qemu_img.py
  @@ -0,0 +1,155 @@
  +import os, logging, commands
  +from autotest_lib.client.common_lib import error
  +import kvm_vm
  +
  +
  +def run_qemu_img(test, params, env):
  +
  +`qemu-img' functions test:
  +1) Judge what subcommand is going to be tested
  +2) Run subcommand test
  +
  +@param test: kvm test object
  +@param params: Dictionary with the test parameters
  +@param env: Dictionary with test environment.
  +
  +cmd = params.get(qemu_img_binary)
 
 It is a good idea to verify if cmd above resolves to an absolute path,
 to avoid problems. If it doesn't resolve, verify if there's the symbolic
 link under kvm test dir pointing to qemu-img, and if it does exist, make
 sure it points to a valid file (ie, symlink is not broken).
 
  +subcommand = params.get(subcommand)
  +image_format = params.get(image_format)
  +image_name = kvm_vm.get_image_filename(params, test.bindir)
  +
  +def check(img):
  +global cmd
  +cmd +=  check %s % img
  +logging.info(Checking image '%s'... % img)
  +s, o = commands.getstatusoutput(cmd)
  +if not (s == 0 or does not support checks in o):
  +return (False, o)
  +return (True, )
 
 Please use utils.system_output here instead of the equivalent commands
 API on the above code. This comment applies to all further uses of
 commands.[function].
 
  +
  +# Subcommand 'qemu-img check' test
  +# This tests will 'dd' to create a specified size file, and check it.
  +# Then convert it to supported image_format in each loop and check 
  again.
  +def check_test():
  +size = params.get(dd_image_size)
  +test_image = params.get(dd_image_name)
  +create_image_cmd = params.get(create_image_cmd)
  +create_image_cmd = create_image_cmd % (test_image, size)
  +s, o = commands.getstatusoutput(create_image_cmd)
  +if s != 0:
  +raise error.TestError(Failed command: %s; Output is: %s %
  + (create_image_cmd, o))
  +s, o = check(test_image)
  +if not s:
  +raise error.TestFail(Failed to check image '%s' with error: 
  %s %
  +  (test_image, 
  o))
  +for fmt in params.get(supported_image_formats).split():
  +output_image = test_image + .%s % fmt
  +convert(fmt, test_image, output_image)
  +s, o = check(output_image)
  +if not s:
  +raise error.TestFail(Check image '%s' got error: %s %
  + (output_image, o))
  +commands.getoutput(rm -f %s % output_image)
  +commands.getoutput(rm -f %s % test_image)
  +#Subcommand 'qemu-img create' test
  +def create_test():
  +global cmd
 
 I don't like very much this idea of using a global variable, instead it
 should be preferrable to use a class and have a class attribute with
 'cmd'. This way it would be safer, since the usage of cmd is
 encapsulated
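
A minimal sketch of the class-based approach suggested here (hypothetical; not
part of any posted patch): keeping the base command as an attribute avoids
mutating a module-level 'cmd':

    import commands

    class QemuImg(object):
        def __init__(self, binary):
            self.binary = binary     # e.g. "/usr/bin/qemu-img"

        def check(self, img):
            # Each call builds a fresh command line, so repeated calls do not
            # keep appending arguments the way 'cmd += ...' on a global does.
            s, o = commands.getstatusoutput("%s check %s" % (self.binary, img))
            if s == 0 or "does not support checks" in o:
                return (True, "")
            return (False, o)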

[Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-28 Thread Yolkfull Chow
This is designed to test all subcommands of 'qemu-img'; however,
so far 'commit' is not implemented.

* For the 'check' subcommand test, it will 'dd' to create a file of the specified
size and see whether it can be checked. It then converts the file to the
supported formats ('qcow2' and 'raw' so far) to see whether any errors appear
after conversion.

* For the 'convert' subcommand test, it will convert from the format specified in
the config file to both 'qcow2' and 'raw', and only check the 'qcow2' result
after conversion.

* For the 'snapshot' subcommand test, it will create two snapshots and list them,
finally deleting them if no errors are found.

* For the 'info' subcommand test, it will check the image format & size according to
the output of the 'info' subcommand on the specified image file.

* For the 'rebase' subcommand test, it will create a first snapshot 'sn1' based on
the original base_img and a second snapshot 'sn2' based on sn1, then rebase sn2
onto base_img.
After the rebase it checks the backing_file of sn2.

The test supports two rebase modes, unsafe mode and safe mode:

Unsafe mode:
With -u, an unsafe mode is enabled that doesn't require the backing files to
exist. It merely changes the backing file reference in the COW image. This is
useful for renaming or moving the backing file. The user is responsible for
making sure that the new backing file has no changes compared to the old one,
or corruption may occur.

Safe mode:
Both the current and the new backing file need to exist, and after the rebase
the COW image is guaranteed to have the same guest-visible content as before.
To achieve this, the old and new backing files are compared and, if necessary,
data is copied from the old backing file into the COW image.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/qemu_img.py |  235 
 client/tests/kvm/tests_base.cfg.sample |   40 ++
 2 files changed, 275 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/qemu_img.py

diff --git a/client/tests/kvm/tests/qemu_img.py 
b/client/tests/kvm/tests/qemu_img.py
new file mode 100644
index 000..e6352a0
--- /dev/null
+++ b/client/tests/kvm/tests/qemu_img.py
@@ -0,0 +1,235 @@
+import re, os, logging, commands
+from autotest_lib.client.common_lib import utils, error
+import kvm_vm, kvm_utils
+
+
+def run_qemu_img(test, params, env):
+    """
+    `qemu-img' functions test:
+    1) Judge what subcommand is going to be tested
+    2) Run subcommand test
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    cmd = kvm_utils.get_path(test.bindir, params.get("qemu_img_binary"))
+    if not os.path.exists(cmd):
+        raise error.TestError("Binary of 'qemu-img' not found")
+    image_format = params.get("image_format")
+    image_size = params.get("image_size", "10G")
+    image_name = kvm_vm.get_image_filename(params, test.bindir)
+
+    def check(cmd, img):
+        cmd += " check %s" % img
+        logging.info("Checking image '%s'..." % img)
+        o = commands.getoutput(cmd)
+        if "does not support checks" in o or "No errors" in o:
+            return (True, "")
+        return (False, o)
+
+    # Subcommand 'qemu-img check' test
+    # This test will 'dd' to create a file of the specified size and check it.
+    # Then convert it to each supported image_format in a loop and check again.
+    def check_test(cmd):
+        test_image = params.get("image_name_dd")
+        create_image_cmd = params.get("create_image_cmd")
+        create_image_cmd = create_image_cmd % test_image
+        s, o = commands.getstatusoutput(create_image_cmd)
+        if s != 0:
+            raise error.TestError("Failed command: %s; Output is: %s" %
+                                  (create_image_cmd, o))
+        s, o = check(cmd, test_image)
+        if not s:
+            raise error.TestFail("Check image '%s' failed with error: %s" %
+                                 (test_image, o))
+        for fmt in params.get("supported_image_formats").split():
+            output_image = test_image + ".%s" % fmt
+            convert(cmd, fmt, test_image, output_image)
+            s, o = check(cmd, output_image)
+            if not s:
+                raise error.TestFail("Check image '%s' got error: %s" %
+                                     (output_image, o))
+            os.remove(output_image)
+        os.remove(test_image)
+
+    def create(cmd, img_name, fmt, img_size=None, base_img=None,
+               base_img_fmt=None, encrypted="no"):
+        cmd += " create"
+        if encrypted == "yes":
+            cmd += " -e"
+        if base_img:
+            cmd += " -b %s" % base_img
+        if base_img_fmt:
+            cmd += " -F %s" % base_img_fmt
+        cmd += " -f %s" % fmt
+        cmd += " %s" % img_name
+        if img_size:
+            cmd += " %s" % img_size
+        s, o = commands.getstatusoutput(cmd)
+        if s != 0

Re: [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-28 Thread Yolkfull Chow
On Fri, Jan 29, 2010 at 03:00:09PM +0800, Yolkfull Chow wrote:
 This is designed to test all subcommands of 'qemu-img' however
 so far 'commit' is not implemented.
 
 * For 'check' subcommand test, it will 'dd' to create a file with specified
 size and see whether it's supported to be checked. Then convert it to be
 supported formats ('qcow2' and 'raw' so far) to see whether there's error
 after convertion.
 
 * For 'convert' subcommand test, it will convert both to 'qcow2' and 'raw' 
 from
 the format specified in config file. And only check 'qcow2' after convertion.
 
 * For 'snapshot' subcommand test, it will create two snapshots and list them.
 Finally delete them if no errors found.
 
 * For 'info' subcommand test, it will check image format  size according to
 output of 'info' subcommand  at specified image file.
 
 * For 'rebase' subcommand test, it will create first snapshot 'sn1' based on 
 original
 base_img, and create second snapshot based on sn1. And then rebase sn2 to 
 base_img.
 After rebase check the baking_file of sn2.
 
 This supports two rebase mode: unsafe mode and safe mode:
 Unsafe mode:
 With -u an unsafe mode is enabled that doesn't require the backing files to 
 exist.
 It merely changes the backing file reference in the COW image. This is useful 
 for
 renaming or moving the backing file. The user is responsible to make sure 
 that the
 new backing file has no changes compared to the old one, or corruption may 
 occur.
 
 Safe Mode:
 Both the current and the new backing file need to exist, and after the 
 rebase, the
 COW image is guaranteed to have the same guest visible content as before.
 To achieve this, old and new backing file are compared and, if necessary, 
 data is
 copied from the old backing file into the COW image.
 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 ---
  client/tests/kvm/tests/qemu_img.py |  235 
 
  client/tests/kvm/tests_base.cfg.sample |   40 ++
  2 files changed, 275 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/kvm/tests/qemu_img.py
 
 diff --git a/client/tests/kvm/tests/qemu_img.py 
 b/client/tests/kvm/tests/qemu_img.py
 new file mode 100644
 index 000..e6352a0
 --- /dev/null
 +++ b/client/tests/kvm/tests/qemu_img.py
 @@ -0,0 +1,235 @@
 +import re, os, logging, commands
 +from autotest_lib.client.common_lib import utils, error
 +import kvm_vm, kvm_utils
 +
 +
 +def run_qemu_img(test, params, env):
 +
 +`qemu-img' functions test:
 +1) Judge what subcommand is going to be tested
 +2) Run subcommand test
 +
 +@param test: kvm test object
 +@param params: Dictionary with the test parameters
 +@param env: Dictionary with test environment.
 +
 +cmd = kvm_utils.get_path(test.bindir, params.get(qemu_img_binary))
 +if not os.path.exists(cmd):
 +raise error.TestError(Binary of 'qemu-img' not found)
 +image_format = params.get(image_format)
 +image_size = params.get(image_size, 10G)
 +image_name = kvm_vm.get_image_filename(params, test.bindir)
 +
 +def check(cmd, img):
 +cmd +=  check %s % img
 +logging.info(Checking image '%s'... % img)
 +o = commands.getoutput(cmd)
 +if does not support checks in o or No errors in o:
 +return (True, )
 +return (False, o)
 +
 +# Subcommand 'qemu-img check' test
 +# This tests will 'dd' to create a specified size file, and check it.
 +# Then convert it to supported image_format in each loop and check again.
 +def check_test(cmd):
 +test_image = params.get(image_name_dd)
 +create_image_cmd = params.get(create_image_cmd)
 +create_image_cmd = create_image_cmd % test_image
 +s, o = commands.getstatusoutput(create_image_cmd)
 +if s != 0:
 +raise error.TestError(Failed command: %s; Output is: %s %
 + (create_image_cmd, o))
 +s, o = check(cmd, test_image)
 +if not s:
 +raise error.TestFail(Check image '%s' failed with error: %s %
 +   (test_image, o))
 +for fmt in params.get(supported_image_formats).split():
 +output_image = test_image + .%s % fmt
 +convert(cmd, fmt, test_image, output_image)
 +s, o = check(cmd, output_image)
 +if not s:
 +raise error.TestFail(Check image '%s' got error: %s %
 + (output_image, o))
 +os.remove(output_image)
 +os.remove(test_image)
 +
 +def create(cmd, img_name, fmt, img_size=None, base_img=None,
 +   base_img_fmt=None, encrypted=no):
 +cmd +=  create
 +if encrypted == yes:
 +cmd +=  -e
 +if base_img:
 +cmd +=  -b %s % base_img
 +if base_img_fmt:
 +cmd +=  -F %s % base_img_fmt

Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-27 Thread Yolkfull Chow
On Tue, Jan 26, 2010 at 03:11:34PM -0200, Lucas Meneghel Rodrigues wrote:
 On Tue, 2010-01-26 at 11:25 +0800, Yolkfull Chow wrote:
  This is designed to test all subcommands of 'qemu-img' however
  so far 'commit' is not implemented.
 
 Hi Yolkful, this is very good! Seeing this test made me think about that
 stand alone autotest module we commited a while ago, that does
 qemu_iotests testsuite on the host.
 
 Perhaps we could 'port' this module to the kvm test, since it is more

Lucas, do you mean the client-side 'kvmtest'?

And thanks for the comments. :)

 convenient to execute it inside a kvm test job (in a job where we test
 more than 2 build types, for example, we need to execute qemu_img and
 qemu_io_tests for every qemu-img built).
 
 Could you look at implementing this?
 
  * For 'check' subcommand test, it will 'dd' to create a file with specified
  size and see whether it's supported to be checked. Then convert it to be
  supported formats (qcow2 and raw so far) to see whether there's error
  after convertion.
  
  * For 'convert' subcommand test, it will convert both to 'qcow2' and 'raw' 
  from
  the format specified in config file. And only check 'qcow2' after 
  convertion.
  
  * For 'snapshot' subcommand test, it will create two snapshots and list 
  them.
  Finally delete them if no errors found.
  
  * For 'info' subcommand test, it simply get output from specified image 
  file.
  
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/tests/qemu_img.py |  155 
  
   client/tests/kvm/tests_base.cfg.sample |   36 
   2 files changed, 191 insertions(+), 0 deletions(-)
   create mode 100644 client/tests/kvm/tests/qemu_img.py
  
  diff --git a/client/tests/kvm/tests/qemu_img.py 
  b/client/tests/kvm/tests/qemu_img.py
  new file mode 100644
  index 000..1ae04f0
  --- /dev/null
  +++ b/client/tests/kvm/tests/qemu_img.py
  @@ -0,0 +1,155 @@
  +import os, logging, commands
  +from autotest_lib.client.common_lib import error
  +import kvm_vm
  +
  +
  +def run_qemu_img(test, params, env):
  +
  +`qemu-img' functions test:
  +1) Judge what subcommand is going to be tested
  +2) Run subcommand test
  +
  +@param test: kvm test object
  +@param params: Dictionary with the test parameters
  +@param env: Dictionary with test environment.
  +
  +cmd = params.get(qemu_img_binary)
 
 It is a good idea to verify if cmd above resolves to an absolute path,
 to avoid problems. If it doesn't resolve, verify if there's the symbolic
 link under kvm test dir pointing to qemu-img, and if it does exist, make
 sure it points to a valid file (ie, symlink is not broken).
 
  +subcommand = params.get(subcommand)
  +image_format = params.get(image_format)
  +image_name = kvm_vm.get_image_filename(params, test.bindir)
  +
  +def check(img):
  +global cmd
  +cmd +=  check %s % img
  +logging.info(Checking image '%s'... % img)
  +s, o = commands.getstatusoutput(cmd)
  +if not (s == 0 or does not support checks in o):
  +return (False, o)
  +return (True, )
 
 Please use utils.system_output here instead of the equivalent commands
 API on the above code. This comment applies to all further uses of
 commands.[function].
 
  +
  +# Subcommand 'qemu-img check' test
  +# This tests will 'dd' to create a specified size file, and check it.
  +# Then convert it to supported image_format in each loop and check 
  again.
  +def check_test():
  +size = params.get(dd_image_size)
  +test_image = params.get(dd_image_name)
  +create_image_cmd = params.get(create_image_cmd)
  +create_image_cmd = create_image_cmd % (test_image, size)
  +s, o = commands.getstatusoutput(create_image_cmd)
  +if s != 0:
  +raise error.TestError(Failed command: %s; Output is: %s %
  + (create_image_cmd, o))
  +s, o = check(test_image)
  +if not s:
  +raise error.TestFail(Failed to check image '%s' with error: 
  %s %
  +  (test_image, 
  o))
  +for fmt in params.get(supported_image_formats).split():
  +output_image = test_image + .%s % fmt
  +convert(fmt, test_image, output_image)
  +s, o = check(output_image)
  +if not s:
  +raise error.TestFail(Check image '%s' got error: %s %
  + (output_image, o))
  +commands.getoutput(rm -f %s % output_image)
  +commands.getoutput(rm -f %s % test_image)
  +#Subcommand 'qemu-img create' test
  +def create_test():
  +global cmd
 
 I don't like very much this idea of using a global variable, instead it
 should be preferrable to use a class and have a class attribute with
 'cmd

Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-27 Thread Yolkfull Chow
On Wed, Jan 27, 2010 at 07:37:46AM -0500, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  On Tue, Jan 26, 2010 at 03:11:34PM -0200, Lucas Meneghel Rodrigues
  wrote:
   On Tue, 2010-01-26 at 11:25 +0800, Yolkfull Chow wrote:
This is designed to test all subcommands of 'qemu-img' however
so far 'commit' is not implemented.
   
   Hi Yolkful, this is very good! Seeing this test made me think about
  that
   stand alone autotest module we commited a while ago, that does
   qemu_iotests testsuite on the host.
   
   Perhaps we could 'port' this module to the kvm test, since it is
  more
  
  Lucas, do you mean the client-side 'kvmtest' ?
  
  And thanks for comments. :)
  
   convenient to execute it inside a kvm test job (in a job where we
  test
   more than 2 build types, for example, we need to execute qemu_img
  and
   qemu_io_tests for every qemu-img built).
   
   Could you look at implementing this?
   
* For 'check' subcommand test, it will 'dd' to create a file with
  specified
size and see whether it's supported to be checked. Then convert it
  to be
supported formats (qcow2 and raw so far) to see whether there's
  error
after convertion.

* For 'convert' subcommand test, it will convert both to 'qcow2'
  and 'raw' from
the format specified in config file. And only check 'qcow2' after
  convertion.

* For 'snapshot' subcommand test, it will create two snapshots and
  list them.
Finally delete them if no errors found.

* For 'info' subcommand test, it simply get output from specified
  image file.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/qemu_img.py |  155
  
 client/tests/kvm/tests_base.cfg.sample |   36 
 2 files changed, 191 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/qemu_img.py

diff --git a/client/tests/kvm/tests/qemu_img.py
  b/client/tests/kvm/tests/qemu_img.py
new file mode 100644
index 000..1ae04f0
--- /dev/null
+++ b/client/tests/kvm/tests/qemu_img.py
@@ -0,0 +1,155 @@
+import os, logging, commands
+from autotest_lib.client.common_lib import error
+import kvm_vm
+
+
+def run_qemu_img(test, params, env):
+
+`qemu-img' functions test:
+1) Judge what subcommand is going to be tested
+2) Run subcommand test
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+
+cmd = params.get(qemu_img_binary)
   
   It is a good idea to verify if cmd above resolves to an absolute
  path,
   to avoid problems. If it doesn't resolve, verify if there's the
  symbolic
   link under kvm test dir pointing to qemu-img, and if it does exist,
  make
   sure it points to a valid file (ie, symlink is not broken).
 
 This can be done quickly using kvm_utils.get_path() and os.path.exists(),
 like this:
 
  cmd = kvm_utils.get_path(params.get("qemu_img_binary"))
  if not os.path.exists(cmd):
      raise error.TestError("qemu-img binary not found")
 
 kvm_utils.get_path() is the standard way of getting both absolute and
 relative paths, and os.path.exists() checks whether the file exists and
 makes sure it's not a broken symlink.

Yes, thanks for pointing that out.

 
+subcommand = params.get(subcommand)
+image_format = params.get(image_format)
+image_name = kvm_vm.get_image_filename(params, test.bindir)
+
+def check(img):
+global cmd
+cmd +=  check %s % img
+logging.info(Checking image '%s'... % img)
+s, o = commands.getstatusoutput(cmd)
+if not (s == 0 or does not support checks in o):
+return (False, o)
+return (True, )
   
   Please use utils.system_output here instead of the equivalent
  commands
   API on the above code. This comment applies to all further uses of
   commands.[function].
   
+
+# Subcommand 'qemu-img check' test
+# This tests will 'dd' to create a specified size file, and
  check it.
+# Then convert it to supported image_format in each loop and
  check again.
+def check_test():
+size = params.get(dd_image_size)
+test_image = params.get(dd_image_name)
+create_image_cmd = params.get(create_image_cmd)
+create_image_cmd = create_image_cmd % (test_image, size)
+s, o = commands.getstatusoutput(create_image_cmd)
+if s != 0:
+raise error.TestError(Failed command: %s; Output is:
  %s %
+
  (create_image_cmd, o))
+s, o = check(test_image)
+if not s:
+raise error.TestFail(Failed to check image '%s' with
  error: %s

[Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-25 Thread Yolkfull Chow
This is designed to test all subcommands of 'qemu-img'; however,
so far 'commit' is not implemented.

* For the 'check' subcommand test, it will 'dd' to create a file of the specified
size and see whether it can be checked. It then converts the file to the
supported formats (qcow2 and raw so far) to see whether any errors appear
after conversion.

* For the 'convert' subcommand test, it will convert from the format specified in
the config file to both 'qcow2' and 'raw', and only check the 'qcow2' result
after conversion.

* For the 'snapshot' subcommand test, it will create two snapshots and list them,
finally deleting them if no errors are found.

* For the 'info' subcommand test, it simply gets the output of 'info' on the specified image file.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/qemu_img.py |  155 
 client/tests/kvm/tests_base.cfg.sample |   36 
 2 files changed, 191 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/qemu_img.py

diff --git a/client/tests/kvm/tests/qemu_img.py 
b/client/tests/kvm/tests/qemu_img.py
new file mode 100644
index 000..1ae04f0
--- /dev/null
+++ b/client/tests/kvm/tests/qemu_img.py
@@ -0,0 +1,155 @@
+import os, logging, commands
+from autotest_lib.client.common_lib import error
+import kvm_vm
+
+
+def run_qemu_img(test, params, env):
+    """
+    `qemu-img' functions test:
+    1) Judge what subcommand is going to be tested
+    2) Run subcommand test
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    cmd = params.get("qemu_img_binary")
+    subcommand = params.get("subcommand")
+    image_format = params.get("image_format")
+    image_name = kvm_vm.get_image_filename(params, test.bindir)
+
+    def check(img):
+        global cmd
+        cmd += " check %s" % img
+        logging.info("Checking image '%s'..." % img)
+        s, o = commands.getstatusoutput(cmd)
+        if not (s == 0 or "does not support checks" in o):
+            return (False, o)
+        return (True, "")
+
+    # Subcommand 'qemu-img check' test
+    # This test will 'dd' to create a file of the specified size and check it.
+    # Then convert it to each supported image_format in a loop and check again.
+    def check_test():
+        size = params.get("dd_image_size")
+        test_image = params.get("dd_image_name")
+        create_image_cmd = params.get("create_image_cmd")
+        create_image_cmd = create_image_cmd % (test_image, size)
+        s, o = commands.getstatusoutput(create_image_cmd)
+        if s != 0:
+            raise error.TestError("Failed command: %s; Output is: %s" %
+                                  (create_image_cmd, o))
+        s, o = check(test_image)
+        if not s:
+            raise error.TestFail("Failed to check image '%s' with error: %s" %
+                                 (test_image, o))
+        for fmt in params.get("supported_image_formats").split():
+            output_image = test_image + ".%s" % fmt
+            convert(fmt, test_image, output_image)
+            s, o = check(output_image)
+            if not s:
+                raise error.TestFail("Check image '%s' got error: %s" %
+                                     (output_image, o))
+            commands.getoutput("rm -f %s" % output_image)
+        commands.getoutput("rm -f %s" % test_image)
+
+    # Subcommand 'qemu-img create' test
+    def create_test():
+        global cmd
+        cmd += " create"
+        if params.get("encrypted") == "yes":
+            cmd += " -e"
+        if params.get("base_image"):
+            cmd += " -F %s -b %s" % (params.get("base_image_format"),
+                                     params.get("base_image"))
+        format = params.get("image_format")
+        cmd += " -f %s" % format
+        image_name_test = os.path.join(test.bindir,
+                                       params.get("image_name_test")) + '.' + format
+        cmd += " %s %s" % (image_name_test, params.get("image_size_test"))
+        s, o = commands.getstatusoutput(cmd)
+        if s != 0:
+            raise error.TestFail("Create image '%s' failed: %s" %
+                                 (image_name_test, o))
+        commands.getoutput("rm -f %s" % image_name_test)
+
+    def convert(output_format, image_name, output_filename,
+                format=None, compressed="no", encrypted="no"):
+        global cmd
+        cmd += " convert"
+        if compressed == "yes":
+            cmd += " -c"
+        if encrypted == "yes":
+            cmd += " -e"
+        if format:
+            cmd += " -f %s" % image_format
+        cmd += " -O %s" % params.get("dest_image_format")
+        cmd += " %s %s" % (image_name, output_filename)
+        s, o = commands.getstatusoutput(cmd)
+        if s != 0:
+            raise error.TestFail("Image converted failed; Command: %s;"
+                                 "Output is: %s" % (cmd, o))
+
+    # Subcommand 'qemu-img convert' test
+    def

Re: [Autotest] [AUTOTEST PATCH 1/2] KVM-test: Fix a bug that about list slice in scan_results.py

2010-01-24 Thread Yolkfull Chow
On Thu, Dec 31, 2009 at 04:52:20AM -0500, Michael Goldish wrote:
 GOOD/FAIL/ERROR lines are always preceded by START lines, and info_list
 is appended a  when parsing a START line, so it seems unreasonable
 for info_list to be empty when parsing a GOOD/FAIL/ERROR line.
 What file did you parse when you got that exception?  Can you reproduce
 the problem and show me the file that caused it?
 I guess the file was either corrupt, or of an unknown new format.
 If the latter is true, we should study the new format and modify the
 code accordingly.
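
A rough sketch of the invariant Michael describes (the column layout is assumed
from the patch quoted below, not taken from the real scan_results.py):

    # status_text holds the contents of the results/default/status file
    info_list = []
    for line in status_text.splitlines():
        parts = line.split("\t")
        if parts and parts[0].startswith("START"):
            info_list.append("")            # a slot is reserved here ...
        elif (len(parts) >= 6 and parts[3].startswith("timestamp") and
              parts[4].startswith("localtime")):
            info_list[-1] = parts[5]        # ... so this index should already exist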

Hi Michael,

So sorry for the late reply. I just saw this email. Maybe I should consider
changing my email client from 'mutt' to 'Thunderbird', because I often miss
some unread emails.

And I cannot find the file that caused that exception now, but I could reproduce
it at the time. Let's skip this patch for now and look into it if I find it
again in the future.

Thanks for your analysis. :)

 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  If 'info_list' is empty slice operation can result in traceback:
  
  ...
info_list[-1] = parts[5]
IndexError: list assignment index out of range
  
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/scan_results.py |1 +
   1 files changed, 1 insertions(+), 0 deletions(-)
  
 diff --git a/client/tests/kvm/scan_results.py
 b/client/tests/kvm/scan_results.py
 index f7bafa9..1daff2f 100755
 --- a/client/tests/kvm/scan_results.py
 +++ b/client/tests/kvm/scan_results.py
 @@ -45,6 +45,7 @@ def parse_results(text):
          # Found a FAIL/ERROR/GOOD line -- get failure/success info
          elif (len(parts) >= 6 and parts[3].startswith("timestamp") and
                parts[4].startswith("localtime")):
 +            info_list.append("")
              info_list[-1] = parts[5]
  
      return result_list
  -- 
  1.6.6
  


Re: PCI Passthrough Problem

2010-01-21 Thread Yolkfull Chow
On Thu, Jan 21, 2010 at 09:24:36PM -0800, Aaron Clausen wrote:
 I'm trying once again to get PCI passthrough working (KVM 84 on Ubuntu
 9.10), and I'm getting this error :
 
 LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
 /usr/bin/kvm -S -M pc-0.11 -m 4096 -smp 4 -name mailserver -uuid
 76a83471-e94a-3658-fa61-8eceaa74ffc2 -monitor
 unix:/var/run/libvirt/qemu/mailserver.monitor,server,nowait -localtime
 -boot c -drive file=,if=ide,media=cdrom,index=2 -drive
 file=/var/lib/libvirt/images/mailserver.img,if=virtio,index=0,boot=on
 -drive file=/var/lib/libvirt/images/mailserver-2.img,if=virtio,index=1
 -net nic,macaddr=54:52:00:1b:b2:56,vlan=0,model=virtio,name=virtio.0
 -net tap,fd=17,vlan=0,name=tap.0 -serial pty -parallel none -usb
 -usbdevice tablet -vnc 127.0.0.1:0 -k en-us -vga cirrus -pcidevice
 host=0a:01.0
 char device redirected to /dev/pts/0
 get_real_device: /sys/bus/pci/devices/:0a:01.0/config: Permission denied
 init_assigned_device: Error: Couldn't get real device (0a:01.0)!
 Failed to initialize assigned device host=0a:01.0

It seems libvirt has a problem initializing the PCI device; you could manually
unbind the device from the host kernel driver and try the above command again.

To unbind the device, please refer to:

http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
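
For reference, a rough sketch of the manual unbind/bind-to-stub sequence that
page describes (the PCI address and the vendor/device ID pair are placeholders;
take the real values from 'lspci -n -s 0a:01.0' and make sure the pci_stub
module is loaded):

    def bind_to_stub(full_id="0000:0a:01.0", vendor_device="8086 10b9"):
        # Placeholder IDs -- substitute the real ones for your device.
        open("/sys/bus/pci/drivers/pci-stub/new_id", "w").write(vendor_device)
        open("/sys/bus/pci/devices/%s/driver/unbind" % full_id, "w").write(full_id)
        open("/sys/bus/pci/drivers/pci-stub/bind", "w").write(full_id)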

 
 Any thoughts?
 
 -- 
 Aaron Clausen
 mightymartia...@gmail.com


[Autotest PATCH] KVM-test: subtest migrate: Use 'wait_for_login' to log into migrated guest

2010-01-20 Thread Yolkfull Chow
Using 'wait_for' to log into the migrated guest repeats the work of
'wait_for_login', which already exists. We just need to change the name
of 'dest_vm'.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_test_utils.py  |1 +
 client/tests/kvm/tests/migration.py |7 ++-
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/client/tests/kvm/kvm_test_utils.py 
b/client/tests/kvm/kvm_test_utils.py
index 02ec0cf..13af8e1 100644
--- a/client/tests/kvm/kvm_test_utils.py
+++ b/client/tests/kvm/kvm_test_utils.py
@@ -135,6 +135,7 @@ def migrate(vm, env=None):
 
     # Clone the source VM and ask the clone to wait for incoming migration
     dest_vm = vm.clone()
+    dest_vm.name = "migrated_guest"
     dest_vm.create(for_migration=True)
 
     try:
diff --git a/client/tests/kvm/tests/migration.py 
b/client/tests/kvm/tests/migration.py
index b8f171c..b65064b 100644
--- a/client/tests/kvm/tests/migration.py
+++ b/client/tests/kvm/tests/migration.py
@@ -46,11 +46,8 @@ def run_migration(test, params, env):
     dest_vm = kvm_test_utils.migrate(vm, env)
 
     # Log into the guest again
-    logging.info("Logging into guest after migration...")
-    session2 = kvm_utils.wait_for(dest_vm.remote_login, 30, 0, 2)
-    if not session2:
-        raise error.TestFail("Could not log into guest after migration")
-    logging.info("Logged in after migration")
+    session2 = kvm_test_utils.wait_for_login(dest_vm, timeout=30, start=0,
+                                             step=2)
 
     # Make sure the background process is still running
     if session2.get_command_status(check_command, timeout=30) != 0:
-- 
1.6.6



[Autotest PATCH] KVM-test: Fix a bug that pci_assignable type name mismatch

2010-01-18 Thread Yolkfull Chow
The pci_assignable type names are nic_vf/nic_pf in kvm_utils.py, whereas
in kvm_vm.py they are vf/pf. It's weird that my testing passed last week.
Hopefully this did not cause any trouble before the patch was applied.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_utils.py |   10 +-
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 83aff66..df26a77 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -971,12 +971,12 @@ class PciAssignable(object):
     Request PCI assignable devices on host. It will check whether to request
     PF (physical Functions) or VF (Virtual Functions).
     """
-    def __init__(self, type="nic_vf", driver=None, driver_option=None,
+    def __init__(self, type="vf", driver=None, driver_option=None,
                  names=None, devices_requested=None):
         """
         Initialize parameter 'type' which could be:
-        nic_vf: Virtual Functions
-        nic_pf: Physical Function (actual hardware)
+        vf: Virtual Functions
+        pf: Physical Function (actual hardware)
         mixed:  Both includes VFs and PFs
 
         If pass through Physical NIC cards, we need to specify which devices
@@ -1087,9 +1087,9 @@ class PciAssignable(object):
         @param count: count number of PCI devices needed for pass through
         @return: a list of all devices' PCI IDs
         """
-        if self.type == "nic_vf":
+        if self.type == "vf":
             vf_ids = self.get_vf_devs()
-        elif self.type == "nic_pf":
+        elif self.type == "pf":
             vf_ids = self.get_pf_devs()
         elif self.type == "mixed":
             vf_ids = self.get_vf_devs()
-- 
1.5.5.6



Re: [PATCH] KVM test: Add PCI device assignment support

2010-01-12 Thread Yolkfull Chow
On Sun, Dec 27, 2009 at 09:55:56PM -0200, Lucas Meneghel Rodrigues wrote:
 Add support to PCI device assignment on the kvm test. It supports
 both SR-IOV virtual functions and physical NIC card device
 assignment.
 
 Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
 be shared amongst multiple virtual machines while retaining the
 performance benefit of assigning a PCI device to a virtual machine.
 A common example is where a single SR-IOV capable NIC - with perhaps
 only a single physical network port - might be shared with multiple
 virtual machines by assigning a virtual function to each VM.
 
 SR-IOV support is implemented in the kernel. The core implementation
 is contained in the PCI subsystem, but there must also be driver support
 for both the Physical Function (PF) and Virtual Function (VF) devices.
 With an SR-IOV capable device one can allocate VFs from a PF. The VFs
 surface as PCI devices which are backed on the physical PCI device by
 resources (queues, and register sets).
 
 Device support:
 
 In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
 SR-IOV capable device supported. The igb driver has PF support and the
 igbvf has VF support.
 
 In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
 the same vxge driver for the PF as well as the VFs.
 
 In order to configure the test:
 
* For SR-IOV virtual functions passthrough, we could specify the
  module parameter 'max_vfs' in config file.
* For physical NIC card pass through, we should specify the device
  name(s).
 
 4th try: Implemented Yolkfull's suggestion of keeping 'max_vfs' and
 'assignable_devices' as separated parameters. Yolkfull, please test this
 on your environment. Thank you!

Hi Lucas,
Sorry for the late reply. I just tested this patch and found some problems;
please see the comments below:

 
   * Naming is consistent with PCI assignment instead of
 PCI passthrough, as it's a more correct term.
   * No more device database file, as all information about devices
 is stored on an attribute of the VM class (an instance of the
 PciAssignable class), so we don't have to bother dumping this
 info to a file.
   * Code simplified to avoid duplication
 
 As it's a fairly involved feature, the more reviews we get the better.
 
 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 ---
  client/tests/kvm/kvm_utils.py  |  281 
 
  client/tests/kvm/kvm_vm.py |   59 +++
  client/tests/kvm/tests_base.cfg.sample |   20 +++
  3 files changed, 360 insertions(+), 0 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
 index 2bbbe22..59c72a9 100644
 --- a/client/tests/kvm/kvm_utils.py
 +++ b/client/tests/kvm/kvm_utils.py
 @@ -924,3 +924,284 @@ def create_report(report_dir, results_dir):
  reporter = os.path.join(report_dir, 'html_report.py')
  html_file = os.path.join(results_dir, 'results.html')
  os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
 +
 +
 +def get_full_pci_id(pci_id):
 +
 +Get full PCI ID of pci_id.
 +
 +@param pci_id: PCI ID of a device.
 +
 +cmd = lspci -D | awk '/%s/ {print $1}' % pci_id
 +status, full_id = commands.getstatusoutput(cmd)
 +if status != 0:
 +return None
 +return full_id
 +
 +
 +def get_vendor_from_pci_id(pci_id):
 +
 +Check out the device vendor ID according to pci_id.
 +
 +@param pci_id: PCI ID of a device.
 +
 +cmd = lspci -n | awk '/%s/ {print $3}' % pci_id
 +return re.sub(:,  , commands.getoutput(cmd))
 +
 +
 +class PciAssignable(object):
 +
 +Request PCI assignable devices on host. It will check whether to request
 +PF (physical Functions) or VF (Virtual Functions).
 +
 +def __init__(self, type=nic_vf, driver=None, driver_option=None,
 + names=None, devices_requested=None):
 +
 +Initialize parameter 'type' which could be:
 +nic_vf: Virtual Functions
 +nic_pf: Physical Function (actual hardware)
 +mixed:  Both includes VFs and PFs
 +
 +If pass through Physical NIC cards, we need to specify which devices
 +to be assigned, e.g. 'eth1 eth2'.
 +
 +If pass through Virtual Functions, we need to specify how many vfs
 +are going to be assigned, e.g. passthrough_count = 8 and max_vfs in
 +config file.
 +
 +@param type: PCI device type.
 +@param driver: Kernel module for the PCI assignable device.
 +@param driver_option: Module option to specify the maximum number of
 +VFs (eg 'max_vfs=7')
 +@param names: Physical NIC cards correspondent network interfaces,
 +e.g.'eth1 eth2 ...'

Please add a parameter description for 'devices_requested'.

 +
 +self.type = type
 +self.driver = driver
 +self.driver_option = driver_option
 +if names:
 +  

Re: [PATCH] KVM test: Add PCI device assignment support

2010-01-12 Thread Yolkfull Chow
On Tue, Jan 12, 2010 at 04:28:13PM -0200, Lucas Meneghel Rodrigues wrote:
 Add support to PCI device assignment on the kvm test. It supports
 both SR-IOV virtual functions and physical NIC card device
 assignment.
 
 Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
 be shared amongst multiple virtual machines while retaining the
 performance benefit of assigning a PCI device to a virtual machine.
 A common example is where a single SR-IOV capable NIC - with perhaps
 only a single physical network port - might be shared with multiple
 virtual machines by assigning a virtual function to each VM.
 
 SR-IOV support is implemented in the kernel. The core implementation
 is contained in the PCI subsystem, but there must also be driver support
 for both the Physical Function (PF) and Virtual Function (VF) devices.
 With an SR-IOV capable device one can allocate VFs from a PF. The VFs
 surface as PCI devices which are backed on the physical PCI device by
 resources (queues, and register sets).
 
 Device support:
 
 In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
 SR-IOV capable device supported. The igb driver has PF support and the
 igbvf has VF support.
 
 In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
 the same vxge driver for the PF as well as the VFs.
 
 In order to configure the test:
 
   * For SR-IOV virtual functions passthrough, we could specify the
 module parameter 'max_vfs' in config file.
   * For physical NIC card pass through, we should specify the device
 name(s).

Looks good to me.
Lucas, thank you so much for improving this patch. :-)
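
For reference, a minimal usage sketch of the PciAssignable class quoted below;
the parameter names come from its __init__ signature, while the concrete values
(driver name, interface names, counts) are only illustrative:

    # Request two SR-IOV virtual functions from a PF driver (illustrative values)
    pa_vf = PciAssignable(type="nic_vf", driver="igb",
                          driver_option="max_vfs=7", devices_requested=2)

    # Pass through two physical NICs directly (illustrative interface names)
    pa_pf = PciAssignable(type="nic_pf", driver="igb",
                          names="eth1 eth2", devices_requested=2)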

 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 ---
  client/tests/kvm/kvm_utils.py  |  284 
 
  client/tests/kvm/kvm_vm.py |   60 +++
  client/tests/kvm/tests_base.cfg.sample |   22 +++-
  3 files changed, 365 insertions(+), 1 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
 index 2bbbe22..a2d9607 100644
 --- a/client/tests/kvm/kvm_utils.py
 +++ b/client/tests/kvm/kvm_utils.py
 @@ -924,3 +924,287 @@ def create_report(report_dir, results_dir):
  reporter = os.path.join(report_dir, 'html_report.py')
  html_file = os.path.join(results_dir, 'results.html')
  os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
 +
 +
 +def get_full_pci_id(pci_id):
 +
 +Get full PCI ID of pci_id.
 +
 +@param pci_id: PCI ID of a device.
 +
 +cmd = lspci -D | awk '/%s/ {print $1}' % pci_id
 +status, full_id = commands.getstatusoutput(cmd)
 +if status != 0:
 +return None
 +return full_id
 +
 +
 +def get_vendor_from_pci_id(pci_id):
 +
 +Check out the device vendor ID according to pci_id.
 +
 +@param pci_id: PCI ID of a device.
 +
 +cmd = lspci -n | awk '/%s/ {print $3}' % pci_id
 +return re.sub(:,  , commands.getoutput(cmd))
 +
 +
 +class PciAssignable(object):
 +
 +Request PCI assignable devices on host. It will check whether to request
 +PF (physical Functions) or VF (Virtual Functions).
 +
 +def __init__(self, type=nic_vf, driver=None, driver_option=None,
 + names=None, devices_requested=None):
 +
 +Initialize parameter 'type' which could be:
 +nic_vf: Virtual Functions
 +nic_pf: Physical Function (actual hardware)
 +mixed:  Both includes VFs and PFs
 +
 +If pass through Physical NIC cards, we need to specify which devices
 +to be assigned, e.g. 'eth1 eth2'.
 +
 +If pass through Virtual Functions, we need to specify how many vfs
 +are going to be assigned, e.g. passthrough_count = 8 and max_vfs in
 +config file.
 +
 +@param type: PCI device type.
 +@param driver: Kernel module for the PCI assignable device.
 +@param driver_option: Module option to specify the maximum number of
 +VFs (eg 'max_vfs=7')
 +@param names: Physical NIC cards correspondent network interfaces,
 +e.g.'eth1 eth2 ...'
 +@param devices_requested: Number of devices being requested.
 +
 +self.type = type
 +self.driver = driver
 +self.driver_option = driver_option
 +if names:
 +self.name_list = names.split()
 +if devices_requested:
 +self.devices_requested = int(devices_requested)
 +else:
 +self.devices_requested = None
 +
 +
 +def _get_pf_pci_id(self, name, search_str):
 +
 +Get the PF PCI ID according to name.
 +
 +@param name: Name of the PCI device.
 +@param search_str: Search string to be used on lspci.
 +
 +cmd = ethtool -i %s | awk '/bus-info/ {print $2}' % name
 +s, pci_id = commands.getstatusoutput(cmd)
 +if not (s or Cannot get driver information

Re: [AUTOTEST PATCH 2/2] KVM-test: Move two 'remote_login' out of try block in kvm_vm.py

2010-01-12 Thread Yolkfull Chow
On Thu, Dec 31, 2009 at 05:03:21AM -0500, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  If vm.remote_login fails, 'session.close()' can result in an exception
  in the finally clause. This patch fixes the problem.
  
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_vm.py |   51
  +--
   1 files changed, 25 insertions(+), 26 deletions(-)
  
  diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
  index 7229b79..f746331 100755
  --- a/client/tests/kvm/kvm_vm.py
  +++ b/client/tests/kvm/kvm_vm.py
  @@ -827,40 +827,39 @@ class VM:
   
   Get the cpu count of the VM.
   
  +session = self.remote_login()
  +cmd = self.params.get(cpu_chk_cmd)
   try:
  -session = self.remote_login()
  -if session:
  -cmd = self.params.get(cpu_chk_cmd)
  -s, count = session.get_command_status_output(cmd)
  -if s == 0:
  -return int(count)
  +s, count = session.get_command_status_output(cmd)
  +if s == 0:
  +return int(count)
   return None
   finally:
  -session.close()
  +if session:
  +session.close()
 
 If self.remote_login() fails, session will be None, and then the attempted
 call to session.get_command_status_output() will raise an exception which
 will fail the test with a rather ugly reason string.
 
 I think this form is preferable:
 
 session = self.remote_login()
 if not session:
 return None

Reasonable, thanks Michael. :)
Will post the updated patch soon.

 try:
 cmd = ...
 s, count = ...
 if s == 0:
 return int(count)
 return None
 finally:
 session.close()
 
 There's no need for a second 'if session' in the finally block.
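
Spelled out in full, the suggested shape is roughly this (a sketch only,
assuming remote_login() returns None on failure and the session object
provides get_command_status_output() and close()):

    def get_cpu_count(self):
        """Get the CPU count of the VM (structure suggested above)."""
        session = self.remote_login()
        if not session:
            return None
        try:
            cmd = self.params.get("cpu_chk_cmd")
            s, count = session.get_command_status_output(cmd)
            if s == 0:
                return int(count)
            return None
        finally:
            session.close()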
 
   def get_memory_size(self):
   
   Get memory size of the VM.
   
  +session = self.remote_login()
  +cmd = self.params.get(mem_chk_cmd)
   try:
  -session = self.remote_login()
  -if session:
  -cmd = self.params.get(mem_chk_cmd)
  -s, mem_str = session.get_command_status_output(cmd)
  -if s != 0:
  -return None
  -mem = re.findall(([0-9][0-9][0-9]+), mem_str)
  -mem_size = 0
  -for m in mem:
  -mem_size += int(m)
  -if GB in mem_str:
  -mem_size *= 1024
  -elif MB in mem_str:
  -pass
  -else:
  -mem_size /= 1024
  -return int(mem_size)
  -return None
  +s, mem_str = session.get_command_status_output(cmd)
  +if s != 0:
  +return None
  +mem = re.findall(([0-9][0-9][0-9]+), mem_str)
  +mem_size = 0
  +for m in mem:
  +mem_size += int(m)
  +if GB in mem_str:
  +mem_size *= 1024
  +elif MB in mem_str:
  +pass
  +else:
  +mem_size /= 1024
  +return int(mem_size)
   finally:
  -session.close()
  +if session:
  +session.close()
 
 The same applies to this function.
 
  -- 
  1.6.6
  


Re: [Autotest] [AUTOTEST PATCH 2/2] KVM-test: Move two 'remote_login' out of try block in kvm_vm.py

2010-01-12 Thread Yolkfull Chow
On Tue, Jan 12, 2010 at 08:37:25PM -0200, Lucas Meneghel Rodrigues wrote:
 On Thu, Dec 31, 2009 at 8:03 AM, Michael Goldish mgold...@redhat.com wrote:
 
  - Yolkfull Chow yz...@redhat.com wrote:
 
  If vm.remote_login fails, 'session.close()' can result in an exception
  in the finally clause. This patch fixes the problem.
 
 Hi Yolkfull, please implement Michael's comments and re-submit.

Sorry, hadn't seen the reply from Michael. I will post the updated patch soon.

Thanks, Lucas. :)

 
 Cheers,
 
 -- 
 Lucas


[Autotest PATCH 1/2] KVM-test: Move two 'remote_login' out of try block in kvm_vm.py

2010-01-12 Thread Yolkfull Chow
'self.remote_login()' should be outside of the try block.

And as suggested by Michael, we need to fix the problem that, if
self.remote_login() fails, session will be None.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_vm.py |   49 ++-
 1 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 7229b79..ed6d5ad 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -827,13 +827,14 @@ class VM:
 
 Get the cpu count of the VM.
 
+session = self.remote_login()
+if not session:
+return None
 try:
-session = self.remote_login()
-if session:
-cmd = self.params.get(cpu_chk_cmd)
-s, count = session.get_command_status_output(cmd)
-if s == 0:
-return int(count)
+cmd = self.params.get(cpu_chk_cmd)
+s, count = session.get_command_status_output(cmd)
+if s == 0:
+return int(count)
 return None
 finally:
 session.close()
@@ -843,24 +844,24 @@ class VM:
 
 Get memory size of the VM.
 
-try:
-session = self.remote_login()
-if session:
-cmd = self.params.get(mem_chk_cmd)
-s, mem_str = session.get_command_status_output(cmd)
-if s != 0:
-return None
-mem = re.findall(([0-9][0-9][0-9]+), mem_str)
-mem_size = 0
-for m in mem:
-mem_size += int(m)
-if GB in mem_str:
-mem_size *= 1024
-elif MB in mem_str:
-pass
-else:
-mem_size /= 1024
-return int(mem_size)
+session = self.remote_login()
+if not session:
 return None
+try:
+cmd = self.params.get(mem_chk_cmd)
+s, mem_str = session.get_command_status_output(cmd)
+if s != 0:
+return None
+mem = re.findall(([0-9][0-9][0-9]+), mem_str)
+mem_size = 0
+for m in mem:
+mem_size += int(m)
+if GB in mem_str:
+mem_size *= 1024
+elif MB in mem_str:
+pass
+else:
+mem_size /= 1024
+return int(mem_size)
 finally:
 session.close()
-- 
1.6.6



[Autotest PATCH 2/2] KVM-test: subtest guest_s4: Add a check of whether there's enough space left for S4

2010-01-12 Thread Yolkfull Chow
If the disk doesn't have enough space left, the S4 support check will fail.

Also use 'TestNAError' as the error type if the guest really doesn't support S4.
(Thanks to Jason for pointing this out. :)

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/guest_s4.py |   10 +-
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/client/tests/kvm/tests/guest_s4.py 
b/client/tests/kvm/tests/guest_s4.py
index 82572f6..a289148 100644
--- a/client/tests/kvm/tests/guest_s4.py
+++ b/client/tests/kvm/tests/guest_s4.py
@@ -15,11 +15,11 @@ def run_guest_s4(test, params, env):
 session = kvm_test_utils.wait_for_login(vm)
 
 logging.info(Checking whether guest OS supports suspend to disk (S4)...)
-status = session.get_command_status(params.get(check_s4_support_cmd))
-if status is None:
-logging.error(Failed to check if guest OS supports S4)
-elif status != 0:
-raise error.TestFail(Guest OS does not support S4)
+s, o = 
session.get_command_status_output(params.get(check_s4_support_cmd))
+if not enough space in o:
+raise error.TestError(Check S4 support failed: %s % o)
+elif s != 0:
+raise error.TestNAError(Guest OS does not support S4)
 
 logging.info(Waiting until all guest OS services are fully started...)
 time.sleep(float(params.get(services_up_timeout, 30)))
-- 
1.6.6



[Autotest PATCH 1/2] KVM-test: linux_s3 subtest: Tune up timeout for suspend command

2010-01-05 Thread Yolkfull Chow
As suggested by Jason, the timeout value can vary with the number of
guest CPUs specified. This patch fixes the problem.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/linux_s3.py |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/linux_s3.py 
b/client/tests/kvm/tests/linux_s3.py
index 0292757..39f09e4 100644
--- a/client/tests/kvm/tests/linux_s3.py
+++ b/client/tests/kvm/tests/linux_s3.py
@@ -36,7 +36,8 @@ def run_linux_s3(test, params, env):
 logging.info(Putting VM into S3)
 command = chvt %s  echo mem  /sys/power/state  chvt %s % (dst_tty,
  src_tty)
-status = session.get_command_status(command, timeout=120)
+suspend_timeout = 120 + int(params.get(smp)) * 60
+status = session.get_command_status(command, timeout=suspend_timeout)
 if status != 0:
 raise error.TestFail(Suspend to mem failed)
 
-- 
1.6.6



[Autotest PATCH 2/2] KVM-test: guest_s4 subtest: Tune up timeout value for `set_s4_cmd' command

2010-01-05 Thread Yolkfull Chow

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/guest_s4.py |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/guest_s4.py 
b/client/tests/kvm/tests/guest_s4.py
index f08b9d2..82572f6 100644
--- a/client/tests/kvm/tests/guest_s4.py
+++ b/client/tests/kvm/tests/guest_s4.py
@@ -45,7 +45,8 @@ def run_guest_s4(test, params, env):
 session2.sendline(params.get(set_s4_cmd))
 
 # Make sure the VM goes down
-if not kvm_utils.wait_for(vm.is_dead, 240, 2, 2):
+suspend_timeout = 240 + int(params.get(smp)) * 60
+if not kvm_utils.wait_for(vm.is_dead, suspend_timeout, 2, 2):
 raise error.TestFail(VM refuses to go down. Suspend failed.)
 logging.info(VM suspended successfully. Sleeping for a while before 
  resuming it.)
-- 
1.6.6



[Autotest PATCH] KVM-test: Add a subtest image_copy

2010-01-05 Thread Yolkfull Chow
Add an image_copy subtest for convenient KVM functional testing.

The target image will be copied into the linked directory if the 'images'
link exists, and to the directory specified in the config file otherwise.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_utils.py  |   64 
 client/tests/kvm/tests/image_copy.py   |   42 +
 client/tests/kvm/tests_base.cfg.sample |6 +++
 3 files changed, 112 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/image_copy.py

diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 2bbbe22..3944b2b 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -924,3 +924,67 @@ def create_report(report_dir, results_dir):
 reporter = os.path.join(report_dir, 'html_report.py')
 html_file = os.path.join(results_dir, 'results.html')
 os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
+
+
+def is_dir_mounted(source, dest, type, perm):
+
+Check whether `source' is mounted on `dest' with right permission.
+
+@source: mount source
+@dest:   mount point
+@type:   file system type
+
+match_string = %s %s %s %s % (source, dest, type, perm)
+try:
+f = open(/etc/mtab, r)
+mounted = f.read()
+f.close()
+except IOError:
+mounted = commands.getoutput(mount)
+if match_string in mounted: 
+return True
+return False
+
+
+def umount(mount_point):
+
+Umount `mount_point'.
+
+@mount_point: mount point
+
+cmd = umount %s % mount_point
+s, o = commands.getstatusoutput(cmd)
+if s != 0:
+logging.error(Fail to umount: %s % o)
+return False
+return True
+
+
+def mount(src, mount_point, type, perm = rw):
+
+Mount the src into mount_point of the host.
+
+@src: mount source
+@mount_point: mount point
+@type: file system type
+@perm: mount permission
+
+if is_dir_mounted(src, mount_point, type, perm):
+return True
+
+umount(mount_point)
+
+cmd = mount -t %s %s %s -o %s % (type, src, mount_point, perm)
+logging.debug(Issue mount command: %s % cmd)
+s, o = commands.getstatusoutput(cmd)
+if s != 0:
+logging.error(Fail to mount: %s  % o)
+return False
+
+if is_dir_mounted(src, mount_point, type, perm):
+logging.info(Successfully mounted %s % src)
+return True
+else:
+logging.error(Mount verification failed; currently mounted: %s %
+ file('/etc/mtab').read())
+return False
diff --git a/client/tests/kvm/tests/image_copy.py 
b/client/tests/kvm/tests/image_copy.py
new file mode 100644
index 000..800fb90
--- /dev/null
+++ b/client/tests/kvm/tests/image_copy.py
@@ -0,0 +1,42 @@
+import os, logging, commands
+from autotest_lib.client.common_lib import error
+import kvm_utils
+
+def run_image_copy(test, params, env):
+
+Copy guest images from NFS server.
+1) Mount the NFS directory
+2) Check the existence of source image
+3) If existence copy the image from NFS
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+
+mount_dest_dir = params.get(dst_dir,'/mnt/images')
+if not os.path.exists(mount_dest_dir):
+os.mkdir(mount_dest_dir)
+
+src_dir = params.get('nfs_images_dir')
+image_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm/images')
+if not os.path.exists(image_dir):
+image_dir = os.path.dirname(params.get(image_name))
+
+image = os.path.split(params['image_name'])[1]+'.'+params['image_format']
+
+src_path = os.path.join(mount_dest_dir, image)
+dst_path = os.path.join(image_dir, image)
+
+if not kvm_utils.mount(src_dir, mount_dest_dir, nfs, ro):
+raise error.TestError(Fail to mount the %s to %s %
+  (src_dir, mount_dest_dir))
+  
+# Check the existence of source image
+if not os.path.exists(src_path):
+raise error.TestError(Could not found %s in src directory % src_path)
+
+logging.info(Copying image '%s'... % image)
+cmd = cp %s %s % (src_path, dst_path)
+s, o = commands.getstatusoutput(cmd)
+if s != 0:
+raise error.TestFail(Failed to copy image %s: %s % (cmd, o))
diff --git a/client/tests/kvm/tests_base.cfg.sample 
b/client/tests/kvm/tests_base.cfg.sample
index b8f25f4..bdeac19 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -61,6 +61,12 @@ variants:
 floppy = images/floppy.img
 extra_params +=  -boot d
 
+- image_copy:
+type = image_copy
+vms = ''
+# Here specify the NFS directory that contains all images
+nfs_images_dir = 
+
 - setup:install

[Autotest PATCH] KVM-test: Add a subtest image_copy

2010-01-04 Thread Yolkfull Chow
Add an image_copy subtest for convenient KVM functional testing.

The target image will be copied into the linked directory if the 'images'
link exists, and to the directory specified in the config file otherwise.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_utils.py  |   64 
 client/tests/kvm/tests/image_copy.py   |   42 +
 client/tests/kvm/tests_base.cfg.sample |6 +++
 3 files changed, 112 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/image_copy.py

diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 2bbbe22..1e11441 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -924,3 +924,67 @@ def create_report(report_dir, results_dir):
 reporter = os.path.join(report_dir, 'html_report.py')
 html_file = os.path.join(results_dir, 'results.html')
 os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
+
+
+def is_dir_mounted(source, dest, type, perm):
+
+Check whether `source' is mounted on `dest' with right permission.
+
+@source: mount source
+@dest:   mount point
+@type:   file system type
+
+match_string = %s %s %s %s % (source, dest, type, perm)
+try:
+f = open(/etc/mtab, r)
+except IOError:
+pass
+mounted = f.read()
+f.close()
+if match_string in mounted: 
+return True
+return False
+
+
+def umount(mount_point):
+
+Umount `mount_point'.
+
+@mount_point: mount point
+
+cmd = umount %s % mount_point
+s, o = commands.getstatusoutput(cmd)
+if s != 0:
+logging.error(Fail to umount: %s % o)
+return False
+return True
+
+
+def mount(src, mount_point, type, perm = rw):
+
+Mount the src into mount_point of the host.
+
+@src: mount source
+@mount_point: mount point
+@type: file system type
+@perm: mount permission
+
+if is_dir_mounted(src, mount_point, type, perm):
+return True
+
+umount(mount_point)
+
+cmd = mount -t %s %s %s -o %s % (type, src, mount_point, perm)
+logging.debug(Issue mount command: %s % cmd)
+s, o = commands.getstatusoutput(cmd)
+if s != 0:
+logging.error(Fail to mount: %s  % o)
+return False
+
+if is_dir_mounted(src, mount_point, type, perm):
+logging.info(Successfully mounted %s % src)
+return True
+else:
+logging.error(Mount verification failed; currently mounted: %s %
+ file('/etc/mtab').read())
+return False
diff --git a/client/tests/kvm/tests/image_copy.py 
b/client/tests/kvm/tests/image_copy.py
new file mode 100644
index 000..800fb90
--- /dev/null
+++ b/client/tests/kvm/tests/image_copy.py
@@ -0,0 +1,42 @@
+import os, logging, commands
+from autotest_lib.client.common_lib import error
+import kvm_utils
+
+def run_image_copy(test, params, env):
+
+Copy guest images from NFS server.
+1) Mount the NFS directory
+2) Check the existence of source image
+3) If existence copy the image from NFS
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+
+mount_dest_dir = params.get(dst_dir,'/mnt/images')
+if not os.path.exists(mount_dest_dir):
+os.mkdir(mount_dest_dir)
+
+src_dir = params.get('nfs_images_dir')
+image_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm/images')
+if not os.path.exists(image_dir):
+image_dir = os.path.dirname(params.get(image_name))
+
+image = os.path.split(params['image_name'])[1]+'.'+params['image_format']
+
+src_path = os.path.join(mount_dest_dir, image)
+dst_path = os.path.join(image_dir, image)
+
+if not kvm_utils.mount(src_dir, mount_dest_dir, nfs, ro):
+raise error.TestError(Fail to mount the %s to %s %
+  (src_dir, mount_dest_dir))
+  
+# Check the existence of source image
+if not os.path.exists(src_path):
+raise error.TestError(Could not found %s in src directory % src_path)
+
+logging.info(Copying image '%s'... % image)
+cmd = cp %s %s % (src_path, dst_path)
+s, o = commands.getstatusoutput(cmd)
+if s != 0:
+raise error.TestFail(Failed to copy image %s: %s % (cmd, o))
diff --git a/client/tests/kvm/tests_base.cfg.sample 
b/client/tests/kvm/tests_base.cfg.sample
index b8f25f4..bdeac19 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -61,6 +61,12 @@ variants:
 floppy = images/floppy.img
 extra_params +=  -boot d
 
+- image_copy:
+type = image_copy
+vms = ''
+# Here specify the NFS directory that contains all images
+nfs_images_dir = 
+
 - setup:install unattended_install
 type = steps

Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest image_copy

2010-01-04 Thread Yolkfull Chow
On Mon, Jan 04, 2010 at 10:52:13PM +0800, Amos Kong wrote:
 On Mon, Jan 04, 2010 at 05:30:21PM +0800, Yolkfull Chow wrote:
  Add image_copy subtest for convenient KVM functional testing.
  
  The target image will be copied into the linked directory if link 'images'
  is created, and copied to the directory specified in config file otherwise.
  
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_utils.py  |   64 
  
   client/tests/kvm/tests/image_copy.py   |   42 +
   client/tests/kvm/tests_base.cfg.sample |6 +++
   3 files changed, 112 insertions(+), 0 deletions(-)
   create mode 100644 client/tests/kvm/tests/image_copy.py
  
  diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
  index 2bbbe22..1e11441 100644
  --- a/client/tests/kvm/kvm_utils.py
  +++ b/client/tests/kvm/kvm_utils.py
  @@ -924,3 +924,67 @@ def create_report(report_dir, results_dir):
   reporter = os.path.join(report_dir, 'html_report.py')
   html_file = os.path.join(results_dir, 'results.html')
   os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
  +
  +
  +def is_dir_mounted(source, dest, type, perm):
  +
  +Check whether `source' is mounted on `dest' with right permission.
  +
  +@source: mount source
  +@dest:   mount point
  +@type:   file system type
 
@perm:   mount permission
 
  +
  +match_string = %s %s %s %s % (source, dest, type, perm)
  +try:
  +f = open(/etc/mtab, r)
  +except IOError:
  +pass
 
 When calling open(), if an IOError exception is raised, 'f' is never assigned.
 Then we cannot call 'f.read()' or 'f.close()'.

Ah..yes, thanks for pointing this out.

 
 We need 'return False', not 'pass' 
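
Put together, a minimal sketch of is_dir_mounted() with that fix applied (the
builtin-shadowing 'type' argument is renamed to 'fstype' here; the /etc/mtab
line format is the one matched above):

    def is_dir_mounted(source, dest, fstype, perm):
        """Check whether `source' is mounted on `dest' with the right permission."""
        match_string = "%s %s %s %s" % (source, dest, fstype, perm)
        try:
            f = open("/etc/mtab", "r")
        except IOError:
            # 'f' was never assigned, so bail out instead of falling through
            return False
        mounted = f.read()
        f.close()
        return match_string in mounted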
 
  +mounted = f.read()
  +f.close()
  +if match_string in mounted: 
  +return True
  +return False
  +
  +
  +def umount(mount_point):
  +
  +Umount `mount_point'.
  +
  +@mount_point: mount point
  +
  +cmd = umount %s % mount_point
  +s, o = commands.getstatusoutput(cmd)
  +if s != 0:
  +logging.error(Fail to umount: %s % o)
  +return False
  +return True
  +
  +
  +def mount(src, mount_point, type, perm = rw):
  +
  +Mount the src into mount_point of the host.
  +
  +@src: mount source
  +@mount_point: mount point
  +@type: file system type
  +@perm: mount permission
  +
  +if is_dir_mounted(src, mount_point, type, perm):
  +return True
  +
  +umount(mount_point)
  +
  +cmd = mount -t %s %s %s -o %s % (type, src, mount_point, perm)
  +logging.debug(Issue mount command: %s % cmd)
  +s, o = commands.getstatusoutput(cmd)
  +if s != 0:
  +logging.error(Fail to mount: %s  % o)
  +return False
  +
  +if is_dir_mounted(src, mount_point, type, perm):
  +logging.info(Successfully mounted %s % src)
  +return True
  +else:
  +logging.error(Mount verification failed; currently mounted: %s %
  + file('/etc/mtab').read())
  +return False
  diff --git a/client/tests/kvm/tests/image_copy.py 
  b/client/tests/kvm/tests/image_copy.py
  new file mode 100644
  index 000..800fb90
  --- /dev/null
  +++ b/client/tests/kvm/tests/image_copy.py
  @@ -0,0 +1,42 @@
  +import os, logging, commands
  +from autotest_lib.client.common_lib import error
  +import kvm_utils
  +
  +def run_image_copy(test, params, env):
  +
  +Copy guest images from NFS server.
  +1) Mount the NFS directory
  +2) Check the existence of source image
  +3) If existence copy the image from NFS
  +
  +@param test: kvm test object
  +@param params: Dictionary with the test parameters
  +@param env: Dictionary with test environment.
  +
  +mount_dest_dir = params.get(dst_dir,'/mnt/images')
  +if not os.path.exists(mount_dest_dir):
  +os.mkdir(mount_dest_dir)
  +
  +src_dir = params.get('nfs_images_dir')
  +image_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm/images')
  +if not os.path.exists(image_dir):
  +image_dir = os.path.dirname(params.get(image_name))
  +
  +image = 
  os.path.split(params['image_name'])[1]+'.'+params['image_format']
  +
  +src_path = os.path.join(mount_dest_dir, image)
  +dst_path = os.path.join(image_dir, image)
  +
  +if not kvm_utils.mount(src_dir, mount_dest_dir, nfs, ro):
  +raise error.TestError(Fail to mount the %s to %s %
  +  (src_dir, mount_dest_dir))
  +  
  +# Check the existence of source image
  +if not os.path.exists(src_path):
  +raise error.TestError(Could not found %s in src directory % 
  src_path)
  +
  +logging.info(Copying image '%s'... % image)
  +cmd = cp %s %s % (src_path, dst_path)
  +s, o

[AUTOTEST PATCH 1/2] KVM-test: Fix a bug about list slicing in scan_results.py

2009-12-30 Thread Yolkfull Chow
If 'info_list' is empty, the slice assignment results in a traceback:

...
  info_list[-1] = parts[5]
  IndexError: list assignment index out of range

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/scan_results.py |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/scan_results.py b/client/tests/kvm/scan_results.py
index f7bafa9..1daff2f 100755
--- a/client/tests/kvm/scan_results.py
+++ b/client/tests/kvm/scan_results.py
@@ -45,6 +45,7 @@ def parse_results(text):
 # Found a FAIL/ERROR/GOOD line -- get failure/success info
 elif (len(parts) = 6 and parts[3].startswith(timestamp) and
   parts[4].startswith(localtime)):
+info_list.append()
 info_list[-1] = parts[5]
 
 return result_list
-- 
1.6.6



Re: [Autotest] [Autotest PATCH] KVM test: No need close session when login timeout

2009-12-28 Thread Yolkfull Chow
On Sat, Dec 26, 2009 at 10:07:58AM -0500, Michael Goldish wrote:
 
 - Amos Kong ak...@redhat.com wrote:
 
  On Fri, Dec 25, 2009 at 08:28:18AM -0500, Michael Goldish wrote:
   
   - Amos Kong ak...@redhat.com wrote:
   
If login times out, wait_for() returns 'None', which is assigned to
'session'.
When session.close() is called, this problem occurs:
AttributeError: 'NoneType' object has no attribute 'close'

Signed-off-by: Amos Kong ak...@redhat.com
---
 client/tests/kvm/tests/timedrift_with_migration.py |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/timedrift_with_migration.py
b/client/tests/kvm/tests/timedrift_with_migration.py
index a012db3..0b93183 100644
--- a/client/tests/kvm/tests/timedrift_with_migration.py
+++ b/client/tests/kvm/tests/timedrift_with_migration.py
@@ -76,7 +76,8 @@ def run_timedrift_with_migration(test, params,
env):
  time_filter_re,
time_format)
 
 finally:
-session.close()
+if session != None:
+session.close()
   
   Agreed, but we can make this simply:
   
   if session:
   session.close()

Actually we should use 'wait_for_login' instead of 'wait_for' in 
timedrift_with_migration.py:

session = kvm_test_utils.wait_for_login(vm, timeout=30)

And fix the name of 'dest_vm'; it could be something like 'migrated_vm' in
migrate() in kvm_test_utils.py:

...
dest_vm = vm.clone()
dest_vm.name = migrated_vm
dest_vm.create(for_migration=True)
...

Then this problem will never happen. :)
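
Combining both suggestions, the login/cleanup path in timedrift_with_migration.py
would look roughly like this (a sketch, assuming kvm_test_utils.wait_for_login()
fails the test itself when it cannot log in, so 'session' is never None in the
finally clause):

    session = kvm_test_utils.wait_for_login(vm, timeout=30)
    try:
        # ... measure the guest/host time drift across the migration ...
        pass
    finally:
        session.close()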


  
  Yes,
  
   
   There's no need to explicitly check for None (and if there was,
   the preferred syntax would be 'is not None' rather than '!= None').
   
   Also, just to be safe, we should make the same modification to
   timedrift_with_reboot.py.
  
  
  In timedrift_with_reboot.py, 'session' has been assigned before 'try'.
  If the re-login times out, kvm_test_utils.reboot() returns nothing and the
  value of 'session' isn't changed, so session.close() couldn't cause this
  problem:
  AttributeError: 'NoneType' object has no attribute 'close'
 
 The two tests are nearly identical so I thought we might as well make
 the change in both of them, but I agree that it doesn't matter (unless
 we change the behavior of kvm_test_utils.reboot() in the future).
 
  
  
  
  In other test cases, if session wasn't assigned before 'try' and the call to
  kvm_test_utils.wait_for_login()/kvm_test_utils.reboot() times out, it returns
  nothing; if we then close 'session' in the finally part, another problem will
  occur:
  NameError: name 'session' is not defined
  
  In this condition,
   
  if session:
  session.close()
  
  also causes this error.
 
 In what tests exactly does this happen?
 
 'if session' is preferable to 'if session is not None' because it's
 shorter, not because it's safer.
 
  
  
  
   We can also consider removing the try..finally clauses altogether
   because sessions are now closed automatically when they're no
  longer
   needed.
   
 
 # Report results
 host_delta = ht1 - ht0
-- 
1.5.5.6
  
  -- 
  Amos Kong
  Quality Engineer
  Raycom Office(Beijing), Red Hat Inc.


[Autotest PATCH] KVM test: Fixup memory size shown in 'GB' in get_memory_size

2009-12-24 Thread Yolkfull Chow
In a RHEL-3.9 guest, $mem_chk_cmd reports the memory size in GB, which is
computed wrongly in get_memory_size. This patch fixes the problem.

Thanks to akong for pointing this out.
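
As a standalone illustration of the conversion in the diff below (a sketch, not
part of the patch), the reported value is normalized to MB: GB is scaled up, MB
is kept as-is, and anything else is treated as KB:

    import re

    def normalize_mem_to_mb(mem_str):
        """Sum the numeric fields of mem_str and normalize the result to MB."""
        mem_size = sum(int(m) for m in re.findall(r"([0-9][0-9][0-9]+)", mem_str))
        if "GB" in mem_str:
            mem_size *= 1024
        elif "MB" in mem_str:
            pass
        else:
            # no unit marker: assume KB, as the patch does
            mem_size /= 1024
        return int(mem_size)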

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_vm.py |6 +-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index cc314d4..7229b79 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -854,7 +854,11 @@ class VM:
 mem_size = 0
 for m in mem:
 mem_size += int(m)
-if not MB in mem_str:
+if GB in mem_str:
+mem_size *= 1024
+elif MB in mem_str:
+pass
+else:
 mem_size /= 1024
 return int(mem_size)
 return None
-- 
1.6.5.5



[Autotest PATCH] KVM test: physical_resources_check subtest:Fixup `mem_chk_cmd' for rhel3.9

2009-12-23 Thread Yolkfull Chow
Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests_base.cfg.sample |4 
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/tests_base.cfg.sample 
b/client/tests/kvm/tests_base.cfg.sample
index a403399..f5a55a0 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -535,6 +535,8 @@ variants:
 extra_params +=  -bootp /pxelinux.0 -boot n
 kernel_args = ks=floppy nicdelay=60
 unattended_file = unattended/RHEL-3-series.ks
+physical_resources_check:
+mem_chk_cmd = dmidecode | awk -F: '/Maximum 
Capacity/ {print $2}'
 - 3.9.x86_64:
 no setup autotest linux_s3
 image_name = rhel3-64
@@ -551,6 +553,8 @@ variants:
 extra_params +=  -bootp /pxelinux.0 -boot n
 kernel_args = ks=floppy nicdelay=60
 unattended_file = unattended/RHEL-3-series.ks
+physical_resources_check:
+mem_chk_cmd = dmidecode | awk -F: '/Maximum 
Capacity/ {print $2}'
 # Windows section
 - @Windows:
 no autotest linux_s3 vlan_tag
-- 
1.6.5.5



[Autotest PATCH] KVM test: Add a subtest vnc via which interacts with guest

2009-12-17 Thread Yolkfull Chow
Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/vnc.py  |   24 
 client/tests/kvm/tests_base.cfg.sample |3 +++
 2 files changed, 27 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/vnc.py

diff --git a/client/tests/kvm/tests/vnc.py b/client/tests/kvm/tests/vnc.py
new file mode 100644
index 000..0f00379
--- /dev/null
+++ b/client/tests/kvm/tests/vnc.py
@@ -0,0 +1,24 @@
+import logging, pexpect
+from autotest_lib.client.common_lib import  error
+import kvm_test_utils, kvm_subprocess
+
+def run_vnc(test, params, env):
+
+Test whether guest could be interacted with vnc.
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+
+vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
+session = kvm_test_utils.wait_for_login(vm)
+
+# Start vnc connection test
+vnc_port = str(vm.vnc_port - 5900)
+vnc_cmd = vncviewer +  localhost: + vnc_port
+logging.debug(Using command to vnc connect: %s % vnc_cmd)
+
+p = kvm_subprocess.run_bg(vnc_cmd, None, logging.debug, (vnc) )
+if not p.is_alive():
+raise error.TestFail(Vnc connect to guest failed)
+p.close()
diff --git a/client/tests/kvm/tests_base.cfg.sample 
b/client/tests/kvm/tests_base.cfg.sample
index a403399..0eaccae 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -270,6 +270,9 @@ variants:
 type = physical_resources_check
 catch_uuid_cmd = dmidecode | awk -F: '/UUID/ {print $2}'
 
+- vnc: install setup unattended_install
+type = vnc
+
 # NICs
 variants:
 - @rtl8139:
-- 
1.6.5.5



[Autotest] [PATCH] KVM test: subtest stress_boot: Fix a bug that cloned VMs are not screendumped

2009-12-10 Thread Yolkfull Chow
We just used vm.create() to create the cloned VMs and ignored capturing
screendumps of them. This patch fixes the problem.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/stress_boot.py |8 +++-
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/client/tests/kvm/tests/stress_boot.py 
b/client/tests/kvm/tests/stress_boot.py
index 2a2e933..0b5ec02 100644
--- a/client/tests/kvm/tests/stress_boot.py
+++ b/client/tests/kvm/tests/stress_boot.py
@@ -1,6 +1,6 @@
 import logging, time
 from autotest_lib.client.common_lib import error
-import kvm_subprocess, kvm_test_utils, kvm_utils
+import kvm_subprocess, kvm_test_utils, kvm_utils, kvm_preprocessing
 
 
 def run_stress_boot(tests, params, env):
@@ -39,11 +39,9 @@ def run_stress_boot(tests, params, env):
 vm_params[address_index] = str(address_index)
 curr_vm = vm.clone(vm_name, vm_params)
 kvm_utils.env_register_vm(env, vm_name, curr_vm)
-params['vms'] +=   + vm_name
-
 logging.info(Booting guest #%d % num)
-if not curr_vm.create():
-raise error.TestFail(Cannot create VM #%d % num)
+kvm_preprocessing.preprocess_vm(tests, vm_params, env, vm_name)
+params['vms'] +=   + vm_name
 
 curr_vm_session = kvm_utils.wait_for(curr_vm.remote_login, 240, 0, 
2)
 if not curr_vm_session:
-- 
1.6.5.3



[AUTOTEST PATCH 1/2 - V3] Add a server-side test - kvm_migration

2009-12-08 Thread Yolkfull Chow
This patch adds a server-side test, namely kvm_migration. Currently,
it uses the existing KVM client test framework and adds a new file,
kvm_migration.py, to help determine the execution routine: source machine
or dest machine.

Improvements based on Version #2:
 * Log into the migrated guest from the source client machine, because the
   session obtained before migration should still be responsive.
 * Compare the output of `migration_test_command' before and after migration.
 * Check the status of the migrated guest via the monitor command 'info status'.
 * As suggested by Sudhir, rename 'rem_port' to 'mig_port'. I also renamed
   'rem_host' to 'dest_host'.
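
For orientation, a rough sketch of the port handshake shown in the test
docstring below (a hypothetical helper; the fixed port 50006 comes from
source_addr in the patch, everything else is illustrative):

    import socket

    def exchange_mig_port(role, srchost, mig_port=None, timeout=60):
        """Source accepts one connection and reads the dest's migration port;
        dest connects to the source and sends its own migration port."""
        addr = (srchost, 50006)
        if role == "source":
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(addr)
            server.listen(1)
            server.settimeout(timeout)
            conn, _ = server.accept()
            port = int(conn.recv(16))
            conn.close()
            server.close()
            return port
        else:
            client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            client.settimeout(timeout)
            client.connect(addr)
            client.send(str(mig_port))
            client.close()
            return mig_port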

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_migration.py  |  170 
 client/tests/kvm/kvm_test_utils.py |   27 +++---
 client/tests/kvm/kvm_tests.cfg.sample  |2 +
 client/tests/kvm_migration |1 +
 server/tests/kvm/migration_control.srv |  139 ++
 5 files changed, 327 insertions(+), 12 deletions(-)
 create mode 100644 client/tests/kvm/kvm_migration.py
 create mode 12 client/tests/kvm_migration
 create mode 100644 server/tests/kvm/migration_control.srv

diff --git a/client/tests/kvm/kvm_migration.py 
b/client/tests/kvm/kvm_migration.py
new file mode 100644
index 000..7845e6b
--- /dev/null
+++ b/client/tests/kvm/kvm_migration.py
@@ -0,0 +1,170 @@
+import sys, os, time, logging, commands, socket
+from autotest_lib.client.bin import test
+from autotest_lib.client.common_lib import error
+import kvm_utils, kvm_preprocessing, common, kvm_vm, kvm_test_utils
+
+
+class kvm_migration(test.test):
+
+KVM migration test.
+
+@copyright: Red Hat 2008-2009
+@see: http://www.linux-kvm.org/page/KVM-Autotest/Client_Install
+(Online doc - Getting started with KVM testing)
+
+Migration execution progress:
+
+source host dest host
+--
+log into guest
+--
+start socket server
+
+ wait 30 secs -- wait login_timeout+30 secs---
+
+accept connection connect to socket server,send mig_port
+--
+start migration
+
+ wait 30 secs -- wait mig_timeout+30 secs-
+
+try to log into migrated guest   check VM's status via monitor cmd
+--
+
+
+version = 1
+def initialize(self):
+pass
+
+
+def run_once(self, params):
+
+Setup remote machine and then execute migration.
+
+# Check whether remote machine is ready
+dsthost = params.get(dsthost)
+srchost = params.get(srchost)
+image_path = os.path.join(self.bindir, images)
+
+rootdir = params.get(rootdir)
+iso = os.path.join(rootdir, 'iso')
+images = os.path.join(rootdir, 'images')
+qemu = os.path.join(rootdir, 'qemu')
+qemu_img = os.path.join(rootdir, 'qemu-img')
+
+def link_if_not_exist(ldir, target, link_name):
+t = target
+l = os.path.join(ldir, link_name)
+if not os.path.exists(l):
+os.symlink(t,l)
+link_if_not_exist(self.bindir, '../../', 'autotest')
+link_if_not_exist(self.bindir, iso, 'isos')
+link_if_not_exist(self.bindir, images, 'images')
+link_if_not_exist(self.bindir, qemu, 'qemu')
+link_if_not_exist(self.bindir, qemu_img, 'qemu-img')
+
+# Report the parameters we've received and write them as keyvals
+logging.debug(Test parameters:)
+keys = params.keys()
+keys.sort()
+for key in keys:
+logging.debug(%s = %s, key, params[key])
+self.write_test_keyval({key: params[key]})
+
+# Open the environment file
+env_filename = os.path.join(self.bindir, params.get(env, env))
+env = kvm_utils.load_env(env_filename, {})
+logging.debug(Contents of environment: %s % str(env))
+
+# Preprocess
+kvm_preprocessing.preprocess(self, params, env)
+kvm_utils.dump_env(env, env_filename)
+
+try:
+try:
+# Get the living VM
+vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
+migration_test_command = params.get(migration_test_command)
+login_timeout = int(params.get(login_timeout))
+mig_timeout = int(params.get(mig_timeout))
+source_addr = (srchost, 50006)
+all = [srchost, dsthost]
+
+# Check whether migration is supported
+s, o = vm.send_monitor_cmd(help info)
+if not info migrate in o:
+raise

[AUTOTEST PATCH 2/2] KVM test: subtest migration: Add rem_host and rem_port for migrate()

2009-12-08 Thread Yolkfull Chow
Since kvm_test_utils.migrate() adds two arguments to support server-side
migration, this client-side test also needs updating.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/migration.py |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/migration.py 
b/client/tests/kvm/tests/migration.py
index b8f171c..3c983bc 100644
--- a/client/tests/kvm/tests/migration.py
+++ b/client/tests/kvm/tests/migration.py
@@ -43,7 +43,8 @@ def run_migration(test, params, env):
 session2.close()
 
 # Migrate the VM
-dest_vm = kvm_test_utils.migrate(vm, env)
+dest_vm = kvm_test_utils.migrate(vm, localhost,
+ dest_vm.migration_port, env)
 
 # Log into the guest again
 logging.info(Logging into guest after migration...)
-- 
1.6.5.2



[AUTOTEST PATCH] KVM test: subtest block_hotplug: Fixup pci_test_cmd in config file

2009-12-07 Thread Yolkfull Chow
RHEL-4.8 still uses 'hd[a-z]' as the hard disk device name. This patch adds
'h' to the regular expression in the `pci_test_cmd' command, so fdisk output
lines such as "Disk /dev/hdb doesn't contain a valid partition table" are
matched as well.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_tests.cfg.sample |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 20ae332..73c593a 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -217,7 +217,7 @@ variants:
 image_size_stg = 1G
 remove_image_stg = yes
 force_create_image_stg = yes
-pci_test_cmd = yes | mke2fs `fdisk -l 21 | awk '/\/dev\/[sv]d[a-z] 
doesn/ {print $2}'`
+pci_test_cmd = yes | mke2fs `fdisk -l 21 | awk '/\/dev\/[hsv]d[a-z] 
doesn/ {print $2}'`
 wait_secs_for_hook_up = 3
 kill_vm_on_error = yes
 variants:
-- 
1.6.5.2



Re: [Autotest] [PATCH] Add a server-side test - kvm_migration

2009-12-07 Thread Yolkfull Chow
On Mon, Dec 07, 2009 at 03:35:54PM +0530, sudhir kumar wrote:
 Resending with proper cc list :(
 
 On Mon, Dec 7, 2009 at 2:43 PM, sudhir kumar smalik...@gmail.com wrote:
  Thanks for initiating the server side implementation of migration. Few
  comments below
 
  On Fri, Dec 4, 2009 at 1:48 PM, Yolkfull Chow yz...@redhat.com wrote:
  This patch will add a server-side test namely kvm_migration. Currently,
  it will use existing KVM client test framework and add a new file
  kvm_migration.py to help judge executing routine: source machine or dest
  machine.
 
  * One thing need to be considered/improved:
  Whether we parse the kvm_tests.cfg on server machine or on client machines?
  If parse it on client machines, we need to fix one problem that adding
  'start_vm_for_migration' parameter into dict which generated on dest 
  machine.
  I think we can not manage with client side parsing without adding too
  much complexity. So let us continue parsing on the server side only
  for remote migration. Also as the patch does, keep the local migration
  under the client also. I do not like adding test variants in
  migration_control.srv. Comments below...
 
  So far I choose parsing kvm_tests.cfg on server machine, and then add
  'start_vm_for_migration' into dict cloned from original test dict for dest
  machine.
 
  * In order to run this test so far, we need to setup NFS for both
  source and dest machines.
 
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_migration.py      |  165 
  
   client/tests/kvm/kvm_test_utils.py     |   27 +++---
   client/tests/kvm/kvm_tests.cfg.sample  |    2 +
   client/tests/kvm_migration             |    1 +
   server/tests/kvm/migration_control.srv |  137 ++
   5 files changed, 320 insertions(+), 12 deletions(-)
   create mode 100644 client/tests/kvm/kvm_migration.py
   create mode 12 client/tests/kvm_migration
   create mode 100644 server/tests/kvm/migration_control.srv
 
  diff --git a/client/tests/kvm/kvm_migration.py 
  b/client/tests/kvm/kvm_migration.py
  new file mode 100644
  index 000..52cd3cd
  --- /dev/null
  +++ b/client/tests/kvm/kvm_migration.py
  @@ -0,0 +1,165 @@
  +import sys, os, time, logging, commands, socket
  +from autotest_lib.client.bin import test
  +from autotest_lib.client.common_lib import error
  +import kvm_utils, kvm_preprocessing, common, kvm_vm, kvm_test_utils
  +
  +
  +class kvm_migration(test.test):
  +    
  +    KVM migration test.
  +
  +    @copyright: Red Hat 2008-2009
  +    @see: http://www.linux-kvm.org/page/KVM-Autotest/Client_Install
  +            (Online doc - Getting started with KVM testing)
  +
  +    Migration execution progress:
  +
  +    source host                     dest host
  +    --
  +    log into guest
  +    --
  +    start socket server
  +
  +     wait 30 secs -- wait login_timeout+30 secs---
  +
  +    accept connection             connect to socket server,send mig_port
  +    --
  +    start migration
  +
  +     wait 30 secs -- wait mig_timeout+30 secs-
  +
  +                                  try to log into migrated guest
  +    --
  +
  +    
  +    version = 1
  +    def initialize(self):
  +        pass
  +
  +
  +    def run_once(self, params):
  +        
  +        Setup remote machine and then execute migration.
  +        
  +        # Check whether remote machine is ready
  +        dsthost = params.get(dsthost)
  +        srchost = params.get(srchost)
  +        image_path = os.path.join(self.bindir, images)
  +
  +        rootdir = params.get(rootdir)
  +        iso = os.path.join(rootdir, 'iso')
  +        images = os.path.join(rootdir, 'images')
  +        qemu = os.path.join(rootdir, 'qemu')
  +        qemu_img = os.path.join(rootdir, 'qemu-img')
  +
  +        def link_if_not_exist(ldir, target, link_name):
  +            t = target
  +            l = os.path.join(ldir, link_name)
  +            if not os.path.exists(l):
  +                os.symlink(t,l)
  +        link_if_not_exist(self.bindir, '../../', 'autotest')
  +        link_if_not_exist(self.bindir, iso, 'isos')
  +        link_if_not_exist(self.bindir, images, 'images')
  +        link_if_not_exist(self.bindir, qemu, 'qemu')
  +        link_if_not_exist(self.bindir, qemu_img, 'qemu-img')
  +
  +        # Report the parameters we've received and write them as keyvals
  +        logging.debug(Test parameters:)
  +        keys = params.keys()
  +        keys.sort()
  +        for key in keys:
  +            logging.debug(    %s = %s, key, params[key])
  +            self.write_test_keyval({key: params[key]})
  +
  +        # Open

[PATCH V2] Add a server-side test - kvm_migration

2009-12-04 Thread Yolkfull Chow
This patch adds a server-side test, namely kvm_migration. Currently,
it uses the existing KVM client test framework and adds a new file,
kvm_migration.py, to help determine the execution routine: source machine
or dest machine.

Improvements based on version #1:
1) Add two barriers to control the synchronization problem.
2) Add a socket to tell the source machine which port the dest machine
   is listening on.
3) Make up all .cfg files according to the sample files.
4) Delete the last line of kvm_tests.cfg and place the limitations in the
   server control file for the user to edit.

* One thing needs to be considered/improved:
whether we parse kvm_tests.cfg on the server machine or on the client machines.
If we parse it on the client machines, we need to fix the problem of adding the
'start_vm_for_migration' parameter into the dict generated on the dest machine.

So far I chose to parse kvm_tests.cfg on the server machine and then add
'start_vm_for_migration' into the dict cloned from the original test dict for
the dest machine.

* In order to run this test so far, we need to set up NFS for both the
source and dest machines.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_migration.py  |  165 
 client/tests/kvm/kvm_test_utils.py |   27 +++---
 client/tests/kvm/kvm_tests.cfg.sample  |2 +
 client/tests/kvm_migration |1 +
 server/tests/kvm/migration_control.srv |  137 ++
 5 files changed, 320 insertions(+), 12 deletions(-)
 create mode 100644 client/tests/kvm/kvm_migration.py
 create mode 12 client/tests/kvm_migration
 create mode 100644 server/tests/kvm/migration_control.srv

diff --git a/client/tests/kvm/kvm_migration.py 
b/client/tests/kvm/kvm_migration.py
new file mode 100644
index 000..52cd3cd
--- /dev/null
+++ b/client/tests/kvm/kvm_migration.py
@@ -0,0 +1,165 @@
+import sys, os, time, logging, commands, socket
+from autotest_lib.client.bin import test
+from autotest_lib.client.common_lib import error
+import kvm_utils, kvm_preprocessing, common, kvm_vm, kvm_test_utils
+
+
+class kvm_migration(test.test):
+
+KVM migration test.
+
+@copyright: Red Hat 2008-2009
+@see: http://www.linux-kvm.org/page/KVM-Autotest/Client_Install
+(Online doc - Getting started with KVM testing)
+
+Migration execution progress:
+
+    source host                                  dest host
+    -------------------------------------------------------------------------
+    log into guest
+    -------------------------------------------------------------------------
+    start socket server
+
+    wait 30 secs ---------------------------- wait login_timeout+30 secs ----
+
+    accept connection                          connect to socket server,
+                                               send mig_port
+    -------------------------------------------------------------------------
+    start migration
+
+    wait 30 secs ---------------------------- wait mig_timeout+30 secs ------
+
+                                               try to log into migrated guest
+    -------------------------------------------------------------------------
+
+
+version = 1
+def initialize(self):
+pass
+
+
+def run_once(self, params):
+
+Setup remote machine and then execute migration.
+
+# Check whether remote machine is ready
+dsthost = params.get("dsthost")
+srchost = params.get("srchost")
+image_path = os.path.join(self.bindir, "images")
+
+rootdir = params.get(rootdir)
+iso = os.path.join(rootdir, 'iso')
+images = os.path.join(rootdir, 'images')
+qemu = os.path.join(rootdir, 'qemu')
+qemu_img = os.path.join(rootdir, 'qemu-img')
+
+def link_if_not_exist(ldir, target, link_name):
+t = target
+l = os.path.join(ldir, link_name)
+if not os.path.exists(l):
+os.symlink(t,l)
+link_if_not_exist(self.bindir, '../../', 'autotest')
+link_if_not_exist(self.bindir, iso, 'isos')
+link_if_not_exist(self.bindir, images, 'images')
+link_if_not_exist(self.bindir, qemu, 'qemu')
+link_if_not_exist(self.bindir, qemu_img, 'qemu-img')
+
+# Report the parameters we've received and write them as keyvals
+logging.debug("Test parameters:")
+keys = params.keys()
+keys.sort()
+for key in keys:
+logging.debug("%s = %s", key, params[key])
+self.write_test_keyval({key: params[key]})
+
+# Open the environment file
+env_filename = os.path.join(self.bindir, params.get("env", "env"))
+env = kvm_utils.load_env(env_filename, {})
+logging.debug("Contents of environment: %s" % str(env))
+
+# Preprocess
+kvm_preprocessing.preprocess(self, params, env)
+kvm_utils.dump_env(env, env_filename)
+
+try:
+try:
+# Get the living VM
+vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+migration_test_command = params.get("migration_test_command")
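
For reference, the socket step in the table above (the dest host sends mig_port,
the source host accepts) could be implemented roughly like this minimal sketch;
the (srchost, 50006) address mirrors the source_addr used in the patch, while
the function names and timeouts are illustrative only:

    import socket

    def publish_mig_port(source_addr, mig_port, timeout=30):
        # Destination host: tell the source which port the listening VM
        # was started on (-incoming tcp:0:<mig_port>).
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        s.connect(source_addr)
        s.sendall(str(mig_port))
        s.close()

    def wait_for_mig_port(source_addr, timeout=30):
        # Source host: accept one connection and read the port number.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(source_addr)
        server.listen(1)
        server.settimeout(timeout)
        conn, addr = server.accept()
        mig_port = int(conn.recv(16))
        conn.close()
        server.close()
        return mig_port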

[PATCH 1/2] Add a server-side test - kvm_migration

2009-12-04 Thread Yolkfull Chow
This patch adds a server-side test named kvm_migration. Currently, it
reuses the existing KVM client test framework and adds a new file,
kvm_migration.py, which decides which routine to execute: source machine
or dest machine.

* One thing that needs to be considered/improved:
Do we parse kvm_tests.cfg on the server machine or on the client machines?
If we parse it on the client machines, we need to solve the problem of adding
the 'start_vm_for_migration' parameter to the dict generated on the dest machine.

So far I have chosen to parse kvm_tests.cfg on the server machine, and then add
'start_vm_for_migration' to a dict cloned from the original test dict for the
dest machine.

* In order to run this test so far, we need to set up NFS for both the
source and the dest machine.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_migration.py  |  165 
 client/tests/kvm/kvm_test_utils.py |   27 +++---
 client/tests/kvm/kvm_tests.cfg.sample  |2 +
 client/tests/kvm_migration |1 +
 server/tests/kvm/migration_control.srv |  137 ++
 5 files changed, 320 insertions(+), 12 deletions(-)
 create mode 100644 client/tests/kvm/kvm_migration.py
 create mode 12 client/tests/kvm_migration
 create mode 100644 server/tests/kvm/migration_control.srv

diff --git a/client/tests/kvm/kvm_migration.py 
b/client/tests/kvm/kvm_migration.py
new file mode 100644
index 000..52cd3cd
--- /dev/null
+++ b/client/tests/kvm/kvm_migration.py
@@ -0,0 +1,165 @@
+import sys, os, time, logging, commands, socket
+from autotest_lib.client.bin import test
+from autotest_lib.client.common_lib import error
+import kvm_utils, kvm_preprocessing, common, kvm_vm, kvm_test_utils
+
+
+class kvm_migration(test.test):
+
+KVM migration test.
+
+@copyright: Red Hat 2008-2009
+@see: http://www.linux-kvm.org/page/KVM-Autotest/Client_Install
+(Online doc - Getting started with KVM testing)
+
+Migration execution progress:
+
+    source host                                  dest host
+    -------------------------------------------------------------------------
+    log into guest
+    -------------------------------------------------------------------------
+    start socket server
+
+    wait 30 secs ---------------------------- wait login_timeout+30 secs ----
+
+    accept connection                          connect to socket server,
+                                               send mig_port
+    -------------------------------------------------------------------------
+    start migration
+
+    wait 30 secs ---------------------------- wait mig_timeout+30 secs ------
+
+                                               try to log into migrated guest
+    -------------------------------------------------------------------------
+
+
+version = 1
+def initialize(self):
+pass
+
+
+def run_once(self, params):
+
+Setup remote machine and then execute migration.
+
+# Check whether remote machine is ready
+dsthost = params.get("dsthost")
+srchost = params.get("srchost")
+image_path = os.path.join(self.bindir, "images")
+
+rootdir = params.get(rootdir)
+iso = os.path.join(rootdir, 'iso')
+images = os.path.join(rootdir, 'images')
+qemu = os.path.join(rootdir, 'qemu')
+qemu_img = os.path.join(rootdir, 'qemu-img')
+
+def link_if_not_exist(ldir, target, link_name):
+t = target
+l = os.path.join(ldir, link_name)
+if not os.path.exists(l):
+os.symlink(t,l)
+link_if_not_exist(self.bindir, '../../', 'autotest')
+link_if_not_exist(self.bindir, iso, 'isos')
+link_if_not_exist(self.bindir, images, 'images')
+link_if_not_exist(self.bindir, qemu, 'qemu')
+link_if_not_exist(self.bindir, qemu_img, 'qemu-img')
+
+# Report the parameters we've received and write them as keyvals
+logging.debug("Test parameters:")
+keys = params.keys()
+keys.sort()
+for key in keys:
+logging.debug("%s = %s", key, params[key])
+self.write_test_keyval({key: params[key]})
+
+# Open the environment file
+env_filename = os.path.join(self.bindir, params.get("env", "env"))
+env = kvm_utils.load_env(env_filename, {})
+logging.debug("Contents of environment: %s" % str(env))
+
+# Preprocess
+kvm_preprocessing.preprocess(self, params, env)
+kvm_utils.dump_env(env, env_filename)
+
+try:
+try:
+# Get the living VM
+vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+migration_test_command = params.get("migration_test_command")
+login_timeout = int(params.get("login_timeout"))
+mig_timeout = int(params.get("mig_timeout"))
+source_addr = (srchost, 50006)
+all = [srchost, dsthost]
+
+# Check whether migration is supported
+s, o = vm.send_monitor_cmd(help info
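
A capability check of this kind could look roughly like the sketch below;
send_monitor_cmd() returning a (status, output) pair follows its use above,
while the exact command string and the use of TestNAError are assumptions:

    from autotest_lib.client.common_lib import error

    def check_migration_support(vm):
        # Sketch: grep the monitor's help output for the migrate commands
        # and skip (rather than fail) the test if they are missing.
        s, o = vm.send_monitor_cmd("help info")
        if s != 0 or "migrate" not in o:
            raise error.TestNAError("Monitor does not appear to support "
                                    "migration commands")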

[PATCH 2/2] KVM test: subtest migration: Add rem_host and rem_port for migrate().

2009-12-04 Thread Yolkfull Chow
Since kvm_test_utils.migrate() now takes two extra arguments to support
server-side migration, this client-side test also needs updating.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/migration.py |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/migration.py 
b/client/tests/kvm/tests/migration.py
index b8f171c..b84943d 100644
--- a/client/tests/kvm/tests/migration.py
+++ b/client/tests/kvm/tests/migration.py
@@ -43,7 +43,8 @@ def run_migration(test, params, env):
 session2.close()
 
 # Migrate the VM
-dest_vm = kvm_test_utils.migrate(vm, env)
+dest_vm = kvm_test_utils.migrate(vm, "localhost", 
+ dest_vm.migration_port, env)
 
 # Log into the guest again
 logging.info(Logging into guest after migration...)
-- 
1.6.5.2



Re: [PATCH 1/2] Adds a test to verify resources inside a VM

2009-12-04 Thread Yolkfull Chow
On Sun, Nov 29, 2009 at 11:04:55AM +0200, Yaniv Kaul wrote:
 On 11/29/2009 9:20 AM, Yolkfull Chow wrote:
 On Wed, Nov 25, 2009 at 11:35:02AM +0530, sudhir kumar wrote:
 This patch adds a test for verifying whether the number of cpus and amount
 of memory as seen inside a guest is same as allocated to it on the qemu
 command line.
 Hello Sudhir,
 
 Please see embedded comments as below:
 
 Signed-off-by: Sudhir Kumarsku...@linux.vnet.ibm.com
 
 Index: kvm/tests/verify_resources.py
 ===
 --- /dev/null
 +++ kvm/tests/verify_resources.py
 @@ -0,0 +1,74 @@
 +import logging, time
 +from autotest_lib.client.common_lib import error
 +import kvm_subprocess, kvm_test_utils, kvm_utils
 +
 +
 +Test to verify if the guest has the equal amount of resources
 +as allocated through the qemu command line
 +
 +...@copyright: 2009 IBM Corporation
 +...@author: Sudhir Kumarsku...@linux.vnet.ibm.com
 +
 +
 +
 +def run_verify_resources(test, params, env):
 +
 +KVM test for verifying VM resources(#vcpu, memory):
 +1) Get resources from the VM parameters
 +2) Log into the guest
 +3) Get actual resources, compare and report the pass/failure
 +
 +@param test: kvm test object
 +@param params: Dictionary with the test parameters
 +@param env: Dictionary with test environment.
 +
 +vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
 +
 +# Get info about vcpu and memory from dictionary
 +exp_vcpus = int(params.get(smp))
 +exp_mem_mb = long(params.get(mem))
 +real_vcpus = 0
 +real_mem_kb = 0
 +real_mem_mb = 0
 +# Some memory is used by bios and all, so lower the expected
 value say by 5%
 +exp_mem_mb = long(exp_mem_mb * 0.95)
 +logging.info(The guest should have vcpus: %s % exp_vcpus)
 +logging.info(The guest should have min mem: %s MB % exp_mem_mb)
 +
 +session = kvm_test_utils.wait_for_login(vm)
 +
 +# Get info about vcpu and memory from within guest
 +if params.get(guest_os_type) == Linux:
 +output = session.get_command_output(cat /proc/cpuinfo|grep 
 processor)
 We'd better not hard code the command that gets the CPU count here. KVM 
 supports not
 only Linux & Windows, but also others, say Unix/BSD.
 A recommended method could be to define it in the config file for different 
 platforms:
 
 - @Linux:
  verify_resources:
  count_cpu_cmd = grep processor /proc/cpuinfo
 
 - @Windows:
  verify_resources:
  count_cpu_cmd = systeminfo (here I would not suggest we use 
  'systeminfo'
  for catching M$ guest's memory size)
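
A sketch of what the config-driven approach could look like on the test side;
the 'count_cpu_cmd' key matches the proposal above, while the helper name and
the Linux-oriented default are illustrative:

    def count_guest_cpus(session, params):
        # Read the per-platform command from the config instead of hard
        # coding it; fall back to the Linux command proposed above.
        count_cpu_cmd = params.get("count_cpu_cmd",
                                   "grep -c processor /proc/cpuinfo")
        s, output = session.get_command_status_output(count_cpu_cmd)
        if s != 0:
            return None
        # For the Linux default the output is already a bare number;
        # other platforms may need their own parsing.
        return int(output.strip().splitlines()[-1])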
 
 +for line in output.split('\n'):
 +if 'processor' in line:
 +real_vcpus = real_vcpus + 1
 
 If you want just to count the number of processors, count using:
 /bin/grep -c processor /proc/cpuinfo
 However, I feel there's more data we could get from the output -
 such as the topology (sockets/cores/threads) and cpuid level and
 flags which we should look at .

Hi Yaniv,
Thanks very much for comments. Yes, cpuid level and flags are important. 
And test of cpu_flags verification has been finished long long ago in our
internal tree. We will add cpuid in later improvement. 

Thanks again for pointing this out and also sorry for late reply. :)

 
 +
 +output = session.get_command_output(cat /proc/meminfo)
 
 /bin/grep MemTotal /proc/meminfo
 Y.
 
 For catching the memory size of Linux guests, I prefer the 'dmidecode' command, 
 which
 can report the memory size exactly, in MB.
 
 +for line in output.split('\n'):
 +if 'MemTotal' in line:
 +real_mem_kb = long(line.split()[1])
 +real_mem_mb = real_mem_kb / 1024
 +
 +elif params.get(guest_os_type) == Windows:
 +# Windows takes long time to display output for systeminfo
 +output = session.get_command_output(systeminfo, timeout =
 150, internal_timeout = 50)
 +for line in output.split('\n'):
 +if 'Processor' in line:
 +real_vcpus = int(line.split()[1])
 +
 +for line in output.split('\n'):
 +if 'Total Physical Memory' in line:
 +   real_mem_mb = long(.join(%s % k for k in
 line.split()[3].split(',')))
 So many slice and split operations can easily result in problems.
 To catch the memory size of Windows guests, I recommend we use 'wmic memphysical', 
 which
 can dump the memory size in KB exactly.
 
 
 Meanwhile, we also need to verify the guest's NIC count and model,
 hard disk count & model, etc. Therefore I think we need a case that verifies
 them together.
 
 I wrote such a test a couple of days ago and have run it several times.
 Please comment on it when I post it here later. Thanks,
 
 +
 +else:
 +raise error.TestFail(Till date this test is supported only
 for Linux and Windows)
 +
 +logging.info(The guest has cpus: %s % real_vcpus)
 +logging.info(The guest has mem: %s MB

Re: [Autotest] [PATCH 1/2] Adds a test to verify resources inside a VM

2009-12-01 Thread Yolkfull Chow
On Tue, Dec 01, 2009 at 11:56:43AM -0200, Lucas Meneghel Rodrigues wrote:
 Hi Sudhir and Yolkfull:
 
 Thanks for your work on this test! Since Yolkfull's test matches
 Sudhir's test functionality and extends it, I will go with it. Some
 points:
 
  * A failure on checking a given resource shouldn't prevent us from
 testing other resources. Hence, instead of TestFail() exceptions,
 let's replace it by an increase on a failure counter defined in the
 beginning of the test.
  * In order to make it more clear what the test does, let's change the
 name to check_physical_resources
  * At least for the user messages, it's preferrable to use Assigned
 to VM and Reported by OS instead of expected and actual.
 
 I have implemented the suggestions and tested it, works quite well. A
 patch was sent to the mailing list a couple of minutes ago, please let
 me know what you guys think.

Looks good to me. Thanks, Lucas, for improving this test. 

Sudhir, what do you think about this? :)

Cheers,
Yolkfull
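
For reference, the failure-counter pattern could look roughly like this sketch
(the helper name and the (description, assigned, reported) tuple layout are
made up here, not taken from the posted patch):

    import logging
    from autotest_lib.client.common_lib import error

    def report_mismatches(checks):
        # checks: list of (description, assigned, reported) tuples.
        n_fail = 0
        for description, assigned, reported in checks:
            if assigned != reported:
                logging.error("%s -- assigned to VM: %s, reported by OS: %s",
                              description, assigned, reported)
                n_fail += 1
        if n_fail:
            raise error.TestFail("%d resource check(s) failed" % n_fail)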

 
 Cheers,
 
 On Sun, Nov 29, 2009 at 8:40 AM, Yolkfull Chow yz...@redhat.com wrote:
  On Sun, Nov 29, 2009 at 02:22:40PM +0530, sudhir kumar wrote:
  On Sun, Nov 29, 2009 at 12:50 PM, Yolkfull Chow yz...@redhat.com wrote:
   On Wed, Nov 25, 2009 at 11:35:02AM +0530, sudhir kumar wrote:
   This patch adds a test for verifying whether the number of cpus and 
   amount
   of memory as seen inside a guest is same as allocated to it on the qemu
   command line.
  
   Hello Sudhir,
  
   Please see embedded comments as below:
  
  
   Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com
  
   Index: kvm/tests/verify_resources.py
   ===
   --- /dev/null
   +++ kvm/tests/verify_resources.py
   @@ -0,0 +1,74 @@
   +import logging, time
   +from autotest_lib.client.common_lib import error
   +import kvm_subprocess, kvm_test_utils, kvm_utils
   +
   +
   +Test to verify if the guest has the equal amount of resources
   +as allocated through the qemu command line
   +
   +...@copyright: 2009 IBM Corporation
   +...@author: Sudhir Kumar sku...@linux.vnet.ibm.com
   +
   +
   +
   +def run_verify_resources(test, params, env):
   +    
   +    KVM test for verifying VM resources(#vcpu, memory):
   +    1) Get resources from the VM parameters
   +    2) Log into the guest
   +    3) Get actual resources, compare and report the pass/failure
   +
   +   �...@param test: kvm test object
   +   �...@param params: Dictionary with the test parameters
   +   �...@param env: Dictionary with test environment.
   +    
   +    vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
   +
   +    # Get info about vcpu and memory from dictionary
   +    exp_vcpus = int(params.get(smp))
   +    exp_mem_mb = long(params.get(mem))
   +    real_vcpus = 0
   +    real_mem_kb = 0
   +    real_mem_mb = 0
   +    # Some memory is used by bios and all, so lower the expected
   value say by 5%
   +    exp_mem_mb = long(exp_mem_mb * 0.95)
   +    logging.info(The guest should have vcpus: %s % exp_vcpus)
   +    logging.info(The guest should have min mem: %s MB % exp_mem_mb)
   +
   +    session = kvm_test_utils.wait_for_login(vm)
   +
   +    # Get info about vcpu and memory from within guest
   +    if params.get(guest_os_type) == Linux:
   +        output = session.get_command_output(cat /proc/cpuinfo|grep 
   processor)
  
   We'd better here not hard code the command that getting CPU count. As 
   KVM supports not
   only Linux  Windows, but also others say Unix/BSD.
   A recommended method could be define it in config file for different 
   platforms:
  I agree. The only concern that made me doing it inside test is the
  increasing size and complexity of the config file. I am fine with
  passing the command from the config file but still the code paths have
  to be different for each type of OS ie windows linux etc.
 
  
   - @Linux:
      verify_resources:
          count_cpu_cmd = grep processor /proc/cpuinfo
  
   - @Windows:
      verify_resources:
          count_cpu_cmd = systeminfo (here I would not suggest we use 
   'systeminfo'
                                      for catching M$ guest's memory size)
  
   +        for line in output.split('\n'):
   +            if 'processor' in line:
   +                real_vcpus = real_vcpus + 1
   +
   +        output = session.get_command_output(cat /proc/meminfo)
  
   For catching memory size of Linux guests, I prefer command 'dmidecode' 
   which can
   catch memory size exactly in MB.
  I think we can use both here. To my knowledge dmidecode will test the
  BIOS code of kvm and hence we can include both the methods?
  
   +        for line in output.split('\n'):
   +            if 'MemTotal' in line:
   +                real_mem_kb = long(line.split()[1])
   +        real_mem_mb = real_mem_kb / 1024
   +
   +    elif params.get(guest_os_type) == Windows:
   +        # Windows takes long time to display output for systeminfo

[PATCH] Add a server-side test: kvm_migration

2009-12-01 Thread Yolkfull Chow
This patch will add a server-side test namely kvm_migration. Currently,
it will use existing KVM client test framework and add a new file
kvm_migration.py to help judge executing routine: source machine or dest
machine.

* Things that need to be improved:
1) a method/mechanism to inform the source machine (the one initiating the
   migration) when the dest machine (whose VM is started in listening mode) is
   ready. IMHO, we could use socket communication to send the 'listening port'
   (-incoming tcp:0:$PORT) to the source host, which would then start migrating.
2) how we can edit the kvm_tests.cfg file to control different migrations via
   the web front-end. AFAIK, we can edit the control file (migration_control.srv)
   but we can't touch kvm_tests.cfg.

* In order to run this test so far, we need to:
1) set up NFS on the source host (pair[1] machine) and mount it on the dest host (pair[0])
2) edit kvm_tests.cfg, kvm_address_pool.cfg, kvm_cdkey.cfg and whatever else
   a kvm client-side test run needs
3) issue the command: autoserv -m source_ip,dest_ip migration_control.srv

NOTE: This is only a trial version. Comments/suggestions are very welcome.
Thanks in advance.

Cheers,

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_migration.py  |  146 
 client/tests/kvm/kvm_preprocessing.py  |2 +-
 client/tests/kvm/kvm_test_utils.py |   27 +++---
 client/tests/kvm/kvm_tests.cfg.sample  |1 +
 client/tests/kvm/kvm_vm.py |4 +-
 client/tests/kvm/tests/migration.py|2 +-
 client/tests/kvm_migration |1 +
 server/tests/kvm/migration_control.srv |   98 +
 8 files changed, 265 insertions(+), 16 deletions(-)
 create mode 100644 client/tests/kvm/kvm_migration.py
 create mode 12 client/tests/kvm_migration
 create mode 100644 server/tests/kvm/migration_control.srv

diff --git a/client/tests/kvm/kvm_migration.py 
b/client/tests/kvm/kvm_migration.py
new file mode 100644
index 000..4c08b5a
--- /dev/null
+++ b/client/tests/kvm/kvm_migration.py
@@ -0,0 +1,146 @@
+import sys, os, time, logging, commands
+from autotest_lib.client.bin import test
+from autotest_lib.client.common_lib import error
+import kvm_utils, kvm_preprocessing, common, kvm_vm, kvm_test_utils
+
+
+class kvm_migration(test.test):
+
+KVM migration test.
+
+@copyright: Red Hat 2008-2009
+@see: http://www.linux-kvm.org/page/KVM-Autotest/Client_Install
+(Online doc - Getting started with KVM testing)
+
+version = 1
+def initialize(self):
+pass
+
+
+def setup(self):
+
+Setup environment like NFS mount etc.
+
+pass
+
+
+def run_once(self, params):
+
+Setup remote machine and then execute migration.
+
+# Check whether remote machine is ready
+dsthost = params.get("dsthost")
+srchost = params.get("srchost")
+image_path = os.path.join(self.bindir, "images")
+
+rootdir = params.get(rootdir)
+iso = os.path.join(rootdir, 'iso')
+images = os.path.join(rootdir, 'images')
+qemu = os.path.join(rootdir, 'qemu')
+qemu_img = os.path.join(rootdir, 'qemu-img')
+
+def link_if_not_exist(ldir, target, link_name):
+t = target
+l = os.path.join(ldir, link_name)
+if not os.path.exists(l):
+os.symlink(t,l)
+link_if_not_exist(self.bindir, '../../', 'autotest')
+link_if_not_exist(self.bindir, iso, 'isos')
+link_if_not_exist(self.bindir, images, 'images')
+link_if_not_exist(self.bindir, qemu, 'qemu')
+link_if_not_exist(self.bindir, qemu_img, 'qemu-img')
+
+try:
+image_real_path = os.readlink(image_path)
+except OSError:
+raise error.TestError(Readlink of image dir failed)
+
+def setup_dest(srchost, path):
+"""
+Mount NFS directory from source host.
+"""
+cmd = "mount | grep -q %s" % srchost
+if os.system(cmd):
+mnt_cmd = "mount %s:%s %s" % (srchost,
+  path,
+  path)
+s, o = commands.getstatusoutput(mnt_cmd)
+if s != 0:
+raise error.TestError("Mount srchost failed: %s" % o)
+
+def setup_source(path):
+"""
+Setup NFS mount point.
+"""
+export_string = "%s *(rw,no_root_squash)" % path
+export_file = '/etc/exports'
+f = open(export_file)
+if not export_string in f.read().strip():
+f.close()
+try:
+f = open(export_file, 'a')
+f.write(export_string)
+f.close()
+except IOError:
+raise error.TestError("Failed to write to exports file")
+
+cmd = "service nfs restart && exportfs -a"
+if os.system(cmd):
+raise

Re: [PATCH] KVM test: Add PCI device assignment support

2009-11-30 Thread Yolkfull Chow
On Mon, Nov 30, 2009 at 07:08:11PM -0200, Lucas Meneghel Rodrigues wrote:
 Add support to PCI device assignment on the kvm test. It supports
 both SR-IOV virtual functions and physical NIC card device
 assignment.
 
 Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
 be shared amongst multiple virtual machines while retaining the
 performance benefit of assigning a PCI device to a virtual machine.
 A common example is where a single SR-IOV capable NIC - with perhaps
 only a single physical network port - might be shared with multiple
 virtual machines by assigning a virtual function to each VM.
 
 SR-IOV support is implemented in the kernel. The core implementation is
 contained in the PCI subsystem, but there must also be driver support
 for both the Physical Function (PF) and Virtual Function (VF) devices.
 With an SR-IOV capable device one can allocate VFs from a PF. The VFs
 surface as PCI devices which are backed on the physical PCI device by
 resources (queues, and register sets).
 
 Device support:
 
 In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
 SR-IOV capable device supported. The igb driver has PF support and the
 igbvf has VF support.
 
 In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
 the same vxge driver for the PF as well as the VFs.
 
 In order to configure the test:
 
   * For SR-IOV virtual functions passthrough, we could specify the
 module parameter 'max_vfs' in config file.
   * For physical NIC card pass through, we should specify the device
 name(s).
 
 3rd try: The patch was heavily modified from the first 2 attempts:
 
  * Naming is consistent with PCI assignment instead of
PCI passthrough, as it's a more correct term.
  * No more device database file, as all information about devices
is stored on an attribute of the VM class (an instance of the
PciAssignable class), so we don't have to bother dumping this
info to a file.
  * Code simplified to avoid duplication
 
 As it's a fairly involved feature, the more reviews we get the better.

Hi Lucas,

I have some ideas about devices_requested parameter, please see comments
below:

 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 ---
  client/tests/kvm/kvm_tests.cfg.sample |   20 +++-
  client/tests/kvm/kvm_utils.py |  278 
 +
  client/tests/kvm/kvm_vm.py|   59 +++
  3 files changed, 356 insertions(+), 1 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
 b/client/tests/kvm/kvm_tests.cfg.sample
 index feffb8d..be60399 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -844,13 +844,31 @@ variants:
  only default
  image_format = raw
  
 -
  variants:
  - @smallpages:
  - hugepages:
  pre_command = /usr/bin/python scripts/hugepage.py /mnt/kvm_hugepage
  extra_params +=  -mem-path /mnt/kvm_hugepage
  
 +variants:
 +- @no_pci_assignable:
 +pci_assignable = no
 +- pf_assignable:
 +pci_assignable = pf
 +device_names = eth1
 +- vf_assignable:
 +pci_assignable = vf
 +# Driver (kernel module) that supports SR-IOV hardware.
 +# As of today (30-11-2009), we have 2 drivers for this type of 
 hardware:
 +# Intel® 82576 Gigabit Ethernet Controller - igb
 +# Neterion® X3100™ - vxge
 +driver = igb
 +# Driver option to specify the number of virtual functions
 +# (on vxge the option is , for example, is max_config_dev)
 +# the default below is for the igb driver
 +driver_option = max_vfs
 +# Number of devices that are going to be requested.
 +devices_requested = 7

I think we'd better specify not only the value of the driver option 'max_vfs'
but also devices_requested. Reasons:

1) The value of the driver option 'max_vfs' is different from devices_requested.
   Typically, if we assign 7 to max_vfs, it will virtualize 14 (7*2) VFs.
2) Also, we can later write a case that boots 14 VMs and assigns one Virtual
   Function to each of them. For that we need to modprobe max_vfs=7 and set
   devices_requested=1. It could serve as a boundary test; a bug already exists
   there.

What do you think?
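
A minimal sketch of the relationship described in 1) and 2); the 2x factor
comes from the dual-port 82576 example above and is specific to that card,
not a general rule:

    max_vfs = 7            # value passed via 'modprobe igb max_vfs=7'
    vfs_per_unit = 2       # dual-port 82576: each max_vfs unit yields 2 VFs
    devices_requested = 1  # how many VFs this particular test will assign

    available_vfs = max_vfs * vfs_per_unit     # 14 in the example above
    assert devices_requested <= available_vfs, \
        "requested more VFs than the driver was asked to create"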

  
  variants:
  - @basic:
 diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
 index bf25900..fc04745 100644
 --- a/client/tests/kvm/kvm_utils.py
 +++ b/client/tests/kvm/kvm_utils.py
 @@ -874,3 +874,281 @@ def unmap_url_cache(cachedir, url, expected_hash, 
 method=md5):
  file_path = utils.unmap_url(cachedir, src, cachedir)
  
  return file_path
 +
 +
 +def get_full_pci_id(pci_id):
 +"""
 +Get full PCI ID of pci_id.
 +
 +@param pci_id: PCI ID of a device.
 +"""
 +cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
 +status, full_id = commands.getstatusoutput(cmd)
 +if status != 0:
 +return None

About implementation of KVM server-side migration in autotest

2009-11-30 Thread Yolkfull Chow
 = kvm_test_utils.wait_for_login(vm,
   timeout=mig_timeout)
except:
raise error.TestFail("Could not log into migrated "
"guest")

except Exception, e:
logging.error("Test failed: %s", e)
logging.debug("Postprocessing on error...")
kvm_preprocessing.postprocess_on_error(self, params, env)
kvm_utils.dump_env(env, env_filename)
raise

finally:
# Postprocess
kvm_preprocessing.postprocess(self, params, env)
logging.debug("Contents of environment: %s", str(env))
kvm_utils.dump_env(env, env_filename)
  

   But there will be a problem: where can we edit the config file
   to generate different dicts for the migration test? Hard code the parameters
   in control.srv?

   It will be easier to resolve the problem of making up kvm_tests.cfg for both
   client machines.

2) just pass 'start_vm_for_migration = yes' to the dest host

3) the source host will wait until the dest host tells it that it's ready, using
   socket communication (is there any existing implementation/function for this
   in the autotest framework?)

4) then swap the roles of 'source machine' and 'dest machine' to implement
   ping-pong migration.

Initial version of control.srv for server-migration:

-
AUTHOR = "Yolkfull Chow yz...@redhat.com"
TIME = "SHORT"
NAME = "Migration across Multi-machine"
TEST_CATEGORY = "Functional"
TEST_CLASS = 'Virtualization'
TEST_TYPE = "Server"
DOC = """
Migrate KVM guest between two hosts.

Arguments to run_test:

@dict - a dictionary containing all parameters that migration needs.
"""

import sys, os, commands 
from autotest_lib.server import utils

KVM_DIR = os.path.join('/root/devel/upstream/server-mig', 'client/tests/kvm')
sys.path.insert(0, KVM_DIR)
import kvm_config

rootdir = '/tmp/kvm_autotest_root'

def run(pair):
print "KVM migration running on srchost [%s] and desthost [%s]\n" % (
pair[0], pair[1])

source = hosts.create_host(pair[0])
dest = hosts.create_host(pair[1])

source_at = autotest.Autotest(source)
source_at.install(source)
dest_at = autotest.Autotest(dest)
dest_at.install(dest)

# --
# Get test set (dictionary list) from the configuration file
# --
filename = os.path.join(KVM_DIR, "kvm_tests.cfg")
cfg = kvm_config.config(filename)

# Make only dictionaries that migration needs
cfg.parse_string("only migrate")

filename = os.path.join(KVM_DIR, "kvm_address_pools.cfg")
if os.path.exists(filename):
cfg.parse_file(filename)
hostname = os.uname()[1].split(".")[0]
if cfg.filter("^" + hostname):
cfg.parse_string("only ^%s" % hostname)
else:
cfg.parse_string("only ^default_host")
list = cfg.get_list()


# Control file template for client machine
control_string = "job.run_test('kvm_migration', params=%s)"

for vm_dict in list:

vm_dict['srchost'] = source.ip
vm_dict['dsthost'] = dest.ip
vm_dict['display'] = 'vnc'
vm_dict['rootdir'] = rootdir

source_dict = vm_dict.copy()
dest_dict = vm_dict.copy()

source_dict['role'] = "source"

dest_dict['role'] = "dest"
dest_dict['start_vm_for_migration'] = "yes"

# Report the parameters we've received
print "Test parameters:"
keys = vm_dict.keys()
keys.sort()
for key in keys:
print "    " + str(key) + " = " + str(vm_dict[key])

source_control_file = ''.join([control_string % source_dict])
dest_control_file = ''.join([control_string % dest_dict])

dest_command = subcommand(dest_at.run,
[dest_control_file, dest.hostname])
source_command = subcommand(source_at.run,
[source_control_file, source.hostname])

parallel([dest_command, source_command])

# grab the pairs (and failures)
(pairs, failures) = utils.form_ntuples_from_machines(machines, 2)

# log the failures
for failure in failures:
job.record("FAIL", failure[0], "kvm", failure[1])

# now run through each pair and run
job.parallel_simple(run, pairs, log=False)



Re: [PATCH 1/2] Adds a test to verify resources inside a VM

2009-11-29 Thread Yolkfull Chow
On Sun, Nov 29, 2009 at 02:22:40PM +0530, sudhir kumar wrote:
 On Sun, Nov 29, 2009 at 12:50 PM, Yolkfull Chow yz...@redhat.com wrote:
  On Wed, Nov 25, 2009 at 11:35:02AM +0530, sudhir kumar wrote:
  This patch adds a test for verifying whether the number of cpus and amount
  of memory as seen inside a guest is same as allocated to it on the qemu
  command line.
 
  Hello Sudhir,
 
  Please see embedded comments as below:
 
 
  Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com
 
  Index: kvm/tests/verify_resources.py
  ===
  --- /dev/null
  +++ kvm/tests/verify_resources.py
  @@ -0,0 +1,74 @@
  +import logging, time
  +from autotest_lib.client.common_lib import error
  +import kvm_subprocess, kvm_test_utils, kvm_utils
  +
  +
  +Test to verify if the guest has the equal amount of resources
  +as allocated through the qemu command line
  +
  +...@copyright: 2009 IBM Corporation
  +...@author: Sudhir Kumar sku...@linux.vnet.ibm.com
  +
  +
  +
  +def run_verify_resources(test, params, env):
  +    
  +    KVM test for verifying VM resources(#vcpu, memory):
  +    1) Get resources from the VM parameters
  +    2) Log into the guest
  +    3) Get actual resources, compare and report the pass/failure
  +
  +   �...@param test: kvm test object
  +   �...@param params: Dictionary with the test parameters
  +   �...@param env: Dictionary with test environment.
  +    
  +    vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
  +
  +    # Get info about vcpu and memory from dictionary
  +    exp_vcpus = int(params.get(smp))
  +    exp_mem_mb = long(params.get(mem))
  +    real_vcpus = 0
  +    real_mem_kb = 0
  +    real_mem_mb = 0
  +    # Some memory is used by bios and all, so lower the expected
  value say by 5%
  +    exp_mem_mb = long(exp_mem_mb * 0.95)
  +    logging.info(The guest should have vcpus: %s % exp_vcpus)
  +    logging.info(The guest should have min mem: %s MB % exp_mem_mb)
  +
  +    session = kvm_test_utils.wait_for_login(vm)
  +
  +    # Get info about vcpu and memory from within guest
  +    if params.get(guest_os_type) == Linux:
  +        output = session.get_command_output(cat /proc/cpuinfo|grep 
  processor)
 
  We'd better here not hard code the command that getting CPU count. As KVM 
  supports not
  only Linux  Windows, but also others say Unix/BSD.
  A recommended method could be define it in config file for different 
  platforms:
 I agree. The only concern that made me doing it inside test is the
 increasing size and complexity of the config file. I am fine with
 passing the command from the config file but still the code paths have
 to be different for each type of OS ie windows linux etc.
 
 
  - @Linux:
     verify_resources:
         count_cpu_cmd = grep processor /proc/cpuinfo
 
  - @Windows:
     verify_resources:
         count_cpu_cmd = systeminfo (here I would not suggest we use 
  'systeminfo'
                                     for catching M$ guest's memory size)
 
  +        for line in output.split('\n'):
  +            if 'processor' in line:
  +                real_vcpus = real_vcpus + 1
  +
  +        output = session.get_command_output(cat /proc/meminfo)
 
  For catching memory size of Linux guests, I prefer command 'dmidecode' 
  which can
  catch memory size exactly in MB.
 I think we can use both here. To my knowledge dmidecode will test the
 BIOS code of kvm and hence we can include both the methods?
 
  +        for line in output.split('\n'):
  +            if 'MemTotal' in line:
  +                real_mem_kb = long(line.split()[1])
  +        real_mem_mb = real_mem_kb / 1024
  +
  +    elif params.get(guest_os_type) == Windows:
  +        # Windows takes long time to display output for systeminfo
  +        output = session.get_command_output(systeminfo, timeout =
  150, internal_timeout = 50)
  +        for line in output.split('\n'):
  +            if 'Processor' in line:
  +                real_vcpus = int(line.split()[1])
  +
  +        for line in output.split('\n'):
  +            if 'Total Physical Memory' in line:
  +               real_mem_mb = long(.join(%s % k for k in
  line.split()[3].split(',')))
 
  So many slice and split operations can easy results in problems.
  To catch memory of Windows guests, I recommend we use 'wmic memphysical' 
  which
  can dump memory size in KB exactly.
 Is the command available for all windows OSes? If yes we can
 definitely use the command.

Yes, it's available on all Windows OSes, although with one limitation: it 
can
only be executed within a TELNET session or the Windows command prompt. But it's 
fixed now. :)

Cheers,

 
 
  Meanwhile, we also need to verify guest's NICs' count and their(its) model,
  hard disk(s)'s count  model etc. Therefore I think we need a case to verify
  them together.
 Yeah, I just gave a first try for such a test. We need to test all the
 emulated hardware.
 
  I had wrote such test couples of days before

[PATCH] KVM test: Add a subtest params_verify

2009-11-29 Thread Yolkfull Chow
This patch will test the following parameters of a VM:
1) count of CPUs, hard disks and NICs
2) memory size
3) model of hard disks and NICs
4) NICs' MAC addresses
5) UUID and serial number (if the command is defined in the config file)

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/bin/harness_standalone.py|2 +-
 client/tests/kvm/kvm_tests.cfg.sample   |   11 +++
 client/tests/kvm/kvm_vm.py  |   39 +++
 client/tests/kvm/tests/params_verify.py |  110 +++
 4 files changed, 161 insertions(+), 1 deletions(-)
 create mode 100644 client/tests/kvm/tests/params_verify.py

diff --git a/client/bin/harness_standalone.py b/client/bin/harness_standalone.py
index 4ec7cd2..c70c09b 100644
--- a/client/bin/harness_standalone.py
+++ b/client/bin/harness_standalone.py
@@ -36,7 +36,7 @@ class harness_standalone(harness.harness):
 if os.path.exists('/etc/event.d'):
 # NB: assuming current runlevel is default
 initdefault = utils.system_output('/sbin/runlevel').split()[1]
-else if os.path.exists('/etc/inittab'):
+elif os.path.exists('/etc/inittab'):
 initdefault = utils.system_output('grep :initdefault: 
/etc/inittab')
 initdefault = initdefault.split(':')[1]
 else:
diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index feffb8d..94763c5 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -243,6 +243,10 @@ variants:
 kill_vm = yes
 kill_vm_gracefully = no
 
+- params_verify:
+type = params_verify
+catch_uuid_cmd = dmidecode | awk -F: '/UUID/ {print $2}'
+
 
 # NICs
 variants:
@@ -269,6 +273,8 @@ variants:
 shell_port = 22
 file_transfer_client = scp
 file_transfer_port = 22
+mem_chk_cmd = dmidecode -t 17 | awk -F: '/Size/ {print $2}'
+cpu_chk_cmd = grep -c processor /proc/cpuinfo
 
 variants:
 - Fedora:
@@ -542,6 +548,9 @@ variants:
 # This ISO will be used for all tests except install:
 cdrom = windows/winutils.iso
 
+cpu_chk_cmd = echo %NUMBER_OF_PROCESSORS%
+mem_chk_cmd = wmic memphysical
+
 migrate:
 migration_test_command = ver  vol
 migration_bg_command = start ping -t localhost
@@ -583,6 +592,8 @@ variants:
 reference_cmd = wmic diskdrive list brief
 find_pci_cmd = wmic diskdrive list brief
 pci_test_cmd = echo select disk 1  dt  echo online  dt  
echo detail disk  dt  echo exit  dt  diskpart /s dt
+params_verify:
+catch_uuid_cmd = 
 
 variants:
 - Win2000:
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 100b567..cc314d4 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -821,3 +821,42 @@ class VM:
 return self.uuid
 else:
 return self.params.get(uuid, None)
+
+
+def get_cpu_count(self):
+"""
+Get the cpu count of the VM.
+"""
+try:
+session = self.remote_login()
+if session:
+cmd = self.params.get("cpu_chk_cmd")
+s, count = session.get_command_status_output(cmd)
+if s == 0:
+return int(count)
+return None
+finally:
+session.close()
+
+
+def get_memory_size(self):
+"""
+Get memory size of the VM.
+"""
+try:
+session = self.remote_login()
+if session:
+cmd = self.params.get("mem_chk_cmd")
+s, mem_str = session.get_command_status_output(cmd)
+if s != 0:
+return None
+mem = re.findall("([0-9][0-9][0-9]+)", mem_str)
+mem_size = 0
+for m in mem:
+mem_size += int(m)
+if not "MB" in mem_str:
+mem_size /= 1024
+return int(mem_size)
+return None
+finally:
+session.close()
diff --git a/client/tests/kvm/tests/params_verify.py 
b/client/tests/kvm/tests/params_verify.py
new file mode 100644
index 000..a30a91d
--- /dev/null
+++ b/client/tests/kvm/tests/params_verify.py
@@ -0,0 +1,110 @@
+import re, string, logging
+from autotest_lib.client.common_lib import error
+import kvm_test_utils, kvm_utils
+
+def run_params_verify(test, params, env):
+"""
+Verify all parameters in the KVM command line:
+1) Log into the guest
+2) Verify cpu count, memory size, NICs' model and count,
+   drives' format and count, drive_serial and UUID
+3) Verify all NIC cards' MAC addresses
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+"""
+vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
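
For illustration, the new VM helpers above could be used from a test roughly
like this sketch; the 5% memory allowance mirrors the discussion elsewhere in
this thread, and the function name is made up:

    from autotest_lib.client.common_lib import error

    def verify_cpu_and_mem(vm, params):
        exp_vcpus = int(params.get("smp"))
        # Allow ~5% of memory for the BIOS and other overhead.
        exp_mem_mb = long(params.get("mem")) * 0.95

        real_vcpus = vm.get_cpu_count()
        real_mem_mb = vm.get_memory_size()

        if real_vcpus != exp_vcpus:
            raise error.TestFail("CPU count: assigned %s, reported %s" %
                                 (exp_vcpus, real_vcpus))
        if real_mem_mb is None or real_mem_mb < exp_mem_mb:
            raise error.TestFail("Memory: assigned at least %s MB, "
                                 "reported %s MB" % (exp_mem_mb, real_mem_mb))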

Re: [PATCH 1/2] Adds a test to verify resources inside a VM

2009-11-28 Thread Yolkfull Chow
On Wed, Nov 25, 2009 at 11:35:02AM +0530, sudhir kumar wrote:
 This patch adds a test for verifying whether the number of cpus and amount
 of memory as seen inside a guest is same as allocated to it on the qemu
 command line.

Hello Sudhir, 

Please see embedded comments as below:

 
 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com
 
 Index: kvm/tests/verify_resources.py
 ===
 --- /dev/null
 +++ kvm/tests/verify_resources.py
 @@ -0,0 +1,74 @@
 +import logging, time
 +from autotest_lib.client.common_lib import error
 +import kvm_subprocess, kvm_test_utils, kvm_utils
 +
 +
 +Test to verify if the guest has the equal amount of resources
 +as allocated through the qemu command line
 +
 +...@copyright: 2009 IBM Corporation
 +...@author: Sudhir Kumar sku...@linux.vnet.ibm.com
 +
 +
 +
 +def run_verify_resources(test, params, env):
 +
 +KVM test for verifying VM resources(#vcpu, memory):
 +1) Get resources from the VM parameters
 +2) Log into the guest
 +3) Get actual resources, compare and report the pass/failure
 +
 +@param test: kvm test object
 +@param params: Dictionary with the test parameters
 +@param env: Dictionary with test environment.
 +
 +vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
 +
 +# Get info about vcpu and memory from dictionary
 +exp_vcpus = int(params.get(smp))
 +exp_mem_mb = long(params.get(mem))
 +real_vcpus = 0
 +real_mem_kb = 0
 +real_mem_mb = 0
 +# Some memory is used by bios and all, so lower the expected
 value say by 5%
 +exp_mem_mb = long(exp_mem_mb * 0.95)
 +logging.info(The guest should have vcpus: %s % exp_vcpus)
 +logging.info(The guest should have min mem: %s MB % exp_mem_mb)
 +
 +session = kvm_test_utils.wait_for_login(vm)
 +
 +# Get info about vcpu and memory from within guest
 +if params.get(guest_os_type) == Linux:
 +output = session.get_command_output(cat /proc/cpuinfo|grep 
 processor)

We'd better not hard code the command that gets the CPU count here. KVM 
supports not
only Linux & Windows, but also others, say Unix/BSD. 
A recommended method could be to define it in the config file for different platforms:

- @Linux:
verify_resources:
count_cpu_cmd = grep processor /proc/cpuinfo

- @Windows:
verify_resources:
count_cpu_cmd = systeminfo (here I would not suggest we use 'systeminfo'
for catching M$ guest's memory size)

 +for line in output.split('\n'):
 +if 'processor' in line:
 +real_vcpus = real_vcpus + 1
 +
 +output = session.get_command_output(cat /proc/meminfo)

For catching the memory size of Linux guests, I prefer the 'dmidecode' command,
which can report the memory size exactly, in MB.

 +for line in output.split('\n'):
 +if 'MemTotal' in line:
 +real_mem_kb = long(line.split()[1])
 +real_mem_mb = real_mem_kb / 1024
 +
 +elif params.get(guest_os_type) == Windows:
 +# Windows takes long time to display output for systeminfo
 +output = session.get_command_output(systeminfo, timeout =
 150, internal_timeout = 50)
 +for line in output.split('\n'):
 +if 'Processor' in line:
 +real_vcpus = int(line.split()[1])
 +
 +for line in output.split('\n'):
 +if 'Total Physical Memory' in line:
 +   real_mem_mb = long(.join(%s % k for k in
 line.split()[3].split(',')))

So many slice and split operations can easily result in problems.
To catch the memory size of Windows guests, I recommend we use 'wmic memphysical', which
can dump the memory size in KB exactly.


Meanwhile, we also need to verify the guest's NIC count and model,
hard disk count & model, etc. Therefore I think we need a case that verifies
them together. 

I wrote such a test a couple of days ago and have run it several times.
Please comment on it when I post it here later. Thanks,

 +
 +else:
 +raise error.TestFail(Till date this test is supported only
 for Linux and Windows)
 +
 +logging.info("The guest has cpus: %s" % real_vcpus)
 +logging.info("The guest has mem: %s MB" % real_mem_mb)
 +if exp_vcpus != real_vcpus or real_mem_mb < exp_mem_mb:
 +raise error.TestFail("Actual resources(cpu ='%s' memory ='%s' MB) "
 +  "differ from Allocated resources(cpu = '%s' memory ='%s' MB)"
 + % (real_vcpus, real_mem_mb, exp_vcpus, exp_mem_mb))
 +
 +session.close()
 
 
 
 
 Sending the patch as an attachment too. Please review and provide your 
 comments.
 -- 
 Sudhir Kumar

 This patch adds a test for verifying whether the number of cpus and amount 
 of memory as seen inside a guest is same as allocated to it on the qemu
 command line.
 
 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com
 
 Index: kvm/tests/verify_resources.py
 

[PATCH] KVM test: Fix two typos in config file

2009-11-18 Thread Yolkfull Chow

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_tests.cfg.sample |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index ac9ef66..7f37994 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -840,7 +840,7 @@ variants:
 only up
 only WinXP.32
 no install setup
-no kvm_hugepages
+no hugepages
 only unattended_install setup boot shutdown
 only rtl8139
 - @fc11_kickstart:
@@ -850,7 +850,7 @@ variants:
 only up
 only Fedora.11.64
 no install setup
-no kvm_hugepages
+no hugepages
 only unattended_install boot shutdown
 only rtl8139
 
-- 
1.6.5.2



Re: [Autotest] [KVM-AUTOTEST PATCH 5/7] KVM test: minor pci_hotplug fixes

2009-11-05 Thread Yolkfull Chow
On Thu, Nov 05, 2009 at 12:01:10PM +0200, Michael Goldish wrote:
 - Put the PCI device removal code in a finally clause.

Hi Michael,

I have a small concern with the removal procedure. If pci_add fails, the
output will not contain the right information, including the PCI ID. The
slice operation during removal could therefore trigger a traceback, and the
removal would fail in the end. That is why I had not placed it in a finally
clause originally.

Looking at it from the opposite side, though, if we don't place it in a
finally clause and pci_add succeeds but the verification fails, the device
will never be removed.

We may need to balance both if possible. Do you have any idea about this?
I can agree on applying the method proposed by this patch first.
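
One possible compromise is sketched below: keep the removal in a finally
clause, but only attempt it when pci_add gave us an address to remove. The
parse_pci_addr helper and the exact pci_del syntax are assumptions here:

    from autotest_lib.client.common_lib import error

    def hotplug_and_cleanup(vm, pci_add_cmd, parse_pci_addr):
        pci_addr = None
        try:
            s, add_output = vm.send_monitor_cmd(pci_add_cmd)
            if s != 0:
                raise error.TestFail("pci_add command failed: %s" % add_output)
            # parse_pci_addr() stands in for whatever extracts the slot from
            # the monitor output; it should return None if nothing was added.
            pci_addr = parse_pci_addr(add_output)
            if pci_addr is None:
                raise error.TestFail("No PCI address found in: %s" % add_output)
            # ... verification of the hotplugged device goes here ...
        finally:
            if pci_addr is not None:
                # Exact removal command syntax is an assumption.
                vm.send_monitor_cmd("pci_del pci_addr=%s" % pci_addr)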

 - Use kvm_vm.get_image_filename() instead of os.path.join().
 It's a bit cleaner because if we ever change the names of image parameters
 we'll only have to change the code in one place.
 Also, the way os.path.join() was used lead to image filenames being prefixed
 with 'images/' twice, e.g. 'images/images/foo.qcow2'.
 - Make some failure messages clearer.
 - Remove 'only Fedora Ubuntu Windows' from the fmt_raw variant.
 'only' works for things that have already been defined, but the guests are
 defined later.
 - Remove unused 'modprobe_acpiphp' parameter.
 - Change 'online disk' to 'online' in pci_test_cmd for Windows ('online disk'
 doesn't seem to work).
 - Remove the unneeded telnet/ssh/guest_port parameters from the Windows
 block_hotplug parameters.
 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm_tests.cfg.sample |7 +--
  client/tests/kvm/tests/pci_hotplug.py |  112 
 +
  2 files changed, 58 insertions(+), 61 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
 b/client/tests/kvm/kvm_tests.cfg.sample
 index f271a09..326ae20 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -181,7 +181,6 @@ variants:
  - nic_hotplug:  install setup unattended_install
  type = pci_hotplug
  pci_type = nic
 -modprobe_acpiphp = no
  reference_cmd = lspci
  find_pci_cmd = 'lspci | tail -n1'
  pci_test_cmd = 'nslookup www.redhat.com'
 @@ -223,7 +222,6 @@ variants:
  image_format_stg = qcow2
  - fmt_raw:
  image_format_stg = raw
 -only Fedora Ubuntu Windows
  
  - system_reset: install setup unattended_install
  type = boot
 @@ -538,13 +536,10 @@ variants:
  nic_virtio:
  match_string = VirtIO Ethernet
  block_hotplug:
 -use_telnet = yes
 -ssh_port = 23
 -guest_port_ssh = 23
  wait_secs_for_hook_up = 10
  reference_cmd = wmic diskdrive list brief
  find_pci_cmd = wmic diskdrive list brief
 -pci_test_cmd = echo select disk 1  dt  echo online disk  dt 
  echo detail disk  dt  echo exit  dt  diskpart /s dt
 +pci_test_cmd = echo select disk 1  dt  echo online  dt  
 echo detail disk  dt  echo exit  dt  diskpart /s dt
  
  variants:
  - Win2000:
 diff --git a/client/tests/kvm/tests/pci_hotplug.py 
 b/client/tests/kvm/tests/pci_hotplug.py
 index 3ad9ea2..876d8b8 100644
 --- a/client/tests/kvm/tests/pci_hotplug.py
 +++ b/client/tests/kvm/tests/pci_hotplug.py
 @@ -1,6 +1,6 @@
  import logging, os
  from autotest_lib.client.common_lib import error
 -import kvm_subprocess, kvm_test_utils, kvm_utils
 +import kvm_subprocess, kvm_test_utils, kvm_utils, kvm_vm
  
  
  def run_pci_hotplug(test, params, env):
 @@ -21,8 +21,8 @@ def run_pci_hotplug(test, params, env):
  session = kvm_test_utils.wait_for_login(vm)
  
  # Modprobe the module if specified in config file
 -if params.get(modprobe_module):
 -module = params.get(modprobe_module)
 +module = params.get(modprobe_module)
 +if module:
  if session.get_command_status(modprobe %s % module):
  raise error.TestError(Modprobe module '%s' failed % module)
  
 @@ -38,61 +38,63 @@ def run_pci_hotplug(test, params, env):
  if test_type == nic:
  pci_add_cmd = pci_add pci_addr=auto nic model=%s % tested_model
  elif test_type == block:
 -image_name = params.get("image_name_stg")
 -image_filename = "%s.%s" % (image_name, 
 params.get("image_format_stg"))
 -image_dir = os.path.join(test.bindir, "images")
 -storage_name = os.path.join(image_dir, image_filename)
 +image_params = kvm_utils.get_sub_dict(params, "stg")
 +image_filename = kvm_vm.get_image_filename(image_params, test.bindir)
  pci_add_cmd = ("pci_add pci_addr=auto storage file=%s,if=%s" %
 -(storage_name, tested_model))
 +(image_filename, tested_model))
  
  # Implement pci_add
  s, add_output 

Re: [Autotest] [KVM-AUTOTEST PATCH 7/7] KVM test: remove monitor socket file when destroying a VM

2009-11-05 Thread Yolkfull Chow
On Thu, Nov 05, 2009 at 12:01:12PM +0200, Michael Goldish wrote:
 This should slow the rate of accumulation of monitor files in /tmp.

Hi Michael,

I recommend we use TCP as the monitor device of the VM. Two reasons:
 1) we don't need to add extra code to remove monitor files
 2) it's necessary for users who want to implement client-side
migration.

What do you think? ;-)
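
A minimal sketch of what the TCP alternative could look like when the qemu
command line is built (kvm_vm.py style); the 'monitor_port' parameter name is
made up for illustration:

    def monitor_option(params, monitor_file_name):
        monitor_port = params.get("monitor_port")
        if monitor_port:
            # No file on disk, so nothing to clean up when the VM is
            # destroyed, and the monitor is reachable over the network.
            return " -monitor tcp:localhost:%s,server,nowait" % monitor_port
        # Fall back to today's unix socket behaviour.
        return " -monitor unix:%s,server,nowait" % monitor_file_name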

 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm_vm.py |4 
  1 files changed, 4 insertions(+), 0 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
 index 5781dbc..62a10b9 100755
 --- a/client/tests/kvm/kvm_vm.py
 +++ b/client/tests/kvm/kvm_vm.py
 @@ -578,6 +578,10 @@ class VM:
  finally:
  if self.process:
  self.process.close()
 +try:
 +os.unlink(self.monitor_file_name)
 +except OSError:
 +pass
  
  
  def is_alive(self):
 -- 
 1.5.4.1
 


Re: [Autotest] [PATCH] [RFC] KVM test: Major control file cleanup

2009-11-03 Thread Yolkfull Chow
On Wed, Oct 28, 2009 at 02:04:59PM -0400, Michael Goldish wrote:
 
 - Lucas Meneghel Rodrigues l...@redhat.com wrote:
 
  On Wed, Oct 28, 2009 at 1:43 PM, Michael Goldish mgold...@redhat.com
  wrote:
   Sounds great, except it won't allow you to debug your configuration
   using kvm_config.py.  So the question now is what's more important
  --
   the ability to debug or ease of use when running from the server.
  
  Here we have 2 use cases:
  
  1) Users of the web interface, that (hopefully) have canned test sets
  that work reliably. Ability to debug stuff is less important on this
  scenario.
  2) People developing tests, and in this case ability to debug config
  is very important
  
  I see the following options:
  
  1) Document as part of the test development guide that, in order to
  be able to debug stuff, that all the test sets are to be written to
  the config file and then, can be parsed using kvm_config.
  2) If we write all dictionaries generated by that particular
  configuration on files inside the job results directory, we still
  have
  debug ability for all use cases (I am starting to like this idea very
  much, as I type).
  
  So I'd then implement option 2) and refactor the control file with
  the
  test sets defined inside strings in the control file, then you can
  see
  how it looks? How about that?
 
 Sounds fine.
 - Where exactly will the test list appear?
 - We should also allow printing of verbose debug output (parsing variants
 block, 9000 dicts in current context...) by passing something to the
 constructor of the config object.
 - We should make it clear to the user that he/she must rename the control
 file (to control.lucas for example) or else it may be overwritten on the
 next git-fetch or -pull.
 
 I'm still not sure it's a great idea to make config debugging harder, so
 if anyone other than Lucas who uses the KVM test is reading this, please
 let us know if you ever use kvm_config.py and if you think the ability to
 print the list of test dicts is important.

Hi Michael,

I have often used kvm_config.py to print the lists of selected test dicts,
and I think it's necessary to keep this feature. IMHO, option 2) that Lucas
proposed is a good idea. What do you think? Hope I haven't missed
something. :)



[PATCH] Separate smp from extra_params and add into default VM params

2009-11-02 Thread Yolkfull Chow
We may need to keep smp as a standalone parameter of the VM. Reasons I can propose:
 1) memory is a standalone parameter, so smp should be too
 2) the smp parameter is needed in some test cases, say VM params_verify


Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_tests.cfg.sample |3 ++-
 client/tests/kvm/kvm_vm.py|4 
 2 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 573206c..c16b615 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -18,6 +18,7 @@ kill_unresponsive_vms = yes
 # Some default VM params
 qemu_binary = qemu
 qemu_img_binary = qemu-img
+smp = 1
 mem = 512
 image_size = 10G
 shell_port = 22
@@ -751,7 +752,7 @@ variants:
 - @up:
 no autotest.npb
 - smp2:
-extra_params +=  -smp 2
+smp = 2
 used_cpus = 2
 stress_boot: used_cpus = 10
 timedrift.with_load: used_cpus = 100
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index ee6796b..0b7a81e 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -261,6 +261,10 @@ class VM:
 if mem:
 qemu_cmd += " -m %s" % mem
 
+smp = params.get("smp")
+if smp:
+qemu_cmd += " -smp %s" % smp
+
 iso = params.get("cdrom")
 if iso:
 iso = kvm_utils.get_path(root_dir, iso)
-- 
1.6.5.1



Re: GDB Debugging

2009-10-24 Thread Yolkfull Chow
On Fri, Oct 23, 2009 at 09:19:40AM -0700, Saksena, Abhishek wrote:
 
 Hi Guys,
 
 Any help will be appreciated on following issue. I have been struggling on 
 this for quite some time...
 
 
 -Abhishek
 
 
 
 -Original Message-
 From: Saksena, Abhishek 
 Sent: Tuesday, October 20, 2009 11:49 AM
 To: 'Jan Kiszka'
 Cc: kvm@vger.kernel.org
 Subject: GDB + KVM Debug
 
 I have now tried using both
 
 
 Set arch i8086 and 
 Set arch i386:x86-64:intel 

Try 'set architecture i386:x86-64'.
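
i.e., assuming qemu was started with -s -S, the full sequence would look
something like:

    (gdb) set architecture i386:x86-64
    (gdb) target remote localhost:1234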

 
 But still see the same issue. Do I need to apply any patch?
 
 
 Abhishek
 
 -Original Message-
 From: Jan Kiszka [mailto:jan.kis...@siemens.com] 
 Sent: Thursday, September 17, 2009 1:36 AM
 To: Saksena, Abhishek
 Cc: kvm@vger.kernel.org
 Subject: Re: GDB + KVM Debug
 
 Saksena, Abhishek wrote:
  I am using KVM-88. However I still can't get gdb working. I started qemu 
  with the -s -S options and when I try to connect gdb to it I get the following 
  error:-
  
  (gdb) target remote lochost:1234
  lochost: unknown host
  lochost:1234: No such file or directory.
  (gdb) target remote locahost:1234
  locahost: unknown host
  locahost:1234: No such file or directory.
  (gdb) target remote localhost:1234
  Remote debugging using localhost:1234
  [New Thread 1]
  Remote 'g' packet reply is too long: 
  2306f0ff023002f07f03000
 0
  (gdb)
  
 
 Try 'set arch target-architecture' before connecting. This is required
 if you didn't load the corresponding target image into gdb.
 
 Jan
 
 -- 
 Siemens AG, Corporate Technology, CT SE 2
 Corporate Competence Center Embedded Linux


[PATCH] Add 'downscript=no' into kvm command line

2009-10-20 Thread Yolkfull Chow
If no downscript is assigned, add 'downscript=no' to avoid error:

/etc/qemu-ifdown: could not launch network script

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_vm.py |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index a8d96ca..0b8efbc 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -252,6 +252,8 @@ class VM:
 if script_path:
 script_path = kvm_utils.get_path(root_dir, script_path)
 qemu_cmd += ,downscript=%s % script_path
+else:
+qemu_cmd += ,downscript=no
 # Proceed to next NIC
 vlan += 1
 
-- 
1.6.2.5



Re: [PATCH] KVM test: Add PCI pass through test

2009-10-15 Thread Yolkfull Chow
On Wed, Oct 14, 2009 at 09:08:00AM -0300, Lucas Meneghel Rodrigues wrote:
 Add a new PCI pass through test. It supports both SR-IOV virtual
 functions and physical NIC card pass through.
 
 Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
 be shared amongst multiple virtual machines while retaining the
 performance benefit of assigning a PCI device to a virtual machine.
 A common example is where a single SR-IOV capable NIC - with perhaps
 only a single physical network port - might be shared with multiple
 virtual machines by assigning a virtual function to each VM.
 
 SR-IOV support is implemented in the kernel. The core implementation is
 contained in the PCI subsystem, but there must also be driver support
 for both the Physical Function (PF) and Virtual Function (VF) devices.
 With an SR-IOV capable device one can allocate VFs from a PF. The VFs
 surface as PCI devices which are backed on the physical PCI device by
 resources (queues, and register sets).
 
 Device support:
 
 In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
 SR-IOV capable device supported. The igb driver has PF support and the
 igbvf has VF support.
 
 In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
 the same vxge driver for the PF as well as the VFs.

Wow, a new NIC card that supports SR-IOV... 
At this rate, should we move the driver name and its parameters into the
config file, so that if a new NIC card using a different driver is supported
in the future, we can handle it without changing code?
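Something along these lines might work inside the test code (a rough sketch
only; 'sriov_driver' and 'sriov_driver_option' are made-up parameter names,
and 'params' is the usual test params dict):

    import commands

    # Hypothetical: read the SR-IOV driver and its VF-count option from params
    driver = params.get("sriov_driver", "igb")
    driver_option = params.get("sriov_driver_option", "max_vfs")
    max_vfs = int(params.get("max_vfs", 7))

    # Reload the driver with the requested number of virtual functions
    commands.getstatusoutput("modprobe -r %s" % driver)
    s, o = commands.getstatusoutput("modprobe %s %s=%d"
                                    % (driver, driver_option, max_vfs))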

 
 In order to configure the test:
 
   * For SR-IOV virtual functions passthrough, we could specify the
 module parameter 'max_vfs' in config file.
   * For physical NIC card pass through, we should specify the device
 name(s).
 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 ---
  client/tests/kvm/kvm_tests.cfg.sample |   11 ++-
  client/tests/kvm/kvm_utils.py |  278 
 +
  client/tests/kvm/kvm_vm.py|   72 +
  3 files changed, 360 insertions(+), 1 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
 b/client/tests/kvm/kvm_tests.cfg.sample
 index cc3228a..1dad188 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -786,13 +786,22 @@ variants:
  only default
  image_format = raw
  
 -
  variants:
  - @smallpages:
  - hugepages:
  pre_command = /usr/bin/python scripts/hugepage.py /mnt/kvm_hugepage
  extra_params +=  -mem-path /mnt/kvm_hugepage
  
 +variants:
 +- @no_passthrough:
 +pass_through = no
 +- nic_passthrough:
 +pass_through = pf
 +passthrough_devs = eth1
 +- vfs_passthrough:
 +pass_through = vf
 +max_vfs = 7
 +vfs_count = 7
  
  variants:
  - @basic:
 diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
 index 53b664a..0e3398c 100644
 --- a/client/tests/kvm/kvm_utils.py
 +++ b/client/tests/kvm/kvm_utils.py
 @@ -788,3 +788,281 @@ def md5sum_file(filename, size=None):
  size -= len(data)
  f.close()
  return o.hexdigest()
 +
 +
 +def get_full_id(pci_id):
 +
 +Get full PCI ID of pci_id.
 +
 +cmd = lspci -D | awk '/%s/ {print $1}' % pci_id
 +status, full_id = commands.getstatusoutput(cmd)
 +if status != 0:
 +return None
 +return full_id
 +
 +
 +def get_vendor_id(pci_id):
 +
 +Check out the device vendor ID according to PCI ID.
 +
 +cmd = lspci -n | awk '/%s/ {print $3}' % pci_id
 +return re.sub(:,  , commands.getoutput(cmd))
 +
 +
 +def release_dev(pci_id, pci_dict):
 +
 +Release a single PCI device.
 +
 +@param pci_id: PCI ID of a given PCI device
 +@param pci_dict: Dictionary with information about PCI devices
 +
 +base_dir = /sys/bus/pci
 +full_id = get_full_id(pci_id)
 +vendor_id = get_vendor_id(pci_id)
 +drv_path = os.path.join(base_dir, devices/%s/driver % full_id)
 +if 'pci-stub' in os.readlink(drv_path):
 +cmd = echo '%s'  %s/new_id % (vendor_id, drv_path)
 +if os.system(cmd):
 +return False
 +
 +stub_path = os.path.join(base_dir, drivers/pci-stub)
 +cmd = echo '%s'  %s/unbind % (full_id, stub_path)
 +if os.system(cmd):
 +return False
 +
 +prev_driver = pci_dict[pci_id]
 +cmd = echo '%s'  %s/bind % (full_id, prev_driver)
 +if os.system(cmd):
 +return False
 +return True
 +
 +
 +def release_pci_devs(pci_dict):
 +
 +Release all PCI devices assigned to host.
 +
 +@param pci_dict: Dictionary with information about PCI devices
 +
 +for pci_id in pci_dict:
 +if not release_dev(pci_id, pci_dict):
 +logging.error(Failed to release device [%s] to host % pci_id)
 +else:
 +logging.info(Release device [%s] successfully % pci_id)
 +
 +
 +class PassThrough(object

Re: [Autotest] [PATCH] Add pass through feature test (support SR-IOV)

2009-10-15 Thread Yolkfull Chow
On Wed, Oct 14, 2009 at 09:13:59AM -0300, Lucas Meneghel Rodrigues wrote:
 Yolkfull, I've studied about single root IO virtualization before
 reviewing your patch, the general approach here looks good. There were
 some stylistic points as far as code is concerned, so I have rebased
 your patch against the latest trunk, and added some explanation about
 the features being tested and referenced (extracted from a Fedora 12
 blueprint).
 
 Please let me know if you are OK with it, I guess I will review this
 patch a couple more times, as the code and the features being tested
 are fairly complex.
 
 Thanks!

Lucas, thank you very much for adding a detailed explanation and
improving this test. I have reviewed the new patch and some new
considerations came to mind. I have added them inline in this email, please
review. :)

 
 On Mon, Sep 14, 2009 at 11:20 PM, Yolkfull Chow yz...@redhat.com wrote:
  It supports both SR-IOV virtual functions' and physical NIC card pass 
  through.
   * For SR-IOV virtual functions passthrough, we could specify the module
     parameter 'max_vfs' in config file.
   * For physical NIC card pass through, we should specify the device name(s).
 
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_tests.cfg.sample |   12 ++
   client/tests/kvm/kvm_utils.py         |  248 
  -
   client/tests/kvm/kvm_vm.py            |   68 +-
   3 files changed, 326 insertions(+), 2 deletions(-)
 
  diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
  b/client/tests/kvm/kvm_tests.cfg.sample
  index a83ef9b..c6037da 100644
  --- a/client/tests/kvm/kvm_tests.cfg.sample
  +++ b/client/tests/kvm/kvm_tests.cfg.sample
  @@ -627,6 +627,18 @@ variants:
 
 
   variants:
  +    - @no_passthrough:
  +        pass_through = no
  +    - nic_passthrough:
  +        pass_through = pf
  +        passthrough_devs = eth1
  +    - vfs_passthrough:
  +        pass_through = vf
  +        max_vfs = 7
  +        vfs_count = 7
  +
  +
  +variants:
      - @basic:
          only Fedora Windows
      - @full:
  diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
  index dfca938..1fe3b31 100644
  --- a/client/tests/kvm/kvm_utils.py
  +++ b/client/tests/kvm/kvm_utils.py
  @@ -1,5 +1,5 @@
   import md5, thread, subprocess, time, string, random, socket, os, signal, 
  pty
  -import select, re, logging, commands
  +import select, re, logging, commands, cPickle
   from autotest_lib.client.bin import utils
   from autotest_lib.client.common_lib import error
   import kvm_subprocess
  @@ -795,3 +795,249 @@ def md5sum_file(filename, size=None):
          size -= len(data)
      f.close()
      return o.hexdigest()
  +
  +
  +def get_full_id(pci_id):
  +    
  +    Get full PCI ID of pci_id.
  +    
  +    cmd = lspci -D | awk '/%s/ {print $1}' % pci_id
  +    status, full_id = commands.getstatusoutput(cmd)
  +    if status != 0:
  +        return None
  +    return full_id
  +
  +
  +def get_vendor_id(pci_id):
  +    
  +    Check out the device vendor ID according to PCI ID.
  +    
  +    cmd = lspci -n | awk '/%s/ {print $3}' % pci_id
  +    return re.sub(:,  , commands.getoutput(cmd))
  +
  +
  +def release_pci_devs(dict):
  +    
  +    Release assigned PCI devices to host.
  +    
  +    def release_dev(pci_id):
  +        base_dir = /sys/bus/pci
  +        full_id = get_full_id(pci_id)
  +        vendor_id = get_vendor_id(pci_id)
  +        drv_path = os.path.join(base_dir, devices/%s/driver % full_id)
  +        if 'pci-stub' in os.readlink(drv_path):
  +            cmd = echo '%s'  %s/new_id % (vendor_id, drv_path)
  +            if os.system(cmd):
  +                return False
  +
  +            stub_path = os.path.join(base_dir, drivers/pci-stub)
  +            cmd = echo '%s'  %s/unbind % (full_id, stub_path)
  +            if os.system(cmd):
  +                return False
  +
  +            prev_driver = self.dev_prev_drivers[pci_id]
  +            cmd = echo '%s'  %s/bind % (full_id, prev_driver)
  +            if os.system(cmd):
  +                return False
  +        return True
  +
  +    for pci_id in dict.keys():
  +        if not release_dev(pci_id):
  +            logging.error(Failed to release device [%s] to host % pci_id)
  +        else:
  +            logging.info(Release device [%s] successfully % pci_id)
  +
  +
  +class PassThrough:
  +    
  +    Request passthroughable devices on host. It will check whether to 
  request
  +    PF(physical NIC cards) or VF(Virtual Functions).
  +    
  +    def __init__(self, type=nic_vf, max_vfs=None, names=None):
  +        
  +        Initialize parameter 'type' which could be:
  +        nic_vf: Virtual Functions
  +        nic_pf: Physical NIC card
  +        mixed:  Both includes VFs and PFs
  +
  +        If pass through Physical NIC cards, we need to specify which 
  devices
  +        to be assigned, e.g. 'eth1 eth2'.
  +
  +        If pass through Virtual Functions, we

Re: [Autotest] [PATCH] Add a kvm test guest_s4 which supports both Linux and Windows platform

2009-10-14 Thread Yolkfull Chow
On Wed, Oct 14, 2009 at 06:58:01AM -0300, Lucas Meneghel Rodrigues wrote:
 On Tue, Oct 13, 2009 at 11:54 PM, Yolkfull Chow yz...@redhat.com wrote:
  On Tue, Oct 13, 2009 at 05:29:40PM -0300, Lucas Meneghel Rodrigues wrote:
  Hi Yolkfull and Chen:
 
  Thanks for your test! I have some comments and doubts to clear, most
  of them are about content of the messages delivered for the user and
  some other details.
 
  On Sun, Sep 27, 2009 at 6:11 AM, Yolkfull Chow yz...@redhat.com wrote:
   For this case, Ken Cao wrote the linux part previously and I did 
   extensive
   modifications on Windows platform support.
  
   Signed-off-by: Ken Cao k...@redhat.com
   Signed-off-by: Yolkfull Chow yz...@redhat.com
   ---
    client/tests/kvm/kvm_tests.cfg.sample |   14 +++
    client/tests/kvm/tests/guest_s4.py    |   66 
   +
    2 files changed, 80 insertions(+), 0 deletions(-)
    create mode 100644 client/tests/kvm/tests/guest_s4.py
  
   diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
   b/client/tests/kvm/kvm_tests.cfg.sample
   index 285a38f..f9ecb61 100644
   --- a/client/tests/kvm/kvm_tests.cfg.sample
   +++ b/client/tests/kvm/kvm_tests.cfg.sample
   @@ -94,6 +94,14 @@ variants:
       - linux_s3:     install setup
           type = linux_s3
  
   +    - guest_s4:
   +        type = guest_s4
   +        check_s4_support_cmd = grep -q disk /sys/power/state
   +        test_s4_cmd = cd /tmp/;nohup tcpdump -q -t ip host localhost
   +        check_s4_cmd = pgrep tcpdump
   +        set_s4_cmd = echo disk  /sys/power/state
   +        kill_test_s4_cmd = pkill tcpdump
   +
       - timedrift:    install setup
           type = timedrift
           extra_params +=  -rtc-td-hack
   @@ -382,6 +390,12 @@ variants:
               # Alternative host load:
               #host_load_command = dd if=/dev/urandom of=/dev/null
               host_load_instances = 8
   +        guest_s4:
   +            check_s4_support_cmd = powercfg /hibernate on
   +            test_s4_cmd = start /B ping -n 3000 localhost
   +            check_s4_cmd = tasklist | find /I ping
   +            set_s4_cmd = rundll32.exe PowrProf.dll, SetSuspendState
   +            kill_test_s4_cmd = taskkill /IM ping.exe /F
  
           variants:
               - Win2000:
   diff --git a/client/tests/kvm/tests/guest_s4.py 
   b/client/tests/kvm/tests/guest_s4.py
   new file mode 100644
   index 000..5d8fbdf
   --- /dev/null
   +++ b/client/tests/kvm/tests/guest_s4.py
   @@ -0,0 +1,66 @@
   +import logging, time
   +from autotest_lib.client.common_lib import error
   +import kvm_test_utils, kvm_utils
   +
   +
   +def run_guest_s4(test, params, env):
   +    
   +    Suspend guest to disk,supports both Linux  Windows OSes.
   +
   +   �...@param test: kvm test object.
   +   �...@param params: Dictionary with test parameters.
   +   �...@param env: Dictionary with the test environment.
   +    
   +    vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
   +    session = kvm_test_utils.wait_for_login(vm)
   +
   +    logging.info(Checking whether VM supports S4)
   +    status = 
   session.get_command_status(params.get(check_s4_support_cmd))
   +    if status is None:
   +        logging.error(Failed to check if S4 exists)
   +    elif status != 0:
   +        raise error.TestFail(Guest does not support S4)
   +
   +    logging.info(Waiting for a while for X to start...)
 
  Yes, generally X starts a bit later than the SSH service, so I
  understand the time being here, however:
 
   * In fact we are waiting for all services of the guest to be up and
  functional, so depending on the level of load, I don't think 10s is
  gonna make it. So I suggest something = 30s
 
  Yeah,reasonable, we did ignore the circumstance with workload. But as
  you metioned,it can depend on different level of workload, therefore 30s
  may be not enough as well. Your idea that write a utility function
  waiting for some services up is good I think, thus it could be something
  like:
 
  def wait_services_up(services_list):
     ...
 
  and for this case:
 
  wait_services_up([Xorg]) for Linux and
  wait_services_up([explore.exe]) for Windows.
 
 Ok, sounds good to me!
 
   * It's also true that just wait for a given time and hope that it
  will be OK kinda sucks, so ideally we need to write utility functions
  to stablish as well as possible when all services of a host are fully
  booted up. Stated this way, it looks simple, but it's not.
 
  Autotest experience suggests that there's no real sane way to
  determine when a linux box is booted up, but we can take a
  semi-rational approach and verify if all services for the current run
  level have the status up or a similar approach. For windows, I was
  talking to Yaniv Kaul and it seems that processing the output of the
  'sc query' command might give what we want. Bottom line, I'd like to
  add a TODO item, and write a function to stablish (fairly confidently)
  that a windows

[PATCH] Little bug fix in pci_hotplug.py

2009-10-13 Thread Yolkfull Chow
If the command execution times out, the returned status can be None,
which the check below does not handle:

if s:
   ...

Thanks Jason Wang for pointing this out.
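For clarity, a minimal illustration (not part of the patch) of why the old
check misses the timeout case:

    # s is the command status: 0 on success, non-zero on failure, None on timeout
    for s in (0, 1, None):
        old_check = bool(s)     # 'if s:' fires only for 1 and misses None (timeout)
        new_check = (s != 0)    # 'if s != 0:' fires for both 1 and None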

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/tests/pci_hotplug.py |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/pci_hotplug.py 
b/client/tests/kvm/tests/pci_hotplug.py
index 01d9447..3ad9ea2 100644
--- a/client/tests/kvm/tests/pci_hotplug.py
+++ b/client/tests/kvm/tests/pci_hotplug.py
@@ -83,7 +83,7 @@ def run_pci_hotplug(test, params, env):
 
     # Test the newly added device
     s, o = session.get_command_status_output(params.get("pci_test_cmd"))
-    if s:
+    if s != 0:
         raise error.TestFail("Check for %s device failed after PCI hotplug. "
                              "Output: %s" % (test_type, o))
 
-- 
1.6.2.5



Re: [Autotest] [PATCH] Add a kvm test guest_s4 which supports both Linux and Windows platform

2009-10-13 Thread Yolkfull Chow
On Tue, Oct 13, 2009 at 05:29:40PM -0300, Lucas Meneghel Rodrigues wrote:
 Hi Yolkfull and Chen:
 
 Thanks for your test! I have some comments and doubts to clear, most
 of them are about content of the messages delivered for the user and
 some other details.
 
 On Sun, Sep 27, 2009 at 6:11 AM, Yolkfull Chow yz...@redhat.com wrote:
  For this case, Ken Cao wrote the linux part previously and I did extensive
  modifications on Windows platform support.
 
  Signed-off-by: Ken Cao k...@redhat.com
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_tests.cfg.sample |   14 +++
   client/tests/kvm/tests/guest_s4.py    |   66 
  +
   2 files changed, 80 insertions(+), 0 deletions(-)
   create mode 100644 client/tests/kvm/tests/guest_s4.py
 
  diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
  b/client/tests/kvm/kvm_tests.cfg.sample
  index 285a38f..f9ecb61 100644
  --- a/client/tests/kvm/kvm_tests.cfg.sample
  +++ b/client/tests/kvm/kvm_tests.cfg.sample
  @@ -94,6 +94,14 @@ variants:
      - linux_s3:     install setup
          type = linux_s3
 
  +    - guest_s4:
  +        type = guest_s4
  +        check_s4_support_cmd = grep -q disk /sys/power/state
  +        test_s4_cmd = cd /tmp/;nohup tcpdump -q -t ip host localhost
  +        check_s4_cmd = pgrep tcpdump
  +        set_s4_cmd = echo disk  /sys/power/state
  +        kill_test_s4_cmd = pkill tcpdump
  +
      - timedrift:    install setup
          type = timedrift
          extra_params +=  -rtc-td-hack
  @@ -382,6 +390,12 @@ variants:
              # Alternative host load:
              #host_load_command = dd if=/dev/urandom of=/dev/null
              host_load_instances = 8
  +        guest_s4:
  +            check_s4_support_cmd = powercfg /hibernate on
  +            test_s4_cmd = start /B ping -n 3000 localhost
  +            check_s4_cmd = tasklist | find /I ping
  +            set_s4_cmd = rundll32.exe PowrProf.dll, SetSuspendState
  +            kill_test_s4_cmd = taskkill /IM ping.exe /F
 
          variants:
              - Win2000:
  diff --git a/client/tests/kvm/tests/guest_s4.py 
  b/client/tests/kvm/tests/guest_s4.py
  new file mode 100644
  index 000..5d8fbdf
  --- /dev/null
  +++ b/client/tests/kvm/tests/guest_s4.py
  @@ -0,0 +1,66 @@
  +import logging, time
  +from autotest_lib.client.common_lib import error
  +import kvm_test_utils, kvm_utils
  +
  +
  +def run_guest_s4(test, params, env):
  +    
  +    Suspend guest to disk,supports both Linux  Windows OSes.
  +
  +   �...@param test: kvm test object.
  +   �...@param params: Dictionary with test parameters.
  +   �...@param env: Dictionary with the test environment.
  +    
  +    vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
  +    session = kvm_test_utils.wait_for_login(vm)
  +
  +    logging.info(Checking whether VM supports S4)
  +    status = session.get_command_status(params.get(check_s4_support_cmd))
  +    if status is None:
  +        logging.error(Failed to check if S4 exists)
  +    elif status != 0:
  +        raise error.TestFail(Guest does not support S4)
  +
  +    logging.info(Waiting for a while for X to start...)
 
 Yes, generally X starts a bit later than the SSH service, so I
 understand the time being here, however:
 
  * In fact we are waiting for all services of the guest to be up and
 functional, so depending on the level of load, I don't think 10s is
 gonna make it. So I suggest something = 30s

Yeah, reasonable, we did ignore the circumstance of workload. But as
you mentioned, it depends on the level of workload, therefore 30s
may not be enough either. Your idea of writing a utility function that
waits for some services to come up is good I think, so it could be something
like:

def wait_services_up(services_list):
...

and for this case:

wait_services_up(["Xorg"]) for Linux and
wait_services_up(["explorer.exe"]) for Windows.
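A rough sketch of what such a helper could look like (assuming the existing
kvm_utils.wait_for() and a session object that provides get_command_status();
the pgrep/tasklist checks are only examples):

    import kvm_utils

    def wait_services_up(session, services, timeout=120, os_type="linux"):
        # Wait until every process/service in 'services' is seen running
        def _all_running():
            for srv in services:
                if os_type == "linux":
                    cmd = "pgrep %s" % srv
                else:
                    cmd = 'tasklist | find /I "%s"' % srv
                if session.get_command_status(cmd) != 0:
                    return False
            return True

        return kvm_utils.wait_for(_all_running, timeout, 0, 5)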

  * It's also true that just wait for a given time and hope that it
 will be OK kinda sucks, so ideally we need to write utility functions
 to stablish as well as possible when all services of a host are fully
 booted up. Stated this way, it looks simple, but it's not.
 
 Autotest experience suggests that there's no real sane way to
 determine when a linux box is booted up, but we can take a
 semi-rational approach and verify if all services for the current run
 level have the status up or a similar approach. For windows, I was
 talking to Yaniv Kaul and it seems that processing the output of the
 'sc query' command might give what we want. Bottom line, I'd like to
 add a TODO item, and write a function to stablish (fairly confidently)
 that a windows/linux guest is booted up.
 
  +    time.sleep(10)
  +
  +    # Start up a program(tcpdump for linux OS  ping for M$ OS), as a flag.
  +    # If the program died after suspend, then fails this testcase.
  +    test_s4_cmd = params.get(test_s4_cmd)
  +    session.sendline(test_s4_cmd)
  +
  +    # Get

Re: [Autotest] [PATCH] Fix a bug in function create in kvm_vm

2009-10-10 Thread Yolkfull Chow
On Mon, Oct 05, 2009 at 04:03:22PM -0300, Lucas Meneghel Rodrigues wrote:
 Hi Yolkfull! I've checked your patch, but it turns out that the comma
 is valid syntax for the logging module. By any chance you actually had
 an error with it?

Hi Lucas,
I just checked; yes, it's valid syntax for this module. Before this I hit
a traceback while running autotest and it pointed to somewhere around this
line, so I suspected it by mistake. Sorry for the confusion. ;-)

But I found that for the variables passed to logging.debug(), sometimes a
comma is used for formatting and sometimes '%', which hurts code readability.

Anyway, thanks for checking.

 
 On Mon, Sep 28, 2009 at 4:45 AM, Yolkfull Chow yz...@redhat.com wrote:
 
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_vm.py |    2 +-
   1 files changed, 1 insertions(+), 1 deletions(-)
 
  diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
  index 55220f9..8ae 100755
  --- a/client/tests/kvm/kvm_vm.py
  +++ b/client/tests/kvm/kvm_vm.py
  @@ -406,7 +406,7 @@ class VM:
                                self.process.get_output()))
                  return False
 
  -            logging.debug(VM appears to be alive with PID %d,
  +            logging.debug(VM appears to be alive with PID %d %
                            self.process.get_pid())
              return True
 
  --
  1.6.2.5
 
 
 
 
 
 -- 
 Lucas


Re: [Autotest] [PATCH] Fix a bug in function create in kvm_vm

2009-10-10 Thread Yolkfull Chow
On Sat, Oct 10, 2009 at 04:24:45PM +0800, Yolkfull Chow wrote:
 On Mon, Oct 05, 2009 at 04:03:22PM -0300, Lucas Meneghel Rodrigues wrote:
  Hi Yolkfull! I've checked your patch, but it turns out that the comma
  is valid syntax for the logging module. By any chance you actually had
  an error with it?
 
 Hi Lucas,
 I just checked, yes it's valid syntax for this module. Before this I met
 a traceback during running autotest and it indicated this line
 around,thus I doubt about this by mistake. Sorry for confusing. ;-)
 
 But I found for the variables in logging.debug(),sometimes it use comma
 to format while sometimes '%' which will drop code readability.

Another reason is that if someone who is still using kvm_log wants to
backport code from this tree, he would not only need to replace all
'logging' with 'kvm_log' but also change this comma syntax. ;-)
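For reference, both forms are accepted by the logging module; the only
difference is when the string gets formatted (a minimal illustration):

    import logging

    pid = 1234
    # Formatting deferred to logging (done only if the record is actually emitted)
    logging.debug("VM appears to be alive with PID %d", pid)
    # Formatting done eagerly with the % operator before the call
    logging.debug("VM appears to be alive with PID %d" % pid)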

 
 Anyway, thanks for checking.
 
  
  On Mon, Sep 28, 2009 at 4:45 AM, Yolkfull Chow yz...@redhat.com wrote:
  
   Signed-off-by: Yolkfull Chow yz...@redhat.com
   ---
    client/tests/kvm/kvm_vm.py |    2 +-
    1 files changed, 1 insertions(+), 1 deletions(-)
  
   diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
   index 55220f9..8ae 100755
   --- a/client/tests/kvm/kvm_vm.py
   +++ b/client/tests/kvm/kvm_vm.py
   @@ -406,7 +406,7 @@ class VM:
                                 self.process.get_output()))
                   return False
  
   -            logging.debug(VM appears to be alive with PID %d,
   +            logging.debug(VM appears to be alive with PID %d %
                             self.process.get_pid())
               return True
  
   --
   1.6.2.5
  
  
  
  
  
  -- 
  Lucas


[PATCH] Add two parameters for wait_for_login

2009-09-30 Thread Yolkfull Chow
Sometimes we need to log into a guest using a different start time and step time.
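For example (an illustrative caller only, not part of this patch), a test that
knows its guest boots slowly could now do:

    import kvm_test_utils

    def run_example(test, params, env):
        # Hypothetical test function, for illustration only.
        # Wait 10s before the first login attempt and retry every 5s
        # (values are examples).
        vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
        session = kvm_test_utils.wait_for_login(vm, nic_index=0, timeout=360,
                                                start=10, step=5)
        session.close()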

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_test_utils.py |6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/client/tests/kvm/kvm_test_utils.py 
b/client/tests/kvm/kvm_test_utils.py
index 601b350..0983003 100644
--- a/client/tests/kvm/kvm_test_utils.py
+++ b/client/tests/kvm/kvm_test_utils.py
@@ -43,7 +43,7 @@ def get_living_vm(env, vm_name):
 return vm
 
 
-def wait_for_login(vm, nic_index=0, timeout=240):
+def wait_for_login(vm, nic_index=0, timeout=240, start=0, step=2):
     """
     Try logging into a VM repeatedly.  Stop on success or when timeout expires.
 
@@ -54,8 +54,8 @@ def wait_for_login(vm, nic_index=0, timeout=240):
     """
     logging.info("Waiting for guest '%s' to be up..." % vm.name)
     session = kvm_utils.wait_for(lambda: vm.remote_login(nic_index=nic_index),
-                                 timeout, 0, 2)
+                                 timeout, start, step)
     if not session:
         raise error.TestFail("Could not log into guest '%s'" % vm.name)
-    logging.info("Logged in")
+    logging.info("Logged in '%s'" % vm.name)
     return session
-- 
1.6.2.5



[PATCH] Fix a bug in function create in kvm_vm

2009-09-28 Thread Yolkfull Chow

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_vm.py |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 55220f9..8ae 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -406,7 +406,7 @@ class VM:
                                           self.process.get_output()))
                 return False
 
-            logging.debug("VM appears to be alive with PID %d",
+            logging.debug("VM appears to be alive with PID %d" %
                           self.process.get_pid())
             return True
 
-- 
1.6.2.5



[PATCH] Add a kvm test guest_s4 which supports both Linux and Windows platform

2009-09-27 Thread Yolkfull Chow
For this case, Ken Cao wrote the Linux part previously and I made extensive
modifications to support the Windows platform.

Signed-off-by: Ken Cao k...@redhat.com
Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_tests.cfg.sample |   14 +++
 client/tests/kvm/tests/guest_s4.py|   66 +
 2 files changed, 80 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/guest_s4.py

diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 285a38f..f9ecb61 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -94,6 +94,14 @@ variants:
 - linux_s3: install setup
 type = linux_s3
 
+- guest_s4:
+type = guest_s4
+check_s4_support_cmd = grep -q disk /sys/power/state
+test_s4_cmd = cd /tmp/;nohup tcpdump -q -t ip host localhost
+check_s4_cmd = pgrep tcpdump
+set_s4_cmd = echo disk > /sys/power/state
+kill_test_s4_cmd = pkill tcpdump
+
 - timedrift:install setup
 type = timedrift
 extra_params +=  -rtc-td-hack
@@ -382,6 +390,12 @@ variants:
 # Alternative host load:
 #host_load_command = dd if=/dev/urandom of=/dev/null
 host_load_instances = 8
+guest_s4:
+check_s4_support_cmd = powercfg /hibernate on
+test_s4_cmd = start /B ping -n 3000 localhost
+check_s4_cmd = tasklist | find /I "ping"
+set_s4_cmd = rundll32.exe PowrProf.dll, SetSuspendState
+kill_test_s4_cmd = taskkill /IM ping.exe /F
 
 variants:
 - Win2000:
diff --git a/client/tests/kvm/tests/guest_s4.py 
b/client/tests/kvm/tests/guest_s4.py
new file mode 100644
index 000..5d8fbdf
--- /dev/null
+++ b/client/tests/kvm/tests/guest_s4.py
@@ -0,0 +1,66 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_test_utils, kvm_utils
+
+
+def run_guest_s4(test, params, env):
+    """
+    Suspend guest to disk, supports both Linux & Windows OSes.
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+    session = kvm_test_utils.wait_for_login(vm)
+
+    logging.info("Checking whether VM supports S4")
+    status = session.get_command_status(params.get("check_s4_support_cmd"))
+    if status is None:
+        logging.error("Failed to check if S4 exists")
+    elif status != 0:
+        raise error.TestFail("Guest does not support S4")
+
+    logging.info("Waiting for a while for X to start...")
+    time.sleep(10)
+
+    # Start up a program (tcpdump for Linux OS & ping for M$ OS), as a flag.
+    # If the program died after suspend, then fail this testcase.
+    test_s4_cmd = params.get("test_s4_cmd")
+    session.sendline(test_s4_cmd)
+
+    # Get the second session to start S4
+    session2 = kvm_test_utils.wait_for_login(vm)
+
+    check_s4_cmd = params.get("check_s4_cmd")
+    if session2.get_command_status(check_s4_cmd):
+        raise error.TestError("Failed to launch %s background" % test_s4_cmd)
+    logging.info("Launched command background in guest: %s" % test_s4_cmd)
+
+    # Implement S4
+    logging.info("Start suspend to disk now...")
+    session2.sendline(params.get("set_s4_cmd"))
+
+    if not kvm_utils.wait_for(vm.is_dead, 360, 30, 2):
+        raise error.TestFail("VM refuse to go down, suspend failed")
+    logging.info("VM suspended successfully.")
+
+    logging.info("VM suspended to disk. sleep 10 seconds to have a break...")
+    time.sleep(10)
+
+    # Start vm, and check whether the program is still running
+    logging.info("Restart VM now...")
+
+    if not vm.create():
+        raise error.TestError("Failed to start the vm again.")
+    if not vm.is_alive():
+        raise error.TestError("VM seems to be dead; test requires a live VM.")
+
+    # Check whether the test command is still alive
+    if session2.get_command_status(check_s4_cmd):
+        raise error.TestFail("%s died, indicating that S4 failed" %
+                             test_s4_cmd)
+
+    logging.info("VM resumed after S4")
+    session2.sendline(params.get("kill_test_s4_cmd"))
+    session.close()
+    session2.close()
-- 
1.6.2.5



[PATCH] Change log message of VM login

2009-09-27 Thread Yolkfull Chow
We may use the function 'wait_for_login' several times in a test case;
only for the first login should the message be "Waiting for guest to be up".

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_test_utils.py |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm_test_utils.py 
b/client/tests/kvm/kvm_test_utils.py
index 601b350..aa3f2ee 100644
--- a/client/tests/kvm/kvm_test_utils.py
+++ b/client/tests/kvm/kvm_test_utils.py
@@ -52,7 +52,7 @@ def wait_for_login(vm, nic_index=0, timeout=240):
     @param timeout: Time to wait before giving up.
     @return: A shell session object.
     """
-    logging.info("Waiting for guest '%s' to be up..." % vm.name)
+    logging.info("Try to login to guest '%s'..." % vm.name)
     session = kvm_utils.wait_for(lambda: vm.remote_login(nic_index=nic_index),
                                  timeout, 0, 2)
     if not session:
-- 
1.6.2.5



Re: [Autotest] [KVM-AUTOTEST PATCH 2/4] KVM test: rss.cpp: send characters to the console window rather than directly to STDIN

2009-09-22 Thread Yolkfull Chow
On Mon, Sep 21, 2009 at 08:30:26AM -0400, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  On Sun, Sep 20, 2009 at 06:16:28PM +0300, Michael Goldish wrote:
   Some Windows programs behave badly when their STDIN is redirected to
  a pipe
   (most notably wmic).  Therefore, keep STDIN unredirected, and send
  input to the
   console window as a series of WM_CHAR messages.
  
  Hi Michael, I just tried this patch. After re-compiling and
  installing RSS, seems never a command could be executed successfully
  or
  returned with results. I tested this on Win2008-32. Any clue for 
  fixing up it?
 
 Did you also apply the other patch --
 allow setting shell line separator string in the config file?

I did forget to apply the second patch of that set. It now works and 'wmic'
works fine as well. Thank you very much for working this out. :-)

Cheers,

 
 By default, when using nc, autotest ends commands with \n.
 The modified rss.exe seems to require lines to end with \r\n (it's a
 common line separator in Windows).
 The patch I mentioned adds a shell_linesep parameter that controls
 the line separator and changes the line separator on Windows to \r\n.
 
 If you can't apply the patch (conflicts or whatever), go to kvm_utils.py,
 find the netcat function, and change this line:
  return remote_login(command, password, prompt, "\n", timeout)
  to this:
  return remote_login(command, password, prompt, "\r\n", timeout)
 
 If it still doesn't work then we have a real problem and I'll have to
 start debugging stuff.
 
 (Note: if you want to manually test rss.exe, use telnet instead of nc
 (e.g. telnet localhost 5000) because telnet ends lines with \r\n.)
 
   
   Signed-off-by: Michael Goldish mgold...@redhat.com
   ---
client/tests/kvm/deps/rss.cpp |   54
  +---
1 files changed, 23 insertions(+), 31 deletions(-)
   
   diff --git a/client/tests/kvm/deps/rss.cpp
  b/client/tests/kvm/deps/rss.cpp
   index 73a849a..66d9a5b 100644
   --- a/client/tests/kvm/deps/rss.cpp
   +++ b/client/tests/kvm/deps/rss.cpp
   @@ -22,9 +22,9 @@ struct client_info {
SOCKET socket;
sockaddr_in addr;
int pid;
   +HWND hwnd;
HANDLE hJob;
HANDLE hChildOutputRead;
   -HANDLE hChildInputWrite;
HANDLE hThreadChildToSocket;
};

   @@ -161,15 +161,10 @@ DWORD WINAPI SocketToChild(LPVOID
  client_info_ptr)
sprintf(message, Client (%s) entered text: \%s\\r\n,
client_info_str, formatted_buffer);
AppendMessage(message);
   -// Write the data to the child's STDIN
   -WriteFile(ci.hChildInputWrite, buffer, bytes_received,
   -  bytes_written, NULL);
   -// Make sure all the data was written
   -if (bytes_written != bytes_received) {
   -sprintf(message,
   -SocketToChild: bytes received (%d) != bytes
  written (%d),
   -bytes_received, bytes_written);
   -ExitOnError(message, 1);
   +// Send the data as a series of WM_CHAR messages to the
  console window
   +for (int i=0; ibytes_received; i++) {
   +SendMessage(ci.hwnd, WM_CHAR, (WPARAM)buffer[i], 0);
   +SendMessage(ci.hwnd, WM_SETFOCUS, 0, 0);
}
}

   @@ -194,7 +189,6 @@ DWORD WINAPI SocketToChild(LPVOID
  client_info_ptr)
CloseHandle(ci.hJob);
CloseHandle(ci.hThreadChildToSocket);
CloseHandle(ci.hChildOutputRead);
   -CloseHandle(ci.hChildInputWrite);

AppendMessage(SocketToChild thread exited\r\n);

   @@ -203,18 +197,25 @@ DWORD WINAPI SocketToChild(LPVOID
  client_info_ptr)

void PrepAndLaunchRedirectedChild(client_info *ci,
  HANDLE hChildStdOut,
   -  HANDLE hChildStdIn,
  HANDLE hChildStdErr)
{
PROCESS_INFORMATION pi;
STARTUPINFO si;

   +// Allocate a new console for the child
   +HWND hwnd = GetForegroundWindow();
   +FreeConsole();
   +AllocConsole();
   +ShowWindow(GetConsoleWindow(), SW_HIDE);
   +if (hwnd)
   +SetForegroundWindow(hwnd);
   +
// Set up the start up info struct.
ZeroMemory(si, sizeof(STARTUPINFO));
si.cb = sizeof(STARTUPINFO);
si.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
si.hStdOutput = hChildStdOut;
   -si.hStdInput  = hChildStdIn;
   +si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
si.hStdError  = hChildStdErr;
// Use this if you want to hide the child:
si.wShowWindow = SW_HIDE;
   @@ -223,7 +224,7 @@ void PrepAndLaunchRedirectedChild(client_info
  *ci,

// Launch the process that you want to redirect.
if (!CreateProcess(NULL, cmd.exe, NULL, NULL, TRUE,
   -   CREATE_NEW_CONSOLE, NULL, C:\\, si,
  pi

Re: [Autotest] [KVM-AUTOTEST PATCH 2/4] KVM test: rss.cpp: send characters to the console window rather than directly to STDIN

2009-09-21 Thread Yolkfull Chow
On Sun, Sep 20, 2009 at 06:16:28PM +0300, Michael Goldish wrote:
 Some Windows programs behave badly when their STDIN is redirected to a pipe
 (most notably wmic).  Therefore, keep STDIN unredirected, and send input to 
 the
 console window as a series of WM_CHAR messages.

Hi Michael, I just tried this patch. After re-compiling and
installing RSS, it seems no command can be executed successfully or
return results. I tested this on Win2008-32. Any clue for
fixing it up?

 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/deps/rss.cpp |   54 +---
  1 files changed, 23 insertions(+), 31 deletions(-)
 
 diff --git a/client/tests/kvm/deps/rss.cpp b/client/tests/kvm/deps/rss.cpp
 index 73a849a..66d9a5b 100644
 --- a/client/tests/kvm/deps/rss.cpp
 +++ b/client/tests/kvm/deps/rss.cpp
 @@ -22,9 +22,9 @@ struct client_info {
  SOCKET socket;
  sockaddr_in addr;
  int pid;
 +HWND hwnd;
  HANDLE hJob;
  HANDLE hChildOutputRead;
 -HANDLE hChildInputWrite;
  HANDLE hThreadChildToSocket;
  };
  
 @@ -161,15 +161,10 @@ DWORD WINAPI SocketToChild(LPVOID client_info_ptr)
  sprintf(message, Client (%s) entered text: \%s\\r\n,
  client_info_str, formatted_buffer);
  AppendMessage(message);
 -// Write the data to the child's STDIN
 -WriteFile(ci.hChildInputWrite, buffer, bytes_received,
 -  bytes_written, NULL);
 -// Make sure all the data was written
 -if (bytes_written != bytes_received) {
 -sprintf(message,
 -SocketToChild: bytes received (%d) != bytes written 
 (%d),
 -bytes_received, bytes_written);
 -ExitOnError(message, 1);
 +        // Send the data as a series of WM_CHAR messages to the console window
 +        for (int i=0; i < bytes_received; i++) {
 +            SendMessage(ci.hwnd, WM_CHAR, (WPARAM)buffer[i], 0);
 +            SendMessage(ci.hwnd, WM_SETFOCUS, 0, 0);
  }
  }
  
 @@ -194,7 +189,6 @@ DWORD WINAPI SocketToChild(LPVOID client_info_ptr)
  CloseHandle(ci.hJob);
  CloseHandle(ci.hThreadChildToSocket);
  CloseHandle(ci.hChildOutputRead);
 -CloseHandle(ci.hChildInputWrite);
  
  AppendMessage(SocketToChild thread exited\r\n);
  
 @@ -203,18 +197,25 @@ DWORD WINAPI SocketToChild(LPVOID client_info_ptr)
  
  void PrepAndLaunchRedirectedChild(client_info *ci,
HANDLE hChildStdOut,
 -  HANDLE hChildStdIn,
HANDLE hChildStdErr)
  {
  PROCESS_INFORMATION pi;
  STARTUPINFO si;
  
 +// Allocate a new console for the child
 +HWND hwnd = GetForegroundWindow();
 +FreeConsole();
 +AllocConsole();
 +ShowWindow(GetConsoleWindow(), SW_HIDE);
 +if (hwnd)
 +SetForegroundWindow(hwnd);
 +
  // Set up the start up info struct.
  ZeroMemory(si, sizeof(STARTUPINFO));
  si.cb = sizeof(STARTUPINFO);
  si.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
  si.hStdOutput = hChildStdOut;
 -si.hStdInput  = hChildStdIn;
 +si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
  si.hStdError  = hChildStdErr;
  // Use this if you want to hide the child:
  si.wShowWindow = SW_HIDE;
 @@ -223,7 +224,7 @@ void PrepAndLaunchRedirectedChild(client_info *ci,
  
  // Launch the process that you want to redirect.
  if (!CreateProcess(NULL, cmd.exe, NULL, NULL, TRUE,
 -   CREATE_NEW_CONSOLE, NULL, C:\\, si, pi))
 +   0, NULL, C:\\, si, pi))
  ExitOnError(CreateProcess failed);
  
  // Close any unnecessary handles.
 @@ -235,12 +236,16 @@ void PrepAndLaunchRedirectedChild(client_info *ci,
  // Assign the process to a newly created JobObject
  ci-hJob = CreateJobObject(NULL, NULL);
  AssignProcessToJobObject(ci-hJob, pi.hProcess);
 +// Keep the console window's handle
 +ci-hwnd = GetConsoleWindow();
 +
 +// Detach from the child's console
 +FreeConsole();
  }
  
  void SpawnSession(client_info *ci)
  {
  HANDLE hOutputReadTmp, hOutputRead, hOutputWrite;
 -HANDLE hInputWriteTmp, hInputRead, hInputWrite;
  HANDLE hErrorWrite;
  SECURITY_ATTRIBUTES sa;
  
 @@ -261,10 +266,6 @@ void SpawnSession(client_info *ci)
   TRUE, DUPLICATE_SAME_ACCESS))
  ExitOnError(DuplicateHandle failed);
  
 -// Create the child input pipe.
 -if (!CreatePipe(hInputRead, hInputWriteTmp, sa, 0))
 -ExitOnError(CreatePipe failed);
 -
  // Create new output read handle and the input write handles. Set
  // the Properties to FALSE. Otherwise, the child inherits the
  // properties and, as a result, non-closeable handles to the pipes
 @@ -276,29 +277,20 @@ void SpawnSession(client_info *ci)
   DUPLICATE_SAME_ACCESS))
  

Re: [Autotest] [KVM-AUTOTEST PATCH 0/7] KVM test: support for the new remote shell server for Windows

2009-09-18 Thread Yolkfull Chow
On Thu, Sep 17, 2009 at 11:40:46AM -0400, Michael Goldish wrote:
 
 - Michael Goldish mgold...@redhat.com wrote:
 
  - Yolkfull Chow yz...@redhat.com wrote:
  
   On Tue, Aug 18, 2009 at 06:30:14PM -0400, Michael Goldish wrote:

- Lucas Meneghel Rodrigues l...@redhat.com wrote:

 On Tue, Aug 18, 2009 at 7:15 AM, Michael
   Goldishmgold...@redhat.com
 wrote:
 
  - Lucas Meneghel Rodrigues l...@redhat.com wrote:
 
  Ok, very good, similarly to the previous patchset, I rebased
   one
 of
  the patches and applied the set, I am making tests with an
  rss
 binary
  generated by the cross compiler.
 
  I am testing with Winxp 32 bit, so far so good and rss.exe
   works
 as
  expected. I guess I will test more with other hosts, but I
  am
   not
 too
  far from applying this patchset as well.
 
  Will keep you posted!
   
   
   Hi Michael, so far rss works wonderful on remote login a VM and
   execute some simple commands. But can we expect extend the facility
   to
   enable it support some telnet commands, like 'wmic' ? We did need
   such
   commands which is used for collecting guest hardware information.
  
  I wasn't aware of wmic, but now that I tried running it, it appears
  to
  be one of those programs, like netsh and ftp, that behave badly when
  run outside an actual console window.
  In the case of netsh and ftp there are easy workarounds but wmic
  seems
  to hang no matter what I do.
  AFAIK, it doesn't work with SSH servers either, including openSSH
  under cygwin.
  I wonder how it works under telnet but it'll be difficult to find out
  because I don't have the MS telnet source code.
  
  I'll start looking for a solution/workaround.
 
 I think I've found a solution and I'm testing it now.  If everything works
 as expected I'll send a patch soon.

Wow, that's wonderful. Looking forward to it. :-)

 
   
 
  Note that this patchset should also allow you to install
   rss.exe
  automatically using step files, so I hope that in your tests
   you're
 not
  installing it manually.  I'm not expecting you to test
   everything
 (it
  takes quite a while), but if you're testing anyway, better
  let
   the
 step
  files do some work too.
 
  (I know we'll start using unattended installation scripts
  soon
   but
 it
  doesn't hurt to have functional step files too.)
 
  Also note that using a certain qemu/KVM version I couldn't
  get
   Vista
 to
  work with user mode.  This isn't an rss.exe problem.  In TAP
   mode
 it
  works just fine.
 
  In any case, thanks for reviewing and testing the patchsets.
 
 Ok Michael, turns out the win2000 failure was a silly mistake.
   So,
 after checking the code and going trough light testing, I
  applied
 this
 patchset
 
 http://autotest.kernel.org/changeset/3553
 http://autotest.kernel.org/changeset/3554
 http://autotest.kernel.org/changeset/3555
 http://autotest.kernel.org/changeset/3556

Maybe I'm misinterpreting this, but it looks like you squashed
  two
patches together.  The commit titled step file tests: do not
  fail
when receiving an invalid screendump (changeset 3556) also makes
changes to kvm_tests.cfg.sample (it's not supposed to), and the
   patch
that's supposed to do it seems to be missing.
This is probably not important -- I just thought I should bring
  it
   to
your attention.

 Sudhir, perhaps you can try the upstream tree starting with
  r3556.
   It
 has all the changes you have asked for earlier (rss and stuff).
 
 In order to get things all set, I suggest:
 
 1) Create a directory with the following contents:
 
 [...@freedom rss]$ ls -l
 total 52
 -rwxrwxr-x. 1 lmr lmr 42038 2009-08-17 18:55 rss.exe
 -rw-rw-r--. 1 lmr lmr   517 2009-08-17 18:57 rss.reg
 -rw-rw-r--. 1 lmr lmr   972 2009-08-17 18:57 setuprss.bat
 
 Those can be found under client/tests/kvm/deps directory on the
   most
 current autotest tree.
 
 2) Create an iso from it:
 
 genisoimage -o rss.iso -max-iso9660-filenames
  -relaxed-filenames
   -D
 --input-charset iso8859-1 rss
 
 3) Put rss.iso under your windows iso directory.
 
 4) Profit :)
 
 If you want to compile the latest rss, you could try numerous
   things,
 here are some possible routes:
 
 1) Compile it under a windows host with mingw installed
 2) Compile it under Fedora 12 with the cross compile
  environment
 installed, you need to install at least these through yum:
 
 mingw32-w32api
 mingw32-gcc-c++
 mingw32-gcc
 
 And then do a:
 
 i686-pc-mingw32-g++ rss.cpp -lws2_32 -mwindows -o rss.exe
 
 I hope that was helpful. After this patch-applying spree, I
  gotta
   a
 *lot* of documentation to write

Re: [Autotest] [PATCH 12/19] KVM test: Add new module kvm_test_utils.py

2009-09-14 Thread Yolkfull Chow
On Mon, Sep 14, 2009 at 10:58:01AM +0300, Uri Lublin wrote:
 On 09/14/2009 08:26 AM, Yolkfull Chow wrote:
 On Wed, Sep 09, 2009 at 09:12:05PM +0300, Michael Goldish wrote:
 This module is meant to reduce code size by performing common test 
 procedures.
 Generally, code here should look like test code.

 +def wait_for_login(vm, nic_index=0, timeout=240):
 +
 +Try logging into a VM repeatedly.  Stop on success or when timeout 
 expires.
 +
 +@param vm: VM object.
 +@param nic_index: Index of NIC to access in the VM.
 +@param timeout: Time to wait before giving up.
 +@return: A shell session object.
 +
 +logging.info(Waiting for guest to be up...)
 +session = kvm_utils.wait_for(lambda: 
 vm.remote_login(nic_index=nic_index),
 + timeout, 0, 2)
 +if not session:
 +raise error.TestFail(Could not log into guest)

 Hi Michael, I think we should also add a parameter 'vm_name' for
 wait_for_login(). On the assumption that we boot more than one VMs, it's
 hard to know which guest failed to login according to message above.
 What do you think? :-)


 The VM object (vm parameter) knows its own name.
 It is a good idea to add that name to log/error messages, since we do 
 want to run different tests (VMs) in parallel (although the logs should 
 be also saved in different directories/files).


Yes, I overlooked that we could use 'vm.name'
instead of adding a parameter to wait_for_login(). Those
log/error messages could then become something like this:

if not session:
    raise error.TestFail("Could not log into guest '%s'" % vm.name)

Thanks for pointing that out. :-)

 Regards,
 Uri.



Re: [Autotest] [KVM-AUTOTEST PATCH 0/7] KVM test: support for the new remote shell server for Windows

2009-09-14 Thread Yolkfull Chow
On Tue, Aug 18, 2009 at 06:30:14PM -0400, Michael Goldish wrote:
 
 - Lucas Meneghel Rodrigues l...@redhat.com wrote:
 
  On Tue, Aug 18, 2009 at 7:15 AM, Michael Goldishmgold...@redhat.com
  wrote:
  
   - Lucas Meneghel Rodrigues l...@redhat.com wrote:
  
   Ok, very good, similarly to the previous patchset, I rebased one
  of
   the patches and applied the set, I am making tests with an rss
  binary
   generated by the cross compiler.
  
   I am testing with Winxp 32 bit, so far so good and rss.exe works
  as
   expected. I guess I will test more with other hosts, but I am not
  too
   far from applying this patchset as well.
  
   Will keep you posted!


Hi Michael, so far rss works wonderfully for remote login to a VM and
executing some simple commands. But can we expect to extend the facility to
support some telnet commands, like 'wmic'? We need such
commands for collecting guest hardware information.


  
   Note that this patchset should also allow you to install rss.exe
   automatically using step files, so I hope that in your tests you're
  not
   installing it manually.  I'm not expecting you to test everything
  (it
   takes quite a while), but if you're testing anyway, better let the
  step
   files do some work too.
  
   (I know we'll start using unattended installation scripts soon but
  it
   doesn't hurt to have functional step files too.)
  
   Also note that using a certain qemu/KVM version I couldn't get Vista
  to
   work with user mode.  This isn't an rss.exe problem.  In TAP mode
  it
   works just fine.
  
   In any case, thanks for reviewing and testing the patchsets.
  
  Ok Michael, turns out the win2000 failure was a silly mistake. So,
  after checking the code and going trough light testing, I applied
  this
  patchset
  
  http://autotest.kernel.org/changeset/3553
  http://autotest.kernel.org/changeset/3554
  http://autotest.kernel.org/changeset/3555
  http://autotest.kernel.org/changeset/3556
 
 Maybe I'm misinterpreting this, but it looks like you squashed two
 patches together.  The commit titled step file tests: do not fail
 when receiving an invalid screendump (changeset 3556) also makes
 changes to kvm_tests.cfg.sample (it's not supposed to), and the patch
 that's supposed to do it seems to be missing.
 This is probably not important -- I just thought I should bring it to
 your attention.
 
  Sudhir, perhaps you can try the upstream tree starting with r3556. It
  has all the changes you have asked for earlier (rss and stuff).
  
  In order to get things all set, I suggest:
  
  1) Create a directory with the following contents:
  
  [...@freedom rss]$ ls -l
  total 52
  -rwxrwxr-x. 1 lmr lmr 42038 2009-08-17 18:55 rss.exe
  -rw-rw-r--. 1 lmr lmr   517 2009-08-17 18:57 rss.reg
  -rw-rw-r--. 1 lmr lmr   972 2009-08-17 18:57 setuprss.bat
  
  Those can be found under client/tests/kvm/deps directory on the most
  current autotest tree.
  
  2) Create an iso from it:
  
  genisoimage -o rss.iso -max-iso9660-filenames -relaxed-filenames -D
  --input-charset iso8859-1 rss
  
  3) Put rss.iso under your windows iso directory.
  
  4) Profit :)
  
  If you want to compile the latest rss, you could try numerous things,
  here are some possible routes:
  
  1) Compile it under a windows host with mingw installed
  2) Compile it under Fedora 12 with the cross compile environment
  installed, you need to install at least these through yum:
  
  mingw32-w32api
  mingw32-gcc-c++
  mingw32-gcc
  
  And then do a:
  
  i686-pc-mingw32-g++ rss.cpp -lws2_32 -mwindows -o rss.exe
  
  I hope that was helpful. After this patch-applying spree, I gotta a
  *lot* of documentation to write to our wiki :)
  
  Cheers,
  
  Lucas


[PATCH] Add pass through feature test (support SR-IOV)

2009-09-14 Thread Yolkfull Chow
It supports pass through of both SR-IOV virtual functions and physical NIC cards.
  * For SR-IOV virtual function passthrough, we can specify the module 
    parameter 'max_vfs' in the config file.
  * For physical NIC card pass through, we should specify the device name(s).
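As an illustration only (not part of the patch), the new PassThrough class
added below could be driven roughly like this:

    import kvm_utils

    # Request seven SR-IOV virtual functions from the igb driver
    pt = kvm_utils.PassThrough(type="nic_vf", max_vfs=7)
    pt.sr_iov_setup()

    # Or pass through two physical NICs by name
    pt_pf = kvm_utils.PassThrough(type="nic_pf", names="eth1 eth2")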

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_tests.cfg.sample |   12 ++
 client/tests/kvm/kvm_utils.py |  248 -
 client/tests/kvm/kvm_vm.py|   68 +-
 3 files changed, 326 insertions(+), 2 deletions(-)

diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index a83ef9b..c6037da 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -627,6 +627,18 @@ variants:
 
 
 variants:
+- @no_passthrough:
+pass_through = no
+- nic_passthrough:
+pass_through = pf
+passthrough_devs = eth1
+- vfs_passthrough:
+pass_through = vf
+max_vfs = 7
+vfs_count = 7
+
+
+variants:
 - @basic:
 only Fedora Windows
 - @full:
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index dfca938..1fe3b31 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -1,5 +1,5 @@
 import md5, thread, subprocess, time, string, random, socket, os, signal, pty
-import select, re, logging, commands
+import select, re, logging, commands, cPickle
 from autotest_lib.client.bin import utils
 from autotest_lib.client.common_lib import error
 import kvm_subprocess
@@ -795,3 +795,249 @@ def md5sum_file(filename, size=None):
         size -= len(data)
     f.close()
     return o.hexdigest()
+
+
+def get_full_id(pci_id):
+    """
+    Get full PCI ID of pci_id.
+    """
+    cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
+    status, full_id = commands.getstatusoutput(cmd)
+    if status != 0:
+        return None
+    return full_id
+
+
+def get_vendor_id(pci_id):
+    """
+    Check out the device vendor ID according to PCI ID.
+    """
+    cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
+    return re.sub(":", " ", commands.getoutput(cmd))
+
+
+def release_pci_devs(dict):
+    """
+    Release assigned PCI devices to host.
+    """
+    def release_dev(pci_id):
+        base_dir = "/sys/bus/pci"
+        full_id = get_full_id(pci_id)
+        vendor_id = get_vendor_id(pci_id)
+        drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+        if 'pci-stub' in os.readlink(drv_path):
+            cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
+            if os.system(cmd):
+                return False
+
+            stub_path = os.path.join(base_dir, "drivers/pci-stub")
+            cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
+            if os.system(cmd):
+                return False
+
+            prev_driver = self.dev_prev_drivers[pci_id]
+            cmd = "echo '%s' > %s/bind" % (full_id, prev_driver)
+            if os.system(cmd):
+                return False
+        return True
+
+    for pci_id in dict.keys():
+        if not release_dev(pci_id):
+            logging.error("Failed to release device [%s] to host" % pci_id)
+        else:
+            logging.info("Release device [%s] successfully" % pci_id)
+
+
+class PassThrough:
+    """
+    Request passthroughable devices on host. It will check whether to request
+    PF (physical NIC cards) or VF (Virtual Functions).
+    """
+    def __init__(self, type="nic_vf", max_vfs=None, names=None):
+        """
+        Initialize parameter 'type' which could be:
+        nic_vf: Virtual Functions
+        nic_pf: Physical NIC card
+        mixed:  Both includes VFs and PFs
+
+        If pass through Physical NIC cards, we need to specify which devices
+        to be assigned, e.g. 'eth1 eth2'.
+
+        If pass through Virtual Functions, we need to specify how many vfs
+        are going to be assigned, e.g. passthrough_count = 8 and max_vfs in
+        config file.
+
+        @param type: Pass through device's type
+        @param max_vfs: parameter of module 'igb'
+        @param names: Physical NIC cards' names, e.g. 'eth1 eth2 ...'
+        """
+        self.type = type
+        if max_vfs:
+            self.max_vfs = int(max_vfs)
+        if names:
+            self.name_list = names.split()
+
+    def sr_iov_setup(self):
+        """
+        Setup SR-IOV environment, check if module 'igb' is loaded with
+        parameter 'max_vfs'.
+        """
+        re_probe = False
+        # Check whether the module 'igb' is loaded
+        s, o = commands.getstatusoutput('lsmod | grep igb')
+        if s:
+            re_probe = True
+        elif not self.chk_vfs_count():
+            os.system("modprobe -r igb")
+            re_probe = True
+
+        # Re-probe module 'igb'
+        if re_probe:
+            cmd = "modprobe igb max_vfs=%d" % self.max_vfs
+            s, o = commands.getstatusoutput(cmd)
+            if s:
+                return
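
Stepping out of the patch for a moment, here is a rough sketch (my own, not
part of the patch) of how the new config variants above could be wired to the
helper class; 'params' is the usual test parameter dict and the parameter
names are the ones introduced in kvm_tests.cfg.sample:

# Hypothetical wiring, for illustration only.
pass_through = params.get("pass_through", "no")
if pass_through == "pf":
    # Physical NIC passthrough: device names come from 'passthrough_devs'
    pt = kvm_utils.PassThrough(type="nic_pf",
                               names=params.get("passthrough_devs"))
elif pass_through == "vf":
    # SR-IOV Virtual Functions: 'max_vfs' is handed to the igb module
    pt = kvm_utils.PassThrough(type="nic_vf",
                               max_vfs=params.get("max_vfs"))
    pt.sr_iov_setup()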

Re: [Autotest] [PATCH 12/19] KVM test: Add new module kvm_test_utils.py

2009-09-13 Thread Yolkfull Chow
On Wed, Sep 09, 2009 at 09:12:05PM +0300, Michael Goldish wrote:
 This module is meant to reduce code size by performing common test procedures.
 Generally, code here should look like test code.
 More specifically:
 - Functions in this module should raise exceptions if things go wrong
   (unlike functions in kvm_utils.py and kvm_vm.py which report failure via
   their returned values).
 - Functions in this module may use logging.info(), in addition to
   logging.debug() and logging.error(), to log messages the user may be
   interested in (unlike kvm_utils.py and kvm_vm.py which use
   logging.debug() for everything that isn't an error).
 - Functions in this module typically use functions and classes from
   lower-level modules (e.g. kvm_utils.py, kvm_vm.py, kvm_subprocess.py).
 - Functions in this module should not be used by lower-level modules.
 - Functions in this module should be used in the right context.
   For example, a function should not be used where it may display
   misleading or inaccurate info or debug messages.
 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm_test_utils.py |   61 
 
  1 files changed, 61 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/kvm/kvm_test_utils.py
 
 diff --git a/client/tests/kvm/kvm_test_utils.py 
 b/client/tests/kvm/kvm_test_utils.py
 new file mode 100644
 index 000..39e92b9
 --- /dev/null
 +++ b/client/tests/kvm/kvm_test_utils.py
 @@ -0,0 +1,61 @@
 +import time, os, logging, re, commands
 +from autotest_lib.client.common_lib import utils, error
 +import kvm_utils, kvm_vm, kvm_subprocess
 +
 +
 +High-level KVM test utility functions.
 +
 +This module is meant to reduce code size by performing common test 
 procedures.
 +Generally, code here should look like test code.
 +More specifically:
 +- Functions in this module should raise exceptions if things go wrong
 +  (unlike functions in kvm_utils.py and kvm_vm.py which report failure 
 via
 +  their returned values).
 +- Functions in this module may use logging.info(), in addition to
 +  logging.debug() and logging.error(), to log messages the user may be
 +  interested in (unlike kvm_utils.py and kvm_vm.py which use
 +  logging.debug() for anything that isn't an error).
 +- Functions in this module typically use functions and classes from
 +  lower-level modules (e.g. kvm_utils.py, kvm_vm.py, kvm_subprocess.py).
 +- Functions in this module should not be used by lower-level modules.
 +- Functions in this module should be used in the right context.
 +  For example, a function should not be used where it may display
 +  misleading or inaccurate info or debug messages.
 +
 +...@copyright: 2008-2009 Red Hat Inc.
 +
 +
 +
 +def get_living_vm(env, vm_name):
 +
 +Get a VM object from the environment and make sure it's alive.
 +
 +@param env: Dictionary with test environment.
 +@param vm_name: Name of the desired VM object.
 +@return: A VM object.
 +
 +vm = kvm_utils.env_get_vm(env, vm_name)
 +if not vm:
 +raise error.TestError(VM '%s' not found in environment % vm_name)
 +if not vm.is_alive():
 +raise error.TestError(VM '%s' seems to be dead; test requires a 
 +  living VM % vm_name)
 +return vm
 +
 +
 +def wait_for_login(vm, nic_index=0, timeout=240):
 +
 +Try logging into a VM repeatedly.  Stop on success or when timeout 
 expires.
 +
 +@param vm: VM object.
 +@param nic_index: Index of NIC to access in the VM.
 +@param timeout: Time to wait before giving up.
 +@return: A shell session object.
 +
 +logging.info(Waiting for guest to be up...)
 +session = kvm_utils.wait_for(lambda: 
 vm.remote_login(nic_index=nic_index),
 + timeout, 0, 2)
 +if not session:
 +raise error.TestFail(Could not log into guest)

Hi Michael, I think we should also add a parameter 'vm_name' to
wait_for_login(). If we boot more than one VM, it is hard to tell from the
message above which guest failed to log in.
What do you think? :-)
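
A minimal sketch of what I mean (just a suggestion; it assumes the VM object
exposes a 'name' attribute, and the message wording is only illustrative):

# Assumes: import logging; import kvm_utils
# and: from autotest_lib.client.common_lib import error
def wait_for_login(vm, nic_index=0, timeout=240):
    # Mention the VM's name so it is clear which guest we are waiting for
    logging.info("Waiting for guest '%s' to be up..." % vm.name)
    session = kvm_utils.wait_for(lambda: vm.remote_login(nic_index=nic_index),
                                 timeout, 0, 2)
    if not session:
        raise error.TestFail("Could not log into guest '%s'" % vm.name)
    logging.info("Logged into guest '%s'" % vm.name)
    return session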

 +logging.info(Logged in)
 +return session
 -- 
 1.5.4.1
 
 ___
 Autotest mailing list
 autot...@test.kernel.org
 http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM-AUTOTEST PATCH v2 1/3] KVM test: add AutoIt test

2009-08-11 Thread Yolkfull Chow
On Tue, Aug 11, 2009 at 03:10:42PM +0300, Michael Goldish wrote:
 Currently the test only logs in, runs a given script and fails if the script
 takes too long to exit or if its exit status is nonzero.
 
 The test expects these parameters:
 autoit_binary: Path to AutoIt binary in the guest.
 autoit_script: Path to script in the host.
 autoit_script_params: Command line parameters to send to the script.
 autoit_script_timeout: The time duration (in seconds) to wait for the script 
 to
 exit.
 
 The test code can be extended later to add more features.
 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm.py   |1 +
  client/tests/kvm/kvm_tests.py |   66 
 +
  2 files changed, 67 insertions(+), 0 deletions(-)
 
 diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
 index 070e463..4930e80 100644
 --- a/client/tests/kvm/kvm.py
 +++ b/client/tests/kvm/kvm.py
 @@ -56,6 +56,7 @@ class kvm(test.test):
  linux_s3: test_routine(kvm_tests, run_linux_s3),
  stress_boot:  test_routine(kvm_tests, run_stress_boot),
  timedrift:test_routine(kvm_tests, run_timedrift),
 +autoit:   test_routine(kvm_tests, run_autoit),
  }
  
  # Make it possible to import modules from the test's bindir
 diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
 index 9cd01e2..749c1fd 100644
 --- a/client/tests/kvm/kvm_tests.py
 +++ b/client/tests/kvm/kvm_tests.py
 @@ -776,3 +776,69 @@ def run_timedrift(test, params, env):
  if drift  drift_threshold_after_rest:
  raise error.TestFail(Time drift too large after rest period: %.2f%%
   % drift_total)
 +
 +
 +def run_autoit(test, params, env):
 +
 +A wrapper for AutoIt scripts.
 +
 +1) Log into a guest.
 +2) Run AutoIt script.
 +3) Wait for script execution to complete.
 +4) Pass/fail according to exit status of script.
 +
 +@param test: KVM test object.
 +@param params: Dictionary with test parameters.
 +@param env: Dictionary with the test environment.
 +
 +vm = kvm_utils.env_get_vm(env, params.get(main_vm))
 +if not vm:
 +raise error.TestError(VM object not found in environment)
 +if not vm.is_alive():
 +raise error.TestError(VM seems to be dead; Test requires a living 
 VM)
 +
 +logging.info(Waiting for guest to be up...)
 +
 +session = kvm_utils.wait_for(vm.remote_login, 240, 0, 2)
 +if not session:
 +raise error.TestFail(Could not log into guest)
 +
 +try:
 +logging.info(Logged in; starting script...)
 +
 +# Collect test parameters
 +binary = params.get(autoit_binary)
 +script = params.get(autoit_script)
 +script_params = params.get(autoit_script_params, )
 +timeout = float(params.get(autoit_script_timeout, 600))
 +
 +# Send AutoIt script to guest (this code will be replaced once we
 +# support sending files to Windows guests)
 +session.sendline("del script.au3")
 +file = open(kvm_utils.get_path(test.bindir, script))
 +for line in file.readlines():
 +# Insert a '^' before each character
 +line = "".join("^" + c for c in line.rstrip())
 +if line:
 +# Append line to the file
 +session.sendline("echo %s>>script.au3" % line)
 +file.close()
 +
 +session.read_up_to_prompt()
 +
 +command = "cmd /c %s script.au3 %s" % (binary, script_params)

Hi Michael, regarding the problem of executing the script in the Windows cmd
shell, I have some information to share with you:

People on our team have found that the value returned by `echo %errorlevel%`
is not always right. It only reflects whether the action of launching the
script succeeded, and in that case it ALWAYS returns 0 even if errors occur
later. That means that as soon as the script has been started successfully it
will return 0, even if an error occurs while the script is running.

One solution could be to use the command 'start /wait script.au3', which
makes the shell wait for the program to terminate:
http://ss64.com/nt/start.html

I have not investigated this thoroughly, so if I have made any mistake here,
please just ignore this reply. ;-)
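
In case it helps, a rough sketch of how run_autoit() could use it (untested;
whether the script's exit status actually propagates through 'start /wait'
still needs to be verified):

# 'session', 'binary', 'script_params', 'timeout' and 'logging' as already
# used in run_autoit() above; only the command line changes.
command = "start /wait %s script.au3 %s" % (binary, script_params)
status = session.get_command_status(command,
                                    print_func=logging.info,
                                    timeout=timeout)
if status is None:
    raise error.TestFail("Timeout expired before script execution completed")
if status != 0:
    raise error.TestFail("Script execution failed")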


 +
 +logging.info( Script output )
 +status = session.get_command_status(command,
 +print_func=logging.info,
 +timeout=timeout)
 +logging.info( End of script output 
 )
 +
 +if status is None:
 +raise error.TestFail(Timeout expired before script execution 
 + completed (or something weird happened))
 +if status != 0:
 +raise error.TestFail(Script execution failed)
 +
 +finally:
 +session.close()
 -- 
 1.5.4.1
 
 

Re: [Autotest] [KVM-AUTOTEST PATCH v2 1/3] KVM test: add AutoIt test

2009-08-11 Thread Yolkfull Chow
On Tue, Aug 11, 2009 at 09:27:17AM -0400, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  On Tue, Aug 11, 2009 at 03:10:42PM +0300, Michael Goldish wrote:
   Currently the test only logs in, runs a given script and fails if
  the script
   takes too long to exit or if its exit status is nonzero.
   
   The test expects these parameters:
   autoit_binary: Path to AutoIt binary in the guest.
   autoit_script: Path to script in the host.
   autoit_script_params: Command line parameters to send to the
  script.
   autoit_script_timeout: The time duration (in seconds) to wait for
  the script to
   exit.
   
   The test code can be extended later to add more features.
   
   Signed-off-by: Michael Goldish mgold...@redhat.com
   ---
client/tests/kvm/kvm.py   |1 +
client/tests/kvm/kvm_tests.py |   66
  +
2 files changed, 67 insertions(+), 0 deletions(-)
   
   diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
   index 070e463..4930e80 100644
   --- a/client/tests/kvm/kvm.py
   +++ b/client/tests/kvm/kvm.py
   @@ -56,6 +56,7 @@ class kvm(test.test):
linux_s3: test_routine(kvm_tests,
  run_linux_s3),
stress_boot:  test_routine(kvm_tests,
  run_stress_boot),
timedrift:test_routine(kvm_tests,
  run_timedrift),
   +autoit:   test_routine(kvm_tests,
  run_autoit),
}

# Make it possible to import modules from the test's
  bindir
   diff --git a/client/tests/kvm/kvm_tests.py
  b/client/tests/kvm/kvm_tests.py
   index 9cd01e2..749c1fd 100644
   --- a/client/tests/kvm/kvm_tests.py
   +++ b/client/tests/kvm/kvm_tests.py
   @@ -776,3 +776,69 @@ def run_timedrift(test, params, env):
if drift  drift_threshold_after_rest:
raise error.TestFail(Time drift too large after rest
  period: %.2f%%
 % drift_total)
   +
   +
   +def run_autoit(test, params, env):
   +
   +A wrapper for AutoIt scripts.
   +
   +1) Log into a guest.
   +2) Run AutoIt script.
   +3) Wait for script execution to complete.
   +4) Pass/fail according to exit status of script.
   +
   +@param test: KVM test object.
   +@param params: Dictionary with test parameters.
   +@param env: Dictionary with the test environment.
   +
   +vm = kvm_utils.env_get_vm(env, params.get(main_vm))
   +if not vm:
   +raise error.TestError(VM object not found in
  environment)
   +if not vm.is_alive():
   +raise error.TestError(VM seems to be dead; Test requires a
  living VM)
   +
   +logging.info(Waiting for guest to be up...)
   +
   +session = kvm_utils.wait_for(vm.remote_login, 240, 0, 2)
   +if not session:
   +raise error.TestFail(Could not log into guest)
   +
   +try:
   +logging.info(Logged in; starting script...)
   +
   +# Collect test parameters
   +binary = params.get(autoit_binary)
   +script = params.get(autoit_script)
   +script_params = params.get(autoit_script_params, )
   +timeout = float(params.get(autoit_script_timeout, 600))
   +
   +# Send AutoIt script to guest (this code will be replaced
  once we
   +# support sending files to Windows guests)
   +session.sendline(del script.au3)
   +file = open(kvm_utils.get_path(test.bindir, script))
   +for line in file.readlines():
   +# Insert a '^' before each character
   +line = .join(^ + c for c in line.rstrip())
   +if line:
   +# Append line to the file
   +session.sendline(echo %sscript.au3 % line)
   +file.close()
   +
   +session.read_up_to_prompt()
   +
   +command = cmd /c %s script.au3 %s % (binary,
  script_params)
  
  Hi Michael, for the problem that execute script in Windows cmd shell,
  I have some information share with you:
  
  Guys in our team had found that the value which `echo %errorlevel%`
  returns is not always right. It just reflects whether the action to
  execute the script has been implemented successfully and it ALWAYS
  return
  even if errors occur. That means as soon as the script has been
  started
  successfully it will return 0 even if error occurred during script
  running.
 
 You can't issue 'echo %errorlevel%' before the command returns.
 If the command is blocking, i.e. if you have to wait for it to complete,
 like 'dir' or any typical shell command, 'echo %errorlevel%' works just
 fine and reflects the exit status of the command.
 
 If the command returns immediately, like GUI apps (calc, notepad), then
 you should run it using 'cmd /c' like the AutoIt test does.
 cmd will return only when the GUI program terminates, and then you can
 use 'echo %errorlevel%' as usual.
 
 Note that when running a command using 'cmd /c', the following

Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-04 Thread Yolkfull Chow
On Mon, Aug 03, 2009 at 04:00:45AM -0400, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  On Mon, Aug 03, 2009 at 02:45:23AM -0400, Michael Goldish wrote:
   
   - Yolkfull Chow yz...@redhat.com wrote:
   
Hi Michael, I just have some comments on what you changed on
stress_boot. :-)


On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
 Take an additional parameter 'clone_address_index_base' which
indicates the
 initial value for 'address_index' for the cloned VMs.  This
  value
is
 incremented after each clone is created.  I assume the original
  VM
has a single
 NIC; otherwise NICs will end up sharing MAC addresses, which is
bad.
 
 Also, make a few small corrections:
 
 - Take the params for the clones from the original VM's params,
  not
from the
 test's params, because the test's params contain information
  about
several
 VMs, only one of which is the original VM. The original VM's
  params,
on the
 other hand, describe just a single VM (which we want to clone).
 
 - Change the way kill_vm.* parameters are sent to the
postprocessor.
 (The postprocessor doesn't read params from the VM objects but
rather from the
 test's params dict.)
 
 - Replace 'if get_command_status(...)' with 'if
get_command_status(...) != 0'.
 
 - Replace the test command 'ps aux' with 'uname -a'.  The silly
reason for this
 is that DSL-4.2.5 doesn't like 'ps aux'.  Since 'uname -a' is
  just
as good
 (AFAIK), use it instead.
 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm_tests.cfg.sample |7 ++-
  client/tests/kvm/kvm_tests.py |   14 ++
  2 files changed, 12 insertions(+), 9 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample
b/client/tests/kvm/kvm_tests.cfg.sample
 index 3c23432..3a4bf64 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -117,7 +117,12 @@ variants:
  - stress_boot:  install setup
  type = stress_boot
  max_vms = 5
 -alive_test_cmd = ps aux
 +alive_test_cmd = uname -a
 +clone_address_index_base = 10
 +kill_vm = yes
 +kill_vm_vm1 = no


Actually, there are two methods to deal with started VMs via
framework, one is as soon
as the current test finished and the other is just before next
  test
going to be started. 
In this case both methods equal somewhat between each other I
  think,
and
I had chosen the later one. But if you had done some testing and
proved the previous one is better, I would agree. 
   
   It's not that I tested and found the former to be better, it's just
  that
   if a user runs only stress_boot, it makes sense to clean up at the
  end
   of the test, not at the beginning of the next one (because there's
  no
   next one).  Since you're adding VMs to the postprocessor's params
   (params['vms'] +=   + vm_name) I thought we might as well use
  that.
   (In order to clean up the VMs automatically when the next test
  starts,
   we don't need to add the VM to those params -- only to 'env'.)
  
  Hmm...yes, if users only want to run stress_boot, cleaning up should
  be
  implemented at the end of the test. 
  
   


 +kill_vm_gracefully = no

This has been specified within code during cloning. 
   
   Right, but the postprocessor is supposed to ignore that, because it
  doesn't
   look at the params inside the VM -- it looks at test params that get
  to the
   postprocessor.
   

 +extra_params +=  -snapshot

If you add this option '-snapshot' here, the first VM will be in
snapshot
mode as well which will affect next test.
   
   Which is good in my opinion, because I don't want the first VM
  writing to the
   same image that others are reading from.  It sounds like it might
   
   Let me know what you think.
  
  My opinion is, since the scenario that the first VM using
  non-snapshot
  mode while the others do has not caused any problem, why not just keep
  the first VM
  in non-snapshot mode and let it to be used in next test which really
  save booting time? 
  
  What do you think?  Please let me know if there is a trap in such
  scenario.
  Thanks. :-)
 
 I'm not sure if it really will cause trouble, but it sounds like it might,
 since we're modifying an image that's being used by another VM, if I
 understand correctly.  I don't think we actually have to see a test fail
 before we decide to avoid doing something -- it's enough just to know that
 something is bad practice.  I'm not even sure this is bad practice, but I
 want to be on the safe side anyway.  Can anyone shed some light on this?
 (Is it risky to start a VM without

Re: [PATCH] Add a subtest pci_hotplug in kvm test

2009-08-04 Thread Yolkfull Chow
Differences from the previous patch:

- Use a loop that waits for some seconds while comparing the output of a
  command (a minimal sketch of this polling is shown below)
- Use a loop that waits for some seconds while trying to catch the string
  that indicates which PCI device was added
- Add the option kill_vm_on_error in block_hotplug, since once a model fails
  to be hot removed it will affect the next model
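
The polling is roughly along these lines (a simplified sketch only; it
assumes an open shell session into the guest, that find_pci_cmd, match_string
and pci_model are the test parameters from the config, and that the session
object provides get_command_output(); the timeout values are illustrative):

# Poll the guest until the hotplugged device shows up, instead of sleeping
# for a fixed amount of time.
def new_device_found():
    return match_string in session.get_command_output(find_pci_cmd)

# kvm_utils.wait_for(func, timeout, first, step) retries until the timeout
if not kvm_utils.wait_for(new_device_found, 30, 3, 3):
    raise error.TestFail("Not found pci model: %s; Command is: %s"
                         % (pci_model, find_pci_cmd))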

A better result I can get is:
---
./scan_results.py
test                                                 status   seconds  info
---------------------------------------------------  -------  -------  ----
Fedora.11.32.nic_hotplug.nic_8139                    GOOD     107      completed successfully
Fedora.11.32.nic_hotplug.nic_virtio                  FAIL     76       Not found pci model:virtio; Command is:lspci | tail -n1
Fedora.11.32.block_hotplug.fmt_qcow2.block_virtio    GOOD     92       completed successfully
Fedora.11.32.block_hotplug.fmt_qcow2.block_scsi      GOOD     48       completed successfully
RHEL.5.3.i386.nic_hotplug.nic_8139                   GOOD     144      completed successfully
RHEL.5.3.i386.nic_hotplug.nic_virtio                 GOOD     48       completed successfully
RHEL.5.3.i386.block_hotplug.fmt_qcow2.block_virtio   GOOD     47       completed successfully
RHEL.5.3.i386.block_hotplug.fmt_qcow2.block_scsi     GOOD     47       completed successfully
Win2008.32.nic_hotplug.nic_8139                      GOOD     141      completed successfully
Win2008.32.nic_hotplug.nic_virtio                    GOOD     99       completed successfully
Win2008.32.block_hotplug.fmt_qcow2.block_scsi        GOOD     90       completed successfully
                                                     GOOD     953



Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm.py   |1 +
 client/tests/kvm/kvm_tests.cfg.sample |   67 +
 client/tests/kvm/kvm_tests.py |  105 +
 client/tests/kvm/kvm_vm.py|2 +
 4 files changed, 175 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 070e463..f985388 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -56,6 +56,7 @@ class kvm(test.test):
 linux_s3: test_routine(kvm_tests, run_linux_s3),
 stress_boot:  test_routine(kvm_tests, run_stress_boot),
 timedrift:test_routine(kvm_tests, run_timedrift),
+pci_hotplug:   test_routine(kvm_tests, run_pci_hotplug),
 }
 
 # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 7cd12cb..9af1bc8 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -118,6 +118,53 @@ variants:
 kill_vm = yes
 kill_vm_gracefully = no
 
+- nic_hotplug:
+type = pci_hotplug
+pci_type = nic
+modprobe_acpiphp = no
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+pci_test_cmd = 'nslookup www.redhat.com'
+wait_secs_for_hook_up = 3
+variants:
+- nic_8139:
+pci_model = rtl8139
+match_string = 8139
+- nic_virtio:
+pci_model = virtio
+match_string = Virtio network device
+- nic_e1000:
+pci_model = e1000
+match_string = Gigabit Ethernet Controller
+
+- block_hotplug:
+type = pci_hotplug
+pci_type = block
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+images +=  stg
+boot_drive_stg = no
+image_name_stg = storage
+image_size_stg = 1G
+remove_image_stg = yes
+force_create_image_stg = yes
+pci_test_cmd = yes | mke2fs `fdisk -l 2>&1 | awk '/\/dev\/[sv]d[a-z] doesn/ {print $2}'`
+wait_secs_for_hook_up = 3
+kill_vm_on_error = yes
+variants:
+- block_virtio:
+pci_model = virtio
+match_string = Virtio block device
+- block_scsi:
+pci_model = scsi
+match_string = SCSI
+variants:
+- fmt_qcow2:
+image_format_stg = qcow2
+- fmt_raw:
+image_format_stg = raw
+only Fedora Ubuntu Windows
+
 
 # NICs
 variants:
@@ -259,6 +306,10 @@ variants:
 - RHEL:
 no setup
 ssh_prompt = \[r...@.{0,50}][\#\$] 
+nic_hotplug:
+modprobe_module = acpiphp
+block_hotplug:
+modprobe_module = acpiphp
 
 variants:
 - 5.3.i386:
@@ -345,6 +396,22 @@ variants:
 # Alternative host load:
 #host_load_command = dd

Re: [PATCH] Add a subtest pci_hotplug in kvm test

2009-08-04 Thread Yolkfull Chow

Sorry, I just submitted a wrong patch file which included a bug; please
ignore the previous one and review the following patch:

Differences from the previous patch:

- Use a loop that waits for some seconds while comparing the output of a
  command
- Use a loop that waits for some seconds while trying to catch the string
  that indicates which PCI device was added
- Add the option kill_vm_on_error in block_hotplug, since once a model fails
  to be hot removed it will affect the next model

A better result I can get is:
---
./scan_results.py
test                                                 status   seconds  info
---------------------------------------------------  -------  -------  ----
Fedora.11.32.nic_hotplug.nic_8139                    GOOD     107      completed successfully
Fedora.11.32.nic_hotplug.nic_virtio                  FAIL     76       Not found pci model:virtio; Command is:lspci | tail -n1
Fedora.11.32.block_hotplug.fmt_qcow2.block_virtio    GOOD     92       completed successfully
Fedora.11.32.block_hotplug.fmt_qcow2.block_scsi      GOOD     48       completed successfully
RHEL.5.3.i386.nic_hotplug.nic_8139                   GOOD     144      completed successfully
RHEL.5.3.i386.nic_hotplug.nic_virtio                 GOOD     48       completed successfully
RHEL.5.3.i386.block_hotplug.fmt_qcow2.block_virtio   GOOD     47       completed successfully
RHEL.5.3.i386.block_hotplug.fmt_qcow2.block_scsi     GOOD     47       completed successfully
Win2008.32.nic_hotplug.nic_8139                      GOOD     141      completed successfully
Win2008.32.nic_hotplug.nic_virtio                    GOOD     99       completed successfully
Win2008.32.block_hotplug.fmt_qcow2.block_scsi        GOOD     90       completed successfully
                                                     GOOD     953


Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm.py   |1 +
 client/tests/kvm/kvm_tests.cfg.sample |   67 +
 client/tests/kvm/kvm_tests.py |  105 +
 client/tests/kvm/kvm_vm.py|2 +
 4 files changed, 175 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 070e463..f985388 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -56,6 +56,7 @@ class kvm(test.test):
 linux_s3: test_routine(kvm_tests, run_linux_s3),
 stress_boot:  test_routine(kvm_tests, run_stress_boot),
 timedrift:test_routine(kvm_tests, run_timedrift),
+pci_hotplug:   test_routine(kvm_tests, run_pci_hotplug),
 }
 
 # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 7cd12cb..9af1bc8 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -118,6 +118,53 @@ variants:
 kill_vm = yes
 kill_vm_gracefully = no
 
+- nic_hotplug:
+type = pci_hotplug
+pci_type = nic
+modprobe_acpiphp = no
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+pci_test_cmd = 'nslookup www.redhat.com'
+wait_secs_for_hook_up = 3
+variants:
+- nic_8139:
+pci_model = rtl8139
+match_string = 8139
+- nic_virtio:
+pci_model = virtio
+match_string = Virtio network device
+- nic_e1000:
+pci_model = e1000
+match_string = Gigabit Ethernet Controller
+
+- block_hotplug:
+type = pci_hotplug
+pci_type = block
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+images +=  stg
+boot_drive_stg = no
+image_name_stg = storage
+image_size_stg = 1G
+remove_image_stg = yes
+force_create_image_stg = yes
+pci_test_cmd = yes | mke2fs `fdisk -l 2>&1 | awk '/\/dev\/[sv]d[a-z] doesn/ {print $2}'`
+wait_secs_for_hook_up = 3
+kill_vm_on_error = yes
+variants:
+- block_virtio:
+pci_model = virtio
+match_string = Virtio block device
+- block_scsi:
+pci_model = scsi
+match_string = SCSI
+variants:
+- fmt_qcow2:
+image_format_stg = qcow2
+- fmt_raw:
+image_format_stg = raw
+only Fedora Ubuntu Windows
+
 
 # NICs
 variants:
@@ -259,6 +306,10 @@ variants:
 - RHEL:
 no setup
 ssh_prompt = \[r...@.{0,50}][\#\$] 
+nic_hotplug:
+modprobe_module = acpiphp
+block_hotplug:
+modprobe_module = acpiphp
 
 variants:
 - 5.3.i386:
@@ -345,6 +396,22 @@ variants

Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-03 Thread Yolkfull Chow
On Mon, Aug 03, 2009 at 02:45:23AM -0400, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  Hi Michael, I just have some comments on what you changed on
  stress_boot. :-)
  
  
  On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
   Take an additional parameter 'clone_address_index_base' which
  indicates the
   initial value for 'address_index' for the cloned VMs.  This value
  is
   incremented after each clone is created.  I assume the original VM
  has a single
   NIC; otherwise NICs will end up sharing MAC addresses, which is
  bad.
   
   Also, make a few small corrections:
   
   - Take the params for the clones from the original VM's params, not
  from the
   test's params, because the test's params contain information about
  several
   VMs, only one of which is the original VM. The original VM's params,
  on the
   other hand, describe just a single VM (which we want to clone).
   
   - Change the way kill_vm.* parameters are sent to the
  postprocessor.
   (The postprocessor doesn't read params from the VM objects but
  rather from the
   test's params dict.)
   
   - Replace 'if get_command_status(...)' with 'if
  get_command_status(...) != 0'.
   
   - Replace the test command 'ps aux' with 'uname -a'.  The silly
  reason for this
   is that DSL-4.2.5 doesn't like 'ps aux'.  Since 'uname -a' is just
  as good
   (AFAIK), use it instead.
   
   Signed-off-by: Michael Goldish mgold...@redhat.com
   ---
client/tests/kvm/kvm_tests.cfg.sample |7 ++-
client/tests/kvm/kvm_tests.py |   14 ++
2 files changed, 12 insertions(+), 9 deletions(-)
   
   diff --git a/client/tests/kvm/kvm_tests.cfg.sample
  b/client/tests/kvm/kvm_tests.cfg.sample
   index 3c23432..3a4bf64 100644
   --- a/client/tests/kvm/kvm_tests.cfg.sample
   +++ b/client/tests/kvm/kvm_tests.cfg.sample
   @@ -117,7 +117,12 @@ variants:
- stress_boot:  install setup
type = stress_boot
max_vms = 5
   -alive_test_cmd = ps aux
   +alive_test_cmd = uname -a
   +clone_address_index_base = 10
   +kill_vm = yes
   +kill_vm_vm1 = no
  
  
  Actually, there are two methods to deal with started VMs via
  framework, one is as soon
  as the current test finished and the other is just before next test
  going to be started. 
  In this case both methods equal somewhat between each other I think,
  and
  I had chosen the later one. But if you had done some testing and
  proved the previous one is better, I would agree. 
 
 It's not that I tested and found the former to be better, it's just that
 if a user runs only stress_boot, it makes sense to clean up at the end
 of the test, not at the beginning of the next one (because there's no
 next one).  Since you're adding VMs to the postprocessor's params
 (params['vms'] +=   + vm_name) I thought we might as well use that.
 (In order to clean up the VMs automatically when the next test starts,
 we don't need to add the VM to those params -- only to 'env'.)

Hmm...yes, if users only want to run stress_boot, cleaning up should be
implemented at the end of the test. 

 
  
  
   +kill_vm_gracefully = no
  
  This has been specified within code during cloning. 
 
 Right, but the postprocessor is supposed to ignore that, because it doesn't
 look at the params inside the VM -- it looks at test params that get to the
 postprocessor.
 
  
   +extra_params +=  -snapshot
  
  If you add this option '-snapshot' here, the first VM will be in
  snapshot
  mode as well which will affect next test.
 
 Which is good in my opinion, because I don't want the first VM writing to the
 same image that others are reading from.  It sounds like it might
 
 Let me know what you think.

My opinion is: since the scenario where the first VM runs in non-snapshot
mode while the others use snapshot mode has not caused any problem, why not
just keep the first VM in non-snapshot mode and let it be reused in the next
test, which really saves boot time?

What do you think?  Please let me know if there is a trap in such a scenario.
Thanks. :-)

 
 Thanks,
 Michael
 
  

- shutdown: install setup
type = shutdown
   diff --git a/client/tests/kvm/kvm_tests.py
  b/client/tests/kvm/kvm_tests.py
   index 9784ec9..308db97 100644
   --- a/client/tests/kvm/kvm_tests.py
   +++ b/client/tests/kvm/kvm_tests.py
   @@ -541,8 +541,8 @@ def run_stress_boot(tests, params, env):
raise error.TestFail(Could not log into first guest)

num = 2
   -vms = []
sessions = [session]
   +address_index = int(params.get(clone_address_index_base,
  10))

# boot the VMs
while num = int(params.get(max_vms)):
   @@ -550,15 +550,12 @@ def run_stress_boot(tests, params, env):
vm_name = vm + str(num)

# clone vm according to the first one
   -vm_params = params.copy()
   -vm_params

Re: [PATCH] Add a subtest pci_hotplug in kvm test

2009-08-03 Thread Yolkfull Chow

On 06/30/2009 09:58 PM, Dor Laor wrote:

On 06/30/2009 02:11 PM, Yolkfull Chow wrote:

Signed-off-by: Yolkfull Chowyz...@redhat.com
---
  client/tests/kvm/kvm.py   |1 +
  client/tests/kvm/kvm_tests.cfg.sample |   56 
  client/tests/kvm/kvm_tests.py |   93 
+

  client/tests/kvm/kvm_vm.py|2 +
  4 files changed, 152 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 4c7bae4..4fbce5b 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -55,6 +55,7 @@ class kvm(test.test):
  kvm_install:  test_routine(kvm_install, 
run_kvm_install),
  linux_s3: test_routine(kvm_tests, 
run_linux_s3),
  stress_boot:  test_routine(kvm_tests, 
run_stress_boot),
+pci_hotplug:  test_routine(kvm_tests, 
run_pci_hotplug),


Cool! It's very good since it tends to break.



  }

  # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample

index 2f864de..50b5765 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -94,6 +94,52 @@ variants:
  max_vms = 5
  alive_test_cmd = ps aux

+
+- nic_hotplug:
+type = pci_hotplug
+pci_type = nic
+modprobe_acpiphp = yes
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+pci_test_cmd = 'nslookup www.redhat.com'
+variants:
+- @nic_8139:
+pci_model = rtl8139
+match_string = 8139
+- nic_virtio:
+pci_model = virtio
+match_string = Virtio network device
+- nic_e1000:
+pci_model = e1000
+match_string = Gigabit Ethernet Controller
+
+- block_hotplug:
+type = pci_hotplug
+pci_type = block
+modprobe_acpiphp = yes
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+images +=  stg
+boot_drive_stg = no
+image_name_stg = storage
+image_size = 1G
+force_create_image_stg = yes
+pci_test_cmd = 'dir'
+no Windows
+variants:
+- block_virtio:
+pci_model = virtio
+match_string = Virtio block device
+- block_scsi:
+pci_model = scsi
+match_string = SCSI storage controller
+variants:


There is no need to test qcow2/raw here since it shouldn't matter.
You can test qcow2 only, it is enough.


Hi Glauber, according to Dor's comments on this, I did some testing and got
an interesting result for block_hotplug:

1) hotplugging storage with raw + SCSI always fails on Windows
2) hotplugging storage with raw + virtio always fails on Fedora
3) hotplugging storage with the raw image format also often fails on RHEL

Is block_hotplug related to the image format?  Could you give me any clue on
this?

Thanks in advance.




+- fmt_qcow2:
+image_format_stg = qcow2
+- fmt_raw:
+image_format_stg = raw
+
+
  # NICs
  variants:
  - @rtl8139:
@@ -306,6 +352,12 @@ variants:
  migration_test_command = ver  vol
  stress_boot:
  alive_test_cmd = systeminfo
+nic_hotplug:
+modprobe_acpiphp = no
+reference_cmd = systeminfo
+find_pci_cmd = ipconfig /all | find Description
+nic_e1000:
+match_string = Intel(R) PRO/1000 MT Network 
Connection


  variants:
  - Win2000:
@@ -530,6 +582,10 @@ virtio|virtio_blk|e1000:
  only Fedora.9 openSUSE-11 Ubuntu-8.10-server


+nic_hotplug.nic_virtio|block_hotplug:
+no Windows
+
+
  variants:
  - @qcow2:
  image_format = qcow2
diff --git a/client/tests/kvm/kvm_tests.py 
b/client/tests/kvm/kvm_tests.py

index 2d11fed..21280b9 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -585,3 +585,96 @@ def run_stress_boot(tests, params, env):
  for se in sessions:
  se.close()
  logging.info(Total number booted: %d % (num -1))
+
+
+def run_pci_hotplug(test, params, env):
+
+Test pci devices' hotplug
 +1) pci_add a device (nic or storage)
+2) Compare 'info pci' output
+3) Compare $reference_cmd output
+4) Verify whether pci_model is shown in $pci_find_cmd
+5) pci_del the device, verify whether could remove the pci device
+
+@param test:   kvm test object
+@param params: Dictionary with the test parameters
+@param env:Dictionary with test environment.
+
+vm = kvm_utils.env_get_vm(env, params.get(main_vm))
+if not vm:
+raise error.TestError(VM object not found in environment)
+if not vm.is_alive():
+raise

Re: [PATCH] Add a subtest pci_hotplug in kvm test

2009-08-03 Thread Yolkfull Chow
On Mon, Aug 03, 2009 at 02:37:29PM +0300, Dor Laor wrote:
 On 08/03/2009 12:19 PM, Yolkfull Chow wrote:
 On 06/30/2009 09:58 PM, Dor Laor wrote:
 On 06/30/2009 02:11 PM, Yolkfull Chow wrote:
 Signed-off-by: Yolkfull Chowyz...@redhat.com
 ---
 client/tests/kvm/kvm.py | 1 +
 client/tests/kvm/kvm_tests.cfg.sample | 56 
 client/tests/kvm/kvm_tests.py | 93 +
 client/tests/kvm/kvm_vm.py | 2 +
 4 files changed, 152 insertions(+), 0 deletions(-)

 diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
 index 4c7bae4..4fbce5b 100644
 --- a/client/tests/kvm/kvm.py
 +++ b/client/tests/kvm/kvm.py
 @@ -55,6 +55,7 @@ class kvm(test.test):
 kvm_install: test_routine(kvm_install, run_kvm_install),
 linux_s3: test_routine(kvm_tests, run_linux_s3),
 stress_boot: test_routine(kvm_tests, run_stress_boot),
 + pci_hotplug: test_routine(kvm_tests, run_pci_hotplug),

 Cool! It's very good since it tends to break.


 }

 # Make it possible to import modules from the test's bindir
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample
 b/client/tests/kvm/kvm_tests.cfg.sample
 index 2f864de..50b5765 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -94,6 +94,52 @@ variants:
 max_vms = 5
 alive_test_cmd = ps aux

 +
 + - nic_hotplug:
 + type = pci_hotplug
 + pci_type = nic
 + modprobe_acpiphp = yes
 + reference_cmd = lspci
 + find_pci_cmd = 'lspci | tail -n1'
 + pci_test_cmd = 'nslookup www.redhat.com'
 + variants:
 + - @nic_8139:
 + pci_model = rtl8139
 + match_string = 8139
 + - nic_virtio:
 + pci_model = virtio
 + match_string = Virtio network device
 + - nic_e1000:
 + pci_model = e1000
 + match_string = Gigabit Ethernet Controller
 +
 + - block_hotplug:
 + type = pci_hotplug
 + pci_type = block
 + modprobe_acpiphp = yes
 + reference_cmd = lspci
 + find_pci_cmd = 'lspci | tail -n1'
 + images +=  stg
 + boot_drive_stg = no
 + image_name_stg = storage
 + image_size = 1G
 + force_create_image_stg = yes
 + pci_test_cmd = 'dir'
 + no Windows
 + variants:
 + - block_virtio:
 + pci_model = virtio
 + match_string = Virtio block device
 + - block_scsi:
 + pci_model = scsi
 + match_string = SCSI storage controller
 + variants:

 There is no need to test qcow2/raw here since it shouldn't matter.
 You can test qcow2 only, it is enough.

 Hi Glauber, according to Dor's comments on this, I did some testing and
 got an interesting result for block_hotplug:
 1) hotplug storage of raw + SCSI will always fail on Windows
 2) hotplug storage of Raw + Virtio will always fail on Fedora
 3) hotplug storage with image format Raw will also fail often on RHEL

 Does block_hotplug relate to image format? Would you give me any clue on
 this?

 It shouldn't matter. In case the test is sensitive for timeout, it might.

 Can you describe what's working and what's not on each combination?
 As for scsi, it is not reliable so it might be scsi's fault.

 Can you provide the fdisk -l ouput on Fedora when it is not working?
 From the test below, there is not time/event for letting the guest hook  
 up the new block device.
 Maybe you need to do several retries in a loop or check a real event in  
 the guest (better one)

I am now fairly sure there is a bug in the block hotplug feature, since a
segfault shows up in dmesg on my Fedora laptop. Also, both the RHEL.5.3-i386
and Windows 2008 guests have crashed while running the block_hotplug test.
For example, on the Windows 2008-32 guest, issuing the command 'systeminfo'
during block_hotplug can sometimes crash the guest.

I have added a loop that waits for the PCI device to be hooked up; before
this, I used sleep(some_seconds) to wait for the module to be installed.


 Thanks in advance.


 + - fmt_qcow2:
 + image_format_stg = qcow2
 + - fmt_raw:
 + image_format_stg = raw
 +
 +
 # NICs
 variants:
 - @rtl8139:
 @@ -306,6 +352,12 @@ variants:
 migration_test_command = ver vol
 stress_boot:
 alive_test_cmd = systeminfo
 + nic_hotplug:
 + modprobe_acpiphp = no
 + reference_cmd = systeminfo
 + find_pci_cmd = ipconfig /all | find Description
 + nic_e1000:
 + match_string = Intel(R) PRO/1000 MT Network Connection

 variants:
 - Win2000:
 @@ -530,6 +582,10 @@ virtio|virtio_blk|e1000:
 only Fedora.9 openSUSE-11 Ubuntu-8.10-server


 +nic_hotplug.nic_virtio|block_hotplug:
 + no Windows
 +
 +
 variants:
 - @qcow2:
 image_format = qcow2
 diff --git a/client/tests/kvm/kvm_tests.py
 b/client/tests/kvm/kvm_tests.py
 index 2d11fed..21280b9 100644
 --- a/client/tests/kvm/kvm_tests.py
 +++ b/client/tests/kvm/kvm_tests.py
 @@ -585,3 +585,96 @@ def run_stress_boot(tests, params, env):
 for se in sessions:
 se.close()
 logging.info(Total number booted: %d % (num -1))
 +
 +
 +def run_pci_hotplug(test, params, env):
 + 
 + Test pci devices' hotplug
 + 1) pci_add a device (nic or storage)
 + 2) Compare 'info pci' output
 + 3) Compare $reference_cmd output
 + 4) Verify whether pci_model is shown in $pci_find_cmd
 + 5) pci_del the device, verify whether could remove the pci

Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-02 Thread Yolkfull Chow
Hi Michael, I just have some comments on the changes you made to
stress_boot. :-)


On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
 Take an additional parameter 'clone_address_index_base' which indicates the
 initial value for 'address_index' for the cloned VMs.  This value is
 incremented after each clone is created.  I assume the original VM has a 
 single
 NIC; otherwise NICs will end up sharing MAC addresses, which is bad.
 
 Also, make a few small corrections:
 
 - Take the params for the clones from the original VM's params, not from the
 test's params, because the test's params contain information about several
 VMs, only one of which is the original VM. The original VM's params, on the
 other hand, describe just a single VM (which we want to clone).
 
 - Change the way kill_vm.* parameters are sent to the postprocessor.
 (The postprocessor doesn't read params from the VM objects but rather from the
 test's params dict.)
 
 - Replace 'if get_command_status(...)' with 'if get_command_status(...) != 0'.
 
 - Replace the test command 'ps aux' with 'uname -a'.  The silly reason for 
 this
 is that DSL-4.2.5 doesn't like 'ps aux'.  Since 'uname -a' is just as good
 (AFAIK), use it instead.
 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm_tests.cfg.sample |7 ++-
  client/tests/kvm/kvm_tests.py |   14 ++
  2 files changed, 12 insertions(+), 9 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
 b/client/tests/kvm/kvm_tests.cfg.sample
 index 3c23432..3a4bf64 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -117,7 +117,12 @@ variants:
  - stress_boot:  install setup
  type = stress_boot
  max_vms = 5
 -alive_test_cmd = ps aux
 +alive_test_cmd = uname -a
 +clone_address_index_base = 10
 +kill_vm = yes
 +kill_vm_vm1 = no


Actually, there are two ways for the framework to deal with the started VMs:
one is to clean them up as soon as the current test finishes, and the other
is to do it just before the next test starts.
In this case the two methods are roughly equivalent, I think, and I had
chosen the latter. But if you have done some testing and shown that the
former is better, I would agree.


 +kill_vm_gracefully = no

This has been specified within code during cloning. 

 +extra_params +=  -snapshot

If you add the '-snapshot' option here, the first VM will be in snapshot
mode as well, which will affect the next test.

  
  - shutdown: install setup
  type = shutdown
 diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
 index 9784ec9..308db97 100644
 --- a/client/tests/kvm/kvm_tests.py
 +++ b/client/tests/kvm/kvm_tests.py
 @@ -541,8 +541,8 @@ def run_stress_boot(tests, params, env):
  raise error.TestFail(Could not log into first guest)
  
  num = 2
 -vms = []
  sessions = [session]
 +address_index = int(params.get(clone_address_index_base, 10))
  
  # boot the VMs
  while num = int(params.get(max_vms)):
 @@ -550,15 +550,12 @@ def run_stress_boot(tests, params, env):
  vm_name = vm + str(num)
  
  # clone vm according to the first one
 -vm_params = params.copy()
 -vm_params['image_snapshot'] = yes
 -vm_params['kill_vm'] = yes
 -vm_params['kill_vm_gracefully'] = no
 +vm_params = vm.get_params().copy()
 +vm_params[address_index] = str(address_index)
  curr_vm = vm.clone(vm_name, vm_params)
  kvm_utils.env_register_vm(env, vm_name, curr_vm)
  params['vms'] +=   + vm_name
  
 -#vms.append(curr_vm)
  logging.info(Booting guest #%d % num)
  if not curr_vm.create():
  raise error.TestFail(Cannot create VM #%d % num)
 @@ -571,10 +568,11 @@ def run_stress_boot(tests, params, env):
  sessions.append(curr_vm_session)
  
  # check whether all previous ssh sessions are responsive
 -for i, vm_session in enumerate(sessions):
 -if 
 vm_session.get_command_status(params.get(alive_test_cmd)):
 +for i, se in enumerate(sessions):
 +if se.get_command_status(params.get(alive_test_cmd)) != 0:
  raise error.TestFail(Session #%d is not responsive % i)
  num += 1
 +address_index += 1
  
  except (error.TestFail, OSError):
  for se in sessions:
 -- 
 1.5.4.1
 
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Specify the system UUID for VM

2009-07-29 Thread Yolkfull Chow
On Wed, Jul 29, 2009 at 03:18:51PM +0300, Avi Kivity wrote:
 On 07/16/2009 01:26 PM, Yolkfull Chow wrote:
 Signed-off-by: Yolkfull Chowyz...@redhat.com
 ---
   client/tests/kvm/kvm_vm.py |   11 +++
   1 files changed, 11 insertions(+), 0 deletions(-)

 diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
 index 503f636..895049e 100644
 --- a/client/tests/kvm/kvm_vm.py
 +++ b/client/tests/kvm/kvm_vm.py
 @@ -113,6 +113,13 @@ class VM:
   self.qemu_path = qemu_path
   self.image_dir = image_dir
   self.iso_dir = iso_dir
 +
 +if params.get(uuid):
 +if params.get(uuid) == random:
 +uuid = os.popen(cat 
 /proc/sys/kernel/random/uuid).readline()
 +self.uuid = uuid.strip()


 instead of os.popen(cat ...), you can open the file directly:

uuid = file('/proc/.../uuid').readline()

Yes, Lucas suggested this method as well. Since the patch has already been
applied, should I submit a small follow-up patch for this?
Thanks for the suggestion. :-)
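
For reference, a minimal sketch of that change inside VM.__init__() (untested
here; it keeps the existing behaviour for an explicitly specified uuid):

if params.get("uuid"):
    uuid = params.get("uuid")
    if uuid == "random":
        # Read the kernel-generated UUID directly instead of spawning 'cat'
        uuid = open("/proc/sys/kernel/random/uuid").readline()
    self.uuid = uuid.strip()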


 -- 
 error compiling committee.c: too many arguments to function

 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Specify the system UUID for VM

2009-07-29 Thread Yolkfull Chow
On Wed, Jul 29, 2009 at 09:06:25PM +0800, Yolkfull Chow wrote:
 On Wed, Jul 29, 2009 at 03:18:51PM +0300, Avi Kivity wrote:
  On 07/16/2009 01:26 PM, Yolkfull Chow wrote:
  Signed-off-by: Yolkfull Chowyz...@redhat.com
  ---
client/tests/kvm/kvm_vm.py |   11 +++
1 files changed, 11 insertions(+), 0 deletions(-)
 
  diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
  index 503f636..895049e 100644
  --- a/client/tests/kvm/kvm_vm.py
  +++ b/client/tests/kvm/kvm_vm.py
  @@ -113,6 +113,13 @@ class VM:
self.qemu_path = qemu_path
self.image_dir = image_dir
self.iso_dir = iso_dir
  +
  +if params.get(uuid):
  +if params.get(uuid) == random:
  +uuid = os.popen(cat 
  /proc/sys/kernel/random/uuid).readline()
  +self.uuid = uuid.strip()
 
 
  instead of os.popen(cat ...), you can open the file directly:
 
 uuid = file('/proc/.../uuid').readline()
 
 Yes, Lucas also suggested this method as well. Since the patch has been 
 applied, need I submit a
 little patch for this? 
 Thanks for suggestion. :-)

Seems the answer emerges. ;-)

 
 
  -- 
  error compiling committee.c: too many arguments to function
 
  --
  To unsubscribe from this list: send the line unsubscribe kvm in
  the body of a message to majord...@vger.kernel.org
  More majordomo info at  http://vger.kernel.org/majordomo-info.html
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.

2009-07-28 Thread Yolkfull Chow
On Tue, Jul 28, 2009 at 02:03:10AM -0300, Lucas Meneghel Rodrigues wrote:
 On Thu, Jul 23, 2009 at 4:18 AM, Yolkfull Chowyz...@redhat.com wrote:
  Hi Yaniv, following is the output from Windows guest:
 
  ---
  Microsoft DiskPart version 6.0.6001
  Copyright (C) 1999-2007 Microsoft Corporation.
  On computer: WIN-Q18A9GP5ECI
 
  Disk 1 is now the selected disk.
 
  DiskPart has encountered an error: The media is write protected.
  See the System Event Log for more information.
 
  Have you ever seen this error during format newly added SCSI block
  device?
 
  The contents of my diskpart script file:
  ---
  select disk 1
 
 
  online
 
  create partition primary
  exit
  ---
 
 
  I didn't use a script - nor have I ever hot-plugged a disk, but it does
  seem to happen to me as well now - the 2nd disk (the first is IDE) is
  indeed seems to be R/O.
  I'll look into it.
 
  Hi Lucas, did you notice this problem happened on Windows guest? What's
  your opinion about the patch pci_hotplug,send or wait?
 
 Hi Yolkfull, sorry for the delay answering. Yes, I did see the problem
 on windows guests. About the test itself, it looks good and I am
 making more tests before integrating it. Interestingly I tried it with
 older fedora versions and lspci doesn't seem to be able to recognize
 the newly added devices. High time we add step files and data for F10
 and F11 on the default config file.
 
  I think I can still send the patch for now, and just use 'detail disk' 
  which doesn't format the newly added
  disk in diskpart script. As soon as we have a solution, I can submit a
  patch to fix it. Moreover, nic_virtio and block_virtio for Windows section 
  will be
  disabled in config file as well since both drivers don't work well now.
 
 Ok, if you have an updated patch, please send it. Let's try to get all
 the problems addressed as soon as possible.
 
 Thanks for your work on this!

OK, I will post it here soon after I finish the current case. Thanks, Lucas. :-)


Regards,
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.

2009-07-23 Thread Yolkfull Chow
On Tue, Jul 21, 2009 at 10:45:31AM +0300, Yaniv Kaul wrote:
 On 7/21/2009 9:11 AM, Yolkfull Chow wrote:

 SNIP



 Previously, I used 'create partition primary' to verify whether the disk 
 could be formatted but always got an error:
 ---
 diskpart has encountered an error...
 ---
 And then I found the SCSI disk was added to Windows guest was read-only. 
 So I changed the format command to be 'detail disk' temporarily.



 Interesting - how did that happen? Lets see your command line, and
 probably 'info block' from the monitor. Are you saying that hot-plugged
 drives are added as r/o?

  
 I am afraid yes. After pci_add (monitor command: pci_add pci_addr=auto 
 storage
 file=/tmp/stg.qcow2,if=scsi) the SCSI block device, 'info block' will
 show the scsi0-hd0 device is 'ro=0' whereas when I 'create partition
 primary' on this selected disk in diskpart tool, error message will be 
 raised that the disk is
 write protected.


 Well, that doesn't sound like the desired behavior. Work with the KVM
 developers on this.
  
 Hi Yaniv, following is the output from Windows guest:

 ---
 Microsoft DiskPart version 6.0.6001
 Copyright (C) 1999-2007 Microsoft Corporation.
 On computer: WIN-Q18A9GP5ECI

 Disk 1 is now the selected disk.

 DiskPart has encountered an error: The media is write protected.
 See the System Event Log for more information.

 Have you ever seen this error during format newly added SCSI block
 device?

 The contents of my diskpart script file:
 ---
 select disk 1


 online

 create partition primary
 exit
 ---


 I didn't use a script - nor have I ever hot-plugged a disk, but it does  
 seem to happen to me as well now - the 2nd disk (the first is IDE) is  
 indeed seems to be R/O.
 I'll look into it.

Hi Lucas, did you notice this problem happening on Windows guests? What's
your opinion about the pci_hotplug patch: send it now or wait?

I think I can still send the patch for now, and just use 'detail disk', which
doesn't format the newly added disk, in the diskpart script. As soon as we
have a solution, I can submit a patch to fix it. Moreover, nic_virtio and
block_virtio will be disabled for the Windows section in the config file as
well, since neither driver works well right now.

What do you think? Does anyone else have any suggestions/comments?
Thanks in advance. :-)

Regards,
Yolkfull



  


 Also, you can always add an already formatted drive. Just create a qcow
 drive in another instance, format it properly and use it.


  

 Any result with an already formatted drive?
 Y.

 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.

2009-07-21 Thread Yolkfull Chow
On Tue, Jul 21, 2009 at 10:45:31AM +0300, Yaniv Kaul wrote:
 On 7/21/2009 9:11 AM, Yolkfull Chow wrote:

 SNIP



 Previously, I used 'create partition primary' to verify whether the disk 
 could be formatted but always got an error:
 ---
 diskpart has encountered an error...
 ---
 And then I found the SCSI disk was added to Windows guest was read-only. 
 So I changed the format command to be 'detail disk' temporarily.



 Interesting - how did that happen? Lets see your command line, and
 probably 'info block' from the monitor. Are you saying that hot-plugged
 drives are added as r/o?

  
 I am afraid yes. After pci_add (monitor command: pci_add pci_addr=auto 
 storage
 file=/tmp/stg.qcow2,if=scsi) the SCSI block device, 'info block' will
 show the scsi0-hd0 device is 'ro=0' whereas when I 'create partition
 primary' on this selected disk in diskpart tool, error message will be 
 raised that the disk is
 write protected.


 Well, that doesn't sound like the desired behavior. Work with the KVM
 developers on this.
  
 Hi Yaniv, following is the output from Windows guest:

 ---
 Microsoft DiskPart version 6.0.6001
 Copyright (C) 1999-2007 Microsoft Corporation.
 On computer: WIN-Q18A9GP5ECI

 Disk 1 is now the selected disk.

 DiskPart has encountered an error: The media is write protected.
 See the System Event Log for more information.

 Have you ever seen this error during format newly added SCSI block
 device?

 The contents of my diskpart script file:
 ---
 select disk 1


 online


 create partition primary
 exit
 ---


 I didn't use a script - nor have I ever hot-plugged a disk, but it does  
 seem to happen to me as well now - the 2nd disk (the first is IDE) is  
 indeed seems to be R/O.
 I'll look into it.


  


 Also, you can always add an already formatted drive. Just create a qcow
 drive in another instance, format it properly and use it.


  

 Any result with an already formatted drive?

No, haven't got a chance to try that. :-(

 Y.

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.

2009-07-21 Thread Yolkfull Chow
On Tue, Jul 21, 2009 at 10:45:31AM +0300, Yaniv Kaul wrote:
 On 7/21/2009 9:11 AM, Yolkfull Chow wrote:

 SNIP



 Previously, I used 'create partition primary' to verify whether the disk 
 could be formatted but always got an error:
 ---
 diskpart has encountered an error...
 ---
 And then I found the SCSI disk was added to Windows guest was read-only. 
 So I changed the format command to be 'detail disk' temporarily.



 Interesting - how did that happen? Lets see your command line, and
 probably 'info block' from the monitor. Are you saying that hot-plugged
 drives are added as r/o?

  
 I am afraid yes. After pci_add (monitor command: pci_add pci_addr=auto 
 storage
 file=/tmp/stg.qcow2,if=scsi) the SCSI block device, 'info block' will
 show the scsi0-hd0 device is 'ro=0' whereas when I 'create partition
 primary' on this selected disk in diskpart tool, error message will be 
 raised that the disk is
 write protected.


 Well, that doesn't sound like the desired behavior. Work with the KVM
 developers on this.
  
 Hi Yaniv, following is the output from Windows guest:

 ---
 Microsoft DiskPart version 6.0.6001
 Copyright (C) 1999-2007 Microsoft Corporation.
 On computer: WIN-Q18A9GP5ECI

 Disk 1 is now the selected disk.

 DiskPart has encountered an error: The media is write protected.
 See the System Event Log for more information.

 Have you ever seen this error while formatting a newly added SCSI block
 device?

 The contents of my diskpart script file:
 ---
 select disk 1


 online

 create partition primary
 exit
 ---


 I didn't use a script - nor have I ever hot-plugged a disk, but it does
 seem to happen to me as well now - the 2nd disk (the first is IDE) does
 indeed seem to be R/O.
 I'll look into it.


  


 Also, you can always add an already formatted drive. Just create a qcow
 drive in another instance, format it properly and use it.


  

 Any result with an already formatted drive?

I just tried and got the same result -- write protected. Steps:

1. qemu-img create -f raw /tmp/stg.raw 1G
2. mkfs.vfat /tmp/stg.raw
3. hot_add the block device
4. diskpart to 'create partition primary' on the newly added disk

Did I make any mistake? Or should I also try to hot_add a drive that
already has an OS installed?
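A minimal sketch of the same reproduction driven from the test side, assuming vm.send_monitor_cmd() from kvm_vm and the /tmp/stg.raw path used above (an illustration only, not part of any patch):

import commands, logging

def _hotplug_preformatted_disk(vm, img="/tmp/stg.raw", size="1G"):
    # Create a raw image on the host and format it with FAT before hot-plug.
    commands.getoutput("qemu-img create -f raw %s %s" % (img, size))
    commands.getoutput("mkfs.vfat %s" % img)
    # Hot-add the disk through the monitor, then dump 'info block' so the
    # ro= flag of the new scsi0-hd0 device can be inspected in the logs.
    vm.send_monitor_cmd("pci_add pci_addr=auto storage file=%s,if=scsi" % img)
    status, output = vm.send_monitor_cmd("info block")
    logging.debug("info block after hot-add: %s", output)
    return output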

 Y.



Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)

2009-07-21 Thread Yolkfull Chow
On Mon, Jul 20, 2009 at 06:07:19PM +0300, Michael Goldish wrote:
 1) Log into a guest.
 2) Take a time reading from the guest and host.
 3) Run load on the guest and host.
 4) Take a second time reading.
 5) Stop the load and rest for a while.
 6) Take a third time reading.
 7) If the drift immediately after load is higher than a user-
 specified value (in %), fail.
 If the drift after the rest period is higher than a user-specified value,
 fail.
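The quoted patch below is truncated before the comparison step, but the drift check these steps describe amounts to something like the following sketch (the helper name and exact formula here are assumptions, not the patch's code):

def _drift_percent(host_t0, guest_t0, host_t1, guest_t1):
    # Time that elapsed on the host and on the guest between two readings.
    host_delta = host_t1 - host_t0
    guest_delta = guest_t1 - guest_t0
    # Drift expressed as a percentage of the host's elapsed time.
    return 100.0 * abs(host_delta - guest_delta) / host_delta

# Example: the guest lost ~2s over a 60s load period -> ~3.3% drift,
# e.g. _drift_percent(0.0, 0.0, 60.0, 58.0).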
 
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm/kvm.py   |1 +
  client/tests/kvm/kvm_tests.py |  161 
 -
  2 files changed, 160 insertions(+), 2 deletions(-)
 
 diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
 index b18b643..070e463 100644
 --- a/client/tests/kvm/kvm.py
 +++ b/client/tests/kvm/kvm.py
 @@ -55,6 +55,7 @@ class kvm(test.test):
  kvm_install:  test_routine(kvm_install, 
 run_kvm_install),
  linux_s3: test_routine(kvm_tests, run_linux_s3),
  stress_boot:  test_routine(kvm_tests, run_stress_boot),
 +timedrift:test_routine(kvm_tests, run_timedrift),
  }
  
  # Make it possible to import modules from the test's bindir
 diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
 index 5991aed..ca0b8c0 100644
 --- a/client/tests/kvm/kvm_tests.py
 +++ b/client/tests/kvm/kvm_tests.py
 @@ -1,4 +1,4 @@
 -import time, os, logging
 +import time, os, logging, re, commands
  from autotest_lib.client.common_lib import utils, error
  import kvm_utils, kvm_subprocess, ppm_utils, scan_results
  
 @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env):
  
  # boot the first vm
  vm = kvm_utils.env_get_vm(env, params.get(main_vm))
 -
  if not vm:
  raise error.TestError(VM object not found in environment)
  if not vm.is_alive():
 @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env):
  for se in sessions:
  se.close()
  logging.info(Total number booted: %d % (num -1))
 +
 +
 +def run_timedrift(test, params, env):
 +
 +Time drift test (mainly for Windows guests):
 +
 +1) Log into a guest.
 +2) Take a time reading from the guest and host.
 +3) Run load on the guest and host.
 +4) Take a second time reading.
 +5) Stop the load and rest for a while.
 +6) Take a third time reading.
 +7) If the drift immediately after load is higher than a user-
 +specified value (in %), fail.
 +If the drift after the rest period is higher than a user-specified value,
 +fail.
 +
 +@param test: KVM test object.
 +@param params: Dictionary with test parameters.
 +@param env: Dictionary with the test environment.
 +
 +vm = kvm_utils.env_get_vm(env, params.get(main_vm))
 +if not vm:
 +raise error.TestError(VM object not found in environment)
 +if not vm.is_alive():
 +raise error.TestError(VM seems to be dead; Test requires a living VM)
 +
 +logging.info(Waiting for guest to be up...)
 +
 +session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
 +if not session:
 +raise error.TestFail(Could not log into guest)
 +
 +logging.info(Logged in)
 +
 +# Collect test parameters:
 +# Command to run to get the current time
 +time_command = params.get(time_command)
 +# Filter which should match a string to be passed to time.strptime()
 +time_filter_re = params.get(time_filter_re)
 +# Time format for time.strptime()
 +time_format = params.get(time_format)
 +guest_load_command = params.get(guest_load_command)
 +guest_load_stop_command = params.get(guest_load_stop_command)
 +host_load_command = params.get(host_load_command)
 +guest_load_instances = int(params.get(guest_load_instances, 1))
 +host_load_instances = int(params.get(host_load_instances, 0))
 +# CPU affinity mask for taskset
 +cpu_mask = params.get(cpu_mask, 0xFF)
 +load_duration = float(params.get(load_duration, 30))
 +rest_duration = float(params.get(rest_duration, 10))
 +drift_threshold = float(params.get(drift_threshold, 200))
 +drift_threshold_after_rest = float(params.get(drift_threshold_after_rest,
 +  200))
 +
 +guest_load_sessions = []
 +host_load_sessions = []
 +
 +# Remember the VM's previous CPU affinity
 +prev_cpu_mask = commands.getoutput(taskset -p %s % vm.get_pid())
 +prev_cpu_mask = prev_cpu_mask.split()[-1]
 +# Set the VM's CPU affinity
 +commands.getoutput(taskset -p %s %s % (cpu_mask, vm.get_pid()))
 +
 +try:
 +# Get time before load
 +host_time_0 = time.time()
 +session.sendline(time_command)
 +(match, s) = session.read_up_to_prompt()
 +s = re.findall(time_filter_re, s)[0]
 +guest_time_0 = time.mktime(time.strptime(s, time_format))

Hi Michael, this 

Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)

2009-07-21 Thread Yolkfull Chow
On Tue, Jul 21, 2009 at 11:29:56AM -0400, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  On Mon, Jul 20, 2009 at 06:07:19PM +0300, Michael Goldish wrote:
   1) Log into a guest.
   2) Take a time reading from the guest and host.
   3) Run load on the guest and host.
   4) Take a second time reading.
   5) Stop the load and rest for a while.
   6) Take a third time reading.
   7) If the drift immediately after load is higher than a user-
   specified value (in %), fail.
   If the drift after the rest period is higher than a user-specified
  value,
   fail.
   
   Signed-off-by: Michael Goldish mgold...@redhat.com
   ---
client/tests/kvm/kvm.py   |1 +
client/tests/kvm/kvm_tests.py |  161
  -
2 files changed, 160 insertions(+), 2 deletions(-)
   
   diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
   index b18b643..070e463 100644
   --- a/client/tests/kvm/kvm.py
   +++ b/client/tests/kvm/kvm.py
   @@ -55,6 +55,7 @@ class kvm(test.test):
kvm_install:  test_routine(kvm_install,
  run_kvm_install),
linux_s3: test_routine(kvm_tests,
  run_linux_s3),
stress_boot:  test_routine(kvm_tests,
  run_stress_boot),
   +timedrift:test_routine(kvm_tests,
  run_timedrift),
}

# Make it possible to import modules from the test's
  bindir
   diff --git a/client/tests/kvm/kvm_tests.py
  b/client/tests/kvm/kvm_tests.py
   index 5991aed..ca0b8c0 100644
   --- a/client/tests/kvm/kvm_tests.py
   +++ b/client/tests/kvm/kvm_tests.py
   @@ -1,4 +1,4 @@
   -import time, os, logging
   +import time, os, logging, re, commands
from autotest_lib.client.common_lib import utils, error
import kvm_utils, kvm_subprocess, ppm_utils, scan_results

   @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env):

# boot the first vm
vm = kvm_utils.env_get_vm(env, params.get(main_vm))
   -
if not vm:
raise error.TestError(VM object not found in
  environment)
if not vm.is_alive():
   @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env):
for se in sessions:
se.close()
logging.info(Total number booted: %d % (num -1))
   +
   +
   +def run_timedrift(test, params, env):
   +
   +Time drift test (mainly for Windows guests):
   +
   +1) Log into a guest.
   +2) Take a time reading from the guest and host.
   +3) Run load on the guest and host.
   +4) Take a second time reading.
   +5) Stop the load and rest for a while.
   +6) Take a third time reading.
   +7) If the drift immediately after load is higher than a user-
   +specified value (in %), fail.
   +If the drift after the rest period is higher than a
  user-specified value,
   +fail.
   +
   +@param test: KVM test object.
   +@param params: Dictionary with test parameters.
   +@param env: Dictionary with the test environment.
   +
   +vm = kvm_utils.env_get_vm(env, params.get(main_vm))
   +if not vm:
   +raise error.TestError(VM object not found in
  environment)
   +if not vm.is_alive():
   +raise error.TestError(VM seems to be dead; Test requires a
  living VM)
   +
   +logging.info(Waiting for guest to be up...)
   +
   +session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
   +if not session:
   +raise error.TestFail(Could not log into guest)
   +
   +logging.info(Logged in)
   +
   +# Collect test parameters:
   +# Command to run to get the current time
   +time_command = params.get(time_command)
   +# Filter which should match a string to be passed to
  time.strptime()
   +time_filter_re = params.get(time_filter_re)
   +# Time format for time.strptime()
   +time_format = params.get(time_format)
   +guest_load_command = params.get(guest_load_command)
   +guest_load_stop_command =
  params.get(guest_load_stop_command)
   +host_load_command = params.get(host_load_command)
   +guest_load_instances = int(params.get(guest_load_instances,
  1))
   +host_load_instances = int(params.get(host_load_instances,
  0))
   +# CPU affinity mask for taskset
   +cpu_mask = params.get(cpu_mask, 0xFF)
   +load_duration = float(params.get(load_duration, 30))
   +rest_duration = float(params.get(rest_duration, 10))
   +drift_threshold = float(params.get(drift_threshold, 200))
   +drift_threshold_after_rest =
  float(params.get(drift_threshold_after_rest,
   +  200))
   +
   +guest_load_sessions = []
   +host_load_sessions = []
   +
   +# Remember the VM's previous CPU affinity
   +prev_cpu_mask = commands.getoutput(taskset -p %s %
  vm.get_pid())
   +prev_cpu_mask = prev_cpu_mask.split()[-1]
   +# Set the VM's CPU affinity

[PATCH] Add UUID option into kvm command line

2009-07-17 Thread Yolkfull Chow

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_vm.py |   24 
 1 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 503f636..48f2916 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -107,6 +107,7 @@ class VM:
 @param iso_dir: The directory where ISOs reside
 
 self.pid = None
+self.uuid = None
 
 self.name = name
 self.params = params
@@ -287,6 +288,11 @@ class VM:
 elif params.get(display) == nographic:
 qemu_cmd +=  -nographic
 
+if params.get(uuid) == random:
+qemu_cmd +=  -uuid %s % self.uuid
+elif params.get(uuid):
+qemu_cmd +=  -uuid %s % params.get(uuid)
+
 return qemu_cmd
 
 
@@ -371,6 +377,12 @@ class VM:
 if params.get(display) == vnc:
 self.vnc_port = kvm_utils.find_free_port(5900, 6000)
 
+# Generate a random UUID if 'uuid = random' is specified in the config file
+if params.get(uuid) == random:
+f = open(/proc/sys/kernel/random/uuid)
+self.uuid = f.read().strip()
+f.close()
+
 # Make qemu command
 qemu_command = self.make_qemu_command()
 
@@ -732,3 +744,15 @@ class VM:
 self.send_key(shift-%s % char.lower())
 else:
 self.send_key(char)
+
+
+def get_uuid(self):
+
+Return the UUID of the VM.
+
+@return: None if not specified in the config file
+
+if self.params.get(uuid) == random:
+return self.uuid
+else:
+return self.params.get(uuid, None)
-- 
1.6.2.5
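A sketch of how a test might consume the new get_uuid() helper to verify the value inside a Linux guest; the dmidecode call and the session API used here are assumptions, not part of this patch:

import re
from autotest_lib.client.common_lib import error

def _check_guest_uuid(vm, session):
    # UUID the VM was started with (None if not set in the config file).
    expected = vm.get_uuid()
    if expected is None:
        return
    # dmidecode reports the SMBIOS system UUID that -uuid sets.
    output = session.get_command_output("dmidecode -t system")
    match = re.search(r"UUID:\s*(\S+)", output)
    if not match or match.group(1).lower() != expected.lower():
        raise error.TestFail("Guest UUID does not match -uuid %s" % expected)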



Re: [PATCH] Specify the system UUID for VM

2009-07-16 Thread Yolkfull Chow
On Thu, Jul 16, 2009 at 06:26:46PM +0800, Yolkfull Chow wrote:
 
 Signed-off-by: Yolkfull Chow yz...@redhat.com
 ---
  client/tests/kvm/kvm_vm.py |   11 +++
  1 files changed, 11 insertions(+), 0 deletions(-)
 
 diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
 index 503f636..895049e 100644
 --- a/client/tests/kvm/kvm_vm.py
 +++ b/client/tests/kvm/kvm_vm.py
 @@ -113,6 +113,13 @@ class VM:
  self.qemu_path = qemu_path
  self.image_dir = image_dir
  self.iso_dir = iso_dir
 +
 +if params.get(uuid):
 +if params.get(uuid) == random:
 +uuid = os.popen(cat 
 /proc/sys/kernel/random/uuid).readline()
 +self.uuid = uuid.strip()
 +else:
 +self.uuid = params.get(uuid)

Sorry, forgot to initialize self.uuid.  Will resend the patch. 

  
  
  # Find available monitor filename
 @@ -374,6 +381,10 @@ class VM:
  # Make qemu command
  qemu_command = self.make_qemu_command()
  
 +# Specify the system UUID
 +if self.uuid:
 +qemu_command +=  -uuid %s % self.uuid
 +
  # Is this VM supposed to accept incoming migrations?
  if for_migration:
  # Find available migration port
 -- 
 1.6.2.5
 


[PATCH] Specify the system UUID for VM

2009-07-16 Thread Yolkfull Chow

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_vm.py |   12 
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 503f636..5f81965 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -113,6 +113,14 @@ class VM:
 self.qemu_path = qemu_path
 self.image_dir = image_dir
 self.iso_dir = iso_dir
+
+self.uuid = None
+if params.get(uuid):
+if params.get(uuid) == random:
+uuid = os.popen(cat /proc/sys/kernel/random/uuid).readline()
+self.uuid = uuid.strip()
+else:
+self.uuid = params.get(uuid)
 
 
 # Find available monitor filename
@@ -374,6 +382,10 @@ class VM:
 # Make qemu command
 qemu_command = self.make_qemu_command()
 
+# Specify the system UUID
+if self.uuid:
+qemu_command +=  -uuid %s % self.uuid
+
 # Is this VM supposed to accept incoming migrations?
 if for_migration:
 # Find available migration port
-- 
1.6.2.5



Re: [Autotest] [PATCH] Assign an UUID for each VM in kvm command line

2009-07-15 Thread Yolkfull Chow

On 07/15/2009 09:36 PM, Dor Laor wrote:

On 07/15/2009 12:12 PM, Yolkfull Chow wrote:
I would like to submit this patch, which comes from our internal kvm-autotest 
patches submitted by Jason, so that we can go on with test cases for 
parameter verification (UUID, DMI data, etc.).


Signed-off-by: Yolkfull Chow yz...@redhat.com
---
  client/tests/kvm/kvm_vm.py |4 
  1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 503f636..68cc235 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -287,6 +287,10 @@ class VM:
  elif params.get(display) == nographic:
  qemu_cmd +=  -nographic

+uuid = os.popen(cat /proc/sys/kernel/random/uuid).readline().strip()

+if uuid:
+qemu_cmd +=  -uuid %s % uuid


 If you change the uuid on every run, the guest will notice that. 
 Some guests (M$) might not love it.
 Why not use a static uuid, or just test the uuid in a specific test 
 without having it in all tests?
Hi Dor, we cannot use a static uuid when running the stress_boot test, 
but assigning a UUID only in a specific test is a good idea. Could we use 
an option like assign_uuid = yes for that specific test?
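Something along these lines in VM.make_qemu_command() could implement that; purely a sketch, with assign_uuid being only the proposed parameter name, not an existing one:

def _append_uuid_option(qemu_cmd, params, generated_uuid):
    # Hypothetical helper: only pass -uuid when the test opts in via
    # 'assign_uuid = yes', so the other tests keep a stable identity.
    if params.get("assign_uuid") != "yes":
        return qemu_cmd
    if params.get("uuid") == "random":
        return qemu_cmd + " -uuid %s" % generated_uuid
    if params.get("uuid"):
        return qemu_cmd + " -uuid %s" % params.get("uuid")
    return qemu_cmd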


btw: while you're at it, please add uuid to the block devices too.
+ the -smbios option.

Do you mean assigning a serial number to block devices?

Thanks for the suggestions. :)


Thanks,
dor


+
  return qemu_cmd







--
Regards,
Yolkfull



Re: [Autotest] [PATCH] Add a subtest pci_hotplug in kvm test

2009-07-14 Thread Yolkfull Chow

On 07/08/2009 09:51 AM, Lucas Meneghel Rodrigues wrote:

I've spent some time doing a second review and test of the code.
During my tests:

  * I found some problems with PCI hotplug itself and would like help
to figure out why things are not working as expected.
  * Made suggestions regarding the phrasing of the error messages
thrown by the test. Mostly nitpicking. Let me know if you think the
new messages make sense.
  * The order of the final test steps looks a bit weird to me.

Comments follow.

On Fri, Jul 3, 2009 at 3:00 AM, Yolkfull Chow yz...@redhat.com wrote:
   

On 07/03/2009 01:57 PM, Yolkfull Chow wrote:
 

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
   client/tests/kvm/kvm.py   |1 +
   client/tests/kvm/kvm_tests.cfg.sample |   65 ++-
   client/tests/kvm/kvm_tests.py |   94 
+
   client/tests/kvm/kvm_vm.py|2 +
   4 files changed, 161 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index b18b643..fc92e10 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -55,6 +55,7 @@ class kvm(test.test):
   kvm_install:  test_routine(kvm_install, 
run_kvm_install),
   linux_s3: test_routine(kvm_tests, run_linux_s3),
   stress_boot:  test_routine(kvm_tests, run_stress_boot),
+pci_hotplug:  test_routine(kvm_tests, run_pci_hotplug),
   }

   # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 2f864de..a9e16d6 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -94,6 +94,53 @@ variants:
   max_vms = 5
   alive_test_cmd = ps aux

+
+- nic_hotplug:
+type = pci_hotplug
+pci_type = nic
+modprobe_acpiphp = yes
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
   

I tried block device hotplug, lspci doesn't show up the newly added
devices. Already tried with F8 and F9. Any idea why?
   
  There is no need to modprobe acpiphp on Fedora; also, neither F8 nor F9 
has the virtio and virtio_ring modules.  I ran it on Fedora-11 
and Win2008 guests, and both ran successfully:


# ./scan_results.py
test                                               status  seconds  info
-------------------------------------------------  ------  -------  ----
Fedora.11.32.nic_hotplug.nic_rtl8139               GOOD    68       completed successfully
Fedora.11.32.nic_hotplug.nic_virtio                GOOD    46       completed successfully
Fedora.11.32.block_hotplug.fmt_qcow2.block_virtio  GOOD    46       completed successfully
Fedora.11.32.block_hotplug.fmt_qcow2.block_scsi    GOOD    44       completed successfully
Fedora.11.32.block_hotplug.fmt_raw.block_virtio    GOOD    45       completed successfully
Fedora.11.32.block_hotplug.fmt_raw.block_scsi      GOOD    46       completed successfully
Win2008.32.nic_hotplug.nic_rtl8139                 GOOD    66       completed successfully
Win2008.32.block_hotplug.fmt_qcow2.block_scsi      GOOD    186      completed successfully
Win2008.32.block_hotplug.fmt_raw.block_scsi        GOOD    71       completed successfully


   

+pci_test_cmd = 'nslookup www.redhat.com'
+seconds_wait_for_device_install = 3
+variants:
+- @nic_8139:
+pci_model = rtl8139
+match_string = 8139
+- nic_virtio:
+pci_model = virtio
+match_string = Virtio network device
+- nic_e1000:
+pci_model = e1000
+match_string = Gigabit Ethernet Controller
   

Pretty much all block hotplug 'guest side checks' are failing during the
stage where the output of lspci | tail -n1 is compared with the
match strings. The hypervisor is qemu 0.10.5 (kvm-87 upstream).
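For context, the guest-side check being discussed boils down to roughly the following simplified sketch (the real test also diffs the reference_cmd output before and after hot-plug):

import re
from autotest_lib.client.common_lib import error

def _verify_new_device(session, find_pci_cmd, match_string):
    # Run e.g. "lspci | tail -n1" inside the guest and check that the newly
    # hot-plugged device's description contains the expected string.
    output = session.get_command_output(find_pci_cmd)
    if not re.search(re.escape(match_string), output, re.IGNORECASE):
        raise error.TestFail("'%s' not found in: %s" % (match_string, output))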

   

+- block_hotplug:
+type = pci_hotplug
+pci_type = block
+modprobe_acpiphp = yes
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+images +=  stg
+boot_drive_stg = no
+image_name_stg = storage
+image_size = 1G
+force_create_image_stg = yes
+pci_test_cmd = 'dir'
+seconds_wait_for_device_install = 3
+variants:
+- block_virtio:
+pci_model = virtio
+match_string = Virtio block device
+- block_scsi:
+pci_model = scsi
+match_string = SCSI storage controller
+variants:
+- fmt_qcow2:
+image_format_stg = qcow2
+- fmt_raw:
+image_format_stg = raw
+
+
   # NICs
   variants:
   - @rtl8139:
@@ -306,6 +353,22 @@ variants

[PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.

2009-07-14 Thread Yolkfull Chow
This is a KVM subtest. It currently verifies newly added PCI block devices. For 
Windows support it needs use_telnet, since 'wmic', which is used to check disk 
info, can only be executed in a telnet session, not over ssh. I just ran it on 
Fedora-11.32 and Windows2008.32 guests; both passed:

# ./scan_results.py
test                                               status  seconds  info
-------------------------------------------------  ------  -------  ----
Fedora.11.32.nic_hotplug.nic_rtl8139               GOOD    68       completed successfully
Fedora.11.32.nic_hotplug.nic_virtio                GOOD    46       completed successfully
Fedora.11.32.block_hotplug.fmt_qcow2.block_virtio  GOOD    46       completed successfully
Fedora.11.32.block_hotplug.fmt_qcow2.block_scsi    GOOD    44       completed successfully
Fedora.11.32.block_hotplug.fmt_raw.block_virtio    GOOD    45       completed successfully
Fedora.11.32.block_hotplug.fmt_raw.block_scsi      GOOD    46       completed successfully
Win2008.32.nic_hotplug.nic_rtl8139                 GOOD    66       completed successfully
Win2008.32.block_hotplug.fmt_qcow2.block_scsi      GOOD    186      completed successfully
Win2008.32.block_hotplug.fmt_raw.block_scsi        GOOD    71       completed successfully



Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm.py   |1 +
 client/tests/kvm/kvm_tests.cfg.sample |   69 +++-
 client/tests/kvm/kvm_tests.py |   98 +
 client/tests/kvm/kvm_vm.py|2 +
 4 files changed, 169 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index b18b643..fc92e10 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -55,6 +55,7 @@ class kvm(test.test):
 kvm_install:  test_routine(kvm_install, run_kvm_install),
 linux_s3: test_routine(kvm_tests, run_linux_s3),
 stress_boot:  test_routine(kvm_tests, run_stress_boot),
+pci_hotplug:  test_routine(kvm_tests, run_pci_hotplug),
 }
 
 # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index 2f864de..7ec6f72 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -91,9 +91,56 @@ variants:
 
 - stress_boot:
 type = stress_boot
-max_vms = 5
+max_vms = 5
 alive_test_cmd = ps aux
 
+
+- nic_hotplug:
+type = pci_hotplug
+pci_type = nic
+modprobe_acpiphp = yes
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+pci_test_cmd = 'nslookup www.redhat.com'
+seconds_wait_for_device_install = 3
+variants:
+- nic_8139:
+pci_model = rtl8139
+match_string = 8139
+- nic_virtio:
+pci_model = virtio
+match_string = Virtio network device
+- nic_e1000:
+pci_model = e1000
+match_string = Gigabit Ethernet Controller
+
+- block_hotplug:
+type = pci_hotplug
+pci_type = block
+modprobe_acpiphp = yes
+reference_cmd = lspci
+find_pci_cmd = 'lspci | tail -n1'
+images +=  stg
+boot_drive_stg = no
+image_name_stg = storage
+image_size = 1G
+force_create_image_stg = yes
+seconds_wait_for_device_install = 3
+pci_test_cmd = 'yes|mke2fs `fdisk -l 2>&1 |grep '/dev/[sv]d[a-z] doesn' | awk '{print $2}'`'
+variants:
+- block_virtio:
+pci_model = virtio
+match_string = Virtio block device
+- block_scsi:
+pci_model = scsi
+match_string = SCSI
+variants:
+- fmt_qcow2:
+image_format_stg = qcow2
+- fmt_raw:
+image_format_stg = raw
+
+
 # NICs
 variants:
 - @rtl8139:
@@ -119,6 +166,10 @@ variants:
 - Fedora:
 no setup
 ssh_prompt = \[r...@.{0,50}][\#\$] 
+nic_hotplug:
+modprobe_acpiphp = no
+block_hotplug:
+modprobe_acpiphp = no
 
 variants:
 - 8.32:
@@ -306,6 +357,22 @@ variants:
 migration_test_command = ver  vol
 stress_boot:
 alive_test_cmd = systeminfo
+nic_hotplug:
+modprobe_acpiphp = no
+reference_cmd = systeminfo
+seconds_wait_for_device_install = 10
+find_pci_cmd = ipconfig /all | find Description
+nic_e1000:
+match_string = Intel(R

  1   2   >