Re: [Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-04-09 Thread sudhir kumar
On Fri, Apr 9, 2010 at 2:40 PM, pradeep psuri...@linux.vnet.ibm.com wrote:

 Hi Lucas

 Thanks for your comments.
 Please find the patch, with suggested changes.

 Thanks
 Pradeep



 Signed-off-by: Pradeep Kumar Surisetty psuri...@linux.vnet.ibm.com
 ---
 diff -uprN autotest-old/client/tests/kvm/tests/balloon_check.py
 autotest/client/tests/kvm/tests/balloon_check.py
 --- autotest-old/client/tests/kvm/tests/balloon_check.py        1969-12-31
 19:00:00.0 -0500
 +++ autotest/client/tests/kvm/tests/balloon_check.py    2010-04-09
 12:33:34.0 -0400
 @@ -0,0 +1,47 @@
 +import re, string, logging, random, time
 +from autotest_lib.client.common_lib import error
 +import kvm_test_utils, kvm_utils
 +
 +def run_balloon_check(test, params, env):
 +    """
 +    Check memory ballooning:
 +    1) Boot a guest
 +    2) Increase and decrease the memory of the guest using the balloon
 +    command from the monitor
Better replace this description with something like "Change the guest
memory between X and Y values".
Also, instead of hard-coding 0.6 and 0.95 below, better use two variables
and take their values from the config file. This will give the user the
flexibility to narrow or widen the ballooning range.

 +    3) Check memory info
 +
 +    @param test: kvm test object
 +    @param params: Dictionary with the test parameters
 +    @param env: Dictionary with test environment.
 +    """
 +
 +    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
 +    session = kvm_test_utils.wait_for_login(vm)
 +    fail = 0
 +
 +    # Check memory size
 +    logging.info("Memory size check")
 +    expected_mem = int(params.get("mem"))
 +    actual_mem = vm.get_memory_size()
 +    if actual_mem != expected_mem:
 +        logging.error("Memory size mismatch:")
 +        logging.error("Assigned to VM: %s" % expected_mem)
 +        logging.error("Reported by OS: %s" % actual_mem)
 +
 +    # Change memory to a random size between 60% and 95% of actual memory
 +    percent = random.uniform(0.6, 0.95)
 +    new_mem = int(percent * expected_mem)
 +    vm.send_monitor_cmd("balloon %s" % new_mem)

You may want to check whether the command passed or failed; older
versions might not support ballooning.

 +    time.sleep(20)
Why a 20-second sleep, and why the magic number?

 +    status, output = vm.send_monitor_cmd("info balloon")
You might want to put this check before changing the memory.

 +    if status != 0:
 +        logging.error("qemu monitor command failed: info balloon")
 +
 +    balloon_cmd_mem = int(re.findall("\d+", output)[0])
A better variable name would be ballooned_mem.

 +    if balloon_cmd_mem != new_mem:
 +        logging.error("Memory ballooning failed while changing memory to"
 +                      " %s" % balloon_cmd_mem)
 +        fail += 1
 +
 +    # Check the test result
 +    if fail != 0:
If you run multiple iterations and the 2nd iteration fails, you will
always miss this condition.

 +        raise error.TestFail("Memory ballooning test failed")
 +    session.close()
 diff -uprN autotest-old/client/tests/kvm/tests_base.cfg.sample
 autotest/client/tests/kvm/tests_base.cfg.sample
 --- autotest-old/client/tests/kvm/tests_base.cfg.sample 2010-04-09
 12:32:50.0 -0400
 +++ autotest/client/tests/kvm/tests_base.cfg.sample     2010-04-09
 12:53:27.0 -0400
 @@ -185,6 +185,10 @@ variants:
                 drift_threshold = 10
                 drift_threshold_single = 3

 +    - balloon_check:  install setup unattended_install boot
 +        type = balloon_check
 +        extra_params += " -balloon virtio"
 +
     - stress_boot:  install setup unattended_install
         type = stress_boot
         max_vms = 5
 ---

The rest all looks good.
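Pulling these review points together, the target-size computation could be factored out as below. This is only a sketch: the `balloon_ratio_min`/`balloon_ratio_max` config keys are my assumptions, not existing parameters, with defaults mirroring the patch's hard-coded 0.6/0.95.

```python
import random

def pick_balloon_size(expected_mem, params):
    """Choose a balloon target between two config-driven ratios."""
    # Hypothetical config keys; defaults mirror the patch's 0.6/0.95.
    lo = float(params.get("balloon_ratio_min", 0.6))
    hi = float(params.get("balloon_ratio_max", 0.95))
    return int(random.uniform(lo, hi) * expected_mem)
```

A user could then narrow the range purely from the config file, without touching the test code.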

 ___
 Autotest mailing list
 autot...@test.kernel.org
 http://test.kernel.org/cgi-bin/mailman/listinfo/autotest





-- 
Regards
Sudhir Kumar
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Networkconfiguration with KVM

2010-04-04 Thread sudhir kumar
On Sun, Apr 4, 2010 at 5:47 PM, Dan Johansson k...@dmj.nu wrote:
 Hi,

 I am new to this list and to KVM (and qemu) so please be gentle with me.
 Up until now I have been running my virtualization using VMware Server. Now
 I want to try KVM due to some issues with the VMware Server, and I am
 having some trouble with the networking part of KVM.

 This is a small example of what I want (best viewed in a fix-font):

 +-----------------------------------+
 | Host                              |
 |  +----------+              eth0   | 192.168.1.0/24
 |  |      eth0|---+                 |
 |  | VM1  eth1|---(---+------ eth1  | 192.168.2.0/24
 |  |      eth2|---(---(---+         |
 |  +----------+   |   |   |         |
 |                 |   |   |         |
 |  +----------+   +---(---(-- eth2  | 192.168.1.0/24
 |  |      eth0|---+   |   |         |
 |  | VM2  eth1|-------+   +-- eth3  | 192.168.3.0/24
 |  |      eth2|-----------+         |
 |  +----------+                     |
 |                                   |
 +-----------------------------------+

 Host-eth0 is only for the Host (no VM)
 Host-eth1 is shared between the Host and the VMs (VM?-eth1)
 Host-eth2 and Host-eth3 are only for the VMs (eth0 and eth2)

 The Host and the VMs all have fixed IPs (no dhcp or likewise).
 In this example the IPs could be:
 Host-eth0:      192.168.1.1
 Host-eth1:      192.168.2.1
 Host-eth2:      -
 Host-eth3:      -
 VM1-eth0:               192.168.1.11
 VM1-eth1:               192.168.2.11
 VM1-eth2:               192.168.3.11
 VM2-eth0:               192.168.1.22
 VM2-eth1:               192.168.2.22
 VM2-eth2:               192.168.3.22

 And, yes, Host-eth0 and Host-eth2 are in the same subnet, with eth0 dedicated
 to the Host and eth2 dedicated to the VMs.

 In VMWare this was quite easy to setup (three bridged networks).

It's easy with KVM too. You want 3 NICs per VM, so you need to pass the
corresponding parameters (including a qemu-ifup script) for 3 NICs to
each VM.
On the host you need to create three bridges, say br-eth1, br-eth2 and
br-eth3, and make each of them the host's interface in place of the
corresponding eth interface (brctl addbr br-eth1; ifconfig eth1 0.0.0.0
up; brctl addif br-eth1 eth1; assign eth1's IP and routes to br-eth1;
same for eth2 and eth3).
In the corresponding qemu-ifup script of each interface use
bridge=br-ethN (this basically translates to brctl addif br-ethN $1,
where $1 is the tap device created).
This should work perfectly fine with your existing network setup.
For a quick reference use: http://www.linux-kvm.org/page/Networking
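The bridge setup sketched above can be captured as a small helper that just generates the shell commands to run (a sketch; the helper and its names are mine, and the real commands must be executed as root):

```python
def bridge_setup_cmds(iface, bridge, ip_and_mask=None):
    """Return the shell commands that put `iface` behind `bridge`."""
    cmds = [
        "brctl addbr %s" % bridge,           # create the bridge
        "ifconfig %s 0.0.0.0 up" % iface,    # strip the IP off the NIC
        "brctl addif %s %s" % (bridge, iface),
    ]
    if ip_and_mask:
        # move the NIC's old address onto the bridge
        cmds.append("ifconfig %s %s up" % (bridge, ip_and_mask))
    return cmds
```

Running the returned commands for eth1, eth2 and eth3 (and pointing each qemu-ifup script at the matching br-ethN) reproduces the setup described above.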

 Does someone know how I can set this up with KVM/QEMU?

 Regards,
 --
 Dan Johansson, http://www.dmj.nu
 ***
 This message is printed on 100% recycled electrons!
 ***




-- 
Regards
Sudhir Kumar


Re: [Autotest] [PATCH] KVM test: Exposing boot and reboot timeouts in config files

2010-03-08 Thread sudhir kumar
 --
 1.6.6.1





-- 
Sudhir Kumar


Re: [Autotest] [PATCH 2/2] KVM test: Add cpu_set subtest

2010-02-25 Thread sudhir kumar
 the
 +    # autotest CPU Hotplug test
 +    timeout = int(params.get("cpu_hotplug_timeout"))
 +    test_name = "cpu_hotplug"
 +    control_path = os.path.join(test.bindir, "autotest_control",
 +                                "cpu_hotplug.control")
 +    outputdir = test.outputdir
 +
 +    logging.info("Executing CPU hotplug autotest on guest")
 +    kvm_test_utils.run_autotest(vm, session, control_path, timeout,
 +                                test_name, outputdir)
 +
 +
 diff --git a/client/tests/kvm/tests_base.cfg.sample 
 b/client/tests/kvm/tests_base.cfg.sample
 index b00ed9e..07b394a 100644
 --- a/client/tests/kvm/tests_base.cfg.sample
 +++ b/client/tests/kvm/tests_base.cfg.sample
 @@ -287,6 +287,11 @@ variants:
         kill_vm = yes
         kill_vm_gracefully = no

 +    - cpu_set:
Shouldn't we add the dependency, like "only Linux" or "no Windows"?
 +        type = cpu_set
 +        cpu_hotplug_timeout = 600
 +        n_cpus_add = 1
 +
     - system_reset: install setup unattended_install
         type = boot
         reboot_method = system_reset
Rest looks fine!!
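The dependency could be expressed with the config file's filter syntax, e.g. (a sketch; it assumes the guest-OS variants are named Linux/Windows as in tests_base.cfg.sample):

```
    - cpu_set:
        type = cpu_set
        cpu_hotplug_timeout = 600
        n_cpus_add = 1
        only Linux
```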
 --
 1.6.6.1





-- 
Sudhir Kumar


Re: [Autotest] [PATCH 1/2] KVM test: Refactoring the 'autotest' subtest

2010-02-25 Thread sudhir kumar
 before ABORT results)
 -    bad_results = [r for r in results if r[1] == "FAIL"]
 -    bad_results += [r for r in results if r[1] == "ERROR"]
 -    bad_results += [r for r in results if r[1] == "ABORT"]
 +    kvm_test_utils.run_autotest(vm, session, control_path, timeout,
 +                                test_name, outputdir)

 -    # Fail the test if necessary
 -    if not results:
 -        raise error.TestFail("Test '%s' did not produce any recognizable "
 -                             "results" % test_name)
 -    if bad_results:
 -        result = bad_results[0]
 -        raise error.TestFail("Test '%s' ended with %s (reason: '%s')"
 -                             % (result[0], result[1], result[3]))
 --
 1.6.6.1





-- 
Sudhir Kumar


Re: KVM source code

2010-02-11 Thread sudhir kumar
On Thu, Feb 11, 2010 at 4:44 PM, Puja Gupta pmgupta@gmail.com wrote:
 Hello friends ,
           Can anyone tell me where to get kvm source code from and
 how to compile it using make.
I think you do not like Google :)
The very first search for "kvm code" gives this:
http://www.linux-kvm.org/page/Code

This gives you all the information you need.





-- 
Sudhir Kumar


Re: [Autotest] [RFC] KVM test: Ship rss.exe and finish.exe binaries with KVM test

2010-02-03 Thread sudhir kumar
Lucas Great!!
A nice pill for the windows' pain.

On Tue, Feb 2, 2010 at 5:18 PM, Lucas Meneghel Rodrigues l...@redhat.com 
wrote:
 Hi folks:

 We're on an effort of streamlining the KVM test experience, by choosing
 sane defaults and helper scripts that can overcome the initial barrier
 with getting the KVM test running. On one of the conversations I've had
 today, we came up with the idea of shipping the compiled windows
 programs rss.exe and finish.exe, needed for windows hosts testing.

 Even though rss.exe and finish.exe can be compiled in a fairly
 straightforward way using the awesome cross compiling environment with
 mingw, there are some obvious limitations to it:

 1) The cross compiling environment is only available for Fedora >= 11.
 No other distros I know have it.
And we have to take care of it, otherwise people will not stop
complaining about the complexity involved in kvm testing under
autotest.


 2) Sometimes it might take time for the user to realize he/she has to
 compile the source code under unattended/ folder, and how to do it.

 That person would take a couple of failed attempts scratching his/her
 head thinking "what the heck is this deps/finish.exe they're talking
 about?". Surely documentation can help, and I am looking at making the
 documentation on how to do it more easily discoverable.

 That said, shipping the binaries would make the life of those people
 easier, and anyway the binaries work pretty well across all versions of
 windows from winxp to win7, they are self contained, with no external
 dependencies (they all use the standard win32 API).

 3) That said we also need a script that can build the entire
 winutils.iso without making the user spend way too much time figuring
 out how to do it. I want to work on such a script on the next days.

 So, what are your opinions? Should we ship the binaries or pursue a
 script that can build those for the user as soon as the (yet to be
 integrated) get_started.py script runs? Remember that the latter might
 mean users of RHEL <= 5.X and debian-like will be left out in the cold.
I am completely in favour of shipping the binaries and the script to
build the cd.iso. This will also help with the install/setup of Windows
guests. I know many people complain that getting started with autotest
for kvm testing is complex, which it definitely is not (however it may
appear). This will make windows testing a bit smoother. In my personal
experience, I usually keep a fully configured win.img across multiple
qemu-kvm versions for running different tests; the freshly installed
images left after the install test I usually delete.

So my vote is in favour of shipping the binaries (and they are not too large).

 Looking forward hearing your input,

 Lucas





-- 
Sudhir Kumar


Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest 'qemu_img'

2010-01-27 Thread sudhir kumar
)
+        if not ("snapshot0" in o and "snapshot1" in o and s == 0):
+            raise error.TestFail("Snapshot creation failed or missing;"
+                                 " snapshot list is: \n%s" % o)
+        for i in range(2):
+            snapshot_name = "snapshot%d" % i
+            delcmd = cmd
+            delcmd += " -d %s %s" % (snapshot_name, image_name)
+            s, o = commands.getstatusoutput(delcmd)
+            if s != 0:
+                raise error.TestFail("Delete snapshot '%s' failed: %s" %
+                                     (snapshot_name, o))
+
+    # Subcommand 'qemu-img commit' test
+    def commit_test(cmd):
+        pass
+
+    # Here starts the test
+    eval("%s_test" % subcommand)

 Aren't you missing a () -- eval("%s_test()" % subcommand)?

 BTW, Yolkfull, have you tried running the test and verifying that it
 works?

 Oh, really missed it. I tested it when using method of passing 'cmd' as a 
 parameter
 to subcommand functions and it worked fine. So introduced this mistake when 
 changing
 to use 'global'. Will fix it in next version which will also add 'rebase' 
 subcommand test.

 Thanks for comments, Michael.
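As an aside, a name-based lookup avoids both the missing-() class of bug and string evaluation altogether. A sketch under the assumption that the subtests keep the `<subcommand>_test` naming (the placeholder bodies are mine, not the patch's code):

```python
def check_test(cmd):
    """Placeholder for the 'qemu-img check' subtest."""
    return "check ran"

def snapshot_test(cmd):
    """Placeholder for the 'qemu-img snapshot' subtest."""
    return "snapshot ran"

def run_subcommand(subcommand, cmd):
    """Look the subtest up by name instead of eval()-ing a string."""
    try:
        subtest = globals()["%s_test" % subcommand]
    except KeyError:
        raise ValueError("unknown qemu-img subcommand: %s" % subcommand)
    return subtest(cmd)
```

An unknown subcommand then fails with a clear error instead of a NameError buried inside eval().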


  
   In the above expression, we would also benefit from encapsulating
  all
   qemu-img tests on a class:
  
         tester = QemuImgTester()
         tester.test(subcommand)
  
   Or something similar.
  
   That said, I was wondering if we could consolidate all qemu-img
  tests to
   a single execution, instead of splitting it to several variants. We
   could keep a failure record, execute all tests and fail the entire
  test
   if any of them failed. It's not like terribly important, but it
  seems
   more logical to group all qemu-img subcommands testing under a
  single
   test.
  
   Thanks for your work, and please take a look at implementing my
   suggestions.
 




-- 
Sudhir Kumar


Re: [Autotest] [PATCH] Add a server-side test - kvm_migration

2009-12-07 Thread sudhir kumar
Resending with proper cc list :(

On Mon, Dec 7, 2009 at 2:43 PM, sudhir kumar smalik...@gmail.com wrote:
 Thanks for initiating the server-side implementation of migration. A few
 comments below

 On Fri, Dec 4, 2009 at 1:48 PM, Yolkfull Chow yz...@redhat.com wrote:
 This patch will add a server-side test namely kvm_migration. Currently,
 it will use existing KVM client test framework and add a new file
 kvm_migration.py to help judge executing routine: source machine or dest
 machine.

 * One thing needs to be considered/improved:
 Whether we parse the kvm_tests.cfg on the server machine or on the client machines?
 If we parse it on the client machines, we need to fix one problem: adding the
 'start_vm_for_migration' parameter into the dict generated on the dest machine.
I think we cannot manage with client-side parsing without adding too
much complexity. So let us continue parsing on the server side only
for remote migration. Also, as the patch does, keep local migration
under the client. I do not like adding test variants in
migration_control.srv. Comments below...

 So far I choose parsing kvm_tests.cfg on server machine, and then add
 'start_vm_for_migration' into dict cloned from original test dict for dest
 machine.

 * In order to run this test so far, we need to setup NFS for both
 source and dest machines.

 Signed-off-by: Yolkfull Chow yz...@redhat.com
 ---
  client/tests/kvm/kvm_migration.py      |  165 
 
  client/tests/kvm/kvm_test_utils.py     |   27 +++---
  client/tests/kvm/kvm_tests.cfg.sample  |    2 +
  client/tests/kvm_migration             |    1 +
  server/tests/kvm/migration_control.srv |  137 ++
  5 files changed, 320 insertions(+), 12 deletions(-)
  create mode 100644 client/tests/kvm/kvm_migration.py
  create mode 12 client/tests/kvm_migration
  create mode 100644 server/tests/kvm/migration_control.srv

 diff --git a/client/tests/kvm/kvm_migration.py 
 b/client/tests/kvm/kvm_migration.py
 new file mode 100644
 index 000..52cd3cd
 --- /dev/null
 +++ b/client/tests/kvm/kvm_migration.py
 @@ -0,0 +1,165 @@
 +import sys, os, time, logging, commands, socket
 +from autotest_lib.client.bin import test
 +from autotest_lib.client.common_lib import error
 +import kvm_utils, kvm_preprocessing, common, kvm_vm, kvm_test_utils
 +
 +
 +class kvm_migration(test.test):
 +    """
 +    KVM migration test.
 +
 +    @copyright: Red Hat 2008-2009
 +    @see: http://www.linux-kvm.org/page/KVM-Autotest/Client_Install
 +          (Online doc - Getting started with KVM testing)
 +
 +    Migration execution progress:
 +
 +    source host                     dest host
 +    --
 +    log into guest
 +    --
 +    start socket server
 +
 +     wait 30 secs -- wait login_timeout+30 secs---
 +
 +    accept connection             connect to socket server,send mig_port
 +    --
 +    start migration
 +
 +     wait 30 secs -- wait mig_timeout+30 secs-
 +
 +                                  try to log into migrated guest
 +    --
 +
 +    """
 +    version = 1
 +    def initialize(self):
 +        pass
 +
 +
 +    def run_once(self, params):
 +        """
 +        Setup remote machine and then execute migration.
 +        """
 +        # Check whether remote machine is ready
 +        dsthost = params.get("dsthost")
 +        srchost = params.get("srchost")
 +        image_path = os.path.join(self.bindir, "images")
 +
 +        rootdir = params.get("rootdir")
 +        iso = os.path.join(rootdir, 'iso')
 +        images = os.path.join(rootdir, 'images')
 +        qemu = os.path.join(rootdir, 'qemu')
 +        qemu_img = os.path.join(rootdir, 'qemu-img')
 +
 +        def link_if_not_exist(ldir, target, link_name):
 +            t = target
 +            l = os.path.join(ldir, link_name)
 +            if not os.path.exists(l):
 +                os.symlink(t,l)
 +        link_if_not_exist(self.bindir, '../../', 'autotest')
 +        link_if_not_exist(self.bindir, iso, 'isos')
 +        link_if_not_exist(self.bindir, images, 'images')
 +        link_if_not_exist(self.bindir, qemu, 'qemu')
 +        link_if_not_exist(self.bindir, qemu_img, 'qemu-img')
 +
 +        # Report the parameters we've received and write them as keyvals
 +        logging.debug("Test parameters:")
 +        keys = params.keys()
 +        keys.sort()
 +        for key in keys:
 +            logging.debug("    %s = %s", key, params[key])
 +            self.write_test_keyval({key: params[key]})
 +
 +        # Open the environment file
 +        env_filename = os.path.join(self.bindir, params.get("env", "env"))
 +        env = kvm_utils.load_env(env_filename, {})
 +        logging.debug("Contents of environment: %s"

[Autotest][PATCH 1/2] add hackbench test to kvm autotest

2009-12-03 Thread sudhir kumar
This patch adds the hackbench test for the KVM linux guests.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: kvm/autotest_control/hackbench.control
===
--- /dev/null
+++ kvm/autotest_control/hackbench.control
@@ -0,0 +1,13 @@
+AUTHOR = "Sudhir Kumar <sku...@linux.vnet.ibm.com>"
+NAME = "Hackbench"
+TIME = "SHORT"
+TEST_CLASS = "Kernel"
+TEST_CATEGORY = "Benchmark"
+TEST_TYPE = "client"
+
+DOC = """
+Hackbench is a benchmark which measures the performance, overhead and
+scalability of the Linux scheduler.
+"""
+
+job.run_test('hackbench')



-- 
Sudhir Kumar


[Autotest][PATCH 2/2] add hackbench variant into sample config file

2009-12-03 Thread sudhir kumar
This patch adds the test variant for hackbench into the sample config file.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: kvm/kvm_tests.cfg.sample
===
--- kvm.orig/kvm_tests.cfg.sample
+++ kvm/kvm_tests.cfg.sample
@@ -117,6 +117,9 @@ variants:
 - npb:
 test_name = npb
 test_control_file = npb.control
+- hackbench:
+test_name = hackbench
+test_control_file = hackbench.control


 - linux_s3: install setup unattended_install


-- 
Sudhir Kumar


Re: [Autotest] [PATCH 1/2] Adds a test to verify resources inside a VM

2009-12-02 Thread sudhir kumar
On Wed, Dec 2, 2009 at 9:10 AM, Lucas Meneghel Rodrigues l...@redhat.com 
wrote:
 On Wed, 2009-12-02 at 08:59 +0530, sudhir kumar wrote:
 On Wed, Dec 2, 2009 at 7:51 AM, Yolkfull Chow yz...@redhat.com wrote:
 
  Looks good for me. Thanks Lucas for improving this test.
 
  Sudhir, what do you think about this? :)
 I need a couple of hours before I go through the patch. I will post my
 comments by today. Also I would like to give it a quick run for Windows
 guests, which are more prone to break :) Thanks Lucas for the effort.

 Thanks guys,

 I've found some bugs on the version I sent to the mailing list, the bugs
 were fixed and now the test looks like this - please review this
 version!

 import re, string, logging
 from autotest_lib.client.common_lib import error
 import kvm_test_utils, kvm_utils


 def run_physical_resources_check(test, params, env):
    """
    Check physical resources assigned to KVM virtual machines:
    1) Log into the guest
    2) Verify whether cpu counts, memory size, nics' model,
       count and drives' format & count, drive_serial, UUID
       reported by the guest OS match what has been assigned
       to the VM (qemu command line)
    3) Verify all MAC addresses for guest NICs

    @param test: kvm test object
    @param params: Dictionary with the test parameters
    @param env: Dictionary with test environment.
    """
    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
    session = kvm_test_utils.wait_for_login(vm)

    logging.info("Starting physical resources check test")
    logging.info("Values assigned to VM are the values we expect "
                 "to see reported by the Operating System")
    # Define a failure counter, as we want to check all physical
    # resources to know which checks passed and which ones failed
    n_fail = 0

    # Check cpu count
    logging.info("CPU count check")
    expected_cpu_nr = int(params.get("smp"))
    actual_cpu_nr = vm.get_cpu_count()
    if expected_cpu_nr != actual_cpu_nr:
        n_fail += 1
        logging.error("CPU count mismatch:")
        logging.error("    Assigned to VM: %s" % expected_cpu_nr)
        logging.error("    Reported by OS: %s" % actual_cpu_nr)

    # Check memory size
    logging.info("Memory size check")
    expected_mem = int(params.get("mem"))
    actual_mem = vm.get_memory_size()
    if actual_mem != expected_mem:
        n_fail += 1
        logging.error("Memory size mismatch:")
        logging.error("    Assigned to VM: %s" % expected_mem)
        logging.error("    Reported by OS: %s" % actual_mem)

    # Define a function for checking the number of hard drives & NICs
    def check_num(devices, cmd, check_str):
        f_fail = 0
        expected_num = kvm_utils.get_sub_dict_names(params, devices).__len__()
        s, o = vm.send_monitor_cmd(cmd)
        if s != 0:
            f_fail += 1
            logging.error("qemu monitor command failed: %s" % cmd)

        actual_num = string.count(o, check_str)
        if expected_num != actual_num:
            f_fail += 1
            logging.error("%s number mismatch:" % devices)
            logging.error("    Assigned to VM: %d" % expected_num)
            logging.error("    Reported by OS: %d" % actual_num)
        return expected_num, f_fail

    logging.info("Hard drive count check")
    drives_num, f_fail = check_num("images", "info block", "type=hd")
    n_fail += f_fail

    logging.info("NIC count check")
    nics_num, f_fail = check_num("nics", "info network", "model=")
    n_fail += f_fail

    # Define a function for checking hard drives' & NICs' model
    def chk_fmt_model(device, fmt_model, cmd, str):
        f_fail = 0
        devices = kvm_utils.get_sub_dict_names(params, device)
        for chk_device in devices:
            expected = kvm_utils.get_sub_dict(params,
                                              chk_device).get(fmt_model)
            if not expected:
                expected = "rtl8139"
chk_fmt_model is a generic function; why are we initializing this
variable 'expected' to rtl8139?

            s, o = vm.send_monitor_cmd(cmd)
            if s != 0:
                f_fail += 1
                logging.error("qemu monitor command failed: %s" % cmd)

            device_found = re.findall(str, o)
            logging.debug("Found devices: %s" % device_found)
            found = False
            for fm in device_found:
                if expected in fm:
                    found = True

            if not found:
                f_fail += 1
                logging.error("%s model mismatch:" % device)
                logging.error("    Assigned to VM: %s" % expected)
                logging.error("    Reported by OS: %s" % device_found)
        return f_fail

    logging.info("NICs model check")
    f_fail = chk_fmt_model("nics", "nic_model", "info network", "model=(.*),")
    n_fail += f_fail

    logging.info("Drive format check")
    f_fail = chk_fmt_model("images", "drive_format", "info block",
                           "(.*)\: type=hd")
    n_fail += f_fail

    logging.info("Network card MAC check")
    s, o = vm.send_monitor_cmd("info network")
    if s != 0
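The MAC check is cut off above, but it would presumably parse the `info network` monitor output; a regex-based extraction could look like this (a sketch; the helper name and regex are mine):

```python
import re

def find_macs(info_network_output):
    """Extract MAC addresses from qemu monitor 'info network' output."""
    return re.findall(r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}",
                      info_network_output)
```

Each extracted MAC could then be compared against the addresses assigned on the qemu command line.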

Re: [Autotest] [PATCH 1/2] Adds a test to verify resources inside a VM

2009-12-01 Thread sudhir kumar
On Wed, Dec 2, 2009 at 7:51 AM, Yolkfull Chow yz...@redhat.com wrote:
 On Tue, Dec 01, 2009 at 11:56:43AM -0200, Lucas Meneghel Rodrigues wrote:
 Hi Sudhir and Yolkfull:

 Thanks for your work on this test! Since Yolkfull's test matches
 Sudhir's test functionality and extends it, I will go with it. Some
 points:

  * A failure on checking a given resource shouldn't prevent us from
 testing other resources. Hence, instead of TestFail() exceptions,
 let's replace it by an increase on a failure counter defined in the
 beginning of the test.
  * In order to make it more clear what the test does, let's change the
 name to check_physical_resources
  * At least for the user messages, it's preferrable to use Assigned
 to VM and Reported by OS instead of expected and actual.

 I have implemented the suggestions and tested it, works quite well. A
 patch was sent to the mailing list a couple of minutes ago, please let
 me know what you guys think.

 Looks good for me. Thanks Lucas for improving this test.

 Sudhir, what do you think about this? :)
I need a couple of hours before I go through the patch. I will post my
comments by today. Also I would like to give it a quick run for Windows
guests, which are more prone to break :) Thanks Lucas for the effort.

 Cheers,
 Yolkfull


 Cheers,

 On Sun, Nov 29, 2009 at 8:40 AM, Yolkfull Chow yz...@redhat.com wrote:
  On Sun, Nov 29, 2009 at 02:22:40PM +0530, sudhir kumar wrote:
  On Sun, Nov 29, 2009 at 12:50 PM, Yolkfull Chow yz...@redhat.com wrote:
   On Wed, Nov 25, 2009 at 11:35:02AM +0530, sudhir kumar wrote:
   This patch adds a test for verifying whether the number of cpus and 
   amount
   of memory as seen inside a guest is same as allocated to it on the qemu
   command line.
  
   Hello Sudhir,
  
   Please see embedded comments as below:
  
  
   Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com
  
   Index: kvm/tests/verify_resources.py
   ===
   --- /dev/null
   +++ kvm/tests/verify_resources.py
   @@ -0,0 +1,74 @@
   +import logging, time
   +from autotest_lib.client.common_lib import error
   +import kvm_subprocess, kvm_test_utils, kvm_utils
   +
    +"""
    +Test to verify if the guest has the equal amount of resources
    +as allocated through the qemu command line
    +
    +@copyright: 2009 IBM Corporation
    +@author: Sudhir Kumar sku...@linux.vnet.ibm.com
    +"""
   +
   +
   +def run_verify_resources(test, params, env):
    +    """
    +    KVM test for verifying VM resources (#vcpu, memory):
    +    1) Get resources from the VM parameters
    +    2) Log into the guest
    +    3) Get actual resources, compare and report the pass/failure
    +
    +    @param test: kvm test object
    +    @param params: Dictionary with the test parameters
    +    @param env: Dictionary with test environment.
    +    """
    +    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
    +
    +    # Get info about vcpu and memory from dictionary
    +    exp_vcpus = int(params.get("smp"))
    +    exp_mem_mb = long(params.get("mem"))
    +    real_vcpus = 0
    +    real_mem_kb = 0
    +    real_mem_mb = 0
    +    # Some memory is used by the BIOS and such, so lower the expected
    +    # value, say by 5%
    +    exp_mem_mb = long(exp_mem_mb * 0.95)
    +    logging.info("The guest should have vcpus: %s" % exp_vcpus)
    +    logging.info("The guest should have min mem: %s MB" % exp_mem_mb)
    +
    +    session = kvm_test_utils.wait_for_login(vm)
    +
    +    # Get info about vcpu and memory from within guest
    +    if params.get("guest_os_type") == "Linux":
    +        output = session.get_command_output("cat /proc/cpuinfo | "
    +                                            "grep processor")
  
    We'd better not hard-code the command that gets the CPU count here, as KVM
    supports not only Linux & Windows but also other guests, say Unix/BSD.
    A recommended method would be to define it in the config file for the
    different platforms:
   I agree. The only concern that made me do it inside the test is the
   increasing size and complexity of the config file. I am fine with
   passing the command from the config file, but the code paths still have
   to be different for each type of OS, i.e. Windows, Linux etc.
 
  
   - @Linux:
      verify_resources:
          count_cpu_cmd = grep processor /proc/cpuinfo
  
   - @Windows:
      verify_resources:
          count_cpu_cmd = systeminfo (here I would not suggest we use 
   'systeminfo'
                                      for catching M$ guest's memory size)
  
   +        for line in output.split('\n'):
   +            if 'processor' in line:
   +                real_vcpus = real_vcpus + 1
   +
    +        output = session.get_command_output("cat /proc/meminfo")
  
    For catching the memory size of Linux guests, I prefer the command
    'dmidecode', which reports the memory size exactly, in MB.
   I think we can use both here. To my knowledge dmidecode will test the
   BIOS code of kvm, and hence we can include both methods?
  
   +        for line in output.split('\n

Re: [Autotest] [PATCH] KVM test: Not execute build test by default

2009-12-01 Thread sudhir kumar
On Wed, Dec 2, 2009 at 9:39 AM, Ryan Harper ry...@us.ibm.com wrote:
 * Lucas Meneghel Rodrigues l...@redhat.com [2009-12-01 17:12]:
 Instead of trying to build KVM using one of the methods
 defined on the control file, use noinstall as the
 default mode, and make noinstall entirely skip the
 build test, as having noinstall as one of the options
 for the build test was making the test run, calling
 the kvm preprocessor and killing VMs that could be present
 in the environment, an undesirable situation.

 This is an intermediate step before we carry over with the
 control file cleanup.

 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com

 Yeah, I like noinstall as a default.
Definitely good, but I would prefer a name like no_kvm_install.


 Acked-by: Ryan Harper ry...@us.ibm.com

 --
 Ryan Harper
 Software Engineer; Linux Technology Center
 IBM Corp., Austin, Tx
 ry...@us.ibm.com
 ___
 Autotest mailing list
 autot...@test.kernel.org
 http://test.kernel.org/cgi-bin/mailman/listinfo/autotest




-- 
Sudhir Kumar
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/2] Adds a test to verify resources inside a VM

2009-11-29 Thread sudhir kumar
On Sun, Nov 29, 2009 at 12:50 PM, Yolkfull Chow yz...@redhat.com wrote:
 On Wed, Nov 25, 2009 at 11:35:02AM +0530, sudhir kumar wrote:
 This patch adds a test for verifying whether the number of cpus and amount
 of memory as seen inside a guest is same as allocated to it on the qemu
 command line.

 Hello Sudhir,

 Please see embedded comments as below:


 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: kvm/tests/verify_resources.py
 ===
 --- /dev/null
 +++ kvm/tests/verify_resources.py
 @@ -0,0 +1,74 @@
 +import logging, time
 +from autotest_lib.client.common_lib import error
 +import kvm_subprocess, kvm_test_utils, kvm_utils
 +
 +"""
 +Test to verify if the guest has the same amount of resources
 +as allocated through the qemu command line
 +
 +@copyright: 2009 IBM Corporation
 +@author: Sudhir Kumar sku...@linux.vnet.ibm.com
 +"""
 +
 +
 +def run_verify_resources(test, params, env):
 +    """
 +    KVM test for verifying VM resources (#vcpu, memory):
 +    1) Get resources from the VM parameters
 +    2) Log into the guest
 +    3) Get actual resources, compare and report the pass/failure
 +
 +    @param test: kvm test object
 +    @param params: Dictionary with the test parameters
 +    @param env: Dictionary with test environment.
 +    """
 +    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
 +
 +    # Get info about vcpu and memory from dictionary
 +    exp_vcpus = int(params.get("smp"))
 +    exp_mem_mb = long(params.get("mem"))
 +    real_vcpus = 0
 +    real_mem_kb = 0
 +    real_mem_mb = 0
 +    # Some memory is used by bios and all, so lower the expected value, say by 5%
 +    exp_mem_mb = long(exp_mem_mb * 0.95)
 +    logging.info("The guest should have vcpus: %s" % exp_vcpus)
 +    logging.info("The guest should have min mem: %s MB" % exp_mem_mb)
 +
 +    session = kvm_test_utils.wait_for_login(vm)
 +
 +    # Get info about vcpu and memory from within guest
 +    if params.get("guest_os_type") == "Linux":
 +        output = session.get_command_output("cat /proc/cpuinfo | grep processor")

 We'd better not hard-code the command that gets the CPU count here, as KVM
 supports not only Linux & Windows but also other guests, say Unix/BSD.
 A recommended method would be to define it in the config file for the
 different platforms:
I agree. The only concern that made me do it inside the test is the
increasing size and complexity of the config file. I am fine with
passing the command from the config file, but the code paths still have
to be different for each type of OS, i.e. Windows, Linux etc.


 - @Linux:
    verify_resources:
        count_cpu_cmd = grep processor /proc/cpuinfo

 - @Windows:
    verify_resources:
        count_cpu_cmd = systeminfo (here I would not suggest we use 
 'systeminfo'
                                    for catching M$ guest's memory size)
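The config-driven approach discussed above could be consumed in the test roughly like this (an illustrative sketch, not existing autotest code; the `count_cpu_cmd` parameter name and its fallback default are assumptions taken from this discussion):

```python
def count_cpus_from_output(output):
    # One matching, non-empty line per CPU, e.g. the output of
    # 'grep processor /proc/cpuinfo' on a Linux guest.
    return len([line for line in output.split('\n') if line.strip()])

# In the test, the command itself would come from the config file, e.g.:
#   count_cpu_cmd = params.get("count_cpu_cmd")
#   output = session.get_command_output(count_cpu_cmd)
sample_output = "processor\t: 0\nprocessor\t: 1\n"
real_vcpus = count_cpus_from_output(sample_output)
```

This keeps the per-OS knowledge in the config file while the counting logic in the test stays platform-neutral.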

 +        for line in output.split('\n'):
 +            if 'processor' in line:
 +                real_vcpus = real_vcpus + 1
 +
 +        output = session.get_command_output("cat /proc/meminfo")

 For catching the memory size of Linux guests, I prefer the command
 'dmidecode', which reports the memory size exactly, in MB.
I think we can use both here. To my knowledge dmidecode will test the
BIOS code of kvm, and hence we can include both methods?
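If dmidecode is used, the memory size could be summed from the `Size: ... MB` lines of `dmidecode --type 17` output. A sketch only: the exact output format varies between dmidecode versions and platforms, so the pattern below is an assumption to verify against real guests:

```python
import re

def mem_mb_from_dmidecode(output):
    # Sum 'Size: <N> MB' lines from 'dmidecode --type 17' output;
    # slots reported as 'No Module Installed' contribute nothing.
    return sum(int(m.group(1))
               for m in re.finditer(r"Size:\s+(\d+)\s+MB", output))
```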

 +        for line in output.split('\n'):
 +            if 'MemTotal' in line:
 +                real_mem_kb = long(line.split()[1])
 +        real_mem_mb = real_mem_kb / 1024
 +
 +    elif params.get("guest_os_type") == "Windows":
 +        # Windows takes a long time to display output for systeminfo
 +        output = session.get_command_output("systeminfo", timeout=150,
 +                                            internal_timeout=50)
 +        for line in output.split('\n'):
 +            if 'Processor' in line:
 +                real_vcpus = int(line.split()[1])
 +
 +        for line in output.split('\n'):
 +            if 'Total Physical Memory' in line:
 +               real_mem_mb = long("".join("%s" % k for k in
 +                                          line.split()[3].split(',')))

 So many slice and split operations can easily result in problems.
 To catch the memory of Windows guests, I recommend we use 'wmic memphysical',
 which can dump the memory size in KB exactly.
Is the command available on all Windows OSes? If yes, we can
definitely use it.
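A hedged sketch of parsing such output, assuming `wmic memphysical get MaxCapacity` prints a header line followed by a numeric value in KB (this layout is an assumption and should be verified against the actual guests before relying on it):

```python
def mem_kb_from_wmic(output):
    # Return the first purely numeric line, which in
    # 'wmic memphysical get MaxCapacity' style output is the size in KB.
    for line in output.split('\n'):
        line = line.strip()
        if line.isdigit():
            return int(line)
    return 0
```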


 Meanwhile, we also need to verify the guest's NIC count and model(s),
 hard disk count & model, etc. Therefore I think we need a case that verifies
 them together.
Yeah, I just gave a first try for such a test. We need to test all the
emulated hardware.

 I wrote such a test a couple of days ago and have run it several times.
 Please comment on it when I post it here later. Thanks,
Sure. Please post them. I am happy to see them getting merged.

Thanks a lot for your comments!!
Sudhir


 +
 +    else:
 +        raise error.TestFail("Till date this test is supported only "
 +                             "for Linux and Windows")
 +
 +    logging.info(The guest has cpus: %s % real_vcpus

Re: [PATCH 1/2] Adds a test to verify resources inside a VM

2009-11-26 Thread sudhir kumar
Folks,
Any comments on the patch below ?

On Wed, Nov 25, 2009 at 11:35 AM, sudhir kumar smalik...@gmail.com wrote:
 This patch adds a test for verifying whether the number of cpus and amount
 of memory as seen inside a guest is same as allocated to it on the qemu
 command line.

 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: kvm/tests/verify_resources.py
 ===
 --- /dev/null
 +++ kvm/tests/verify_resources.py
 @@ -0,0 +1,74 @@
 +import logging, time
 +from autotest_lib.client.common_lib import error
 +import kvm_subprocess, kvm_test_utils, kvm_utils
 +
 +"""
 +Test to verify if the guest has the same amount of resources
 +as allocated through the qemu command line
 +
 +@copyright: 2009 IBM Corporation
 +@author: Sudhir Kumar sku...@linux.vnet.ibm.com
 +"""
 +
 +
 +def run_verify_resources(test, params, env):
 +    """
 +    KVM test for verifying VM resources (#vcpu, memory):
 +    1) Get resources from the VM parameters
 +    2) Log into the guest
 +    3) Get actual resources, compare and report the pass/failure
 +
 +    @param test: kvm test object
 +    @param params: Dictionary with the test parameters
 +    @param env: Dictionary with test environment.
 +    """
 +    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
 +
 +    # Get info about vcpu and memory from dictionary
 +    exp_vcpus = int(params.get("smp"))
 +    exp_mem_mb = long(params.get("mem"))
 +    real_vcpus = 0
 +    real_mem_kb = 0
 +    real_mem_mb = 0
 +    # Some memory is used by bios and all, so lower the expected value, say by 5%
 +    exp_mem_mb = long(exp_mem_mb * 0.95)
 +    logging.info("The guest should have vcpus: %s" % exp_vcpus)
 +    logging.info("The guest should have min mem: %s MB" % exp_mem_mb)
 +
 +    session = kvm_test_utils.wait_for_login(vm)
 +
 +    # Get info about vcpu and memory from within guest
 +    if params.get("guest_os_type") == "Linux":
 +        output = session.get_command_output("cat /proc/cpuinfo | grep processor")
 +        for line in output.split('\n'):
 +            if 'processor' in line:
 +                real_vcpus = real_vcpus + 1
 +
 +        output = session.get_command_output("cat /proc/meminfo")
 +        for line in output.split('\n'):
 +            if 'MemTotal' in line:
 +                real_mem_kb = long(line.split()[1])
 +        real_mem_mb = real_mem_kb / 1024
 +
 +    elif params.get("guest_os_type") == "Windows":
 +        # Windows takes a long time to display output for systeminfo
 +        output = session.get_command_output("systeminfo", timeout=150,
 +                                            internal_timeout=50)
 +        for line in output.split('\n'):
 +            if 'Processor' in line:
 +                real_vcpus = int(line.split()[1])
 +
 +        for line in output.split('\n'):
 +            if 'Total Physical Memory' in line:
 +                real_mem_mb = long("".join("%s" % k for k in
 +                                           line.split()[3].split(',')))
 +
 +    else:
 +        raise error.TestFail("Till date this test is supported only "
 +                             "for Linux and Windows")
 +
 +    logging.info("The guest has cpus: %s" % real_vcpus)
 +    logging.info("The guest has mem: %s MB" % real_mem_mb)
 +    if exp_vcpus != real_vcpus or real_mem_mb < exp_mem_mb:
 +        raise error.TestFail("Actual resources (cpu = '%s', memory = '%s' MB) "
 +                             "differ from allocated resources (cpu = '%s', memory = '%s' MB)"
 +                             % (real_vcpus, real_mem_mb, exp_vcpus, exp_mem_mb))
 +
 +    session.close()




 Sending the patch as an attachment too. Please review and provide your 
 comments.
 --
 Sudhir Kumar




-- 
Sudhir Kumar


Re: [Autotest] [PATCH] KVM test: Change the way subtests are loaded

2009-11-25 Thread sudhir kumar
On Sat, Oct 31, 2009 at 3:37 AM, Lucas Meneghel Rodrigues
l...@redhat.com wrote:
 Recently autoserv changed its default behavior of rsyncing
 the whole client directory to the test machine, now it
 will copy only the needed tests to the client machine.

 Also, the way the tests are loaded when running from the
 server has changed, breaking the KVM test when ran from
 autoserv.

 So change the mechanism to load KVM subtests, in order to
 cope with the recent autoserv changes.

 Thanks to Ryan Harper for having noticed this issue.

 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 ---
  client/tests/kvm/kvm.py |    9 +
  1 files changed, 5 insertions(+), 4 deletions(-)

 diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
 index 204164d..06ef9f5 100644
 --- a/client/tests/kvm/kvm.py
 +++ b/client/tests/kvm/kvm.py
 @@ -22,9 +22,9 @@ class kvm(test.test):
     
     version = 1
     def initialize(self):
 -        # Make it possible to import modules from the test's bindir
 -        sys.path.append(self.bindir)
 +        # Make it possible to import modules from the subtest dir
         self.subtest_dir = os.path.join(self.bindir, 'tests')
 +        sys.path.append(self.subtest_dir)


     def run_once(self, params):
 @@ -51,7 +51,7 @@ class kvm(test.test):
                     raise error.TestError("No %s.py test file found" % type)
                 # Load the tests directory (which was turned into a py module)
                 try:
  -                    test_module = __import__("tests.%s" % type)
  +                    test_module = __import__(type)

This seems to have broken the execution of autotest under kvm guests.
I see the following error
 'module' object has no attribute 'run_autotest'
  Traceback (most recent call last):
File
/scratch/images/sudhir/devel/autotest/client/common_lib/test.py,
line 570, in _call_test_function
  return func(*args, **dargs)
File
/scratch/images/sudhir/devel/autotest/client/common_lib/test.py,
line 279, in execute
  postprocess_profiled_run, args, dargs)
File
/scratch/images/sudhir/devel/autotest/client/common_lib/test.py,
line 201, in _call_run_once
  self.run_once(*args, **dargs)
File
/scratch/images/sudhir/devel/autotest/client/tests/kvm/kvm.py, line
63, in run_once
   run_func = getattr(test_module, "run_%s" % type)
  AttributeError: 'module' object has no attribute
'run_autotest'

A little more debugging prints out that the module 'autotest' has been
imported from the wrong place:
 module 'autotest' from
'/scratch/images/sudhir/devel/autotest/client/tests/kvm/autotest/__init__.pyc'
So maybe we either need to force the import from a specific path or
rename the 'autotest' test.

I had the test variant as below
only autotest.hackbench
I am unable to run any of the tests under autotest.
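One way to force the import from a specific path, so that the `tests/kvm/autotest/` directory cannot shadow the intended module, is to load it by file location. This is a sketch using the modern importlib API; the code base of that era used Python 2's `imp` module, so this illustrates the idea rather than the era-accurate fix:

```python
import importlib.util
import os, tempfile

def import_from_dir(name, directory):
    # Load <directory>/<name>.py explicitly, bypassing sys.path lookup
    # (and therefore any same-named package that shadows it).
    path = os.path.join(directory, name + ".py")
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demo with a throwaway module standing in for tests/autotest.py
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "autotest.py"), "w") as f:
    f.write("def run_autotest(test, params, env):\n    return 'ok'\n")
mod = import_from_dir("autotest", tmpdir)
run_func = getattr(mod, "run_autotest")
```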

                 except ImportError, e:
                     raise error.TestError("Failed to import test %s: %s" %
                                           (type, e))
 @@ -60,7 +60,8 @@ class kvm(test.test):
                 kvm_preprocessing.preprocess(self, params, env)
                 kvm_utils.dump_env(env, env_filename)
                 # Run the test function
  -                eval("test_module.%s.run_%s(self, params, env)" % (type, type))
  +                run_func = getattr(test_module, "run_%s" % type)
  +                run_func(self, params, env)
                 kvm_utils.dump_env(env, env_filename)

             except Exception, e:
 --
 1.6.2.5





-- 
Sudhir Kumar


Re: guest gets stuck on the migration from AMD to Intel

2009-11-25 Thread sudhir kumar
On Wed, Nov 18, 2009 at 3:19 PM, Harald Dunkel harald.dun...@aixigo.de wrote:
 Hi folks,

 If I migrate a virtual machine (2.6.31.6, amd64) from a host with
 AMD cpu to an Intel host, then the guest is terminated on the old
 host as expected, but it gets stuck on the new host. Every 60 seconds
 it prints a message on the virtual console saying

        BUG: soft lockup - CPU#0 got stuck for 61s!
Quite possibly the guest could not be scheduled to run for a
long time during migration. In such a case the Linux kernel will
find that the cpu was stuck/locked and hence throw the call trace.
These messages are not harmful (are they?), and the guest keeps running
without any problem.


 If I reset the guest, then it boots (without problems, as it seems).

 There is no migration problem for AMD -> AMD and Intel -> AMD.
 I didn't have a chance to test Intel -> Intel yet.

 The virtual disk is on a common NFSv3 partition. All hosts are
 running 2.6.31.6 (amd64).

 Can anybody reproduce this? I saw the error message several times on
 Google, but not together with a migration from AMD to Intel.

 Any helpful comment would be highly appreciated.


 Regards

 Harri
 ===
 processor       : 0
 vendor_id       : AuthenticAMD
 cpu family      : 15
 model           : 67
 model name      : Dual-Core AMD Opteron(tm) Processor 1210
 stepping        : 2
 cpu MHz         : 1795.378
 cache size      : 1024 KB
 physical id     : 0
 siblings        : 2
 core id         : 0
 cpu cores       : 2
 apicid          : 0
 initial apicid  : 0
 fpu             : yes
 fpu_exception   : yes
 cpuid level     : 1
 wp              : yes
 flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
 cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp 
 lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm 
 extapic cr8_legacy
 bogomips        : 3590.75
 TLB size        : 1024 4K pages
 clflush size    : 64
 cache_alignment : 64
 address sizes   : 40 bits physical, 48 bits virtual
 power management: ts fid vid ttp tm stc
 :
 :



 processor       : 0
 vendor_id       : GenuineIntel
 cpu family      : 6
 model           : 23
 model name      : Intel(R) Xeon(R) CPU           E5420  @ 2.50GHz
 stepping        : 10
 cpu MHz         : 2500.605
 cache size      : 6144 KB
 physical id     : 0
 siblings        : 4
 core id         : 0
 cpu cores       : 4
 apicid          : 0
 initial apicid  : 0
 fpu             : yes
 fpu_exception   : yes
 cpuid level     : 13
 wp              : yes
 flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
 cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm 
 constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx est 
 tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
 bogomips        : 5001.21
 clflush size    : 64
 cache_alignment : 64
 address sizes   : 38 bits physical, 48 bits virtual
 power management:
 :
 :



 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html




-- 
Sudhir Kumar


[PATCH 1/2] Adds a test to verify resources inside a VM

2009-11-24 Thread sudhir kumar
This patch adds a test for verifying whether the number of cpus and amount
of memory as seen inside a guest is the same as allocated to it on the qemu
command line.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: kvm/tests/verify_resources.py
===
--- /dev/null
+++ kvm/tests/verify_resources.py
@@ -0,0 +1,74 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_subprocess, kvm_test_utils, kvm_utils
+
+"""
+Test to verify if the guest has the same amount of resources
+as allocated through the qemu command line
+
+@copyright: 2009 IBM Corporation
+@author: Sudhir Kumar sku...@linux.vnet.ibm.com
+"""
+
+
+def run_verify_resources(test, params, env):
+    """
+    KVM test for verifying VM resources (#vcpu, memory):
+    1) Get resources from the VM parameters
+    2) Log into the guest
+    3) Get actual resources, compare and report the pass/failure
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+
+    # Get info about vcpu and memory from dictionary
+    exp_vcpus = int(params.get("smp"))
+    exp_mem_mb = long(params.get("mem"))
+    real_vcpus = 0
+    real_mem_kb = 0
+    real_mem_mb = 0
+    # Some memory is used by bios and all, so lower the expected value, say by 5%
+    exp_mem_mb = long(exp_mem_mb * 0.95)
+    logging.info("The guest should have vcpus: %s" % exp_vcpus)
+    logging.info("The guest should have min mem: %s MB" % exp_mem_mb)
+
+    session = kvm_test_utils.wait_for_login(vm)
+
+    # Get info about vcpu and memory from within guest
+    if params.get("guest_os_type") == "Linux":
+        output = session.get_command_output("cat /proc/cpuinfo | grep processor")
+        for line in output.split('\n'):
+            if 'processor' in line:
+                real_vcpus = real_vcpus + 1
+
+        output = session.get_command_output("cat /proc/meminfo")
+        for line in output.split('\n'):
+            if 'MemTotal' in line:
+                real_mem_kb = long(line.split()[1])
+        real_mem_mb = real_mem_kb / 1024
+
+    elif params.get("guest_os_type") == "Windows":
+        # Windows takes a long time to display output for systeminfo
+        output = session.get_command_output("systeminfo", timeout=150,
+                                            internal_timeout=50)
+        for line in output.split('\n'):
+            if 'Processor' in line:
+                real_vcpus = int(line.split()[1])
+
+        for line in output.split('\n'):
+            if 'Total Physical Memory' in line:
+                real_mem_mb = long("".join("%s" % k for k in
+                                           line.split()[3].split(',')))
+
+    else:
+        raise error.TestFail("Till date this test is supported only "
+                             "for Linux and Windows")
+
+    logging.info("The guest has cpus: %s" % real_vcpus)
+    logging.info("The guest has mem: %s MB" % real_mem_mb)
+    if exp_vcpus != real_vcpus or real_mem_mb < exp_mem_mb:
+        raise error.TestFail("Actual resources (cpu = '%s', memory = '%s' MB) "
+                             "differ from allocated resources (cpu = '%s', memory = '%s' MB)"
+                             % (real_vcpus, real_mem_mb, exp_vcpus, exp_mem_mb))
+
+    session.close()




Sending the patch as an attachment too. Please review and provide your comments.
-- 
Sudhir Kumar

[PATCH 2/2] Edit kvm_tests.cfg.sample to include verify_resources test variant

2009-11-24 Thread sudhir kumar
This patch adds the variants for verify_resources test into the
kvm_tests.cfg.sample file.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: kvm/kvm_tests.cfg.sample
===
--- kvm.orig/kvm_tests.cfg.sample
+++ kvm/kvm_tests.cfg.sample
@@ -243,6 +243,11 @@ variants:
         kill_vm = yes
         kill_vm_gracefully = no
 
+    - verify_resources:  install setup unattended_install
+        type = verify_resources
+        guest_os_type = Linux
+        kill_vm_on_error = yes
+

 # NICs
 variants:
@@ -526,6 +531,7 @@ variants:
     # Windows section
     - @Windows:
         no autotest linux_s3
+        guest_os_type = Windows
         shutdown_command = shutdown /s /t 0
         reboot_command = shutdown /r /t 0
         status_test_command = echo %errorlevel%



-- 
Sudhir Kumar


Re: [Autotest] [PATCH 18/19] KVM test: kvm_tests.cfg.sample: get all Windows test utilities from a single ISO

2009-11-23 Thread sudhir kumar
 = windows/rss.iso

                     - 64:
                         image_name += -64
 @@ -511,7 +504,6 @@ variants:
                             passwd = 1q2w3eP
                         setup:
                             steps = Win2008-64-rss.steps
 -                            cdrom = windows/rss.iso

     # Unix/BSD section
     - @Unix:
 --
 1.5.4.1





 --
 Lucas




-- 
Sudhir Kumar


Re: [Autotest] [KVM-AUTOTEST PATCH 03/12] KVM test: add sample 'qemu-ifup' script

2009-08-05 Thread sudhir kumar
Let's not make it a python script. Since the purpose of providing this
script is that the user can copy it to /etc and not bother
updating kvm_tests.cfg, let us keep it bash-only. Also, as
Michael pointed out, there is nothing much pythonic about it even if we
write it in python, so better to keep it bash.

On Wed, Aug 5, 2009 at 6:21 PM, Michael Goldishmgold...@redhat.com wrote:

 - Lucas Meneghel Rodrigues l...@redhat.com wrote:

 I am taking some time to review your patches, and likewise you
 mentioned revising my unattended patchset; it's going to take
 some time for me to go through all the code. Starting with the
 low-hanging fruit, this little setup script could be turned into
 a python script as well!

 qemu-ifup is a traditional qemu script. The one in this patch is
 almost identical to the ones included in KVM releases.
 Also, it's meant to be modified by the user -- the user may want to
 replace the 'brctl show | awk' expression with the name of a bridge,
 especially if the host has more than one. I think a python script
 will be awkward to modify.
 Also, traditionally this script resides in /etc, and this one is
 provided only in case the user doesn't have a better one in /etc.
 The script in /etc is normally a bash script.

 I have no problem with rewriting this as a python script -- I just
 think it's more natural to keep this one in bash.
 In python it would look something like:

 import sys, os, commands
 switch = commands.getoutput("/usr/sbin/brctl show").split('\n')[1].split()[0]
 os.system("/sbin/ifconfig %s 0.0.0.0 up" % sys.argv[1])
 os.system("/usr/sbin/brctl addif %s %s" % (switch, sys.argv[1]))

 There's nothing 'pythonic' about this. It looks like it should be a
 bash script. It also looks simpler in bash. Anyway, if you like this
 better, or if you think the 'python only' policy should apply here,
 no problem.

 On Sun, Aug 2, 2009 at 8:58 PM, Michael Goldishmgold...@redhat.com
 wrote:
  The script adds a requested interface to an existing bridge.  It is
 meant to be
  used by qemu when running in TAP mode.
 
  Note: the user is responsible for setting up the bridge before
 running any
  tests.  This can be done with brctl or in any manner that is
 appropriate for
  the host OS.  It can be done inside 'qemu-ifup' as well, but this
 sample script
  doesn't do it.
 
  Signed-off-by: Michael Goldish mgold...@redhat.com
  ---
   client/tests/kvm/qemu-ifup |    8 
   1 files changed, 8 insertions(+), 0 deletions(-)
   create mode 100644 client/tests/kvm/qemu-ifup
 
  diff --git a/client/tests/kvm/qemu-ifup
 b/client/tests/kvm/qemu-ifup
  new file mode 100644
  index 000..bcd9a7a
  --- /dev/null
  +++ b/client/tests/kvm/qemu-ifup
  @@ -0,0 +1,8 @@
  +#!/bin/sh
  +
  +# The following expression selects the first bridge listed by
 'brctl show'.
  +# Modify it to suit your needs.
  +switch=$(/usr/sbin/brctl show | awk 'NR==2 { print $1 }')
  +
  +/sbin/ifconfig $1 0.0.0.0 up
  +/usr/sbin/brctl addif ${switch} $1
  --
  1.5.4.1
 
 



 --
 Lucas Meneghel
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html




-- 
Sudhir Kumar


Re: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and other info

2009-08-04 Thread sudhir kumar
On Tue, Aug 4, 2009 at 11:23 AM, Michael Goldishmgold...@redhat.com wrote:
 Looks like there really are only 3 lines to read.
 Telnet is printing those lines, not Windows. It's just indicating
 that it successfully connected. Windows is saying nothing.

 This is a little weird, because:
 - It can happen when the telnet server isn't running at all,
  but it seemed to be running fine when you connected manually.
 - It can happen when trying to connect to the wrong port, but I
  see you set the port to 23.

 It can also happen due to the short default timeout of 10 seconds.
 Windows can take longer than that to send something.
My manual observation also says that it does not take that much.
 Are you sure you increased the timeout? You only sent a log of 4
I increased the timeout and did not send the complete log. here are
the start and end of the complete log

12:26:44 INFO | Waiting for guest to be up...
12:26:44 INFO | DEBUG: in ssh_login ssh_prompt = C:\Users\Administrator
12:26:44 INFO | DEBUG: in ssh_login timeout = 20
12:26:47 INFO | DEBUG: Printing data read by read_nonblocking from output:
12:26:47 INFO | DEBUG: data read from output is Trying 10.0.99.100...
12:26:47 INFO | telnet: connect to address 10.0.99.100: No route to host
12:26:47 INFO | telnet: Unable to connect to remote host: No route to host

snip

12:30:48 INFO | DEBUG: Printing accumulated data read from output done.
12:30:48 INFO | DEBUG: Printing data after filter:
12:30:48 INFO | DEBUG: filtered data is Escape character is '^]'.
12:30:48 INFO | DEBUG: Printing data after filter done:
12:30:50 ERROR| Test failed: Could not log into guest


 seconds (starting at 12:27:25). How long does
 read_until_output_matches() wait before it gives up?

Here is the complete code flow with parameters.
in kvm_tests.py
session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
which calls in kvm_utils.py
output = func()
This calls ssh_login() in kvm_vm.py with default parameters and the
default def is changed to have timeout 20 second instead of 10
seconds.
def ssh_login(self, timeout=20):
This calls the telnet which calls remote_login() in kvm_utils.py
if use_telnet:
session = kvm_utils.telnet(address, port, username, password,
   prompt, timeout)
So timeout is 20 now. This calls remote_login
return remote_login(command, password, prompt, "\r\n", timeout)
PS: A dumb question: does "\r\n" have anything to do with the failure?
So timeout is still 20. The next call is
while True:
(match, text) = sub.read_until_last_line_matches(
[r"[Aa]re you sure", r"[Pp]assword:\s*$", r"^\s*[Ll]ogin:\s*$",
 r"[Cc]onnection.*closed", r"[Cc]onnection.*refused", prompt],
 timeout=timeout, internal_timeout=3.5)
which calls to
return self.read_until_output_matches(patterns, self.get_last_line,
  timeout, internal_timeout,
  print_func)

Hence this function waits for 20 seconds in each try. I think that is
good enough.
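For reference, the outer retry loop behind `kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)` behaves roughly like this sketch (the signature `wait_for(func, timeout, first, step)` is inferred from the call site, not quoted from the source): each `func()` call can itself block for its own internal timeout (the 20-second login timeout above), while the outer loop keeps retrying until the overall budget elapses.

```python
import time

def wait_for(func, timeout, first=0.0, step=1.0):
    # Sleep 'first' seconds, then call func() every 'step' seconds
    # until it returns a true value or 'timeout' seconds have passed.
    end_time = time.time() + timeout
    time.sleep(first)
    while time.time() < end_time:
        result = func()
        if result:
            return result
        time.sleep(step)
    return None
```

So a single slow login attempt inside `func` does not abort the outer wait; only the cumulative 240-second budget does.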

In my manual observation there is no delay between any of the lines
printed above, so this should not be the issue. I can bet that there is
not even a 2-second delay between these two lines:
 Escape character is '^]'.
 Welcome to Microsoft Telnet Service

But yes, there is a very small delay (looks < 1 second), which should be
taken care of by the internal_timeout, which I have increased to around
3 seconds.


 - Original Message -
 From: sudhir kumar smalik...@gmail.com
 To: kvm-devel kvm@vger.kernel.org
 Cc: Lucas Meneghel Rodrigues mrodr...@redhat.com, Michael Goldish 
 mgold...@redhat.com, Ryan Harper ry...@us.ibm.com
 Sent: Tuesday, August 4, 2009 8:06:32 AM (GMT+0200) Auto-Detected
 Subject: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and  
 other info

 Hi,
 I am seeing a telnet failure in autotest to a win2k8 DC 64 bit guest.
 I have tried some debugging and came to a conclusion that
 read_nonblocking() is reading only 3 lines. let me first print the
 output of manual login

  # telnet -l Administrator 10.0.99.100 23
 Trying 10.0.99.100...
 Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
 Escape character is '^]'.
 Welcome to Microsoft Telnet Service

 password:

 *===
 Microsoft Telnet Server.
 *===
 C:\Users\Administrator

 Now autotest only reads the first 3 lines (up to 'Escape...'). It seems
 windows is doing something nasty here. I have put some debug prints in
 the code, and here is the output. Let me first show the code snippet
 from kvm_utils.py with the added debug prints.

    def read_until_output_matches(self, patterns, filter=lambda(x):x,
                                  timeout=30.0, internal_timeout=3.0,
                                  print_func=None):
 snip

Re: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and other info

2009-08-04 Thread sudhir kumar
For making it more clear, here are the timeout debug prints

14:31:04 INFO | ('DEBUG: timeout in read_until_output_matches =%d', 30)
14:31:04 INFO | ('DEBUG: internal_timeout in read_until_output_matches
=%d', 3.5)
{sorry with little syntax typo :(  }

On Tue, Aug 4, 2009 at 12:12 PM, sudhir kumarsmalik...@gmail.com wrote:
 On Tue, Aug 4, 2009 at 11:23 AM, Michael Goldishmgold...@redhat.com wrote:
 Looks like there really are only 3 lines to read.
 Telnet is printing those lines, not windows. It's just indicating
 that it successfully connected. Windows is saying nothing.

 This is a little weird, because:
 - It can happen when the telnet server isn't running at all,
  but it seemed to be running fine when you connected manually.
 - It can happen when trying to connect to the wrong port, but I
  see you set the port to 23.

 It can also happen due to the short default timeout of 10 seconds.
 Windows can take longer than that to send something.
 My manual observation also says that it does not take that much.
 Are you sure you increased the timeout? You only sent a log of 4
 I increased the timeout and did not send the complete log. here are
 the start and end of the complete log

 12:26:44 INFO | Waiting for guest to be up...
 12:26:44 INFO | DEBUG: in ssh_login ssh_prompt = C:\Users\Administrator
 12:26:44 INFO | DEBUG: in ssh_login timeout = 20
 12:26:47 INFO | DEBUG: Printing data read by read_nonblocking from output:
 12:26:47 INFO | DEBUG: data read from output is Trying 10.0.99.100...
 12:26:47 INFO | telnet: connect to address 10.0.99.100: No route to host
 12:26:47 INFO | telnet: Unable to connect to remote host: No route to host

 snip

 12:30:48 INFO | DEBUG: Printing accumulated data read from output done.
 12:30:48 INFO | DEBUG: Printing data after filter:
 12:30:48 INFO | DEBUG: filtered data is Escape character is '^]'.
 12:30:48 INFO | DEBUG: Printing data after filter done:
 12:30:50 ERROR| Test failed: Could not log into guest


 seconds (starting at 12:27:25). How long does
 read_until_output_matches() wait before it gives up?

 Here is the complete code flow with parameters.
 in kvm_tests.py
    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
 which calls in kvm_utils.py
        output = func()
 This calls ssh_login() in kvm_vm.py with default parameters and the
 default def is changed to have timeout 20 second instead of 10
 seconds.
    def ssh_login(self, timeout=20):
 This calls the telnet which calls remote_login() in kvm_utils.py
        if use_telnet:
            session = kvm_utils.telnet(address, port, username, password,
                                       prompt, timeout)
 So timeout is 20 now. This calls remote_login
     return remote_login(command, password, prompt, "\r\n", timeout)
 PS: A dumb Question: Does \r\n have to do something with the failure ?
 So timeout is still 20. The next call is
    while True:
        (match, text) = sub.read_until_last_line_matches(
                 [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"^\s*[Ll]ogin:\s*$",
                  r"[Cc]onnection.*closed", r"[Cc]onnection.*refused", prompt],
                  timeout=timeout, internal_timeout=3.5)
 which calls to
        return self.read_until_output_matches(patterns, self.get_last_line,
                                              timeout, internal_timeout,
                                              print_func)

 Hence this function waits for 20 seconds in each try. i think that is
 good enough.

 In my manual observation there is no delay between the above all the
 lines printed. So this should not be an issue. I can bet that there is
 not even a delay of 2 secs between these two lines:
  Escape character is '^]'.
  Welcome to Microsoft Telnet Service

 But yes there is a very small delay(looks  1 second), which should be
 taken care by increased internal_timeout which i have increased to
 around 3 seconds.


 - Original Message -
 From: sudhir kumar smalik...@gmail.com
 To: kvm-devel kvm@vger.kernel.org
 Cc: Lucas Meneghel Rodrigues mrodr...@redhat.com, Michael Goldish 
 mgold...@redhat.com, Ryan Harper ry...@us.ibm.com
 Sent: Tuesday, August 4, 2009 8:06:32 AM (GMT+0200) Auto-Detected
 Subject: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and  
 other info

 Hi,
 I am seeing a telnet failure in autotest to a win2k8 DC 64 bit guest.
 I have tried some debugging and came to a conclusion that
 read_nonblocking() is reading only 3 lines. let me first print the
 output of manual login

  # telnet -l Administrator 10.0.99.100 23
 Trying 10.0.99.100...
 Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
 Escape character is '^]'.
 Welcome to Microsoft Telnet Service

 password:

 *===
 Microsoft Telnet Server.
 *===
 C:\Users\Administrator

 Now autotest only reads the first 3 lines(upto Escape.). It seems
 windows is doing something nasty

Re: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and other info

2009-08-04 Thread sudhir kumar
On Tue, Aug 4, 2009 at 5:01 PM, Michael Goldishmgold...@redhat.com wrote:
 Maybe the problem is related to how you implemented TAP support in
 your tree.  You're obviously not using user mode networking because
I do not understand what you mean here. Yes, the setup is a tap
network, and I can log into the guest using the same command
manually. Here is the output:
# telnet -l Administrator 10.0.99.100 23
Trying 10.0.99.100...
Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
Escape character is '^]'.
Welcome to Microsoft Telnet Service

password:

*===
Microsoft Telnet Server.
*===
C:\Users\Administrator

 the guest's IP address is 10.0.99.100.  I'm not sure exactly what the
 problem is, so can you print the connection command in remote_login()?
 Add something like 'print COMMAND:  + command' at the top of the
 function, and that might help us figure out what's wrong.
Here is the command:
23:35:13 INFO | COMMAND: telnet -l Administrator 10.0.99.100 23

So on further inspection the problem comes down to read_nonblocking()
reading only 3 lines. The culprit may be the following code
data = ""
while True:
    r, w, x = select.select([self.fd], [], [], timeout)
    if self.fd in r:
        try:
            data += os.read(self.fd, 1024)
        except OSError:
            return data
    else:
        return data
in this function. Since all the 5 lines (including the last empty line)
do not appear in one go, it is possible that select() returns before all
the lines have been written to the file pointed to by fd, and read()
reads only those 3 lines (which looks less likely). I observe a slight
delay in printing the lines: the first 3 lines are printed in one go and
the last two are printed after a very small (< 1 sec) delay. What do you
say? I will try putting in one more call to read() and see what happens.
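To illustrate why a single select()+read() pass can stop at 3 lines while a repeated-read loop copes with pauses, here is a self-contained sketch; the function names are chosen here for illustration and this is not the autotest code itself:

```python
import os
import select
import time

def read_nonblocking(fd, internal_timeout=1.0):
    """Read whatever is available on fd, returning as soon as the
    stream goes quiet for `internal_timeout` seconds.  If the peer
    pauses mid-output, one call returns only the first few lines."""
    data = b""
    while True:
        r, _, _ = select.select([fd], [], [], internal_timeout)
        if fd in r:
            chunk = os.read(fd, 1024)
            if not chunk:        # EOF: writer closed its end
                return data
            data += chunk
        else:                    # quiet period elapsed
            return data

def read_until_match(fd, pattern, timeout=5.0, internal_timeout=0.5):
    """Call read_nonblocking() repeatedly until `pattern` appears or
    `timeout` elapses -- pauses between output chunks are then harmless."""
    end_time = time.time() + timeout
    data = b""
    while time.time() < end_time:
        data += read_nonblocking(fd, internal_timeout)
        if pattern in data:
            return data
    return data
```

So even if one read_nonblocking() call returns only the first 3 telnet lines, the outer loop should pick up the later ones, which matches Michael's point below that pauses in output are OK.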


 - Original Message -
 From: sudhir kumar smalik...@gmail.com
 To: Michael Goldish mgold...@redhat.com
 Cc: Lucas Meneghel Rodrigues mrodr...@redhat.com, Ryan Harper 
 ry...@us.ibm.com, kvm-devel kvm@vger.kernel.org
 Sent: Tuesday, August 4, 2009 9:43:42 AM (GMT+0200) Auto-Detected
 Subject: Re: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and 
  other info

 For making it more clear, here are the timeout debug prints

 14:31:04 INFO | ('DEBUG: timeout in read_until_output_matches =%d', 30)
 14:31:04 INFO | ('DEBUG: internal_timeout in read_until_output_matches
 =%d', 3.5)
 {sorry with little syntax typo :(  }

 On Tue, Aug 4, 2009 at 12:12 PM, sudhir kumarsmalik...@gmail.com wrote:
 On Tue, Aug 4, 2009 at 11:23 AM, Michael Goldishmgold...@redhat.com wrote:
 Looks like there really are only 3 lines to read.
 Telnet is printing those lines, not windows. It's just indicating
 that it successfully connected. Windows is saying nothing.

 This is a little weird, because:
 - It can happen when the telnet server isn't running at all,
  but it seemed to be running fine when you connected manually.
 - It can happen when trying to connect to the wrong port, but I
  see you set the port to 23.

 It can also happen due to the short default timeout of 10 seconds.
 Windows can take longer than that to send something.
 My manual observation also says that it does not take that much.
 Are you sure you increased the timeout? You only sent a log of 4
 I increased the timeout and did not send the complete log. here are
 the start and end of the complete log

 12:26:44 INFO | Waiting for guest to be up...
 12:26:44 INFO | DEBUG: in ssh_login ssh_prompt = C:\Users\Administrator
 12:26:44 INFO | DEBUG: in ssh_login timeout = 20
 12:26:47 INFO | DEBUG: Printing data read by read_nonblocking from output:
 12:26:47 INFO | DEBUG: data read from output is Trying 10.0.99.100...
 12:26:47 INFO | telnet: connect to address 10.0.99.100: No route to host
 12:26:47 INFO | telnet: Unable to connect to remote host: No route to host

 snip

 12:30:48 INFO | DEBUG: Printing accumulated data read from output done.
 12:30:48 INFO | DEBUG: Printing data after filter:
 12:30:48 INFO | DEBUG: filtered data is Escape character is '^]'.
 12:30:48 INFO | DEBUG: Printing data after filter done:
 12:30:50 ERROR| Test failed: Could not log into guest


 seconds (starting at 12:27:25). How long does
 read_until_output_matches() wait before it gives up?

 Here is the complete code flow with parameters.
 in kvm_tests.py
    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
 which calls in kvm_utils.py
        output = func()
 This calls ssh_login() in kvm_vm.py with default parameters and the
 default def is changed to have timeout 20 second instead of 10
 seconds.
    def ssh_login(self, timeout=20):
 This calls the telnet which calls remote_login() in kvm_utils.py
        if use_telnet:
            session

Re: [AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and other info

2009-08-04 Thread sudhir kumar
On Tue, Aug 4, 2009 at 10:06 PM, Michael Goldishmgold...@redhat.com wrote:

 - sudhir kumar smalik...@gmail.com wrote:

 On Tue, Aug 4, 2009 at 5:01 PM, Michael Goldishmgold...@redhat.com
 wrote:
  Maybe the problem is related to how you implemented TAP support in
  your tree.  You're obviously not using user mode networking because
 I do not understand what do you mean here. Yes the setup is a tap
 network and I can login into the guest using the same command
 manually. Here is the output
 # telnet -l Administrator 10.0.99.100 23
 Trying 10.0.99.100...
 Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
 Escape character is '^]'.
 Welcome to Microsoft Telnet Service

 password:

 *===
 Microsoft Telnet Server.
 *===
 C:\Users\Administrator

  the guest's IP address is 10.0.99.100.  I'm not sure exactly what
 the
  problem is, so can you print the connection command in
 remote_login()?
  Add something like 'print COMMAND:  + command' at the top of the
  function, and that might help us figure out what's wrong.
 Here is the command:
 23:35:13 INFO | COMMAND: telnet -l Administrator 10.0.99.100 23

 So on further inspection the problem limits to that
 read_nonblocking()
  is reading only 3 lines. So the culprit may be the following code
         data = ""
         while True:
             r, w, x = select.select([self.fd], [], [], timeout)
             if self.fd in r:
                 try:
                     data += os.read(self.fd, 1024)
                 except OSError:
                     return data
             else:
                 return data
 in this function. Since all the 5 lines (including the last empty line)
 do not appear in one go, it is possible that select() returns before all
 the lines have been written to the file pointed to by fd, and read()
 reads only those 3 lines (which looks less likely). I observe a slight
 delay in printing the lines: the first 3 lines are printed in one go and
 the last two are printed after a very small (< 1 sec) delay. What do you
 say? I will try putting in one more call to read() and see what happens.

 I don't think that'll help because it's OK to get 3 lines, wait a little,
 and then get 2 more lines.  read_until_output_matches() calls
 read_nonblocking() repeatedly to read output from the Telnet process, so
 pauses in output from the process are OK.
At this point my head is out of thoughts. Even putting a time.sleep()
before os.read() does not print any more lines. A double read()
statement causes an indefinite wait after the telnet server starts
running in the guest. I am clueless now. Has anyone else faced the
same problem?
/me beats his head.

 I thought it was a good idea to print the login command because although
 the IP is OK, maybe the port isn't, and in the logs we don't see what
 port ssh_login() is actually trying to connect to.

 
  - Original Message -
  From: sudhir kumar smalik...@gmail.com
  To: Michael Goldish mgold...@redhat.com
  Cc: Lucas Meneghel Rodrigues mrodr...@redhat.com, Ryan Harper
 ry...@us.ibm.com, kvm-devel kvm@vger.kernel.org
  Sent: Tuesday, August 4, 2009 9:43:42 AM (GMT+0200) Auto-Detected
  Subject: Re: [AUTOTEST]telnet login fails in win2k8 DC 64. here are
 debug and  other info
 
  For making it more clear here are the timoeout debug prints
 
  14:31:04 INFO | ('DEBUG: timeout in read_until_output_matches =%d',
 30)
  14:31:04 INFO | ('DEBUG: internal_timeout in
 read_until_output_matches
  =%d', 3.5)
  {sorry with little syntax typo :(  }
 
  On Tue, Aug 4, 2009 at 12:12 PM, sudhir kumarsmalik...@gmail.com
 wrote:
  On Tue, Aug 4, 2009 at 11:23 AM, Michael
 Goldishmgold...@redhat.com wrote:
  Looks like there really are only 3 lines to read.
  Telnet is printing those lines, not windows. It's just indicating
  that it successfully connected. Windows is saying nothing.
 
  This is a little weird, because:
  - It can happen when the telnet server isn't running at all,
   but it seemed to be running fine when you connected manually.
  - It can happen when trying to connect to the wrong port, but I
   see you set the port to 23.
 
  It can also happen due to the short default timeout of 10
 seconds.
  Windows can take longer than that to send something.
  My manual observation also says that it does not take that much.
  Are you sure you increased the timeout? You only sent a log of 4
  I increased the timeout and did not send the complete log. here
 are
  the start and end of the complete log
 
  12:26:44 INFO | Waiting for guest to be up...
  12:26:44 INFO | DEBUG: in ssh_login ssh_prompt =
 C:\Users\Administrator
  12:26:44 INFO | DEBUG: in ssh_login timeout = 20
  12:26:47 INFO | DEBUG: Printing data read by read_nonblocking from
 output:
  12:26:47 INFO | DEBUG: data read from output is Trying
 10.0.99.100...
  12:26:47 INFO | telnet: connect to address 10.0.99.100: No route

Re: [Autotest] autotest exception in windows 2003 guest (ValueError: invalid literal for int() with base 10: '%errorlevel%)

2009-08-03 Thread sudhir kumar
On 7/27/09, Michael Goldish mgold...@redhat.com wrote:
 It looks like you're using opensshd with cygwin -- not the one we normally
 install on guests, but rather one that outputs colorful text. Is that right?
 Which server are you using exactly?
 There's not much I can do against colorful text, because it's difficult to
 automatically strip the weird formatting characters from the output.

 I suggest that you:

 - Use another server, like the cygwin+opensshd we normally use for Windows
 guests (http://sshwindows.webheat.co.uk/), or if you're not using SCP with
This does not install the server on my win2k8 Datacenter 64, though
the client works fine.

 Windows, you can use rss.exe which I posted recently -- it's the easiest to
 install and works with all Windows versions (ssh doesn't always work). If
Can you please share the binary? I want to give it a try.

 you use rss.exe make sure you use telnet as a client (not ssh) by setting
 use_telnet = yes for the relevant guest in kvm_tests.cfg. The best client
 is actually raw nc, which I'll post patches to support soon, but telnet
 should work too (though it will produce a double echo for each command).

 - Use a more recent version of KVM-Autotest. kvm_subprocess recently got
 into the tree, and it handles weird responses to echo %errorlevel% better
 (especially double echo), but as long as the output is colored, it still
 won't work. If you upgrade, make sure you also apply the patch that fixes
 kvm_subprocess on Python 2.6 (if you use 2.6), or just wait until Lucas
 applies the patch to the tree.

 I know this seems a little messy, but if you wait a little while everything
 will sort itself out -- if I understand correctly, we are moving towards
 using rss.exe on all Windows guests, with nc as a client, and with
 kvm_subprocess controlling the client, and then most of the problems should
 go away (hopefully).
I will be so happy. At present autotest is almost of no use for windows
with copssh and openssh. The boot/reboot code works fine, but the migrate
code causes problems due to the colorful text of the prompt. I want to get
out of this issue as soon as possible, so please share the binaries with
me; then we can find any remaining issues and make this ssh server
more robust.
Thanks in advance.


 Thanks,
 Michael

 - Original Message -
 From: sudhir kumar smalik...@gmail.com
 To: Autotest mailing list autot...@test.kernel.org
 Cc: Lucas Meneghel Rodrigues mrodr...@redhat.com, kvm-devel
 kvm@vger.kernel.org
 Sent: Monday, July 27, 2009 2:16:06 PM (GMT+0200) Auto-Detected
 Subject: [Autotest] autotest exception in windows 2003 guest (ValueError:
 invalid literal for int() with base 10: '%errorlevel%)

 Hi I have been getting the following exception in autotest for windows
 2003 datacenter.
    status = int("\n".join(status.splitlines()[1:-1]).strip())
  ValueError: invalid literal for int() with base 10:
  '%errorlevel%\n\x1b]0;~\x07\n\x1b[32madministra...@ibm-n81hj962hdx
  \x1b[33m~\x1b[0m'

 Is there any other command with which we can replace
 ssh_status_test_command = echo %errorlevel%

 echo $? also does not work in windows.
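One workaround, sketched here under the assumption that a plain numeric status line does appear somewhere in the output, is to strip the ANSI/xterm escape sequences before calling int(). The names ANSI_RE and parse_status are hypothetical, not part of autotest:

```python
import re

# Matches CSI sequences like "\x1b[32m" and OSC title sequences like
# "\x1b]0;~\x07" that a colorful cygwin prompt emits.
ANSI_RE = re.compile(r"\x1b(\[[0-9;]*[A-Za-z]|\][^\x07]*\x07)")

def parse_status(output):
    """Return the first integer found on its own line after stripping
    escape codes, skipping the echoed command line itself."""
    clean = ANSI_RE.sub("", output)
    for line in clean.splitlines():
        line = line.strip()
        if line.isdigit() or (line.startswith("-") and line[1:].isdigit()):
            return int(line)
    raise ValueError("no numeric status in output: %r" % output)
```

Of course this cannot help when the guest echoes the literal '%errorlevel%' back, as in the traceback above: the variable was never expanded, so no amount of cleanup can recover a status.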


 --
 Sudhir Kumar
 ___
 Autotest mailing list
 autot...@test.kernel.org
 http://test.kernel.org/cgi-bin/mailman/listinfo/autotest



-- 
Sudhir Kumar
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[AUTOTEST]telnet login fails in win2k8 DC 64. here are debug and other info

2009-08-03 Thread sudhir kumar
Hi,
I am seeing a telnet failure in autotest to a win2k8 DC 64 bit guest.
I have tried some debugging and came to a conclusion that
read_nonblocking() is reading only 3 lines. let me first print the
output of manual login

 # telnet -l Administrator 10.0.99.100 23
Trying 10.0.99.100...
Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
Escape character is '^]'.
Welcome to Microsoft Telnet Service

password:

*===
Microsoft Telnet Server.
*===
C:\Users\Administrator

Now autotest only reads the first 3 lines (up to 'Escape...'). It seems
windows is doing something nasty here. I have put some debug prints in
the code, and here is the output. Let me first show the code snippet
from kvm_utils.py with the added debug prints.

    def read_until_output_matches(self, patterns, filter=lambda(x):x,
                                  timeout=30.0, internal_timeout=3.0,
                                  print_func=None):
snip
        end_time = time.time() + timeout
        while time.time() < end_time:
            # Read data from child
            newdata = self.read_nonblocking(internal_timeout)
            print("DEBUG: Printing data read by read_nonblocking from output:")
            print("DEBUG: data read from output is %s" % newdata)
            print("DEBUG: Printing data read by read_nonblocking from output done.")
            # Print it if necessary
            if print_func and newdata:
                map(print_func, newdata.splitlines())
            data += newdata
            print("DEBUG: Printing accumulated data read from output:")
            print("DEBUG: accumulated data read from output is %s" % data)
            print("DEBUG: Printing accumulated data read from output done.")

            done = False
            # Look for patterns
            print("DEBUG: Printing data after filter:")
            print("DEBUG: filtered data is %s" % filter(data))
            print("DEBUG: Printing data after filter done:")
            match = self.match_patterns(filter(data), patterns)
            if match != None:
                done = True
            # Check if child has died
            if self.poll() != None:
                logging.debug("Process terminated with status %d", self.poll())
                done = True
            # Are we done?
            if done: break
snip

Here is the output once the guest comes up.

12:27:25 INFO | DEBUG: Printing data read by read_nonblocking from output:
12:27:25 INFO | DEBUG: data read from output is Trying 10.0.99.100...
12:27:25 INFO | Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
12:27:25 INFO | Escape character is '^]'.
12:27:25 INFO |
12:27:25 INFO | DEBUG: Printing data read by read_nonblocking from output done.
12:27:25 INFO | DEBUG: Printing accumulated data read from output:
12:27:25 INFO | DEBUG: accumulated data read from output is Trying
10.0.99.100...
12:27:25 INFO | Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
12:27:25 INFO | Escape character is '^]'.
12:27:25 INFO |
12:27:25 INFO | DEBUG: Printing accumulated data read from output done.
12:27:25 INFO | DEBUG: Printing data after filter:
12:27:25 INFO | DEBUG: filtered data is Escape character is '^]'.
12:27:25 INFO | DEBUG: Printing data after filter done:
12:27:29 INFO | DEBUG: Printing data read by read_nonblocking from output:
12:27:29 INFO | DEBUG: data read from output is
12:27:29 INFO | DEBUG: Printing data read by read_nonblocking from output done.
12:27:29 INFO | DEBUG: Printing accumulated data read from output:
12:27:29 INFO | DEBUG: accumulated data read from output is Trying
10.0.99.100...
12:27:29 INFO | Connected to ichigo-dom100.linuxperf9025.net (10.0.99.100).
12:27:29 INFO | Escape character is '^]'.
12:27:29 INFO |
12:27:29 INFO | DEBUG: Printing accumulated data read from output done.
12:27:29 INFO | DEBUG: Printing data after filter:
12:27:29 INFO | DEBUG: filtered data is Escape character is '^]'.
12:27:29 INFO | DEBUG: Printing data after filter done:

and so on... timeout elapses and test fails.

I am not able to find out why read_nonblocking() is reading only 3
lines. I have tried increasing the timeouts and internal_timeouts as
well, but no luck.

Here is how my kvm_tests.cfg looks like.
# colorful prompt, so migration_test_command will fail, try with telnet
ssh_prompt = C:\Users\Administrator
ssh_port = 23
use_telnet = yes
Any clues ?
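For reference, the last-line matching step that keeps showing "filtered data is Escape character is '^]'." can be sketched like this; get_last_line and match_patterns are assumed shapes inferred from the call sites above, not the exact autotest code:

```python
import re

def get_last_line(data):
    """Return the last non-empty line of the output accumulated so far.
    Only this line is matched against the prompt/password patterns, so
    the filtered data stays "Escape character is '^]'." until the guest
    actually sends another line."""
    lines = [line for line in data.splitlines() if line.strip()]
    return lines[-1] if lines else ""

def match_patterns(text, patterns):
    """Return the index of the first regex in `patterns` that matches
    `text`, or None if nothing matches yet."""
    for i, pattern in enumerate(patterns):
        if re.search(pattern, text):
            return i
    return None
```

Note that a literal Windows prompt such as C:\Users\Administrator contains backslashes that are regex escapes, so it would need to go through re.escape() before being used as a pattern.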

-- 
Sudhir Kumar


autotest exception in windows 2003 guest (ValueError: invalid literal for int() with base 10: '%errorlevel%)

2009-07-27 Thread sudhir kumar
Hi I have been getting the following exception in autotest for windows
2003 datacenter.
    status = int("\n".join(status.splitlines()[1:-1]).strip())
  ValueError: invalid literal for int() with base 10:
  '%errorlevel%\n\x1b]0;~\x07\n\x1b[32madministra...@ibm-n81hj962hdx
  \x1b[33m~\x1b[0m'

Is there any other command with which we can replace
ssh_status_test_command = echo %errorlevel%

echo $? also does not work in windows.


-- 
Sudhir Kumar


[AUTOTEST] print login command and change some timeout values

2009-07-24 Thread sudhir kumar
This patch does two small things.
1. Prints the guest login command to debug messages.
2. Changes the guest login timeout to 240 seconds. I see that the timeout for
*.wait_for() functions in the boot test is 240 seconds, while in reboot it is
120 seconds, which causes the test to fail. We might have missed it by mistake.
240 seconds is a reasonable timeout duration. This patch fixes that.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/kvm_utils.py
===
--- autotest.orig/client/tests/kvm/kvm_utils.py
+++ autotest/client/tests/kvm/kvm_utils.py
@@ -637,6 +637,7 @@ def remote_login(command, password, prom
     password_prompt_count = 0

     logging.debug("Trying to login...")
+    logging.debug("Guest login Command: %s" % command)

     while True:
         (match, text) = sub.read_until_last_line_matches(
Index: autotest/client/tests/kvm/kvm_tests.py
===
--- autotest.orig/client/tests/kvm/kvm_tests.py
+++ autotest/client/tests/kvm/kvm_tests.py
@@ -48,7 +48,7 @@ def run_boot(test, params, env):

         logging.info("Guest is down; waiting for it to go up again...")

-        session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
+        session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
         if not session:
             raise error.TestFail("Could not log into guest after reboot")

@@ -88,7 +88,7 @@ def run_shutdown(test, params, env):

     logging.info("Shutdown command sent; waiting for guest to go down...")

-    if not kvm_utils.wait_for(vm.is_dead, 120, 0, 1):
+    if not kvm_utils.wait_for(vm.is_dead, 240, 0, 1):
         raise error.TestFail("Guest refuses to go down")

     logging.info("Guest is down")
@@ -445,7 +445,7 @@ def run_yum_update(test, params, env):

     logging.info("Logging into guest...")

-    session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
+    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
     if not session:
         message = "Could not log into guest"
         logging.error(message)



-- 
Sudhir Kumar


Re: [AUTOTEST] print login command and change some timeout values

2009-07-24 Thread sudhir kumar
Ah!
As reported earlier the patch might be wrapped up. So sending as an
attachment too.

On Fri, Jul 24, 2009 at 4:58 PM, sudhir kumarsmalik...@gmail.com wrote:
 This patch does two small things.
 1. Prints the guest login command to debug messages.
 2. Changes the guest login timeout to 240 seconds. I see the timeout for
 *.wait_for() functions in boot test is 240 seconds, while in reboot is 120
 seconds which causes the test to fail. We might have missed it by mistake.
 240 seconds is a reasonable timeout duration. This patch fixes that.

 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: autotest/client/tests/kvm/kvm_utils.py
 ===
 --- autotest.orig/client/tests/kvm/kvm_utils.py
 +++ autotest/client/tests/kvm/kvm_utils.py
 @@ -637,6 +637,7 @@ def remote_login(command, password, prom
     password_prompt_count = 0

     logging.debug(Trying to login...)
 +    logging.debug(Guest login Command: %s % command)

     while True:
         (match, text) = sub.read_until_last_line_matches(
 Index: autotest/client/tests/kvm/kvm_tests.py
 ===
 --- autotest.orig/client/tests/kvm/kvm_tests.py
 +++ autotest/client/tests/kvm/kvm_tests.py
 @@ -48,7 +48,7 @@ def run_boot(test, params, env):

         logging.info(Guest is down; waiting for it to go up again...)

 -        session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
 +        session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
         if not session:
             raise error.TestFail(Could not log into guest after reboot)

 @@ -88,7 +88,7 @@ def run_shutdown(test, params, env):

     logging.info(Shutdown command sent; waiting for guest to go down...)

 -    if not kvm_utils.wait_for(vm.is_dead, 120, 0, 1):
 +    if not kvm_utils.wait_for(vm.is_dead, 240, 0, 1):
         raise error.TestFail(Guest refuses to go down)

     logging.info(Guest is down)
 @@ -445,7 +445,7 @@ def run_yum_update(test, params, env):

     logging.info(Logging into guest...)

 -    session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
 +    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
     if not session:
         message = Could not log into guest
         logging.error(message)



 --
 Sudhir Kumar




-- 
Sudhir Kumar
Re: [AUTOTEST] print login command and change some timeout values

2009-07-24 Thread sudhir kumar
On Fri, Jul 24, 2009 at 5:54 PM, Michael Goldishmgold...@redhat.com wrote:

 - sudhir kumar smalik...@gmail.com wrote:

 This patch does two small things.
 1. Prints the guest login command to debug messages.

 Why do we want to do that?
I do not see any harm in that. We are already logging "Trying to
login...". If login sometimes fails, we can check by manually typing the
same command and seeing what went wrong. That print statement has helped
me quite a number of times in the past.

 2. Changes the guest login timeout to 240 seconds. I see the timeout
 for
 *.wait_for() functions in boot test is 240 seconds, while in reboot is
 120
 seconds which causes the test to fail. We might have missed it by
 mistake.
 240 seconds is a reasonable timeout duration. This patch fixes that.

 Using the same timeout value everywhere makes sense, but it surprises me
 that tests are failing because 120 isn't enough. It sounds like the host
 has to be heavily loaded for the boot to take longer than 2 minutes. But
 if it happened to you then let's increase the timeout.

Yes please.
The test failed just before the sshd daemon was about to come up, which
shows that 120 seconds is not sufficient. The host was not loaded at all
and is a pretty high-end machine.
Thanks.
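For reference, the kvm_utils.wait_for() helper these call sites use is essentially a timeout-bounded poll loop. A minimal re-implementation (an illustrative sketch, not the autotest source) shows why raising the timeout argument from 120 to 240 matters:

```python
import time

def wait_for(func, timeout, start=0, step=1):
    """Poll func until it returns a true value or timeout seconds elapse.

    Mirrors the wait_for(vm.ssh_login, 240, 0, 2) call sites quoted in
    the patch; this sketch is illustrative, not the autotest source.
    """
    time.sleep(start)                 # optional initial delay
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = func()
        if result:                    # e.g. a live ssh session object
            return result
        time.sleep(step)              # poll interval between attempts
    return None                       # timed out; the caller raises TestFail
```

With this shape, the second argument simply bounds how long the caller is willing to keep polling before giving up.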

 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: autotest/client/tests/kvm/kvm_utils.py
 ===================================================================
 --- autotest.orig/client/tests/kvm/kvm_utils.py
 +++ autotest/client/tests/kvm/kvm_utils.py
 @@ -637,6 +637,7 @@ def remote_login(command, password, prom
      password_prompt_count = 0
 
      logging.debug("Trying to login...")
 +    logging.debug("Guest login Command: %s" % command)
 
      while True:
          (match, text) = sub.read_until_last_line_matches(
 Index: autotest/client/tests/kvm/kvm_tests.py
 ===================================================================
 --- autotest.orig/client/tests/kvm/kvm_tests.py
 +++ autotest/client/tests/kvm/kvm_tests.py
 @@ -48,7 +48,7 @@ def run_boot(test, params, env):
 
          logging.info("Guest is down; waiting for it to go up again...")
 
 -        session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
 +        session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
          if not session:
              raise error.TestFail("Could not log into guest after reboot")
 
 @@ -88,7 +88,7 @@ def run_shutdown(test, params, env):
 
      logging.info("Shutdown command sent; waiting for guest to go down...")
 
 -    if not kvm_utils.wait_for(vm.is_dead, 120, 0, 1):
 +    if not kvm_utils.wait_for(vm.is_dead, 240, 0, 1):
          raise error.TestFail("Guest refuses to go down")
 
      logging.info("Guest is down")
 
 @@ -445,7 +445,7 @@ def run_yum_update(test, params, env):
 
      logging.info("Logging into guest...")
 
 -    session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
 +    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
      if not session:
          message = "Could not log into guest"
          logging.error(message)



 --
 Sudhir Kumar




-- 
Sudhir Kumar
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM_AUTOTEST] add kvm hugepage variant

2009-07-21 Thread sudhir kumar
The patch looks pretty clean to me. I was running a small
hugetlbfs script doing the same, but it's good now that the script is
being incorporated into the test.

On Tue, Jul 21, 2009 at 9:34 PM, Lukáš Doktorldok...@redhat.com wrote:
 Well, thank you for notifications, I'll keep them in my mind.

 Also the problem with mempath vs. mem-path is solved. It was just a
 misspelling in one version of KVM.

 * fixed patch attached

 Dne 20.7.2009 14:58, Lucas Meneghel Rodrigues napsal(a):

 On Fri, 2009-07-10 at 12:01 +0200, Lukáš Doktor wrote:

 After discussion I split the patches.

 Hi Lukáš, sorry for the delay answering your patch. Looks good to me in
 general, I have some remarks to make:

 1) When posting patches to the autotest kvm tests, please cross post the
 autotest mailing list (autot...@test.kernel.org) and the KVM list.

 2) About scripts to prepare the environment to perform tests - we've had
 some discussion about including shell scripts on autotest. Bottom line,
 autotest has a policy of not including non python code when possible
 [1]. So, would you mind re-creating your hugepage setup code in python
 and re-sending it?
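For what it's worth, a Python version of such a hugepage setup could look roughly like this. The /proc paths are the standard kernel interfaces, but the function names, the mount point, and the error handling are illustrative assumptions, not the committed autotest code (and the setup itself must run as root):

```python
import os

def pages_needed(guest_mem_mb, hugepage_kb=2048):
    # Hugepages required to back a guest of guest_mem_mb megabytes,
    # rounded up to a whole page (2048 kB is the common x86 size).
    return -(-guest_mem_mb * 1024 // hugepage_kb)

def setup_hugepages(count, mount_point="/mnt/kvm_hugepage"):
    """Reserve `count` hugepages and mount hugetlbfs for qemu's -mem-path.

    Illustrative sketch of the shell setup script discussed above;
    requires root.
    """
    # Ask the kernel to reserve the pages; it may grant fewer if memory
    # is fragmented, so read the count back and check.
    with open("/proc/sys/vm/nr_hugepages", "w") as f:
        f.write("%d\n" % count)
    with open("/proc/sys/vm/nr_hugepages") as f:
        got = int(f.read())
    if got < count:
        raise OSError("only %d of %d hugepages reserved" % (got, count))
    # Mount hugetlbfs so qemu can back guest RAM from it.
    if not os.path.ismount(mount_point):
        os.makedirs(mount_point, exist_ok=True)
        if os.system("mount -t hugetlbfs none " + mount_point) != 0:
            raise OSError("mounting hugetlbfs failed")
    return mount_point
```

For example, a 1024 MB guest with 2 MB pages needs pages_needed(1024) == 512 pages reserved before qemu is started with -mem-path.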

 Thanks for your contribution, looking forward getting it integrated to
 our tests.

 [1] Unless when it is not practical for testing purposes - writing tests
 in C is just fine, for example.

 This patch adds a kvm_hugepage variant. It prepares the host system and
 starts the vm with the -mem-path option. It does not clean up after itself,
 because it's impossible to unmount and free hugepages before all guests are
 destroyed.

 I need to ask you what to do with the change of qemu parameter. Newest
 versions are using -mempath instead of -mem-path. This is impossible to
 fix using the current config file. I can see 2 solutions:
 1) direct change in kvm_vm.py (parse output and try another param)
 2) detect qemu capabilities outside and create an additional layer (better
 for future occurrences)

 Dne 9.7.2009 11:24, Lukáš Doktor napsal(a):

 This patch adds a kvm_hugepage variant. It prepares the host system and
 starts the vm with the -mem-path option. It does not clean up after itself,
 because it's impossible to unmount and free hugepages before all guests are
 destroyed.

 There is also added autotest.libhugetlbfs test.

 I need to ask you what to do with the change of qemu parameter. Newest
 versions are using -mempath instead of -mem-path. This is impossible to
 fix using the current config file. I can see 2 solutions:
 1) direct change in kvm_vm.py (parse output and try another param)
 2) detect qemu capabilities outside and create an additional layer (better
 for future occurrences)
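Solution 2 — probing qemu's capabilities instead of hard-coding the flag — could be sketched like this. The flag names come from the thread; the helper names and the reliance on `qemu -help` output are illustrative assumptions, not committed code:

```python
import subprocess

def choose_mem_path_flag(help_text):
    # Prefer the classic spelling, fall back to the variant some builds
    # briefly used, else report no hugepage-backing support at all.
    if "-mem-path" in help_text:
        return "-mem-path"
    if "-mempath" in help_text:
        return "-mempath"
    return None

def qemu_help(qemu_binary="qemu"):
    # Probe the binary once; real code would cache this result.
    return subprocess.run([qemu_binary, "-help"],
                          capture_output=True, text=True).stdout
```

A caller could then do `flag = choose_mem_path_flag(qemu_help())` and append `" %s /mnt/kvm_hugepage" % flag` to the command line only when a flag was found.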

 Tested by:ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5



 ___
 Autotest mailing list
 autot...@test.kernel.org
 http://test.kernel.org/cgi-bin/mailman/listinfo/autotest





-- 
Sudhir Kumar
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: How much physical memory can be used to run domains in a KVM machine?

2009-07-17 Thread sudhir kumar
On Fri, Jul 17, 2009 at 12:47 PM, Dor Laordl...@redhat.com wrote:
 On 07/17/2009 08:50 AM, Zhang Qian wrote:

 Hi,

 I have a KVM box which has 4GB physical memory totally, I'd like to
 know how much I can use to run my domains, and how much will be
 reserved by hypervisor(KVM) itself?
 Thanks!


 KVM and the Linux host use a relatively low amount of memory.
 Unlike other hypervisors you know, kvm does not reserve memory and is also
 able to swap the guest memory, so you can even use more than 4G for your
Is that true? I think we cannot allocate more memory than the
physical RAM. Or does upstream kvm support it? My kvm version is not
that old, but memory allocation failed for me when I tried to give the
whole memory on my host to the guest.

 guest. (Just note swapping will be slow)



 Regards,
 Qian





-- 
Sudhir Kumar


Re: [Autotest] [PATCH] Assign an UUID for each VM in kvm command line

2009-07-15 Thread sudhir kumar
On Thu, Jul 16, 2009 at 8:12 AM, Yolkfull Chowyz...@redhat.com wrote:
 On 07/15/2009 09:36 PM, Dor Laor wrote:
 On 07/15/2009 12:12 PM, Yolkfull Chow wrote:
 Would submit this patch which is from our internal kvm-autotest
 patches submitted by Jason.
 So that we could go on test case about parameters verification(UUID,
 DMI data etc).

 Signed-off-by: Yolkfull Chowyz...@redhat.com
 ---
   client/tests/kvm/kvm_vm.py |    4 
   1 files changed, 4 insertions(+), 0 deletions(-)

 diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
 index 503f636..68cc235 100644
 --- a/client/tests/kvm/kvm_vm.py
 +++ b/client/tests/kvm/kvm_vm.py
 @@ -287,6 +287,10 @@ class VM:
           elif params.get("display") == "nographic":
               qemu_cmd += " -nographic"
 
 +        uuid = os.popen("cat /proc/sys/kernel/random/uuid").readline().strip()
 +        if uuid:
 +            qemu_cmd += " -uuid %s" % uuid

 If you'll change the uuid on every run, the guest will notice that.
 Some guest (M$) might not love it.
 Why not use a static uuid or even just test uuid in a specific test
 without having it in all tests?
 Hi Dor, since we cannot use a static uuid for the stress_boot test,
 assigning a UUID only in a specific test is a good idea. We could use an
 option like assign_uuid = yes for that specific test?

This will be far better and more flexible.
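The `assign_uuid = yes` idea could be gated roughly like this (a sketch: the params dict and key name follow the thread's suggestion, and uuid.uuid4() stands in for reading /proc/sys/kernel/random/uuid):

```python
import uuid

def uuid_option(params):
    # Only generate a fresh UUID when the test explicitly asks for one,
    # so guests that dislike a changing UUID (e.g. Windows) are left alone.
    if params.get("assign_uuid") == "yes":
        return " -uuid %s" % uuid.uuid4()
    return ""
```

Appending `uuid_option(params)` to the qemu command line leaves every other test's command line unchanged.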

 btw: why you're at it, please add uuid to the block devices too.
 + the -smbios option.
 Do you mean assign serial number for block devices?

 Thanks for suggestions. :)

 Thanks,
 dor

 +
           return qemu_cmd





 --
 Yolkfull
 Regards,





-- 
Sudhir Kumar


Re: [Autotest] [RFC] KVM-Autotest: remote shell utility for Windows guests

2009-07-15 Thread sudhir kumar
 as a superuser:
  - Copy the program to the guest (C:\ should be fine).
  - Disable the firewall.
  - Enable the Administrator account.
  - Make Windows logon automatically using the Administrator account.
  - Make the server program run automatically on startup.
 
  I'm attaching a setup.bat file that does the above.
  setup.bat and rss.exe should be packaged together in an ISO and sent
 to the
  guest with -cdrom.  Note that setup.bat assumes that rss.exe is in
 D:\.
  This will not be true if we run the guest with multiple hard drives,
 so we'll
  have to do something about that.
 
 
  Please send me comments if you have any.  If you think this isn't a
 proper
  solution, or if you can suggest a better one, please let me know.

 So far we have your utility, STAF (sugested by Yaniv) and *possibly*
 qarsh. I am going to mail the qarsh authors asking for questions.

 I've considered STAF, which I know Yaniv likes very much, but it isn't
 interactive (as far as I know) so if we use it we'll end up with two very
 different methods of talking to guests (interactive with Linux and
 non-interactive with Windows) and we'll have to maintain two separate
 APIs. STAF also has to be setup on the host and guest (there's some
 permission related configuration to do).

 After giving the qarsh code a quick look I got the impression that some
 of it is very unix specific, and even if it makes sense to port it, the
 resulting code will be much longer than 450 lines, more difficult to
 maintain, and possibly less reliable. When a boot test fails we'd like to
 be able to put the blame on KVM with a high degree of certainty. If we
 use complex utilities we may not know for sure that they work flawlessly.
 In any case, we should see the author's reply and then make an informed
 decision.

 Note that writing a homemade solution was a last resort for me -- I spent
 as much time looking for a reliable existing program as I did writing
 this utility.

  Lucas: if we commit this, where should it go? Under tests/kvm/src
 maybe?

 No, src is more commonly used as a build directory rather than a
 directory that stores packages that are going to be uncompressed and
 build. Perhaps deps would be a good name for such applications, since
 once they get in we'll start depending on them.

 Sorry for the delay answering, Michael.

 Lucas


Thanks again for starting on this particular problem!

-- 
Sudhir Kumar


Re: [AUTOTEST] [PATCH 1/5] update ltp.patch

2009-07-15 Thread sudhir kumar
Martin, Lucas,
Do I need to resend the patches as attachments? I hope you have had a
look at the patches. Please let me know if something more still needs
to be done. I am happy to address all your comments.

On Mon, Jul 13, 2009 at 11:33 AM, sudhir kumarsmalik...@gmail.com wrote:
 This patch updates the ltp.patch in autotest to disable a few testcases.
 This patch disables the default execution of some testcases which are
 either broken or are not required to be executed.

 If someone wants to execute only a subset of the testcases then one can
 update this patch to achieve the required flexibility.

 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: autotest/client/tests/ltp/ltp.patch
 ===
 --- autotest.orig/client/tests/ltp/ltp.patch
 +++ autotest/client/tests/ltp/ltp.patch
 @@ -1,38 +1,55 @@
 -diff -urN ltp-full-20080229_vanilla/runtest/syscalls
 ltp-full-20080229/runtest/syscalls
  ltp-full-20080229_vanilla/runtest/syscalls 2008-02-28
 23:55:41.0 -0800
 -+++ ltp-full-20080229/runtest/syscalls 2008-03-07 10:35:28.0 -0800
 -@@ -981,7 +981,7 @@
 +
 +Index: ltp-full-20090630/runltp
 +===
 +--- ltp-full-20090630.orig/runltp
  ltp-full-20090630/runltp
 +@@ -536,7 +536,6 @@ main()
 +                          ${LTPROOT}/runtest/pty                     \
 +                          ${LTPROOT}/runtest/containers              \
 +                          ${LTPROOT}/runtest/fs_bind                 \
 +-                         ${LTPROOT}/runtest/controllers             \
 +                          ${LTPROOT}/runtest/filecaps                \
 +                          ${LTPROOT}/runtest/cap_bounds              \
 +                          ${LTPROOT}/runtest/fcntl-locktests         \
 +Index: ltp-full-20090630/runtest/syscalls
 +===
 +--- ltp-full-20090630.orig/runtest/syscalls
  ltp-full-20090630/runtest/syscalls
 +@@ -1249,7 +1249,7 @@ vhangup01 vhangup01
  vhangup02 vhangup02
 -
 +
  #vmsplice test cases
  -vmsplice01 vmsplice01
  +#vmsplice01 vmsplice01
 -
 +
  wait02 wait02
 -
 -diff -urN ltp-full-20080229_vanilla/testcases/kernel/syscalls/paging/Makefile
 ltp-full-20080229/testcases/kernel/syscalls/paging/Makefile
  ltp-full-20080229_vanilla/testcases/kernel/syscalls/paging/Makefile
       2008-02-28 23:55:46.0 -0800
 -+++ ltp-full-20080229/testcases/kernel/syscalls/paging/Makefile
  2008-03-07 10:37:48.0 -0800
 -@@ -25,7 +25,9 @@
 +
 +Index: ltp-full-20090630/testcases/kernel/syscalls/paging/Makefile
 +===
 +--- ltp-full-20090630.orig/testcases/kernel/syscalls/paging/Makefile
  ltp-full-20090630/testcases/kernel/syscalls/paging/Makefile
 +@@ -25,7 +25,9 @@ TARGETS = $(patsubst %.c,%,$(SRCS))
  all: $(TARGETS)

  install:
  +ifneq ($(TARGETS),)
 -       @set -e; for i in $(TARGETS); do ln -f $$i ../../../bin/$$i ; done
 +        @set -e; for i in $(TARGETS); do ln -f $$i ../../../bin/$$i ; done
  +endif

  clean:
 -       rm -f $(TARGETS)
 -diff -urN ltp-full-20080229_vanilla/testcases/network/nfsv4/acl/Makefile
 ltp-full-20080229/testcases/network/nfsv4/acl/Makefile
  ltp-full-20080229_vanilla/testcases/network/nfsv4/acl/Makefile
  2008-02-28 23:55:52.0 -0800
 -+++ ltp-full-20080229/testcases/network/nfsv4/acl/Makefile
 2008-03-07 10:38:23.0 -0800
 -@@ -19,7 +19,9 @@
 -       $(CC) $(CFLAGS) $(LDFLAGS) -o acl1 acl1.c $(LIBS)
 +        rm -f $(TARGETS)
 +Index: ltp-full-20090630/testcases/network/nfsv4/acl/Makefile
 +===
 +--- ltp-full-20090630.orig/testcases/network/nfsv4/acl/Makefile
  ltp-full-20090630/testcases/network/nfsv4/acl/Makefile
 +@@ -19,7 +19,9 @@ acl1: acl1.c
 +        $(CC) $(CFLAGS) $(LDFLAGS) -o acl1 acl1.c $(LIBS)

  install: $(ACLTESTS)
  +ifneq ($(ACLTESTS),)
 -       @set -e; for i in $(ACLTESTS); do ln -f $$i ../../../bin ; done
 +        @set -e; for i in $(ACLTESTS); do ln -f $$i ../../../bin ; done
  +endif

  clean:
 -       rm -f $(ACLTESTS)
 +        rm -f $(ACLTESTS)
 +


 --
 Sudhir Kumar




-- 
Sudhir Kumar


[AUTOTEST] [PATCH 0/5] Add latest LTP test in autotest (v2)

2009-07-13 Thread sudhir kumar
Hi,
Here is version 2 of the patches to include the latest LTP in autotest
and enable execution under kvm guests. I have incorporated all the
comments and the suggested approach. Please give a quick review and let me
know if something better should be done.

Thanks
-- 
Sudhir Kumar


[AUTOTEST] [PATCH 1/5] update ltp.patch

2009-07-13 Thread sudhir kumar
This patch updates the ltp.patch in autotest to disable a few testcases.
This patch disables the default execution of some testcases which are
either broken or are not required to be executed.

If someone wants to execute only a subset of the testcases then one can
update this patch to achieve the required flexibility.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/ltp/ltp.patch
===================================================================
--- autotest.orig/client/tests/ltp/ltp.patch
+++ autotest/client/tests/ltp/ltp.patch
@@ -1,38 +1,55 @@
-diff -urN ltp-full-20080229_vanilla/runtest/syscalls
ltp-full-20080229/runtest/syscalls
 ltp-full-20080229_vanilla/runtest/syscalls 2008-02-28
23:55:41.0 -0800
-+++ ltp-full-20080229/runtest/syscalls 2008-03-07 10:35:28.0 -0800
-@@ -981,7 +981,7 @@
+
+Index: ltp-full-20090630/runltp
+===================================================================
+--- ltp-full-20090630.orig/runltp
++++ ltp-full-20090630/runltp
+@@ -536,7 +536,6 @@ main()
+  ${LTPROOT}/runtest/pty \
+  ${LTPROOT}/runtest/containers  \
+  ${LTPROOT}/runtest/fs_bind \
+- ${LTPROOT}/runtest/controllers \
+  ${LTPROOT}/runtest/filecaps\
+  ${LTPROOT}/runtest/cap_bounds  \
+  ${LTPROOT}/runtest/fcntl-locktests \
+Index: ltp-full-20090630/runtest/syscalls
+===================================================================
+--- ltp-full-20090630.orig/runtest/syscalls
++++ ltp-full-20090630/runtest/syscalls
+@@ -1249,7 +1249,7 @@ vhangup01 vhangup01
  vhangup02 vhangup02
-
+
  #vmsplice test cases
 -vmsplice01 vmsplice01
 +#vmsplice01 vmsplice01
-
+
  wait02 wait02
-
-diff -urN ltp-full-20080229_vanilla/testcases/kernel/syscalls/paging/Makefile
ltp-full-20080229/testcases/kernel/syscalls/paging/Makefile
 ltp-full-20080229_vanilla/testcases/kernel/syscalls/paging/Makefile
   2008-02-28 23:55:46.0 -0800
-+++ ltp-full-20080229/testcases/kernel/syscalls/paging/Makefile
 2008-03-07 10:37:48.0 -0800
-@@ -25,7 +25,9 @@
+
+Index: ltp-full-20090630/testcases/kernel/syscalls/paging/Makefile
+===================================================================
+--- ltp-full-20090630.orig/testcases/kernel/syscalls/paging/Makefile
++++ ltp-full-20090630/testcases/kernel/syscalls/paging/Makefile
+@@ -25,7 +25,9 @@ TARGETS = $(patsubst %.c,%,$(SRCS))
  all: $(TARGETS)

  install:
 +ifneq ($(TARGETS),)
-   @set -e; for i in $(TARGETS); do ln -f $$i ../../../bin/$$i ; done
+@set -e; for i in $(TARGETS); do ln -f $$i ../../../bin/$$i ; done
 +endif

  clean:
-   rm -f $(TARGETS)
-diff -urN ltp-full-20080229_vanilla/testcases/network/nfsv4/acl/Makefile
ltp-full-20080229/testcases/network/nfsv4/acl/Makefile
 ltp-full-20080229_vanilla/testcases/network/nfsv4/acl/Makefile
 2008-02-28 23:55:52.0 -0800
-+++ ltp-full-20080229/testcases/network/nfsv4/acl/Makefile
2008-03-07 10:38:23.0 -0800
-@@ -19,7 +19,9 @@
-   $(CC) $(CFLAGS) $(LDFLAGS) -o acl1 acl1.c $(LIBS)
+rm -f $(TARGETS)
+Index: ltp-full-20090630/testcases/network/nfsv4/acl/Makefile
+===================================================================
+--- ltp-full-20090630.orig/testcases/network/nfsv4/acl/Makefile
++++ ltp-full-20090630/testcases/network/nfsv4/acl/Makefile
+@@ -19,7 +19,9 @@ acl1: acl1.c
+$(CC) $(CFLAGS) $(LDFLAGS) -o acl1 acl1.c $(LIBS)

  install: $(ACLTESTS)
 +ifneq ($(ACLTESTS),)
-   @set -e; for i in $(ACLTESTS); do ln -f $$i ../../../bin ; done
+@set -e; for i in $(ACLTESTS); do ln -f $$i ../../../bin ; done
 +endif

  clean:
-   rm -f $(ACLTESTS)
+rm -f $(ACLTESTS)
+


-- 
Sudhir Kumar


[AUTOTEST] [PATCH 2/5] add kvm_ltp.patch for kvm guests

2009-07-13 Thread sudhir kumar
Disable the testcases which are not required to be executed under kvm guests.
This patch is specific to runs under kvm guests and will not be applied for
bare-metal runs. Therefore, anyone who wants to include or exclude testcases
from the test runs can simply update this patch.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/ltp/kvm_ltp.patch
===================================================================
--- /dev/null
+++ autotest/client/tests/ltp/kvm_ltp.patch
@@ -0,0 +1,27 @@
+This patch disables the default execution of some testcases which are
+not required to be executed under kvm guests or are supposed to break
+or fail under kvm guests.
+
+Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com
+
+Index: ltp-full-20090630/runltp
+===================================================================
+--- ltp-full-20090630.orig/runltp
++++ ltp-full-20090630/runltp
+@@ -534,7 +534,6 @@ main()
+  ${LTPROOT}/runtest/math\
+  ${LTPROOT}/runtest/nptl\
+  ${LTPROOT}/runtest/pty \
+- ${LTPROOT}/runtest/containers  \
+  ${LTPROOT}/runtest/fs_bind \
+  ${LTPROOT}/runtest/filecaps\
+  ${LTPROOT}/runtest/cap_bounds  \
+@@ -542,7 +541,6 @@ main()
+  ${LTPROOT}/runtest/connectors  \
+  ${LTPROOT}/runtest/admin_tools \
+  ${LTPROOT}/runtest/timers  \
+- ${LTPROOT}/runtest/power_management_tests  \
+  ${LTPROOT}/runtest/numa\
+  ${LTPROOT}/runtest/hugetlb \
+  ${LTPROOT}/runtest/commands\
+


-- 
Sudhir Kumar


[AUTOTEST] [PATCH 3/5] add ltp control file for kvm guests

2009-07-13 Thread sudhir kumar
This patch adds the control file under kvm/autotest_control.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/autotest_control/ltp.control
===================================================================
--- /dev/null
+++ autotest/client/tests/kvm/autotest_control/ltp.control
@@ -0,0 +1,13 @@
+NAME = "LTP"
+AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
+TIME = "MEDIUM"
+TEST_CATEGORY = "FUNCTIONAL"
+TEST_CLASS = "KERNEL"
+TEST_TYPE = "CLIENT"
+DOC = """
+Linux Test Project: A collection of various functional testsuites
+to test stability and reliability of Linux. For further details see
+http://ltp.sourceforge.net/
+"""
+
+job.run_test('ltp', guest="kvm")


-- 
Sudhir Kumar


[AUTOTEST] [PATCH 4/5] update ltp wrapper in autotest

2009-07-13 Thread sudhir kumar
This patch updates the ltp wrapper in autotest to execute the latest LTP.
At present autotest contains an LTP that is more than a year old, and many
testcases have been added to LTP in that period. So this patch updates
the wrapper to run the June 2009 release of LTP:
http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz

I have added an option which generates a fancy html results file. Also the
run is left to be a default run as expected.

This patch also adds the facility to apply kvm_ltp.patch which can customize
the test execution under kvm guests.

For autotest users, please untar the results file I am sending, run
cd results/default; firefox results.html, click ltp_results.html
This is a symlink to the ltp_results.html which is generated by ltp.

Please provide your comments, concerns and issues.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/ltp/ltp.py
===================================================================
--- autotest.orig/client/tests/ltp/ltp.py
+++ autotest/client/tests/ltp/ltp.py
@@ -23,13 +23,17 @@ class ltp(test.test):
         self.job.require_gcc()
 
 
-    # http://prdownloads.sourceforge.net/ltp/ltp-full-20080229.tgz
-    def setup(self, tarball = 'ltp-full-20080229.tar.bz2'):
+    # http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz
+    def setup(self, tarball = 'ltp-full-20090630.tgz', guest=None):
         tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
         utils.extract_tarball_to_dir(tarball, self.srcdir)
         os.chdir(self.srcdir)
 
-        utils.system('patch -p1 < ../ltp.patch')
+        try:
+            utils.system('patch -p1 < ../ltp.patch')
+            print "Patch ltp.patch applied successfully"
+        except:
+            print "Patch ltp.patch failed to apply"
 
         # comment the capability tests if we fail to load the capability module
         try:
@@ -37,6 +41,14 @@ class ltp(test.test):
         except error.CmdError, detail:
             utils.system('patch -p1 < ../ltp_capability.patch')
 
+        # if we are running under kvm guests apply kvm_ltp.patch
+        if guest == "kvm":
+            try:
+                utils.system('patch -p1 < ../kvm_ltp.patch')
+                print "Patch kvm_ltp.patch applied successfully"
+            except:
+                print "Patch kvm_ltp.patch failed to apply"
+
         utils.system('cp ../scan.c pan/')   # saves having lex installed
         utils.system('make -j %d' % utils.count_cpus())
         utils.system('yes n | make install')
@@ -52,8 +64,9 @@ class ltp(test.test):
         # In case the user wants to run another test script
         if script == 'runltp':
             logfile = os.path.join(self.resultsdir, 'ltp.log')
+            htmlfile = os.path.join(self.resultsdir, 'ltp_results.html')
             failcmdfile = os.path.join(self.debugdir, 'failcmdfile')
-            args2 = '-q -l %s -C %s -d %s' % (logfile, failcmdfile, self.tmpdir)
+            args2 = '-l %s -g %s -C %s -d %s' % (logfile, htmlfile, failcmdfile, self.tmpdir)
             args = args + ' ' + args2
 
             cmd = os.path.join(self.srcdir, script) + ' ' + args


-- 
Sudhir Kumar


[AUTOTEST] [PATCH 5/5] view ltp_results.html under kvm guests' results.html

2009-07-13 Thread sudhir kumar
This patch creates a link to the results html file generated by the test
under autotest. This is specific to the kvm part only. The assumption made is
that the file name is test_name_results.html and it is located under
test_name/results/ directory. This helps in quickly viewing the test results.

The attached tar file contains the full results directory. The results.html file
points to ltp_results.html which looks quite fancy.

Please have a look at the results and the patch and provide your comments.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/kvm_tests.py
===================================================================
--- autotest.orig/client/tests/kvm/kvm_tests.py
+++ autotest/client/tests/kvm/kvm_tests.py
@@ -391,6 +391,15 @@ def run_autotest(test, params, env):
     if not vm.scp_from_remote("autotest/results/default/*", guest_results_dir):
         logging.error("Could not copy results back from guest")
 
+    # Some tests create an html file as a result; link it to be viewed under
+    # the results statistics. We assume this file is located under the
+    # test_name/results/ directory and named test_name_results.html,
+    # e.g. ltp_results.html, vmmstress_results.html
+    html_file = test_name + "_results.html"
+    html_path = os.path.join(guest_results_dir, test_name, "results", html_file)
+    if os.path.exists(html_path):
+        os.symlink(html_path, os.path.join(test.debugdir, html_file))
+
     # Fail the test if necessary
     if status_fail:
         raise error.TestFail(message_fail)



-- 
Sudhir Kumar


Re: [autotest] [PATCH 1/6] add ebizzy in autotest

2009-07-12 Thread sudhir kumar
On Sat, Jul 11, 2009 at 6:05 AM, Martin Blighmbl...@google.com wrote:
 On Fri, Jul 10, 2009 at 4:29 AM, sudhir kumarsmalik...@gmail.com wrote:
 So is there any plan for adding this patch set in the patch queue? I
 would love to incorporate all the comments if any.

 Yup, just was behind on patches.

 I added it now - the mailer you are using seems to chew patches fairly
 thoroughly though ... if it's gmail, it does that ... might want to just
 attach as text ?
Thanks!
Ah! I have been using gmail only in text mode. I was unable to
subscribe to the list using my imap id (and I use the mutt client for that),
though. Is this problem with gmail known to all? Any workaround?


 On Wed, Jul 8, 2009 at 1:47 PM, sudhir kumarsmalik...@gmail.com wrote:
 This patch adds the wrapper for ebizzy into autotest. here is the link
 to get a copy of the test tarball.
 http://sourceforge.net/project/platformdownload.php?group_id=202378sel_platform=3809

 Please review the patch and provide your comments.


 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: autotest/client/tests/ebizzy/control
 ===================================================================
 --- /dev/null
 +++ autotest/client/tests/ebizzy/control
 @@ -0,0 +1,11 @@
 +NAME = "ebizzy"
 +AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
 +TIME = "MEDIUM, VARIABLE"
 +TEST_CATEGORY = "FUNCTIONAL"
 +TEST_CLASS = "SYSTEM STRESS"
 +TEST_TYPE = "CLIENT"
 +DOC = """
 +http://sourceforge.net/project/platformdownload.php?group_id=202378&sel_platform=3809
 +"""
 +
 +job.run_test('ebizzy', args = '-vv')
 Index: autotest/client/tests/ebizzy/ebizzy.py
 ===================================================================
 --- /dev/null
 +++ autotest/client/tests/ebizzy/ebizzy.py
 @@ -0,0 +1,32 @@
 +import os
 +from autotest_lib.client.bin import utils, test
 +from autotest_lib.client.common_lib import error
 +
 +class ebizzy(test.test):
 +    version = 3
 +
 +    def initialize(self):
 +        self.job.require_gcc()
 +
 +
 +    # http://sourceforge.net/project/downloading.php?group_id=202378&filename=ebizzy-0.3.tar.gz
 +    def setup(self, tarball = 'ebizzy-0.3.tar.gz'):
 +        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
 +        utils.extract_tarball_to_dir(tarball, self.srcdir)
 +        os.chdir(self.srcdir)
 +
 +        utils.system('[ -x configure ] && ./configure')
 +        utils.system('make')
 +
 +
 +    # Note: by default we always use mmap()
 +    def run_once(self, args = '', num_chunks = 1000, chunk_size = 512000, seconds = 100, num_threads = 100):
 +
 +        # TODO: Write small functions which will choose many of the above
 +        # variables dynamically, looking at the guest's total resources
 +        logfile = os.path.join(self.resultsdir, 'ebizzy.log')
 +        args2 = '-m -n %s -P -R -s %s -S %s -t %s' % (num_chunks, chunk_size, seconds, num_threads)
 +        args = args + ' ' + args2
 +
 +        cmd = os.path.join(self.srcdir, 'ebizzy') + ' ' + args
 +        utils.system(cmd)


 --
 Sudhir Kumar




 --
 Sudhir Kumar





-- 
Sudhir Kumar


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-12 Thread sudhir kumar
On Tue, Jul 7, 2009 at 12:07 AM, Martin Blighmbl...@google.com wrote:
 Issues: LTP has a history of some of the testcases getting broken.

 Right, that's always the concern with doing this.

 Anyways
 that has nothing to worry about with respect to autotest. One of the known 
 issue
 is broken memory controller issue with latest kernels(cgroups and memory
 resource controller enabled kernels). The workaround for them I use is to
 disable or delete those tests from ltp source and tar it again with the same
 name. Though people might use different workarounds for it.

 OK, Can we encapsulate this into the wrapper though, rather than making
 people do it manually? in the existing ltp.patch or something?


I have rebased the patches and updated the existing ltp.patch. I will
be sending them soon.
Also, for running ltp under kvm I have generated a patch, kvm_ltp.patch,
whose purpose is the same as ltp.patch but only for kvm guests. I will
be sending the results of execution on the guest as well as on bare
metal. Thanks everyone for your comments!


-- 
Sudhir Kumar


Re: [KVM_AUTOTEST] add autotest.libhugetlbfs test

2009-07-10 Thread sudhir kumar
This looks pretty clear now, as the two patches do two different
things. The guest large-page support is completely independent of the
host support of large pages for the guest.
The patches look good to me. Thanks for splitting them.

2009/7/10 Lukáš Doktor ldok...@redhat.com:
 After discussion I split the patches.

 this patch adds autotest.libhugetlbfs test which tests hugepage support
 inside of kvm guest.

 Tested by:ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5

 Dne 9.7.2009 11:24, Lukáš Doktor napsal(a):

 This patch adds kvm_hugepage variant. It prepares the host system and
 start vm with -mem-path option. It does not clean after itself, because
 it's impossible to unmount and free hugepages before all guests are
 destroyed.

 There is also added autotest.libhugetlbfs test.

 I need to ask you what to do about the change of qemu parameter. Newest
 versions are using -mempath instead of -mem-path. This is impossible to
 fix using the current config file. I can see 2 solutions:
 1) direct change in kvm_vm.py (parse output and try another param)
 2) detect qemu capabilities outside and create an additional layer (better
 for future occurrences)

 Tested by:ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5






-- 
Sudhir Kumar


Re: [autotest] [PATCH 1/6] add ebizzy in autotest

2009-07-10 Thread sudhir kumar
So is there any plan for adding this patch set to the patch queue? I
would love to incorporate any comments.

On Wed, Jul 8, 2009 at 1:47 PM, sudhir kumarsmalik...@gmail.com wrote:
 This patch adds the wrapper for ebizzy into autotest. Here is the link
 to get a copy of the test tarball.
 http://sourceforge.net/project/platformdownload.php?group_id=202378sel_platform=3809

 Please review the patch and provide your comments.


 Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

 Index: autotest/client/tests/ebizzy/control
 ===
 --- /dev/null
 +++ autotest/client/tests/ebizzy/control
 @@ -0,0 +1,11 @@
 +NAME = "ebizzy"
 +AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
 +TIME = "MEDIUM, VARIABLE"
 +TEST_CATEGORY = "FUNCTIONAL"
 +TEST_CLASS = "SYSTEM STRESS"
 +TEST_TYPE = "CLIENT"
 +DOC = """
 +http://sourceforge.net/project/platformdownload.php?group_id=202378&sel_platform=3809
 +"""
 +
 +job.run_test('ebizzy', args = '-vv')
 Index: autotest/client/tests/ebizzy/ebizzy.py
 ===
 --- /dev/null
 +++ autotest/client/tests/ebizzy/ebizzy.py
 @@ -0,0 +1,32 @@
 +import os
 +from autotest_lib.client.bin import utils, test
 +from autotest_lib.client.common_lib import error
 +
 +class ebizzy(test.test):
 +    version = 3
 +
 +    def initialize(self):
 +        self.job.require_gcc()
 +
 +
 +    # http://sourceforge.net/project/downloading.php?group_id=202378&filename=ebizzy-0.3.tar.gz
 +    def setup(self, tarball = 'ebizzy-0.3.tar.gz'):
 +        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
 +        utils.extract_tarball_to_dir(tarball, self.srcdir)
 +        os.chdir(self.srcdir)
 +
 +        utils.system('[ -x configure ] && ./configure')
 +        utils.system('make')
 +
 +
 +    # Note: by default we always use mmap()
 +    def run_once(self, args = '', num_chunks = 1000, chunk_size = 512000, seconds = 100, num_threads = 100):
 +
 +        # TODO: Write small functions which will choose many of the above
 +        # variables dynamically by looking at the guest's total resources
 +        logfile = os.path.join(self.resultsdir, 'ebizzy.log')
 +        args2 = '-m -n %s -P -R -s %s -S %s -t %s' % (num_chunks, chunk_size, seconds, num_threads)
 +        args = args + ' ' + args2
 +
 +        cmd = os.path.join(self.srcdir, 'ebizzy') + ' ' + args
 +        utils.system(cmd)


 --
 Sudhir Kumar




-- 
Sudhir Kumar


Re: [KVM_AUTOTEST] add kvm hugepage variant and test

2009-07-09 Thread sudhir kumar
Why do you want to use a control file and put libhugetlbfs as a
variant of autotest in kvm? Would just keeping the kvm_hugepage variant
not serve the same purpose? I have been using the hugetlbfs variant
for a long time, though without a pre script (I have done that manually). Am I
missing something here?
Rest all looks fine to me, except you need somewhere s/enaugh/enough

2009/7/9 Lukáš Doktor ldok...@redhat.com:
 This patch adds kvm_hugepage variant. It prepares the host system and start
 vm with -mem-path option. It does not clean after itself, because it's
 impossible to unmount and free hugepages before all guests are destroyed.

 There is also added autotest.libhugetlbfs test.

 I need to ask you what to do about the change of qemu parameter. Newest versions
 are using -mempath instead of -mem-path. This is impossible to fix using the
 current config file. I can see 2 solutions:
 1) direct change in kvm_vm.py (parse output and try another param)
 2) detect qemu capabilities outside and create an additional layer (better for
 future occurrences)

 Tested by:ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5
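Option (2), detecting qemu's capabilities before building the command line, could be sketched roughly as below. This is an illustration only, not autotest code: the helper names and the assumption that the flag name appears verbatim in `qemu -help` output are ours.

```python
import subprocess

def qemu_help_text(binary='qemu'):
    """Capture `qemu -help` output; some builds print it on stderr and
    exit non-zero, so collect both streams and ignore the exit code."""
    proc = subprocess.run([binary, '-help'], capture_output=True, text=True)
    return proc.stdout + proc.stderr

def choose_mem_path_flag(help_text):
    """Return the hugepage-backing flag this qemu build understands:
    older builds used -mem-path, some newer ones -mempath."""
    for flag in ('-mem-path', '-mempath'):
        if flag in help_text:
            return flag
    return None
```

A wrapper could call `choose_mem_path_flag(qemu_help_text())` once at VM-start time and fall back gracefully when neither flag is advertised.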




-- 
Sudhir Kumar


[autotest] [PATCH 1/6] add ebizzy in autotest

2009-07-08 Thread sudhir kumar
This patch adds the wrapper for ebizzy into autotest. Here is the link
to get a copy of the test tarball.
http://sourceforge.net/project/platformdownload.php?group_id=202378sel_platform=3809

Please review the patch and provide your comments.


Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/ebizzy/control
===
--- /dev/null
+++ autotest/client/tests/ebizzy/control
@@ -0,0 +1,11 @@
+NAME = "ebizzy"
+AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
+TIME = "MEDIUM, VARIABLE"
+TEST_CATEGORY = "FUNCTIONAL"
+TEST_CLASS = "SYSTEM STRESS"
+TEST_TYPE = "CLIENT"
+DOC = """
+http://sourceforge.net/project/platformdownload.php?group_id=202378&sel_platform=3809
+"""
+
+job.run_test('ebizzy', args = '-vv')
Index: autotest/client/tests/ebizzy/ebizzy.py
===
--- /dev/null
+++ autotest/client/tests/ebizzy/ebizzy.py
@@ -0,0 +1,32 @@
+import os
+from autotest_lib.client.bin import utils, test
+from autotest_lib.client.common_lib import error
+
+class ebizzy(test.test):
+    version = 3
+
+    def initialize(self):
+        self.job.require_gcc()
+
+
+    # http://sourceforge.net/project/downloading.php?group_id=202378&filename=ebizzy-0.3.tar.gz
+    def setup(self, tarball = 'ebizzy-0.3.tar.gz'):
+        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
+        utils.extract_tarball_to_dir(tarball, self.srcdir)
+        os.chdir(self.srcdir)
+
+        utils.system('[ -x configure ] && ./configure')
+        utils.system('make')
+
+
+    # Note: by default we always use mmap()
+    def run_once(self, args = '', num_chunks = 1000, chunk_size = 512000, seconds = 100, num_threads = 100):
+
+        # TODO: Write small functions which will choose many of the above
+        # variables dynamically by looking at the guest's total resources
+        logfile = os.path.join(self.resultsdir, 'ebizzy.log')
+        args2 = '-m -n %s -P -R -s %s -S %s -t %s' % (num_chunks, chunk_size, seconds, num_threads)
+        args = args + ' ' + args2
+
+        cmd = os.path.join(self.srcdir, 'ebizzy') + ' ' + args
+        utils.system(cmd)
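The TODO in the patch, sizing the run from the guest's resources instead of hard-coded defaults, could start from something like the sketch below. This is hypothetical helper code, not part of the patch: the function names, the 50%-of-memory ratio, and the two-threads-per-CPU rule are arbitrary assumptions.

```python
def read_mem_total_kb(meminfo_path='/proc/meminfo'):
    """Parse the 'MemTotal:  16307732 kB' line out of /proc/meminfo."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith('MemTotal:'):
                return int(line.split()[1])
    raise ValueError('MemTotal not found in %s' % meminfo_path)

def pick_ebizzy_params(mem_total_kb, cpu_count, mem_fraction=0.5,
                       chunk_size=512000):
    """Scale ebizzy's workload to the machine: spread mem_fraction of
    total memory over chunk_size-byte chunks (-n/-s) and run two
    threads per CPU (-t). The ratios are arbitrary starting points."""
    usable_bytes = int(mem_total_kb * 1024 * mem_fraction)
    num_chunks = max(1, usable_bytes // chunk_size)
    num_threads = max(1, cpu_count * 2)
    return num_chunks, chunk_size, num_threads
```

run_once() could then default its arguments from `pick_ebizzy_params(read_mem_total_kb(), os.cpu_count())` rather than fixed constants.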


-- 
Sudhir Kumar


[autotest] [PATCH 2/6] add-ebizzy-control-file-in-kvm-test under autotest

2009-07-08 Thread sudhir kumar
This patch adds the control file for ebizzy test to be executed
under kvm test.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/autotest_control/ebizzy.control
===
--- /dev/null
+++ autotest/client/tests/kvm/autotest_control/ebizzy.control
@@ -0,0 +1,11 @@
+NAME = "ebizzy"
+AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
+TIME = "MEDIUM, VARIABLE"
+TEST_CATEGORY = "FUNCTIONAL"
+TEST_CLASS = "SYSTEM STRESS"
+TEST_TYPE = "CLIENT"
+DOC = """
+http://sourceforge.net/project/platformdownload.php?group_id=202378&sel_platform=3809
+"""
+
+job.run_test('ebizzy', args = '-vv')


-- 
Sudhir Kumar


[autotest] [PATCH 3/6] update-stress-test-in-autotest

2009-07-08 Thread sudhir kumar
this patch updates the stress test to the latest version 1.0.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/stress/stress.py
===
--- autotest.orig/client/tests/stress/stress.py
+++ autotest/client/tests/stress/stress.py
@@ -9,8 +9,8 @@ class stress(test.test):
         self.job.require_gcc()
 
 
-    # http://weather.ou.edu/~apw/projects/stress/stress-0.18.8.tar.gz
-    def setup(self, tarball = 'stress-0.18.8.tar.gz'):
+    # http://weather.ou.edu/~apw/projects/stress/stress-1.0.0.tar.gz
+    def setup(self, tarball = 'stress-1.0.0.tar.gz'):
         tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
         utils.extract_tarball_to_dir(tarball, self.srcdir)
         os.chdir(self.srcdir)


-- 
Sudhir Kumar


[autotest] [PATCH 4/6] add-stress-test-control-file-in-kvmtest

2009-07-08 Thread sudhir kumar
This patch adds the control file for running stress test under
kvm guests.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/autotest_control/stress.control
===
--- /dev/null
+++ autotest/client/tests/kvm/autotest_control/stress.control
@@ -0,0 +1,14 @@
+NAME='Stress'
+AUTHOR='Sudhir Kumar sku...@linux.vnet.ibm.com'
+EXPERIMENTAL='True'
+TEST_TYPE='client'
+TIME='MEDIUM'
+TEST_CATEGORY='Functional'
+TEST_CLASS='Software'
+DOC='''\
+stress is not a benchmark, but is rather a tool designed to put given subsystems
+under a specified load. Instances in which this is useful include
those in which
+a system administrator wishes to perform tuning activities, a kernel or libc
+programmer wishes to evaluate denial of service possibilities, etc.
+'''
+job.run_test('stress')
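The load knobs the DOC string alludes to can be driven from a wrapper. The sketch below builds a stress(1) command line; `--cpu`, `--vm`, `--vm-bytes` and `--timeout` are standard stress 1.0 options, but the helper itself is a hypothetical illustration, not part of this control file.

```python
def build_stress_cmd(cpus, vm_workers, vm_bytes, timeout_s):
    """Compose a stress(1) invocation: `cpus` busy-loop workers plus
    `vm_workers` malloc/touch workers of `vm_bytes` each, stopping
    after timeout_s seconds."""
    return ('stress --cpu %d --vm %d --vm-bytes %s --timeout %ds'
            % (cpus, vm_workers, vm_bytes, timeout_s))
```

For example, `build_stress_cmd(4, 2, '256M', 60)` yields a one-minute run with four CPU spinners and two 256 MB memory hogs.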


-- 
Sudhir Kumar


[autotest] [PATCH 5/6] add-disktest-control-file-in-kvmtest

2009-07-08 Thread sudhir kumar
This patch adds the disktest control file for the test to be executed under
kvm tests.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/autotest_control/disktest.control
===
--- /dev/null
+++ autotest/client/tests/kvm/autotest_control/disktest.control
@@ -0,0 +1,15 @@
+AUTHOR = 'Sudhir Kumar sku...@linux.vnet.ibm.com'
+NAME = 'Disktest'
+DOC = '''\
+Pattern test of the disk, using unique signatures for each block and each
+iteration of the test. Designed to check for data corruption issues in the
+disk and disk controller.
+
+It writes 50MB/s of 500KB size ops.
+'''
+TIME = 'MEDIUM'
+TEST_CATEGORY = 'Kernel'
+TEST_TYPE = 'client'
+TEST_CLASS = 'Hardware'
+
+job.run_test('disktest')


-- 
Sudhir Kumar


[autotest] [PATCH 6/6] add-disktest-stress-ebizzy-entry-in-kvm-sample-config

2009-07-08 Thread sudhir kumar
This patch adds the test entries in the sample config file for kvm
test execution.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/kvm_tests.cfg.sample
===
--- autotest.orig/client/tests/kvm/kvm_tests.cfg.sample
+++ autotest/client/tests/kvm/kvm_tests.cfg.sample
@@ -79,6 +79,15 @@ variants:
     - bonnie:
             test_name = bonnie
             test_control_file = bonnie.control
+    - ebizzy:
+            test_name = ebizzy
+            test_control_file = ebizzy.control
+    - stress:
+            test_name = stress
+            test_control_file = stress.control
+    - disktest:
+            test_name = disktest
+            test_control_file = disktest.control
 
     - linux_s3:  install setup
         type = linux_s3


-- 
Sudhir Kumar


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-08 Thread sudhir kumar
On Wed, Jul 8, 2009 at 2:55 PM, Dor Laordl...@redhat.com wrote:
 On 07/08/2009 07:40 AM, Martin Bligh wrote:

 ATM I will suggest merging the patches in and letting them get tested so that
 we can collect failures/breakages, if any.

 I am not keen on causing regressions, which we've risked doing every
 time we change LTP. I think we at least need to get a run on a
 non-virtualized
 machine with some recent kernel, and exclude the tests that fail every
 time.

 We can use the reported results (impressive) as a base.
 When more regressions are introduced, we can chop more tests

Sure, I will soon send the revised patch. Thanks everyone for your thoughts!!




-- 
Sudhir Kumar


[autotest] [PATCH] add control file for kernbench under kvm

2009-07-08 Thread sudhir kumar
This patch adds the control file for kernbench to be executed under
kvm tests.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/autotest_control/kernbench.control
===
--- /dev/null
+++ autotest/client/tests/kvm/autotest_control/kernbench.control
@@ -0,0 +1,12 @@
+AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
+NAME = "Kernbench"
+TIME = "SHORT"
+TEST_CLASS = "Kernel"
+TEST_CATEGORY = "Benchmark"
+TEST_TYPE = "client"
+
+DOC = """
+A standard CPU benchmark. Runs a kernel compile and measures the performance.
+"""
+
+job.run_test('kernbench')


-- 
Sudhir Kumar


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-07 Thread sudhir kumar
On Tue, Jul 7, 2009 at 12:07 AM, Martin Blighmbl...@google.com wrote:
 Issues: LTP has a history of some of the testcases getting broken.

 Right, that's always the concern with doing this.

 Anyways
 that has nothing to worry about with respect to autotest. One of the known 
 issue
 is broken memory controller issue with latest kernels(cgroups and memory
 resource controller enabled kernels). The workaround for them I use is to
 disable or delete those tests from ltp source and tar it again with the same
 name. Though people might use different workarounds for it.

 OK, Can we encapsulate this into the wrapper though, rather than making
 people do it manually? in the existing ltp.patch or something?

Definitely we can do that, but that needs knowledge of all the corner
cases of failure. So maybe we can continue enhancing the patch as per
the failure reports on different OSes.

One more thing I wanted to start a discussion about on the LTP mailing list is
making the testcase aware of whether it is running on a physical host or on a
guest (say a KVM guest). Testcases like power management, group
scheduling fairness etc. do not make much sense to run on a guest (as
they will fail or break). So it is better for the test to recognise
the environment and not execute if it is under virtualization and it
is supposed to fail or break under that environment. Does that make
sense to you also?
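The environment check proposed here can be approximated on x86: guests expose the `hypervisor` CPUID bit in the flags line of /proc/cpuinfo. The sketch below is a rough illustration with names of our own choosing, and the bit is only visible under hypervisors that advertise it, so it is a heuristic rather than a guarantee.

```python
def running_under_hypervisor(cpuinfo_text):
    """True if any CPU advertises the 'hypervisor' CPUID bit, which
    x86 guests expose in the flags line of /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            _, _, flags = line.partition(':')
            if 'hypervisor' in flags.split():
                return True
    return False

def should_skip_on_guest(test_name, virt_unsafe_tests):
    """Skip a virtualization-unsafe test when we appear to be a guest."""
    with open('/proc/cpuinfo') as f:
        virtualized = running_under_hypervisor(f.read())
    return virtualized and test_name in virt_unsafe_tests
```

A test harness could consult `should_skip_on_guest()` before launching the power-management or scheduler-fairness cases mentioned above.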



-- 
Sudhir Kumar


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-07 Thread sudhir kumar
OK then. My idea is to include the patch in autotest and let
people report failures (in compilation or execution), and we can patch
autotest to apply the fix patch and build and run LTP. I do not think
we can find all cases unless and until we start execution.

However, I will start the discussion on the LTP list and see the
response from people. At the very least we can get the new testcases to be
aware of virtualization.

On Tue, Jul 7, 2009 at 11:15 PM, Martin Blighmbl...@google.com wrote:
 On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumarsmalik...@gmail.com wrote:
 On Tue, Jul 7, 2009 at 12:07 AM, Martin Blighmbl...@google.com wrote:
 Issues: LTP has a history of some of the testcases getting broken.

 Right, that's always the concern with doing this.

 Anyways, that is nothing to worry about with respect to autotest. One known
 issue is the broken memory controller tests with latest kernels (cgroups and
 memory resource controller enabled kernels). The workaround I use for them is
 to disable or delete those tests from the LTP source and tar it again with
 the same name, though people might use different workarounds.

 OK, Can we encapsulate this into the wrapper though, rather than making
 people do it manually? in the existing ltp.patch or something?

 definitely we can do that, but that needs to know about all the corner
 cases of failure. So may be we can continue enhancing the patch as per
 the failure reports on different OSes.

 1 more thing I wanted to start a discussion on LTP mailing list is to
 make aware the testcase if it is running on a physical host or on a
 guest(say KVM guest). Testcases like power management, group
 scheduling fairness etc do not make much sense to run on a guest(as
 they will fail or break). So It is better for the test to recognise
 the environment and not execute if it is under virtualization and it
 is supposed to fail or break under that environment. Does that make
 sense to you also ?

 Yup, we can pass an excluded test list. I really wish they'd fix their
 tests, but I've been saying that for 6 years now, and it hasn't happened
 yet ;-(
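The excluded-test list mentioned here maps naturally onto runltp's skip-file mechanism (`-S FILE`, one test name per line, supported by LTP of this era, though you should verify against your LTP version). The wrapper-side helper and the example test names below are assumptions for illustration.

```python
import os

def add_skip_args(runltp_args, skip_tests, dest_dir):
    """Write one excluded test name per line and pass the file to
    runltp via its -S (skip file) option."""
    if not skip_tests:
        return runltp_args
    skipfile = os.path.join(dest_dir, 'skipfile')
    with open(skipfile, 'w') as f:
        f.write('\n'.join(skip_tests) + '\n')
    return runltp_args + ' -S ' + skipfile
```

Keeping the exclusion list in the wrapper (or a config file) avoids re-packaging the LTP tarball every time a testcase breaks.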




-- 
Sudhir Kumar


Re: [Autotest] [AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-07 Thread sudhir kumar
On Tue, Jul 7, 2009 at 9:01 PM, Lucas Meneghel Rodriguesl...@redhat.com wrote:
 On Tue, Jul 7, 2009 at 4:24 AM, sudhir kumarsmalik...@gmail.com wrote:
 OK, Can we encapsulate this into the wrapper though, rather than making
 people do it manually? in the existing ltp.patch or something?

 definitely we can do that, but that needs to know about all the corner
 cases of failure. So may be we can continue enhancing the patch as per
 the failure reports on different OSes.

 For the most immediate needs, we could try building LTP with make -k.
 A plain re-package of LTP kinda goes against our own rules. The
 preferred way to do testsuite modifications is patching before the
 execution. So let's strive to use the approach 'upstream package
 unmodified, patch if needed'. That's how distro packages do it, and it makes
 sense for us too.
I will request you to merge the patches if they do not appear to need
any major changes at this early stage. Let people see the failures and
we can quickly patch autotest to fix them. I can volunteer to look into
the LTP issues reported by people or found by me.

 1 more thing I wanted to start a discussion on LTP mailing list is to
 make aware the testcase if it is running on a physical host or on a
 guest(say KVM guest). Testcases like power management, group
 scheduling fairness etc do not make much sense to run on a guest(as
 they will fail or break). So It is better for the test to recognise
 the environment and not execute if it is under virtualization and it
 is supposed to fail or break under that environment. Does that make
 sense to you also ?

 We need to make an assessment of what we would expect to see failing
 under a guest. LTP has a fairly large codebase, so it will be a fair
 amount of work.
Yeah, as Martin also points out. At the very least we can expect
the new cases to be virtualization aware. For the existing ones we can
take it forward gradually, maybe catching the test developers
individually :)

ATM I will suggest merging the patches in and letting them get tested so
that we can collect failures/breakages, if any.

 Lucas




-- 
Sudhir Kumar


Re: Missing shutdown.exe in win2k3 R2 Datacentre Guest under ssh

2009-07-07 Thread sudhir kumar
On Sat, Jun 6, 2009 at 6:07 PM, Michael Goldishmgold...@redhat.com wrote:
 Try running cmd.exe (both under SSH and directly) and try running 
 shutdown.exe there.
It does not work. I think there is something wrong with the environment.
 If that works, the problem is probably somewhere in the Cygwin configuration, 
 so it might help to try an SSH server that doesn't use Cygwin.
So I am not sure whether Cygwin is the issue. Maybe Windows security is
the reason here?

 (We're currently considering the option of writing our own simple remote 
 command shell server because we've had various problems with Cygwin and other 
 SSH servers.)
What is the status on that? I would love to test one if any.


 Note that it's possible to test guests without shutdown.exe in KVM-Autotest 
 -- Win2000 doesn't have shutdown.exe so we don't run the reboot test on it. 
 But obviously it's much better to have it.

Thanks for your response (and sorry for the delayed reply).
 - Original Message -
 From: sudhir kumar smalik...@gmail.com
 To: kvm-devel kvm@vger.kernel.org
 Cc: Uri Lublin u...@redhat.com, Michael Goldish mgold...@redhat.com, 
 David Huff dh...@redhat.com, Lucas Meneghel Rodrigues 
 mrodr...@redhat.com, Ryan Harper ry...@us.ibm.com
 Sent: Saturday, June 6, 2009 1:08:51 PM (GMT+0200) Auto-Detected
 Subject: Re: Missing shutdown.exe in win2k3 R2 Datacentre Guest under ssh

 Does anyone have an idea on the issue below ?

 On Fri, Jun 5, 2009 at 4:16 PM, sudhir kumarsmalik...@gmail.com wrote:
 Hi,
 I recently installed a Windows 2003 R2 Datacentre 64-bit guest under
 kvm. I installed copssh as the SSH server in the guest. When I logged
 into the guest using SSH I found that the shutdown command does not
 exist. I opened the command line and saw that the shutdown.exe file
 exists under C:\WINDOWS\system32 and I can shut down the guest from the
 command line. However, although this path is mounted under Cygwin, I can
 not find shutdown.exe under /cygdrive/c/WINDOWS/system32.

 administra...@ibm-gl4gty5dclb /cygdrive/c/WINDOWS/system32
 $ ls shutdown.exe
 ls: cannot access shutdown.exe: No such file or directory


 Any idea why the file is not visible? This makes the use of
 kvm-autotest impossible for testing this guest. To add to that, Win2k3
 Datacentre does not have a telnet server installed by default, and
 hence I opted for installing an SSH server and using it. I did
 not notice this problem with the 2008 Datacentre and others.
 So is there any solution for it, or any workaround?

 Thanks in advance
 --
 Sudhir Kumar




 --
 Sudhir Kumar




-- 
Sudhir Kumar


[PATCH] Adds the LTP control file under tests/kvm/autotest_control/

2009-07-06 Thread sudhir kumar
I had made this change manually, but it needs a patch, hence sending it.

This patch adds the control file under kvm/autotest_control.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/autotest_control/ltp.control
===
--- /dev/null
+++ autotest/client/tests/kvm/autotest_control/ltp.control
@@ -0,0 +1,13 @@
+NAME = "LTP"
+AUTHOR = "Sudhir Kumar sku...@linux.vnet.ibm.com"
+TIME = "MEDIUM"
+TEST_CATEGORY = "FUNCTIONAL"
+TEST_CLASS = "KERNEL"
+TEST_TYPE = "CLIENT"
+DOC = """
+Linux Test Project: A collection of various functional testsuites
+to test stability and reliability of Linux. For further details see
+http://ltp.sourceforge.net/
+"""
+
+job.run_test('ltp')


-- 
Sudhir Kumar


[PATCH] sample entry in kvm_tests.cfg for LTP

2009-07-06 Thread sudhir kumar
Just for info, the entry looks like this.


Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/kvm_tests.cfg.sample
===
--- autotest.orig/client/tests/kvm/kvm_tests.cfg.sample
+++ autotest/client/tests/kvm/kvm_tests.cfg.sample
@@ -79,6 +79,10 @@ variants:
     - bonnie:
             test_name = bonnie
             test_control_file = bonnie.control
+    - ltp:
+            test_name = ltp
+            test_timeout = 15000
+            test_control_file = ltp.control
 
     - linux_s3:  install setup
         type = linux_s3


-- 
Sudhir Kumar


netperf in autotest

2009-07-06 Thread sudhir kumar
Hi,
In order to include netperf tests in KVM guest runs I have been
trying to run the netperf testsuite in autotest, but I am getting barrier
failures. I want to understand the philosophy behind the implementation of
the autotest wrappers, so I thought of quickly asking on the list.
Here are my questions:

1. There are 3 control files: control.server, control.client and
control.parallel. Which scenario is each file for? Will running
../../autotest/bin control.client with the proper configuration on one
machine (say 9.126.89.168) run netperf completely automatically,
or do I need to run netserver on the other machine (say
9.124.124.82)?
# cat control.client | grep ip
 server_ip='9.124.124.82',
 client_ip='9.126.89.168',

2. What is the purpose of control.parallel ?

3. What is the use of barriers in netperf2.py? Are they mandatory? I
tried to understand them by going through the code, but I still want to
double check.
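On question 3: a barrier is a rendezvous point where every participant blocks in wait() until all participants have arrived or a timeout fires, which is exactly the "timeout waiting for barrier" failure mode. The sketch below illustrates the concept with Python's threading.Barrier; it is conceptual only, since autotest's barrier synchronizes separate hosts over sockets rather than threads in one process.

```python
import threading

# Both sides must reach the barrier before either proceeds; if one side
# never arrives, wait() raises BrokenBarrierError after the timeout --
# the same failure mode as "timeout waiting for barrier: start_1".
start = threading.Barrier(2, timeout=5)
events = []

def netserver_side():
    events.append('server listening')   # e.g. netserver has been started
    start.wait()                        # rendezvous with the client
    events.append('server proceeding')

def netperf_side():
    start.wait()                        # don't start netperf too early
    events.append('client measuring')

threads = [threading.Thread(target=netserver_side),
           threading.Thread(target=netperf_side)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If only one machine's control file ever runs, the other never reaches the barrier, so a timeout like the one reported is the expected symptom.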

The execution of this test using autotest has so far been failing for me
(though a minimal manual execution from the command line passes).
It mainly fails on a barrier due to timeouts: timeout waiting for
barrier: start_1. I tried running
../../bin/autotest client.control on machine A with server_ip set to the
remote machine (B) and client_ip set to this machine's IP (A), and
../../bin/autotest server.control on machine B with server_ip set
to machine B and client_ip set to the remote machine's IP (A).

I want to ensure that I am not doing anything wrong.
Thanks in advance!!

-- 
Sudhir Kumar


[AUTOTEST] [PATCH 1/2] Add latest LTP test in autotest

2009-07-05 Thread sudhir kumar
This patch updates the LTP wrapper in autotest to execute the latest LTP.
At present autotest contains an LTP that is more than 1 year old, and lots
of testcases have been added to LTP within this period. So this patch updates
the wrapper to run the June 2009 release of LTP, which is available at
http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz

Issues: LTP has a history of some of its testcases getting broken. Anyway,
that is nothing to worry about with respect to autotest. One known issue
is the broken memory controller tests with latest kernels (cgroups and memory
resource controller enabled kernels). The workaround I use for them is to
disable or delete those tests from the LTP source and tar it again with the
same name, though people might use different workarounds.

I have added an option which generates a fancy HTML results file. The
run is otherwise left as a default run, as expected.

For autotest users: please untar the results file I am sending, run
cd results/default; firefox results.html, and click ltp_results.html,
which is a symlink to the ltp_results.html generated by LTP.

Please provide your comments, concerns and issues.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/ltp/ltp.py
===
--- autotest.orig/client/tests/ltp/ltp.py
+++ autotest/client/tests/ltp/ltp.py
@@ -23,8 +23,8 @@ class ltp(test.test):
         self.job.require_gcc()
 
 
-    # http://prdownloads.sourceforge.net/ltp/ltp-full-20080229.tgz
-    def setup(self, tarball = 'ltp-full-20080229.tar.bz2'):
+    # http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz
+    def setup(self, tarball = 'ltp-full-20090630.tgz'):
         tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
         utils.extract_tarball_to_dir(tarball, self.srcdir)
         os.chdir(self.srcdir)
@@ -52,8 +52,9 @@ class ltp(test.test):
         # In case the user wants to run another test script
         if script == 'runltp':
             logfile = os.path.join(self.resultsdir, 'ltp.log')
+            htmlfile = os.path.join(self.resultsdir, 'ltp_results.html')
             failcmdfile = os.path.join(self.debugdir, 'failcmdfile')
-            args2 = '-q -l %s -C %s -d %s' % (logfile, failcmdfile, self.tmpdir)
+            args2 = '-l %s -g %s -C %s -d %s' % (logfile, htmlfile, failcmdfile, self.tmpdir)
             args = args + ' ' + args2
 
             cmd = os.path.join(self.srcdir, script) + ' ' + args




-- 
Sudhir Kumar


[AUTOTEST] [PATCH 2/2] View LTP execution results under kvm's results.html file

2009-07-05 Thread sudhir kumar
This patch creates a link to the HTML results file generated by the test
under autotest. This is specific to the KVM part only. The assumption made is
that the file is named test_name_results.html and is located under the
test_name/results/ directory. This helps in quickly viewing the test results.

The attached tar file contains the full results directory. The results.html
file points to ltp_results.html, which looks quite fancy.

Please have a look at the results and the patch and provide your comments.

Signed-off-by: Sudhir Kumar sku...@linux.vnet.ibm.com

Index: autotest/client/tests/kvm/kvm_tests.py
===
--- autotest.orig/client/tests/kvm/kvm_tests.py
+++ autotest/client/tests/kvm/kvm_tests.py
@@ -391,6 +391,15 @@ def run_autotest(test, params, env):
     if not vm.scp_from_remote("autotest/results/default/*", guest_results_dir):
         logging.error("Could not copy results back from guest")
 
+    # Some tests create an html file as a result; link it so it can be viewed
+    # under the results statistics. We assume this file is located under the
+    # test_name/results/ directory and named test_name_results.html,
+    # e.g. ltp_results.html, vmmstress_results.html
+    html_file = test_name + "_results.html"
+    html_path = os.path.join(guest_results_dir, test_name, "results", html_file)
+    if os.path.exists(html_path):
+        os.symlink(html_path, os.path.join(test.debugdir, html_file))
+
     # Fail the test if necessary
     if status_fail:
         raise error.TestFail(message_fail)


-- 
Sudhir Kumar


[AUTOTEST] [RESULTS] LTP execution results in the html format

2009-07-05 Thread sudhir kumar
Please untar the file and open results.html to see the test
statistics under the debug link.

tar -xvzf results.tar.gz;
cd results/default;
firefox results.html
click debug
click ltp_results.html

I am sorry for attaching such a big file to the list. Do we have any
common place to put files and link to them in the mail?

-- 
Sudhir Kumar


Re: Weird Windows license issue

2009-07-02 Thread sudhir kumar
Hmm... what key are you using? I did not hit any issue at all. I am
using a volume key; I did multiple installs and never saw such an
issue.

On Fri, Jul 3, 2009 at 4:32 AM, Michael Jinksmichael.ji...@gmail.com wrote:
 On Thu, Jul 2, 2009 at 5:45 PM, Sterling Windmillsterl...@ampx.net wrote:
 What do you mean by rejected? Is the installer not taking your key (I 
 doubt this would be caused by anything KVM specific),

 Right, that.  I don't have the screen in front of me so I might be
 getting the exact word wrong, but it immediately throws back something
 to the effect that the key is invalid.

 Since the license key entry stage happens before Windows tries to
 bring up networking, I don't think that license exhaustion is a likely
 explanation.

 Maybe KVM isn't either (yes, it does strike me as unlikely), but like
 I said in my first post I'm having a hard time finding other
 explanations.

 But anyhow.  If license issues like this one aren't known to occur on
 KVM, there must be something else going on, so I'll try again and look
 elsewhere for the cause of the problem.  Thanks for the info.

 Cheers,
 -j




-- 
Sudhir Kumar


Re: [KVM-AUTOTEST PATCH] Adding iperf test

2009-06-30 Thread sudhir kumar
)
 +        session.sendline('')
 +        logging.info("starting iPerf client on Windows VM, connecting to host")
 +        session.sendline('C:\iperf -t %s -c %s -P %s' % (int(iperf_duration),
 +                                                         iperf_dest_ip,
 +                                                         int(iperf_parallel_threads)))
 +    else:
 +        logging.info('starting copying %s to Linux VM ' % iperf_binary)
 +        if not vm.scp_to_remote(iperf_binary, "/usr/local/bin"):
 +            message = "Could not copy Linux iPerf to guest"
 +            logging.error(message)
 +            raise error.TestError, message
 +        print "starting iPerf client on VM, connecting to host"
 +        session.sendline('iperf -t %s -c %s -P %s' % (int(iperf_duration),
 +                                                      iperf_dest_ip,
 +                                                      int(iperf_parallel_threads)))
 +
 +    # Analyzing results
 +    iperf_result_match, iperf_result = session.read_up_to_prompt()
 +    logging.debug("iperf_result = %s", iperf_result)
 +
 +    if iperf_result.__contains__(" 0.00 bits/sec"):
 +        msg = 'Guest returned 0.00 bits/sec during iperf test.'
 +        raise error.TestError(msg)
 +    elif iperf_result.__contains__("No route to host"):
 +        msg = 'SSH to guest returned: No route to host.'
 +        raise error.TestError(msg)
 +    elif iperf_result.__contains__("Access is denied"):
 +        msg = 'SSH to guest returned: Access is denied.'
 +        raise error.TestError(msg)
 +    elif not iperf_result.__contains__("bits/sec"):
 +        msg = 'SSH result unrecognizable.'
 +        raise error.TestError(msg)
 +
 +    session.close()
 diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
 b/client/tests/kvm/kvm_tests.cfg.sample
 index 2c0b321..931f748 100644
 --- a/client/tests/kvm/kvm_tests.cfg.sample
 +++ b/client/tests/kvm/kvm_tests.cfg.sample
 @@ -82,6 +82,10 @@ variants:
     - linux_s3:      install setup
         type = linux_s3

 +    - iperf:        install setup
 +        type = iperf
 +        extra_params += " -snapshot"
 +
  # NICs
  variants:
     - @rtl8139:
 @@ -102,6 +106,8 @@ variants:
         ssh_status_test_command = echo $?
         username = root
         password = 123456
 +        iperf:
 +          iperf_binary = misc/iperf

         variants:
             - Fedora:
 @@ -292,6 +298,8 @@ variants:
         password = 123456
         migrate:
             migration_test_command = ver  vol
 +        iperf:
 +            iperf_binary = misc/iperf.exe

         variants:
             - Win2000:
 --
 1.6.2.2





-- 
Sudhir Kumar


Re: virtio_blk fails for rhel5.3 guest with LVM: kernel panic

2009-06-24 Thread sudhir kumar
The initrd does not contain these modules; my kernel contains them.
Is it mandatory for the initrd to contain these modules? It does not
contain virtio-net either, and the guest boots fine with a virtio NIC.
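
If the initrd does turn out to need them, something like the following should rebuild it with the virtio drivers included (assuming RHEL 5's mkinitrd; the kernel version here is illustrative and must be adjusted to match the guest's):

```shell
# Rebuild the guest initrd so it carries the virtio block drivers
# (run inside the guest as root; substitute the guest's kernel version)
mkinitrd --with=virtio_pci --with=virtio_blk \
    -f /boot/initrd-2.6.18-128.el5PAE.img 2.6.18-128.el5PAE
```

A network driver is not needed at this stage, which would explain why virtio-net works without being in the initrd: the root filesystem must be mountable before userspace starts, while networking comes up later.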

On Wed, Jun 24, 2009 at 2:04 PM, Avi Kivitya...@redhat.com wrote:
 On 06/23/2009 08:21 PM, sudhir kumar wrote:

 I see that Rhel5.3 32 and 64 bit guests fail to boot with virtio for
 block device. The
 guest can not find the root filesystem. The guest is using LVM. As per
 the linux-kvm wiki  instructions I did change device.map and booted
 with if=virtio.
 The wiki link is
 http://www.linux-kvm.org/page/Boot_from_virtio_block_device



 Does your initrd include virtio-pci.ko and virtio-blk.ko?

 --
 error compiling committee.c: too many arguments to function





-- 
Sudhir Kumar


virtio_blk fails for rhel5.3 guest with LVM: kernel panic

2009-06-23 Thread sudhir kumar
I see that Rhel5.3 32 and 64 bit guests fail to boot with virtio for
block device. The
guest can not find the root filesystem. The guest is using LVM. As per
the linux-kvm wiki  instructions I did change device.map and booted
with if=virtio.
The wiki link is http://www.linux-kvm.org/page/Boot_from_virtio_block_device

qemu command:
# qemu-system-x86_64 -drive file=rhel5-32.raw,if=virtio,boot=on -m 2048 -smp 4
-net nic,model=rtl8139,macaddr=00:FF:FE:00:00:64 -net
tap,script=/root/qemu-ifup-latest -name 32virtio -vnc :2 -watchdog i6300esb
-watchdog-action reset 
The image was installed using if=ide.

The messages before the panic show that the guest was unable to mount
the root file system, as below:
Volume group Volgroup00 not found
mount: could not find filesystem /dev/root
setuproot: moving /dev failed: no such file or directory
setuproot: mounting /proc failed: no such file or directory
setuproot: mounting /sys failed: no such file or directory
switchroot: mount failed: no such file or directory
Kernel panic - not syncing: Attempted to kill init

I see similar messages for 64 bit guest as well.

This is the information from within guest:
[r...@ichigo-dom101 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
[r...@ichigo-dom101 ~]# fdisk -l

Disk /dev/hda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1305    10377990   8e  Linux LVM
[r...@ichigo-dom101 ~]# cat /boot/grub/device.map
# this device map was generated by anaconda
(hd0) /dev/vda

[r...@ichigo-dom101 ~]# cat /boot/grub/menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#  all kernel and initrd paths are relative to /boot/, eg.
#  root (hd0,0)
#  kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#  initrd /initrd-version.img
#boot=/dev/hda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-128.el5PAE)
root (hd0,0)
kernel /vmlinuz-2.6.18-128.el5PAE ro root=/dev/VolGroup00/LogVol00 rhgb
quiet
initrd /initrd-2.6.18-128.el5PAE.img

I ensured that the guest has the virtio_blk.ko module.

rpm -qa | grep kvm:  qemu-kvm-0.10.5-6

Guest OS: RHEL 5.3 32/64

Host OS: SlES11 64

Guest OS Image storage type: nfs


Is there anything that I am not doing in the correct way?
Please let me know if any other information is required.

Thanks
Sudhir


-- 
Sudhir Kumar


Re: Latest kvm autotest patch queue

2009-06-22 Thread sudhir kumar
On Mon, Jun 22, 2009 at 6:54 PM, Lucas Meneghel Rodriguesl...@redhat.com 
wrote:
 KVM Autotest patch queue

 
 Michael Goldish (mgold...@redhat.com)

 1) KVM test: optionally convert PPM files to PNG format after test - Reviewed

 2) Step file tests: introducing a new feature and some small changes - 
 Reviewed

 3) Introducing kvm_subprocess - Reviewed

 
 Jason Wang (jasow...@redhat.com)

 1) TAP network support in kvm-autotest

 Michael Goldish has a similar patch to this one, so we need to decide which 
 one
I remember Michael's patch using nmap to get the IP of the guest. I
tested that patch and found nmap taking too long to do a lookup and
return the guest IP. I suggest two things here:
1. Tune the nmap options to make the lookup considerably faster.
2. Provide an option to use a dictionary-based (MAC address to IP
mapping) lookup in the config file and return the mapped IP for a given
MAC address.

I can send a patch for option 2 on top of Michael's patch. In its
simplest form it will build a dictionary from a MAC address vs. IP
address list (provided by the DHCP server in your lab/environment; you
can put this list in a file and use it). You then specify the macaddr
parameter in the config file from this list, and the parser looks up
the MAC address and returns the corresponding IP from the generated
dictionary.
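
A minimal sketch of option 2, assuming the lab's DHCP data is dumped into a plain text file with one "MAC IP" pair per line (the file format and function names here are illustrative, not part of any posted patch):

```python
def build_mac_ip_map(lease_file):
    """Parse a text file of 'MAC IP' pairs into a lookup dictionary."""
    mac_map = {}
    with open(lease_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                # Normalize the MAC so lookups are case-insensitive
                mac_map[parts[0].lower()] = parts[1]
    return mac_map


def get_guest_ip(mac_map, macaddr):
    """Return the IP mapped to the given MAC address, or None if unknown."""
    return mac_map.get(macaddr.lower())
```

Unlike an nmap sweep, this is a constant-time lookup, at the cost of requiring the MAC-to-IP list to be kept up to date.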

Please let me know if option 2 sounds good. Also, I think I missed
looking at Jason's patch.
 is going to be used. Michael, when possible, please send your patch to support
 TAP network on the kvm test.

 
 David Huff (dh...@redhat.com)

 1) Unattended installs - Needs review

 Reviewed. Changes made to the unattended script, would like to create a python
 module to handle the unattended install and study the possibility of doing 
 things
 with cobbler.

 2) Move kvm functional tests to a 'test' directory

 http://kerneltrap.org/mailarchive/linux-kvm/2009/5/26/5812453

 I have reworked this patch and made my own version. I need to split it into
 smaller patches though.

 Needs review

 
 Yogi (anant...@linux.vnet.ibm.com)

 1) Support for remote migration

 http://kerneltrap.org/mailarchive/linux-kvm/2009/4/30/5607344/thread

 Rebased the patch. However, the patch review and discussion shows that we want
 to implement remote migration as a server side test instead of a client side
 one. The rebase is just in case we need this in the future.

 Probably not going to apply, look at a different way to implement the test.

 
 Alexey Eromenko (aerom...@redhat.com)

 1) New test module: iperf

 http://kerneltrap.org/mailarchive/linux-kvm/2009/5/31/5840973/thread

 Rebase made, made some comments to it.
  * The idea of shipping binaries doesn't look very appealing to me
  * Autotest already has an iperf test, might be worth a look for linux guests

 Pending some issues during review.

 






-- 
Sudhir Kumar


Re: Missing shutdown.exe in win2k3 R2 Datacentre Guest under ssh

2009-06-06 Thread sudhir kumar
Does anyone have an idea on the issue below ?

On Fri, Jun 5, 2009 at 4:16 PM, sudhir kumarsmalik...@gmail.com wrote:
 Hi,
 I recently installed a Windows 2003 R2 datacentre 64 bit guest under
 kvm. I installed copssh as the ssh server in the guest. When I logged
 into the guest using ssh, I found that the shutdown command does not
 exist. I opened the command line and saw that shutdown.exe exists
 under C:\WINDOWS\system32, and I can shut down the guest from the
 command line. However, although this path is mounted under cygwin, I
 cannot find shutdown.exe under /cygdrive/c/WINDOWS/system32.

 administra...@ibm-gl4gty5dclb /cygdrive/c/WINDOWS/system32
 $ ls shutdown.exe
 ls: cannot access shutdown.exe: No such file or directory


 Any idea why the file is not visible? This makes it impossible to use
 kvm-autotest to test this guest. On top of that, Win2k3 Datacentre
 does not have a telnet server installed by default, hence I opted to
 install an ssh server and use it. I did not notice this problem with
 the 2008 Datacentre and others.
 So is there any solution or workaround for it?

 Thanks in advance
 --
 Sudhir Kumar




-- 
Sudhir Kumar


Missing shutdown.exe in win2k3 R2 Datacentre Guest under ssh

2009-06-05 Thread sudhir kumar
Hi,
I recently installed a Windows 2003 R2 datacentre 64 bit guest under
kvm. I installed copssh as the ssh server in the guest. When I logged
into the guest using ssh, I found that the shutdown command does not
exist. I opened the command line and saw that shutdown.exe exists
under C:\WINDOWS\system32, and I can shut down the guest from the
command line. However, although this path is mounted under cygwin, I
cannot find shutdown.exe under /cygdrive/c/WINDOWS/system32.

administra...@ibm-gl4gty5dclb /cygdrive/c/WINDOWS/system32
$ ls shutdown.exe
ls: cannot access shutdown.exe: No such file or directory


Any idea why the file is not visible? This makes it impossible to use
kvm-autotest to test this guest. On top of that, Win2k3 Datacentre
does not have a telnet server installed by default, hence I opted to
install an ssh server and use it. I did not notice this problem with
the 2008 Datacentre and others.
So is there any solution or workaround for it?

Thanks in advance
-- 
Sudhir Kumar


Re: [PATCH][KVM-AUTOTEST] TAP network support in kvm-autotest

2009-06-02 Thread sudhir kumar
:
 +                    qemu_cmd += ",script=/etc/qemu-ifup"

 Why not just leave 'script' out if the user doesn't specify 'ifup'?
 There's no good reason to prefer /etc/qemu-ifup to /etc/kvm-ifup or
 anything else, so I think it's best to leave it up to qemu if the
 user has no preference. It's also slightly shorter.

 +                ifdown = nic_params.get("ifdown")
 +                if ifdown:
 +                    qemu_cmd += ",downscript=%s" % ifdown
 +                else:
 +                    qemu_cmd += ",downscript=no"

 The same applies here.

 This is just my opinion; I'd like to hear your thoughts on this too.

 +            else:
 +                qemu_cmd += " -net user,vlan=%d" % vlan
              vlan += 1

          mem = params.get("mem")
 @@ -206,11 +226,11 @@ class VM:
          extra_params = params.get("extra_params")
          if extra_params:
              qemu_cmd += " %s" % extra_params
 -
 +
          for redir_name in kvm_utils.get_sub_dict_names(params, "redirs"):
              redir_params = kvm_utils.get_sub_dict(params, redir_name)
              guest_port = int(redir_params.get("guest_port"))
 -            host_port = self.get_port(guest_port)
 +            host_port = self.get_port(guest_port, True)
              qemu_cmd += " -redir tcp:%s::%s" % (host_port, guest_port)

          if params.get("display") == "vnc":
 @@ -467,27 +487,57 @@ class VM:
          """
          If port redirection is used, return 'localhost' (the guest has no IP
          address of its own).  Otherwise return the guest's IP address.
          """
 -        # Currently redirection is always used, so return 'localhost'
 -        return "localhost"
 +        if self.params.get("network") == "bridge":
 +            # probing ip address through arp
 +            bridge_name = self.params['bridge']
 +            macaddr = self.macaddr[0]

 I think VM.get_address() should take an index parameter, instead of
 just return the first address. The index parameter can default to 0.

 +            lines = os.popen("arp -a").readlines()
 +            for line in lines:
 +                if macaddr in line:
 +                    return line.split()[1].strip('()')
 +
 +            # probing ip address through nmap
 +            lines = os.popen("ip route").readlines()
 +            bridge_network = None
 +            for line in lines:
 +                if bridge_name in line:
 +                    bridge_network = line.split()[0]
 +                    break
 +
 +            if bridge_network != None:
 +                lines = os.popen("nmap -sP -n %s" % bridge_network).readlines()
 +                lastline = None
 +                for line in lines:
 +                    if macaddr in line:
 +                        return lastline.split()[1]
 +                    lastline = line
 +
 +            # could not find ip address
 +            return None
 +        else:
 +            return "localhost"

 -    def get_port(self, port):
 +    def get_port(self, port, query = False):
          """
          Return the port in host space corresponding to port in guest space.

          If port redirection is used, return the host port redirected to
          guest port port.  Otherwise return port.
          """
 -        # Currently redirection is always used, so use the redirs dict
 -        if self.redirs.has_key(port):
 -            return self.redirs[port]
 +
 +        if query == True or self.params.get("network") != "bridge":

 Why do we need a 'query' parameter here? It looks to me like
 self.params.get(network) != bridge
 should suffice.

 +            if self.redirs.has_key(port):
 +                return self.redirs[port]
 +            else:
 +                kvm_log.debug("Warning: guest port %s requested but not redirected" % port)
 +                return None
          else:
 -            kvm_log.debug("Warning: guest port %s requested but not redirected" % port)
 -            return None
 +            return port

      def is_sshd_running(self, timeout=10):
          """Return True iff the guest's SSH port is responsive."""
          address = self.get_address()
          port = self.get_port(int(self.params.get("ssh_port")))
 -        if not port:
 +        if not port or not address:
              return False
          return kvm_utils.is_sshd_running(address, port, timeout=timeout)

 Again, I will do my best to post my patches soon.

 Thanks,
 Michael




-- 
Sudhir Kumar


Re: Fwd: kvm-autotest: False PASS results

2009-06-02 Thread sudhir kumar
On Mon, Jun 1, 2009 at 8:33 PM, Uri Lublin u...@redhat.com wrote:
 On 05/10/2009 08:15 PM, sudhir kumar wrote:

 Hi Uri,
 Any comments?


 -- Forwarded message --
 From: sudhir kumarsmalik...@gmail.com

 The kvm-autotest shows the following PASS results for migration,
 while the VM was crashed and test should have failed.

 Here is the sequence of test commands and results grepped from
 kvm-autotest output.


 /root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/qemu
 -name 'vm1' -monitor
 unix:/tmp/monitor-20090508-055624-QSuS,server,nowait -drive

 file=/root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/images/rhel5-32.raw,if=ide,boot=on
 -net nic,vlan=0 -net user,vlan=0 -m 8192
 -smp 4 -redir tcp:5000::22 -vnc :1



 /root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/qemu
 -name 'dst' -monitor
 unix:/tmp/monitor-20090508-055625-iamW,server,nowait -drive

 file=/root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/images/rhel5-32.raw,if=ide,boot=on
 -net nic,vlan=0 -net user,vlan=0 -m 8192
 -smp 4 -redir tcp:5001::22 -vnc :2 -incoming tcp:0:5200



 2009-05-08 05:58:43,471 Configuring logger for client level
                GOOD
 kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.1
        END GOOD
 kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.1

                GOOD
 kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
 kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
 timestamp=1241762371
 localtime=May 08 05:59:31       completed successfully
 Persistent state variable __group_level now set to 1
        END GOOD
 kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
 kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
 timestamp=1241762371
 localtime=May 08 05:59:31

  From the test output it looks like the test successfully logged
 into the guest after migration:

 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Migration
 finished successfully
 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 send_monitor_cmd: Sending monitor command: screendump

 /root/sudhir/regression/test/kvm-autotest-phx/client/results/default/kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2/debug/migration_post.ppm
 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 send_monitor_cmd: Sending monitor command: screendump

 /root/sudhir/regression/test/kvm-autotest-phx/client/results/default/kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2/debug/migration_pre.ppm
 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 send_monitor_cmd: Sending monitor command: quit
 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 is_sshd_running: Timeout
 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Logging into
 guest after migration...
 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 remote_login: Trying to login...
 20090508-055927 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 remote_login: Got 'Are you sure...'; sending 'yes'
 20090508-055927 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 remote_login: Got password prompt; sending '123456'
 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 remote_login: Got shell prompt -- logged in
 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Logged in
 after migration
 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 get_command_status_output: Sending command: help
 20090508-055930 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 postprocess_vm: Postprocessing VM 'vm1'...
 20090508-055930 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
 postprocess_vm: VM object found in environment

 When I connected over VNC, the final migrated VM had crashed with a
 call trace as shown in the attachment. It is quite unlikely that the
 call trace appeared only after the test finished, as migration with
 more than 4GB of memory is already broken [BUG 52527]. This looks like
 a false PASS to me. Any idea how we can handle such false positive
 results? Shall we wait for some time after migration, log into the VM,
 do some work or run some good test, get the output, and only then
 report that the VM is alive?



 I don't think it's a False PASS.
 It seems the test was able to ssh into the guest, and run a command on the
 guest.

 Currently we only run migration once (round-trip). I think we should run
 migration more than once (using iterations). If the guest crashes due to
 migration, it would fail following rounds of migration.
Also, I would like to have some scripts like basic_test.py which would
be executed inside the guest to check its health more thoroughly.
Though this will again need different scripts for Windows/Linux/Mac
etc. Do you agree?
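
A rough sketch of what such a health check could look like, assuming a kvm_utils-style session object exposing a get_command_status() method (the function name and default command list are illustrative, not existing code):

```python
def basic_guest_health_check(session, commands=None, timeout=60):
    """Run a few harmless commands in the guest; report False on any failure.

    `session` is any object with get_command_status(cmd, timeout) returning
    the command's exit status, as the kvm test session objects do.
    """
    if commands is None:
        # Linux defaults; a Windows guest would need its own list
        commands = ["uptime", "df", "dmesg | tail -n 5"]
    for cmd in commands:
        status = session.get_command_status(cmd, timeout=timeout)
        if status != 0:
            return False  # command failed or timed out: guest looks unhealthy
    return True
```

Run right after migration, this would catch a guest that still answers ssh but whose userspace has already hit a panic or an unresponsive filesystem.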


 Sorry for the late reply,
Its OK. Thanks for the response.
    Uri.




-- 
Sudhir Kumar

Re: [KVM-AUTOTEST] [PATCH] support for remote migration

2009-05-27 Thread sudhir kumar
On Wed, May 27, 2009 at 7:57 PM, Uri Lublin u...@redhat.com wrote:
 sudhir kumar wrote:

 Michael,
 any updates on this patch? Are you going to commit this or you have
 any other plans/patch ?


 I'm not sure having the inter-host migration is best implemented on the
 autotest-client side. Actually this is one of the few tests I think belong
 to the autotest-server side.
This one was a minimal, first-pass implementation. I am not much of an
expert on the server side; I will look into the code and design of the
server.

 On the other hand it is pretty simple to implement it here. So I think we'd
 better try first implementing it as a server test, and apply it to the
 client side only as a second choice.

 A few (minor) problems with it running on the client side:
 1. We do not know what the other host is running (what version?, kvm-modules
 loaded? etc.)
 2. There may be a conflict between a running local guest running on the
 remote (if it allowed to run tests while being a migration destination), and
 the remote guest.
Yes, especially in the case of parallel job execution. This is important to handle.
 3. There may be a conflict between two remote guests running a migration
 test on two different hosts.
So you mean the server would check for this condition if the test were
implemented on the autotest server side?
 4. get_free_ports runs on the local machine, but the ports are
 expected/assumed to be free on the remote machine too.
 5. For a migration to be successful, the image(s) must be shared by both
 hosts. On the other hand, when installing a guest OS (e.g. Fedora 8) on both
 hosts (let's assume they both are running fc8_quick) we want different
 images on different hosts.
This may be true only when the autotest server is running multiple
parallel jobs. It is less likely that one would want to run two
instances of the same installation, but it is still a case to consider.


 These can all be solved easily by fairly simple configuration on the
 server. One can configure the remote migration to run as a separate
 test from all other tests.

I feel virt-manager must also deal with such scenarios. Any idea how
they handle it, or do they just leave it to the user?

 Regards,
    Uri.



Thanks for your comments.



-- 
Sudhir Kumar


Re: kvm_autotest: dynamically load tests

2009-05-27 Thread sudhir kumar
On Wed, May 27, 2009 at 5:44 PM, David Huff dh...@redhat.com wrote:
 Uri Lublin wrote:
 Hi David,

 I'm not sure this patch-set makes development much easier, as it only
 saves adding one line (entry) in test_routines table, but is less
 flexible (e.g. force file-name and function-name of test routines).

 Moving the test to a separate kvm_tests directory can be done without
 dynamic load of the tests.

 I don't have strong feelings about it though.

 What others think ?

 Thanks,
     Uri.


 Uri,

 Thanks for the feedback, Lucas and I had a brief discussion about this
 last week so I just wanted to repost what I already had just as a
 starting point for a discussion on this topic, I know my patches are rough.

 Our thinking is that it would be nice to be able to drop new tests in to
 a kvm_test dir, modify the config file, and not have to update the
 kvm_auottest or autotest binarys.

 However I would also like to hear others opinions on this...
This looks nice to me. As the number of tests increases over time, the
code will stay modular and independent of the test cases. Also, one can
very quickly add a set of new test cases to execute locally, and it
will keep kvm_tests.py short.
We can think the other way too: what are the issues with the new
approach, other than what Uri pointed out?
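
A minimal sketch of the drop-in idea, assuming each test lives in a kvm_tests/<name>.py module exposing a run_<name> function (this layout is illustrative; the actual patches under discussion may differ):

```python
import importlib


def load_test(test_name):
    """Import kvm_tests.<test_name> and return its run_<test_name> entry point.

    Raises ImportError if the module is missing and AttributeError if it
    does not define the expected run_<test_name> function.
    """
    module = importlib.import_module("kvm_tests." + test_name)
    return getattr(module, "run_" + test_name)
```

With this, adding a test means dropping a file into kvm_tests/ and listing it in the config file, with no edit to a central dispatch table.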


 -D




-- 
Sudhir Kumar
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] [KVM_Autotest] Added functionality to the preprocessor to run scripts

2009-05-27 Thread sudhir kumar
On Tue, May 26, 2009 at 9:37 PM, David Huff dh...@redhat.com wrote:
 Michael Goldish wrote:
 Looks good to me. See some comments below.

 - David Huff dh...@redhat.com wrote:

 This patch will run pre and post scripts
 defined in the config file with the parameters pre_command
 and post_command.

 Also exports all the parameters in preprocess for passing
 arguments to the script.

 Why not do this for post_command as well?

 I didn't do post_command because I figured the parameters would already
 be exported for the pre_command; however, there can be a case where
 there is no pre and only a post, and exporting twice will not hurt
 anything. I will add this to the patch.


 ---
  client/tests/kvm_runtest_2/kvm_preprocessing.py |   31
 +-
  1 files changed, 29 insertions(+), 2 deletions(-)

 diff --git a/client/tests/kvm_runtest_2/kvm_preprocessing.py
 b/client/tests/kvm_runtest_2/kvm_preprocessing.py
 index c9eb35d..02df615 100644
 --- a/client/tests/kvm_runtest_2/kvm_preprocessing.py
 +++ b/client/tests/kvm_runtest_2/kvm_preprocessing.py
 @@ -135,8 +135,7 @@ def postprocess_vm(test, params, env, name):
                  "Waiting for VM to kill itself..."):
              kvm_log.debug("'kill_vm' specified; killing VM...")
          vm.destroy(gracefully = params.get("kill_vm_gracefully") == "yes")
 -
 -
 +

 I hate to be petty, but we usually keep two blank lines between top
 level functions.

 Also, you have some trailing whitespace there...

 good catch, I will take care of this

 It wouldn't hurt to make this timeout user-configurable with a default
 value of 600 or so:

 timeout = int(params.get("pre_command_timeout", 600))
 (status, pid, output) = kvm_utils.run_bg(..., timeout=timeout)

 We can also do that in a separate patch.


 I'll go ahead and add this while I rework the patch...
Please ensure that we now have a number of timeout categories
(remote_login timeout, test execution timeout, migration timeout,
etc.). If we want to make the timeout a variable, let us do it for all
the timeouts.
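Reading every timeout category from the config with a sane default could be centralized as in the sketch below. The key names and default values are illustrative assumptions, not actual kvm-autotest parameters.

```python
# Hypothetical defaults for each timeout category; users override any of
# them from the config file.
DEFAULT_TIMEOUTS = {
    "pre_command_timeout": 600,
    "post_command_timeout": 600,
    "login_timeout": 240,
    "migration_timeout": 3600,
}

def get_timeout(params, name):
    """Return the configured timeout for `name`, falling back to a default.
    Config values arrive as strings, so always convert to int."""
    return int(params.get(name, DEFAULT_TIMEOUTS[name]))

params = {"login_timeout": "30"}                 # set by the user
print(get_timeout(params, "login_timeout"))      # user override wins
print(get_timeout(params, "migration_timeout"))  # default used
```

A single helper like this keeps the defaults in one place instead of scattering `params.get(..., N)` calls across tests.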

 -D
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html




-- 
Sudhir Kumar


Re: [KVM-AUTOTEST PATCH] kvm_utils.py: remote_login(): improve regular expression matching

2009-05-26 Thread sudhir kumar
On Mon, May 25, 2009 at 2:15 PM, Michael Goldish mgold...@redhat.com wrote:

 - sudhir kumar smalik...@gmail.com wrote:

 The patch looks sane to me. A very good thing that can be done for
 remote_login() is to tune the timeouts. I have seen, especially with
 Windows guests or when the machine is heavily loaded, that the
 timeouts elapse and the test fails. When I increased the timeouts the
 test did not fail. internal_timeout=0.5 is too low in my view, and
 even timeouts of 10 seconds prove insufficient sometimes. Do you too
 have any such experience?

 Yes, and I have a patch to fix the problem, which depends on another
 patch that isn't ready yet...

 Comments:

 1. internal_timeout has nothing to do with this -- it controls the time
 duration read_nonblocking() waits until it decides there's no more output
 to read and returns. 0.5 is high enough in my opinion. Increasing
 internal_timeout leads to more robust prompt recognition. Decreasing it
 makes all related functions return sooner and thus increases overall
 performance (slightly). I think even 0.1 is a reasonable value.
I noticed that changing internal_timeout from 0.5 to 1.0 caused my test to
pass for a Windows guest.

 2. My solution to the prompt timeout problem (which isn't a very common
 problem AFAIK) is not to make the timeouts configurable -- instead I use
 2 timeouts everywhere: an initial output timeout, and a further output
 timeout. The first timeout (typically 10 sec) expires if the guest hasn't
 responded to the SSH login request. Then the second timeout (typically 30
 sec) expires if there's no additional output. I think this makes sense
 because it usually doesn't take very long to get a password prompt or an
 Are you sure prompt. It can take a while to get the things that follow
 (a shell prompt). If we got some initial output it's likely that the guest
 will provide more, so we can afford to wait 30 seconds. We can make the 2
 timeouts configurable, but even fixing them at 10 and 30 will probably work
 well enough.
That looks OK.
Though if we are going to use the two timeout values everywhere, I
do not think there is any harm in making them configurable. We can keep
two parameters in the config file (provided we are going to use only two
timeouts with fixed values), give them default values (in the function
or even in the config file), and let the user have a little control.
Think of the scenario where one wants to stress the system and hence a
test failure because of a timeout is never expected. In case
there are any complications in providing the config variables, please
let me know. Please post the patch; I would like to test it and
try to make the timeouts config variables.
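The two-timeout scheme described above (an initial timeout for the first output, then a further timeout after any output) can be sketched as follows. This is a simplified model under stated assumptions, not the actual kvm_utils implementation, which reads from an ssh subprocess.

```python
import time

def read_with_two_timeouts(read_chunk, initial_timeout=10.0,
                           further_timeout=30.0):
    """Collect output from `read_chunk` (returns a string, or None when
    nothing is available yet). Wait up to `initial_timeout` for the first
    output; after any output arrives, keep waiting up to `further_timeout`
    for more."""
    output = ""
    deadline = time.time() + initial_timeout
    while time.time() < deadline:
        chunk = read_chunk()
        if chunk:
            output += chunk
            # got something, so extend the window with the further timeout
            deadline = time.time() + further_timeout
        else:
            time.sleep(0.05)
    return output

# Toy source that produces two chunks and then goes silent
chunks = iter(["password: ", "ok\n"])
print(read_with_two_timeouts(lambda: next(chunks, None),
                             initial_timeout=0.5, further_timeout=0.2))
```

The rationale matches the mail: a guest that has produced a password prompt is likely to produce more output, so a longer second timeout is affordable without slowing down the common failure case.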

 On Sun, May 24, 2009 at 9:16 PM, Michael Goldish mgold...@redhat.com
 wrote:
  1. Make the 'login:' regular expression stricter so it doesn't
 match
  'Last login: ...' messages.
  2. Make the 'password:' regular expression stricter.
  3. Handle 'Connection refused' messages.
 
  Signed-off-by: Michael Goldish mgold...@redhat.com
  ---
   client/tests/kvm_runtest_2/kvm_utils.py |   13 +
   1 files changed, 9 insertions(+), 4 deletions(-)
 
  diff --git a/client/tests/kvm_runtest_2/kvm_utils.py
 b/client/tests/kvm_runtest_2/kvm_utils.py
  index be8ad95..5736cf6 100644
  --- a/client/tests/kvm_runtest_2/kvm_utils.py
  +++ b/client/tests/kvm_runtest_2/kvm_utils.py
  @@ -413,7 +413,8 @@ def remote_login(command, password, prompt,
 linesep=\n, timeout=10):
 
      while True:
          (match, text) = sub.read_until_last_line_matches(
   -                ["[Aa]re you sure", "[Pp]assword:", "[Ll]ogin:",
  "[Cc]onnection.*closed", prompt],
   +                [r"[Aa]re you sure", r"[Pp]assword:\s*$",
  r"^\s*[Ll]ogin:\s*$",
   +                    r"[Cc]onnection.*closed",
  r"[Cc]onnection.*refused", prompt],
                   timeout=timeout, internal_timeout=0.5)
           if match == 0:  # "Are you sure you want to continue
  connecting"
               kvm_log.debug("Got 'Are you sure...'; sending 'yes'")
   @@ -437,11 +438,15 @@ def remote_login(command, password, prompt,
  linesep="\n", timeout=10):
               kvm_log.debug("Got 'Connection closed'")
               sub.close()
               return None
   -        elif match == 4:  # prompt
   +        elif match == 4:  # Connection refused
   +            kvm_log.debug("Got 'Connection refused'")
   +            sub.close()
   +            return None
   +        elif match == 5:  # prompt
               kvm_log.debug("Got shell prompt -- logged in")
               return sub
           else:  # match == None
   -            kvm_log.debug("Timeout or process terminated")
   +            kvm_log.debug("Timeout elapsed or process terminated")
               sub.close()
               return None
  
   @@ -470,7 +475,7 @@ def remote_scp(command, password, timeout=300,
  login_timeout=10):
  
       while True:
           (match, text) = sub.read_until_last_line_matches(
   -                ["[Aa]re you sure", "[Pp]assword:", "lost
  connection"

Re: [KVM-AUTOTEST] [PATCH] support for remote migration

2009-05-25 Thread sudhir kumar
Michael,
any updates on this patch? Are you going to commit this or you have
any other plans/patch ?

On Tue, May 5, 2009 at 1:12 AM, Michael Goldish mgold...@redhat.com wrote:
 Thanks for the new patch. I'll comment on it later because I want to take 
 some more time to review it.

 The login prompt problem is my fault -- please see my comment below.

 - yogi anant...@linux.vnet.ibm.com wrote:

 Hello everyone,

 I like to resubmit patch to add support for remote migration in
 kvm-autotest, based on Michael Goldish's suggestions.

 To use this patch the following seven parameters should be added to
 the
 existing migration test

         remote_dst = yes
         hostip = localhost ip or name
         remoteip = remote host ip or name
         remuser = root
         rempassword = password
         qemu_path_dst = qemu binary path on remote host
         image_dir_dst = images dir on remote host


 For example:
     - migrate:      install setup
         type = migration
         vms +=  dst
         migration_test_command = help
         kill_vm_on_error = yes
         hostip = 192.168.1.2
         remoteip = 192.168.1.3
         remuser = root
         rempassword = 123456
         remote_dst = yes
         qemu_path_dst = /tmp/kvm_autotest_root/qemu
         image_dir_dst = /tmp/kvm_autotest_root/images

         variants:

 The parameter remote_dst = yes indicates that the VM dst should be
 started on the remote host. If the parameters qemu_path_dst and
 image_dir_dst are not specified, it is assumed that the qemu binary
 and image paths are the same on both the local and remote hosts.

  Regarding remote_login:
 
  - Why should remote_login return a session when it gets an
 unexpected login prompt? If you get a login prompt doesn't that mean
 something went wrong? The username is always provided in the ssh
 command line, so we shouldn't expect to receive a login prompt -- or
 am I missing something? I am pretty confident this is true in the
 general case, but maybe it's different when ssh keys have been
 exchanged between the hosts.
 
  - I think it makes little sense to return a session object when you
 see a login prompt because that session will be useless. You can't
 send any commands to it because you don't have a shell prompt yet. Any
 command you send will be interpreted as a username, and will most
 likely be the wrong username.
 
  - When a guest is in the process of booting and we try to log into
 it, remote_login sometimes fails because it gets an unexpected login
 prompt. This is good, as far as I understand, because it means the
 guest isn't ready yet (still booting). The next time remote_login
 attempts to log in, it usually succeeds. If we consider an unexpected
 login prompt OK, we pass login attempts that actually should have
 failed (and the resulting sessions will be useless anyway).
 
 I have removed this from the current patch, so now the remote_login
 function is unchanged. I will recheck my machine configuration and
 submit
 it as a new patch if necessary. I had exchanged ssh keys between the
 hosts (both local and remote), but the login sessions seem to
 terminate with "Got unexpected login prompt".

 It seems the problem is caused by a loose regular expression in 
 kvm_utils.remote_login().
 In the list of parameters to read_until_last_line_matches, you'll find 
 something like [Ll]ogin:.
 I put it there to match the telnet login prompt which indicates failure, but 
 it also matches the
 Last login: Mon May 4 ... from ... line, which appears when SSH login 
 succeeds.
 This regex should be made stricter, e.g. r^[Ll]ogin:\s*$, which means it 
 must appear at the beginning
 of the line, and must be followed by nothing other than whitespace characters.
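The effect of anchoring the pattern can be demonstrated directly. The sketch below shows the loose pattern matching the "Last login: ..." banner of a successful SSH login (a false positive), while the stricter pattern only matches a bare telnet-style prompt.

```python
import re

# Loose pattern from the old code vs. the stricter anchored pattern
loose = re.compile(r"[Ll]ogin:")
strict = re.compile(r"^[Ll]ogin:\s*$")

banner = "Last login: Mon May  4 10:12:01 2009 from example.com"
prompt = "login: "

print(bool(loose.search(banner)))    # True  -- false positive on the banner
print(bool(strict.search(banner)))   # False -- banner correctly ignored
print(bool(strict.search(prompt)))   # True  -- real prompt still matched
```

Since the matching code applies the pattern to the last line of output, anchoring with `^` and `$` against that single line is enough; no MULTILINE flag is needed.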

 I'll commit a fix, which will also make the other regex's stricter as well, 
 but it won't appear in the
 public repository until Uri comes back from vacation.

 Thanks,
 Michael




-- 
Sudhir Kumar


Re: [KVM-AUTOTEST] [PATCH] support for remote migration

2009-05-25 Thread sudhir kumar
Thanks for the feedback.
Please provide your thoughts or an RFC so that we can start a discussion
on implementing this feature.
Meanwhile, I request you to give this patch a try. I tested the
patch once and found that setting the qemu path in the config is not
ideal. We can do it in the same way as we do for normal VM
creation, assuming kvm-autotest is installed on the remote host under
the same directory tree as on the source. I will test it further and
provide my feedback.

On Mon, May 25, 2009 at 3:23 PM, Michael Goldish mgold...@redhat.com wrote:

 - sudhir kumar smalik...@gmail.com wrote:

 Michael,
 any updates on this patch? Are you going to commit this or you have
 any other plans/patch ?

 Currently we don't have a patch for remote migration other than yogi's.
 We would, however, like to take some time to think about it, because it
 might be a better idea to implement it as two tests ('migration_source'
 and 'migration_dest') that are synchronized by the server. This way we
 won't have to deal with remote VM objects in the framework.

 If the server idea turns out to be infeasible then yogi's patch looks
 like the way to go (assuming it gets some testing to make sure it doesn't
 break anything).

 On Tue, May 5, 2009 at 1:12 AM, Michael Goldish mgold...@redhat.com
 wrote:
  Thanks for the new patch. I'll comment on it later because I want to
 take some more time to review it.
 
  The login prompt problem is my fault -- please see my comment
 below.
 
  - yogi anant...@linux.vnet.ibm.com wrote:
 
  Hello everyone,
 
  I like to resubmit patch to add support for remote migration in
  kvm-autotest, based on Michael Goldish's suggestions.
 
  To use this patch the following seven parameters should be added
 to
  the
  existing migration test
 
          remote_dst = yes
          hostip = localhost ip or name
          remoteip = remote host ip or name
          remuser = root
          rempassword = password
          qemu_path_dst = qemu binary path on remote host
          image_dir_dst = images dir on remote host
 
 
  For example:
      - migrate:      install setup
          type = migration
          vms +=  dst
          migration_test_command = help
          kill_vm_on_error = yes
          hostip = 192.168.1.2
          remoteip = 192.168.1.3
          remuser = root
          rempassword = 123456
          remote_dst = yes
          qemu_path_dst = /tmp/kvm_autotest_root/qemu
          image_dir_dst = /tmp/kvm_autotest_root/images
 
          variants:
 
  The parameter remote_dst = yes indicates that the VM dst should be
  started on the remote host. If the parameters qemu_path_dst and
  image_dir_dst are not specified, it is assumed that the qemu binary
  and image paths are the same on both the local and remote hosts.
 
   Regarding remote_login:
  
   - Why should remote_login return a session when it gets an
  unexpected login prompt? If you get a login prompt doesn't that
 mean
  something went wrong? The username is always provided in the ssh
  command line, so we shouldn't expect to receive a login prompt --
 or
  am I missing something? I am pretty confident this is true in the
  general case, but maybe it's different when ssh keys have been
  exchanged between the hosts.
  
   - I think it makes little sense to return a session object when
 you
  see a login prompt because that session will be useless. You can't
  send any commands to it because you don't have a shell prompt yet.
 Any
  command you send will be interpreted as a username, and will most
  likely be the wrong username.
  
   - When a guest is in the process of booting and we try to log
 into
  it, remote_login sometimes fails because it gets an unexpected
 login
  prompt. This is good, as far as I understand, because it means the
  guest isn't ready yet (still booting). The next time remote_login
  attempts to log in, it usually succeeds. If we consider an
 unexpected
  login prompt OK, we pass login attempts that actually should have
  failed (and the resulting sessions will be useless anyway).
  
  I have removed this from the current patch, so now the remote_login
  function is unchanged. I will recheck my machine configuration and
  submit
  it as a new patch if necessary. I had exchanged ssh keys between the
  hosts (both local and remote), but the login sessions seem to
  terminate with "Got unexpected login prompt".
 
  It seems the problem is caused by a loose regular expression in
 kvm_utils.remote_login().
  In the list of parameters to read_until_last_line_matches, you'll
 find something like [Ll]ogin:.
  I put it there to match the telnet login prompt which indicates
 failure, but it also matches the
  Last login: Mon May 4 ... from ... line, which appears when SSH
 login succeeds.
  This regex should be made stricter, e.g. r^[Ll]ogin:\s*$, which
 means it must appear at the beginning
  of the line, and must be followed by nothing other than whitespace
 characters.
 
  I'll commit a fix, which will also make the other

Re: [KVM-AUTOTEST PATCH] kvm_vm.py: make sure the bulk of VM.create() is not executed in parallel

2009-05-25 Thread sudhir kumar
On Mon, May 25, 2009 at 2:47 PM, Michael Goldish mgold...@redhat.com wrote:

 - sudhir kumar smalik...@gmail.com wrote:

 On Sun, May 24, 2009 at 9:16 PM, Michael Goldish mgold...@redhat.com
 wrote:
  VM.create() does a few things (such as finding free ports) which are
 not safe
  to execute in parallel. Use a lock file to make sure this doesn't
 happen. The
  lock is released only after the VM is started or fails to start.
 
  Signed-off-by: Michael Goldish mgold...@redhat.com
  ---
   client/tests/kvm_runtest_2/kvm_vm.py |   85
 +++---
   1 files changed, 48 insertions(+), 37 deletions(-)
 
  diff --git a/client/tests/kvm_runtest_2/kvm_vm.py
 b/client/tests/kvm_runtest_2/kvm_vm.py
  index 3ce2003..af06693 100644
  --- a/client/tests/kvm_runtest_2/kvm_vm.py
  +++ b/client/tests/kvm_runtest_2/kvm_vm.py
  @@ -3,6 +3,7 @@
   import time
   import socket
   import os
  +import fcntl
 
   import kvm_utils
   import kvm_log
  @@ -289,48 +290,58 @@ class VM:
                      kvm_log.error(Actual MD5 sum differs from
 expected one)
                      return False
 
  -        # Handle port redirections
  -        redir_names = kvm_utils.get_sub_dict_names(params,
 redirs)
  -        host_ports = kvm_utils.find_free_ports(5000, 6000,
 len(redir_names))
  -        self.redirs = {}
  -        for i in range(len(redir_names)):
  -            redir_params = kvm_utils.get_sub_dict(params,
 redir_names[i])
  -            guest_port = int(redir_params.get(guest_port))
  -            self.redirs[guest_port] = host_ports[i]
  -
  -        # Find available VNC port, if needed
  -        if params.get(display) == vnc:
  -            self.vnc_port = kvm_utils.find_free_port(5900, 6000)
  -
  -        # Make qemu command
  -        qemu_command = self.make_qemu_command()
  +        # Make sure the following code is not executed by more than
 one thread
  +        # at the same time
   +        lockfile = open("/tmp/kvm-autotest-vm-create.lock", "w+")
 How do you handle an open failure?

 Obviously I don't, but I don't think there should be a reason for this to
 fail, unless you deliberately create a file with such a name and with no
 write permission. The mode w+ creates the file if it doesn't exist.
 In the unlikely event of an open failure the test will fail with some
 Python exception.

 It may be a good idea to create the file in the test's local directory
 (test.bindir) instead of /tmp to prevent a symlink attack.
Agree.
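The agreed-upon scheme (a lock file in a directory the test controls, held across the critical section) can be sketched as below. Note that a temporary directory stands in for test.bindir here, which is an assumption for the sake of a self-contained example.

```python
import fcntl
import os
import tempfile

# Stand-in for test.bindir (assumption: any directory we control works,
# avoiding the /tmp symlink-attack concern raised above)
bindir = tempfile.mkdtemp()
lock_path = os.path.join(bindir, "vm-create.lock")

lockfile = open(lock_path, "w+")
fcntl.lockf(lockfile, fcntl.LOCK_EX)   # blocks until the lock is free
try:
    # ... port selection and qemu startup would happen here ...
    critical_result = "ports-allocated"
finally:
    # release the lock even if the critical section raises
    fcntl.lockf(lockfile, fcntl.LOCK_UN)
    lockfile.close()

print(critical_result)
```

Because `fcntl.lockf(..., LOCK_EX)` blocks rather than failing, a second process simply waits here, which is the behavior discussed in the thread.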

  +        fcntl.lockf(lockfile, fcntl.LOCK_EX)
 What if another instance has locked the file at the moment? Definitely
 you would not like to fail. You may want to wait for a while and try
 again.

 This is what fcntl.lockf() does -- it blocks until it can acquire the
 lock (it doesn't fail).

 I feel the default behaviour should be a blocking one, but
 you may still want to print the debug message
 kvm_log.debug("Trying to acquire lock for port selection")
 before getting the lock.

 This could be a good idea, but I'm not sure it's necessary, because while
 one process waits, another process is busy doing stuff and printing
 stuff, so the user never gets bored. Also, the time duration to wait is
 typically around 1 sec.

 In any case it shouldn't hurt too much to print debugging info, so I'll
 either resend this patch, or add to it in a future patch.
thanks.

 
  -        # Is this VM supposed to accept incoming migrations?
  -        if for_migration:
  -            # Find available migration port
  -            self.migration_port = kvm_utils.find_free_port(5200,
 6000)
  -            # Add -incoming option to the qemu command
  -            qemu_command +=  -incoming tcp:0:%d %
 self.migration_port
  +        try:
  +            # Handle port redirections
  +            redir_names = kvm_utils.get_sub_dict_names(params,
 redirs)
  +            host_ports = kvm_utils.find_free_ports(5000, 6000,
 len(redir_names))
  +            self.redirs = {}
  +            for i in range(len(redir_names)):
  +                redir_params = kvm_utils.get_sub_dict(params,
 redir_names[i])
  +                guest_port = int(redir_params.get(guest_port))
  +                self.redirs[guest_port] = host_ports[i]
  +
  +            # Find available VNC port, if needed
  +            if params.get(display) == vnc:
  +                self.vnc_port = kvm_utils.find_free_port(5900,
 6000)
  +
  +            # Make qemu command
  +            qemu_command = self.make_qemu_command()
  +
  +            # Is this VM supposed to accept incoming migrations?
  +            if for_migration:
  +                # Find available migration port
  +                self.migration_port =
 kvm_utils.find_free_port(5200, 6000)
  +                # Add -incoming option to the qemu command
  +                qemu_command +=  -incoming tcp:0:%d %
 self.migration_port
  +
  +            kvm_log.debug(Running qemu command:\n%s %
 qemu_command)
  +            (status, pid, output) = kvm_utils.run_bg(qemu_command

Re: [KVM-AUTOTEST PATCH] kvm_utils.py: remote_login(): improve regular expression matching

2009-05-24 Thread sudhir kumar
The patch looks sane to me. A very good thing that can be done for
remote_login() is to tune the timeouts. I have seen, especially with
Windows guests or when the machine is heavily loaded, that the
timeouts elapse and the test fails. When I increased the timeouts the
test did not fail. internal_timeout=0.5 is too low in my view, and
even timeouts of 10 seconds prove insufficient sometimes. Do you too
have any such experience?

On Sun, May 24, 2009 at 9:16 PM, Michael Goldish mgold...@redhat.com wrote:
 1. Make the 'login:' regular expression stricter so it doesn't match
 'Last login: ...' messages.
 2. Make the 'password:' regular expression stricter.
 3. Handle 'Connection refused' messages.

 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm_runtest_2/kvm_utils.py |   13 +
  1 files changed, 9 insertions(+), 4 deletions(-)

 diff --git a/client/tests/kvm_runtest_2/kvm_utils.py 
 b/client/tests/kvm_runtest_2/kvm_utils.py
 index be8ad95..5736cf6 100644
 --- a/client/tests/kvm_runtest_2/kvm_utils.py
 +++ b/client/tests/kvm_runtest_2/kvm_utils.py
 @@ -413,7 +413,8 @@ def remote_login(command, password, prompt, linesep=\n, 
 timeout=10):

     while True:
         (match, text) = sub.read_until_last_line_matches(
 -                ["[Aa]re you sure", "[Pp]assword:", "[Ll]ogin:",
 "[Cc]onnection.*closed", prompt],
 +                [r"[Aa]re you sure", r"[Pp]assword:\s*$",
 r"^\s*[Ll]ogin:\s*$",
 +                    r"[Cc]onnection.*closed", r"[Cc]onnection.*refused",
 prompt],
                 timeout=timeout, internal_timeout=0.5)
         if match == 0:  # "Are you sure you want to continue connecting"
             kvm_log.debug("Got 'Are you sure...'; sending 'yes'")
 @@ -437,11 +438,15 @@ def remote_login(command, password, prompt,
 linesep="\n", timeout=10):
             kvm_log.debug("Got 'Connection closed'")
             sub.close()
             return None
 -        elif match == 4:  # prompt
 +        elif match == 4:  # Connection refused
 +            kvm_log.debug("Got 'Connection refused'")
 +            sub.close()
 +            return None
 +        elif match == 5:  # prompt
             kvm_log.debug("Got shell prompt -- logged in")
             return sub
         else:  # match == None
 -            kvm_log.debug("Timeout or process terminated")
 +            kvm_log.debug("Timeout elapsed or process terminated")
             sub.close()
             return None

 @@ -470,7 +475,7 @@ def remote_scp(command, password, timeout=300,
 login_timeout=10):

     while True:
         (match, text) = sub.read_until_last_line_matches(
 -                ["[Aa]re you sure", "[Pp]assword:", "lost connection"],
 +                [r"[Aa]re you sure", r"[Pp]assword:\s*$", r"lost
 connection"],
                 timeout=_timeout, internal_timeout=0.5)
         if match == 0:  # Are you sure you want to continue connecting
             kvm_log.debug(Got 'Are you sure...'; sending 'yes')
 --
 1.5.4.1





-- 
Sudhir Kumar


Re: [KVM-AUTOTEST PATCH] kvm_runtest_2.py: use environment filename specified by the 'env' parameter

2009-05-24 Thread sudhir kumar
Good one to have in. Thanks.

On Sun, May 24, 2009 at 9:16 PM, Michael Goldish mgold...@redhat.com wrote:
 Do not use hardcoded environment filename 'env'. Instead use the value
 specified by the 'env' parameter. If unspecified, use 'env' as the filename.

 This is important for parallel execution; it may be necessary to use a 
 separate
 environment file for each process.

 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  client/tests/kvm_runtest_2/kvm_runtest_2.py |    2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

 diff --git a/client/tests/kvm_runtest_2/kvm_runtest_2.py 
 b/client/tests/kvm_runtest_2/kvm_runtest_2.py
 index fda7282..a69951b 100644
 --- a/client/tests/kvm_runtest_2/kvm_runtest_2.py
 +++ b/client/tests/kvm_runtest_2/kvm_runtest_2.py
 @@ -64,7 +64,7 @@ class kvm_runtest_2(test.test):
             self.write_test_keyval({key: params[key]})

         # Open the environment file
  -        env_filename = os.path.join(self.bindir, "env")
  +        env_filename = os.path.join(self.bindir, params.get("env", "env"))
          env = shelve.open(env_filename, writeback=True)
          kvm_log.debug("Contents of environment: %s" % str(env))

 --
 1.5.4.1





-- 
Sudhir Kumar


Re: [KVM-AUTOTEST PATCH] kvm_vm.py: make sure the bulk of VM.create() is not executed in parallel

2009-05-24 Thread sudhir kumar
 not be created with command:\n%s" %
 qemu_command)
 -            self.destroy()
 -            return False
 +            kvm_log.debug("VM appears to be alive with PID %d" % self.pid)

 -        kvm_log.debug("VM appears to be alive with PID %d" % self.pid)
 +            return True

 -        return True
 +        finally:
 +            fcntl.lockf(lockfile, fcntl.LOCK_UN)
 +            lockfile.close()

     def send_monitor_cmd(self, command, block=True, timeout=20.0):
         Send command to the QEMU monitor.
 --
 1.5.4.1





-- 
Sudhir Kumar


Re: [KVM-AUTOTEST PATCH] WinXP step files: add an optional barrier to deal with a closed start menu

2009-05-24 Thread sudhir kumar
Any plans to add step files for other Windows versions?

On Sun, May 24, 2009 at 9:16 PM, Michael Goldish mgold...@redhat.com wrote:
 Signed-off-by: Michael Goldish mgold...@redhat.com
 ---
  .../kvm_runtest_2/steps/WinXP-32-setupssh.steps    |   10 --
  client/tests/kvm_runtest_2/steps/WinXP-32.steps    |    4 +++-
  client/tests/kvm_runtest_2/steps/WinXP-64.steps    |    3 +--
  3 files changed, 12 insertions(+), 5 deletions(-)

 diff --git a/client/tests/kvm_runtest_2/steps/WinXP-32-setupssh.steps 
 b/client/tests/kvm_runtest_2/steps/WinXP-32-setupssh.steps
 index 729d9df..ebb665f 100644
 --- a/client/tests/kvm_runtest_2/steps/WinXP-32-setupssh.steps
 +++ b/client/tests/kvm_runtest_2/steps/WinXP-32-setupssh.steps
 @@ -4,8 +4,14 @@
  # 
  step 24.72
  screendump 20080101_01_5965948293222a6d6f3e545db40c23c1.ppm
 -# open start menu
 -barrier_2 125 79 342 270 368b3d82c870dbcdc4dfc2a49660e798 124
 +# desktop reached
 +barrier_2 36 32 392 292 3828d3a9587b3a9766a567a2b7570e42 124
 +# 
 +step 24.72
 +screendump 20080101_01_5965948293222a6d6f3e545db40c23c1.ppm
 +# open start menu if not already open
 +sleep 10
 +barrier_2 84 48 0 552 082462ce890968a264b9b13cddda8ae3 10 optional
  # Sending keys: ctrl-esc
  key ctrl-esc
  # 
 diff --git a/client/tests/kvm_runtest_2/steps/WinXP-32.steps 
 b/client/tests/kvm_runtest_2/steps/WinXP-32.steps
 index b0c6e35..f52fd0e 100644
 --- a/client/tests/kvm_runtest_2/steps/WinXP-32.steps
 +++ b/client/tests/kvm_runtest_2/steps/WinXP-32.steps
 @@ -136,7 +136,8 @@ key alt-n
  step 2251.56
  screendump 20080101_22_dcdc2fe9606c044ce648422afe42e23d.ppm
  # User
 -barrier_2 409 35 64 188 3d71d4d7a9364c1e6415b3d554ce6e5b 9
 +barrier_2 161 37 312 187 a941ecbeb73f9d73e3e9c38da9a4b743 9
 +# Sending keys: $user alt-n
  var user
  key alt-n
  # 
 @@ -154,6 +155,7 @@ barrier_2 48 51 391 288 bbac8a522510d7c8d6e515f6a3fbd4c3 
 240
  step 2279.61
  screendump 20090416_150641_b72ad5c48ec2dbc9814d569e38cbb4cc.ppm
  # Win XP Start Menu (closed)
 +sleep 20
  barrier_2 104 41 0 559 a7cc02cecff2cb495f300aefbb99d9ae 5 optional
  # Sending keys: ctrl-esc
  key ctrl-esc
 diff --git a/client/tests/kvm_runtest_2/steps/WinXP-64.steps 
 b/client/tests/kvm_runtest_2/steps/WinXP-64.steps
 index 20bac81..91e6d0f 100644
 --- a/client/tests/kvm_runtest_2/steps/WinXP-64.steps
 +++ b/client/tests/kvm_runtest_2/steps/WinXP-64.steps
 @@ -74,7 +74,6 @@ key ret
  # 
  step 286.86
  screendump 20080101_10_bb878343930f948c0346f103a387157a.ppm
 -barrier_2 69 15 179 8 93889bdbe5351e61a6d9c7d00bb1c971 10
  # 
  step 409.46
  screendump 20080101_11_30db9777a7883a07e6e65bff74e1d98f.ppm
 @@ -100,7 +99,7 @@ key 0xdc
  step 978.02
  screendump 20080101_14_213fbe6fa13bf32dfac6a00bf4205e45.ppm
  # Windows XP Start Menu Opened
 -barrier_2 48 20 274 420 c4a9620d84508013050e5a37a0d9e4ef 15
 +barrier_2 129 30 196 72 aae68af7e05e2312c707f2f4bd73f024 15
  # Sending keys: u
  key u
  # 
 --
 1.5.4.1





-- 
Sudhir Kumar


Re: kvm-autotest: The automation plans?

2009-05-14 Thread sudhir kumar
On Wed, May 13, 2009 at 11:30 PM, Michael Goldish mgold...@redhat.com wrote:

 - sudhir kumar smalik...@gmail.com wrote:

 Hi Uri/Lucas,

 Do you have any plans for enhancing kvm-autotest?
 I was looking mainly on the following 2 aspects:

 (1).
 we have standalone migration only. Is there any plans of enhancing
 kvm-autotest so that we can trigger migration while a workload is
 running?
 Something like this:
 Start a workload(may be n instances of it).
 let the test execute for some time.
 Trigger migration.
 Log into the target.
 Check if the migration is succesful
 Check if the test results are consistent.

 Yes, we have plans to implement such functionality. It shouldn't be
 hard, but we need to give it some thought in order to implement it as
 elegantly as possible.
I completely agree here.

 (2).
 How can we run N parallel instances of a test? Will the current
 configuration  be easily able to support it?

 I currently have some experimental patches that allow running of
 several parallel queues of tests. But what exactly do you mean by
Please post them.
 N parallel instances of a test? Do you mean N queues? Please provide
 an example so I can get a better idea.
I want parallelism in two respects. Let me try an example.
The following test
 only raw.*ide.*default.*smp2.*RHEL5.3.i386.*migrate.dbench
is just one instance: it will create one VM with the given specifications
and execute migrate and dbench. So I am thinking about how we can trigger N
similar tests in parallel. I feel job.parallel() is meant
for that, but is kvm_tests.cfg good enough to be used in such a
scenario? Most of the setup is non-static (e.g. getting a
free VNC port), but we still have some variables that are static,
e.g. the VM name, the migration port, etc. So what are your thoughts on it?
In this scenario my system will be running N VMs, all running the same
set of test cases.

On the other hand, I was looking for something like this as well:
 only 
raw.*ide.*default.*smp2.*RHEL5.3.i386.*migrate.dbench.dbench_instancesN.bonnie
Here all the tests would be executed in the normal way except dbench:
N instances of dbench should run, and when they are over, simply run
bonnie and exit.

I hope my demands on kvm-autotest are not too much, but such a
framework is necessary for effective and rigorous testing of kvm. I am
a bit new to the autotest framework and have very little knowledge of
the server side; I will start spending some time looking at the
available features.

Hope I was clear this time.



-- 
Sudhir Kumar
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: kvm-autotest: The automation plans?

2009-05-14 Thread sudhir kumar
On Thu, May 14, 2009 at 12:22 PM, jason wang jasow...@redhat.com wrote:
 sudhir kumar wrote:

 Hi Uri/Lucas,

 Do you have any plans for enhancing kvm-autotest?
 I was looking mainly at the following two aspects:

 (1).
 we have standalone migration only. Is there any plans of enhancing
 kvm-autotest so that we can trigger migration while a workload is
 running?
 Something like this:
 Start a workload(may be n instances of it).
 let the test execute for some time.
 Trigger migration.
 Log into the target.
 Check if the migration is succesful
 Check if the test results are consistent.


 We have some patches of ping pong migration and workload adding. The
 migration is based on public bridge and workload adding is based on running
 benchmark in the background of guest.
Cool. I would like to have a look at them. How do you manage the
background process/thread?


 (2).
 How can we run N parallel instances of a test? Will the current
 configuration easily support it?

 Please provide your thoughts on the above features.



 Parallel instances could easily be achieved through job.parallel()
 in the autotest framework, and that is what we have used in our tests. We
 have made some helper routines such as get_free_port reentrant through a
 file lock.
 We've implemented the following test cases: timedrift (already sent here),
 savevm/loadvm, suspend/resume, jumbo frame, migration between two machines,
 and others. We will send them here for review in the following weeks.
 There are some other things that could be improved:
 1) The current kvm_tests.cfg.sample/kvm_tests.cfg is invisible to the
 autotest server UI, which makes it hard to configure the tests on the
 server side. During our tests we merged it into the control file and made
 it configurable via the edit-control-file function of the autotest server
 web UI.
Not much clue here, but I would like to keep the control file as
simple as possible and as independent of test scenarios as possible.
kvm_tests.cfg should be the right file unless and until it becomes
impossible to do this with it.
 2) Public bridge support: I've sent a patch (TAP network support in
 kvm-autotest); this patch needs an external DHCP server and requires nmap.
 I don't know whether the method of the original kvm_runtest_old (DHCP
 server on a private bridge) is preferable.
The old approach is better. Not everyone may be able to run an
external DHCP server for the test, and I do not see any issue with the
old approach.
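For what it's worth, the file-lock trick mentioned above for making get_free_port() reentrant could look roughly like this. The lock-file path and port range are my assumptions for the sketch, not the actual patch:

```python
import fcntl
import socket

# Assumed lock path; any path writable by all test instances would do.
LOCK_FILE = "/tmp/kvm_autotest_port.lock"

def get_free_port(start=5200, end=5299):
    """Probe for a free TCP port while holding an exclusive file lock,
    so that parallel test instances never pick the same port."""
    with open(LOCK_FILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # serialize the probing
        try:
            for port in range(start, end + 1):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                try:
                    s.bind(("localhost", port))
                    return port            # free: hand it to the caller
                except socket.error:
                    continue               # in use: try the next one
                finally:
                    s.close()
            raise RuntimeError("no free port in %d-%d" % (start, end))
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

Note the remaining window between probing and the caller actually binding the port; a real implementation might keep the lock until the port is in use.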





-- 
Sudhir Kumar


kvm-autotest: The automation plans?

2009-05-13 Thread sudhir kumar
Hi Uri/Lucas,

Do you have any plans for enhancing kvm-autotest?
I was looking mainly at the following two aspects:

(1).
we have standalone migration only. Are there any plans to enhance
kvm-autotest so that we can trigger migration while a workload is
running?
Something like this:
Start a workload (maybe N instances of it).
Let the test run for some time.
Trigger migration.
Log into the target.
Check whether the migration was successful.
Check whether the test results are consistent.

(2).
How can we run N parallel instances of a test? Will the current
configuration easily support it?

Please provide your thoughts on the above features.

-- 
Sudhir Kumar


Re: CPU Limits on KVM?

2009-04-07 Thread sudhir kumar


-- 
Sudhir Kumar


Re: new wiki missing pages? / new wiki for kvm

2009-03-31 Thread sudhir kumar
The hugetlbfs info is missing on the new wiki; it is here:
http://il.qumranet.com/kvmwiki/UsingLargePages

On Wed, Mar 11, 2009 at 4:38 AM, Dor Laor dl...@redhat.com wrote:
 Hollis Blanchard wrote:

 On Tue, 2009-03-10 at 22:49 +0200, Dor Laor wrote:


 Sorry for that. It took IT only a few months to change the wiki... During
 this tight schedule some pages got lost, as you can see. Please report any
 problematic/missing page.


 Are these emails sufficient, or are you asking us to report some other
 way?


 It is sufficient; I meant that all of the content writers should
 double-check. Thanks.



 The original content can be reached using http://il.qumranet.com/kvmwiki


 Please restore all pages linked from here:
 http://il.qumranet.com/kvmwiki/CategoryPowerPC


 Sure



 In general, the kvm wiki has finally moved from qumranet.kvm.com to
 www.linux-kvm.org.


 It's very confusing that linux-kvm.com and linux-kvm.org are apparently
 completely unrelated. I wonder why you chose to create .org when .com
 already existed.


 You're right, I didn't pick this. Also we need to get rid of the old usage
 of the kvm acronym :)
 If we move to the qemu wiki the problem will vanish.



 We're considering an option to unite the kvm and qemu wikis,
 since there is a lot of shared content and eventually we'll have a shared
 userspace executable.


 That would be great! First we'll need a working qemu wiki though...
 maybe you can solve that problem at the same time.

 What timeframe are we talking about? Next week? 6 months? Just a
 brainstorm?


 One of the qemu maintainers handles it.
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html




-- 
Sudhir Kumar


Re: kvm-autotest -- introducing kvm_runtest_2

2009-03-04 Thread sudhir kumar
So at the end I was completely mixed up with respect to the cropped
images and had no idea which cropped image corresponded to which
md5sum.

Though the RHEL5.3 step files that I generated in the text-mode
installation were quite robust.

 one similar to kvmtest. The kvmtest way is to let the user create his/her
 own screendumps to be used later. I did not want to add so many screendump
 images to the repository. Step-Maker keeps the images it uses, so we can
 compare them upon failure. Step-Editor lets the user change a single
 barrier_2 step (or more) by looking at the original image and picking a
 different area.

 Agreed, I don't want to commit screens to the repo either, I just want
 to be able to use screens if a user has them available.

I have two questions with respect to step files.
1. Timeouts: a timeout may fall short if a step file generated on a
high-end machine is used on a very low-config machine, or when
installing N virtual machines (say N ~ 50 or 100) in parallel.
2. If the KVM display changes in the future, the md5sums will fail.
Are we prepared for that?

   - a lot of the ssh and scp work to copy the autotest client into a guest
    is already handled by autoserv
 That is true, but we want to be able to run it as a client test too. That
 way a user does not have to install the server to run kvm tests on his/her
 machine.

 While true, we should try to use the existing server code for autotest
 install.

   - vm.py has a lot of infrastructure that should be integrated into
   autotest/server/kvm.py  or possibly client-side common code to support
   the next comment
 In the long term, there should be a client-server shared directory that
 deals with kvm guests (letting the host client be the server for kvm-guest
 clients)

 I believe client/common_lib is imported into server-side as common code,
 so moving kvm infrastructure bits there will allow server-side and any
 other client test to manipulate VM/KVM objects.


   - kvm_tests.py defines new tests as functions; each of these tests
    should be a separate client test, which sounds like a pain but
    should allow for easier test composition and hopefully make it easier
    to add new tests that look like any other client-side test, with just
    the implementation.py and control file
      - this model moves toward eliminating kvm_runtest_2 and having the
      server side generate a set of tests to run and spawn those on a
      target system.
 I am not sure that I follow. Why is implementing a test as a function a
 pain?

 A test as a function of course isn't a pain. What I meant was that if
 I already have two guests and I just want to do a migration test, it
 would be nice to just be able to:

 % cd client/tests/kvm/migration
 % $AUTOTEST_BIN ./control

 I'd like to move away from tests eval-ing and exec-ing other tests; it
 just feels a little hacky and stands out versus other autotest client
 tests.

 We can probably table the discussion until we push patches at autotest
 and see what that community thinks of kvm_runtest_2 at that time.


 The plan is to keep kvm_runtest_2 in the client side, but move the
 configuration file + parser to the server (if one wants to dispatch test
 from the server). The server can dispatch the client test (using
 kvm_runtest_2 test + dictionary + tag). We have dependencies and we can
 spread unrelated kvm tests among similar hosts (e.g install guest OS and
 run tests for Fedora-10 64 bit on one machine, and install guest OS and run
 tests for Windows-XP 32 bit on another).

 Yeah, that sounds reasonable.

 We may have to add hosts to the configuration file, or add them while
 running (from the control file?). We certainly do not want to go back to
 the way kvm_runtest works (playing with symbolic links so that autotest
 would find and run the test), and we do not want too many kvm_test_NAME
 directories under client/tests/ .

 Agree with no symbolic links.  If we move common kvm utils and objects
 to client/common_lib that avoids any of that hackery.

 On the dir structure, I agree we don't have to pollute client/tests with
 a ton of kvm_* tests. I'll look around upstream autotest and see if
 there is another client-side test to use as an example.

   I do still like the idea of having a client-side test that can just
   run on a developer/user's system to produce results without having to
   configure all of the autotest server-side bits.
 Me too :-)

 Thanks for all the comments and suggestions,

 Sure


 --
 Ryan Harper
 Software Engineer; Linux Technology Center
 IBM Corp., Austin, Tx
 ry...@us.ibm.com




-- 
Sudhir Kumar