Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-03 Thread Michael Goldish

- Yolkfull Chow yz...@redhat.com wrote:

 Hi Michael, I just have some comments on what you changed in
 stress_boot. :-)
 
 
 On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
  [...]
  @@ -117,7 +117,12 @@ variants:
   - stress_boot:  install setup
   type = stress_boot
   max_vms = 5
  -alive_test_cmd = ps aux
  +alive_test_cmd = uname -a
  +clone_address_index_base = 10
  +kill_vm = yes
  +kill_vm_vm1 = no
 
 
 Actually, there are two methods for dealing with started VMs in the
 framework: one cleans them up as soon as the current test finishes, and the
 other does it just before the next test starts.  In this case the two
 methods are more or less equivalent, I think, and I chose the latter.  But
 if you have done some testing and proved the former is better, I would
 agree.

It's not that I tested and found the former to be better, it's just that
if a user runs only stress_boot, it makes sense to clean up at the end
of the test, not at the beginning of the next one (because there's no
next one).  Since you're adding VMs to the postprocessor's params
(params['vms'] += " " + vm_name) I thought we might as well use that.
(In order to clean up the VMs automatically when the next test starts,
we don't need to add the VM to those params -- only to 'env'.)
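
A runnable toy of the two cleanup routes being discussed -- the names below
are illustrative stand-ins, not the real framework API:

    # Stand-ins for the framework's env and test params dicts.
    env = {}
    params = {"vms": "vm1", "kill_vm": "yes"}

    def register_clone(name):
        env[name] = object()           # route 1: the next test's preprocessor
                                       # can find and kill leftovers via env
        params["vms"] += " " + name    # route 2: this test's own postprocessor
                                       # kills it at test end (kill_vm = yes)

    register_clone("vm2")
    print(params["vms"])  # 'vm1 vm2'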

 
 
  +kill_vm_gracefully = no
 
 This has already been specified in the code during cloning.

Right, but the postprocessor is supposed to ignore that, because it doesn't
look at the params inside the VM -- it looks at test params that get to the
postprocessor.

 
  +extra_params += " -snapshot"
 
 If you add this option '-snapshot' here, the first VM will be in snapshot
 mode as well, which will affect the next test.

Which is good in my opinion, because I don't want the first VM writing to the
same image that others are reading from.  It sounds like it might cause
trouble.

Let me know what you think.

Thanks,
Michael

 
   
  [...]

Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-03 Thread Yolkfull Chow
On Mon, Aug 03, 2009 at 02:45:23AM -0400, Michael Goldish wrote:
 
 - Yolkfull Chow yz...@redhat.com wrote:
 
  Hi Michael, I just have some comments on what you changed in
  stress_boot. :-)
  
  
  On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
   [...]
   @@ -117,7 +117,12 @@ variants:
- stress_boot:  install setup
type = stress_boot
max_vms = 5
   -alive_test_cmd = ps aux
   +alive_test_cmd = uname -a
   +clone_address_index_base = 10
   +kill_vm = yes
   +kill_vm_vm1 = no
  
  
  Actually, there are two methods for dealing with started VMs in the
  framework: one cleans them up as soon as the current test finishes, and the
  other does it just before the next test starts.  In this case the two
  methods are more or less equivalent, I think, and I chose the latter.  But
  if you have done some testing and proved the former is better, I would
  agree.
 
 It's not that I tested and found the former to be better, it's just that
 if a user runs only stress_boot, it makes sense to clean up at the end
 of the test, not at the beginning of the next one (because there's no
 next one).  Since you're adding VMs to the postprocessor's params
 (params['vms'] += " " + vm_name) I thought we might as well use that.
 (In order to clean up the VMs automatically when the next test starts,
 we don't need to add the VM to those params -- only to 'env'.)

Hmm...yes, if users only want to run stress_boot, cleaning up should be
implemented at the end of the test. 

 
  
  
   +kill_vm_gracefully = no
  
  This has already been specified in the code during cloning.
 
 Right, but the postprocessor is supposed to ignore that, because it doesn't
 look at the params inside the VM -- it looks at test params that get to the
 postprocessor.
 
  
   +extra_params += " -snapshot"
  
  If you add this option '-snapshot' here, the first VM will be in snapshot
  mode as well, which will affect the next test.
 
 Which is good in my opinion, because I don't want the first VM writing to the
 same image that others are reading from.  It sounds like it might cause
 trouble.
 
 Let me know what you think.

My opinion is: since the scenario where the first VM runs in non-snapshot
mode while the others use snapshot mode has not caused any problems, why not
keep the first VM in non-snapshot mode and let it be reused in the next test,
which really saves booting time?

What do you think?  Please let me know if there is a trap in such a scenario.
Thanks. :-)

 
 [...]
Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-03 Thread Michael Goldish

- Yolkfull Chow yz...@redhat.com wrote:

 On Mon, Aug 03, 2009 at 02:45:23AM -0400, Michael Goldish wrote:
  
  - Yolkfull Chow yz...@redhat.com wrote:
  
   Hi Michael, I just have some comments on what you changed in
   stress_boot. :-)
   
   
   On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
    [...]
@@ -117,7 +117,12 @@ variants:
 - stress_boot:  install setup
 type = stress_boot
 max_vms = 5
-alive_test_cmd = ps aux
+alive_test_cmd = uname -a
+clone_address_index_base = 10
+kill_vm = yes
+kill_vm_vm1 = no
   
   
   Actually, there are two methods for dealing with started VMs in the
   framework: one cleans them up as soon as the current test finishes, and
   the other does it just before the next test starts.  In this case the two
   methods are more or less equivalent, I think, and I chose the latter.  But
   if you have done some testing and proved the former is better, I would
   agree.
  
  It's not that I tested and found the former to be better, it's just that
  if a user runs only stress_boot, it makes sense to clean up at the end of
  the test, not at the beginning of the next one (because there's no next
  one).  Since you're adding VMs to the postprocessor's params
  (params['vms'] += " " + vm_name) I thought we might as well use that.
  (In order to clean up the VMs automatically when the next test starts, we
  don't need to add the VM to those params -- only to 'env'.)
 
 Hmm...yes, if users only want to run stress_boot, cleaning up should be
 implemented at the end of the test.
 
  
   
   
+kill_vm_gracefully = no
   
   This has already been specified in the code during cloning.
  
  Right, but the postprocessor is supposed to ignore that, because it
  doesn't look at the params inside the VM -- it looks at test params that
  get to the postprocessor.
  
   
+extra_params += " -snapshot"
   
   If you add this option '-snapshot' here, the first VM will be in
   snapshot mode as well, which will affect the next test.
  
  Which is good in my opinion, because I don't want the first VM writing to
  the same image that others are reading from.  It sounds like it might cause
  trouble.

  Let me know what you think.
 
 My opinion is: since the scenario where the first VM runs in non-snapshot
 mode while the others use snapshot mode has not caused any problems, why not
 keep the first VM in non-snapshot mode and let it be reused in the next
 test, which really saves booting time?

 What do you think?  Please let me know if there is a trap in such a
 scenario.  Thanks. :-)

I'm not sure if it really will cause trouble, but it sounds like it might,
since we're modifying an image that's being used by another VM, if I
understand correctly.  I don't think we actually have to see a test fail
before we decide to avoid doing something -- it's enough just to know that
something is bad practice.  I'm not even sure this is bad practice, but I
want to be on the safe side anyway.  Can anyone shed some light on this?
(Is it risky to start a VM without -snapshot, and in parallel start other
VMs using the same image with -snapshot?)

  
  [...]

[KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-02 Thread Michael Goldish
Take an additional parameter 'clone_address_index_base' which indicates the
initial value for 'address_index' for the cloned VMs.  This value is
incremented after each clone is created.  I assume the original VM has a single
NIC; otherwise NICs will end up sharing MAC addresses, which is bad.
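
A rough, runnable sketch of why each clone needs its own address_index (the
MAC pool below is hypothetical -- presumably the framework maps address_index
to preconfigured MAC/IP entries, but the collision logic is the same):

    # Hypothetical pool; entry i stands for whatever MAC the framework
    # would assign to address_index == i.
    mac_pool = ["52:54:00:12:34:%02x" % i for i in range(256)]

    def mac_for(vm_params):
        return mac_pool[int(vm_params.get("address_index", "0"))]

    base = 10  # clone_address_index_base
    for num in range(2, 6):
        clone_params = {"address_index": str(base)}
        print("vm%d -> %s" % (num, mac_for(clone_params)))
        base += 1  # without this increment, all clones would share one MAC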

Also, make a few small corrections:

- Take the params for the clones from the original VM's params, not from the
test's params, because the test's params contain information about several
VMs, only one of which is the original VM. The original VM's params, on the
other hand, describe just a single VM (which we want to clone).

- Change the way kill_vm.* parameters are sent to the postprocessor.
(The postprocessor doesn't read params from the VM objects but rather from the
test's params dict; see the first sketch after this list.)

- Replace 'if get_command_status(...)' with 'if get_command_status(...) != 0'
(see the second sketch after this list).

- Replace the test command 'ps aux' with 'uname -a'.  The silly reason for this
is that DSL-4.2.5 doesn't like 'ps aux'.  Since 'uname -a' is just as good
(AFAIK), use it instead.
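
First sketch: a toy of the per-VM override implied by kill_vm_vm1 = no in the
config hunk below (the suffix lookup is an assumption about how the
postprocessor resolves per-VM keys, not confirmed in this thread):

    params = {"kill_vm": "yes", "kill_vm_vm1": "no"}

    def vm_param(params, vm_name, key):
        # Assumed resolution order: per-VM key first, then the global key.
        return params.get("%s_%s" % (key, vm_name), params.get(key))

    print(vm_param(params, "vm1", "kill_vm"))  # 'no'  -> the first VM is kept
    print(vm_param(params, "vm2", "kill_vm"))  # 'yes' -> clones are killed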

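Second sketch: why the explicit '!= 0' can matter, assuming
get_command_status() may return None when the command never completes (an
assumption, not stated in this thread):

    status = None  # e.g. the session timed out and no exit code was read
    if status:          # False -- a plain truthiness check misses this case
        print("failure caught by truthiness check")
    if status != 0:     # True  -- catches nonzero exit codes and None alike
        print("failure caught by explicit check")
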
Signed-off-by: Michael Goldish mgold...@redhat.com
---
 client/tests/kvm/kvm_tests.cfg.sample |    7 ++++++-
 client/tests/kvm/kvm_tests.py         |   14 ++++++--------
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index 3c23432..3a4bf64 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -117,7 +117,12 @@ variants:
     - stress_boot:  install setup
         type = stress_boot
         max_vms = 5
-        alive_test_cmd = ps aux
+        alive_test_cmd = uname -a
+        clone_address_index_base = 10
+        kill_vm = yes
+        kill_vm_vm1 = no
+        kill_vm_gracefully = no
+        extra_params += " -snapshot"
 
     - shutdown: install setup
         type = shutdown
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index 9784ec9..308db97 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -541,8 +541,8 @@ def run_stress_boot(tests, params, env):
         raise error.TestFail("Could not log into first guest")
 
     num = 2
-    vms = []
     sessions = [session]
+    address_index = int(params.get("clone_address_index_base", "10"))
 
     # boot the VMs
     while num <= int(params.get("max_vms")):
@@ -550,15 +550,12 @@ def run_stress_boot(tests, params, env):
             vm_name = "vm" + str(num)
 
             # clone vm according to the first one
-            vm_params = params.copy()
-            vm_params['image_snapshot'] = "yes"
-            vm_params['kill_vm'] = "yes"
-            vm_params['kill_vm_gracefully'] = "no"
+            vm_params = vm.get_params().copy()
+            vm_params["address_index"] = str(address_index)
             curr_vm = vm.clone(vm_name, vm_params)
             kvm_utils.env_register_vm(env, vm_name, curr_vm)
             params['vms'] += " " + vm_name
 
-            #vms.append(curr_vm)
             logging.info("Booting guest #%d" % num)
             if not curr_vm.create():
                 raise error.TestFail("Cannot create VM #%d" % num)
@@ -571,10 +568,11 @@ def run_stress_boot(tests, params, env):
             sessions.append(curr_vm_session)
 
             # check whether all previous ssh sessions are responsive
-            for i, vm_session in enumerate(sessions):
-                if vm_session.get_command_status(params.get("alive_test_cmd")):
+            for i, se in enumerate(sessions):
+                if se.get_command_status(params.get("alive_test_cmd")) != 0:
                     raise error.TestFail("Session #%d is not responsive" % i)
             num += 1
+            address_index += 1
 
         except (error.TestFail, OSError):
             for se in sessions:
-- 
1.5.4.1



Re: [KVM-AUTOTEST PATCH 12/12] KVM test: make stress_boot work properly with TAP networking

2009-08-02 Thread Yolkfull Chow
Hi Michael, I just have some comments on what you changed in
stress_boot. :-)


On Mon, Aug 03, 2009 at 02:58:21AM +0300, Michael Goldish wrote:
 [...]
 @@ -117,7 +117,12 @@ variants:
  - stress_boot:  install setup
  type = stress_boot
  max_vms = 5
 -alive_test_cmd = ps aux
 +alive_test_cmd = uname -a
 +clone_address_index_base = 10
 +kill_vm = yes
 +kill_vm_vm1 = no


Actually, there are two methods for dealing with started VMs in the
framework: one cleans them up as soon as the current test finishes, and the
other does it just before the next test starts.  In this case the two methods
are more or less equivalent, I think, and I chose the latter.  But if you
have done some testing and proved the former is better, I would agree.


 +kill_vm_gracefully = no

This has already been specified in the code during cloning.

 +extra_params += " -snapshot"

If you add this option '-snapshot' here, the first VM will be in snapshot
mode as well, which will affect the next test.

  
 [...]