Thanks for the review. Here are some responses...
Brock Pytlik wrote:
> So here are my thoughts and questions:
> Overall question: Does this framework work with the SUNWldtp approach
> that's used to test the packagemanager, or will only CLI tests work
> with this framework?
I assume it will work fine with ldtp. The proto area, including the
entire test suite, gets copied to the VM, so you can run old or new
ldtp test cases on a VM using vtf.
> General nit: We've been using 8 space indentation, I think these files
> should probably follow the same formatting.
Yep.
> General comment:
> Having each test override the superclass tearDown just to pass doesn't
> seem useful, especially since that's exactly what the superclass is
> doing (I think).
The setUp and tearDown functions are still evolving.
> src/tests/vbox/README:
> 72-78: I think these steps should be: "Install the <name> package" if
> this isn't going to be integrated into our gate
This code is going into the IPS gate.
> 170-193: I think more explanation is needed here. Why would I copy a vdi
> into the system? What does it mean to "register it for use"? Could I
> copy it in without registering it for use? How can I delete a VDI if
> I've already deleted the virtual machine? Can a VDI be associated with
> more than one Virtual Machine?
I can work on the wording more.
Registration is just the way VirtualBox handles VMs and VDIs.
They must be registered to be used. VDIs can be used with
different VMs, but not at the same time.
vboxaddvm.py and vboxdelvm.py make VDI management much easier.
So, we can produce a golden set of VDI images to ensure consistent
testing.
For IPS testing I intend to have a VDI image for each relevant
development build and OpenSolaris release, for example:
osrel0805
osrel0811
osrel0906
osdev116
...
osdev129
osdev130
...
Besides this, specific VDIs and VMs can be created as needed.
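To illustrate the registration model, here's a rough sketch of the kind of VBoxManage invocations a helper like vboxaddvm.py might issue. The subcommands shown (createvm, modifyvm) exist, but the exact flags vary between VirtualBox releases and this is not the actual tool's code:

```python
# Hypothetical sketch of registering a golden VDI with a new VM.
# Commands are built as argv lists but not executed here.

def vm_commands(name, vdi_path):
    """Return the VBoxManage invocations (as argv lists) to set up a VM."""
    return [
        # Create the VM definition and register it with VirtualBox.
        ["VBoxManage", "createvm", "--name", name, "--register"],
        # Give the VM enough memory for an OpenSolaris guest.
        ["VBoxManage", "modifyvm", name, "--memory", "1024"],
        # Attach the golden VDI as the boot disk; the attachment flag
        # names differ between VirtualBox versions.
        ["VBoxManage", "modifyvm", name, "--hda", vdi_path],
    ]

for argv in vm_commands("osrel0906-test", "/vdi/osrel0906.vdi"):
    print(" ".join(argv))
```

Deleting a VM with vboxdelvm.py would then be the reverse: detach the VDI, unregister, and delete the VM, leaving the golden VDI untouched for reuse.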
> t_vtf_console:
> 92: Why is 30 the magical number of characters to look for "/etc/hosts" in?
I can make this clearer. The command-line prompt has to be included
in the count; it's 16 characters in this case:
t...@osdev125:~$ ls /etc/hosts
/etc/hosts
t...@osdev125:~$
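Roughly, the arithmetic behind the 30-character window looks like this (a sketch; the prompt string here is an example, not the literal one from the test run):

```python
# The console buffer begins with the echoed command line, so
# "/etc/hosts" must appear within prompt + command.

prompt = "test@osdev125:~$"   # 16 characters, as in the transcript
target = "/etc/hosts"

# Echoed line: prompt, a space, "ls", a space, then the target.
window = len(prompt) + len(" ls ") + len(target)
print(window)   # 30

console = prompt + " ls " + target + "\n" + target + "\n"
assert target in console[:window]
```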
> t_vtf_pkg:
> I'm not sure I get the point of these tests. Are they just to ensure
> that the basic infrastructure is working?
Correct. All the t_vtf_* test cases verify the vtf interface is working.
> Since there are separate proto
> and non-proto tests, should these make sure that the proto (or
> non-proto) version is actually being run? I'm also not sure why/how the
> proto would be used in this framework? Is the idea to sometimes use the
> old version of pkg and sometimes use the current workspace version?
> Speaking of which, I probably just missed it when looking at the client
> setup, but how does the proto area for the current workspace get plunked
> down on each client? Can I control which clients get it? (So I can test
> a b123 server talking with a gate client, or the other way around for
> example.)
Right. The test case writer can decide when to use the native pkg
commands and when to use the code from the proto area.
The proto_setup() function in vtfutils.py copies the
proto area to the guest.
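As a sketch of the idea, a test could pick between the native pkg and the proto-area copy on the guest like this. PROTO_ROOT and pkg_cmd() are hypothetical stand-ins, not vtf's actual interface:

```python
# Choose which pkg binary a guest-side command should use.

PROTO_ROOT = "/tmp/proto"   # where proto_setup() lands the bits (assumed)

def pkg_cmd(args, use_proto=False):
    """Build the pkg command line, optionally from the proto area."""
    pkg = PROTO_ROOT + "/usr/bin/pkg" if use_proto else "/usr/bin/pkg"
    return "%s %s" % (pkg, args)

print(pkg_cmd("list SUNWipkg"))                  # native client bits
print(pkg_cmd("list SUNWipkg", use_proto=True))  # workspace bits
```

This also answers the mixed-version question: one guest can run native (b123) bits while another runs the workspace's proto bits, in either direction.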
> t_vtf_teardown:
> Shouldn't there be some kind of check to see that the guests (and host)
> are in fact torn down?
Yep. I'll add an assertion.
> t_pkg_image_update:
> Would it be worth looking at parallelizing the multiple client image
> update? I'm just thinking that each of those steps could well be fairly
> slow, especially the reboot. Maybe we're only anticipating having 1 or 2
> guests, but if we had 10 or 20 to image-update, this seems like an
> unnecessary bottleneck, or does doing one of these operations tax most
> systems to the max anyway?
Each VM uses memory (1G) and cpu, so there are system limitations, but
we can do things in parallel on most systems. Initially I was just
going to do things serially. In the end we will probably end up with
several image-update test cases.
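When we do parallelize, it could look something like the sketch below: a small worker pool driving several guests at once, capped because each VM needs about 1G of host memory. image_update() is a placeholder, not vtf's real routine:

```python
# Run image-update across several guests concurrently with a bounded pool.

from concurrent.futures import ThreadPoolExecutor

def image_update(vm):
    # Placeholder: run "pkg image-update" on the guest and reboot it.
    return (vm, 0)   # pretend it succeeded with exit code 0

vms = ["osrel0805", "osrel0811", "osrel0906", "osdev116"]

# max_workers bounds concurrent VMs to fit host memory/cpu.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(image_update, vms))

for vm, rc in results:
    print("%s: exit %d" % (vm, rc))
```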
> Also, I'm concerned that setUp is only happening once per test class. I
> know we've designed our tests to allow persistent depots, but this seems
> to be making persistent clients the default as well. For example, if I
> wrote a test called test_zzz, would the clients it dealt with be the
> pre-image updated ones or the post-image-updated ones?
The setUp() function is still in flux.
This test case is more of a regression test that tests image-update
on a set of VMs with different versions. At the end, if successful,
each VM would be at the current version. We could move the setUp()
code into the test case function and have other test cases operate
on image-updated VMs. Or, do that in another test case file. The
test case writer has a lot of latitude.
If you want to perform post image-update tests, it does make sense
to do the testing in sequence and avoid rolling back the snapshots
and starting fresh each time.
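The sequencing idea can be sketched with plain unittest: do the expensive update once, and later tests (which unittest runs in alphabetical order) see the post-update state. All names here are hypothetical, not vtf's actual API:

```python
# One shared image-update pass, then tests run in sequence against it.

import unittest

UPDATED = []   # stand-in for the set of image-updated guests

class ImageUpdateSequence(unittest.TestCase):

    def setUp(self):
        # In vtf this is where snapshots could be rolled back; here we
        # simulate a single image-update shared by all test methods.
        if not UPDATED:
            UPDATED.extend(["osrel0906", "osdev129"])

    def test_aaa_image_update(self):
        # Runs first: verify the update itself.
        self.assertEqual(len(UPDATED), 2)

    def test_zzz_post_update(self):
        # Runs after test_aaa_*, so it operates on updated guests.
        self.assertIn("osdev129", UPDATED)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ImageUpdateSequence)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

So a test_zzz would see the post-image-update clients under this scheme, which answers the question above.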
> 75: More of a nit, but won't the exit code for installing SUNWipkg vary
> depending on whether or not it's been backpublished to the version the
> client's currently running? Doesn't this need to account for that?
Correct. Per-version (or build-specific) pre- and post-image-update
logic needs to be added.
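One simple shape for that logic is a per-build table of expected exit codes; the builds and codes below are illustrative assumptions, not measured values:

```python
# Expected exit code of "pkg install SUNWipkg" per guest build: on
# builds where SUNWipkg was backpublished the install is a no-op and
# pkg exits nonzero; otherwise it installs cleanly and exits 0.

EXPECTED_INSTALL_RC = {
    "osrel0906": 0,   # upgrade available, install succeeds
    "osdev129": 4,    # backpublished: nothing to do (assumed code)
}

def install_ok(build, rc):
    """Check an observed exit code against the per-build expectation."""
    return rc == EXPECTED_INSTALL_RC.get(build, 0)

print(install_ok("osrel0906", 0))
print(install_ok("osdev129", 0))
```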
> vtfutils:
> I'm surprised that tearDown simply passes. There's no cleanup that
> needs to be done after a test? Shouldn't it at least be removing the
> test directory it set up?
The default setUp() and tearDown() functions are still in flux.
> 351-355: Shouldn't we always raise the traceback exception if one's
> encountered, no matter what check is set to? I can see situations where
> we might not want to check error codes, possibly... though that seems
> dubious as well, but I can't think of times where a traceback should be
> ignored.
The check flag allows the calling routine to handle
unexpected exit codes.
Good point. traceback exceptions should always be raised.
I'll move the if down.
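That is, the result handling will end up roughly like this sketch; handle_result() and the exception names are stand-ins for the vtfutils routine, not its actual code:

```python
# Always raise on a Python traceback in the output; only treat a
# nonzero exit status as fatal when check=True.

class TracebackError(Exception):
    pass

class UnexpectedExitError(Exception):
    pass

def handle_result(output, exit_code, check=True):
    # A traceback in the output is always a bug, regardless of check.
    if "Traceback (most recent call last):" in output:
        raise TracebackError(output)
    # Unexpected exit codes are the caller's business when check=False.
    if check and exit_code != 0:
        raise UnexpectedExitError("exit %d" % exit_code)
    return exit_code

print(handle_result("ok\n", 0))                   # 0
print(handle_result("fail\n", 1, check=False))    # 1, caller handles it
```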
Cheers,
Jim
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss