Whenever I start using an open-source tool, I find it useful to see how
other teams are using it. Each project has different needs and will apply
the tool in different ways. By opening the discussion about how people use
it, you will sometimes find surprising uses you never expected; other
times you will find that people keep running into the same problem, or are
unaware of certain features (then you can update your docs).
Here is what we are doing:
1. we are using Selenium in Tomcat, deployed with the app
2. all our tests are .jsps, and we converted TestSuite.html to a JSP, so we
could just read the directory to list out the tests (instead of having to
run that script to add the tests)
3. in TestSuite.jsp we have some filtering logic to show each directory as a
suite, so you can run all the tests in a folder (login, search, user
profile, etc.)
4. all of our tests are "includeable", meaning that any test can include
another test. there is a header.jsp that figures out whether or not to start
writing the <table>. This makes our tests very easy to maintain and
fix, because there is rarely an instance where you have to "go fix a bunch
of tests" because of some page change; it is usually just a fix to one test.
5. we have added parameters to our tests by setting a value in the request.
If present, the JSP will write that row to the table (we are using JSTL to
look for the value and write that row if needed).
6. we added a test refresh button so you can change and reload a single
test, without having to refresh all of the frames
7. we have added a "test - testing" page that allows a user to paste the
data from the "selenium recorder" into a window and run that test. This is
very helpful when creating a test, as you can just tweak and focus on the
part you want (without having to reload or recompile the test). This also
allows our QA team to record tests when they encounter a bug: they record
their actions, retest the script (making adjustments when needed), and
submit it along with a bug report.
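For anyone curious what the directory-as-suite idea (points 2 and 3) looks like in code, here is a minimal sketch. The class and method names are ours, not from our actual TestSuite.jsp; the real page lists files from the servlet context, but the grouping logic is the same: collect the .jsp files and bucket them by their parent folder, so each folder shows up as a runnable suite.

```java
import java.util.*;

// Sketch (names are ours): group test paths into suites by their parent
// folder, mimicking a TestSuite.jsp that lists the directory instead of
// maintaining TestSuite.html by hand.
public class SuiteLister {

    // Map each top-level folder ("login", "search", ...) to its .jsp tests.
    // Paths without a folder go into a "root" suite.
    public static Map<String, List<String>> groupBySuite(List<String> paths) {
        Map<String, List<String>> suites = new TreeMap<>();
        for (String p : paths) {
            if (!p.endsWith(".jsp")) continue;           // only JSP tests
            int slash = p.indexOf('/');
            String suite = slash < 0 ? "root" : p.substring(0, slash);
            suites.computeIfAbsent(suite, k -> new ArrayList<>()).add(p);
        }
        return suites;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList(
            "login/TestLogin.jsp", "login/TestLogout.jsp",
            "search/TestQuery.jsp", "README.txt");
        System.out.println(groupBySuite(files));
    }
}
```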
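Points 4 and 5 can also be sketched in plain Java (our real pages do this with JSP includes and JSTL, so treat the names below as illustrative only): a shared writer opens the Selenium test <table> exactly once, so a test that includes another test does not emit a second header, and a row is only written when its request parameter is present.

```java
import java.util.Map;

// Sketch of "includeable" tests plus request parameters. The header()
// method plays the role of our header.jsp: it opens the <table> only on
// the first call, so included tests just append rows.
public class TestWriter {
    private final StringBuilder out = new StringBuilder();
    private boolean tableOpen = false;

    // Open the <table> only once per page, then write a title row.
    public void header(String title) {
        if (!tableOpen) {
            out.append("<table>\n");
            tableOpen = true;
        }
        out.append("<tr><td colspan=\"3\">").append(title).append("</td></tr>\n");
    }

    // JSTL-style conditional row: write it only if the parameter is present.
    public void rowIfParam(Map<String, String> request, String param,
                           String command, String target) {
        String value = request.get(param);
        if (value != null) {
            out.append("<tr><td>").append(command).append("</td><td>")
               .append(target).append("</td><td>").append(value)
               .append("</td></tr>\n");
        }
    }

    public String result() {
        return out + (tableOpen ? "</table>\n" : "");
    }

    public static void main(String[] args) {
        TestWriter w = new TestWriter();
        w.header("outer test");
        w.header("included test");   // no second <table> is written
        w.rowIfParam(Map.of("user", "bob"), "user", "type", "username");
        System.out.println(w.result());
    }
}
```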
Issues we have:
- our Selenium tests take a long time to run
- they are dependent on downstream systems (we do not yet mock out the
backend, so if the DB is down...)
- we would like to add some conditional logic to get around the dependencies
on downstream systems (if error, then try something else) - though if we
mocked it all out, this wouldn't be an issue
- they are not yet part of our automated build (because they are slow) - so
developers only know they broke something if they *happen* to run the tests
- our QA team keeps comparing Selenium to WinRunner, and pointing out the gaps
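The "if error, then try something else" idea we are considering could look roughly like the helper below. This is only a sketch of the shape of it (we have not built this, and the names are made up): run the primary action, and if it throws because a downstream system is unavailable, fall back to an alternate action instead of failing the whole run.

```java
import java.util.function.Supplier;

// Sketch of conditional fallback for downstream dependencies: try the
// primary action, and on failure run the alternate instead. Mocking the
// backend would make this unnecessary.
public class Fallback {

    // Returns primary.get(), or alternate.get() if the primary throws.
    public static <T> T tryOrElse(Supplier<T> primary, Supplier<T> alternate) {
        try {
            return primary.get();
        } catch (RuntimeException e) {   // e.g. the DB is down
            return alternate.get();
        }
    }

    public static void main(String[] args) {
        String result = tryOrElse(
            () -> { throw new RuntimeException("DB down"); },
            () -> "used canned data");
        System.out.println(result);      // prints "used canned data"
    }
}
```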
We think Selenium is great, and we thank L. Maxon for introducing it to us.
We want to add more, but we also want to learn more about how other people
are solving some of these same issues, so we don't duplicate efforts.
So, how are you using Selenium?
_______________________________________________
Selenium-users mailing list
Selenium-users@lists.public.thoughtworks.org
http://lists.public.thoughtworks.org/mailman/listinfo/selenium-users