Not sure what framework you're using or whatever, but you can do this
pretty easy with a loop. Below is the basic concept.
require 'watir-webdriver'

[:firefox, :chrome, :safari].each do |br|
  b = Watir::Browser.new br
  b.goto 'http://google.com'
  puts b.title
  b.close
end
I'm not sure that this is the correct forum for this, but I'm sure that
someone here must have run into a similar situation. We pretty heavily use
QC/ALM here along with a mixture of QTP and watir-webdriver. Currently
we're using a custom framework, but I'd like to start moving to something
If you are going to migrate over to cucumber from QTP, I have done that.
The way we did it was to locate the important behaviors that were being
tested in QTP, rewrite them in cucumber/watir, and then delete them from QTP.
On Thu, Sep 19, 2013 at 11:50 AM, Dan dfra...@gmail.com wrote:
I'm not
I should clarify. QC won't be going away. It's going to remain the
repository for manual test cases from which we'll create automation. We
use watir-webdriver as opposed to QTP in the cases where our tests are
browser based. My concern is that if we have the test cases in QC and then
we
So we're standing up ALM now actually, which has the rest API, but I'm not
sure I want to go down that road. One thing that's really drawing me to
cucumber is that along with reporting the results of the test you get a
good description of what it's doing. It is possible to pull the
At my previous company we were using TestLink and Test::Unit. I modified
Test::Unit to update tests as they were executed (individual classes or methods
just needed to include the test id as part of the class or method name). The
test would make a call to TestLink as each test result was
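The id-in-the-name convention described above can be sketched roughly like this. The naming scheme (`tl123` embedded in the method name), the regex, and the `post_result` stub are my assumptions for illustration, not the actual integration; a real setup would call TestLink's XML-RPC API.

```ruby
# Sketch of the TestLink hookup described above: embed the TestLink test id
# in the test method name, extract it after each run, and report the result.
# The naming convention and the reporting stub are assumptions.

def extract_testlink_id(test_name)
  # e.g. "test_tl123_user_can_log_in" -> "123"
  m = test_name.match(/tl(\d+)/i)
  m && m[1]
end

def post_result(test_id, status)
  # Placeholder: a real implementation would call TestLink's
  # reportTCResult method over its XML-RPC API.
  puts "TestLink case #{test_id}: #{status}"
end

# After each test method runs, report by parsed id.
id = extract_testlink_id('test_tl123_user_can_log_in')
post_result(id, :passed) if id
```

The nice part of this approach is that the mapping lives in the test name itself, so there is no separate lookup table to keep in sync.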
Hi Dan
Thanks for your reply. This does work if the script runs without any
failure, but if any failure happens, the script stops and does not execute
on the other browsers.
I wanted something where even if the script fails on one browser, it still
runs on the others to complete the test.
Your help is
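A minimal sketch of what you're after: wrap each browser run in begin/rescue so a failure on one browser is recorded but doesn't stop the rest. The runner is injectable here so the pattern is shown without needing real browsers; with watir-webdriver you'd pass a lambda that opens the browser, does the steps, and closes it.

```ruby
# Run the same steps across several browsers, collecting pass/fail per
# browser instead of aborting on the first error. `runner` is a callable
# so you can plug in your actual watir-webdriver steps, e.g.:
#
#   watir_runner = lambda do |br|
#     b = Watir::Browser.new br
#     b.goto 'http://google.com'
#     puts b.title
#     b.close
#   end
def run_across_browsers(browsers, runner)
  results = {}
  browsers.each do |br|
    begin
      runner.call(br)
      results[br] = :passed
    rescue => e
      puts "#{br} failed: #{e.message}"
      results[br] = :failed
    end
  end
  results
end
```

Then `run_across_browsers([:firefox, :chrome, :safari], watir_runner)` keeps going past any failure and hands back a per-browser hash you can report on at the end.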
My question is... if you take a manual test case and automate it, do you
still need to run it manually?
If so, then why automate it? :)
And if it is automated, why does it need to be with the manual test cases?
How will you know what is automated or not? Probably a flag somewhere; it's
always a