Hi Dan,
See comments inline.
Dan Steinicke wrote:
Hi Brian,
I personally went through the commit logs for the changes to the tests
in the old framework and updated the
new tests with these same changes. Your i18n changes were the bulk of
this (including adding the uw() method)
but there were also a number of changes from other devs. The effort
now required to maintain two sets of tests is the main reason I am now
pushing to move forward quickly with the new tests. Once
the new tests are running on tinderbox without problems the plan is to
stop maintaining the old tests.
+1 makes sense to migrate as soon as possible.
The new framework is mainly about how the tests are run and logged,
with little change to the function of the tests themselves.
I don't believe we have made any special efforts to adapt the CATS-0.2
framework for i18n testing specifically, other than the
new framework being more flexible and easier to develop tests for. My
first thought on this is that we should
continue to push ahead to get the new framework working on tinderbox
and once that is accomplished, work on the
modifications you mention to test for i18n issues.
Sounds good!
I would welcome your input on the new framework and its suitability for
testing i18n issues and expect to work closely with you to develop
this functionality in the future. (links to docs in original msg)
Yes, we will need to work together to create a good i18n testing framework.
If you wish to continue this discussion, can we start a new thread?
This is an important issue, but one that is pretty far afield from
my original request seeking comment on the new logger output.
Yes, let's start a new thread called: "Internationalization Requirements
for CATS testing Framework".
At this stage it makes sense to complete your CATS migration work. I
will review the new framework to make
sure that the i18n changes from CATS-0.1 all got ported to CATS-0.2.
At that point we can start talking about additional i18n requirements
for CATS-0.3.
Thanks,
Brian
Dan
Brian Kirsch wrote:
Hi Dan,
It is very important that CATS-0.2 be designed with
internationalization support in mind.
The most common i18n-related issues and bugs are subtle,
and a good testing framework is essential for catching them.
Could you detail what work and thought has gone into CATS-0.2
regarding i18n, and what is left to be done?
Specifically, in CATS-0.1 I went through and wrapped all displayable
strings in the uw() method from the i18n.tests package. The uw()
method inserts a semi-random Unicode character at the beginning of the
displayable string and another Unicode character at the end.
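Roughly speaking, it does something like this (a simplified sketch, not
the actual i18n.tests code; the characters chosen here are arbitrary):

    # Simplified sketch of a uw()-style wrapper; the real i18n.tests
    # implementation differs in its details.
    import random

    _WRAP_CHARS = [u'\u00fc', u'\u0416', u'\u4e2d', u'\u00e9']

    def uw(text):
        """Wrap a displayable string with semi-random Unicode characters."""
        return random.choice(_WRAP_CHARS) + unicode(text) + random.choice(_WRAP_CHARS)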
Adding this uw() wrap was key to finding many subtle bugs, including
incorrectly logging Unicode characters and not encoding Unicode to
bytes when using the Python file system APIs.
It even uncovered issues with the CATS-0.1 framework itself, including
a few places where items were converted to strings, i.e. str(item),
which is a no-no!
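To illustrate the str() problem (Python 2 with the default ASCII codec;
interactive session abbreviated):

    >>> title = u'\u00fcUntitled\u4e2d'   # e.g. a uw()-wrapped title
    >>> str(title)                        # implicit ASCII encode
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' ...
    >>> title.encode('utf-8')             # encode explicitly instead
    '\xc3\xbcUntitled\xe4\xb8\xad'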
Using Unicode strings is only a small aspect of an i18n testing
framework.
In CATS I would like the framework to provide support for running in
different locales and, of course, timezones.
For each locale the expected output for many operations will change.
For example, a date/time string will be different, the first day of
the week in a calendar can be different, and the UI menu labels and
auto-generated collection names will be different.
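One way I can imagine the framework handling this is to key expected
values by locale rather than hard-coding them. A sketch only; the
locale names and expected values below are just examples:

    # Illustrative only: expected values keyed by locale instead of hard-coded.
    EXPECTED_FIRST_DAY = {
        'en_US': 'Sunday',
        'fr_FR': 'Monday',
    }

    def checkFirstDayOfWeek(currentLocale, actual):
        expected = EXPECTED_FIRST_DAY[currentLocale]
        assert actual == expected, \
            'first day of week: got %r, expected %r in locale %s' % \
            (actual, expected, currentLocale)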
I am happy to sync up with you guys but I do want to make sure at
this stage we are writing frameworks that
meet the needs of Chandler.
Almost all of the Internationalization work will be completed in the
Alpha 4 time frame.
The testing framework is the key to preventing regression bugs in
i18n as we drive towards Chandler 1.0.
Thanks,
Brian
Dan Steinicke wrote:
We are getting close to being able to switch over to the new test
framework (CATS-0.2). As it is currently implemented the output
from the tests will be quite different from the current test output.
Please take a moment to look over the test output samples below and
let us know what you think, so we can address people's concerns before
the new framework goes live, rather than after.
Two sample outputs are shown below, one where a test fails without a
traceback, another of a failure with a traceback.
For more info on the new framework, see:
general docs
http://wiki.osafoundation.org/bin/view/Projects/ChandlerAutomatedTestSystemZeroPointTwo
writing new tests
http://wiki.osafoundation.org/bin/view/Projects/WritingChandlerAutomatedTestsWithCATSZeroPointTwo
Thanks
Dan
############# Here is sample output from do_tests where one test fails without a traceback:
Test Report;
*Suite ""ChandlerTestSuite"" Failed :: Total Time ""0:04:55.434000""
:: Comment ""None""
**Test ""TestSwitchTimezone"" Failed :: Total Time
""0:00:00.931000"" :: Comment ""None
None""
***Action ""CheckBlockVisibility"" Failed :: Total Time ""0:00:00""
:: Comment ""(On EditTimeZone Visibility) || detail view = Fa
lse ; expected value = True""
****Report ""(On EditTimeZone Visibility) || detail view = False ;
expected value = True"" Failed :: Comment ""None""
***Action ""CheckEditableBlock"" Failed :: Total Time ""0:00:00"" ::
Comment ""(On EditTimeZone Checking) || detail view value =
Floating ; expected value = US/Pacific""
****Report ""(On EditTimeZone Checking) || detail view value =
Floating ; expected value = US/Pacific"" Failed :: Comment ""None"
"
***Action ""CheckBlockVisibility"" Failed :: Total Time ""0:00:00""
:: Comment ""(On EditTimeZone Visibility) || detail view = Fa
lse ; expected value = True""
****Report ""(On EditTimeZone Visibility) || detail view = False ;
expected value = True"" Failed :: Comment ""None""
***Action ""CheckEditableBlock"" Failed :: Total Time ""0:00:00"" ::
Comment ""(On EditTimeZone Checking) || detail view value =
Floating ; expected value = US/Pacific""
****Report ""(On EditTimeZone Checking) || detail view value =
Floating ; expected value = US/Pacific"" Failed :: Comment ""None"
"
$Suites run=1, pass=0, fail=1 :: Tests run=26, pass=25, fail=1 ::
Actions run=367, pass=363, fail=4 :: Reports run=556, pass=552,
fail=4
#TINDERBOX# Testname = ChandlerTestSuite
#TINDERBOX# Time elapsed = 0:04:55.434000 (seconds)
#TINDERBOX# Status = FAILED
- + - + - + - + - + - + - + - + - + - + - + - + - + - + - +
The following tests failed
(debug)C:\cygwin\home\Dan\chandler\tools\cats\Functional\FunctionalTestSuite.py
(release)C:\cygwin\home\Dan\chandler\tools\cats\Functional\FunctionalTestSuite.py
########## Here is sample output of a test failing with a traceback:
Test Report;
*Suite ""ChandlerTestSuite"" Failed :: Total Time ""0:00:06.139000""
:: Comment ""None""
**Test ""TestCauseTrace"" Failed :: Total Time ""0:00:00.040000"" ::
Comment ""None
Test Failure due to traceback
Traceback (most recent call last):
File "C:\cygwin\home\Dan\chandler\tools\cats\framework\runTests.py",
line 51, in run_tests
test.runTest()
File
"C:\cygwin\home\Dan\chandler\tools\cats\framework\ChandlerTestCase.py",
line 68, in runTest
self.startTest()
File
"C:\cygwin\home\Dan\chandler\tools\cats\Functional\TestCauseTrace.py",
line 24, in startTest
1/0
ZeroDivisionError: integer division or modulo by zero
""
***Action ""Divide by zero "" Failed :: Total Time
""0:00:00.040000"" :: Comment ""None""
****Report ""Action Failure due to traceback"" Failed :: Comment
""None""
$Suites run=1, pass=0, fail=1 :: Tests run=3, pass=2, fail=1 ::
Actions run=5, pass=4, fail=1 :: Reports run=28, pass=27, fail=1
#TINDERBOX# Testname = ChandlerTestSuite
#TINDERBOX# Time elapsed = 0:00:06.139000 (seconds)
#TINDERBOX# Status = FAILED
--
Brian Kirsch
Internationalization Architect/ Mail Service Engineer
Open Source Applications Foundation
543 Howard Street 5th Floor
San Francisco, CA 94105
http://www.osafoundation.org
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev