How to automatically localize qatesttool environment
I am picking up an idea which was posted some time ago by Shaun
McDonald: use the original localization files from the office to
localize the testtool environment.
This would ease localizing the VCLTestTool and make the script code more
readable.
But first let's look at the different things that have to be done when
localizing the VCLTestTool environment.
When adapting the scripts for new languages there are several types of
adaptations to be done. They fall into two groups with different
functions in the test:
1. Strings and numbers which are only needed to access a certain UI
element. E.g.: when formatting text, the index for selecting italics may
vary between languages.
2. Strings and numbers which check whether a UI element has exactly this
caption or sits at exactly this position in a list. E.g.: filter names
must stay the same unless they are changed by a new specification.
For the second group there must not be an automatic translation process
that simply copies the strings from the OOo resources: the strings would
then always be identical in OOo and in the test script, so a change would
not be noticed. These strings are therefore excluded from this process.
The first group can be translated automatically, at least the strings.
Some of the numbers could be eliminated by searching the list for the
desired string instead of picking an entry by a fixed index; after such
a change to the scripts they too could be handled automatically.
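As a rough illustration of searching a list by its (localized) text
instead of a fixed index, here is a minimal testtool Basic sketch. The
control name "FontStyle" and the listbox methods GetItemCount,
GetItemText and Select are only assumptions for the example, not taken
from an existing script:
-----------------------------------
' select the list entry whose text matches sWanted instead of using a
' language dependent index; returns TRUE on success
Function SelectEntryByText( sWanted as String ) as Boolean
    Dim i as Integer
    SelectEntryByText = FALSE
    for i = 1 to FontStyle.GetItemCount
        if FontStyle.GetItemText( i ) = sWanted then
            FontStyle.Select i
            SelectEntryByText = TRUE
            Exit Function
        end if
    next i
    QAErrorlog "Entry '" & sWanted & "' not found in the list"
End Function
-----------------------------------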
The way to do this would be to have only an identifier of the string in
the script and then fetch the corresponding string at runtime from a
file which contains the real UI string in the desired language.
For example, code like
-----------------------------------
' pick the localized name of the SUM function for the current office language (iSprache)
select case iSprache
case 01 : firstChar$ = "=SUM"
case 03 : firstChar$ = "=SOMA"
case 07 : firstChar$ = "=SUM"
case 31 : firstChar$ = "=SOM"
case 33 : firstChar$ = "=SOMME"
case 34 : firstChar$ = "=SUMA"
case 39 : firstChar$ = "=SOMMA"
case 45 : firstChar$ = "=SUM"
case 46 : firstChar$ = "=SUMMA"
case 48 : firstChar$ = "=SUMA"
case 49 : firstChar$ = "=SUMME"
case 55 : firstChar$ = "=SOMA"
case 82 : firstChar$ = "=SUM"
case 84 : firstChar$ = "=SUM"
case 86 : firstChar$ = "=SUM"
case 88 : firstChar$ = "=SUM"
case else : QAErrorlog "The language adjustment is still missing"
goto endsub
end select
-----------------------------------
would then look like
-----------------------------------
firstChar$ = GetString( "213059", iSprache )
-----------------------------------
The problems would be:
a) how to generate a unique identifier for each string which *never*
changes (well, at least almost never)
b) how to keep the scripts readable
For b) there is an easy solution: just append the English string to the
number, as in the example below.
-----------------------------------
firstChar$ = GetL10NString( "213059.SUM", iSprache )
-----------------------------------
The function GetL10NString would search in a file for the ID and return
the localized string.
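A minimal sketch of how GetL10NString could work, assuming one plain-text
file per language with tab-separated lines of the form "ID<TAB>localized
string"; the file name scheme "l10n_<iSprache>.txt" is only an assumption
for the example:
-----------------------------------
Function GetL10NString( sID as String, iSprache as Integer ) as String
    Dim sFile as String, sLine as String, sKey as String
    Dim iFile as Integer
    Dim aFields() as String

    ' strip the appended English part used for readability ("213059.SUM" -> "213059")
    if InStr( sID, "." ) > 0 then
        sKey = Left( sID, InStr( sID, "." ) - 1 )
    else
        sKey = sID
    end if

    sFile = "l10n_" & CStr( iSprache ) & ".txt"   ' assumed file name scheme
    iFile = FreeFile
    Open sFile For Input As #iFile
    while not EOF( iFile )
        Line Input #iFile, sLine
        aFields = Split( sLine, Chr(9) )
        if UBound( aFields ) >= 1 then
            if aFields(0) = sKey then
                GetL10NString = aFields(1)
                Close #iFile
                Exit Function
            end if
        end if
    wend
    Close #iFile
    QAErrorlog "No translation found for ID " & sID & " and language " & CStr( iSprache )
    GetL10NString = ""
End Function
-----------------------------------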
For a) there are several solutions.
I) There are so-called KeyID builds provided by Sun, but of course only
for some languages. In these builds each string is preceded by its ID
from the database, which is unique and does not change. The problem is
that these IDs are not available to the community (yet). There are KeyID
builds by the community, namely Pavel's, but their numbers change from
build to build and are thus not suitable in this case.
It seems, however, that there are some free fields in the sdf files which
could be used to store these Sun KeyIDs. This would be done in the
normal l10n CWSes.
The problem would be that the ID would be available for new strings only
after they have been translated, but OTOH a localized Office is not
available before that either.
II) There could be some kind of checksum mechanism that generates a
checksum out of some of the information in the sdf files (which is where
the translations live inside the OOo source code).
These checksums would, however, be significantly longer than the unique
IDs mentioned in I).
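To illustrate the idea, here is a rough sketch of how such a checksum ID
could be computed from sdf information. The choice of input fields (GID
and LID) and the simple rolling hash are assumptions for the example
only; a real solution would need fields and a hash that make collisions
sufficiently unlikely:
-----------------------------------
' build a checksum-style ID from the (assumed) GID and LID fields of an sdf line
Function MakeChecksumID( sGID as String, sLID as String ) as String
    Dim sSource as String
    Dim lSum as Long
    Dim i as Integer

    sSource = sGID & "." & sLID
    lSum = 0
    for i = 1 to Len( sSource )
        ' simple rolling sum kept below 2^24; only meant to show the principle
        lSum = ( lSum * 31 + Asc( Mid( sSource, i, 1 ) ) ) mod 16777213
    next i
    MakeChecksumID = CStr( lSum )
End Function
-----------------------------------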
The file(s) containing the translations could be generated by a script
which has two input sources: first, a file containing the needed IDs;
second, an sdf file containing all translations of the office. It would
then write, for each language found in the sdf file, a file containing
lines with the fields ID and translation.
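A sketch of what such a generator could look like in Basic, handling one
language per call to keep it short. The sdf field positions (ID, language
and text columns) and the output file name are placeholders; the real
column numbers would have to be taken from the sdf specification:
-----------------------------------
Sub GenerateTranslationFile( sIDFile as String, sSdfFile as String, sLang as String )
    Const C_ID_FIELD = 7     ' assumed: free sdf field carrying the KeyID
    Const C_LANG_FIELD = 9   ' assumed: field with the language code
    Const C_TEXT_FIELD = 10  ' assumed: field with the translated text
    Dim iIn as Integer, iSdf as Integer, iOut as Integer
    Dim sLine as String
    Dim aIDs( 10000 ) as String
    Dim nIDs as Integer
    Dim aFields() as String
    Dim i as Integer

    ' read the list of IDs needed by the test scripts
    nIDs = 0
    iIn = FreeFile
    Open sIDFile For Input As #iIn
    while not EOF( iIn )
        Line Input #iIn, sLine
        if sLine <> "" then
            aIDs( nIDs ) = sLine
            nIDs = nIDs + 1
        end if
    wend
    Close #iIn

    ' scan the sdf file and write "ID<TAB>translation" lines for the wanted language
    iSdf = FreeFile
    Open sSdfFile For Input As #iSdf
    iOut = FreeFile
    Open "l10n_" & sLang & ".txt" For Output As #iOut   ' assumed output name scheme
    while not EOF( iSdf )
        Line Input #iSdf, sLine
        aFields = Split( sLine, Chr(9) )
        if UBound( aFields ) >= C_TEXT_FIELD then
            if aFields( C_LANG_FIELD ) = sLang then
                for i = 0 to nIDs - 1
                    if aFields( C_ID_FIELD ) = aIDs( i ) then
                        Print #iOut, aIDs( i ) & Chr(9) & aFields( C_TEXT_FIELD )
                    end if
                next i
            end if
        end if
    wend
    Close #iOut
    Close #iSdf
End Sub
-----------------------------------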
What do you think about this? Is it a good way to go? There is some work
to do, but in the end localizing a new language would be done in no time.
Sincerely
Gregor
f'up to openoffice.qa.dev