Hi there.

First of all, if you have questions about i18n or want to blame someone
for its bad behaviour, just point to me. I'm the guy who designed it,
and the one who also decided on the 3-letter-code thing.

As I can see, this 3-letter-code decision needs a lot of explanation.
So I'll write a document about it and link it from the
3-letter-code page on the wiki. I'm getting tired of explaining it.

Okay, let's start with your remarks.

-------- CUT --------
> Hi,
> 
> I'm glad to see that Elisa is getting i18n support again. Have you guys 
> considered using Babel[1] instead of "just" the stock gettext module? I 
> might be a bit biased but I think it provides a few interesting 
> advantages:
> 
>  * Not only for message translation. Babel gives access to most of the 
> CLDR[2] locale data database. This include things like 
> date/number/currency formatting, locale specific names for currencies, 
> countries, languages etc.
During my research for the new system, I looked at Babel too. But most
of the advantages it has are not used in Elisa - not now, and maybe
never. For the number and currency stuff, we also have Python's
built-in locale module [1]. I'm not sure we really need it anyway, but
I keep it in mind ;).
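Just to illustrate what I mean by "Python's built-in stuff" - a minimal
sketch of locale-aware number formatting with nothing but the stdlib
locale module mentioned in [1] below:

```python
import locale

# "C" is always available; with a real locale installed
# (e.g. "de_DE.UTF-8") the same call would yield "1234,50".
locale.setlocale(locale.LC_ALL, "C")
print(locale.format_string("%.2f", 1234.5))  # -> 1234.50
# grouping=True additionally inserts the locale's thousands separators.
```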

> 
>  * A pure python implementation. Does not require GNU gettext for 
> message
>    extraction or catalog compilation. Which is usually not available on
>    non-linux platforms.
We are not using GNU gettext. We are using Python's GNU-gettext-compatible
implementation [2]. We only use Python code, which means it exists on
every system with Python. As for extraction, we ship the pot files with
a release (normally the user does not have to do it on their own).
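For the record, this is roughly how that looks - a minimal sketch using
only the stdlib gettext module from [2] below (domain name and catalog
directory here are hypothetical):

```python
import gettext

trans = gettext.translation(
    "elisa",            # hypothetical message domain
    localedir="po",     # hypothetical directory with compiled .mo files
    languages=["deu"],  # a 3-letter ISO 639 code works here too
    fallback=True,      # fall back to the untranslated strings
)
_ = trans.gettext
print(_("Hello"))       # prints "Hello" here, since no catalog is installed
```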

>  * gettext compatible api.
well, we already have that - we use the gettext API directly.

> 
>  * Supports message extraction from python, genshi and glade by default
>    but more formats can be added easily using plugins.
No Glade, no Genshi needed; we only have Python.

You see, there is no need for such a system here - especially if it
requires the user to install additional libraries or modules (at least
on Feisty). We have the Python implementation, which is (at this point)
good enough for us.


> [1]: http://babel.edgewall.org/
> [2]: http://unicode.org/cldr/

my 2 cents ;) :
[1] http://docs.python.org/lib/module-locale.html
[2] http://docs.python.org/lib/module-gettext.html


> Btw, why are you using a 3-letter language code instead of the more 
> common
> language_TERRITORY? Will this not make it impossible to have different 
> translations for for example UK and American English (en_UK, en_US)?
I always get this example, because it is nearly the only one that fits.
What about Prussian? Your argument does not explain why this 3-letter
code exists in the first place. If you could cover all languages with
the 2-letter scheme, why would there be an ISO 639-3 at all?

It is very simple: because you cannot cover all languages with that
code. In Germany alone, for example, I know of at least six different
languages without looking anything up. Each one is quite distinct and
can be considered a language of its own. And that is exactly what
ISO 639-3 is about: covering _all_ languages.

We always think of 'my grandma' as the ultimate user of our system. And
for my grandma it would be a killer feature if the multimedia system
spoke to her in Prussian, a language she spoke as a child before the
Second World War. Even if no Prussian translation exists yet, we at
least wanted a system that is able to support it.
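And that costs us nothing, because in gettext the language code is just
a directory name in the catalog layout. A small sketch (domain and
directory are hypothetical; "prg" is the ISO 639-3 code listed for
Prussian):

```python
import gettext

# Looks for po/prg/LC_MESSAGES/elisa.mo - the moment somebody drops a
# compiled Prussian catalog there, it is picked up like any 2-letter one.
path = gettext.find("elisa", localedir="po", languages=["prg"])
print(path)  # None here, since no such catalog is installed yet
```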

In my opinion, we should generally use the 3-letter codes more. And by
the way, when looking at Babel, I wasn't sure whether that system even
offers the possibility of using 3-letter codes.

I hope it is now easier to understand.


After the release I'll write a nice document about it.

> 
> Cheers,
> Jonas

Cheers to you.
benjamin
