2015-05-20 7:11 GMT+02:00 Daiki Ueno <u...@gnu.org>:

> Michele Locati <mich...@locati.it> writes:
>
> > IMHO it can be used to statically generate the rules for all the
> > languages (simply call "bin/export.sh gettext"), so that they can be
> > included statically in gettext (making the "urlget" approach useless).
>
> That's true if we can assume that gettext always includes the latest or
> fixed information.  However, CLDR changes over time, and a user could
> stick with an older gettext version which ships with a plural-table.c
> generated from an older CLDR release.
>

You're right. So the question is: which remote data should be fetched?
We could process the CLDR data directly, but that has these problems:

- we would need to integrate something like my cldr-to-gettext-plural-rules
tool: that's not a big problem - just a rewrite from PHP to C (see the
sketch after this list).

- we would have to assume that the CLDR repository structure never
changes: that's a problem (for instance, the CLDR team moved the JSON data
from http://unicode.org/Public/cldr/ to GitHub).

Another approach would be to have a place/server directly under our
control where we store the data in a ready-to-use format. When a new
CLDR version is released we'd need to update that repository, but that
would take just a few seconds ;).


Ciao!
--
Michele

PS: Daiki, I re-sent you this message because I forgot to cc the
mailing list :(
