Today's developer telco spawned a discussion about how additional translations
should be added to the existing translation hash (the code for this is
contained in each language module).

Two solutions have been discussed:

1.
sub Data {
    my $Self = shift;

    $Self->{Translation} = { %{$Self->{Translation}},
        'green'  => 'grün',
        'yellow' => 'gelb',
        'Being blue is not a lot of fun'
            => 'Blau sein macht im Deutschen schon mehr Spaß',
    };

    return 1;
}

2.
sub Data {
    my $Self = shift;

    $Self->{Translation}->{green}  = 'grün';
    $Self->{Translation}->{yellow} = 'gelb';
    $Self->{Translation}->{'Being blue is not a lot of fun'}
        = 'Blau sein macht im Deutschen schon mehr Spaß';

    return 1;
}

Solution 1 is (slightly) easier to read, but it is slower and requires more
memory, since it copies the hash with the existing translations into a new
hash. This copying does not scale well if there are a lot of translation
modules being loaded (each one copies the hash, which is getting bigger and
bigger).
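
To put a number on this, here is a tiny standalone script (not OTRS code, all
names made up) that simulates N modules each rebuilding the hash the way
solution 1 does, and counts how many entries get copied in total:

#!/usr/bin/perl
use strict;
use warnings;

my $Modules       = 50;
my $KeysPerModule = 200;

my %Translation;
my $Copied = 0;

for my $Module ( 1 .. $Modules ) {

    # the entries this "module" wants to add
    my %New;
    for my $Key ( 1 .. $KeysPerModule ) {
        $New{"key_${Module}_$Key"} = 'value';
    }

    # the expensive part: the whole existing hash is copied every time
    %Translation = ( %Translation, %New );

    $Copied += keys %Translation;
}

# 50 modules x 200 keys end up as 10000 stored entries,
# but 255000 entries are copied along the way
print "entries copied: $Copied\n";
print "entries stored: ", scalar keys %Translation, "\n";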

In the telco, I had suggested a third way that is as easy to read and (as I
thought at the time ;-) does not suffer from bad performance:

3.
sub Data {
    my $Self = shift;

    $Self->AddTranslations( {
        'green'  => 'grün',
        'yellow' => 'gelb',
        'Being blue is not a lot of fun'
            => 'Blau sein macht im Deutschen schon mehr Spaß',
    } );

    return 1;
}

Where AddTranslations would be implemented in Kernel::Language:

sub AddTranslations {
    my ( $Self, $Translations ) = @_;

    return if ref $Translations ne 'HASH';

    # merge the new translations into the existing hash
    for my $TrKey ( keys %{$Translations} ) {
        $Self->{Translation}->{$TrKey} = $Translations->{$TrKey};
    }

    return;
}
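
As a small aside (just a sketch, not something we have to do): the loop could
also be written as a single hash slice assignment, which does the same merge
without the explicit loop:

sub AddTranslations {
    my ( $Self, $Translations ) = @_;

    return if ref $Translations ne 'HASH';

    # same merge as above, in one hash slice assignment
    @{ $Self->{Translation} }{ keys %{$Translations} } = values %{$Translations};

    return;
}

keys and values return their elements in matching order as long as the hash is
not modified in between, so the slice pairs each key with its value.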

Benchmarking has shown that solution 1 is in fact unusable when many modules
are loaded, as the repeated copying of the hash becomes very slow. Solutions 2
and 3 scale OK (i.e. O(n)), but solution 2 is still twice as fast as solution
3 (as the creation of the temporary hash that is passed into the method costs
time).
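
If anyone wants to reproduce this on their own machine, a standalone sketch
along the following lines should show the same effect (it uses the core
Benchmark module; the number of simulated modules and keys per module are
made-up values):

#!/usr/bin/perl
use strict;
use warnings;

use Benchmark qw(cmpthese);

# one distinct set of keys per simulated language module
my @ModuleData;
for my $Module ( 1 .. 50 ) {
    my %Data;
    for my $Key ( 1 .. 200 ) {
        $Data{"key_${Module}_$Key"} = 'x';
    }
    push @ModuleData, \%Data;
}

sub AddTranslations {
    my ( $Self, $Translations ) = @_;

    return if ref $Translations ne 'HASH';

    for my $TrKey ( keys %{$Translations} ) {
        $Self->{Translation}->{$TrKey} = $Translations->{$TrKey};
    }

    return;
}

cmpthese(
    100,
    {
        # solution 1: every module rebuilds the hash with a full copy
        Copy => sub {
            my $Self = { Translation => {} };
            for my $Data (@ModuleData) {
                $Self->{Translation} = { %{ $Self->{Translation} }, %{$Data} };
            }
        },

        # solution 2: every module assigns its keys directly
        Direct => sub {
            my $Self = { Translation => {} };
            for my $Data (@ModuleData) {
                $Self->{Translation}->{$_} = $Data->{$_} for keys %{$Data};
            }
        },

        # solution 3: every module passes an anonymous hash to a method
        Method => sub {
            my $Self = { Translation => {} };
            for my $Data (@ModuleData) {
                AddTranslations( $Self, { %{$Data} } );
            }
        },
    },
);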

Please have a look at the three solutions and tell me what you think. I
personally would favour solution 2, as it is the fastest and IMHO only a tiny
bit harder to read than the other two.

Now that I have looked at the code in Kernel::Language, I wonder whether we
should try to find a way to limit the loading of language modules to the ones
that are actually needed by the current HTTP request.
Please correct me if I'm wrong (and I hope I am), but as far as I can see, we
currently load *all* available Language modules in the constructor of
Kernel::Language.
In an OTRS setup that has many modules installed, this would cause quite a
performance hit, wouldn't it?
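
Just to illustrate the direction I mean (method and hash names below are
purely made up, this is not the current Kernel::Language code): the
constructor could remember which languages have been loaded and require a
language module only on first use, something like:

sub _LoadLanguageModule {
    my ( $Self, $Language ) = @_;

    # e.g. 'de' -> Kernel::Language::de
    my $Module = 'Kernel::Language::' . $Language;

    # load each language module at most once, and only when it is needed
    if ( !$Self->{LoadedModule}->{$Module} ) {
        ( my $File = "$Module.pm" ) =~ s{::}{/}g;
        require $File;
        $Self->{LoadedModule}->{$Module} = 1;
    }

    return 1;
}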

cheers,
    Oliver

