May I ask why you prefer signing byte-code files, thereby limiting the
exploitation of a security hole in the interpreter to one party, over
solving the actual problem, i.e. hardening the interpreter (lt-proc) so
that nothing evil can happen?

If I correctly understand what lt-proc does (processing some sort of
state-transition table), the worst thing that could happen in a
range-checking implementation would be an infinite loop. That, however,
could be detected by a time-out, by excessive output size relative to
the input size, etc.
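To make the point concrete, here is a minimal sketch of such a hardened stepping loop. It is not lt-proc's actual table format; the table layout, function names, and the epsilon marker are all made up for illustration. Every state index read from the table is range-checked, and a step budget catches epsilon cycles that a corrupted table could introduce:

```python
EPS = None  # illustrative marker for an epsilon (input-free) transition

def run_fst(transitions, n_states, start, finals, syms, max_steps=10_000):
    """Run a toy finite-state transducer defensively.

    transitions: dict mapping (state, symbol) -> next state.
    A corrupted table yields a clean error instead of undefined behaviour.
    """
    def goto(s):
        # Range check: reject any state index outside the declared table.
        if not (0 <= s < n_states):
            raise ValueError(f"corrupt table: state {s} out of range")
        return s

    state, steps = goto(start), 0
    for sym in syms:
        # Chase epsilon transitions under a step budget: an epsilon
        # cycle in a corrupted table would otherwise loop forever.
        while (state, EPS) in transitions:
            state = goto(transitions[state, EPS])
            steps += 1
            if steps > max_steps:
                raise RuntimeError("step budget exceeded: likely epsilon loop")
        nxt = transitions.get((state, sym))
        if nxt is None:
            return False  # no transition for this symbol: reject input
        state = goto(nxt)
        steps += 1
    return state in finals
```

With checks like these, the worst a malicious byte-code file can do is make the run fail fast, which is exactly the argument for hardening over signing.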

I personally think that quirky workarounds should only be used when
actually fixing the bug is ruled out.

Regards,
Benedikt

On 27 Oct 2014 at 01:19, Jim O'Regan wrote:
> On 26 October 2014 22:56, Mikel Artetxe <[email protected]> wrote:
>> On Sun, Oct 26, 2014 at 7:56 PM, Jim O'Regan <[email protected]> wrote:
>>>
>>> On 26 October 2014 15:05, Mikel Artetxe <[email protected]> wrote:
>>>> 2) Cryptographically sign the bytecode and verify this signature every
>>>> time
>>>> that we load it. Probably not too hard to implement, but we would have
>>>> to
>>>> take care of the infrastructure involved, which is not trivial (e.g. who
>>>> would keep the private key? only people with it could publish or update
>>>> language pairs, but we could not make it public either as potential
>>>> attackers would then have access to it).
>>>
>>> A phone or tablet fits the definition of "user product"[1] in the
>>> GPL3, so you would be required (section 6) to also provide the keys
>>> used to sign the binaries for packages covered by the GPL3.
>>
>>
>> If I understood you correctly, using a language pair from Apertium is more
>> or less like playing a video in a video player in terms of licensing, as
>> the GPL would consider both of them "mere aggregation". The only thing I
>> would be doing is verifying that this "mere aggregation" comes from a
>> trusted source. Doesn't the GPL allow me to do that? That sounds like an
>> extremely stupid restriction!
>>
> 
> TiVo did the same thing with signed versions of the Linux kernel, and
> that's why the GPL3 includes this clause, why it's restricted to "user
> devices", and why Linux is GPL2-only.
> 
>> What's more, language pairs must already come from a verified source in the
>> current version. The only thing that changes is the way in which this is
>> achieved: currently it is the OS that makes sure that nobody can modify the
>> transfer bytecode, whereas in the proposed system it would be the app
>> itself, using some very basic cryptography. I see no difference between them.
>>
>> More importantly, is there any workaround for this restriction other than
>> having a security hole in the app? Some ideas that come to my mind:
>>
>> 1) Allow the user to add any other public key that they trust. So
>> technically any transfer bytecode could be run as long as it is explicitly
>> trusted by the user.
>>
>> 2) Alert the user if the signature check fails, but give them the option
>> to run the code at their own risk.
>>
>> 3) An extreme idea: offer an unsafe mode, but make it ridiculously hard to
>> activate. For instance, if the user presses a certain button for, let's
>> say, 24 hours, the next signature check will be skipped. Technically this
>> would be possible, but we could be pretty sure that it would never happen.
>> A stupid solution for a stupid restriction.
>>
>> PS: Are we also supposed to publish the keys that we use to sign the app
>> itself in order to upload it to Google Play? That would be even more stupid!
> 
> No; it has to be included in the "installation information", which
> wouldn't normally go somewhere like Google Play.
> 
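For reference, the verify-before-load scheme discussed in the quoted thread amounts to only a few lines of code. The sketch below is illustrative: it uses a pinned SHA-256 digest from Python's standard library as a stand-in for a real asymmetric signature (a real deployment would verify e.g. an Ed25519 signature so that only the private-key holder can publish updates), and the table of trusted digests and the pair name are made up:

```python
import hashlib

def load_bytecode(name, data, trusted_digests):
    """Refuse to hand bytecode to the interpreter unless it matches a
    digest published by the trusted source.

    trusted_digests: dict mapping pair name -> hex SHA-256 digest.
    """
    digest = hashlib.sha256(data).hexdigest()
    if trusted_digests.get(name) != digest:
        raise PermissionError(f"{name}: digest mismatch, refusing to load")
    return data  # verified; safe to pass on
```

Note that this only covers the mechanics of verification; the key-distribution and GPLv3 "Installation Information" questions raised above are unaffected by how the check itself is implemented.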

------------------------------------------------------------------------------
_______________________________________________
Apertium-stuff mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/apertium-stuff