I think that the complexity of the input language is just something that
frequently co-occurs with security problems. Complicated input requires
complicated code, complicated code is hard to understand, and code that
is hard to understand will have bugs.

For example, you could write a lambda calculus interpreter in OCaml,
host it on the Internet, and let it process arbitrary strings from any
individual on the Internet. I am extremely confident that this would not
produce any security problems for you beyond availability problems
resulting from divergent (non-terminating) programs, and you can work
around those by killing the interpreter after ten minutes or so.
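To make that concrete, here's a minimal sketch of such a bounded
interpreter (all names are illustrative, not from any existing library):
a call-by-value lambda calculus evaluator with a fuel counter, so a
divergent term costs the attacker their step budget and nothing else.

```ocaml
(* Minimal call-by-value lambda calculus with a fuel (step) bound.
   An environment-based evaluator sidesteps the need for
   capture-avoiding substitution in this sketch. *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term

type value = Closure of string * term * env
and env = (string * value) list

exception Out_of_fuel

let rec eval fuel env t =
  if !fuel <= 0 then raise Out_of_fuel;
  decr fuel;
  match t with
  | Var x -> List.assoc x env
  | Lam (x, body) -> Closure (x, body, env)
  | App (f, a) ->
      let Closure (x, body, cenv) = eval fuel env f in
      let arg = eval fuel env a in
      eval fuel ((x, arg) :: cenv) body

(* None means the budget ran out (or a free variable): an availability
   failure by construction, not a security one. *)
let run ?(fuel = 1_000_000) t =
  try Some (eval (ref fuel) [] t)
  with Out_of_fuel | Not_found -> None

let () =
  let id = Lam ("x", Var "x") in
  let omega = Lam ("w", App (Var "w", Var "w")) in
  (match run (App (id, id)) with
   | Some _ -> print_endline "terminated"
   | None -> print_endline "killed");
  (match run ~fuel:1000 (App (omega, omega)) with
   | Some _ -> print_endline "terminated"
   | None -> print_endline "killed")
```

A wall-clock timeout on the whole process would work just as well; fuel
just makes the bound deterministic and easy to test.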

So the game, for security, is whether or not some attacker can break a
security invariant you impose. In the game I just proposed, I'll allow
an attacker to carry out *ANY* computation that they want, for as long
as they want up to a bound I specify, but that computation won't really
matter, because by construction it cannot break abstraction boundaries
and result in arbitrary code execution.

This is where hackers say "oh, but there could be bugs in the
ocamlyacc-generated ML code that result in code exec" or "oh, but there
could be a CPU bug that I could get to by doing some computation", but
I'm extremely skeptical.

The defender defines an application. That application gives rise to a
set of possible states, and there are probably states in that set that
the defender doesn't know about. I think this is what langsec identifies
as "weird machines". The attacker then gets to search that set for any
state they would consider advantageous. This is where making precise
definitions is difficult, because we're talking about attacker goals,
and attackers differ in what they see as an advantage. Some attackers
would only care about code execution. Others would care about
information disclosure by any means, whether timing channels or explicit
flows.

If the attacker cannot find any such state, the defender wins. Can the
defender restrict the set of states available for the attacker to
consider, without restricting the space of inputs that the defender's
program considers? I argue that yes, they can.
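One way to read that claim, as a hypothetical sketch: the defender's
program can be a total function from arbitrary input bytes into a small,
explicitly enumerated state type, so every input, however hostile, lands
in a state the defender has named in advance.

```ocaml
(* Illustrative only: a total handler over arbitrary input strings.
   The outcome type enumerates every state the program can reach, so
   the attacker's search space is exactly these constructors. *)
type outcome =
  | Counted of int   (* number of 'a' characters in the input *)
  | Rejected         (* input exceeded the size cap *)

let handle (input : string) : outcome =
  if String.length input > 4096 then Rejected
  else
    Counted
      (String.fold_left
         (fun n c -> if c = 'a' then n + 1 else n)
         0 input)
```

The input space is still all strings; what shrank is the set of
reachable states, which is the thing the attacker gets to search.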

On 11/12/2014 03:11 PM, Michael E. Locasto wrote:
> A student once asked me a similar question wrt x86, thinking that
> LangSec somehow made x86 programs safe.
> 
> In reply, I made a distinction between statically disassembling strings
> of x86 instructions (i.e., the encoding of the language) and attempting
> to recognize what an arbitrary x86 program does. (I think this is the
> same observation the OP makes wrt brainfuck)
> 
> Knowing your input is well-formed is but the first (but necessary, and
> often missing) critical step in eliminating a pervasive form of security
> bugs. This approach isn't a panacea or a replacement for other formal
> methods, and it does not guarantee that subsequent arbitrary processing
> is correct or free of bugs.
> 
> I agree w/ the OP when he says that part of the message boils down to
> "Don't write code when you can have it generated for you.", but I think
> there is more to the message, such as: be cognizant of what
> functionality you give up by choosing smaller, less complex languages.
> Personally, I don't have a good feel for what this tradeoff entails in a
> practical sense, and I get the sense that's the motivation behind the
> original question (correct me if I'm wrong).
> 
> -Michael
> 
> On 11/12/14, 12:09 PM, Sven Kieske wrote:
>> On 11.11.2014 22:31, Taylor Hornby wrote:
>>> The fact that HTML5+CSS3 can specify computation that is as
>>> powerful as a Turing machine does not mean the language itself is
>>> undecidable or even requires a Turing machine to decide.
>>
>> In short:
>>
>> yes, this does not mean the language itself is undecidable, but
>> that's not what langsec is about:
>>
>> langsec is about the input being undecidable, because the input itself
>> can form a language (in this case html5+css3).
>>
>> so you can hide programs in data
>>
>> I hope I got this right, maybe someone else can explain it better.
>>
>> kind regards
>>
>> Sven
>> _______________________________________________
>> langsec-discuss mailing list
>> langsec-discuss@mail.langsec.org
>> https://mail.langsec.org/cgi-bin/mailman/listinfo/langsec-discuss
>>
>>