On Mon, Jan 8, 2018 at 6:29 AM, Hubert Kario <hka...@redhat.com> wrote:
>
> except that what we call "sufficiently hard plaintext recovery" is over
> triple
> of the security margin you're proposing as a workaround here
>
> 2^40 is doable on a smartphone, now
> 2^120 is not doable on a supercomputer, and won't be for a very long time
>

This isn't how these kinds of attacks work. 2^40 would be a small work
factor for something that could be attacked in parallel on a very large
computing system, but it's an absolutely massive difficulty factor
against a live on-line attack. Can you propose a credible mechanism by
which an attacker could mount, say, billions (to use the low end) of
repeated connections without detection? And that's before they can even
tease the signal out of the noise. And since the delay can't be avoided,
the attack also costs thousands of years of attacker-controlled computer
time. I have much, much more confidence in that simple kind of defense
giving me real-world security than I do in my code being absolutely
perfect, which I've never achieved.
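
To make that arithmetic concrete, here's a rough sketch (mine, purely
illustrative; the function name and the 1-2 second jitter are made-up
parameters, not anything from a real stack) of the random-delay defense
in C. At 2^40 probes, each costing one to two seconds of serial
wall-clock time, the attack runs tens of thousands of years:

    #define _POSIX_C_SOURCE 199309L
    #include <stdlib.h>
    #include <time.h>

    /* Sketch: on any MAC or padding failure, sleep for a uniformly
     * random 1-2 seconds before sending the alert. Each probe now
     * costs the attacker real wall-clock time, and the jitter buries
     * the original microsecond-scale signal under seconds of noise.
     * rand() is for illustration only; use a proper CSPRNG. */
    static void delay_before_alert(void)
    {
        struct timespec ts;
        ts.tv_sec  = 1;
        ts.tv_nsec = (long)(rand() % 1000000000L);
        nanosleep(&ts, NULL);
    }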

I'm being stubborn and replying here because I want to argue against a
common set of biases I've seen that I think are harmful to real-world
security. Making attacks billions to trillions of times harder absolutely
does protect real-world users, and we shouldn't be biasing simply toward
what we think the research community will take seriously or not scoff at,
but toward what will actually protect users. Those aren't the same thing.

I'll give another example: over the last few years we have significantly
*regressed* on the real-world security of TLS by moving to AES-GCM and
ChaCha20. Both cipher suites leak the exact content length and make
content-fingerprinting attacks far easier than they were previously
(CBC's block padding made this kind of attack exponentially more
expensive). The current state is that passive tappers with relatively
unsophisticated databases can de-cloak a high percentage of HTTPS
connections. This compromises secrecy, the main user benefit of TLS. That
staggers me, but it's also an uninteresting attack to the research
community: it has long been known about and isn't going to result in many
publications or research grants.
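
The mitigation doesn't have to be exotic, either. Here's a sketch of
length-bucketing (my own illustration; TLS 1.3's record padding permits
this kind of thing, but the helper below isn't from any real
implementation): pad every plaintext up to a fixed bucket size before
encryption, so the ciphertext length reveals only the bucket, roughly
what CBC's block padding gave us for free.

    #include <stddef.h>

    /* Sketch: round a plaintext length up to the next multiple of
     * `bucket` and pad before encrypting, so the AEAD ciphertext
     * reveals only the bucket, not the exact length. For example,
     * padded_len(1337, 256) == 1536. */
    static size_t padded_len(size_t len, size_t bucket)
    {
        return ((len + bucket - 1) / bucket) * bucket;
    }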



> > This bears repeating: attempting to make OpenSSL rigorously constant time
> > made it *less* secure.
>
> yes, on one specific hardware type, because of a bug in implementation
>
> I really hope you're not suggesting "we shouldn't ever build bridges
> because
> this one collapsed"...
>
> also, for how long was it *less* secure? and for how long was it
> vulnerable to
> Lucky13?


I'm saying that trade-offs are complicated and that constant-time "rigor"
sometimes isn't worth it. Adding ~500 lines of hard-to-follow,
hard-to-maintain code with no systematic way to confirm that it stays
correct was a mistake, and it led to a worse bug. Other implementations
chose simpler approaches, such as code-balancing, that were close to
constant-time but not rigorously so. I think the latter was ultimately
smarter: all code is dangerous because bugs can lurk in it, and those
bugs can be really serious, like memory disclosure and remote code
execution, so leaning towards simple, easy-to-follow code should be
heavily weighted. So when we see the next bug like Lucky13, which was
un-exploitable against TLS but still publishable and research-worthy, we
should lean towards simpler fixes rather than complex ones, while also
just abandoning whatever algorithm is affected and replacing it.
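
To be clear, I'm not against constant-time code where it's small enough
to audit; the problem is "rigor" at the scale of hundreds of lines.
Something like the following comparison (a generic sketch, not OpenSSL's
actual code) is the kind of constant-time code I do trust, because the
whole thing fits on one screen:

    #include <stddef.h>

    /* Sketch of a constant-time tag comparison: the loop shape and
     * memory accesses don't depend on where (or whether) the inputs
     * differ, and it's short enough to review exhaustively. */
    static int ct_equal(const unsigned char *a, const unsigned char *b,
                        size_t len)
    {
        unsigned char diff = 0;
        size_t i;
        for (i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }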


> > Delaying to a fixed interval is a great approach, and emulates how
> clocking
> > protects hardware implementations, but I haven't yet been able to succeed
> > in making it reliable. It's easy to start a timer when the connection is
> > accepted and to trigger the error 30 seconds after that, but it's hard to
> > rule out that a leaky timing side-channel may influence the subsequent
> > timing of the interrupt or scheduler systems and hence exactly when the
> > trigger happens. If it does influence it, then a relatively clear signal
> > shows up again, just offset by 30 seconds, which is no use.
>
> *if*
>
> in other words, this solution _may_ leak information (something which you
> can
> actually test), or the other solution that _does_ leak information, just
> slowly so it's "acceptable risk"
>

Sorry, I'll try to be clearer: a leak in a fixed-interval delay would be
catastrophic, because it would produce a very clean signal, merely offset
by the delay. A leak in a random-interval delay still benefits from the
random distribution and requires many samples to exploit.
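
In code terms (again just a sketch of the failure mode, assuming a POSIX
clock API; nothing here is from a real implementation), the
fixed-interval version arms an absolute deadline at accept() time and
releases the alert when it passes, and the danger is that data-dependent
work can still shift exactly when that release gets scheduled:

    #define _POSIX_C_SOURCE 200112L
    #include <time.h>

    /* Sketch of the fixed-interval variant: sleep until an absolute
     * deadline computed at accept() time, then send the alert. If a
     * timing side-channel influences when this wakeup is actually
     * scheduled, the original signal reappears, merely offset by the
     * interval; a random interval at least forces many samples. */
    static void alert_at_deadline(const struct timespec *deadline)
    {
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, deadline, NULL);
    }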

-- 
Colm