On 10/12/14 22:24, Eleanor Saitta wrote:
> On 2014.12.10 16.12, Ximin Luo wrote:
>> There are a bunch of reasons why deniability doesn't "work in the
>> field". Once we neutralise those reasons, it would "work in the
>> field". For example: metadata leaks and bad endpoint security. So
>> blaming "it doesn't work in the field" on deniability itself is
>> unfair - other things are the real root of the observed problem.
>
> This amounts to saying that "as soon as we have a magic wand that
> makes computers secure, deniability will be useful". The reason
> deniability doesn't work is that when the police present a transcript
> in court, they're believed because they're the police, not because of
> the signature status (or lack thereof) of the transcript. Why do you
> think that having better endpoint security would make deniability
> more effective? All I hear is a repeated assertion that this is true.
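[For readers new to the thread: the "signature status" point refers to protocol-level deniability in the OTR style, where messages are authenticated with a MAC key shared by both parties rather than signed. Since either party holds the key, either party can forge a valid-looking transcript, so the transcript proves nothing to a third party. A minimal sketch of that property - the key and messages here are invented, not from any real protocol:]

```python
import hmac
import hashlib

# Toy illustration only: in a real protocol this key would be derived
# during an authenticated key exchange, known to both Alice and Bob.
shared_key = b"session-key-known-to-both-parties"

def tag(message: bytes) -> bytes:
    """Authenticate a chat message with the shared MAC key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

# Alice sends a message; Bob can verify it in-session.
msg = b"alice: meet at noon"
assert hmac.compare_digest(tag(msg), tag(msg))

# But Bob (or anyone who later obtains the key) can produce an
# identical-looking "transcript" for a message Alice never sent.
# A MAC'd transcript therefore carries no third-party evidence,
# unlike a digital signature, which only Alice could have made.
forged = b"alice: something incriminating"
assert hmac.compare_digest(tag(forged), tag(forged))
```

The design choice being debated is whether this property buys anything in practice, given that courts rarely demand cryptographic proof of a transcript's origin.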
Systems that provide unlinkability would help to prevent those
transcripts from being leaked. Better endpoint security would also
help. It's a matter of reducing risk - it will always be there, but
taking measures to reduce it improves the end result. I'm not sure how
to explain this in other terms that you would accept as a
non-assertion...

>> As far as I'm concerned, the issue of deniability is resolved; we
>> don't need to talk about it any more.
>
> Great, I'm glad to hear that, because it's in direct contradiction
> with everything else that anyone has said on this subject.

To give some more detail, for (n+1)sec the main points of discussion
were/are:

- whether the key really needs to be contributive
- whether it's acceptable to indicate "chat initiated" before
  confirming that everyone is present with a fresh key, or whether it's
  OK to indicate "chat initiated" without confirming it (but confirming
  it when the first message is received from each member)
- how much we can assume the transport to be reliable / well-ordered
- specifics of how to define / achieve consistency, and timing rules
- misc things like freshness and rekeying strategies

*Who* has been saying deniability is costing us? I do remember this
point being passed around at last year's CCC, and maybe it was true
for previous efforts - but for the efforts over the past year, we
haven't really run into this at all.

>> My point in this previous paragraph was that you can't "roll back"
>> the lack of deniability.
>
> Yes, I know. My point is that it's not clear that the lack of
> deniability is relevant to the security evaluation of a protocol.
>
> If it has literally zero cost? Sure, great, let's have protocol
> deniability[1]. If it has absolutely any cost? I'd much rather see
> all of that effort go into things that we know actually matter, like
> doing basic requirements evaluations before designing a protocol, as
> it's not clear whether that was done in the (n+1)sec case.
>
> E.
> [1]: The other case of deniability, hidden information repositories
> inside disk or file encryption systems, is in almost all cases a
> direct harm to users.

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
_______________________________________________
Messaging mailing list
Messaging@moderncrypto.org
https://moderncrypto.org/mailman/listinfo/messaging