On Jul 5, 2013, at 12:07 PM, StealthMonger wrote:

>> A lawyer or other (paid) confidant was given instructions that would
>> disclose the key.  "Do this if something happens to me."
>
> An adversary can verify an open source robot, but not such instructions.
>
> NSA cannot verify a claim that such instructions have been given (unless
> they know the lawyer's identity, but in that case they can "interfere").
> (On the other hand, NSA cannot afford to assume that such a claim is a
> bluff, and that's the strength of this idea.)
>
> The intended interpretation of the "open source" clause in the original
> problem statement is that anyone could inspect the workings of the robot
> and verify that it does indeed "harbor a secret" and that if the signed
> messages stop coming it will indeed release that secret.

A false dichotomy.
If there were an actual physical robot, it could be "interfered with" even more easily than a lawyer. The point of the open source implementation is that it serves as a proof of concept: it shows something that could have any number of physical manifestations in unknown locations, and any one of them would be an effective dead-man switch. However, the lawyer's instructions serve exactly the same role: since *this* lawyer has instructions that would lead to release, there could be others with exactly the same instructions.

Software - and instructions to lawyers - on their own don't do anything. They have to be physically instantiated in the appropriate medium to affect the world. That's always the hard part to pull off in an adversarial environment.

                                                        -- Jerry

_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
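[A minimal sketch of the dead-man mechanism the thread describes: the robot accepts authenticated heartbeat messages and releases its harbored secret once they stop arriving. The heartbeat interval, key, and use of an HMAC are illustrative assumptions only; the thread specifies no concrete scheme, and a real deployment would presumably use public-key signatures so that verifying the source doesn't reveal the signing key.]

```python
import hashlib
import hmac
import time

HEARTBEAT_INTERVAL = 24 * 3600          # assumed: one signed message per day
SHARED_KEY = b"operator-signing-key"    # hypothetical key for this sketch

class DeadManSwitch:
    """Harbors a secret; releases it if authenticated heartbeats lapse."""

    def __init__(self, secret: bytes, now=time.time):
        self.secret = secret
        self.now = now                  # injectable clock, eases auditing/testing
        self.last_heartbeat = now()
        self.released = None

    def receive(self, message: bytes, tag: bytes) -> bool:
        """Accept a heartbeat only if its MAC verifies; this check is what
        an auditor of the open source can inspect."""
        expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            self.last_heartbeat = self.now()
            return True
        return False

    def tick(self):
        """Called periodically; releases the secret once the interval passes
        with no valid heartbeat."""
        if self.released is None and \
                self.now() - self.last_heartbeat > HEARTBEAT_INTERVAL:
            self.released = self.secret
        return self.released
```

[Inspecting this source shows it does "harbor a secret" and will release it when signed messages stop, which is the verifiability the quoted post wants, but as the reply argues, the source alone does nothing until some physical instance is actually running it.]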