One clarification below.

e

On 02/02/2015 02:54 PM, Eric Voskuil wrote:
> On Feb 2, 2015, at 11:53 AM, Mike Hearn wrote:
>>
>> In sending the first-signed transaction to another for second
>> signature, how does the first signer authenticate to the second
>> without compromising the independence of the two factors?
>>
>> Not sure what you mean. The idea is the second factor displays the
>> transaction and the user confirms it matches what they input to the
>> first factor. Ideally, using BIP70, but I don't know if BA actually
>> uses that currently.
>>
>> It's the same model as the TREZOR, except with a desktop app instead
>> of myTREZOR and a phone instead of a dedicated hardware device. 
> 
> Sorry for the slow reply, traveling.
> 
> My comments were made in reference to this proposal:
> 
>>> On Feb 2, 2015, at 10:40 AM, Brian Erdelyi <brian.erde...@gmail.com> wrote:
>>>
>>> Another concept...
>>>
>>> It should be possible to use multisig wallets to protect against
>>> malware.  For example, a user could generate a wallet with 3 keys and
>>> require a transaction that has been signed by 2 of those keys.  One
>>> key is placed in cold storage and another sent to a third-party.
>>>
>>> It is now possible to generate and sign transactions on the user's
>>> computer and send this signed transaction to the third-party for the
>>> second signature.  This now permits the use of out-of-band transaction
>>> verification techniques before the third party signs the transaction
>>> and sends it to the blockchain.
>>>
>>> If the third-party is malicious or becomes compromised they would not
>>> have the ability to complete transactions as they only have one
>>> private key.  If the third-party disappeared, the user could use the
>>> key in cold storage to sign transactions and send funds to a new wallet.
>>>
>>> Thoughts?

My comments below start out with the presumption of user platform
compromise, but the same analysis holds for the case where the user
platform is clean but a web wallet is compromised. Obviously the idea is
that either or both may be compromised, but integrity is retained as
long as both are not compromised and in collusion.
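
For concreteness, a rough sketch of the 2-of-3 setup quoted above -- a
standard OP_CHECKMULTISIG redeem script and its P2SH address, in plain
Python with placeholder keys. Key generation, storage and the signing
flow itself are out of scope here; this only illustrates what the wallet
commits to.

import hashlib

def hash160(data: bytes) -> bytes:
    """RIPEMD160(SHA256(data)), the hash used for P2SH script hashes.
    Note: 'ripemd160' may be unavailable in some OpenSSL 3 builds."""
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def base58check(version: bytes, payload: bytes) -> str:
    """Base58Check-encode version byte + payload (yields the address)."""
    alphabet = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
    data = version + payload
    data += hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    n, out = int.from_bytes(data, 'big'), ''
    while n:
        n, rem = divmod(n, 58)
        out = alphabet[rem] + out
    return '1' * (len(data) - len(data.lstrip(b'\x00'))) + out

def redeem_script_2of3(pubkeys):
    """OP_2 <pk1> <pk2> <pk3> OP_3 OP_CHECKMULTISIG."""
    assert len(pubkeys) == 3 and all(len(pk) == 33 for pk in pubkeys)
    script = b'\x52'                        # OP_2: two signatures required
    for pk in pubkeys:
        script += bytes([len(pk)]) + pk     # push each compressed pubkey
    return script + b'\x53\xae'             # OP_3, OP_CHECKMULTISIG

# Placeholder compressed pubkeys standing in for the user's hot key, the
# cold-storage key and the third party's key (not real curve points).
pubkeys = [bytes([0x02]) + bytes([i]) * 32 for i in range(3)]
script = redeem_script_2of3(pubkeys)
print(base58check(b'\x05', hash160(script)))  # 0x05 = mainnet P2SH version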

> In the multisig scenario the presumption is of a user platform
> compromised by malware. It envisions a user signing a 2 of 3 output with
> a first signature. The precondition that the platform is compromised
> implies that this process results in a loss of integrity of the private
> key, and as such if it were not for the second signature requirement,
> the malware would be able to spend the output. This may be extended to
> all of the keys in the wallet.
> 
> The scenario envisions sending the signed transaction to another
> ("third") party. The objective is for the third party to provide the
> second signature, thereby spending the output as intended by the user,
> who is not necessarily the first signer. The send must be authenticated
> as coming from the user. Otherwise the third party would have to sign
> anything it received, obviously rendering the second signature
> pointless. This
> implies that the compromised platform must transmit a secret, or proof
> of a secret, to the third party.
> 
> The problem is that the two secrets are not independent if the first
> platform is compromised. So of course the malware has the ability to
> sign, impersonate the user and send to the third party. So the third
> party *must* send the transaction to an *independent* platform for
> verification by the user, and obtain consent before adding the second
> signature. The user, upon receiving the transaction details, must be
> able to verify, on the independent platform, that the details match
> those of the transaction that user presumably signed. Even for simple
> transactions this must include amount, address and fees.
> 
> The central assumptions are that, while the second user platform may be
> compromised, the attack against the second platform is not coordinated
> with that of the first, nor is the third party in collusion with the
> first platform.
> 
> Upon these assumptions rests the actual security benefit (increased
> difficulty of the coordinated attack). The strength of these assumptions
> is an interesting question, since it is hard to quantify. But without
> independence the entire security model is destroyed and there is thus no
> protection whatsoever against malware.
> 
> So for example a web-based or other third-party-provisioned
> implementation of the first platform breaks the anti-collusion
> assumption. Also, weak comsec allows an attack against the second
> platform to be carried out against its network. So for example a simple
> SMS-based confirmation could be executed by the first platform alone and
> thereby also break the anti-collusion assumption. This is why I
> asked how independence is maintained.
> 
> The assumption of a hardware wallet scenario is that the device itself
> is not compromised. So the scenario is not the same. If the user signs
> with a hardware wallet, nothing can collude with that process, with one
> caveat.
> 
> While a hardware wallet is not subject to onboard malware, it is not
> inconceivable that its keys could be extracted through probing or other
> direct attack against the hardware. It's nevertheless an assumption of
> hardware wallets that these attacks require loss of the hardware.
> Physical possession constitutes compromise. So the collusion model with
> a hardware wallet does exist, it just requires device possession.
> Depending on the implementation the extraction may require a non-trivial
> amount of time and money.
> 
> In a scenario where the user signs with HW, then sends the transaction
> to a third party for a second of three signatures, and finally to a
> second platform for user verification, a HW thief needs to collude with
> the third party or the second platform before the owner becomes aware of
> the theft (notifying the third party). This of course implies that
> keeping both the first and second platforms in close proximity
> constitutes collusion from a physical security standpoint. This is
> probably sufficient justification for not implementing such a model,
> especially given the cost and complexity of stealing and cracking a
> well-designed device. A device backup would provide comparable time to
> recover with far less complexity (and loss of privacy).
> 
> Incidentally the hardware wallet idea breaks down once any aspect of the
> platform or network to which it connects must be trusted, so for these
> purposes I do not consider certain hybrid models as hardware wallets at
> all. For example one such device trusts that the compromised computer
> does not carry out a MITM attack between the signing device and a shared
> secret entered in parts over time by the user. This reduces to a single
> factor with no protection against a compromised platform.
> 
> Of course these questions address integrity, not privacy. Use of a third
> party implies loss of privacy to that party, and with weak comsec to the
> network. Similarly, use of hardware signing devices implies loss of
> privacy to the compromised platforms with which they exchange transactions.
> 
> e
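
And a minimal sketch of the verification step described above: the
independent second platform decodes the partially signed transaction
itself and checks destination, amount and fee against what the user
intended before consenting to the second signature. Addresses, amounts
and the change-detection convention below are placeholders.

def verify_on_second_platform(decoded_outputs, intended, change_address,
                              fee, max_fee):
    """Return True only if the transaction pays exactly what the user
    intended. decoded_outputs is a list of (address, amount) pairs the
    second platform recovered by decoding the transaction locally; it
    must not trust totals reported by the first platform or the third
    party."""
    pay_outputs = [(a, v) for a, v in decoded_outputs if a != change_address]
    if pay_outputs != [intended]:
        return False    # wrong destination, wrong amount, or extra outputs
    if fee > max_fee:
        return False    # fee padded beyond what the user expects
    return True

# Example: the user meant to pay 50000000 satoshi to ADDR_MERCHANT with
# at most a 10000 satoshi fee (placeholder addresses and values).
ok = verify_on_second_platform(
    decoded_outputs=[("ADDR_MERCHANT", 50000000), ("ADDR_CHANGE", 9990000)],
    intended=("ADDR_MERCHANT", 50000000),
    change_address="ADDR_CHANGE",
    fee=10000,
    max_fee=10000)
print("consent to second signature" if ok else "reject")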
