This seems like a good idea, but I think the approach may not go far enough. I have some suggestions.

I think there are a few scenarios that interact with the proposed functionality:

1. Lost, locked device found by a nefarious person with no plans to return it.
2. Device in the possession of a nefarious person who intends a persistent attack on the owner of the device.

1) In the forever-lost-to-evil case we want the attacker to be unable to get at the data, so wiping the data as a prerequisite to gaining root-ish access seems absolutely correct. And as Paul notes, it's quite conceivable for even a limited-capability attacker to keep the device off the network, making remote wipe insufficient on its own. Of course, since the data is not encrypted on the flash, we will still lose to more capable attackers for now.

2) In the root-and-return case we want the true owner of the device to be able to know that their device has been tampered with. The problem is that once a device has been rooted, it can no longer be trusted to report that tampering. Obviously if we do enough trusted-computing stuff we could get the boot process to indicate rooting (a Firefox with a robber's mask!), but I think we're pretty far from that right now.

Nuking the user's data is an excellent indicator of potential tampering, but that's not reliably what's being proposed here. As I read it, the current proposal asks the user "want a lock screen code?" at first-run, when the actual decision is "want a lock screen code that also enables a super-powerful developer mode that could let someone persistently pwn your device?"

It seems like it would be better to ask the user outright whether they'd like to enable super-dangerous debugging mode, and how they'd like to secure it. For example, I would personally prefer that the code that lets an attacker persistently root my device be more sophisticated than the 4-digit PIN I'm potentially typing in front of everyone all the time and smudging onto the screen with my fingerprints. Or, since people hate FTU screens, make it an opt-in under an "advanced..." or "developer..." call-out in the flow, something that catches the eye of developers/tinkerers without infuriating most users.


There are other possible tamper-indication options, and using those could let the user upgrade to "super developer mode" without nuking all their data. For example, if the device is bound to a Firefox account, the Mozilla server could generate a rooting-unlock authorization for the device if-and-only-if the account has been bound to the device for some number of days. The user hits the "hey, I wanna be a super-fancy developer and do dangerous stuff" button, and we impose some arbitrary delay of N hours:

- Send an email to the associated address immediately, and then again at a random point in the next N/2 hours. The idea is to be unpredictable: if the attacker is able to use the email app to delete the emails, they have to be at least somewhat competent rather than just waiting exactly 2 hours.
- Present a persistent notification in the tray ("still want to root your device?") for the duration of the N hours.

At the end of the time period the device gets unlocked and a persistent note is made on the Firefox Account for the device.
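To make the timeline concrete, here's a minimal sketch of the delay scheme described above. Everything here is hypothetical (the function name, the choice of N=4 hours); the only real point is that the second notification lands at an unpredictable offset within the first N/2 hours:

```python
import random

def schedule_unlock(now, n_hours=4.0):
    """Sketch of the delayed rooting-unlock timeline (all times in hours).

    Hypothetical names; not a real Firefox Accounts API. The second email
    fires at a random point in the first N/2 hours so an attacker who can
    delete emails can't simply wait out a fixed, known interval.
    """
    first_email = now                              # notify the owner immediately
    second_email = now + random.uniform(0, n_hours / 2)  # unpredictable follow-up
    unlock_time = now + n_hours                    # rooting only unlocks after N hours
    return first_email, second_email, unlock_time

first, second, unlock = schedule_unlock(now=0.0, n_hours=4.0)
assert first <= second <= unlock
```

The persistent tray notification would simply span `[first, unlock]`, and the note on the Firefox Account would be written at `unlock`.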

The general idea is that you have to lose control of your device for an extended period of time and our web services infrastructure can help provide notifications via other channels if we have them. (In an ideal world everyone has both a Firefox OS phone on them and a Firefox OS tablet at home, right?)

Honestly, the bang-for-buck seems way off for this compared to "opt in to developer mode at first-run, potentially having to wipe your device." And until we provide more support for layered security/encryption, in many cases there isn't much of a point, since the weak 4-digit pass-code is all that's standing between the attacker and the user's email account(s), etc.

Also, many interesting permutations of this want the processor/chipset to have a non-extractable private crypto key that can be used to prove the device is who it says it is. Serial numbers, MAC addresses, etc. are too predictable, or simply accessible to would-be attackers on the back of the box or inside the battery case. I think many interesting server-assisted mechanisms depend on a non-forgeable device id, where the initial owner of the device can reliably bind the device to some other authentication factors. (So it becomes "*initial* possession is nine tenths of the law" rather than just "possession".)
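For illustration, here's a toy challenge-response sketch of why the key must be non-extractable. Real hardware would use an asymmetric key and signatures inside a secure element; I'm using HMAC with a device-internal secret as a simplified stand-in, and all names here are invented:

```python
import hashlib
import hmac
import os

# Stand-in for a secret burned into the chipset at manufacture and never
# exposed to software. If this value were readable (like a serial number or
# MAC on the box), anyone could impersonate the device.
DEVICE_SECRET = os.urandom(32)

def device_respond(challenge: bytes) -> bytes:
    """What the device's secure element computes for a server challenge."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, bound_secret: bytes) -> bool:
    """Server checks the response against the secret registered when the
    initial owner bound the device to their account."""
    expected = hmac.new(bound_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # fresh per attempt, so recorded responses can't be replayed
ok = server_verify(challenge, device_respond(challenge), DEVICE_SECRET)
replayed = server_verify(os.urandom(16), device_respond(challenge), DEVICE_SECRET)
```

The binding step is where "initial possession" gets cemented: the server learns the device's identity once, from the first owner, and every later unlock authorization can be checked against it.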

Andrew
_______________________________________________
dev-b2g mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-b2g