On Thu, Sep 26, 2013 at 8:01 PM, Jim Blandy <[email protected]> wrote:
> On 09/26/2013 07:27 PM, Jim Blandy wrote:
>> On 09/21/2013 05:49 AM, Jonas Sicking wrote:
>>> The attack here is if the user gets the device stolen, then the thief
>>> could go into the settings and explicitly turn debugging on. He/she
>>> could then use the debugger to suck out all sorts of data from various
>>> apps. Things like login tokens to your email or even raw passwords
>>> from applications that store those client-side.
>>
>> Right - the debugging protocol allows users, thieves or otherwise, access
>> to data apps don't offer via their UIs. But most data the user cares
>> about protecting *is* available via the UIs. And access to the user's
>> email account will allow the attacker to change the user's passwords (in
>> the guise of "password recovery"), so auth tokens are effectively
>> available to the thief, debugger or not. So the additional exposure here
>> is minor; am I missing something?
>
> More succinctly: when a developer's phone is stolen, is their reaction
> going to be "Oh dear, someone might hook up a debugger!"?
>
> I'm saying that's going to be pretty far down on their list of concerns.
> And if developers don't care about that, non-developer users won't either.
Really? As someone who has had my phone stolen, my first reaction was "oh
crap, now they can get at all the data on my phone, I wish I had had a
passcode enabled, now I gotta change all my passwords!".

I indeed didn't specifically think about debuggers, but I don't think
that's really relevant here. The question is whether the data can be
stolen or not, and how much damage can be done between the time when the
device is stolen and the time when the user is able to do something about
it.

>> One idea that was floated was that we're in a good state if turning on
>> debugging only enables debugging of apps installed after debugging was
>> enabled. That would let the user turn on debugging, then install an
>> app that they want to know how it works, and start debugging away.
>
> This sounds similar to the "wipe sensitive data when enabling debugging"
> approach. If we conclude that the additional exposure via the debugging
> protocol is salient, then this might be a good solution, presuming users
> can re-install pre-installed apps. But as I say above, I don't really
> agree with the premise.

I don't see how it's similar, given that no wiping occurs, and that we
actually have a plan for how it could be implemented (we still have no
idea how to "wipe sensitive data" without wiping all data).

I still think that enabling debugging for all apps installed after the
debugger was enabled brings us 90% of the value (developers can debug
their own apps as they are developing them, as well as inspect other
third-party apps that they install) at almost 0% of the cost (no
additional risk of data leakage, and it's relatively easy to implement).

If we on top of that enable debugging of any app after a system wipe, that
gives us an additional few percent of value, as we would enable people to
learn from our apps, but only if they use a device which isn't their
day-to-day phone.
The cost of the debug-after-system-wipe variant is definitely higher
though, as implementing it is more complicated (it's hard to get any
information to survive a system wipe, including the information "start the
device in debug-enabled mode").

So the question is how much cost we are willing to take for getting the
last few percent of value. I.e. how much additional user-data leakage risk
are we willing to take to enable developers to debug system apps without
also wiping their contents?

/ Jonas
_______________________________________________
dev-b2g mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-b2g
