Snipping... I write too much :)

On 10/07/2012 0:16, Lucas Adamski wrote:

On Jul 9, 2012, at 3:07 PM, Antonio Manuel Amaya Calvo wrote:

On 09/07/2012 22:55, Jonas Sicking wrote:

Unless you are suggesting that we don't allow trusted apps to connect
to the internet at all, then this will always be possible.

And I think it's a matter of expectations. I expect the World Wild Web
to be lawless, ruthless, and untrustworthy. So if I get conned... well,
that comes with the territory. But I also expect not to be conned by
something that someone I trust (be it Mozilla, Google, Apple, or
whoever) has said is trustworthy. After all, I'm going to be giving it
extra permissions because it's... trusted.

So basically, at the very least we should be able to guarantee that the
core user interface that has been vetted isn't going to change without
re-certification.

We should be careful not to confuse data and code loading. We can
ensure that the code that is reviewed and approved is the only code that
runs with privilege. But if that code chooses to load the background
from a remote server, or to show an iframe or other content from an
arbitrary third-party server, then that is a risk the developer is
accepting (and, by extension, the marketplace that approved it). We
cannot prevent applications from loading assets from other servers
without destroying most of the useful use cases. The particular threat
you mention (UI spoofing) is not mitigated by restricting where the data
can be loaded from. The developer's server could provide that fake
background itself, or act as a proxy to load data from any server in the
world. Trying to prevent apps from loading data is not an effective
solution to this problem. That is better handled by a careful review
process and consequences for developers who violate these conditions.

If I were happy to accept the risks that the developer takes (or to
trust developers, known or unknown), then I would not need the concept
of 'trusted apps'. Plain installed apps would suffice.

And I think that anything that forms a significant part of the user
interface should be restricted to local content. That is, content that's
installed with the application, that has been examined by the reviewer,
and that requires a new certification/signature cycle to be changed. I
don't want it downloaded from anywhere on the Internet, and that
includes the developer's server. After all, we're certifying
applications because we don't trust developers in the first place.
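
To be concrete, the kind of restriction I have in mind is roughly the
following. This is only a sketch, and every name in it is made up for
illustration; it is not a real Gecko/Gaia API:

    // Sketch: only let "significant UI" assets come from resources that
    // shipped inside the reviewed, signed package. All names here are
    // hypothetical and for illustration only.
    function assertPackagedResource(url: string): string {
      const resolved = new URL(url, location.href);
      // A packaged app's own resources share its origin; anything else
      // is remote and therefore outside what the reviewer looked at.
      if (resolved.origin !== location.origin) {
        throw new Error(`UI asset must ship with the app: ${resolved.href}`);
      }
      return resolved.href;
    }

    // e.g. the background of a "confirm payment" screen:
    const bg = new Image();
    bg.src = assertPackagedResource("img/confirm-background.png");       // fine, packaged
    // bg.src = assertPackagedResource("https://cdn.example.com/bg.png"); // would throw

Whether something like that is enforced by the runtime or only checked
at review time is a separate discussion; the point is that the reviewed
package should be the only place these assets can come from.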

Yes, that's the security model of the web today... but as I stated
previously, the web is anything but trusted. You 'trust' some remote
servers with your data (or to show you data they already have, like
banks), but you don't usually trust them to do anything on your
computer... because, basically, you trust them about as far as you can
throw them :)

This isn't just the web though… desktop and mobile ecosystems allow
essentially unlimited network access. Yes, Android has network access as
a permission but I've literally never installed an app that didn't
request it, and the user can't restrict it anyway. A safe ecosystem that
results in no interesting apps is not a successful outcome.

Actually, most applications request it, but not all of them actually
need it. One of the reasons my iPhone is jailbroken is so I could
install a firewall on it and thus do what Apple doesn't let me do:
restrict network access to only those applications that need it to work.



This is the more basic attack. Now, depending on what permissions the
application has asked for and been granted because it was trusted, the
basic attack can be made more elaborate, and the application (either the
original developer or a clever attacker who hacks the server) can find
novel ways to trick the user.

So what I'm worried about is that on the web it's quite easy to
transform "downloaded data" into "user interface". In fact it's not only
easy, it's part of the model. And that model works. It just cannot be
called trusted that way.


For such an attack I wouldn't even bother with a trusted app. A regular
installed web app lets me mount that attack without any review or
oversight, as it doesn't require any interesting privileges.

What the trusted part adds to this attack is context. A trusted app gets
access to more interesting privileges, and thus the attacker has more
context about when to execute the attack. Not to mention that I used to
want a trusted UI, if you remember: something that told the user whether
the application that owns the window they are interacting with is
trusted or not. Now, if the UI is hijackable, that turns that idea (the
trusted UI) into a horrible one.

In any case, from Jonas' answer I think we basically agree that the UI
is part of the application, that this includes the images in the UI, and
thus that they have to be controlled.
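
Just to make the "downloaded data becomes user interface" concern above
concrete, this is all it takes today; the URL and element id below are
invented for the example:

    // Sketch: turning remote "data" into the app's UI in two lines.
    // The endpoint and element id are made up for illustration.
    async function loadHomeScreen(): Promise<void> {
      const resp = await fetch("https://developer.example.com/home.html");
      // Whatever the server sends *now* becomes the interface the user
      // sees, long after the packaged code was reviewed and signed.
      document.getElementById("home")!.innerHTML = await resp.text();
    }

Nothing in that snippet needs any special privilege, which is exactly
why I think the reviewed package, and not the network, has to be the
source of anything the user relies on to make decisions.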

Best regards,

Antonio


