"But what if we want to use a service via a smart proxy without granting it trust? So I can use it while running as my Subject, using my public credentials for authorisation on the service's server (which assigns me another set of Principals), without granting the smart proxy code my local Principal permissions from my local domain. I want to allow Subjects to cross authority domains (personal, company and country network boundaries), where trust relations aren't implicit."
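The separation being asked for here can be sketched in plain Java with JAAS (javax.security.auth): keep principals and public credentials in a Subject that can be shown across a domain boundary, while private credentials and local permission grants are never handed to the proxy. This is only an illustrative sketch, not Jini's actual mechanism; the class name, the DN and the placeholder credential strings are all hypothetical.

```java
import javax.security.auth.Subject;
import javax.security.auth.x500.X500Principal;

// Hypothetical sketch: a local Subject holds principals plus public and
// private credentials; the "public view" handed towards a remote service
// carries only the principals and public credentials, so smart proxy code
// given that view has no private credentials or local grants to leverage.
public class SubjectSketch {

    // Build a read-only Subject exposing only principals and public
    // credentials; the private-credential set is deliberately left empty.
    public static Subject buildPublicView() {
        Subject local = new Subject();
        local.getPrincipals().add(new X500Principal("CN=alice,O=HomeDomain"));
        local.getPublicCredentials().add("public-cert-placeholder");
        local.getPrivateCredentials().add("private-key-placeholder");

        // Copy across only what the remote authority domain needs to
        // authenticate us; then freeze the view.
        Subject publicView = new Subject();
        publicView.getPrincipals().addAll(local.getPrincipals());
        publicView.getPublicCredentials().addAll(local.getPublicCredentials());
        publicView.setReadOnly();
        return publicView;
    }

    public static void main(String[] args) {
        Subject view = buildPublicView();
        System.out.println("principals=" + view.getPrincipals().size()
                + " privateCreds=" + view.getPrivateCredentials().size());
    }
}
```

In the model the question describes, the remote service would authenticate those public credentials and assign its own set of Principals for execution on its side; the point of the sketch is only that the proxy never sees the private credentials or the local domain's permissions.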
"Penny for your thoughts?"

From experience, where security really matters, trust relations seemingly aren't implicit at all. There are long discussions about ensuring that a user of a service is who they say they are so they can be billed (amongst other things). Discussions extend further to SLAs, what load is acceptable, what responses are acceptable, and so on. In cases where security doesn't matter, it's often a case of "buyer beware". Banks will often go as far as to bring in and certify particular .jars and not sanction the use of any other .jar, or the sourcing of code from, e.g., an external Maven repository.

So, whilst we could support all these bits and pieces of security, I'm not sure of the value. It might allow a great deal of flexibility and all sorts of options, but maybe the world at large doesn't need them, because the way it deals with security is rather more polarised? And supporting all these different options makes verifying that security is, in fact, maintained and not open to compromise ever more challenging.

I'm still left with a feeling that says "if people want co-operation in a secure environment, they will take the simplest means possible to create enough security and no more". Which means virtual networks, firewalls, VPNs, static validation of code, certificates, etc. Or, put another way, for all but a few, security is an exercise in effort versus risk that comes down to "don't bother" or "secure everything to the finest detail". Where would untrusted code be acceptable in such a world?