Thanks Sim,

The drawback of a root cert is that it becomes a very attractive target for an attacker. A compromised root certificate could cause catastrophic damage.

You have a very valid point regarding old trust relationships too.

What I have in mind is a process of repeatability, where the production of a jar file can be verified from source to end product.

Example:

I submit some Java source code to the code staging area for auditing (we might only allow Apache-compatible licences; others can set up their own staging areas too), along with a comment describing the process (javac version, vendor and environment) required to re-create it. I also submit two bundles (jar files) that I have signed: one containing the Service Interfaces, the other a client implementation containing the proxy for my service.

There could be any number of volunteer auditors (any willing person, company or entity) who can first verify that the process is repeatable, then check the code for vulnerabilities and finally also sign the submitted bundles.
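The repeatability check an auditor would perform boils down to rebuilding the jar under the documented javac version and environment, then comparing it byte-for-byte against the submitted artifact. A minimal sketch of that comparison; the class and method names here are hypothetical, only the digest comparison itself is standard Java:

```java
import java.security.MessageDigest;

// Sketch of an auditor's repeatability check: rebuild the bundle from the
// submitted source with the documented javac version/environment, then
// compare digests of the submitted and rebuilt jars.
public class RepeatableBuildCheck {

    // Returns true when the rebuilt jar is byte-for-byte identical to the
    // submitted jar, i.e. the build process is repeatable.
    public static boolean sameArtifact(byte[] submittedJar, byte[] rebuiltJar)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] submittedDigest = md.digest(submittedJar);
        byte[] rebuiltDigest = md.digest(rebuiltJar);
        return MessageDigest.isEqual(submittedDigest, rebuiltDigest);
    }
}
```

Only after this check succeeds would the auditor go on to inspect the source and add their own signature.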

Companies that wish to make a public service available could submit their source code and bundles to the staging area, while other companies wanting to utilise this service can sign the bundles themselves, without granting permissions to any third party based on certificate chains. If a company is pedantic about security, they may wish to utilise only code they have audited and signed.

Someone less pedantic about security might just accept the signed bundle until that certificate is revoked or is known to be compromised.

*Publicly available Service Interfaces might be useful to other companies, while the actual service (server) implementations may remain private.*

Another company might also want to provide this service, but with a different client proxy implementation, so they create a bundle, sign it and upload it, along with the source and instructions to re-create it for auditing. They make sure that their client implementation depends upon the common Service Interface bundle (this ensures that services are interchangeable and comparable: the shared interface resides in its own class loader, visible to both implementations, which reside in their own class loaders locally).
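The class-loader arrangement above can be sketched in plain Java; the interface and vendor names here are made up for illustration:

```java
// The common interface lives in the shared Service Interface bundle,
// loaded once and visible to every implementation bundle.
interface QuoteService {
    String quoteOfTheDay();
}

// Vendor A's client proxy implementation (its own bundle/class loader).
class VendorAProxy implements QuoteService {
    public String quoteOfTheDay() { return "vendor A quote"; }
}

// Vendor B's alternative proxy. Because it depends only on the shared
// interface bundle, it is interchangeable with Vendor A's proxy.
class VendorBProxy implements QuoteService {
    public String quoteOfTheDay() { return "vendor B quote"; }
}
```

Client code written against `QuoteService` can swap one vendor's proxy for the other's without recompilation, which is what makes the services comparable.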

An Auditor would be able to satisfy themselves that serialised streams are unmarshalled defensively, so an attacker cannot retain a reference to the internal state of a proxy or any of the objects returned by the proxy. See Effective Java 2nd Edition's Chapter on Serialization.
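The defensive idiom in question is the one Effective Java (2nd ed.) demonstrates with its `Period` example: `readObject` must defensively copy mutable fields and re-validate invariants, so an attacker who crafted the serialised stream cannot retain a reference to internal state. A condensed version:

```java
import java.io.*;
import java.util.Date;

// Defensive deserialization per Effective Java, 2nd ed. (Serialization
// chapter): readObject copies mutable components before validating them,
// so no external reference to internal state survives unmarshalling.
final class Period implements Serializable {
    private Date start;
    private Date end;

    Period(Date start, Date end) {
        // Defensive copies in the constructor as well.
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.compareTo(this.end) > 0)
            throw new IllegalArgumentException("start after end");
    }

    // Accessors hand out copies, never the internal Date objects.
    Date start() { return new Date(start.getTime()); }
    Date end()   { return new Date(end.getTime()); }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Defensively copy mutable fields, then check invariants.
        start = new Date(start.getTime());
        end = new Date(end.getTime());
        if (start.compareTo(end) > 0)
            throw new InvalidObjectException("start after end");
    }
}
```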

The service would also be able to use the existing TrustVerifier interface to verify that the unmarshalled proxy at the client belongs to that service.
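The shape of such a check can be modelled in a few lines. Note this is a standalone stand-in, not Jini's actual `net.jini.security.TrustVerifier`, whose `isTrustedObject` also takes a `TrustVerifier.Context` and can throw `RemoteException`; the local interface and class names below are hypothetical:

```java
// Standalone model of the idea behind Jini's TrustVerifier: the service
// side decides whether an unmarshalled proxy really belongs to it.
interface ProxyTrustVerifier {
    boolean isTrustedObject(Object obj);
}

// A verifier that trusts only proxies of the exact class the service
// exported, rejecting anything substituted in the stream.
class ExactClassVerifier implements ProxyTrustVerifier {
    private final Class<?> trustedProxyClass;

    ExactClassVerifier(Class<?> trustedProxyClass) {
        this.trustedProxyClass = trustedProxyClass;
    }

    public boolean isTrustedObject(Object obj) {
        return obj != null && obj.getClass() == trustedProxyClass;
    }
}
```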

The submitted bundles would be made available on public codebase servers, which would be refreshed on a regular basis to capture audits and updates.

If a vulnerability is later found in any client proxy implementation, a new version can be submitted containing the fix and the process repeats itself. The compromised version is reported to a Global Vulnerability Notice Board Service.

The OSGi framework can be utilised to control local node JVM classloading to load the latest signed version, subject to local security policy. The OSGi r4.2 compendium has a security mechanism (ConditionalPermissionAdmin) that looks like it can assist in solving some of these issues. Conditions (an OSGi concept) simplify the use of permissions. ConditionalPermissionAdmin would allow us to dynamically deny any permission to a bundle that has a known vulnerability, even one signed by our own certificate.
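The policy decision being described ("deny everything to a known-vulnerable bundle, even one we signed") can be simulated outside OSGi. The real mechanism would be ConditionalPermissionAdmin evaluating Conditions before a bundle's permissions are granted; this plain-Java sketch only models the decision, and the bundle identifiers are hypothetical:

```java
import java.util.Set;

// Plain-Java model of the policy described above. In a real deployment,
// OSGi's ConditionalPermissionAdmin and Conditions would enforce this;
// here the decision logic stands alone for illustration.
class VulnerabilityAwarePolicy {
    // Bundle ids reported by the vulnerability notice board.
    private final Set<String> vulnerableBundles;

    VulnerabilityAwarePolicy(Set<String> vulnerableBundles) {
        this.vulnerableBundles = vulnerableBundles;
    }

    // Deny everything to a known-vulnerable bundle, even when it carries
    // our own signature; otherwise fall back to the signature check.
    boolean grant(String bundleId, boolean signedByOurCert) {
        if (vulnerableBundles.contains(bundleId)) return false;
        return signedByOurCert;
    }
}
```

When the notice board publishes a new advisory, refreshing the vulnerable set revokes the bundle's permissions dynamically, without touching any certificate.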

This security framework will take some time to set up and construct, but the benefits would be substantial.

Such a structure would be tolerant of attack. I'm not saying immune, but due to its distributed nature, where coupling (dependency) has been abstracted, a successful attack would be rather difficult. By not depending upon any one signing algorithm, by having multiple keys, and so on, redundancy would be built in.

Cheers,

Peter.


Sim IJskes - QCG wrote:
So in practice I foresee the following. There is a central deployment source for code & rootcerts. One rootcert identifies the deployment cloud/cluster/environment. Every node identifies itself by an individual cert signed by this rootcert. There is a cert generation facility running on the central deployment source that allows for generation of new certs based on a cert request, signed with an external identification. The cert generation facility accepts this request either implicitly or by some other external verification.

And this central deployment facility with its own rootcert is run by anybody who wants to source executable code, either by being the author or by being a clearing house for code vetting.

Gr. Sim
