On Fri, Dec 26, 2008 at 01:28:27PM -0500, Edward Diener wrote:

> If I can get a little finicky, the application needs access to the 
> database/server. Nobody else should be accessing it. But I am sure that 
> is what you meant.

You trust your application, but not its users. This is always problematic,
because the security controls that you need are in the application, which
is in the hands of the users, and not on the server. Sometimes this
situation is unavoidable (media players, ...), other times it is a design
flaw that can be fixed by beefing up server controls in a 3-tier design,
with the middle tier (an application server) applying the necessary controls.

If the need for client-side security controls (trusted application)
is unavoidable, this is a DRM problem. DRM is an interesting problem,
but the OpenSSL users group is not the right place to explore it.

> The clients are to be trusted using the application.

In other words you trust only the application, and not the users, who,
with the application's credentials in hand, are able to do things you
must prevent them from doing. That still sounds like DRM.

> My employer, not I, 
> felt that putting the certificates where a client, who might be a 
> destructive hacker, could easily access them, might be a security risk. 
> He was told this by a Sun rep whom he knows, so naturally he wanted me 
> to find out if this was the case.

The phrase "Security risk" only has meaning in the context of a threat
model. If you take the time to write down that risk model and analyze it
you will find out whether it is possible to put controls on the server
side, and remove the need to hide the client credentials from the client.
Or you may find that client controls are inevitable, and you have a DRM
problem.

> I come on this forum to ask ...

Well, you have not really explained the nature of the risks (threat model)
you are trying to mitigate. Perhaps you can't do that in a public forum.
Still, without a detailed explanation of the requirements, only general
advice is possible. Your first task is to determine what type of security
problem you are solving:

    - Trusted client-side app, untrusted user => DRM
    - Trusted users, restricted access via any app => User auth management
    - Users face potentially hostile servers => Server auth
    - ...

> >b) You can distribute those client certificates with the software as
> >you were planning before.

This makes no sense in a non-DRM context. Code is distributed separately
from user credentials. Credentials are obtained via a separate
"enrollment" process.

In a DRM context, where the application is trusted, but the users are not,
one tries to hide credentials in the app. This forum is not the right
place to seek help with DRM designs, and such designs have nothing to
do with OpenSSL.

> >2) You can treat the scenario as a DRM issue -- the question is, after
> >all, very similar to: "how can I sell software without having pirated
> >copies floating around the net within X days?" (Note the 'X': putting
> >a number there is a strategic management decision as it directly
> >translates to the amount of (known fallible) DRM/protection effort you
> >are willing to spend.

Piracy of the software does not matter if this is an on-line application
that only works when the user has the right credentials. At that point
what matters is the user's credentials. If these protect something of
value to the user (the user's bank account, a credit card enrolled to
buy more services, ...), the users won't want to share their credentials.

If the credentials protect content of value to the provider, but the user
has no direct interest in protecting the credentials, we are back to DRM,
unless the design can be changed to make the user want to protect their
credentials.

> >Possible answer: give the client keys enclosed in dedicated hardware.
> >As not trusting the user to not lose his marbles is (almost) identical
> >to the client machine being a source of trouble. Which, broadly
> >speaking, is very similar to 'not trusting your client'.

Distribute keys via an enrollment process, separate from software download.

> The CA authority issued a CA certificate, server key, and server 
> certificate to the MySQL database server to use when it starts up.

The certificates are public information, and require no protection. The
server's private key was never seen by the CA, and resides only on the
MySQL server. The server certificate cannot be protected in any case,
since every TLS connection to the server provides it to the client
up-front, before any client credentials are presented. If you tell me
your server's IP address and service port, I'll email you the cert:
it CAN'T be hidden.
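
To make that concrete, the sketch below (generic OpenSSL client code of
that vintage, hypothetical host and port, error handling trimmed) fetches
and prints a server's certificate exactly the way any TLS peer can;
"openssl s_client -connect host:port" does the same from the command line.

    /* Sketch: any TLS client can retrieve the server certificate before
     * presenting any credentials of its own.  Host and port are made up. */
    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <openssl/pem.h>

    int main(void)
    {
        SSL_library_init();
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        BIO *bio = BIO_new_ssl_connect(ctx);
        SSL *ssl = NULL;

        BIO_get_ssl(bio, &ssl);
        BIO_set_conn_hostname(bio, "server.example.com:443"); /* hypothetical */
        if (BIO_do_connect(bio) <= 0 || BIO_do_handshake(bio) <= 0)
            return 1;

        /* The server's cert arrives during the handshake, in the clear. */
        X509 *cert = SSL_get_peer_certificate(ssl);
        PEM_write_X509(stdout, cert);

        X509_free(cert);
        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }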

This cert should be distributed together with the software; it will be
used by the client software to verify that the server is legitimate.
There is no need to hide or protect this cert: doing so is neither
possible nor necessary.
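
For example, with the MySQL C API (the paths and connection parameters
below are placeholders), pointing the client at the CA cert that ships
with the application might look like this:

    /* Sketch: point the MySQL client library at the CA certificate that
     * is shipped with the application, so it can verify the server.
     * Paths and connection parameters are placeholders. */
    #include <stdio.h>
    #include <mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);

        /* key/cert identify this client; ca verifies the server.  The CA
         * cert is public and is simply installed alongside the app. */
        mysql_ssl_set(conn,
                      "/path/to/client.key",   /* per-user private key    */
                      "/path/to/client.crt",   /* per-user certificate    */
                      "/path/to/ca.crt",       /* CA cert, ships with app */
                      NULL, NULL);

        if (mysql_real_connect(conn, "db.example.com", "appuser", "secret",
                               "appdb", 0, NULL, 0) == NULL) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        mysql_close(conn);
        return 0;
    }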

> My 
> client software was issued a CA certificate, a client key, and a client 
> certificate to use in establishing a connection to the server.

Is this a *single* key for all potential users? That is a mistake. It
is not possible in general to hide such a key in software distributed
to lots of users.

> The client certificates

The certificate is not secret, but the corresponding private key is
secret, and must be distinct for each client. If you are trying to
use a single key to authenticate the application and not the user,
you are back to DRM, and this is still the wrong forum.

> Do any of my client certificates contain private key data which entail a 
> security risk ?

It is common practice to say "certificate" when one really means a
private key together with the certificate that binds the corresponding
public key to an identity. The practice is unfortunate, as sometimes
one needs to be clear about whether one is talking about the public or
the private key. The practice is encouraged by the commercial CAs, who
sell you certificates that attest to the identity associated with a
given public/private key pair.
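
If in doubt about what a given PEM file actually contains, a few lines
of code settle it. This sketch (placeholder file name, minimal error
handling) reports whether the file holds a certificate, a private key,
or both:

    /* Sketch: check whether a PEM file holds a certificate, a private
     * key, or both.  The file name is a placeholder. */
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    int main(void)
    {
        const char *path = "client.pem";       /* placeholder */
        FILE *fp;

        fp = fopen(path, "r");
        if (fp == NULL)
            return 1;
        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        printf("certificate: %s\n",
               cert ? "yes (public, not secret)" : "no");
        fclose(fp);

        fp = fopen(path, "r");
        if (fp == NULL)
            return 1;
        EVP_PKEY *key = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
        printf("private key: %s\n",
               key ? "yes (secret, must be protected)" : "no");
        fclose(fp);

        if (cert != NULL)
            X509_free(cert);
        if (key != NULL)
            EVP_PKEY_free(key);
        return 0;
    }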

> It would seem that my client key has a private key, so 
> maybe that indeed needs to be protected in some way.

This is unavoidable. A public key alone can only be used to verify the
identity of a peer, not to prove one's own identity. To prove your own
identity you must sign, and signing requires the private key.
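
A minimal sketch of that asymmetry (OpenSSL 0.9.x-era EVP calls,
placeholder file names, error handling omitted): only the holder of the
private key can produce the signature, while anyone holding just the
certificate can verify it.

    /* Sketch: signing requires the private key; the certificate (public
     * key) alone can only be used to verify. */
    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    int main(void)
    {
        const unsigned char msg[] = "prove who I am";
        unsigned char sig[512];
        unsigned int siglen;
        FILE *fp;

        /* Only the holder of client.key can produce this signature. */
        fp = fopen("client.key", "r");             /* placeholder path */
        EVP_PKEY *priv = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
        fclose(fp);

        EVP_MD_CTX sctx;
        EVP_MD_CTX_init(&sctx);
        EVP_SignInit(&sctx, EVP_sha1());
        EVP_SignUpdate(&sctx, msg, sizeof(msg));
        EVP_SignFinal(&sctx, sig, &siglen, priv);
        EVP_MD_CTX_cleanup(&sctx);

        /* The peer verifies with the public key from the certificate. */
        fp = fopen("client.crt", "r");             /* placeholder path */
        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        EVP_PKEY *pub = X509_get_pubkey(cert);

        EVP_MD_CTX vctx;
        EVP_MD_CTX_init(&vctx);
        EVP_VerifyInit(&vctx, EVP_sha1());
        EVP_VerifyUpdate(&vctx, msg, sizeof(msg));
        int ok = EVP_VerifyFinal(&vctx, sig, siglen, pub);
        EVP_MD_CTX_cleanup(&vctx);

        printf("signature %s\n", ok == 1 ? "verifies" : "does not verify");
        return 0;
    }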

> Of course I would 
> be glad for you or anyone else to elucidate for me just what the server 
> certificates and client certificates entail, but I do understand the 
> gneral mechanism of public-private keys so hopefully no one will find a 
> need to talk down to me.

    - A participant in a protocol signs with their private key. This key
      is to be kept secret.

    - A participant in a protocol freely sends their public key certificate
      to the other participant(s). This is used to bind the signature to
      a CA-certified identity. The certs are not secret.

    - A participant in a protocol uses the peer's public key certificate
      to verify the signatures (authenticity) of messages; locally stored
      trusted root CA certificates are used to verify the CA's signature
      on the peer's public key cert (see the sketch below).

Each side keeps their private key private and makes their public key
freely available.
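
The verification side of the above might look roughly like this (OpenSSL
C API, placeholder file names, abbreviated error handling): the locally
installed root CA cert is the trust anchor, and the peer's cert, received
over the wire, is checked against it.

    /* Sketch: verify a peer certificate against a locally stored trusted
     * root CA certificate.  File names are placeholders. */
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509_vfy.h>

    int main(void)
    {
        /* Trusted root CA certs are installed locally with the app. */
        X509_STORE *store = X509_STORE_new();
        X509_STORE_load_locations(store, "ca.crt", NULL);   /* placeholder */

        /* The peer's certificate was received over the wire (not secret). */
        FILE *fp = fopen("peer.crt", "r");                  /* placeholder */
        X509 *peer = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);

        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        X509_STORE_CTX_init(ctx, store, peer, NULL);

        if (X509_verify_cert(ctx) == 1)
            printf("peer cert chains to a trusted CA\n");
        else
            printf("verification failed: %s\n",
                   X509_verify_cert_error_string(
                       X509_STORE_CTX_get_error(ctx)));

        X509_STORE_CTX_free(ctx);
        X509_free(peer);
        X509_STORE_free(store);
        return 0;
    }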

> "Simply jacking into your software" seems pretty difficult to me. Is it 
> really that simple for a capable cracker to inject code into the middle 
> of my application software somewhere at some time when executing to 
> change the way it works ?

Debuggers and disassemblers are quite sophisticated...

> >I hope you see this for what it is: this is where, ultimately, you
> >/have to/ trust the client (AND his machine!) or you will not be able
> >to do business (i.e. allow the client access to your server).
> 
> I agree that the client and his machine have to be trusted. The idea 
> which my employer has is to trust the client but at the same time make 
> it as difficult as possible for a client who might be a software cracker 
> to use the client software to break into the server database. Perhaps 
> that is a contradiction but I do not see it as such per se.

What does "break into the server database" mean? If the database only
allows each user to see and do exactly what they can see and do via
the application, then not using your application is not a break-in,
and leakage of the client keys is not a threat to the server. If the
database security controls reside in the client, and the application
has full access to the database but chooses to exercise only a limited
subset of that access, your design is flawed.

> >/proper/ implementation of certificate validation/checking in
> >user-level code, i.e. in the code written by us who use OpenSSL.
> 
> I could probably go deep into OpenSSL [ ... ]

The SSL/TLS protocol and APIs are not relevant to this discussion; you
are asking questions about key management and risk models, not SSL
communications security. Such questions are protocol independent (given
a sufficiently strong protocol).


> because I have been programming using the MySQL client C++ API, which so 
> far hides SSL issues from me.

You seem to have a 2-tier application, where users connect directly to
the database server rather than to a middleware application server that
imposes appropriate access-control restrictions on each user.

If this (2-tier model) is indeed the case, and the server is not limiting
user access via carefully designed stored procedures (and no access to
direct SELECT, INSERT, ... statements), the problem is the application
design and not the key management; the only fix is to redesign the
application. Protecting client-side business
logic in the hands of untrusted users is a lost cause.
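
For contrast, a sketch of the restricted 2-tier variant (MySQL C API;
the account, schema, procedure and grants below are hypothetical): the
client's database account can only EXECUTE a handful of stored
procedures, so even a leaked client credential exposes nothing beyond
what those procedures allow.

    /* Sketch: the client account has EXECUTE rights on a few stored
     * procedures and nothing else.  Names and schema are hypothetical. */
    #include <stdio.h>
    #include <mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (mysql_real_connect(conn, "db.example.com", "appuser", "secret",
                               "appdb", 0, NULL, 0) == NULL)
            return 1;

        /* Server side, done once by the DBA (not by the client):
         *   GRANT EXECUTE ON PROCEDURE appdb.place_order TO 'appuser'@'%';
         *   -- and no SELECT/INSERT/UPDATE/DELETE grants at all.
         */

        /* The client can only do what place_order() permits. */
        if (mysql_query(conn, "CALL place_order(42, 1)") != 0)
            fprintf(stderr, "query failed: %s\n", mysql_error(conn));

        mysql_close(conn);
        return 0;
    }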

> >3) or... you can decide that server-damage is too much of a risk and
> >get your 'way out' for /that/ particular scenario by simply /not/
> >connecting to a server /at all/, but, instead, providing the client
> >with a copy of your database. Updates are feasible as well, as long as
> >you send them to the client, so no database 'rsync'/'auto-update',
> >unless you provide such a service from a dedicated machine, which in
> >no way is ever connected/connectable to your database server (read:
> >manual one-way transfers, no physical network connectivity).
> 
> No this is not acceptable. It is a client-server application and we need 
> to keep it that way for the functionality to be effective. We can not be 
> the only software in the world looking to run a secure commerical 
> client-server application.

All the successful ones that operate on open networks are 3-tier. 2-tier
applications cannot be made secure unless the database encapsulates the
security controls via stored procedures and blocks all other database
access.

> Money may be less of an issue than ease of use. This is a commercial 
> application which however needs good security to protect the database 
> data. There is no way the application is going to be sold to the public 
> with some type of hardware security mechanism. I thought such mechanisms 
> went out with the dinosaurs.

Also consider that middleware protects the database from resource
starvation via connection pooling and other means. Direct database
connections from clients over public networks are a poor design.

> Of course you can send queries if you can get through. The certificates 
> issue I brought up in my OP was simply to determine if others thought 
> that hiding the certificates in some way (encrypting them, putting them 
> in resources, loading the resources and decrypting them on-the-fly ) was 
> a worthwhile security method that actually accomplished anything.

This is doomed to fail if users are motivated to recover the keys. Making
this difficult/expensive to the motivated attacker is what DRM tries
to do.

-- 
        Viktor.
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
