Ger Hobbelt wrote:
Mr. Diener,

What the folks here are saying is that your current scenario is a
catch-22: unless at least one part of the requirements (as perceived by
your client) is changed, there is no way out.
To put it in the terms used in the discussion so far: this fact turns
any answer into "security theater".

Let me try my hand at explaining this, based on your original scenario
description:

On Wed, Dec 24, 2008 at 1:54 PM, Edward Diener <el...@tropicsoft.com> wrote:
In a client application communicating with a MySQL server, I am using
SSL to encrypt/decrypt data sent to and from the database. This requires
me to have the PEMs for the CA, client key, and client certificate
distributed as part of the application. Of course these certificates
will not work except with the corresponding server certificates on the
MySQL server to which I am communicating.

My initial choice was to distribute the client certificates in the same
directory as the application's modules, as they are easy to find at
run-time there in order to make my SSL connection with the database. It
has been suggested to me that this is inherently insecure. Nonetheless I
must distribute them somewhere since the certificates have to exist in
the file system when I make the call at run-time to create an SSL
connection to the server.
[...+...]
I am working for an employer who will be selling a product to end users.
The risk model is that my employer feels it would be bad if a hacker were
able to easily understand where the client certs reside in the end user
application and were able to use the client certs to communicate to the
server, i.e. if someone who buys the product were able to use the client
certs in a destructive way. My employer has also been told by a Sun
representative he knows that if the client certs are distributed in the
directory of the application it is a serious security risk. So he has asked
me to investigate alternative ways of distributing the client certs.


The above is why Victor points out this is a DRM problem.

Your risk model says: "the clients are not to be trusted".
Meanwhile, your business model (the software you are selling them)
requires that those 'untrustworthy individuals' have access to your
database/server - or the application you are creating/have created
won't work.

That is a clear-cut case of catch-22.

If I can get a little finicky, the application needs access to the database/server. Nobody else should be accessing it. But I am sure that is what you meant.

The clients are to be trusted to use the application. My employer, not I, felt that putting the certificates where a client, who might be a destructive hacker, could easily access them, might be a security risk. He was told this by a Sun rep whom he knows, so naturally he wanted me to find out if this was the case.

I come to this forum to ask, and what I get, not from you, is some good information mixed with a great deal of ego: people wanting to tell me that only "security experts" are qualified to understand the deep issues involved, and that, since I have not proved myself a security expert simply by asking my question, I should not be dealing with security problems at all, much less this current simple one.


Sorry for the 'general response' now - I'll get a bit more detailed
further down:

You have basically two ways out of here:

1) you can state that you trust your clients. (Thus replacing your risk model.)
Consequences:

a) Assuming your clients are to be trusted (I wouldn't do business
with untrustworthy ones, right? ;-) ), the risks are that, through
inability, accident or neglect, Bad Crackers(tm) gain access to those
client certificates and thus -- assuming, as we should, that Bad
Crackers know your system inside out -- can access your server and do
/anything/ the (now compromised) client could do as well. Extremely
Clever Bad Crackers might even be able to gain additional server-side
access if server-side protection is not up to snuff.

b) You can distribute those client certificates with the software as
you were planning before.

Hence the issue becomes the question: how can we make sure the client
cannot accidentally 'lose his keys'?

If the client "loses", ie. destroys or changes, the certs, the application will not work. That is fine by my employer since it is much the same as the client "destroying" a DLL distributed with the application. The client has messed up and does not deserve a working application.


Which, in a way, makes this a variant of #2 here:

2) You can treat the scenario as a DRM issue -- the question is, after
all, very similar to: "how can I sell software without having pirated
copies floating around the net within X days?" (Note the 'X': putting
a number there is a strategic management decision, as it directly
translates to the amount of (known fallible) DRM/protection effort you
are willing to spend.)

We do have registration protection, which is a separate issue but which should prevent piracy. I am not concerned with that.


Possible answer: give the client keys enclosed in dedicated hardware,
as not trusting the user not to lose his marbles is (almost) identical
to treating the client machine as a source of trouble. Which, broadly
speaking, is very similar to 'not trusting your client'.

We do not want to go that route.



For software protection, the higher end keyword is 'dongles'; the same
keyword applies here: store all client-private keys in a dongle (which
is quite closely related to OpenSSL 'engines'! ;-) )
And most importantly: all 'private key'-related operations should be
performed in such 'secure hardware' -- that's where the OpenSSL
engines come in. (Because once a private client key travels outside
that secure hardware, it is, de facto, 'compromised', i.e. Bad
Crackers will be able to steal it.)
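To make that 'engine' remark concrete, a minimal sketch of what the
OpenSSL side could look like. The engine id "pkcs11" and key id
"token_key_0" below are hypothetical; the real values come from the
dongle vendor's engine:

    #include <openssl/engine.h>
    #include <openssl/ssl.h>

    /* Sketch only: engine id and key id are placeholders. */
    SSL_CTX *ctx_with_dongle_key(SSL_CTX *ctx)
    {
        ENGINE_load_builtin_engines();
        ENGINE *e = ENGINE_by_id("pkcs11");
        if (!e || !ENGINE_init(e))
            return NULL;

        /* The private key never leaves the hardware: we only get an
         * EVP_PKEY handle that delegates operations to the engine. */
        EVP_PKEY *key = ENGINE_load_private_key(e, "token_key_0",
                                                NULL, NULL);
        if (!key || !SSL_CTX_use_PrivateKey(ctx, key))
            return NULL;

        return ctx;
    }

The point of the sketch: the application only ever holds a handle; the
key material itself stays inside the hardware.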

By 'dongle' do you mean a hardware dongle? If it is a software dongle, you need to spell out for me what you mean.



Before you run off (I did not give you a sufficient answer, despite
the looks of it ;-) ), consider a few things here:

- as I said, anything 'private key related' (that is: data PLUS
calculations) should ultimately be executed in secure hardware.
Anything less is knowingly fudging it and thus inherently flawed. (See
a few notes below for a comparative thought.)

- Any 'public key' material, e.g. the CA and other certs for your
server are fine in public storage. After all, that's what public keys
and certificates are for (note: I assume here the certificates only
carry public key data: client public key for sending to server; CA and
server public key cert for verification of server by client software,
etc.).
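If in doubt which of those PEM files actually carry private material,
you can simply inspect them. A quick sketch (the file path argument is
illustrative):

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    /* Sketch: report what a PEM file contains. Anything that yields a
     * private key is the part worth protecting; certificates are
     * public material. */
    int inspect_pem(const char *path)
    {
        FILE *fp = fopen(path, "r");
        if (!fp) return -1;

        EVP_PKEY *pkey = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
        if (pkey) {
            printf("%s: contains a PRIVATE key\n", path);
            EVP_PKEY_free(pkey);
        }
        rewind(fp);
        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        if (cert) {
            printf("%s: contains a certificate (public)\n", path);
            X509_free(cert);
        }
        fclose(fp);
        return 0;
    }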

I admit confusion on my part here. So perhaps you can explain it to me, since others seem to feel I am too dumb to be given a clear answer by the security gods on high. I can assure you I am not.

The CA issued a CA certificate, server key, and server certificate to the MySQL database server to use when it starts up. My client software was issued a CA certificate, a client key, and a client certificate to use in establishing a connection to the server. The client certificates must be on the file system when I make the call to the MySQL client library routine which sets the certificate paths and the encryption algorithm type just before making an individual connection.
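For concreteness, underneath my C++ wrapper this presumably boils down
to the MySQL C API calls below (paths, host, credentials, and cipher
are illustrative placeholders):

    #include <stdio.h>
    #include <mysql.h>

    /* Illustrative only: file names and connection details are
     * placeholders, not our real values. */
    MYSQL *connect_over_ssl(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (!conn) return NULL;

        /* The key, cert, and CA cert must exist as files at run time. */
        mysql_ssl_set(conn,
                      "client-key.pem",    /* client private key  */
                      "client-cert.pem",   /* client certificate  */
                      "ca-cert.pem",       /* CA certificate      */
                      NULL,                /* capath              */
                      "DHE-RSA-AES256-SHA" /* cipher to use       */);

        if (!mysql_real_connect(conn, "db.example.com", "user",
                                "password", "mydb", 3306, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            mysql_close(conn);
            return NULL;
        }
        return conn;
    }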

Do any of my client certificates contain private key data which entails a security risk? It would seem that my client key file has a private key, so maybe that indeed needs to be protected in some way. Of course I would be glad for you or anyone else to elucidate for me just what the server certificates and client certificates entail, but I do understand the general mechanism of public-private keys, so hopefully no one will find a need to talk down to me.



Additional thoughts:

1) Once a 'secure connection' has been set up, using such 'secure
hardware', a capable cracker will be able to 'piggy back' on top of
that connection and do his own thing while your client does his dance
in the server-side database at the same time. (Mister cracker does not
need to break OpenSSL/crypto for that! Simply jacking into your
software at the proper spot is good enough.)

"Simply jacking into your software" seems pretty difficult to me. Is it really that simple for a capable cracker to inject code into the middle of my application software somewhere at some time when executing to change the way it works ? My experience as a programmer tells me it is not. I would really love to see a destructive hacker do this. It might be an education for me, but I admit I am skeptical.


THIS is the next (and UNsolvable) level of security risk: by stating
that the client and/or client machine is a 'threat', you are trying to
run 'secured' connection end-points (!) from within a hostile
environment after all -- no matter the quality of your 'secure
hardware' which helped /set up/ that connection.

I hope you see this for what it is: this is where, ultimately, you
/have to/ trust the client (AND his machine!) or you will not be able
to do business (i.e. allow the client access to your server).

I agree that the client and his machine have to be trusted. The idea which my employer has is to trust the client but at the same time make it as difficult as possible for a client who might be a software cracker to use the client software to break into the server database. Perhaps that is a contradiction but I do not see it as such per se.


In other words: time to decide how much money (equipment) and effort
(more money) your company is willing to spend on this 'war on
crackers' - which, by definition, you will lose. You only need one
determined, capable cracker and your system is dead. Once again. (Note
that the whole idea of 'allowing clients with security risks access'
is flawed from the very start.)


2) Given the above, ponder why security-aware military systems are
locked into secure buildings, only permitting screened personnel
access while using /physically/ secured connectivity (while, of
course, also employing software to secure communications) and ditto
locations.

How far are you willing to take this game?

We do not want to take the game into the extreme areas of hardware protection.



3) A business example. In certain circles, Steinberg (Cubase) is quite
famous for their software protection efforts. They have been using
dongles for a long time. They have used sophisticated cryptography for
a long time too. They have taken the DRM issue so far it actually
slows down the software proper: a zillion checks, spread all over the
place, which try to make sure no Evil Bastard is using it nor even
poking around.

Yet. Pirated copies of that software still float around the net,
and are actively used by people. Thus proving in the real world that
the mightiest protection effort is useless -- WHEN YOUR RISK MODEL
STATES THE CLIENTS ARE A SECURITY RISK.

Why is this comparable to your situation? You (your client) now use
the same risk model as Steinberg et al.: a client can 'lose his keys'
and should be prevented from doing so at all costs. (Okay, I put that
last bit in your mouth to show you what taking this to the limit
means: an incredibly high number for that X value above -- which
should include additional maintenance/support calls and a /necessarily
huge/ test effort!)

Again, clients who "lose" their certs are not our concern.



The point?

Either you stick to the model and make a strategic decision about how
far you are willing to take this dance, this 'security theater'. Because
that's what it is. That's what Steinberg, Autodesk, Microsoft (WGA,
anyone? another case of 'protect client key' while 'we do not trust
client') and a lot of other big and small corporations are doing all
the time. The dance floor is packed, and you are implicitly asked to
join as well. From the start, it is a flawed approach, thus 'security
theater', 'arms race', whatever you call it. Bottom line: it's not
about 'providing security' (as that is known to be infeasible given
your risk model), it's a 'DRM dance'. As Victor, David, Kyle, Michel
(did I miss anyone?) already pointed out.

The point I want to make is that others seem to feel my original OP was somehow meant to be provocative, while it was actually an innocent attempt to ask a serious question that was on my employer's mind.



So it's back to square one:

1) either you adjust your risk model (i.e. trust the client), THEN sit
down with your client to discuss how you are both going to make sure
any client can only do what he's allowed to do server-side (this has
also been mentioned by the others), PLUS you decide on a course of
action (SOP: Standard Operating Procedure) for when the trust is
broken. What to do when the server is damaged? How to check if the
server is damaged (how will you know trust was broken)? Technical details
like 'data and installation backups and comparative (scheduled?)
restore actions', 'certificate revocation', 'certificate expiry' (the
latter a means to put a duration on your trust, allowing you to 'renew
your mutual trust' in a controlled manner), etc. See also Victor
mentioning (in another thread) the often absent, yet oh so desirable,
/proper/ implementation of certificate validation/checking in
user-level code, i.e. in the code written by us who use OpenSSL.
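For illustration, a minimal sketch of the kind of user-level check
meant here (the expected common name is an illustrative placeholder):
after the handshake, check both the chain verification result and that
the peer is actually the one you intended to talk to:

    #include <string.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    /* Sketch: post-handshake checks that user-level code often skips. */
    int peer_is_trusted(SSL *ssl, const char *expected_cn)
    {
        if (SSL_get_verify_result(ssl) != X509_V_OK)
            return 0;                   /* chain did not verify */

        X509 *cert = SSL_get_peer_certificate(ssl);
        if (!cert)
            return 0;                   /* peer sent no certificate */

        char cn[256] = {0};
        X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                                  NID_commonName, cn, sizeof(cn) - 1);
        int ok = (strcmp(cn, expected_cn) == 0);
        X509_free(cert);
        return ok;
    }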

I could probably go deep into OpenSSL but I am trying to avoid it because I have been programming using the MySQL client C++ API, which so far hides SSL issues from me. I also find digging information out of the OpenSSL programming documentation to be really daunting. The only programming link I could find was at http://www.columbia.edu/~ariel/ssleay/. Yes, there is some information at http://www.openssl.org/docs/ but it is so chaotic as to be almost completely unusable IMO. Maybe I just need to get a good book about SSL and/or OpenSSL if I can find one. I will accept any recommendations.

If you would like to point me to OpenSSL programming functionality which allows me to manipulate the client certificates, or even OpenSSL documentation which lets me understand SSL certificates better in general, I will be glad to take a look at it.


2) or you take the 'software protection route', while knowing it's
flawed by design. Decide on the amount X you are willing to spend on
this dance, based on estimates of cost in case your flawed scheme
attracts a sufficiently capable cracker. (And it sure will. There are
ways to 'buy' a custom crack, if your (client's) ethics enable him to.
Sometimes it is sufficient to point out that your software is 'hard to
crack' (you'll get lots of derisive laughter when you say it's
'uncrackable' though).) And for you, 'cracking software' is equivalent
to 'cracking your client keys'. (And I don't have to, if I can find a
way to piggy-back on a 'good' client.)

I am still not sure what you mean by this route, but if it is a hardware solution we will forgo it right now. Besides, I am a software programmer/designer and you know how programmers feel about "hardware" solutions <g>.


3) or... you can decide that server-damage is too much of a risk and
get your 'way out' for /that/ particular scenario by simply /not/
connecting to a server /at all/, but, instead, providing the client
with a copy of your database. Updates are feasible as well, as long as
you send them to the client, so no database 'rsync'/'auto-update',
unless you provide such a service from a dedicated machine, which in
no way is ever connected/connectable to your database server (read:
manual one-way transfers, no physical network connectivity).

No, this is not acceptable. It is a client-server application and we need to keep it that way for the functionality to be effective. We cannot be the only software in the world looking to run a secure commercial client-server application.


Of course, this #3 protects your database *server* from becoming
compromised, yet it does /not/ protect the /data/: if you need to do
that as well, you're back to the DRM dance all over again.





In closing, a tiny question:

Is it commercially viable to supply /each/ client with a
(personalized!) hardware dongle (which includes an (OpenSSL-compatible)
crypto engine -- see above)?

No.


If the answer is no, that tells us a lot about the amount of X you can
spend on the DRM dance. It will be so low, you should actually ask
yourself if you should spend the money at all. (The suggestions from
previous emails for 'providing warm and fuzzy feelings', such as a
simple password-protected software store, come to mind.)
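A sketch of what such a 'warm and fuzzy' store could look like: keep
the client key passphrase-encrypted on disk. (Note the passphrase still
has to live somewhere the application can reach, so this is
obfuscation, not security.)

    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    /* Sketch: write the client key as an AES-encrypted PEM instead of
     * in the clear. The passphrase still ships with the application. */
    int store_key_encrypted(EVP_PKEY *pkey, const char *path,
                            const char *pass)
    {
        FILE *fp = fopen(path, "w");
        if (!fp) return 0;

        int ok = PEM_write_PrivateKey(fp, pkey, EVP_aes_256_cbc(),
                                      (unsigned char *)pass,
                                      (int)strlen(pass), NULL, NULL);
        fclose(fp);
        return ok;
    }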

Money may be less of an issue than ease of use. This is a commercial application which nevertheless needs good security to protect the database data. There is no way the application is going to be sold to the public with some type of hardware security mechanism. I thought such mechanisms went out with the dinosaurs.


If the answer is yes: the next question is: "would that make you feel
safe now?" -- it tells us that the need is sufficient to mandate a
round of 'square one #1 + #2 + #3' discussion with your client, then
take it from there. Always remember though: as long as you don't trust
the client AND his machinery, you are dancing the DRM dance, i.e.
running down a path which is a struggle with Evil on /equal terms/.
(Some would argue Evil will have the better hand, and I agree.)
Cryptography is a means, given /very/ particular mandatory conditions,
to make those terms UNequal, in your favour. But then, you /will/ have
to trust somebody at the other end.


It is very easy to say storing certificates in files with the
application is insecure. I'm terribly sorry, but my granny could have
told you that cliche. The real questions are: (a) what do you
want/need to protect? (b) And who do you trust?

Neither question is answered by stating 'certificates in files are
insecure'. (Besides, it's only valid when one or more certificates or
other entities carry private keys. The client needs his own private
key, so it'll reside somewhere in that collection of files, and
ultimately, it is the only piece worth protecting. Apart from the
connection itself: remind yourself about my 'piggy-backing': if you
can send queries, *I* can send queries. And I am The Evil Bastard
sitting in your (untrusted) box. For this occasion. ;-) )

Of course you can send queries if you can get through. The certificates issue I brought up in my OP was simply to determine whether others thought that hiding the certificates in some way (encrypting them, putting them in resources, loading the resources and decrypting them on-the-fly) was a worthwhile security method that actually accomplished anything. It was not an attempt to claim that this encompassed an entire security solution for everything that involved the client-server application.
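To illustrate what I had in mind (a sketch only; since the MySQL client
library wants file paths, this would only apply if the SSL setup were
done through OpenSSL directly): the embedded, decrypted PEM data would
be handed to OpenSSL from memory via a memory BIO, never touching the
disk:

    #include <openssl/bio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    /* Sketch of the "decrypt on the fly" idea: PEM data embedded in
     * the binary as a resource and decrypted in memory is fed to
     * OpenSSL through a memory BIO instead of a file on disk. */
    X509 *cert_from_memory(const char *pem_data, int len)
    {
        BIO *bio = BIO_new_mem_buf((void *)pem_data, len);
        if (!bio) return NULL;

        X509 *cert = PEM_read_bio_X509(bio, NULL, NULL, NULL);
        BIO_free(bio);
        return cert;
    }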




Did this help you?

Yes, it did, and I appreciate the time you took to answer my initial question.
