Mr. Diener,

What the folks here are saying is that your current scenario is a
catch-22: unless at least one part of the requirements (as perceived by
your client) is changed, there is no way out.
To put it in the terms already used in this discussion: this fact turns
any answer into 'security theater'.

Let me try my hand at explaining this, based on your original scenario
description:

On Wed, Dec 24, 2008 at 1:54 PM, Edward Diener <el...@tropicsoft.com> wrote:
> In a client application communicating with a MySQL server, I am using
> SSL to encrypt/decrypt data sent to and from the database. This requires
> me to have the PEMs for the CA, client key, and client certificate
> distributed as part of the application. Of course these certificates
> will not work except with the corresponding server certificates on the
> MySQL server to which I am communicating.
>
> My initial choice was to distribute the client certificates in the same
> directory as the application's modules, as they are easy to find at
> run-time there in order to make my SSL connection with the database. It
> has been suggested to me that this is inherently insecure. Nonetheless I
> must distribute them somewhere since the certificates have to exist in
> the file system when I make the call at run-time to create a SSL
> connection to the server.
[...+...]
> I am working for an employer who will be selling a product to end users.
> The risk model is that my employer feels it would be bad if a hacker were
> able to easily understand where the client certs reside in the end user
> application and were able to use the client certs to communicate to the
> server, ie. if someone who buys the product were able to use the client
> certs in a destructive way. My employer has also been told by a Sun
> representative he knows that if the client certs are distributed in the
> directory of the application it is a serious security risk. So he has asked
> me to investigate alternative ways of distributing the client certs.
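
For concreteness, the baseline you describe looks roughly like the
sketch below, using the MySQL C API. Host, credentials and file names
are placeholder values, of course:

#include <mysql.h>
#include <stdio.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return 1;

    /* the three PEMs from the scenario, shipped in the app directory */
    mysql_ssl_set(conn,
                  "client-key.pem",   /* client private key */
                  "client-cert.pem",  /* client certificate */
                  "ca-cert.pem",      /* CA certificate     */
                  NULL, NULL);

    if (!mysql_real_connect(conn, "db.example.com", "appuser", "apppass",
                            "appdb", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }
    /* ... queries over the encrypted connection ... */
    mysql_close(conn);
    return 0;
}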


That scenario is why Victor points out this is a DRM problem.

Your risk model says: "the clients are not to be trusted".
Meanwhile, your business model (the software you are selling them)
requires that those 'untrustworthy individuals' have access to your
database/server -- or the application you are creating/have created
won't work.

That is a clear-cut case of catch-22.

Sorry for the 'general response' now - I'll get a bit more detailed
further down:

You have basically two ways out of here:

1) you can state that you trust your clients. (Thus replacing your risk model.)
Consequences:

a) Assuming your clients are to be trusted (you wouldn't do business
with untrustworthy ones, right? ;-) ), the risk is that, through
inability, accident or neglect, Bad Crackers(tm) gain access to those
client certificates and thus -- assuming, as we should, that Bad
Crackers know your system inside out -- can access your server and do
/anything/ the (now compromised) client could do as well. Extremely
Clever Bad Crackers might even be able to gain additional server-side
access if server-side protection is not up to snuff.

b) You can distribute those client certificates with the software as
you were planning before.

Hence the issue becomes the question: how can we make sure the client
cannot accidentally 'lose his keys'?

Which, in a way, makes this a variant of #2 here:

2) You can treat the scenario as a DRM issue -- the question is, after
all, very similar to: "how can I sell software without having pirated
copies floating around the net within X days?" (Note the 'X': putting
a number there is a strategic management decision, as it directly
translates to the amount of (known fallible) DRM/protection effort you
are willing to spend.)

Possible answer: ship the client keys enclosed in dedicated hardware.
After all, not trusting the user to hold on to his keys is (almost)
identical to treating the client machine as a source of trouble --
which, broadly speaking, is just another way of saying 'not trusting
your client'.


For software protection, the higher-end keyword is 'dongles'; the same
keyword applies here: store all client private keys in a dongle (which
is quite closely related to OpenSSL 'engines'! ;-) )
And most importantly: all private-key operations should be
performed inside such 'secure hardware' -- that's where the OpenSSL
engines come in. (Because once a private client key travels outside
that secure hardware it is, de facto, 'compromised', i.e. Bad
Crackers will be able to steal it.)
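
A rough sketch of what that looks like in code, assuming a
vendor-supplied OpenSSL engine; the engine id "dongle" and the key id
"client-key-0" are hypothetical placeholders for whatever your
hardware vendor provides:

#include <openssl/engine.h>
#include <openssl/ssl.h>

EVP_PKEY *load_hw_key(SSL_CTX *ctx)
{
    ENGINE *e;
    EVP_PKEY *pkey;

    ENGINE_load_builtin_engines();
    e = ENGINE_by_id("dongle");       /* hypothetical vendor engine */
    if (e == NULL)
        return NULL;
    if (!ENGINE_init(e)) {
        ENGINE_free(e);
        return NULL;
    }

    /* The private key stays inside the token; we only get a handle. */
    pkey = ENGINE_load_private_key(e, "client-key-0", NULL, NULL);
    if (pkey != NULL)
        SSL_CTX_use_PrivateKey(ctx, pkey);

    ENGINE_finish(e);   /* the key keeps its own engine reference */
    ENGINE_free(e);
    return pkey;
}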


Before you run off (I did not give you a sufficient answer, despite
the looks of it ;-) ), consider a few things here:

- as I said, anything private-key related (that is: the data PLUS the
calculations using it) should ultimately be executed in secure hardware.
Anything less is knowingly fudging it and thus inherently flawed. (See
the notes below for comparison.)

- Any 'public key' material, e.g. the CA and other certs for your
server, is fine in public storage. After all, that's what public keys
and certificates are for (note: I assume here the certificates only
carry public key data: the client public key for sending to the server;
the CA and server public key certs for verification of the server by
the client software, etc.).
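
For example, loading that public CA material for server verification
-- a sketch against the plain OpenSSL API, with a placeholder path;
nothing here needs to be secret:

#include <openssl/ssl.h>

SSL_CTX *make_client_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    if (ctx == NULL)
        return NULL;

    /* Trust anchor: public data, no secrecy required. */
    if (!SSL_CTX_load_verify_locations(ctx, "ca-cert.pem", NULL)) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    /* Refuse to talk to servers that do not present a valid cert. */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    return ctx;
}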


Additional thoughts:

1) Once a 'secure connection' has been set up, using such 'secure
hardware', a capable cracker will be able to 'piggy back' on top of
that connection and do his own thing while your client does his dance
in the server-side database at the same time. (Mister cracker does not
need to break OpenSSL/crypto for that! Simply jacking into your
software at the proper spot is good enough.)

THIS is the next (and UNsolvable) level of security risk: by stating
that the client and/or client machine is a 'threat', you are trying to
run 'secured' connection end-points (!) from within a hostile
environment after all -- no matter the quality of the 'secure
hardware' which helped /set up/ that connection.

I hope you see this for what it is: this is where, ultimately, you
/have to/ trust the client (AND his machine!) or you will not be able
to do business (i.e. allow the client access to your server).

In other words: time to decide how much money (equipment) and effort
(more money) your company is willing to spend on this 'war on
crackers' -- which, by definition, you will lose. You only need one
determined, capable cracker and your system is dead. Once again. (Note
that the whole idea of 'allowing clients who are security risks access'
is flawed from the very start.)


2) Given the above, ponder why security-aware military systems are
locked inside secure buildings at secured locations, only permitting
screened personnel access, while using /physically/ secured
connectivity (and, of course, also employing software to secure
communications).

How far are you willing to take this game?


3) A business example. In certain circles, Steinberg (Cubase) is quite
famous for their software protection efforts. They have been using
dongles for a long time. They have used sophisticated cryptography for
a long time too. They have taken the DRM issue so far that it actually
slows down the software proper: a zillion checks, spread all over the
place, which try to make sure no Evil Bastard is using it or even
poking around.

Yet pirated copies of that software still float around the net and are
actively used. Thus proving, in the real world, that
the mightiest protection effort is useless -- WHEN YOUR RISK MODEL
STATES THE CLIENTS ARE A SECURITY RISK.

Why is this comparable to your situation? You (your client) now use
the same risk model as Steinberg et al.: a client can 'lose his keys'
and should be prevented from doing so at all costs. (Okay, I put that
last bit in your mouth to show you what taking this to the limit
means: an absurdly high number for that X value above -- which
should include additional maintenance/support calls and a /necessarily
huge/ test effort!)


The point?

You can stick to the model and make a strategic decision about how far
you are willing to take this dance, this 'security theater'. Because
that's what it is. That's what Steinberg, Autodesk, Microsoft (WGA,
anyone? another case of 'protect the client key' while 'we do not trust
the client') and a lot of other big and small corporations are doing all
the time. The dance floor is packed, and you are implicitly asked to
join as well. From the start it is a flawed approach, hence 'security
theater', 'arms race', whatever you call it. Bottom line: it's not
about 'providing security' (as that is known to be infeasible given
your risk model), it's a 'DRM dance'. As Victor, David, Kyle, Michel
(did I miss anyone?) already pointed out.


So it's back to square one:

1) either you adjust your risk model (i.e. trust the client), THEN sit
down with your client to discuss (a) how you are both going to make
sure any client can only do what he's allowed to do server-side (this
has also been mentioned by the others), PLUS (b) a course of
action (SOP: Standard Operating Procedure) for when the trust is
broken. What to do when the server is damaged? How to check whether the
server is damaged (how will you know trust was broken)? Technical
details like 'data and installation backups and comparative (scheduled?)
restore actions', 'certificate revocation', 'certificate expiry' (the
latter a means to put a duration on your trust, allowing you to 'renew
your mutual trust' in a controlled manner), etc. See also Victor
mentioning (in another thread) the often absent, yet oh so desirable,
/proper/ implementation of certificate validation/checking in
user-level code, i.e. in the code written by us who use OpenSSL (the
first sketch after this list shows the kind of checks meant).

2) or you take the 'software protection route', while knowing it's
flawed by design. Decide on the amount X you are willing to spend on
this dance, based on estimates of the cost in case your flawed scheme
attracts a sufficiently capable cracker. (And it surely will. There are
ways to 'buy' a custom crack, if your (client's) ethics permit it.
Sometimes it is sufficient to point out that your software is 'hard to
crack' (you'll get lots of derisive laughter when you say it's
'uncrackable' though).) And for you, 'cracking the software' is
equivalent to 'cracking your client keys'. (And I don't even have to,
if I can find a way to piggy-back on a 'good' client.)

3) or... you can decide that server damage is too much of a risk and
get your 'way out' of /that/ particular scenario by simply /not/
connecting to a server /at all/, but, instead, providing the client
with a copy of your database. Updates are feasible as well, as long as
you send them to the client -- so no database 'rsync'/'auto-update',
unless you provide such a service from a dedicated machine which is in
no way ever connected/connectable to your database server (read:
manual one-way transfers, no physical network connectivity; the second
sketch below shows how such an update might be authenticated).

Of course, this #3 protects your database *server* from becoming
compromised, yet it does /not/ protect the /data/: if you need to do
that as well, you're back to the DRM dance all over again.
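
Here is the first sketch, re #1: the kind of post-handshake checks so
often missing from user-level code. It assumes a raw OpenSSL
connection; the hostname comparison is deliberately simplistic (exact
CN match only), real code must handle subjectAltName as well:

#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <string.h>

int verify_server(SSL *ssl, const char *expected_host)
{
    X509 *peer;
    char cn[256];

    if (SSL_get_verify_result(ssl) != X509_V_OK)
        return 0;                 /* chain did not validate */

    peer = SSL_get_peer_certificate(ssl);
    if (peer == NULL)
        return 0;                 /* no certificate presented at all! */

    if (X509_NAME_get_text_by_NID(X509_get_subject_name(peer),
                                  NID_commonName, cn, sizeof(cn)) < 0) {
        X509_free(peer);
        return 0;
    }
    X509_free(peer);

    /* naive: exact CN match only */
    return strcmp(cn, expected_host) == 0;
}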
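
And the second sketch, re #3: verifying a manually transferred, signed
database update before applying it. The detached-signature scheme and
the file layout are my assumptions, not something from your scenario;
the publisher's public key may ship with the application (public data
again):

#include <openssl/evp.h>
#include <openssl/pem.h>
#include <stdio.h>

int verify_update(const char *datafile, const char *sigfile,
                  const char *pubkeyfile)
{
    FILE *fp;
    EVP_PKEY *pub;
    EVP_MD_CTX ctx;
    unsigned char buf[4096], sig[1024];
    size_t n;
    int siglen, ok = 0;

    /* publisher's public key: public data, may ship with the app */
    if ((fp = fopen(pubkeyfile, "r")) == NULL)
        return 0;
    pub = PEM_read_PUBKEY(fp, NULL, NULL, NULL);
    fclose(fp);
    if (pub == NULL)
        return 0;

    /* detached signature, created on the (offline) publishing box */
    if ((fp = fopen(sigfile, "rb")) == NULL)
        goto done;
    siglen = (int)fread(sig, 1, sizeof(sig), fp);
    fclose(fp);

    EVP_MD_CTX_init(&ctx);
    EVP_VerifyInit(&ctx, EVP_sha1());
    if ((fp = fopen(datafile, "rb")) == NULL) {
        EVP_MD_CTX_cleanup(&ctx);
        goto done;
    }
    while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
        EVP_VerifyUpdate(&ctx, buf, n);
    fclose(fp);

    ok = (EVP_VerifyFinal(&ctx, sig, (unsigned int)siglen, pub) == 1);
    EVP_MD_CTX_cleanup(&ctx);
done:
    EVP_PKEY_free(pub);
    return ok;
}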

In closing, a tiny question:

Is it commercially viable to supply /each/ client with a
(personalized!) hardware dongle (which includes an (OpenSSL-compatible)
crypto engine -- see above)?

If the answer is no, that tells us a lot about the size of the X you can
spend on the DRM dance. It will be so low that you should actually ask
yourself whether you should spend the money at all. (The suggestions from
previous emails for 'providing warm and fuzzy feelings', such as a
simple password-protected software store, come to mind; a sketch of
that follows below.)
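
Such a 'warm and fuzzy' store can be as simple as keeping the client
key passphrase-encrypted on disk (standard PEM encryption) and
decrypting it only in memory. A sketch, with a placeholder file name
and a deliberately naive prompt (a real application would disable echo,
etc.):

#include <openssl/pem.h>
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

static int ask_pass(char *buf, int size, int rwflag, void *userdata)
{
    (void)rwflag; (void)userdata;
    printf("Key passphrase: ");
    if (fgets(buf, size, stdin) == NULL)
        return 0;
    buf[strcspn(buf, "\n")] = '\0';
    return (int)strlen(buf);
}

EVP_PKEY *load_protected_key(void)
{
    FILE *fp = fopen("client-key.pem", "r");
    EVP_PKEY *key;

    if (fp == NULL)
        return NULL;
    /* PEM_read_PrivateKey calls ask_pass to decrypt the key material */
    key = PEM_read_PrivateKey(fp, NULL, ask_pass, NULL);
    fclose(fp);
    return key;
}

It deters casual copying, nothing more: the decrypted key still lives
in the memory of an untrusted box.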

If the answer is yes, the next question is: "would that make you feel
safe now?" -- it tells us that the need is sufficient to mandate a
round of 'square one #1 + #2 + #3' discussions with your client; then
take it from there. Always remember though: as long as you don't trust
the client AND his machinery, you are dancing the DRM dance, i.e.
running down a path which is a struggle with Evil on /equal terms/.
(Some would argue Evil will have the better hand, and I agree.)
Cryptography is a means, given /very/ particular mandatory conditions,
to turn those into UNequal terms in your favour. But then, you /will/
have to trust somebody at the other end.


It is very easy to say that storing certificates in files with the
application is insecure. I'm terribly sorry, but my granny could have
told you that cliche. The real questions are: (a) what do you
want/need to protect? (b) And whom do you trust?

Neither is answered by stating 'certificates in files are
insecure'. (Besides, that statement is only valid when one or more of
the certificates or other entities carry private keys. The client needs
his own private key, so it'll reside somewhere in that collection of
files, and ultimately it is the only piece worth protecting. Apart from
the connection itself: remind yourself of my 'piggy-backing': if you
can send queries, *I* can send queries. And I am The Evil Bastard
sitting in your (untrusted) box. For this occasion. ;-) )



Did this help you?



-- 
Met vriendelijke groeten / Best regards,

Ger Hobbelt

--------------------------------------------------
web:    http://www.hobbelt.com/
        http://www.hebbut.net/
mail:   g...@hobbelt.com
mobile: +31-6-11 120 978
--------------------------------------------------