On Fri, Dec 26, 2008 at 7:28 PM, Edward Diener <el...@tropicsoft.com> wrote:
> If I can get a little finicky, the application needs access to the
> database/server. Nobody else should be accessing it. But I am sure that is
> what you meant.
>
> The clients are to be trusted using the application. My employer, not I,
> felt that putting the certificates where a client, who might be a
> destructive hacker, could easily access them, might be a security risk. He
> was told this by a Sun rep whom he knows, so naturally he wanted me to find
> out if this was the case.
>
> I come on this forum to ask, and what I get, not from you, is some good
> information mixed with a bunch of heavy ego which wants to tell me that only
> "security experts" are qualified to understand the deep issues involved and
> that since I have not proved myself a security expert simply by asking my
> question I should not even be dealing in any way with security problems,
> much less this current simple one.

Well, on the surface it may look like that, but it isn't an attempt to
cut you down.

I've been lurking on this list for a decade and only recently decided
I'm (ahem) 'qualified' to answer simple questions. The same folks who
answered you have, directly and indirectly, taught me a lot over the
years.

I don't know how to explain this in 'layman terms', but
security/cryptography is a peculiar field of engineering, which I
might compare to space/aeronautics engineering (I have a mech.
engineering background myself, but that was a previous [short] life).
Mechanical engineers are aware they can screw up, just like software
engineers, and they 'fix' this by introducing 'safety margins': steel
a little thicker, a bigger hammer than really necessary, that sort of
thing. 'Slack', in other words (though everyone uses different terms
for it, as 'slack' doesn't sound really engineery, you see? ;-) )
Anyway, aeronautics engineers, like crypto & security engineers,
simply are not allowed 'slack'. Ever.
If you fail as a software engineer, it's a 'bug'. In the design, or
somewhere else. No big fuss, compared to a little 'bug' in a rocket or
space shuttle going up: there it's instant death (we have seen that) or
at least huge losses and delays, due to a little flaw.
Same with security: you can't 'fix' it with a little chewing gum; it
has to be _right_ from the get-go.
Hence, 'dabbling' in security or crypto (which quite a few folks are
doing when they visit this list with their questions) raises some
serious concerns with the folks who know their stuff.
This is a field where you can't 'learn by doing' but have to learn
through watching and listening.

The advice to 'hire a security expert' is very well meant: it's the
above verbiage, and then some, boiled down into a single line: this
kind of stuff requires another pair of eyes on the spot; a pair of
eyes attached to a brain which knows this stuff inside and out.
It's an expensive piece of advice and it headbutts all kinds of egos,
but anything less is equivalent to still using a rubber O-ring which
should have been discarded as over-age, while saying, "aw, shucks,
it's all right! See, no cracks! And still stretches like it ought to!"
The mechanic saying that won't feel the hurt, but you can bet your
behind in a little while that plane will make an unauthorized drop out
of the sky and some other folks will have some unscheduled catching up
to do at St. Peter's.
Security is like that: if you don't do your stinkin' best and use the
best tools and practices you can get, you might as well not bother at
all.


[...]
> If the client "loses", ie. destroys or changes, the certs, the application
> will not work. That is fine by my employer since it is much the same as the
> client "destroying" a DLL distributed with the application. The client has
> messed up and does not deserve a working application.

Hm, you have me wondering how I can make you see. (Besides, by
'losing' I meant: a colleague, friend, acquaintance, whoever else,
stumbles across his keys, intentionally or not.)

Say you want to protect your database, as a system. I.e. you want to
ensure the database keeps behaving predictably, according to your
decisions, plus its data must not be 'damaged' by having someone put
data in there which you did not authorize, nor alter any bit of data
in an unauthorized way. Your hypothetical goal here: keep the database
functioning as well as possible AND make sure the data in there is
as unpolluted as possible as well.

(Note: I do not mention 'unauthorized data retrieval' here as part of
the example; that adds some important additional requirements, which
complicate matters quite a bit. Let's stick with this example: no
system damage; no data pollution.)

Security-wise -- and please note that I am not a security expert; that
means I'll probably miss a few important bits here -- it goes like
this: the database/server is the piece you wish to protect. So you sit
yourself in that machine and look out. What and who do you see?

First of all, you should only get to see 'authorized, authenticated
customers'. In other words, in a commercial setting, you, being the
server/database, should only see 'nodes' who are tagged as 'valid
customers', each time they connect. Any other 'visitors' should not be
able to get through to you.
(--> possible partial solutions for this: firewall; SSL connections
with client validation/authentication, e.g. through client
certificates. Extra: 'blacklist'-kinda list at your place (= server
side) for those clients your owner just found out are Bad: in TLS/SSL,
that's where 'certificate revocation' comes in. See Victor's advice in
a different thread, where he extolled the virtues of having a full
(i.e. proper) certificate validation check implementation on both
sides: server and client. Unfortunately, the apps/s_server + s_client
(or more particularly: s_cb.c) sample code is not the be-all and
end-all for this. It's rather more a 'hint' than the real McCoy.)
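To make the 'only authorized, authenticated clients get through' idea concrete, here is a minimal sketch of a server-side context that demands a client certificate. I'm using Python's stdlib `ssl` wrapper around OpenSSL purely for brevity; the function name and the `ca_file` path are mine, not any standard API:

```python
import ssl

def make_mutual_tls_server_context(ca_file=None):
    """Server-side context that refuses clients lacking a valid certificate.

    ca_file: PEM bundle of the CA(s) you issued client certs from
    (placeholder; pass your real path in production).
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # This is the crucial bit: without CERT_REQUIRED the handshake
    # happily completes for anonymous clients.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file is not None:
        ctx.load_verify_locations(cafile=ca_file)
        # In real use you would also call ctx.load_cert_chain(...)
        # with the server's own certificate and private key.
    return ctx

ctx = make_mutual_tls_server_context()
```

The same shape applies, mutatis mutandis, with the raw OpenSSL C API (`SSL_CTX_set_verify` with `SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT`).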

So you, being the server, 'have made sure' you only see authorized(!),
authenticated(!) clients.

Now we get to where I think your conceptual view of the system goes
pear-shaped, security-wise.

Being the server/database, ask the question: "who are these clients?"
which is two parts: "Am I sure these nodes are _really_ who they say
they are?" plus "What is a node?"

Last question first: a 'node', from the point of view of the server,
is the MACHINE at the other end of the secure connection AND
ANYTHING ATTACHED TO IT (that would be 1 human; plus, in case of a
network which allows other boxes/equipment access to that 'client
node' machine through any means we do not control, the entire network
[section] is to be considered 'the node').

(I implicitly assume we picked the best possible tool for secure
communications; given other requirements, that's TLS/SSL. With v2
disabled, if you don't need that legacy interoperability. And !EXPORT
for the cipher suite to ensure eavesdroppers will have a harder time
to see what you two are discussing. See this list and [recent]
discussion for some more details.)
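That aside (kill the legacy protocol versions, ban EXPORT-grade suites) translates directly into context options. A sketch, again via Python's stdlib `ssl`, with the cipher string in OpenSSL's own cipher-list syntax; the function name is mine:

```python
import ssl

def make_hardened_client_context():
    # TLS-client context: hostname checking and CA verification are on
    # by default (ssl.PROTOCOL_TLS_CLIENT implies both).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Refuse legacy protocol versions outright; modern OpenSSL builds
    # drop SSLv2 anyway, but being explicit costs nothing.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # OpenSSL cipher-list syntax: '!EXPORT' bans export-grade suites,
    # '!aNULL'/'!eNULL' ban unauthenticated and unencrypted suites.
    ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT")
    return ctx

ctx = make_hardened_client_context()
```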

Sounds ridiculous?

It is not. Simple example. I have electronic banking ability through
SSL. I have four machines at home in an intranet, where each box can
access data in the others. That intranet section is connected to a
second network through a dedicated firewall, which offers NAT and only
allows outward connections through HTTPS. (I develop in there; I have
a lot of financial data and processing in there, which require top
grade protection.) Inbound traffic is only permitted when initiated
from the inside, and, of course, such traffic is only permissible when
it's HTTPS.
The second network has another two machines (three, really: there's a
dormant 'SLUG' sitting idly there); one is a web server
(http://hebbut.net/ and a few other bits 'n pieces; the other is for
email and other user tasks). These can connect to the Internet through
another dedicated firewall, which offers NAT and PAT (for the web
server) while allowing HTTP/HTTPS/FTP/SMTP by default, inbound and
outbound, and, when I feel extremely daring, also permits some chat
protocols on occasion (MSN, ICQ, ...).

When I contact my bank from one of the 'inner network' machines, using
HTTPS and client auth using user/pass/pin, I can exchange data with my
bank; manage accounts, send them instructions, etc.

YOU are the bank server. Now can you tell me which of those 7 machines
is talking to you RIGHT NOW? The one you are talking to presents
itself as 'Ger Hobbelt', but how can you be sure? SSL only helps you
answer that question PARTLY. What it does not, and CANNOT, answer is
this bit: it could be meat-me on machine A, but I could also have a
visitor on machine C, who injected remote-control software into A and
can, in that way, ride my A-to-BANK SSL-secured connection and send
the bank instructions through it. Far fetched? Nope. For one, the
visitor could have come by a month ago, and sold or given me a piece of
software I use for my activities on machine A, and as long as he's
made sure his 'trojan' can either wait and act unattended or set up
its own SSL connection to a box controlled by my earlier visitor, he
can still access my bank accounts from afar when I do, and take a
chance and submit an additional bank transfer while I do the same
(e.g. when paying the legal mafia called government the new
protection money installment).

Far fetched? Take it to corporate levels and it's bloody easy. Just
requires a wee bit of perseverance.

Several years ago I showed someone at a company where I worked, what
one amateur can do. One laptop, one sniffer. As icing on the cake a
little hardware bit, called a keylogger. Company had a 'dynamic
workplace' policy, so no fixed seats: you come in, jack in and start
working.

It is amazing how many people access (a) their private email accounts
(Yahoo and Gmail still make this especially easy), and (b) their
private banking accounts from the office - which is just a bigger
version of my inner network at home.

One day of remote sniffing (laptop connected to network elsewhere) and
reading out the keylogger (which was undetected by the two people
using that workplace that day) revealed a lot: one banking account
open for grabs (user+pass only over SSL, so keylogger took care of
_that_); several private emails which were, ah, 'revealing'.
It was decided not to act on this any further as it would cause a
panic and loads of undesired suspicion towards the messengers of this
good news. Which was rather, err, unwise and somehow sensible at the
same time. It was felt nobody would walk around with keyloggers in the
morning, so "no real harm done". Wrong thought train right there.
Still, three individuals, apart from the person whose bank account it
was, now had liberal access to his money, if they deigned to act upon
the info they had collected. We never did. How decent of us. ;-)

The point is:

1) you, the server, CANNOT TELL machines (and whole intranets
thereof!) and humans apart. It's all the 'end node' for you.
(Biometrics try to solve another part of this quagmire. Still, you
cannot tell which machine I used to contact you. So you cannot tell if
the machine is safe; you can only say it sounded an awful lot like
'Ger'. Or you state that you trust/expect your users to adequately
secure their premises, then sit down yourselves and talk about how
you're going to take up damage control, as damage prevention has just
been settled through strategic decision making.)

2) you, the server, CANNOT TELL if you are talking to an individual
(end node) or an individual acting as spokesperson for a group (end
node in unprotected network == network with protection levels
uncontrolled by YOU). (Remember I have NAT? When I contact MY bank,
they will always log, i.e. 'see', the same, single IP address. Heck if
they know who sat at which of those 7 machines in there. That's what a
NAT-ting firewall does for you.)


Can you see now, maybe, why I was stressing that meat body and machine
(human vs. your client software vs. the machine it runs on) are
INdistinguishable at the client-side 'end node' of the SSL connection?
Because you say you trust your user, but also seem (to my ears at
least) not to trust his environment in some way, you are sticking your
head into a hornet's nest, because trust and not-trust for the SAME
*node* (no matter its size) cannot become anything but not-trust. And
each end-point of peer-to-peer secure connectivity MUST land in an
entirely trusted zone, or the 'secure' in the connection has become a
moot point by default.


If you decide not to trust your clients' 'zone', you HAVE to move your
end-point into a trusted zone by itself: that means you will then have
to supply your own hardware (think: ATM machines, for example) and
install it at each client site, offering the client a very precisely
defined, tiny, thin method of access to that machine and, through that
machine, to your server, which is to be protected.

Since you already clearly stated such measures are beyond your
financial willingness to apply, it automatically means you will
have to sit down and (re)consider the list I gave at the end of my
previous email and discuss among yourselves how you restrict your
users within the database. Furthermore, you will now (as you always
should anyhow) have to sit down and discuss how you are going to
handle 'server side' security breaches (a.k.a. damage control), i.e.
how you are going to cope with any occurrences which, for whatever
reason, damage your database system or pollute the data contained
therein. These risk analyses should encompass both how you are going
to detect such damage (whether detection is possible by
functional/automatic means or unfortunately 'only by accident') AND
how you can recover from it, if at all. (Where a viable, yet
unfortunate conclusion for several such scenarios can be: "we cannot
recover from this", which implies you are there and then relinquishing
control to the gods for that detail. Sometimes we have to, but we
should keep their influence as limited as possible, because the gods
are rather Greek: they like to play havoc as an evening game of
leisure.)


I wrote the above to give you an example of how you should think and
look at the world when you are considering 'security', which is quite
a different animal from 'processes / functionality' which, by the
sound of it, is your usual metier.

And consider that I haven't even touched upon the 'advanced example'
where the owner would include the one extra requirement that
unauthorized clients should not have ACCESS to your (server-)data at
all. (read-protection)



My last eruption here: as long as you find mixes of trust / not-trust
in any place of your landscape (like trusting the human client, yet
not trusting his equipment/[network]environment) and an unwillingness,
for whatever good reason, to discard the not-trust nor enforce the
trust some other way by redesigning your landscape, you are entering
the DRM dance. That is what Victor has repeatedly been saying here as
well, in entirely different words.



> By 'dongle' do you mean a hardware 'dongle'. If it is a software dongle you
> need to spell out for me what you mean.

Sorry, yes, hardware. 'Dongle', in my book, _always_ means a piece of
dedicated hardware. A 'software dongle' (e.g. Flex) is a hack or, at
best, an insecure emulation. As mentioned by others, these days you
can have personalized (i.e. your human client should carry it on him
at all times) USB dongles a.k.a. 'tokens'.

Note that they solve a part of your trouble ONLY. See the example
above. If I were to have a token from my bank and sit down at machine
A, B, C or D, the banking action can /still/ be compromised. Yes, Mr.
X can no longer snoop my private key (credentials) (user/pass/pin is
obsoleted by the 'token'!) but he can still 'piggy-back' on the open
connection: that requires him to find a way into my web browser or
other software pieces which sit near the end-node of the SSL secure
connection. Anywhere between my keyboard and that SSL connection
record is good. (Mr. X can 'play my keyboard' as he likes; that's
why you HAVE TO declare that you trust the client's machine/environment,
because the whole path between keyboard/screen and connection
end-point is indiscernible from the human /me/, from the point of view
of you, the server. In fact, you need biometrics to check if the meat
body is actually me: could be a colleague, my girlfriend with my
credit card, etc.)
And your client software cannot help you either as it is running in
that same, doubted environment, so you can only consider it 'hacked
already'. If you don't trust the machine [and the man], and still want
it to behave 'trustworthy', you are, alas, back to the DRM dance (it's
a trust/not-trust mix for a single entity).


You also said you did not concern yourself with a client losing his
keys. I may have read that wrong, but I think you should. Given the
important security questions above (who/what are you? who/what do I
see?), if you don't care about a client losing his keys, how can you
/ever/ care about who the client really /is/?


My neighbour loses the keys to his car. I find them, pick them up.
Have I now become my neighbour? From the point of view of the car,
YES, I AM. It may sound ridiculous, or strange, but it is no less true
for that. And that is why security is such a bloody hard job to do. At
least when you want to do it right.



[...]
> I admit confusion on my part here. So perhaps you can explain it to me,
> since others seem to feel I am too dumb to be given a clear answer by the
> security gods on high. I can assure you I am not.

I believe they don't. They just rubbed you the wrong way. I hope you
can now see that security is a very harsh and hard job to do.

I would try to explain, but I've been writing enough already; besides,
I always need to check my own little cheat-sheet when it comes to the
exact terms as I still garble some of them, so here's my advice: you
can get a much better explanation than mine if you get yourself a
few good books.

Applied Cryptography by Schneier is one. I learned the basics from
another book, but that's useless to you as it's out of print;
Schneier has a good rep and should be readily available at Amazon.

wait... I posted a list a while ago... checking archive... public URL:

http://www.mail-archive.com/openssl-...@openssl.org/msg24913.html


One note: it's something I sometimes garble myself, but 'certs' a.k.a.
certificates are implicitly assumed to only contain public keys. Thus,
'certs' are, theoretically, meant for worldwide publication.

IIRC, you /can/ put private keys in certificates, but that is, to put
it mildly, 'weird'. If you have to use a certificate's capabilities to
validate your private key storage, you have a different, and more
severe/serious, problem on your hands. Running around and screaming is
a good option then. So: no private keys should end up in any cert,
thank you.
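As a belt-and-braces check before shipping, you can at least scan the PEM files you distribute and refuse to publish anything that carries a private-key block. A small stdlib-only sketch; the helper names are mine, not any standard API:

```python
def pem_block_labels(pem_text):
    # Collect the labels of all '-----BEGIN <LABEL>-----' blocks in a
    # PEM bundle, e.g. 'CERTIFICATE' or 'RSA PRIVATE KEY'.
    labels = []
    for line in pem_text.splitlines():
        line = line.strip()
        if line.startswith("-----BEGIN ") and line.endswith("-----"):
            labels.append(line[len("-----BEGIN "):-len("-----")])
    return labels

def safe_to_publish(pem_text):
    # A bundle meant for distribution must contain certificates only;
    # any kind of private-key block means: stop, do not ship.
    return not any("PRIVATE KEY" in label for label in pem_block_labels(pem_text))
```

This is only a sanity check against packaging accidents, of course; it says nothing about whether the certificates themselves are sound.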


And about CA's: those come into the game as 'trusted third party' as
in: you trust the 'root CA' to do his job: that is: do his stinkin'
best to ensure the entity who requested the issued certificate is who
he says he is. Not only the 'root CA's, but this applies to all other
CAs in the chain as well. Because 'trust' is a non-washable item and
fades under sunlight, certificates and cert chains come with two
features, which you should explicitly check for in your cert
validation callbacks at both sides of the peer-to-peer SSL connect:
expiry (i.e. we limit the duration of our trust. Philosophically,
that's a hack, but it's meant to cut down the risk of a trusted entity
being turned to the Dark Side and you getting damaged by that, so we
enforce a periodic FULL re-check of the entity's trustworthiness that
way. Keeps the trust nice 'n fresh.)
Plus we limit the 'chain depth' of our cert chains as each CA in the
chain is to be treated as a kind of subsidiary, who can never
enhance/raise the level of trust but only water it down, if ever so
slightly. Heck, if a CA in the middle of the chain is extremely
trustworthy in your eyes, the bugger would've been made ROOT CA by you
anyhow!
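That expiry check is worth doing explicitly in your own validation step too. A sketch against the dict shape Python's `ssl.SSLSocket.getpeercert()` returns after a handshake; the function name and the sample cert below are made up:

```python
import ssl
import time

def peer_cert_expired(peercert, now=None):
    # peercert: dict as returned by SSLSocket.getpeercert(); its
    # 'notAfter' field uses OpenSSL's GMT time format, which
    # ssl.cert_time_to_seconds() converts to epoch seconds.
    now = time.time() if now is None else now
    not_after = ssl.cert_time_to_seconds(peercert["notAfter"])
    return now > not_after

# Illustrative cert metadata, long past its sell-by date:
long_dead = {"notAfter": "Jan 01 00:00:00 2009 GMT"}
```

Chain-depth limiting, the other feature mentioned above, sits at the context level (e.g. `SSL_CTX_set_verify_depth` in the C API).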




[...]
> "Simply jacking into your software" seems pretty difficult to me. Is it
> really that simple for a capable cracker to inject code into the middle of
> my application software somewhere at some time when executing to change the
> way it works ? My experience as a programmer tells me it is not. I would
> really love to see a destructive hacker do this. It might be an education
> for me, but I admit I am skeptical.
[...]
This is not for this list, but two bits: 1) I can 'hook' all your
Windows API calls through documented means. (Don't panic; making them
UNdocumented just raises the challenge a bit and shows the designer
was just scared, which makes him dumber than he could've been.)

I can 'wrap' your DLL by offering my own, which calls yours. A few
tweaks to your binaries, and we're good to go.

2) Another way is this: your binaries contain quite a few bits of
'empty space' (alignment padding between linked blocks, that sort of
thing) where I can write a few statements of my own (assembly, of
course); redirect yours to my bits and with a little effort I can have
my additional queries or other activity processed along with yours,
and no alarm bells ringing.
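The same interposition idea, translated into a harmless Python illustration (the principle, not the Windows mechanics): wrap an existing entry point so every call is observed, while the caller is none the wiser. The hook here is mine; I picked `json.dumps` purely as a stand-in for 'some API your code trusts':

```python
import json

# Keep a handle on the original, then substitute our wrapper: every
# caller that reaches json.dumps from now on goes through us first.
_real_dumps = json.dumps
observed_calls = []

def _hooked_dumps(obj, *args, **kwargs):
    observed_calls.append(obj)                 # the 'keylogger' part
    return _real_dumps(obj, *args, **kwargs)   # behave normally

json.dumps = _hooked_dumps

# Unsuspecting 'application code':
payload = json.dumps({"transfer": 100})
```

Nothing in the 'application code' changes, yet every call is visible to the hook; native-code equivalents (import-table patching, DLL wrapping, inline trampolines) differ only in mechanics, not in principle.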

I know a bit where to look, that's all; the folks who won't take weeks
to accomplish this but mere hours, you won't hear from. You may meet
them but they have their own human-space certificate chains which
determine your level of access. You and I don't make it past first
gate.

For public knowledge, google 'fravia' for starters. Then decide if you
want to spend years of learning that technology.


[...]
> We do not want to take the game into the extreme areas of hardware
> protection.
[...]
> Again, clients who "lose" their certs are not out concern.
[...]

See above. It is a concern. (For, maybe, a different definition of 'lose'.)

[...]
> I could probably go deep into OpenSSL but I am trying to avoid it because I
> have been programming using the MySQL client C++ API, which so far hides SSL
> issues from me. I also find digging information out of the OpenSSL
> programming documentation to be really daunting. The only programming link I
> could find were at http://www.columbia.edu/~ariel/ssleay/. Yes there is some
> information at  http://www.openssl.org/docs/ but it is so chaotic as to be
> almost completely unusable IMO. Maybe I just need to get a good book about
> SSL and/or OpenSSL if I can find one. I will accept any recommendations.
>
> If you would like to point me OpenSSL programming functionality which allows
> me to manipulate the client certificates or even OpenSSL documenatation
> which lets me understand SSL certificates better in general, I will be glad
> to take a look at it.

See URL mentioned above; I have the O'Reilly book about OpenSSL by
Viega et al. I don't know which edition it is in now, but first
edition was (and still is) quite useful and well written. It also
explains what OpenSSL and TLS/SSL as a
protocol/system/algorithm/whatchammacallit does NOT do well, which is
a very useful piece of information as well.

[...]
> I am still not sure what you mean by this route but if it is hardware
> solution we will forgo it right now. Besides I am a software
> programmer/designer and you know how programmers feel about "hardware"
> solutions <g>.

And security is all about discarding each and every preconception /
preoccupation. ;-)

>> Is it commercially viable to supply /each/ client with a
>> (personalized!) hardware dongle (which includes a (OpenSSL-compatible)
>> crypto engine -- see above)?
>
> No.

Okay. No problem. I still decided to write the above in an attempt to
show you a bit of how to shape your thought processes for this subject
matter. Not using 'tokens' and such, does not mean you can't do a good
job. It just means your degree of security will be lower, but not all
of us need milspec security to function adequately. Just as long as we
remember to always, for /any/ level of security, consider and plan how
we go about our damage control; even deciding we won't is far better
than finding out on the spot that we can't.

[...]
> Money may be less of an issue than ease of use. This is a commercial
> application which however needs good security to protect the database data.
> There is no way the application is going to be sold to the public with some
> type of hardware security mechanism. I thought such mechanisms went out with
> the dinosaurs.

'good' is a 'taste' word; not a hard, measurable quantity. For a given
level of 'good', one can tailor his processes and security around and
inside those processes. All it takes is answering questions such as
"can we tolerate data corruption / pollution? System failure? Data
loss? How 'public' is our data: who do we tolerate looking at our
data? What can possibly go wrong, by accident or intention? And do we
wish to survive such scenarios, which ones, and how much effort are we
willing to expend, before, during and after?" And, funnily, during
that Q&A process you will also discover what your (company's) actual
desired definition of 'good' really is.

And regarding dinosaurs: boy, are you in for a surprise! T.Rex is
ruling. It's just that those serial/parallel port dongles have moved
up in the world along with anything else computing; now we've got USB
and, dear sir, we use it!  ;-)  Nowadays it looks to me like it's less
'per application' and more 'per use' security; single sign-on being a
big item, for example. But given ever-cheaper smartcards and tokens,
contact and contact-less, the number of corporations using this tech
is ever increasing.

Anyway...


Minimum book to read would be Schneier (Applied Cryptography); probably
O'Reilly OpenSSL as well; after reading, reread the other responses in
this thread for additional insight (I've been skimming the surface
only and it's rather more hopping than surfing) and, yes, this will
take some time and effort for you to do.

For now, in final closing, I'd say:

- the only thing worth protecting is the private key of the client as
it is the only 'secret' in the collection you distribute.

- take the suggestion from elsewhere in this thread, since you already
indicate 'tokens' are out, for user friendly reasons (I think you
should check that reasoning again, but that's just me): distribute or
otherwise 'derive' the passphrase required to decode that private key
in software.

- accept that anyone with any access to that client machine is going
to have client-level access to your server; from anywhere on the
globe. Take measures to protect your server internally (damage
prevention) by thoroughly checking user access limits.

- sit down, consider all the possible ways your server / processes can
be damaged, include even the 'improbable' and the 'impossible' ways
and decide, at strategic level, what you want done for each of those
scenarios. "Nothing" as an answer is good too; just so everyone
realizes what they're deciding right there. Then implement it down the
line.
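For the second bullet, a sketch of what 'deriving' the key passphrase in software might look like, binding it to machine-local facts. Every name here is illustrative, and (as stressed throughout) this is obfuscation, not security: anyone who can run the code on the box can repeat the derivation.

```python
import getpass
import hashlib
import socket

def derive_key_passphrase(app_secret: bytes) -> str:
    # Mix an application constant with machine/user identity, so the
    # encrypted key file alone is useless when copied to another box.
    # Purely a speed bump for the casual snooper, nothing more.
    material = b"|".join([
        app_secret,
        socket.gethostname().encode(),
        getpass.getuser().encode(),
    ])
    return hashlib.sha256(material).hexdigest()
```

The result can then feed the passphrase callback of whatever loads the private key, e.g. `ssl.SSLContext.load_cert_chain(certfile, keyfile, password=lambda: derive_key_passphrase(APP_SECRET))` in Python terms.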

's all.



> Of course you can send queries if you can get through. The certificates
> issue I brought up in my OP was simply to determine if others thought that
> hiding the certificates in some way (encrypting them, putting them in
> resources, loading the resources and decrypting them on-the-fly ) was a
> worthwhile security method that actually accomplished anything. It was not
> an attempt to claim that this encompassed an entire security solution for
> everything that involved the client-server application.

That's the danger using the 'security' word in a security conscious
list: it has a certain meaning and you re-use the word with a slightly
different meaning; not intentionally, but it happens. Garbled jargon.
Let me rephrase here:

> resources, loading the resources and decrypting them on-the-fly ) was a
> worthwhile security method that actually accomplished anything. It was not

^^^ worthwhile *obfuscation* method ...

> hiding the certificates in some way (encrypting them, putting them in
> resources, loading the resources and decrypting them on-the-fly ) was a
> worthwhile security method that actually accomplished anything. It was not
> an attempt to claim that this encompassed an entire security solution for
> everything that involved the client-server application.

Hm... and my guess? Good enough when the expenses don't allow "this
software suite is delivered with a personalized hardware token". As
you will have realized by now, this "yes, it does" has quite a few
implications. As would a "no, it does not".

And all that fuss, just because you've woken up and inquired about
security / protection technology, instead of ignoring the subject and
waiting for a nasty surprise down the road. Dang! ;-))

Hope you enjoyed Christmas anyhow.

Cheers,

Ger









-- 
Met vriendelijke groeten / Best regards,

Ger Hobbelt

--------------------------------------------------
web:    http://www.hobbelt.com/
        http://www.hebbut.net/
mail:   g...@hobbelt.com
mobile: +31-6-11 120 978
--------------------------------------------------
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
