Paul,
I don't know how to respond; you have skipped lightly through so many
subjects here, very few of which have much to do with
an authentication protocol. It seems to me that what is needed most is
"Security 101". Internet Security is an oxymoron not
because the technology doesn't exist, but because it is mostly ignored
by the "Internet Community".
Security 101:
"The purpose of Security is to establish accountability to an
individual."
A secure system consists of a definition of three things: subjects,
objects, and policy.
A subject is a person. A subject is NEVER anything other than a
person.
It is not a computer, process, address space, device, ... none of
these things can be held accountable.
An object is the thing(s)/data/... that you want to protect -
files, database records, eMail msgs, HTML pages, ...
The Policy is a set of rules that determine _"which_ subjects
can do _what_ to _which_ objects"
(SPAM exists because the write policy on your
inbox says that anyone can write to it)
In the computer security world, the _"what_" in the above
statement refers to a set of verbs - things that can be done.
There are only two verbs - read and write. (deleting a file is not
the actual verb - writing to the directory is)
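The subject/verb/object policy model above can be sketched in a few lines of code. This is a minimal illustration only - the subjects, objects, and grants are all invented for the example:

```python
# Minimal subject/verb/object policy sketch (all names are illustrative).
# Subjects are always people; the only verbs are "read" and "write".

READ, WRITE = "read", "write"

# The policy: which subjects can do what to which objects.
policy = {
    ("alice", READ,  "payroll.db"): True,
    ("alice", WRITE, "payroll.db"): True,
    ("bob",   READ,  "payroll.db"): True,
    # bob has no write entry, so his writes are denied by default
}

def allowed(subject, verb, obj):
    """Default-deny: anything not explicitly granted is refused."""
    return policy.get((subject, verb, obj), False)

print(allowed("alice", WRITE, "payroll.db"))  # True
print(allowed("bob",   WRITE, "payroll.db"))  # False
```

Note the default-deny stance: the SPAM problem above is exactly what you get when the policy on an object (your inbox) grants write to everyone.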
When we actually build a secure system we want one that is full of
security features, right?
Yes, it must be full - but there are actually very few "Security
Features" - 3 or 4 depending on how you look at it.
The features are:
1) I&A - Identification & Authentication (this is where we
started)
AC - Access control. There are two types (the reason for
3 or 4 features as stated above)
2) DAC - Discretionary Access Control
3) MAC - Mandatory Access Control
4) Audit - verifies trust placed in subjects
For a security system to be "feature complete", it must have I&A, DAC
and/or MAC, and Audit.
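The DAC/MAC distinction can be shown concretely. This is a toy contrast only - the classes, labels, and rules are invented for illustration (the MAC rule shown is a simple "no read up" clearance check, not any particular product's policy):

```python
# Toy contrast between DAC and MAC (all names invented for the example).

# DAC: the object's OWNER decides, at their discretion, who may access it.
class DacObject:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner}          # the owner always has access

    def grant(self, granter, subject):
        if granter != self.owner:   # only the owner may grant access
            raise PermissionError("only the owner can grant access")
        self.acl.add(subject)

    def can_read(self, subject):
        return subject in self.acl

# MAC: access follows SYSTEM-WIDE labels; no user can override them.
LEVELS = {"public": 0, "secret": 1, "top-secret": 2}

def mac_can_read(subject_clearance, object_label):
    # Simple "no read up" rule: clearance must dominate the label.
    return LEVELS[subject_clearance] >= LEVELS[object_label]
```

The difference in a sentence: under DAC the policy lives in the owner's discretion (the `grant` call); under MAC the policy is fixed by the system's labels and no subject can change it.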
But features alone do not a secure system make - it also takes some
assurance.
The assurance has to do with the system's ability to reach and maintain a
secure state.
To reach a secure state, the system needs:
Trusted Development - did we actually develop the features we claim?
Trusted Distribution - did we actually deliver (to the customer) the
system we developed?
Trusted Install - did we actually install the system that was delivered?
Trusted Boot - are we actually running the system that was installed?
To maintain a secure state, it must be able to:
prevent features from being modified or circumvented
identify and separate the actions of one user from another
correctly handle object reuse
minimize covert channels
...
Only systems that are feature complete with assurance can meet the
purpose of security - "to establish accountability to individuals"
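Accountability ultimately rests on the audit feature tying every action to an authenticated individual. A tiny sketch of that idea (the log format is invented for illustration):

```python
# Tiny audit-trail sketch: every action is recorded against an
# authenticated PERSON, which is what makes accountability possible.

import datetime

audit_log = []

def audited_action(subject, verb, obj):
    """Record who did what to which object, and when."""
    audit_log.append({
        "who":  subject,   # always a person, never a process or machine
        "what": verb,      # read or write
        "on":   obj,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

audited_action("alice", "write", "payroll.db")
print(audit_log[0]["who"])  # alice
```

Without I&A the "who" field is meaningless, and without assurance the log itself could be modified or circumvented - which is why features and assurance are both required.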
There are also operational issues that must be dealt with.
If the system is too hard to use and maintain, it will not be successful.
Also, cryptography should be used with care. If used improperly, it will
make things worse.
"Cryptography is the opiate of the naive"Dr. Roger Schell
More thoughts: Systems are NOT more or less secure - they either do or
do not defend against certain attacks.
What makes us believe that things are "more or less" secure is that
we believe that
there is a "more or less" chance of a system being subjected to an
attack that the system does not defend against.
Security Engineering is like engineering in general - there are trade
offs to make. A successful system is one where
good trade offs were made; an unsuccessful system is one where bad
trade offs were made.
The trade offs are between the cost of the system (acquisition cost and
continuing/use costs) and the
potential loss from a breach of policy. I shouldn't spend $2 to protect
$1, and if the data is only sensitive for, say,
a day, it is ok to use crypto that can be broken in 2 days.
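That trade-off is just expected-loss arithmetic. A back-of-the-envelope sketch (all dollar figures and probabilities are made up for illustration):

```python
# Back-of-the-envelope security trade-off (all figures invented).

def worth_it(control_cost, breach_loss, breach_probability):
    """A control is justified only if it costs less than the
    expected loss it prevents."""
    expected_loss = breach_loss * breach_probability
    return control_cost < expected_loss

# Spending $2 to protect $1 is never justified:
print(worth_it(2.00, 1.00, 1.0))          # False

# Spending $10k against a 5% chance of a $1M breach is
# (expected loss $50k):
print(worth_it(10_000, 1_000_000, 0.05))  # True
```

Real engineering judgment is harder than this, of course - estimating breach probability and loss is the hard part - but the shape of the decision is exactly this comparison.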
So with all of that said, I need a break.
If any of this resonates with you, and you are interested, I'll try to
go through your post and make comments
on how things relate to the above.
Let me know - I don't want to waste my time if you are not interested.
(if Paul does not convey an interest, but others on the list are
interested, I'll continue)
Doug Hale
"There is a science to security
- if you follow the rules, you will know what you get
- if you don't follow the rules, you will not know what you
get."
Paul McDonald wrote:
> Doug, I wanted to discuss this more with you because it really seems a good
> start to the solution that I am thinking about. My initial thoughts when I
> started this project were to establish some way to authenticate the computer
> to the database server using some sort of complex process that would be
> unique to each client (without making this too complicated) either by a
> complex hash of software and hardware so even if you ghosted an image of the
> client machine, the code would be invalid on each new client. To initialize
> the client authentication, two separate users would have to validate the
> machine is allowed to connect to the database server. This I figured would
> come from a Sr. IT administrator, and a department head for the particular
> division that had access to the server.
>
>
>
> The validation of those codes would be set on the server side by the
> president of the company in case a change occurred in the staff (either the
> Sr. IT or Department head) so the president of the company would have the
> ability to change, remove, and add a new authentication person. The reason
> for the two people needing to be involved in the process would be validation
> that neither of the two are trying to establish a connection outside of the
> approved machine.
>
>
>
> Once the machine is authenticated, and the handshake is done between the
> client and database server, the user would have to validate his credentials
> with the database engine to have access to the database.
>
>
>
> The benefits of such a system I thought would be:
>
> 1. You would have to convince another person in order to bring a new
> computer into the system
>
> 2. Presidents have definite control over the access ultimately, and
> would only have to involve himself when there is a change in personnel
>
> 3. Audit trails would be easily followed in all steps of the process
>
> 4. Regardless of most attempts of evil doers, the machine would have
> to validate a unique key code before a hacker could use any keystrokes he
> has captured from whatever process he uses to obtain user names and
> passwords.
>
> 5. Employees (the client's user) would not be able to access the
> information remotely without validating their laptop through the same
> process as his or her desktop at the office, this way an audit trail could
> be established.
>
> 6. Outside of a stolen laptop, it would then become very difficult to
> gain access to the system. When a laptop is stolen, the Sr. IT
> administrator can remove the access key from the server and all information
> that could be obtained by stealing the laptop would only be open to the
> public until the key is removed from the server.
>
> Does this process make sense to anyone else? Is it something that sounds
> reasonable or are there issues that I am missing in this process? I really
> like the idea of using the 0K protocol in authenticating the user on the
> client and the database server. Any thoughts?
>
>
>
> Paul
>
>
>
>
>
> From: [email protected] [mailto:[EMAIL PROTECTED] On Behalf
> Of Doug Hale
> Sent: February 12, 2008 11:03 AM
> To: [email protected]
> Subject: Re: [delphi-en] I am looking for suggestions
>
>
>
>
>
>
> -----------------------------------------------------
> Home page: http://groups.yahoo.com/group/delphi-en/
> To unsubscribe: [EMAIL PROTECTED]
> Yahoo! Groups Links