Re: Trusting the Tools - was Re: Open Source ...

2003-10-13 Thread kent
On Sun, Oct 12, 2003 at 08:25:21AM -0600, Anne & Lynn Wheeler wrote:
 
 It wouldn't have been impossible ... but quite unlikely. It is somewhat
 easier in C-based programs since there are additional levels of indirection
 and obfuscation between the statements in a C program and the
 generated machine code.

Hmm.  While I agree with your assessment of likelihood, I think you
understate the seriousness of the issue in both the C case and the
assembler case -- they are not really that different.  It's not just a
matter of indirection and obfuscation -- there can be large blocks of
code generated for which there is no external visibility whatsoever (i.e.,
the map files and other traces of generated code can simply not show the
hidden code).  This is true both for C and assembler.  The only way you
can really tell is if you capture *all* of the live memory of the
computer, and disassemble it with a verified disassembler.

(e.g., what shows as bss 0 in the assembler listing is really code; what
shows as one set of instructions in the listing is in reality different.)
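
(As a sketch of what such a check might look like -- the dump path, the
load address, and the choice of the capstone library are purely
illustrative assumptions, and of course the disassembler itself is part
of the trusted base:)

    # Sketch only: diff a captured memory image against the bytes the
    # assembler listing claims were generated, then disassemble what is
    # actually resident with an independent (and itself trusted!) tool.
    # Paths, the load address, and the use of capstone are assumptions.
    import capstone

    LOAD_ADDR = 0x400000                      # hypothetical load address

    with open("memory.dump", "rb") as f:      # capture of live memory
        image = f.read()
    with open("expected.bin", "rb") as f:     # bytes per the listing
        expected = f.read()

    # Hidden code shows up as a byte-level mismatch ...
    for off, (got, want) in enumerate(zip(image, expected)):
        if got != want:
            print(f"mismatch at {LOAD_ADDR + off:#x}: {got:#04x} != {want:#04x}")

    # ... or as real instructions where the listing claims zero-filled bss.
    md = capstone.Cs(capstone.CS_ARCH_X86, capstone.CS_MODE_64)
    for insn in md.disasm(image, LOAD_ADDR):
        print(f"{insn.address:#x}  {insn.mnemonic} {insn.op_str}")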

Kent
-- 
Kent Crispin   Be good, and you will be
[EMAIL PROTECTED],[EMAIL PROTECTED] lonesome.
p: +1 310 823 9358  f: +1 310 823 8649   -- Mark Twain

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Trusting the Tools - was Re: Open Source ...

2003-10-13 Thread Anne Lynn Wheeler
At 03:48 PM 10/12/2003 -0700, [EMAIL PROTECTED] wrote:
Hmm.  While I agree with your assessment of likelihood, I think you
understate the seriousness of the issue in both the C case and the
assembler case -- they are not really that different.  It's not just a
matter of indirection and obfuscation -- there can be large blocks of
code generated for which there is no external visibility whatsoever (i.e.,
the map files and other traces of generated code can simply not show the
hidden code).  This is true both for C and assembler.  The only way you
can really tell is if you capture *all* of the live memory of the
computer, and disassemble it with a verified disassembler.
(e.g., what shows as bss 0 in the assembler listing is really code; what
shows as one set of instructions in the listing is in reality different.)
well ... you can take and compare the listing file against the txt deck
output of the assembler for each module. Each txt deck is input to the
loader, which builds the actual executable (almost) memory image. past
discussion of the TXT file format:
http://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)
http://www.garlic.com/~lynn/2001c.html#87 Bootstrap
http://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2002f.html#41 Blade architectures
http://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games

then the issue isn't if the assembler has been compromised ... it is
whether the loader has been compromised. then you compare the memory
image file against the aggregate of the txt decks ... if you've done the
assembler listing comparison against the txt deck correctly, then the
memory image comparison is looking for a loader compromise ... not an
assembler compromise.
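
a rough sketch of the mechanical part of that comparison ... assuming
the usual 80-byte card images where TXT records carry a 24-bit address,
a byte count, and up to 56 data bytes (the exact offsets below are
stated from memory and should be checked against the real object deck
spec):

    # sketch only: rebuild a memory image from the TXT records of an
    # OS/360-style object deck and diff it against what the loader built.
    # record offsets are assumptions to be verified against the spec.
    def txt_records(deck):
        """yield (address, data) for each TXT record in an object deck."""
        for i in range(0, len(deck), 80):            # 80-byte card images
            card = deck[i:i + 80]
            if card[1:4] == b"\xE3\xE7\xE3":         # 'TXT' in EBCDIC
                addr = int.from_bytes(card[5:8], "big")
                count = int.from_bytes(card[10:12], "big")
                yield addr, card[16:16 + count]      # up to 56 data bytes

    def build_image(deck, size):
        image = bytearray(size)                      # zero-filled, like bss
        for addr, data in txt_records(deck):
            image[addr:addr + len(data)] = data
        return bytes(image)

    with open("module.obj", "rb") as f:
        deck = f.read()
    with open("loader.image", "rb") as f:            # the loader's output
        loaded = f.read()

    rebuilt = build_image(deck, len(loaded))
    diffs = [i for i in range(len(loaded)) if loaded[i] != rebuilt[i]]
    print(len(diffs), "differing bytes", diffs[:10])
    # any difference here implicates the loader, not the assembler --
    # provided the listing-vs-deck comparison was already done.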

some past discussion of a memory/debugger analyser from approximately
the period that gnosis (the precursor to keykos) started:
http://www.garlic.com/~lynn/subtopic.html#dumprx
which also had some capability to work from the memory image of a program
in conjunction with the assembler listing files.
of course it primarily relied on the REXX interpreter for its functionality,
so a compromise in the REXX interpreter ... or in any of the utilities
written for analysis and comparison ... could mask a compromise in other
components related to insertion of malicious code.
--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



WYTM?

2003-10-13 Thread Ian Grigg
As many have decried in recent threads, it all
comes down to WYTM - What's Your Threat Model.

It's hard to come up with anything more important
in crypto.  It's the starting point for ... every-
thing.  This seems increasingly evident because we
haven't successfully reverse-engineered the threat
model for the Quantum crypto stuff, for the Linux
VPN game, and for Tom's qd channel security.

Which results in, at best, a sinking feeling, or
at worst, endless arguments as to whether we are
dealing with yet another hype cycle, yet another
practically worthless crypto protocol, yet another
newbie leading users on to disaster through belief
in simple, hidden, insecure factors, or...

WYTM?

It's the first question, and I've thought about it
a lot in the context of SSL.  This rant is about
what I've found.  Please excuse the weak crossover!



For $40, you can pick up SSL & TLS by Eric
Rescorla [1].  It's about as close as I could
get to finding serious commentary on the threat
model for SSL [2].

The threat model is in Section 1.2, and the reader
might like to run through that, in the flesh, here:

  http://www.iang.org/ssl/rescorla_1.html

perhaps for the benefit of at least one unbiased
reading.  Please, read it.  I typed it in by hand,
and my fingers want to know it was worth it [3].

The rest of this rant is about what the Threat
model says, in totally biased, opinionated terms
[4].  My commentary rails on the left, the book
composes centermost.



  1.2  The Internet Threat Model

  Designers of Internet security protocols
  typically share a more or less common
  threat model.  

Eric doesn't say so explicitly, but this is pretty
much the SSL threat model.  Here comes the first
key point:

  First, it's assumed that the actual end
  systems that the protocol is being
  executed on are secure

(And then some testing of that claim.  To round
this out, let's skip to the next paragraph:)

  ... we assume that the attacker has more or
  less complete control of the communications
  channel between any two machines. 



Ladies and Gentlemen, there you have it.  The
Internet Threat Model (ITM), in a nutshell, or,
two nutshells, if we are using those earlier two
sentence models.

It's a strong model:  the end nodes are secure and
the middle is not.  It's clean, it's simple, and
we just happen to have a solution for it.



Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.

(Whoa!  Did he say that?)  Yep, I surely did: the
systems are insecure, and, the wire is safe.

Let's quantify that:  Windows.  Is most of the
end systems (and we don't need to belabour that
point).  Are infected with viruses, hacks, macros,
configuration tools, passwords, Norton recovery
tools, my kid sister...

And then there's Linux.  13,000 boxen hacked per
month... [5].  In fact, Linux beats Windows 4 to 1
and it hasn't even challenged the user's desktop
market yet!

It shows in the statistics, it shows in experience;
pretty much all of us have seen a cracked box at
close quarters at one point or another [6].

Windows systems are perverted in their millions by
worms, viruses, and other upgrades to the social
networking infrastructure.  Linux systems aren't
much more trust-inspiring, on the face of it.

Pretty much all of us present in this forum would
feel fairly confident about downloading some sort
of crack disc, walking into a public library and
taking over one of their machines.

Mind you... in that same library, could we walk
in and start listening to each other's comms?

Nope.  Probably not.

On the one hand, we'd have trouble on the cables,
without being spotted by that pesky librarian.
And those darn $100 switches, they so ruin the
party these days.

Admittedly, OTOH, we do have that wonderful 802.11b
stuff and there we can really listen in [7].

But, in practice, we can conclude, nobody much
listens to our traffic.  Really, so close to nobody
that nobody in reality worries about it [8].

But, every sumbitch is trying to hack into our
machine, everyone has a virus scanner, a firewall,
etc etc.  I'm sure we've all shared that weird
feeling when we install a new firewall that
notifies us when the machine is being port scanned?
A new machine can be put on a totally new IP, and
almost immediately, ports are being scanned.
How do they do that so fast?



Hence the point:  the comms is pretty darn safe.
And the node is in trouble.  We might have trouble
measuring it, but we can assert this fact:

the node is way more insecure than the comms.

That's a good enough assumption for now;  which
takes us back to the so-called Internet Threat
Model and by extension and assumption, the SSL
threat model:

  the actual end systems ... are secure.
  the attacker has more or less complete
  control of the communications channel between
  any two machines.

Quite the reverse pertains [5].  So where does that

Re: NCipher Takes Hardware Security To Network Level

2003-10-13 Thread Peter Gutmann
Anton Stiglic [EMAIL PROTECTED] writes:

But the problem is how can people who know nothing about security evaluate
which vendor is most committed to security? For the moment, FIPS 140 and CC
type certifications seem to be the only means for these people...

Yeah, it's largely a case of looking where the light is.  An extreme example
of this is the use of formal methods for high-assurance systems, as required
by FIPS 140-2 level 4.  Why is it in there?  Because FIPS 140-1 had it there
at the highest levels.  Why was it in there?  Because the CC has it in there
at the highest levels.  Why was it in there?  Because the ITSEC had it in
there at the highest levels.  Why was it in there?  Because the Orange Book
('85) had it in there at the highest levels.  Why was it in there?  Because
the proto-Orange Book ('83) had it in there at the highest levels.  Why was it
in there?  Because in the 1970s some mathematicians hypothesised that it might
be possible to prove properties of complex programs/systems in the same way
that they proved basic mathematical theorems.

(Aside: This is starting to sound like that apocryphal "Why are railway
 tracks spaced X units apart" saga).

To continue: At what point in that progression did people realise that this
wasn't a very practical way to build a secure system?  Some time in the late
1970s to early 1980s, when they actually tried to reduce the theory into
practice.  There were quite a number of papers being published even before the
first proto-Orange Book appeared which indicated that this approach was going
to be extremely problematic, with problems... well, insert the standard
shopping list here.

So why is this stuff still present in the very latest certification
requirements?  Because we're measuring what we know how to measure, whether it
makes sense to evaluate security in that way or not.  This is probably why
penetrate-and-patch is still the most widely-used approach to securing
systems.  Maybe the solution to the problem is to figure out how to make
penetrate-and-patch more rigorous and effective...

Peter.



Re: WYTM?

2003-10-13 Thread Ian Grigg
Minor errata:

Eric Rescorla wrote:
  I totally agree that the systems are
 insecure (obligatory pitch for my "The Internet is Too
 Secure Already" paper, http://www.rtfm.com/TooSecure.pdf),

I found this link had moved to here:

http://www.rtfm.com/TooSecure-usenix.pdf

 which makes some of the same points you're making,
 though not all.

iang



Re: Now Is the Time to Finally Kill Spam - A Call to Action

2003-10-13 Thread martin f krafft
also sprach R. A. Hettinga [EMAIL PROTECTED] [2003.10.13.0639 +0200]:
 The time to stop this nonsense is now, and there's a non-governmental,
 low-cost, low-effort way it could happen. Here's my plan of action, it's
 not original to me but I want to lay it out publicly as a battle plan:

Of course the plan is good, and I am all for it. But it won't be
carried out in less than 10 years.

I am much in favour of Graham's "fight back" approach, which is to
simply visit webpage URLs in all emails automatically. This will
drown spammer websites in requests and should make spam a lot less
worthwhile.

Who has been working with this system already? Are there reference
implementations?
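
As a strawman while looking, a minimal sketch of the mechanics (in
present-day Python, anachronistically) -- assuming the spam arrives as
plain text on stdin; the regex, the timeout, and the complete absence
of rate limiting are all illustrative, and a real deployment would need
loop protection and whitelists to avoid hammering innocent sites:

    # Sketch only: pull URLs out of a message and fetch each one once,
    # per Graham's "fight back" idea. Regex and timeout are assumptions.
    import re
    import sys
    import urllib.request

    URL_RE = re.compile(r"https?://[^\s\"'>]+")

    message = sys.stdin.read()
    for url in set(URL_RE.findall(message)):
        try:
            # One GET per URL; multiplied across all recipients, this is
            # what drowns the spamvertised site in requests.
            urllib.request.urlopen(url, timeout=10).read(1024)
            print("visited", url)
        except OSError as exc:
            print("failed ", url, exc)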

-- 
martin;  (greetings from the heart of the sun.)
  \ echo mailto: !#^.*|tr * mailto:; [EMAIL PROTECTED]
 
invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver!
 
oh what a tangled web we weave,
 when first we practice to deceive.
-- shakespeare




Re: WYTM?

2003-10-13 Thread Tim Dierks
At 12:28 AM 10/13/2003, Ian Grigg wrote:
Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.
I think this is an interesting, insightful analysis, but I also think it's 
drawing a stronger contrast between the real world and the Internet threat 
model than is warranted.

It's true that a large number of machines are compromised, but they were 
generally compromised by malicious communications that came over the 
network. If correctly implemented systems had protected these machines from 
untrustworthy Internet data, they wouldn't have been compromised.

Similarly, the statement is true at large (many systems are compromised), 
but not necessarily true in the small (I'm fairly confident that my SSL 
endpoints are not compromised). This means that the threat model is valid 
for individuals who take care to make sure that they comply with its 
assumptions, even if it may be less valid for the Internet at large.

And it's true that we define the threat model to be as large as the problem 
we know how to solve: we protect against the things we know how to protect 
against, and don't address problems at this level that we don't know how to 
protect against at this level. This is no more incorrect than my buying 
clothes which will protect me from rain, but failing to consider shopping 
for clothes which will do a good job of protecting me from a nuclear blast: 
we don't know how to make such clothes, so we don't bother thinking about 
that risk in that environment. Similarly, we have no idea how to design a 
networking protocol to protect us from the endpoints having already been 
compromised, so we don't worry about that part of the problem in that 
space. Perhaps we worry about it in another space (firewalls, better OS 
coding, TCPA, passing laws).

So, I disagree: I don't think that the SSL model is wrong: it's the right 
model for the component of the full problem it looks to address. And I 
don't think that the Internet threat model has failed to address the 
problem of host compromise: the fact is that these host compromises 
resulted, in part, from the failure of operating systems and other software 
to adequately protect against threats described in the Internet threat 
model: namely, that data coming in over the network cannot be trusted.

That doesn't change the fact that we should worry about the risk in 
practice that those assumptions of endpoint security will not hold.

 - Tim



Re: Software protection scheme may boost new game sales

2003-10-13 Thread Jerrold Leichter
| I've not read the said article just yet, but from that direct quote as
| the copy degrades... I can already see the trouble with this scheme:
| their copy protection already fails them.  They allow copies to be made
| and rely on the fact that the CDR or whatever media, will eventually
| degrade, because their code looks like scratches...  Rggghtt.
You should read the article - the quote is misleading.  What they are doing is
writing some "bad" data at pre-defined points on the CD.  The program looks
for this and fails if it finds good data.
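
(The check itself would reduce to something like this sketch -- the
device path, sector size, and the deliberately damaged sector numbers
are all hypothetical:)

    # Sketch only: the disc check described above. On a pressed original,
    # these sectors are deliberately unreadable; a burner "repairs" them
    # on a copy, so a clean read where damage belongs means it's a copy.
    # Device path, sector size, and sector numbers are hypothetical.
    import os

    BAD_SECTORS = [12345, 67890]      # pre-defined damaged spots
    SECTOR_SIZE = 2048                # CD-ROM Mode 1 user-data size

    def looks_like_copy(device="/dev/cdrom"):
        fd = os.open(device, os.O_RDONLY)
        try:
            for sector in BAD_SECTORS:
                try:
                    os.lseek(fd, sector * SECTOR_SIZE, os.SEEK_SET)
                    os.read(fd, SECTOR_SIZE)
                except OSError:
                    continue          # read error: the original's bad spot
                return True           # good data where damage belongs
            return False
        finally:
            os.close(fd)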

However ... I agree with your other points.  This idea is old, in many
different forms.  It's been broken repeatedly.  The one advantage they have
this time around is that CD readers - and, even more, DVD readers; there is
mention of applying the same trick to DVD's - are, compared to the floppy
readers of yesteryear, sealed boxes.  It's considerably harder to get at the
raw datastream and play games.  Of course, this cuts both ways - there are
limits to what the guys writing the protection code can do, too.

The real new idea here has nothing to do with how they *detect* a copy - it's
what they *do* when they detect it.  Rather than simply shut the game down,
they degrade it over time.  Guns slowly stop shooting straight, for example.
In the case of DVD's, the player works fine - but stops working right at some
peak point.  Just like the guy on the corner announcing "first hit's free",
they aim to suck you in, then have you running out to get a legit copy to
save your character's ass - or find out how The One really lives through
it all.  This will probably work with a good fraction of the population.

Actually, this is a clever play on the comment from music sharers that they
get a free copy of a song, then buy the CD if they like the stuff.  In effect,
what they are trying to do is make it easy to make teasers out of their
stuff.  There will be tons of people copying the stuff in an unsophisticated
way - and only a few who will *really* break it.  Most people will have no
quick way to tell whether they are getting a good or a bad copy.  And every
bad copy has a reasonable chance of actually producing a sale.

-- Jerry



Re: WYTM?

2003-10-13 Thread Ian Grigg
Eric,

thanks for your reply!

My point is strictly limited to something
approximating "there was no threat model
for SSL / secure browsing".  And, as you
say, you don't really disagree with that
100% :-)

With that in mind, I think we agree on this:


  [9] I'd love to hear the inside scoop, but all I
  have is Eric's book.  Oh, and for the record,
  Eric wasn't anywhere near this game when it was
  all being cast out in concrete.  He's just the
  historian on this one.  Or, that's the way I
  understand it.
 
 Actually, I was there, though I was an outsider to the
 process. Netscape was doing the design and not taking much
 input. However, they did send copies to a few people and one
 of them was my colleague Allan Schiffman, so I saw it.

OK!

 It's really a mistake to think of SSL as being designed
 with an explicit threat model. That just wasn't how the
 designers at Netscape thought, as far as I can tell.


Well, that's the sort of confirmation I'm looking
for.  From the documents and everything, it seems
as though the threat model wasn't analysed, it was
just picked out of a book somewhere.  Or, as you
say, even that is too kind, they simply didn't
think that way.

But, this is a very important point.  It means that
when we talk about secure browsing, it is wrong to
defend it on the basis of the threat model.  There
was no threat model.  What we have is an accident
of the past.

Which is great.  This means there is no real objection
to building a real threat model.  One more appropriate
to the times, the people, the applications, the needs.

And the today-threats.  Not the bogeyman threats.


 Incidentally, Ian, I'd like to propose a counterargument
 to your argument. It's true that most web traffic
 could be encrypted if we had a more opportunistic key
 exchange system. But if there isn't any substantial
 sniffing (i.e. the wire is secure) then who cares?


Exactly.  Why do I care?  Why do you care?

It is mantra in the SSL community and in the
browsing world that we do care.  That's why
the software is arranged in a double lock-
in, between the server and the browser, to
force use of a CA cert.
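
(To make the double lock-in concrete: the crypto itself runs fine
without paying anyone, as in this sketch -- modern Python, wholly
anachronistic for 2003, and the host name is hypothetical -- which
encrypts the channel but skips certificate checks entirely:)

    # Sketch only: an "opportunistic" TLS connection that accepts
    # whatever key the server offers -- encryption without the CA.
    import socket
    import ssl

    host = "example.com"                  # hypothetical server

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False            # no name check
    ctx.verify_mode = ssl.CERT_NONE       # no CA check either

    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # The wire is now safe against passive sniffing; all that is
            # given up is the MITM protection the CA cert would buy.
            print(tls.version(), tls.cipher())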

So, if we don't care, why do we care?  What
is the reason for doing this?  Why are we
paying to use free software?  What paycheck
does Ben draw from all our money being spent
on this "I don't care" thing called a cert?

Some people say "because of the threat model".

And that's what this thread is about:  we
agree that there is no threat model, in any
proper sense.  So this is a null and void
answer.

Other people say "to protect against MITM".
But, as we've discussed at length, there is
little or no real or measurable threat of MITM.

Yet others say "to be sure we are talking
to the merchant".  Sorry, that's not a good
answer either because in my email box today
there are about 10 different attacks on the
secure sites that I care about.  And mostly,
they don't care about ... certs.  But they
care enough to keep doing it.  Why is that?



Someone made a judgement call, 9 or so years
ago, and we're still paying for that person
caring on our behalf, erroneously.

Let's not care anymore.  Let's stop paying.

I don't care who it was, even.  I just want
to stop paying for this person, caring for me.

Let's start making our own security choices?

Let crypto run free!

iang



Re: NCipher Takes Hardware Security To Network Level

2003-10-13 Thread Joseph Ashwood
- Original Message - 
From: Ian Grigg [EMAIL PROTECTED]
Sent: Saturday, October 11, 2003 1:22 PM
Subject: Re: NCipher Takes Hardware Security To Network Level

 Is there any reason to believe that people who
 know nothing about security can actually evaluate
 questions about security?

Actually, there are reasons to believe that they won't be able to, just as I
would not be qualified to evaluate the functionality of a sewage pump
(except from the perspective of "it seems to work").

 And, independent assessors are generally subvertable
 by special interests (mostly, the large incumbents
 encourage independent assessors to raise barriers
 to keep out low cost providers).  Hence, Peter's
 points.  This is a very normal economic pattern, in
 fact, it is the expected result.

I take the counter view: assuming that an independent assessor can be found
that is truly independent, that assessor helps the small companies _more_
than the larger ones. To make a pointed example I will use a current
situation (which I am active in).

Trust Laboratories is a software assurance firm, whose first service is the
assurance of PKCS #11 modules. From the marketing perspective the large
incumbents (e.g. nCipher, which started this conversation) have little
incentive to seek such assurances; they already have a solid lock on the
market, and the brand recognition to keep it that way. The small companies
though have a much stronger incentive: with an assurance they can hint, and
in some cases maybe even outright claim, technological superiority over the
incumbents, giving them a strong road into the market. The only purpose the
incumbents have for such assurances is combating the small companies'
assurances (not that I wouldn't love to have nCipher as a customer; I think
it would lend a great deal of credibility to the assurance, as well as
solidifying their marketshare against the under-developed technologies).

 So, right now, I'd say the answer to that question
 is that there is no way for someone who knows nothing
 about security to objectively evaluate a security
 product.

That will likely always be the case. In order to judge what level of
security is required they simply must have some knowledge of security.
Otherwise it is very much like asking John Smith what Ian Grigg's favorite
food is: (a typical) John Smith simply does not have the knowledge to give a
useful answer.
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com
