Re: Leo Marks

2001-01-30 Thread Steven M. Bellovin

The obituary has, at long last, prompted me to write a brief review of 
Marks' book "Between Silk and Cyanide".  The capsule summary:  read it, 
and try to understand what he's really teaching about cryptography, 
amidst all the amusing anecdotes and over-the-top writing.

The main lesson is about threat models.  If asked, I dare say that most 
readers of this mailing list would say "of course keying material 
should be memorized if possible, and never written down".  That seems 
obvious, especially for agents in enemy territory.  After all, written 
keys are very incriminating.  It's obvious, and was obvious to the SOE 
before Marks.  It was also dead-wrong -- accent on the "dead".

The cipher that agents were taught was a complex transposition, keyed 
by a memorized phrase.  The scheme had several fatal flaws.  The first 
is the most obvious:  a guess at the phrase was easily tested, and if a 
part of the key was recovered, it wasn't hard to guess at the rest, if 
the phrase was from a well-known source (and it generally was).  
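
A rough sketch may make the flaw concrete.  The actual SOE procedure was 
more elaborate (a double transposition keyed by words chosen from the 
memorized poem), so treat the following as purely illustrative: the point 
is that the phrase deterministically yields the numeric key.

    # Illustrative only -- not the exact SOE procedure.  The memorized
    # phrase deterministically yields the numeric transposition key, so a
    # guessed phrase can be tested immediately against an intercept.
    def phrase_to_key(phrase):
        letters = [c for c in phrase.upper() if c.isalpha()]
        # Rank the letters alphabetically (ties broken left to right);
        # the ranks give the column read-out order.
        order = sorted(range(len(letters)), key=lambda i: (letters[i], i))
        key = [0] * len(letters)
        for rank, i in enumerate(order, start=1):
            key[i] = rank
        return key

    def transpose(plaintext, key):
        text = [c for c in plaintext.upper() if c.isalpha()]
        ncols = len(key)
        rows = [text[i:i + ncols] for i in range(0, len(text), ncols)]
        out = []
        for rank in range(1, ncols + 1):        # read columns in key order
            col = key.index(rank)
            out.extend(row[col] for row in rows if col < len(row))
        return ''.join(out)

    key = phrase_to_key("THE BOY STOOD ON THE BURNING DECK")
    print(transpose("REQUEST DROP AT THE USUAL FIELD TONIGHT", key))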

More subtly, doing the encryption was an error-prone process, 
especially if done under field conditions without the aid of graph 
paper.  Per protocol, if London couldn't decrypt the message, the agent 
was told to re-encrypt and re-transmit.  But that meant more air time 
-- a serious matter, since the Gestapo used direction-finding vans to 
track down the transmitters.  Doing some simple "cryptanalysis" -- too 
strong a word -- on garbles permitted London to read virtually all of 
them -- but that was time-consuming, and really pointed to the 
underlying problem, of a too-complex cipher.

The duress code was another weak spot.  If an agent was being compelled 
to send some message, he or she was supposed to add some signal to the 
message.  But if the Gestapo ever arrested someone, they would torture 
*everything* out of that person -- the cipher key, the duress code, 
etc.  And they had a stack of old messages to check against -- they 
made sure that the duress code stated by the agent wasn't present in 
the messages.  The failure was not just the lack of perfect forward 
secrecy; it was the lack of perfect forward non-verifiability of the 
safe/duress indicators.

Marks' solution was counter-intuitive:  give the agent a sheet of 
"worked-out keys", printed on silk.  These were not one-time pad keys; 
rather, they were the numeric indicators for the transposition.  This 
avoided the guessable phrases; more importantly, it eliminated the most 
trouble-prone part of the encipherment, the conversion of the key 
phrase to a numeric version.  The authentication codes were a function 
of part of the key.  Agents were instructed to destroy each "WOK" after 
use; this not only provided forward secrecy, but also prevented the 
Gestapo from verifying any statements about the duress code.  

Why silk?  Because it was easily concealed in coat linings and the 
like, and wouldn't be detected in a casual street-frisk.  Sure, if the 
Gestapo was really suspicious, they'd find it.  So what?  This is the 
*Gestapo*; if they were really suspicious, it didn't matter much if you 
weren't guilty, because you'd be in no shape to appreciate their failure 
to find anything.  We joke about rubber hose cryptanalysis; the SOE 
agents had to contend with the real thing.  And real agents had enough 
other incriminating stuff lying around that unused keys didn't matter.

There's more, but the basic lesson is clear:  understand the *real* 
threat model you face before you design any sort of security system.  
The SOE didn't, and that cost the lives of many agents.

--Steve Bellovin, http://www.research.att.com/~smb






Re: NONSTOP Crypto Query

2001-01-12 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], John Young writes:

This loops back to NONSTOP and the question of what may 
be the signatures and compromising emanations of today's 
cryptosystems which reveal information in ways that go beyond 
known sniffers -- indeed, that known sniffers may divertingly 
camouflage. 

Again going back to "Spycatcher", Wright described a number of other 
emissions.  For example, voices in a room could modulate the current 
flow through a telephone's ringer.  (This was, of course, back in the 
days of electromagnet-actuated ringers...)  One can also find signals 
corresponding to the plaintext superimposed on the output waveform of 
the ciphertext, and possibly see coupling to the power supply.  (One of 
the rules I've read:  "Step 1:  Look for the plaintext".)

I've seen brochures for high-grade encryptors that speak of "red-black 
separation" and separate power supplies for the two halves.


--Steve Bellovin, http://www.research.att.com/~smb






Re: What's Up with AES FIPS

2001-01-02 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], John Young writes:
NIST states on its Web site that a draft FIPS for AES would 
be issued for comment "shortly after announcement of the 
winner (probably in November 2000)." Anything scandalous 
behind the delay?

From what I've heard, it's just process issues.  (I discussed it 
briefly with someone from NIST during the IETF meeting last month, and 
I did not come away with any feeling of concern.)

--Steve Bellovin






Re: Fwd: from Edupage, December 22, 2000

2001-01-02 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], David Honig writes:
At 10:27 PM 1/1/01 +0530, Udhay Shankar N wrote:
Did this slip between the cracks in the holiday season, or has it already been 
discussed here?

Udhay

It's just yet another 'secure' scheme that uses quantum theory
(here, discrete photons; elsewhere, entangled photons) 
to detect or prevent leaking bits.  

More elegant than gas-pressurized, pressure-monitored 'secure' cables, but
the same idea. 

Right -- and for most situations, it solves the wrong problem.  Crypto 
is fun, but conventional crypto is likely good enough -- the real 
threats are at the end-points.




--Steve Bellovin






Re: IBM press release - encryption and authentication

2000-12-10 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "P
.J. Ponder" writes:
from: http://www.ibm.com/news/2000/11/30.phtml

IBM develops algorithm that encrypts and authenticates simultaneously 


More precisely, this is a new mode of operation that does encryption 
and authentication in one pass.  It's also amenable to parallelization, 
thus making it suitable for very high speed networks.  (Traditional 
modes of operation, such as CBC, are problematic, since every block 
depends on the encryption of the previous block.)
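
The CBC dependency is easy to see written out: C_i = E_K(P_i XOR C_{i-1}), 
so encryption is inherently serial.  A minimal sketch, assuming the 
PyCryptodome library for the raw block operation; toy key and IV only:

    # Each ciphertext block feeds the next, so CBC encryption cannot be
    # parallelized across blocks.  Assumes PyCryptodome
    # (pip install pycryptodome) for the raw AES block cipher.
    from Crypto.Cipher import AES

    def cbc_encrypt(key, iv, blocks):
        e_k = AES.new(key, AES.MODE_ECB)     # raw E_K, one block at a time
        prev, out = iv, []
        for p in blocks:                     # strictly sequential loop
            c = e_k.encrypt(bytes(a ^ b for a, b in zip(p, prev)))
            out.append(c)
            prev = c                         # next block depends on this one
        return out

    key, iv = bytes(16), bytes(16)           # toy values, not for real use
    blocks = [b'A' * 16, b'B' * 16, b'C' * 16]
    print([c.hex() for c in cbc_encrypt(key, iv, blocks)])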

--Steve Bellovin






Re: /. Yahoo delivers encrypted email

2000-12-04 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], [EMAIL PROTECTED] writes:


Yahoo's new system works like this: Once a message is composed, it
travels, unencrypted, to Yahoo,

So feel no fear in sending anything you wouldn't mind being read before 
it's encrypted?
I'm surprised AOL isn't offering this "security feature" as well ... 
I feel safer already :~)

While I don't like (what I've seen described here of) the Yahoo secure 
mail system, it isn't a priori preposterous.  In fact, in many cases it 
makes a great deal of sense.

The question is what your threat model is.  Although it's possible to 
pick up email on the wire, in many cases it's quite hard.  
Eavesdropping on an ISP's backbone is extremely difficult; the links 
are very fast, are often not Ethernet or some other easily-tapped 
medium, and providers have learned not to put general-purpose (and 
hence hackable) machines on their backbone.  

The real threats come near the edges, and in the spool files where the 
mail sits before being delivered or picked up.  The latter threat, in 
particular, is quite serious.  If the encryption happens on a separate, 
secure machine before storage, it might be quite good against that 
threat -- and if both parties to a conversation are using dial-up 
links, there is little to worry about.

Sure, if you're a possible target of Carnivore, this is grossly 
insufficient.  But that doesn't describe most people.  Their threat 
probably comes from their very own machines, and is best defeated by 
any mailer that doesn't leave plaintext lying around, either in a Trash 
folder or in a Web cache.

--Steve Bellovin






Re: Republic targeted for sale of 'unhackable' system

2000-11-16 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], William Knowles writes:
Snakeoil?

[Smells like it. --Perry]

http://www.ireland.com/newspaper/finance/2000/1110/fin10.htm


I don't know if it's really snake-oil -- it's possible, of course, that 
they've developed a new, useful encryption algorithm, though of course 
the odds are against that -- but no matter how good the algorithm is, 
it's not "unhackable".  Leaving aside the distinction between 
cryptanalysis and hacking (and it's a huge one!),  most security 
problems are due to buggy code and/or bad systems administration.  The 
best encryption in the world can't stop buffer overflows, to give just 
one example.

--Steve Bellovin






Re: AES winner(s) to be announced 11am Monday

2000-09-30 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], [EMAIL PROTECTED] writes:

http://csrc.nist.gov/encryption/aes/

   will we see official DOI # for IPsec/IKE right after this?

itojun


Certainly, very soon thereafter.

--Steve Bellovin






Re: I foresee an increased interest in secret-sharing

2000-09-08 Thread Steven M. Bellovin

You're being a mathematician.  Be a cop instead.  

Police manage to arrest people all the time for, say, murder, even 
though mathematically there are lots of people who could have committed 
the crime.  Perhaps 10 different people have had to disclose shares of 
the key to Inspector Lestrade.  But only one has been observed 
scribbling clocks in a reporter's NY Times, and hanging around parking 
garages at weird hours.

Put another way, legal proof is not the same as mathematical proof.  To 
use geeky language, cops should, can, and will look at out of band 
information.  Maybe using secret-sharing will increase the pool of 
suspects.  But the increase is by less than you think, since even if 
just one person had the key, many more people are likely to be aware of 
*some* interaction.  The court (or police) order isn't served directly 
on precisely the right individual; rather, it's served on the company, 
which brings in its general counsel, several layers of pointy-hairs, 
etc., until the key-holder is located.  (The CEO doesn't keep 
operational keys in his or her safe; rather, it's likely to be some 
bearded nerd in the operations dept. who hangs out on cypherpunks.)  
To be sure, the order may not specify that the key is wanted, but 
finding the right individual is hard without *someone* else knowing 
what's going on.

--Steve Bellovin






Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-05 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Dan Geer writes:

   How do they exchange public keys?  Via email I'll bet.

Note that it is trivial(*) to construct a self-decrypting
archive and mail it in the form of an attachment.  The
recipient will merely have to know the passphrase.  If
transit confidentiality is your aim and old versions 
of documents are irrelevant once the ink is dry on the
proverbial bond paper, this is quite workable and involves
no WoT at all, just POTS.

No!  We've discussed this point many times before -- what if the 
attacker sends a Trojan horse executable?

--Steve Bellovin






Re: RC4 vs RC5

2000-08-02 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Chris Duffy writes:
I was searching around and chanced upon your list.  I am trying to compare
RC4 vs RC5 encryption.  Can someone fill me in on the
advantages/disadvantages of these two?  Thanks,

They're not related.

RC4 is a stream cipher.  It's very fast per byte; key setup is roughly 
equivalent to encrypting 256 bytes.  As with all "generate a key stream 
and XOR with the data" ciphers, you have to be very careful to avoid 
reusing the same key stream.  Also, in the absence of a 
cryptographically strong integrity check mechanism, it is possible for 
an attacker to introduce predictable changes in the eventual plaintext.
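
Both cautions apply to any "keystream XOR data" construction, not just 
RC4.  A small sketch; os.urandom() stands in for the RC4 keystream, since 
the point is independent of the cipher:

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = os.urandom(32)              # pretend this came from RC4

    p1 = b"PAY ALICE $100 TODAY PLEASE OK"
    p2 = b"THE MEETING IS AT NOON FRIDAY."
    c1, c2 = xor(p1, keystream), xor(p2, keystream)

    # 1. Keystream reuse: XORing the two ciphertexts cancels the keystream
    #    entirely, leaving the XOR of the plaintexts.
    assert xor(c1, c2) == xor(p1, p2)

    # 2. Malleability: with no strong integrity check, flipping ciphertext
    #    bits flips the matching plaintext bits predictably.
    delta = xor(b"$100", b"$900")
    forged = c1[:10] + xor(c1[10:14], delta) + c1[14:]
    print(xor(forged, keystream))           # b'PAY ALICE $900 TODAY PLEASE OK'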

RC4 is very widely used, notably with TLS.

RC5 is a block cipher, and is used with the same modes of operation as 
other block ciphers.  RC5 is not widely used.

In the legal arena, RC4 is a trademark; the non-RSA implementations are 
sometimes referred to as "ARC4".  There's a long story about the 
original appearance of the code on the net, but I won't go into it now. 
RC5 is protected by patents.

To forestall the next question, RC6 is also a block cipher, using some 
of the same design principles as RC5.  RC6 is one of the five AES 
finalists.  If it is selected as the winner, the owner has agreed to 
(in effect) waive its patent rights for use in AES.  If it isn't 
selected -- well, I don't believe there are any guarantees on that.


--Steve Bellovin






Re: Elgamal

2000-07-26 Thread Steven M. Bellovin

In message [EMAIL PROTECTED] 4.1.2721150740.00[EMAIL PROTECTED], John Kelsey writes:
-BEGIN PGP SIGNED MESSAGE-

At 10:37 PM 7/19/00 -0400, Steven M. Bellovin wrote:
The important thing is that the random number really has to be
random  and unguessable.  

There was a clever trick for doing signatures like this without a
random number generator, using the one way hash function and the
private key only.  I am away from my library right now, so I can't
look up the reference, but the gist of the idea is:

r = hash(hash(private key),hash(message))

and then expand r to the necessary length by one of the standard
mechanisms, e.g.

r0 = hash(0,r)
r1 = hash(1,r)
...
r_n = hash(n,r)

The idea is that if the hash has some nice pseudorandomness
properties and is really one-way, we get everything we need from r
(or r0,r1,...,r_n) without a random number generator.

That works, though I think I'd include a counter or some such in the 
hash, so that the same r was not used for two identical messages.
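
A minimal sketch of that derivation, with the counter folded in.  SHA-256 
and the byte encodings here are arbitrary choices for illustration, not 
the construction from the paper Kelsey is recalling:

    # r = H(H(private key), H(message), counter), expanded as H(0,r),
    # H(1,r), ... -- the counter keeps identical messages from reusing r.
    import hashlib

    def H(*parts):
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def derive_nonce(private_key: bytes, message: bytes,
                     counter: int, nbytes: int) -> int:
        r = H(H(private_key), H(message), counter.to_bytes(8, 'big'))
        stream, i = b'', 0
        while len(stream) < nbytes:
            stream += H(i.to_bytes(4, 'big'), r)   # r0 = H(0,r), r1 = H(1,r), ...
            i += 1
        # A real signature would still reduce this modulo the group order.
        return int.from_bytes(stream[:nbytes], 'big')

    print(hex(derive_nonce(b'signing key', b'message to sign', counter=0, nbytes=32)))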

The trick is reminiscent of the way PGP uses a hash of the message as 
part of its pool of randomness.

--Steve Bellovin






Re: Self Decrypting Archive in PGP

2000-07-24 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Eric Murray writes:


Why not send then a SDA that contains a copy of PGP, installs it,
generates a key for the user, posts it to a keyserver, sets up the
correct MIME content-type hooks in the user's browser, and then send
them the real PGP-encrypted file 10 minutes later when they're equipped
to deal with it?

It's still not secure, but it's a lot less insecure than a SDA.

No, it's not, since it reinforces the habit of opening random pieces of 
mobile code.  (If nothing else, maybe the copy of PGP it installs has a 
Trojan horse that exports the user's private key.  But there are lots 
of other threats here, and I don't think I need to point them out yet 
again.)

Someone referred to my Web page on secure email.  It's at
http://www.research.att.com/~smb/securemail.html, though only the last 
few paragraphs deal with this question.

--Steve Bellovin






Re: FBI announcement on email search 'Carnivore'

2000-07-15 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Meyer Wolfsheim writes:
-BEGIN PGP SIGNED MESSAGE-

On Fri, 14 Jul 2000, Steven M. Bellovin wrote:

 According to the AP, the ACLU has filed a Freedom of Information Act 
 request for information on Carnivore.  See http://www.aclu.org/news/2000/n071400a.html
 and http://www.nytimes.com/aponline/w/AP-FBI-Snooping.html

I notice in this article that one of their programs is
"EtherPeek". Assuming this is the same as the well known ethernet sniffer,
you don't need to file for FOIA to learn about it.

http://www.aggroup.com/

Additionally, I don't believe the source is available, and I would doubt
the FBI would have the source for it. But, assuming that a) this is the
same product that the FBI is using, and b) they were given the source
under the agreement that it not be disclosed, could the FOIA force the
disclosure of this code?

Probably not.

I was trying to avoid quoting the whole NY Times article; if you don't 
subscribe to the Times, you can find the same article (I think) at
http://www.accesswaco.com/shared/news/ap/ap_story.html/Washington/AP.V0971.AP-FBI-Snooping.html

Anyway -- according to the story, there are a number of exemptions in 
the Freedom of Information Act that might prevent disclosure of the 
source code.  But the FOIA request was also for any internal FBI 
documents on the subject; those are much less likely to be protected by 
the exemptions.

--Steve Bellovin






Re: FBI announcement on email search 'Carnivore'

2000-07-14 Thread Steven M. Bellovin

According to the AP, the ACLU has filed a Freedom of Information Act 
request for information on Carnivore.  See http://www.aclu.org/news/2000/n071400a.html
and http://www.nytimes.com/aponline/w/AP-FBI-Snooping.html


--Steve Bellovin






Re: FBI announcement on email search 'Carnivore'

2000-07-13 Thread Steven M. Bellovin

I had posted a note saying that pen register usage in New York was 
barred by the courts unless a wiretap warrant had been issued.  I need 
to update that posting.

First, that opinion was rendered in People vs. Bialostok, 80 NY2d 738, 
http://www.law.cornell.edu/cgi-bin/nyctap.cgi?80+738  But it is no 
longer in force.  In People vs. Martello, 99 N.Y. Int. 0113, 
http://www.law.cornell.edu/ny/ctap/I99_0113.htm, the Court noted that 
subsequent to the events in the earlier case, the legislature passed a 
law specifically defining pen registers and providing for their use.  
The earlier ban is thus no longer in effect.  Furthermore, since they 
had made their decision on statutory grounds, rather than 
constitutional grounds, the legislature was free to change the 
procedures required.

So -- I doubt that that case would have any bearing on any Federal 
lawsuit.

--Steve Bellovin






Re: FBI announcement on email search 'Carnivore'

2000-07-12 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Meyer Wolfsheim writes:
-BEGIN PGP SIGNED MESSAGE-


I guess this explains the FBI's opposition to the Verio merger. I wonder
if a colocation company or service provider could be forced to disclose
its participation in the Carnivore project. Any AboveNet/Exodus customers
here want to try?

There's been speculation about NSA black boxes in such facilities for
years. The FBI, however, isn't quite as "above the law" as the NSA likes
to think it is. What would the legality of operating a random email
sniffer be? Unlike a phone system, you can't wiretap email on the network
level without violating the privacy of all the other users sharing that
switch. 

Is there any old case law on wiretaps on telephone party-lines, where
uninvolved parties were monitored?

There was an interesting case in New York in 1993, where the Court of 
Appeals (the highest state court in New York -- the Supreme Court there 
is the trial-level court) ruled that pen registers (devices for 
recording dialed numbers) could not be used without a wiretap warrant -- 
and wiretap warrants are much harder to get.  Their reasoning was that 
in order to record the dialed number, you had to tap the line; 
therefore, the same requirements should apply.  (I don't have a precise 
citation for this case; the text of the opinion I have says "not yet 
published".)

In this situation, everyone's email has to be scanned in order to 
isolate the desired traffic.  In other words, we have a general wiretap 
device that -- according to the FBI -- is used only in accordance with 
the restrictions of the warrant.  But that was the case with pen 
registers in New York, and the court wouldn't buy it.

This precedent isn't binding on the FBI, but Federal courts do refer to 
state court opinions when appropriate.  It might be an interesting case.


--Steve Bellovin






Re: FBI announcement on email search 'Carnivore'

2000-07-12 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Marc Horowitz writes:
"Steven M. Bellovin" [EMAIL PROTECTED] writes:

 In this situation, everyone's email has to be scanned in order to 
 isolate the desired traffic.

I've seen this claim before, and I don't think it's true.  It's like
saying to wiretap my phone calls, you need to tap an entire fiber, and
do voiceprint ID to find my calls.  It's much easier and more
effective only to tap my line.



In general, I can't see why the FBI needs tools like Carnivore to tap
email.  The store-and-forward nature of email means there's a place
you can go to find the email, and the structure of most email systems
means there's a place which contains only the email for that user.

Right -- but this is a network device.  From the AP wire:

Marcus Thomas, who heads the FBI's cybertechnology section,
told the Wall Street Journal that the bureau has about 20
Carnivore systems, which are PCs with proprietary software.
He said Carnivore meets current wiretapping laws, but is
designed to keep up with the Internet.

``This is just a specialized sniffer,'' Thomas told the
Journal, which first reported details about Carnivore.

If the FBI says that it's a sniffer, rather than something that looks
at spool files, I'm not really in a position to argue...




Re: FBI involves itself in Verio merger

2000-07-07 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Damien Miller writes:
On Fri, 7 Jul 2000, Bill Stewart wrote:

 The current UK effort is why we also need "Perfect Forward Secrecy
 In Everything"; it's hard to force someone to turn over their
 decryption keys when their equipment doesn't store them past a
 session, and it's easier to argue that you shouldn't be required to
 turn over a signature key that can only be used for forgery than a
 decryption key which could reveal past session keys.

IANAL but wouldn't the UK's proposed legislation make software that
won't provide access to all keys implicitly illegal?

"Implicit" rarely counts in law -- at least in the U.S., and most 
likely in the U.K., given the common foundations of the legal systems.  
What matters is what the statute says.  If it says "you must turn over 
any keys you possess, upon proper demand", there's no problem.  If it 
says "if you use encryption, you must be able to turn over the keys", 
you might have a problem.  And if it says "you must keep track of all 
keys you use" -- well, yes, that does seem to rule out perfect forward 
secrecy...

--Steve Bellovin






Re: WIPO: e-Government ante portas

2000-06-28 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "Axel H Horns" writes:


1. The first striking item (page 3, section 3.1) is that despite 
relaxation of crypto regulations, a clause is provided according to 
which "an industrial property Office or recognized Certification 
Authority may decide to offer Key Recovery for the confidentiality 
key pair when allowed (or required) under national laws". It seems 
not to be clear whether this "service" is offered to the Offices 
aimed at the applicants. 

The U.S. Patent and Trademark Office pulled a similar stunt a couple of 
years ago.  This is preposterous.

2. According to the WIPO paper, acceptable digital signatures in the 
context of any PKI are to be bound to PKCS#7 (page 4, section 3.4):

  ftp://ftp.rsasecurity.com/pub/pkcs/doc/pkcs-7.doc

What I don't know is whether PKCS Standards are under the control of 
any public standards body or they are simply a de-facto industry 
standard made by RSA Labs. Can this standard (at least theoretically) 
be changed without notice by RSA Labs? Is there any corresponding 
"official Standard" which might be used instead of referencing 
PKCS#7? Would it have been a better idea from a technical point of 
view to use the emerging OpenPGP standard instead? Are there modular 
implementations of software packages for dealing with PKCS#7 formats 
available under the GNU GPL License?  

The PKCS standards are regularly reviewed by an open group.

3. Under section 3.5, the WIPO paper recommends a symmetric 
encryption algorithm called "dES-EDE3-CBC". I have never heard of 
that. What is the meaning thereof?

That's triple DES in encrypt-decrypt-encrypt fashion, using cipher 
block chaining.  It's common, secure, and conservative.  Expect it to 
be changed to AES-CBC in a couple of years.
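
A small sketch of what that name denotes, assuming the PyCryptodome 
package; the keys, IV, and data below are toy values only:

    # Triple DES per block is Encrypt(k1) -> Decrypt(k2) -> Encrypt(k3);
    # the blocks are then chained in CBC mode.  Assumes PyCryptodome
    # (pip install pycryptodome).
    from Crypto.Cipher import DES, DES3

    k1, k2, k3 = b'8bytekey', b'8byteKEY', b'8ByteKey'
    block = b'ABCDEFGH'                              # one 8-byte DES block

    # The E-D-E construction, spelled out with single DES:
    step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
    print(DES.new(k3, DES.MODE_ECB).encrypt(step2).hex())

    # In practice one uses the library's 3DES object with CBC chaining:
    iv = bytes(8)                                    # a real IV must be unpredictable
    cipher = DES3.new(k1 + k2 + k3, DES3.MODE_CBC, iv)
    print(cipher.encrypt(b'sixteen byte msg').hex())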

4. Under section 3.7, SHA-1 is selected as Message Digest Algorithm. 
Would you say that this algorithm is a proper state-of-the-art choice 
for an upcoming new business standard?

Yes.

5. Obviously due to political considerations, a least common 
denominator has been implemented regarding to the requirement that 
the applicant has to provide an electronic signature when filing a 
PCT patent application with the respective Receiving Office: In 
section 4 ("Signatures Mechanisms") the Receiving Offices are allowed 
to require/allow

 (a) Basic Electronic Signatures
 (i)   Facsimile image of the user's signature
 (ii)  Text string, e.g. "/John Doe/"
 (iii) "Click Wrap" signature; a text string simply indicating
   that the applicant has pressed the "OK" button on his
   electronic filing software;
 (b) Enhanced Electronic Signature
 (i)   PKCS#7 Signature

In other words: Receiving Offices are free at their discretion to 
choose snake oil or virtually nothing instead of cryptographical 
signatures.

It's not snake oil, in that no one is being deceived, nor is the 
primary meaning depending on the cryptography.  A patent application 
(as you well know) is not a one-shot transaction where you toss 
something over the fence, and where it is uncertain who the other party 
is.  The point of the signature is to state that you are attesting, 
under penalty of law, to the truth of certain statements; in that 
sense, the signature is more "solemnification", a word that I believe 
has been used in court cases on the validity of computer-printed 
signatures.

6. Regarding text formatting, the WIPO paper starts with XML which 
seems to be a well done choice. However, the text of patent 
applications can also be filed in .PDF format (page 8, section 5.1.2) 
"Acrobat V3 compatible" whatever that means. I would be happy to know 
whether or not the .PDF data format is proprietary to ADOBE, Inc. or 
a public standard managed by a proper standards body. Has it been 
publicly disclosed at all? Or is the available knowledge on .PDF 
based on some kind or reverse engineering?

I'd rather see something else; however, PDF is a de facto standard that 
is necessary to deal well with diagrams.  XML alone wouldn't cut it.

7. All data constituting the PCT application are packaged into a 
single container file using the ZIP format:

  http://www.pkware.com/appnote.html

Again, this looks rather proprietary. Is it really a good idea to 
rely on ZIP instead of, e.g., MIME? Is there software available under 
the GNU GPL License for dealing with .ZIP formats?

Yes, there's the 'zip' and 'unzip' commands.  I don't know of 
corresponding MIME-based commands for most platforms, except for 
mailers.



--Steve Bellovin






Re: legal status of digital signatures

2000-06-09 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "
P.J. Ponder" writes:

I think Perry is right, generally speaking.  An argument could certainly
be made - with or without this federal act, or without any of the various
state laws on the books - that a _real_ digital signature (like an RSA
digital signature) is legally binding for any purpose and in the same
context that a holographic or handwritten signature would be binding.  I
assume that when Perry says 'digital signature' he means digital
signature, and not 'electronic signature' as defined above.  The Statute
of Frauds doesn't really present that big of a legal obstacle, since the
modern interpretations of 'writing' are broad enough to include electronic
writings. 

Absolutely.  There was an opinion by some piece of the U.S. government 
a fair number of years ago which cited lots of case law on this 
subject.  The bill I mentioned covers many other things, including, for 
example, use of email instead of paper mail to notify consumers of 
certain things.

--Steve Bellovin






Re: random seed generation without user interaction?

2000-06-07 Thread Steven M. Bellovin

In message [EMAIL PROTECTED] 4.1.2607054551.00[EMAIL PROTECTED], John Kelsey writes:
At 10:33 PM 6/6/00 -0400, Arnold G. Reinhold wrote:

...
The patent appears much broader than just focusing a camera on a Lava 
lamp. They claim digitizing the state of any chaotic system and then 
hashing it to seed a PRNG. The Lava lamp is given as a specific 
example (claim 3).

Wouldn't Don Davis' work on hard drive timings, in which he specifically
claimed that the system was chaotic, qualify as prior art for this?  

[Wouldn't all the work done on things like hashing inputs in general
to distil entropy, which was around for years before this patent,
count? --Perry]

Perry's point is actually more pertinent.  If you read the patent, 
they explicitly cite use of chaotic systems as prior art.  But they 
point out that such a system may be deterministic over a short enough 
interval.  They therefore propose the "novel" step of hashing the 
output of the digitized chaotic system...

Now, where did I put my datasheet for the ATT 7001 chip, which did in 
fact hash the output of one of the chaotic sources they specifically 
cite?

--Steve Bellovin






Re: random seed generation without user interaction?

2000-06-06 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Dennis Glatting writes:

 

There is an article (somewhere) on the net of digital cameras focused
on lava lamps. Photos are taken of the lava lamps and mixed into a
hash function to generate random data. I believe the author had some
algorithm for turning the lamps on and off, too.

See lavarand.sgi.com.

I had thought it was patented, but a quick search of uspto.gov didn't 
turn it up.

--Steve Bellovin






Re: NSA back doors in encryption products

2000-05-27 Thread Steven M. Bellovin

In message v04210109b5531fa89365@[24.218.56.92], "Arnold G. Reinhold" writes:
At 11:17 AM -0500 5/25/2000, Rick Smith wrote:


o There is the proposed legislation I cited earlier to protect these 
methods from being revealed in court.  These are not aimed at news 
reports (that would never get past the Supreme Court), but would 
allow backdoors to be used for routine prosecutions without fear of 
revealing their existence.

That's tricky, too, since the Constitution provides the *defense* with 
a guarantee of open trials.  At most, there are laws to prevent 
"greymail", where the defense threatens to reveal something sensitive.  
In that case, the judge reviews its relevance to the case.  If it is 
relevant -- and a back door used to gather evidence certainly would be 
-- the prosecution can either agree to have it revelated or drop the 
case.

I'm not saying there aren't back doors that wouldn't fall into this 
category; I am saying that such a law would have to be very narrowly 
crafted to pass constitutional muster.

--Steve Bellovin






Re: NSA back doors in encryption products

2000-05-24 Thread Steven M. Bellovin

In message 001a01bfc599$355fc440$31cf54ca@emnb, "Enzo Michelangeli" writes:


 John Gilmore wrote:
  Anybody tested the primes in major products lately?

 Interesting point ... of course, these days one can produce checkable
 certificates of primality - but I'm not aware of any free software to do
 it ... is there any?

What about the one quoted below?

Enzo


A beta release of CERTIFIX (a primality proving program I am
writing) is available. It is based on the Goldwasser, Kilian and
Atkin algorithm.

CERTIFIX is an executable for Win95, Win98, NT (hardware Intel
compatible). It is a freeware.

Currently, it can certify a 1024 bit integer in less than
10 mn (AMD K6-2/450 processor).

Download link:
  http://www.znz.freesurf.fr/files/certifix.zip  (300 Kb)

The package contains the 5 following files
  certifix.exe
  certifix.hlp
  readme.txt
  todo.txt
  changes.txt


Let me see if I understand -- we're worried that NSA has buried secret 
composite numbers in our code.  So we're going to solve that problem by 
running a random binary -- source isn't in the zip file -- from someone 
else?
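
If the worry is a planted composite, one doesn't need anyone's binary: a 
probabilistic test (not the checkable certificate the quoted note 
describes, but enough to catch a bogus "prime") is a few lines of code 
you can read yourself:

    # Miller-Rabin, for checking a product's published primes yourself.
    # Probabilistic: a composite survives `rounds` rounds with probability
    # at most 4**-rounds.
    import random

    def is_probable_prime(n: int, rounds: int = 40) -> bool:
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    print(is_probable_prime(2**127 - 1))    # True: a Mersenne prime
    print(is_probable_prime(2**128 + 1))    # False: composite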

--Steve Bellovin






Re: Critics blast Windows 2000's quiet use of DES instead of 3DES

2000-05-22 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Eivind Eklund writes:
On Sat, May 20, 2000 at 10:40:01AM -0700, David Honig wrote:
 At 11:07 AM 5/20/00 -0400, Steven M. Bellovin wrote:
 concern buggy crypto modules, and ask yourself how using triple AES 
 would have helped.))
 
 Was this a slip of the finger or are you proposing a 3x256-bit key
 mode for the reeealy paranoid?

3x256 bit isn't enough for the really paranoid.

For protection against the intelligence agencies, I do not trust any single
cipher.   I want at least three different ciphers (ones that are generally
considered pretty secure), each running in EDE (3x) mode, and preferably
with different design principles.

...  Etc.

You miss my point entirely.  At even the level I suggest (and no, 
"triple AES" was not a typo), the cipher is not the weakest link.  Have 
you guarded against TEMPEST?  Tailored viruses?  Physical bugs in your 
keyboard?  Cameras in your ceiling?  Differential power analysis?  
Bribing or suborning your co-conspirators?  A subpoena attack?  In some 
countries, rubber hose cryptanalysis?  Bugs in your software or your 
procedures?  Plaintext left lying around?  What about the cryptographic 
protocols you wrap around the cipher?  For that matter, how are you 
going to guard or remember your private and/or symmetric key?

Good ciphers are certainly very important, but they're far from the 
only cause of security problems.  In most cases, they're not even the major 
issue.  In fact, adding too much complexity -- in, say, the software 
you need to implement your three rounds of cipher, your key mix, your 
public key ciphers, your Merkle puzzles -- can itself be a problem.

--Steve Bellovin






Re: IP: FBI insists it can tap e-mail without a warrant

2000-05-19 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "Perry E. Metzger" writes:


As interpreted by the FCC, the act also would require telecommunications 
providers to turn over "packet-mode communications" - such as those that 
carry Internet traffic - without the warrant required for a phone wiretap.

I think that the probability that the FBI will be allowed access to 
packet content without a warrant is about zero.  There's too much 
precedent against them.

First, look at the origin of the current Federal wiretap law (18 USC 
2511).  Circa 1967 (I'm on a plane and don't have my references handy), 
the U.S. Supreme Court ruled that warrantless wiretaps were violations 
of the Constitution.  The principle of stare decisis (which I can type
but not pronounce...) makes it quite improbable that they would overturn
that ruling, this soon.  The legislative response was Title III of the 
Omnibus Crime Control Act of 1968, which set out the current wiretap 
requirements.  These were amended by the ECPA to cover other forms of 
communications.  

The CALEA requirements are for headers -- for better or worse, the 
equivalent data in the voice world, the caller and called numbers, are 
not protected as strongly.  One can argue that those are, legally 
speaking, more easily obtainable.  The argument was made to the FCC 
that it's very hard to separate headers from content (and rightly so, I 
think -- consider all the myriad forms of tunneling).  So the FBI said, 
in effect, "give us all the data and we'll do the hard stuff".

It's very hard to see how that will fly.  It certainly won't in New 
York, where the courts have ruled that even pen registers require a 
wiretap warrant, since the same technology is involved.  Using not 
just the same technology, but actually turning over the content on a 
promise of good behavior, with no warrant -- that's not one I'm worried 
about.  (Of course, if it passes muster, I'll be *very* upset, but I 
very much doubt that that will be the case.)


--Steve Bellovin






Re: Automatic passphrase generation

2000-05-11 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Paul Crowley writes:
Rick Smith [EMAIL PROTECTED] writes:
 If you can control the risk of off-line attacks (i.e. theft of the password
 file) then attackers are stuck performing on-line attacks. The system under
 attack can usually detect on-line attacks and take countermeasures to
 reduce the risk of a successful penetration.
 
 A related strategy is to combine the simple secret with a larger, more
 random secret. But this provides better security only if you can keep
 attackers from stealing the larger secret. One approach is to embed the
 larger secret inside a tamper resistant device like a smart card, and set
 up a protocol that doesn't allow the secret to leak out. But there's still
 the challenge of protecting the copy of the secret stored on the server.

The SRP authors (http://srp.stanford.edu/) suggest that SRP can be
enhanced such that the server knows neither secret, only a verifier
for the secrets.  This means you have to extract the secret from the
smartcard itself.

Mike Merritt and I described such a mechanism in our A-EKE paper,  
http://www.research.att.com/~smb/papers/aeke.ps (or .pdf), several 
years earlier.  Briefly, use a DSA public key as the shared secret for 
EKE (http://www.research.att.com/~smb/papers/neke.ps or .pdf), then 
send an additional message from the client that uses the private key to 
sign a random value, perhaps the negotiated key.  
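
A structural sketch of that flow.  The EKE exchange itself is stubbed out, 
the DSA operations use the Python 'cryptography' package, and the key pair 
is generated rather than derived from the password as A-EKE would do; all 
of that is assumption for illustration, not the paper's construction:

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import dsa

    # Enrollment: in A-EKE this pair would be derived from the password;
    # the server stores only the public key, which doubles as the EKE
    # shared secret.
    private_key = dsa.generate_private_key(key_size=2048)
    public_key_bytes = private_key.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

    def run_eke(shared_secret: bytes) -> bytes:
        # Stand-in for the EKE exchange, which would use shared_secret to
        # encrypt the key-agreement messages and output a session key.
        return os.urandom(32)

    session_key = run_eke(shared_secret=public_key_bytes)

    # The additional A-EKE message: the client signs the negotiated key
    # with the private key, proving it knows more than the stored verifier.
    signature = private_key.sign(session_key, hashes.SHA256())

    # Server side: verify with the stored public key.
    private_key.public_key().verify(signature, session_key, hashes.SHA256())
    print("client authenticated")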



--Steve Bellovin






Re: key agility and IPsec

2000-04-27 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Ron Rivest writes:

Steve --

Don't your statistics support the argument that key agility is
*not* likely to be terribly important by itself?

With a cache capable of storing only 5 key setups, you get at least a
75% hit rate, by your statistics.  

This effectively reduces key setup time by a factor of *four*, making it
really second-order compared to the bulk of the encryption work to be
done.

Depending on the algorithm, a cache for 5 key setups is pretty
minimal.  For example, a setup key for RC6 requires only 176 bytes; a
kilobyte of RAM would easily do for a five-key cache.

I like your miss-rate statistics, but feel they support better the
argument that ``key agility is not terribly important by itself''
rather than the statement ``key agility is terribly important.''

The short answer is "I don't know yet, and that's why I'm doing
experiments".  But let me give a more detailed answer.

I assume that any hardware implementation of AES for something like
IPsec will have some cache for algorithms that would benefit from
one.  (It's possible that some of the other candidates can do
key-scheduling in-line, and hence won't need a cache.)  The question
is how big a cache is needed.

My experiment was couched in those terms because I regard that
question as more useful.  Without a cache, the limiting case is
alternating short packets for two SAs, in which case you're doing
a setup operation for every packet.  In that case, then, the
calculation is simple:  the chip has to be able to run at a rate
so that the time for key setup plus cryptography is less than the
total transmission time for the packet.  Otherwise, it would not
be able to keep up with a large burst.  The smaller the packet,
the less time there is to amortize the cost of the key setup.  If
key setup took as much time as encryption of 3-4 blocks, it would
require a chip to run about twice as fast if the data were exclusively
short blocks.  My numbers certainly support the notion that in IP,
you'll see very few back-to-back packets for the same destination;
even in my small-scale experiment, I saw that 60-70% of the pairs
were to different destinations.  That number is very likely to go up
for higher-speed nets with more endpoints.
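
The limiting-case arithmetic is simple enough to write down; the numbers
below are illustrative only, with key setup expressed in block-encryption
equivalents:

    # If every packet forces a fresh key setup, the crypto engine must be
    # (setup_cost + packet_blocks) / packet_blocks times faster than the
    # line rate to keep up.
    def required_speedup(packet_bytes: int, setup_cost_blocks: float,
                         block_bytes: int = 16) -> float:
        packet_blocks = max(1, packet_bytes // block_bytes)
        return (setup_cost_blocks + packet_blocks) / packet_blocks

    for pkt in (40, 64, 576, 1500):                 # common IP packet sizes
        print(pkt, round(required_speedup(pkt, setup_cost_blocks=4), 2))
    # 40-64 byte packets need roughly 2-3x the raw speed; a 1500-byte
    # packet barely notices the key setup.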

As I said, though, that's a limiting case and not realisitic.
First, chips will most likely have caches.  Second, not all blocks
are short in IP, thus providing some headroom.  But we need traffic
data to quantify the effects of those figures.  For example, the
mix of short and long packets determines how much input buffer
space the cryptographic engine would need to hold incoming packets
while doing key setup for a sequence of short ones.  Similarly, we
don't yet know how large a cache is needed for more SAs -- does it
scale linearly?  (I suspect not, but I don't know.)  I was working
with about 150 SAs over a period of several hours, but there are
products on the market today that support 2000 simultaneous SAs,
at 70 Mbps.  (See, for example, Timestep's Newbridge 237.)  What
size gateways will be needed in 10 years?

There are also other sorts of VPNs than IPsec-based ones.  In ATM,
cells are 53 bytes, stressing any key setup algorithm if fully
mixed.  If the ATM net is used to carry IP traffic, you'll see
bursts more or less equivalent to IP's statistics, but if secure
voice is being carried, you really will see interleaved packets
from different conversations.  I experimented on IPsec gateways because
I know them better, and have one available for experimentation.

What we need is more data.  If someone has a larger IPsec gateway and
can monitor the traffic, I'll be happy to work with them to crunch the
data.  What I did was just a start.


--Steve Bellovin






Re: from Interesting People: Record encryption puzzle cracked -- finally

2000-04-17 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "Perry E. Metzger" writes:
 
 Anyone know anything about this?
 
 -- 

See http://www.inria.fr/Presse/pre67-eng.html or 
http://www.inria.fr/Presse/pre67-fra.html -- it was an attack on an 
instance of elliptic curve cryptography.

--Steve Bellovin






Re: Key agility

2000-04-16 Thread Steven M. Bellovin

I'll try to reply in more detail tomorrow; for now, let me say that the network 
traffic situation is vastly more complex than you describe.

First, the papers you and Hari cite are for wide-area traffic.  IPsec VPNs 
will probably have characteristics much more like LAN or site-local traffic.  
We don't have reliable data on modern intrasite traffic; the latest good study 
I've found is Gusella's, from around 1991 or thereabouts.  75% of WAN traffic 
is http; while there is certainly a lot of internal Web use, you'll also see 
networked file systems, a lot of email polling and submission, and very many 
custom applications.  We don't know what this traffic is like.

I do agree that traffic has a bimodal size distribution (in fact, I think I 
said so at AES-3); I based my statement in part on the same papers and Web 
sites that you mention.  And while the average packet size is indeed tending 
upwards, there are a number of complicating factors.

First, Path MTU usage is being hindered by firewalls that block the necessary 
ICMP messages.  Second, packet size is also affected by flow size, and the 
average flow is remarkably short.  In other words, if you're not sending very 
much data, you can't send it in very big packets.  (This statement is also 
based on measurements of wide-area traffic, and hence is dominated by the 
behavior of Web clients and servers.)  That, in turn, is unlikely to change 
until we have many more HTTP 1.1-compliant browsers and servers deployed; that 
is happening rather slowly (http://www.research.att.com/~bala/papers/procow-1.ps.gz).

The real question, though, is what the probability is of a packet for one 
security association being followed by a packet for another.  On this, we have 
little data; what we do have suggests to me that we will see rather more 
context switches.  Raj Jain's work on "packet trains" suggests that large 
bursts of data will be sent out in several consecutive packets; depending on
the ACK delay strategy, there will be fewer small ACK packets, and those will 
be widely spaced; that in turn allows more room for interleaving.  If nothing 
else, the shorter transmission time for a small packet will allow gaps in 
which other packets may be sent.  But how are packet trains affected as they 
pass through many hops in today's Internet?  We don't know.  What about the 
effect of TCP congestion control strategies on packet spacing?  Again, there 
isn't nearly as much data as we'd like.  

One last point for tonight.  Your third note suggests that it is the total 
work that matters, which in turn depends on the average packet size, rather 
than the distribution.  That, I think, is only true if one has sufficient (and 
sufficiently fast) buffers.  That is, suppose that a packet arrives for a 
different SA, necessitating a key change.  That packet has to be buffered 
while the new key schedule is computed or loaded.  If the packet is short, and 
is followed by several more short packets for other SAs, each will be delayed 
by this process.  Yes, you'll make it up on the next large packet that 
arrives; however, the cumulative delay can be significant.  Apart from the 
buffering, latency is quite important for throughput, and can have bad effects 
on real-time traffic, such as voice-over-IP.  (I should note that if VoIP 
develops as some expect, it will completely change the nature of Internet 
traffic.)
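
A toy model of that buffering effect, with made-up timing constants, may make 
the point more concrete:

    # Back-to-back short packets for different SAs each pay the key-setup
    # latency; the backlog grows until a long packet or a gap lets the
    # engine catch up.  All constants are invented for illustration.
    def backlog_trace(packets, setup_us=0.5, crypto_us_per_byte=0.004,
                      wire_us_per_byte=0.008):
        backlog, last_sa, trace = 0.0, None, []
        for sa, size in packets:
            work = (setup_us if sa != last_sa else 0.0) + size * crypto_us_per_byte
            backlog = max(0.0, backlog + work - size * wire_us_per_byte)
            trace.append(round(backlog, 2))
            last_sa = sa
        return trace

    shorts = [('SA%d' % (i % 8), 64) for i in range(10)]   # interleaved 64-byte packets
    mixed = shorts[:5] + [('SA0', 1500)] + shorts[5:]
    print(backlog_trace(shorts))    # the delay keeps growing
    print(backlog_trace(mixed))     # the long packet lets the engine catch up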

Bottom line -- the actual dynamics of network traffic seem to matter quite a 
lot, and while we don't know all that we should, I see enough trends that I do 
believe that key agility is an important issue.

--Steve Bellovin






Re: IP: Gates, Gerstner helped NSA snoop - US Congressman

2000-04-13 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Matt Blaze writes:

 But I still don't believe there are secret back-doors in commercial OSes
 because such things are too hard to keep secret. And I think the Lotus
 incident is more evidence that NSA isn't going to try to keep something
 like that secret since they can't depend on it staying secret.

I agree, assuming we're talking about *deliberate* back doors.  But,
as we all know all too well, the major commercial OSs have repeatedly
proven to ship with bugs (and default configurations) that make them
vulnerable to all kinds of mischief, secret back doors or not.

Precisely.  Remember that NSA et al. -- as well as the industry of the country 
they're trying to protect -- use those same systems.  I don't think they'd 
take the risk of such a back door leaking; it would endanger too many other 
systems.

But this is a problem more believably attributed to the usual software bloat,
bad quality assurance practices, incompetent programming, and overly
aggressive schedules, than to the secret influence of spies.

Precisely.


--Steve Bellovin






injunction issued against cphack

2000-03-17 Thread Steven M. Bellovin

The AP reports that a U.S. judge has issued an injunction against the
Canadian and Swedish authors of cphack, the program that unlocks and
displays the blocked site list from CyberPatrol.  The order extends to
distribution by others as well, including -- according to the plaintiff's
attorney -- all mirror sites.

Even without questions of the reach of U.S. law, this is a preposterous
ruling.  If you add them in, it's insane.



Re: time dependant

2000-03-10 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], John Kelsey writes:

 
 Nor do I.  But there's a related engineering question:  Does
 it make sense to build large systems in which there's no way
 for humans to overrule the actions of programs once they're
 set in motion?  *That* is the question I'm raising, not
 whether mathematicians and scientists should have tried to
 somehow suppress the research that has made this possible.
 It's clearly possible; that doesn't mean it's a good idea to
 design systems like this.

Yup.  To many of you, the phrase "mine shaft gap" will provide a clear example 
of what I'm talking about.

Of course, in a crypto context this is a very hot button -- one of the 
arguments used for key escrow was that we should make sure that messages are 
decryptable in case of dire need.
 
 To use a more common example, I believe there were some cars
 (maybe experimental, I don't know) which would simply refuse
 to start the ignition until all passengers had their
 seatbelts on.  There's no doubt that it's possible to design
 such a car.  But you couldn't sell them without making it
 illegal to buy any other car, and users would flock to
 mechanics to have the feature removed in droves, regardless
 of the law.

Circa 1976, U.S. Federal regulations required that cars implement a state machine 
-- get in, close the door, buckle your seat belt, start the car, leave it 
buckled or loud obnoxious noises sounded.  These were built and sold -- and 
were so unpopular that Congress passed a law rescinding that (administratively 
promulgated) regulation.  


--Steve Bellovin





Re: time dependant

2000-03-08 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "Matt Crawford" writes:
 
 If you're going to trust that CryptoSat, inc. hasn't stashed a local
 copy of the private key, why not eliminate all that radio gear and trust
 CryptoTime, inc. not to publish the private key associated with date D
 before date D?

The minor answer is that I could postulate that CryptoSat sells slots for 
various parties (including senders of time-delayed messages) to install their 
own tamper-resistant boxes.

But the major answer is time scale -- I only have to trust CryptoSat for a 
short period, while I have to trust CryptoTime for the entire delay period.

The real answer, though, is that you're probably right -- there's too much 
temptation in this field to use technical mechanisms, when contract law will 
suffice.

--Steve Bellovin





Re: please help FreeNet by becoming a node

2000-03-03 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Steve Schear writes:
 At 09:56 AM 3/2/00 -0500, Steven M. Bellovin wrote:
 It is worth noting that some bans on running servers are based on technology,
 not the business model of the provider.  In IP over cable systems, there is
 much less bandwidth available upstream than downstream, and it's much more
 expensive to add more upstream bandwidth than it is to add downstream
 bandwidth.  If you run a server, you're chewing up a lot of capacity, and
 affecting your neighbors.
 
 But you're right, it's a real concern for users of Freenet (btw, isn't that a
 trademarked term?) -- I have the same problem as you do.
 
 Seems the firewall restriction is more of a concern.  Anyone who cares 
 about their PC's integrity and communication privacy should have a firewall 
 for always-on connections.  In the next year or so look for many/most cable 
 modems and DSL boxes to provide a firewall function or have it as an option.

There are a lot of responses to that; the real issue is who controls the 
security policy.

--Steve Bellovin





Re: please help FreeNet by becoming a node

2000-03-02 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Bill Stewart writes:
 It would be very nice if there were a Freenet _client_ 
 instead of, or in addition to, the Freenet _server_.
 What's the functional difference?  None, actually :-)
 The problem is that many US cable modem networks,
 and some US xDSL networks, have strong policies
 against running network servers, but not network clients.
 So if Freenet is a server, I can't run it on my cable modem,
 but if it's just a client with maybe some peer-to-peer capabilities,
 that's just fine :-)  Unfortunately, it would have more credibility
 if there were some server to talk to in addition to other clients,
 perhaps analogous to ICQ registry server, but that's probably not
 very compatible with the rest of the system.
 
 The transparency and stupidity of the request has nothing to do
 with its importance.  :-)

It is worth noting that some bans on running servers are based on technology, 
not the business model of the provider.  In IP over cable systems, there is 
much less bandwidth available upstream than downstream, and it's much more 
expensive to add more upstream bandwidth than it is to add downstream 
bandwidth.  If you run a server, you're chewing up a lot of capacity, and 
affecting your neighbors.

But you're right, it's a real concern for users of Freenet (btw, isn't that a 
trademarked term?) -- I have the same problem as you do.

--Steve Bellovin





Echelon press coverage

2000-02-24 Thread Steven M. Bellovin

The mainstrem American press has finally noticed Echelon.  See

http://www.nytimes.com/library/tech/00/02/biztech/articles/24spy.html
http://www.nytimes.com/library/tech/00/02/biztech/articles/24secure.html
http://www.washingtonpost.com/wp-dyn/articles/A24275-2000Feb23.html


--Steve Bellovin





Re: Copy protection proposed for digital displays

2000-02-23 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Ian Farquhar writes:
  5. Sony spends millions on recalls, PR damage control, etc.
 
 Look at it this way:
 
 "Sony, you'd better do a pretty good job of securing your keys, as if
 your systems are compromised you'll wear the financial consequences."

It's worth mentioning that many current business models seem to favor 
subscription-based services, rather than simple static content or hardware.  
Consider Tivo's VCR replacement, which requires a phone connection to update 
its viewing guide, etc. -- a feature you pay ~$10/month for.  Or look at the 
late, (unlamented?) DIVX variant on DVD.

The cost of hardware is going asymptotically to zero, and ordinary content is 
relatively easy to copy.  Everyone knows that -- and smart companies are trying
to make their money some other way.

--Steve Bellovin





Re: The problem with Steganography

2000-01-27 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Marc Horowitz writes:

 
  In short, is steganography the ultimate surveillance tool?
 
 Like most surveillance technologies, this is a game of constant
 incremental improvements.  You watch me through a window, I put up
 curtains.  You listen through a hidden microphone, I increase the
 background noise.  Etc.
 
 As was discussed here a few weeks ago, it's very difficult to do
 undefeatable watermarking, and I'd say it's impossible to do
 undetectable watermarking in a digital medium (just compare the
 documents).  My point is that stego could be used as a surveillance
 tool, but it would be difficult, and defeating it would be feasible.
 Therefore, I don't believe it is the "ultimate" surveillance tool.

So -- has anyone on this list found the watermarking present in color copier 
output?

--Steve Bellovin





solve a web puzzle, work for gchq?

2000-01-14 Thread Steven M. Bellovin

The AP reports that GCHQ -- the British cryptologic agency -- has posted a 
puzzle on its Web site.  If you can solve the puzzle (it's at 
http://www.gchq.gov.uk/challenge.html), they want to talk to you...

Of course, the AP quoted a former MI5 agent as saying "The kind of people
with lively minds this appeals to will soon discover that  this kind of
thing is all done by computer anyway."



Re: New Yorker article on NSA surveillance, crypto regs

1999-12-03 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Declan McCullagh writes:

 While much of it resonates as true, the timing -- just before crucial
 oversight hearings and concerns about illegal NSA spying -- might be a
 little coincidental:
   http://www.wired.com/news/politics/0,1283,32770,00.html
 
 Last week's CNN article and televised report raised near-identical concerns
 about newfound NSA eavesdropping ineffectiveness:
   http://www.cnn.com/US/9911/25/nsa.woes/

These two articles state that "The worldwide move to digital, rather than
analog, phones and other equipment is making eavesdropping more difficult.
So are fax machines".  Can someone tell me why "digital" is harder for NSA?
Fax should be easier than voice, since there is in-band caller information.
(In the U.S., that information is legally required to be accurate.  I wonder
if they've ever seen pages from "Cali Cartel, Inc.")



Re: Thawte SuperCerts

1999-12-01 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "Marcus Leech" writes:
 The Thawte folks are busily promoting their "SuperCerts" which enable
 128-bit symmetric modes in "International" versions of the various browsers.
 
 I guess I've been out of touch--is there an extension in web certs that
 enables better than 40-bit symmetric SSL modes?  My assumption has always
 been that a 40-bit (or 56-bit) browser was "nailed" to that particular key
 size, or lower.
 
There's an exemption that permits 128-bit keys when talking to financial 
institutions.  In SSL, this is enabled by some field in the merchant's
certificate.  Perhaps a "SuperCert" has that bit set?
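
For the curious, here is a minimal sketch of how one might look for such a 
step-up bit in a certificate, using the third-party Python "cryptography" 
package.  The two object identifiers below (commonly cited as Netscape 
Step-Up and Microsoft Server Gated Crypto) are assumptions for illustration, 
not anything taken from the Thawte material:

    # Sketch only: look for a "step-up" extended key usage in a PEM cert.
    # The OIDs are assumed Netscape Step-Up / Microsoft SGC identifiers.
    from cryptography import x509

    NETSCAPE_STEP_UP = x509.ObjectIdentifier("2.16.840.1.113730.4.1")
    MICROSOFT_SGC = x509.ObjectIdentifier("1.3.6.1.4.1.311.10.3.3")

    def has_step_up_bit(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            eku = cert.extensions.get_extension_for_class(
                x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return False
        # The extended key usage extension is a list of OIDs; any match
        # counts as the "step-up" bit being set.
        return any(oid in (NETSCAPE_STEP_UP, MICROSOFT_SGC) for oid in eku)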

--Steve Bellovin





Re: Thawte SuperCerts

1999-12-01 Thread Steven M. Bellovin

In message 00ee01bf3c40$08c1df00$[EMAIL PROTECTED], "Matthew Hamrick" writes:
 This moves the problem of what gets
 exported from the application developer to the CA issuing the super
 cert. While I'm not sure, I'm guessing that VeriSign can't issue a
 super cert to Uncle Saddam, but Thawte being in South Africa may have
 more leeway in this regard.

There are some fascinating implications here.  First, as you note, export 
controls are now tied to certificate issuance.  Certificates are for 
authentication, and would not normally be controlled -- but this type is.  
Second, a browser that accepts a magical (i.e., strong crypto) certificate 
from J. Random CA can be seen as "crypto with a hole", and hence not 
exportable -- unless, of course, there's a built-in list of trusted CAs.  (We 
can take that idea even further by imagining certificates that actually 
contain the crypto code, a la PolicyMaker.  Active code in your certificates, 
which have to be processed inside (or at least close to) your trusted base...) 
Finally, if the government has a new excuse for poking its nose into CA 
operations, what other requirements will they impose?  I think we know the 
answer to that...  (If you're in any doubt, have a look at 
http://www.uspto.gov/web/offices/ac/ahrpa/opa/pulse/9712.htm -- electronic 
filing of patent applications has been held up until a completely gratuitous
key recovery mechanism is built in.)

--Steve Bellovin





Re: Thawte SuperCerts

1999-12-01 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], EKR writes:

 I'm assuming it's compiled into the code, since if it were in the
 cert database, it could be tampered with.

Sure -- just like Fortify can't exist...

--Steve Bellovin





they should have used crypto...

1999-11-23 Thread Steven M. Bellovin

[ Steve asked me to add:
  Perry, could you amend my posting to include the following URL, too:
  http://www.nytimes.com/aponline/f/AP-Internet-Bookseller-Settlement.html ]

Naturally, those of us on this list advocate routine use of cryptography.  But 
cases where cryptography or the lack thereof is demonstrably 
commercially significant are rare.  A new one has just come to light.

According to the 23 November Wall Street Journal (if you subscribe to their 
Web site, see http://interactive.wsj.com/articles/SB943305134720688183.htm), a 
rare book dealer has pleaded guilty to intercepting more than 4,000 email 
messages, in an effort to gather market intelligence.  (The company claims 
that the interceptions were innocent; the Justice Department disagrees.)  
Alibris is both a rare book dealer and an ISP serving other book dealers; they 
kept copies of messages from Amazon.com to other dealers.

There's no mention in the article about how routine encryption would have 
protected the victims...

--Steve Bellovin





Re: a smartcard of a different color

1999-11-17 Thread Steven M. Bellovin

In message v04220814b457e31782c9@[204.167.101.35], Robert Hettinga writes:
 
 --- begin forwarded text
 
 
 To: [EMAIL PROTECTED]
 Subject: a smartcard of a different color
 Date: Tue, 16 Nov 1999 22:15:07 -0500
 From: Dan Geer [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]
 
 
 
 Yesterday I saw a smartcard of a different color.  In particular,
 it is the smartcard chip but in a key-ring thing that is more or
 less identical to the Mobil SpeedPass except that it has a USB
 connector on one end and a keyring hole on the other.  Total length
 circa 1.25"; color purple; maker Rainbow Technologies.  As my pal
 Peter Honeyman said in showing it to me, "There are already all
 the USB ports we'll ever need."  I'd point out that without the
 7816 requirement for flex a whole lot more memory is a trivial
 add-on and that USB is not a bandwidth bottleneck.
 
 --dan
 
 ref:  http://www.rainbow.com/ikey/graphics/iKey_DS.pdf

Folks I've talked to about products like that say that USB ports aren't 
designed for that many insertion/removal cycles.  (We'll ignore, for now, all 
of the PCs that have their USB ports in the back, where you can't get at
them easily.  One could always add on a hub.)

--Steve Bellovin





Re: Digital Contracts: Lie in X.509, Go to Jail

1999-10-19 Thread Steven M. Bellovin

In message v0421012db4321dc2f55c@[204.167.101.62], Robert Hettinga writes:

 
 
 The solution to this madness, is, of course, bearer credentials, as
 Stephan Brands points out in his recently published doctoral dissertation
 "Rethinking Public Key Infrastructures and Digital Certificates --
 Building in Privacy", now published by Ponsen and Looijen in the
 Netherlands, ISBN 90-901-3059-4.

Do you know where to order this?  None of the amazon.com sites has it, nor
does barnesandnoble.com.

--Steve Bellovin





Re: IP: IETF considers building wiretapping into the Internet

1999-10-14 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Steve Reid writes:
 On Wed, Oct 13, 1999 at 03:08:49PM -0400, Steven M. Bellovin wrote:
  But it's also clear that folks who manufacture this gear for sale in
  the U.S. market are going to have to support CALEA, which in turn
  means that someone is going to have to standardize the interface --
  the FBI regulations at the least strongly urge that
  industry-standard protocols be used for such things.
 
 I'm no lawyer, so I'm probably going out on a limb here, but I don't
 think CALEA can apply to encryption.
 
 If you use a 3DES-encrypted phone over a CALEA-compliant carrier it
 doesn't invalidate the carrier's CALEA compliance. The LEAs still have
 access to the communications, just not to the plaintext. So in practice
 CALEA does not guarantee access to plaintext.

Yes and no.  Yes, you're quite correct that CALEA doesn't bar 3DES.  *However* 
-- where the key comes from matters a lot.  If the carrier participates in the 
key exchange -- say, by acting as the KDC -- then it has to make available 
either that key or the plaintext of the call.

If, on the other hand, the end systems do the key management themselves, say 
via PGPphone, Starium, or STU-III -- then the telephone company is off the 
hook.

In other words -- CALEA obligates carriers to provide their piece of the 
conversation; end-user stuff isn't covered.

And no, I'm not a lawyer, either, but I have to worry about some of this stuff 
for my day job.

--Steve Bellovin





Re: IP: IETF considers building wiretapping into the Internet

1999-10-13 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Declan McCullagh writes:

 
 This followup might be relevant too. Has the FBI ever publicly weighed in
 on an IETF debate before? Are there any implications here in other areas,
 such as taxes, content, or encryption?


There are clearly many aspects to this question.  The particular IETF 
discussion was triggered by a move in a working group that was concerned with 
connectivity to the PSTN; they wanted to add CALEA support to their protocol.  
Should that be done in the IETF?

It's clear that such capabilities lower the security of the system.  (A 
fascinating Wall Street Journal story (Oct 1, front page) describes how a 
"data tap" was used to monitor some hackers.  Among other things, assorted 
hackers found databases of phone numbers being monitored by the FBI.  What 
will these folks do when they can get to CALEA ports?)  But it's also clear 
that folks who manufacture this gear for sale in the U.S. market are going to 
have to support CALEA, which in turn means that someone is going to have to 
standardize the interface -- the FBI regulations at the least strongly urge 
that industry-standard protocols be used for such things.  (And yes, it's 
quite clear that many uses of this particular working group's protocol would 
be within the scope of the law.)

So -- how should the back door be installed?  In the protocol? In the telco 
endpoint?  Is it ethical for security people to work on something that lowers 
the security of the system?  Given that it's going to be done anyway, is it 
ethical to refrain, lest it be done incompetently?

--Steve Bellovin





Re: IP: IETF considers building wiretapping into the Internet

1999-10-13 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "P. J. Ponder" writes:

 
 Is it a given that IETF standard protocols will contain backdoors?  I
 support the idea of bringing the issue before the IETF.  Surely the vast
 majority will oppose weakening the protocols.  
 

No, it is by no means a settled question.  The IESG posted a note soliciting 
comments on a new mailing list; it will also be discussed during the regular
plenary session.

Here's the exact text of the announcement:


The use of the Internet for services that replace or supplement
traditional telephony is, predictably, causing discussions in many
countries about the point at which special rules about telephony
services begin to apply to Internet service providers.  In many
countries, these rules could impose new legal obligations on ISPs,
particularly requirements to comply with requests from law enforcement
agencies or regulators to intercept, or gather and report other
information about, communications. For example many traditional
telephony devices, especially central-office switches, sold in those
countries are required to have built-in wiretapping capabilities to
allow telephone carriers to fulfill these obligations.

A number of IETF working groups are currently working on protocols to
support telephony over IP networks.  The wiretap question has come up
in one of these working groups, but the IESG has concluded that the
general questions should be discussed, and conclusions reached, by the
entire IETF, not just one WG.  The key questions are:

  "should the IETF develop new protocols or modify existing protocols 
  to support mechanisms whose primary purpose is to support wiretapping 
  or other law enforcement activities" 

  and 

  "what should the IETF's position be on informational documents that 
  explain how to perform message or data-stream interception without 
  protocol modifications".   

We would like to encourage discussion of these questions on the new
[EMAIL PROTECTED] mailing list. Subscription requests should be mailed to
[EMAIL PROTECTED] OR subscribe via the web at
http://www.ietf.org/mailman/listinfo/raven

Time will be allocated at the Plenary session at the November IETF to
discuss this orally and try to draw a consensus together. (PLEASE
DISCUSS THIS ON THE NEW MAILING LIST AND NOT ON THE GENERAL IETF LIST)

In addition to the general questions identified above, we believe it would
be helpful for mailing list comments to address the following more specific
questions:

  Adding wiretap capability is by definition adding a security hole. 
  Considering the IETF's commitment to secure protocols, is it a reasonable
  thing to open such a hole to meet these requirements?

  Should the IETF as an international standards organization shape its 
  protocols to support country-specific legal requirements?

  If the companies who employ the IETF participants and deploy the 
  IETF's technology feel that having wiretap capability is a business 
  necessity due to the regulatory requirements in the countries where 
  they want to sell their products, would that make a difference to the 
  IETF position on this subject?

  What is the appropriateness or feasibility of standardizing mechanisms 
  to conform to requirements that may change several times over the life
  cycle of equipment built to conform to those standards?   

  When IPv6 was under development, the IETF decided to mandate an 
  encryption capability for all devices that claim to adhere to those
  standards.  This was done in spite of the fact that, at the time the 
  decision was made, devices meeting the IPv6 standard could not then 
  be exported from the U.S. nor could they be used in some countries.
  Is that a precedent for what to do in this case?

  Could the IETF just avoid specifying the part of the technology that 
  supports wiretapping, presumably assuming that some industry consortium 
  or other standards organization would do so?  Would letting that 
  responsibility fall to others weaken the IETF's control over its own 
  standards and traditional areas? 

  If these functions must be done, is it better for the IETF to do them 
  so that we can ensure they are done in the most secure way and, where 
  permitted by the regulations, to ensure a reliable audit capability?

  What would the image of the IETF be if we were to refuse to standardize 
  any technology that supported wiretapping? In the Internet community? 
  In the business community? To the national regulatory authorities?

The goal of the mailing list and then plenary session is to address the
broad policy and direction issue and not specific technical issues such
as where exactly in an architecture it would be best to implement
wiretapping if one needed to do so.  Nor are they to address what
specific functions might be needed to implement wiretapping under which
countries' laws.  The intent is basically to discuss the question of
what stance the IETF should take on the general issue.



  

Re: Is SSL dead?

1999-10-08 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Bill Stewart writes:
 At 04:35 PM 10/6/99 , Phillip Hallam-Baker wrote:
 
 That means that you can only succeed against web-users whose browsers
 still accept SSL2.0, which is most Netscape users by default;
 I don't know if IE also defaults to that, but it probably does.
 Even if the https://www.target.com uses SSL3.0, the user isn't talking to it --
 they're talking to https://www.attacker.com, which can use 2.0 if it wants.

Right -- and as long as sites like amazon.com -- to pick a real-world, 
just-verified example -- accept only SSL 2.0, asking folks to turn it off just 
isn't realistic.

--Steve Bellovin





new DoD high-tech crime-fighting center opens

1999-09-25 Thread Steven M. Bellovin

According to the AP, the Defense Department has opened a new center to help 
deal with electronic evidence in cases of serious crimes involving the military
(http://www.nytimes.com/aponline/a/AP-Defense-Computer-Crime.html).  Among 
their purported capabilities are being able to track hackers across the 
Internet, undelete files, even reassemble shredded floppies.  The FBI has its 
own minilab in the building; the center expects to share equipment and 
techniques with law enforcement agencies.

What makes this article especially interesting to this group is the deliberate 
proximity of the new lab to NSA.  They will indeed attempt to deal with 
encrypted messages, though the director stated that such cases were rare.  He 
also said that he was "confident" that their techniques would be adequate to 
deal with messages encrypted under the new export regimes after Congress gives 
the FBI $80 million more over four years.

--Steve Bellovin





Re: IP: Smart Cards with Chips encouraged

1999-09-21 Thread Steven M. Bellovin

In message v04210104b40d7088a106@[24.218.56.100], Arnold Reinhold writes:

 And what is the value proposition for the consumer? SSL works swell.

Bingo.  Consumers will adopt this if and only if cost savings are passed on to 
them, which in turn can only happen if the credit card companies (a) see a 
reduction in fraud or other decrease in their costs, and (b) pass those 
reductions on to the merchant.

--Steve Bellovin





Re: Why did White House change its mind on crypto?

1999-09-19 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Howie Goodell writes:

 It's (2) that's the real problem.  They have this message they
 claim came from you, but the link to you is secret (malicious
 keyboards; Windows 2000 backdoors, etc.)  This has nothing to do
 with encryption -- since the evidence is plaintext -- it's a
 bugging case.  However unlike wiretaps, a seized plaintext is
 not self-authenticating, unless you signed it with a private key
 the jury believes the Government didn't steal (hard to believe;
 how do we know they didn't watch you type your password and then
 fake the signature?)  So if I were on a jury, why should I
 believe them?

I'm not a lawyer, but...

It's always possible to challenge the authenticity of evidence.  The 
government may not have to explain how they got it (though as I noted, I think 
there's a good chance for a constitutional challenge here), but that won't 
stop a clever defense attorney from casting doubt on it -- say, by pointing 
out that Mark Fuhrman helped with the cryptanalysis...

--Steve Bellovin





Re: Why did White House change its mind on crypto?

1999-09-18 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Adam Shostack writes:

 | I suspect his security experts realized that export controls were 
 | ineffective in keeping crypto out of the hands of bad guys and that 
 | the DOD was suffering because the commercial products on which it 
 | depends lack strong security.
 
 To pick a nit, strong crypto will not solve a large number of the
 security problems we possess today.  It will make a class of attacks
 harder, but not the easiest class, which is to exploit flaws in
 software and configuration to bypass controls.

You're both right.

First, it's quite correct that crypto won't solve most problems.  Last year, I 
analyzed all of the CERT advisories that had ever been issued.  85% described 
problems that cryptography can't solve.  To give just one example, 9 out of 13 
advisories last year concerned buffer overflows -- and 2 of the remaining 4 
described problems in crypto modules.

That said, the problems that are solvable with cryptography -- sniffers, 
sequence number guessing, etc. -- are very important ones.  DoD machines --
and,  perhaps more importantly, vital private-sector computers -- use
off-the-shelf hardware and software.  (Remember the battle cruiser run by NT?) 
To the extent that these machines are vulnerable because of the lack of 
crypto, national security suffers.  There are lots of folks in the Pentagon 
who understand this.

One last point -- there is no one "government" view.  The government is 
composed of many individuals and many agencies; they each have their own 
agendas.  Sure, the SIGINT folks and the FBI want weak crypto, because it 
makes their jobs easier.  Other folks are more concerned with, say, keeping J. 
Random Terrorist from getting to the power grid (see Operation Eligible 
Receiver for details).  For that matter, there are people in the government 
who want American companies and non-DoD government agencies to be able to keep 
data secret from the prying eyes of pick-your-least-favorite-foreign-
government.

--Steve Bellovin





Re: more re Encryption Technology Limits Eased

1999-09-16 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Declan McCullagh writes:
 What I found most interesting was what Attorney General Reno said about the
 government's cryptanalysis abilities. When asked if she can break strong,
 64 bit equivalent crypto, she said, "We have carefully looked at this and
 think it's possible," and declined to add details.
 
 DoD's Hamre said that there would be a big chunk assigned to cryptanalysis
 R&D in DoD's requested FY2001 budget but added "some of the parts you may
 be interested [in] I can't discuss." (I wouldn't necessarily read much into
 this. It could simply be a face-saving move.)

This isn't at all improbable -- just do the math.

Deep Crack cost $250,000; it works against a 56-bit cipher.  Multiply that
by 256 and you get $64,000,000 -- hardly a preposterous increase in NSA's
budget.  Sure, they want faster results; they'll also have economies of
scale, processors faster than 40 MHz, etc.
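
(Spelled out as a back-of-the-envelope calculation, nothing more:)

    # Scale Deep Crack's $250,000, 56-bit exhaustive-search machine up to a
    # 64-bit keyspace: eight more key bits means 2**8 = 256 times the work.
    deep_crack_cost = 250_000
    scaled_cost = deep_crack_cost * 2 ** (64 - 56)
    print(scaled_cost)   # 64000000, i.e. $64,000,000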



Re: NSA key in MSFT Crypto API

1999-09-07 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Peter Gutmann writes:

 Revealing the fact that CryptEncrypt() maps to a function in the 
 crypto hardware called ENCRYPT probably isn't a major threat to national 
 security.  Existing PKCS #11 drivers also reveal details of classified crypto
 algorithms like Juniper and Baton without this being a major problem.  I don't
 think this is a valid claim.

As much as the NSA would like to use only hardware for encryption, they -- or 
rather, their customers -- can't afford it.  SKIPJACK wasn't published for the 
sheer joy of it; it was declassified because NSA wanted it deployed on 
insecure platforms, in software.

Sure, they want to use hardware.  But the armed forces -- the folks who use 
all this fancy crypto -- are under tight budget constraints, and don't want to 
add hundreds of dollars worth of extra gear to $600 PCs.
 

 ActiveX means every Windows machine is vulnerable (see Richard Smith's talk
 at Usenix, reported in Wired, for details).  What you do is use one of the
 zillions of ActiveX holes to install the trapdoored crypto, and the NSA 
 signing key to make sure it loads.  If you're not the NSA (or whoever it is
 who really have the corresponding private key), you use ActiveX to install 
 the trapdoored crypto *and* replace the NSA key with your own, and that'll 
 also make sure it loads.  This is at least as big a threat for US users as 
 for non-US ones (imagine a site like www.hotbabes.iq or www.freewidgets.sy
 which quietly sidegrades the crypto of everyone connecting from a .mil 
 address to get an idea of the implications).

The ability to abuse ActiveX creates far bigger holes than this.  Tell me -- 
if you'd just heard of Smith's talk and no one had found the NSA key, wouldn't 
the threat from www.hotbabes.iq still be serious?  I think so.  But the NSA 
key -- if it had been  protected by Microsoft, as its own public key seems to 
be -- provides some insurance against the crypto module being replaced this 
way.  That's also why the MS-signed module wouldn't be just a shim -- NSA 
really does want some assurance.  (Of course, their customers want to use 
Windows, which makes a mockery of that assurance...)

I heard the talk at CRYPTO; my immediate reaction was the same as Schneier's 
and Kuhn's -- this is simply NSA's way of adding their own crypto, without 
having to ask Microsoft's permission.  Having the key added was probably part 
of the quid pro quo for granting export permission to the entire scheme.

--Steve Bellovin





Echelon in the news

1999-09-06 Thread Steven M. Bellovin

Readers of this list may be interested in 
http://www.nandotimes.com/technology/story/body/0,1634,89923-142316-981920-0,00.html,
which discusses Echelon and its impact in Europe.  It's also the first mention 
I've seen of Echelon in mainstream American-based media.

--Steve Bellovin





Re: going around the crypto

1999-08-14 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "MIKE SHAW" writes:
 It's my understanding that in order to exploit this, you'd have to essentially
 set yourself up as a proxy after sending the RDP advert.  If this is the case,
 wouldn't the fact that the man in the middle did not have the cert that
 corresponded to the domain name cause at least one warning for most
 browsers?  ('certificate name check' in netscape, 'wrong certificate name' in
 Opera).  Otherwise, you'd just be acting as a router and SSL would prevent
 sniffing.  Am I missing something?

Not as a proxy, since that's a different protocol from the host, but as the 
end-system.  Yes, you have to issue yourself a fake certificate, but I suspect 
that that's not an insurmountable problem.  And of course, that certificate is 
signed by someone you've invented with a plausible name -- probably something 
corresponding to the name of the site you're impersonating.  Say, "Amazon.com 
Electronic Security Services" or some such.





Re: going around the crypto

1999-08-14 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], EKR writes:
 "Steven M. Bellovin" [EMAIL PROTECTED] writes:
   Now, this does require that the CAs that your browser trusts follow
   the Common Name=domain name convention, but that's just a special
   case of trusting your CAs.
  
  The attacker could also present a certificate from a fake CA with an
  appropriate name -- say, "Netscape Security Services", or something that
  plays on the site name they're trying to impersonate -- "Amazon.Com
  Encryption Certification Center" if someone is trying to reach Amazon.com
  or some such.
 Right. In which case Netscape brings up a different dialog which
 says that the server certificate is signed by an unrecognized
 CA. Again, you can proceed, but it's not like it's automatic.

It's clearly not automatic, but I suspect it would work.




going around the crypto

1999-08-13 Thread Steven M. Bellovin

The L0pht has issued a new advisory for a routing-type attack that can,
they say, allow for man-in-the-middle attacks against SSL-protected sessions
(http://www.l0pht.com/advisories/rdp.txt).

The implication -- that there's a flaw in SSL -- is probably wrong.  But 
they're dead-on right that there's a real risk of man-in-the-middle attacks, 
because the attacker can go around the crypto.

By sending the proper ICMP packets to a vulnerable host (most Windows 95/98 
boxes, and some Solaris/SunOS systems), outbound traffic can be routed to an 
attacker's machine.  This machine can pretend to be the destination of 
the SSL-protected call; it in turn calls the real destination.

The obvious protection is for users to check the certificate.  Most users, of 
course, don't even know what a certificate is, let alone what the grounds are 
for accepting one.  It would also help if servers used client-side 
certificates for authentication, since the man-in-the-middle can't spoof 
the user's certificate.  But almost no servers do that.
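
To illustrate what checking the certificate buys you, here is a minimal 
sketch using the ssl module of present-day Python (obviously not what 1999 
browsers run); the host name is hypothetical.  A connection re-routed to a 
machine that cannot present a certificate for the name you asked for fails 
before any application data is sent:

    import socket
    import ssl

    def connect_checked(host: str, port: int = 443) -> None:
        # The default context verifies the server's certificate chain
        # against the trusted CA set *and* checks that the certificate
        # matches the host name we asked for; an impostor without a
        # matching certificate causes an exception right here.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(tls.version(), tls.getpeercert()["subject"])

    connect_checked("www.example.com")   # hypothetical host, for illustration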

This is why I wrote, a year ago, that we effectively have no PKI for the Web.
It also underscores the importance of looking at the entire system design, 
rather than just the crypto.  Crypto alone can't save the world; it's 
necessary, but far from sufficient.




preliminary comments on the finalists

1999-08-10 Thread Steven M. Bellovin

It's going to be hard to pick one of the five finalists.  But if
the criteria remain (substantially) the same, I think the field
may be narrowed significantly.  I'm making one very crucial assumption
here, of course -- that to the extent it is knowable, all five
finalists (Rijndael, MARS, RC6, Serpent, and Twofish) will be
equally secure.  In that case, performance and confidence become
major criteria.

NIST marked down MARS and RC6 for their bias towards 32-bit platforms
with particular architectural characteristics.  RC6 is denigrated
for a (relatively) low security margin; MARS is criticized for
complexity.  Serpent, though quite strong, is slow.  Twofish is
flexible, but perhaps too complex.  Nothing negative was said about
Rijndael in the summary -- it seems to be very secure, have a fast
key setup time, and excellent performance on all platforms.

When I look at those judgments (all taken from 2.7.3 of the NIST
report), I suspect that MARS, RC6, and Serpent are going to be
dropped for performance reasons.  Twofish and Rijndael are both
excellent performers across the board.  The latter is simpler; the
former seems to have a higher security margin (if I'm not reading
too much into the difference between a "large security margin" and
a "good security margin").  The answer may depend on the weighting
of those two criteria.



noise, random and otherwise

1999-07-31 Thread Steven M. Bellovin

Folks, this list has been getting rather noisy of late, mostly with 
discussions of political philosophy.  Can we move those discussions somewhere 
else?  

Most of us on this list want free crypto.  Loudly proclaiming that you do, 
too, isn't particularly new or useful.  And while we have different reasons 
for feeling that way, ranging from libertarian anarchism to ACLU-type 
liberalism to pragmatic human rights concerns, those distinctions are not the 
topic of this list.  (It doesn't help that most of the arguments I've seen,
on all variants of all of these, have been quite naive.  "There are more things in 
heaven and earth, Horatio, / Than are dreamt of in your philosophy.")

Let's save the politics for debates over suitable quantities of ethanol, and 
use this list for technical material and the *occasional* factual announcement 
about the world around us, such as Reno's recent statement.  And if you do 
feel the need for (another?) political crypto forum, perhaps another list is 
in order.




Re: Justice Dept asks Court of Appeals to reconsider ruling in Bernstein case

1999-06-22 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Declan McCullagh writes:
 I have a more detailed report on Wired News:
 
   http://www.wired.com/news/news/politics/story/20333.html
 
 My favorite part of the brief (I quote it):
 
 
  Another argument: That this type of 
  regulation is an executive-branch policy 
  decision involving "extraordinarily 
  sensitive" info that's too secret to 
  disclose publicly.

Gee -- did they happen to mention that the CRISIS report concluded that
the question could be discussed without reference to classified info?





Re: hushmail security

1999-06-17 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], [EMAIL PROTECTED] writes:

Also remember that hushmail is acting as the CA, with all that implies
for ultimate security.




Re: New Intel Celeron chip set has random number generator

1999-04-29 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Jim Thompson writes:

Here in my hands, I have an "Atom-Age" HW RNG device.


Sounds interesting -- do you have a URL or other contact info?

But -- and it's a big "but" -- what assurance mechanisms does their device provide?  
The Intel folks say that being sure that the random numbers
are really random was the hard part.  Other RNGs I'm familiar with do
statistical tests at power-up, and use post-whitening besides.  Does this
device do that?  The worst combination, of course, is something that does
the latter without the former.
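
For a flavor of what "statistical tests at power-up" can mean, here is a 
minimal sketch of a FIPS 140-1-style monobit test in Python; the pass bounds 
are quoted from memory and should be checked against the standard before 
relying on them:

    def monobit_ok(sample: bytes) -> bool:
        # Monobit test over a 20,000-bit (2,500-byte) sample: count the
        # one bits; the generator passes only if the count falls strictly
        # between 9,654 and 10,346.
        assert len(sample) == 2500, "test is defined over 20,000 bits"
        ones = sum(bin(byte).count("1") for byte in sample)
        return 9654 < ones < 10346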






Re: Hearing on Melissa and Privacy

1999-04-21 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Bjorn Remseth writes:
 
 
 On Sat, Apr 17, 1999 at 05:41:56PM -0400, Lynne L. Harrison wrote:
  
 
 Another issue that bugs me is the fact that viruses are essentially
 unnecessary. They are a consequence of basically flawed security
 mechanisms in a few operating systems (Windows and MacOS, mainly), and
 total disregard for security in a few important office programs using
 macros (word, excel).  Had these systems both used hardware memory
 protection to create a barrier stopping writes to operating system
 files (such as the boot sector), and restricted macros from doing
 anything to any file, as they are allowed to today, then we wouldn't
 have had the virus menace. Viruses would still have been possible, but
 mainly a curiosity, as they now are on properly designed systems such
 as VMS, MVS, Unix etc. (and probably on java based systems too).

Unfortunately, this is largely a myth.

Yes, OS controls can prevent many boot sector viruses.  Depending on the
permissions assigned to the floppy drive, it may be possible for a virus
to scribble on the boot sector of a floppy, and if you ever boot with
that inadvertently left in -- well, you know what happens.  (Actually,
the real protection is that real operating systems are rebooted much
less often...)

The problem is with macro viruses.  Assuming that the user can customize
the local text formatter's equivalent of normal.dot -- and I regard that
as not just an operational necessity, but highly desirable -- a macro
virus could be spread in the same exact fashion as today.  Sure, you can't
get at the system's copy of normal.dot -- but that doesn't matter; it's
the active copy that counts.  There may be OS-level protections possible,
but they're going to be a lot more subtle than file protection.

Now -- why does this have anything to do with cryptography?




Re: new bill getting through congress?

1999-03-11 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], "Perry E. Metzger" writes:
 
 Anyone know anything about this?
 
 Thursday March 11 11:15 AM ET 
 
 Bill To Relax U.S. Controls On Encryption Advances
 
 WASHINGTON (Reuters) - A bill to relax strict U.S. export controls on
 computer data-scrambling products passed a small hurdle Thursday,
 gaining approval from a House Judiciary subcommittee by voice vote.

Let me point folks at thomas.loc.gov.  30 seconds with it shows that the
bill is H.R.850.  Hearings were held a week ago by the Subcommittee on
Courts and Intellectual Property; this is presumably the panel that approved
it.  It's also been sent to the Committee on International Relations.




Re: Intel announcements at RSA '99

1999-01-20 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Ben Laurie writes:
Steve Bellovin wrote:
 
 Intel has announced a number of interesting things at the RSA conference.
 The most important, to me, is the inclusion of a hardware random number
 generator (based on thermal noise) in the Pentium III instruction set.
 They also announced hardware support for IPSEC.

An interesting question (for me, at least) is: how will I know that the
hardware RNG is really producing stuff based on thermal noise, and not,
say, on the serial number, some secret known to Intel, and a PRNG?

That's a very good question, especially since Pentium III's will also have
per-CPU serial numbers...

Seriously, you're already trusting your vendors.  Intel did say that the
hard part of the problem was verifying the output of the RNG; beyond that,
the driver runs SHA-1 on the output to further randomize the bits used.
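
The post-whitening idea, as a minimal sketch (the general technique, not 
Intel's actual driver):

    import hashlib

    def whiten(raw_block: bytes) -> bytes:
        # Hash a block of raw generator output and hand out the digest,
        # so residual bias in the raw bits is not delivered directly to
        # applications.
        return hashlib.sha1(raw_block).digest()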





German government to fund GPG

1999-01-17 Thread Steven M. Bellovin

According to http://www.nytimes.com/library/tech/99/11/cyber/articles/19encrypt.html,
the German government is going to help fund the GPG effort.  GPG is an 
open-source program that is compatible with (some versions of) PGP.

The U.S. government doesn't seem to be amused...

--Steve Bellovin





Re: Rivest Patent

1998-11-13 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], John Young writes:
Ron Rivest received on November 10 "US Patent 5835600: 
Block encryption algorithm with data-dependent rotations:"

   http://jya.com/rivest111098.htm  (22K)


Has anyone compared this with the earlier IBM patent that is cited in their
AES submission?