Re: 802.11 Wired Equivalent Privacy (WEP) attacks

2001-02-13 Thread Arnold G. Reinhold

At 5:55 AM +0900 2/10/2001, [EMAIL PROTECTED] wrote:
 WF1

In WF1 the 802.11 WEP keys would be changed many times each hour, say
every 10 minutes. A parameter, P, determines how many times per hour
the key is to be changed, where P must divide 3600 evenly. The WEP
keys are derived from a master key, M, by taking the low-order N
bits (N = 40, 104, whatever) of the SHA1 hash of the master key with
the date and time (UTC) of the key change appended.

  WEPkey = Bits[0-N](SHA1(M | mmddhhmmss))
(snip)
Clearly good synchronization of the time-of-day clock on each node is
essential in WF1,  but protocols already exist that can do this over
a network. Small synchronization discrepancies can be handled by the
802 retry mechanism and should look very much like a short RF outage.

   i see chicken and egg loop here - for instance, if I've got a laptop
   with 802.11 card only, I need to use the 802.11 network to synchronize
   clock.  i'm not sure if WF1 is workable (if you have other secure
   channel for synchronizing clock, you are okay - but then why bother
   using 802.11?).

 
That is one of the reasons I suggested a key change interval of every 
10 minutes. Most PCs' internal clocks will keep time to within a few 
seconds from day to day, so re-synchronization should not be a 
problem. If necessary, the PC's time can be manually set well enough 
using any number of time sources:
 
Most phone companies in the US have a number you can call.
"News" radio stations announce the time frequently.
Many cell phones have clocks.
GPS receivers give accurate time.
For about $60 you can buy a clock that synchronizes itself to WWVB.
802.11 has a short range, so there are likely other PCs nearby 
that you can get the time from.

I don't know how tolerant actual 802.11 systems are to a delayed key 
change (experiments are welcome), but if a user sets their PC time in 
the middle of a ten minute interval, there will be no delay at all.

Actually, there is a highly accurate time synchronization mechanism built 
into 802.11. The transceivers must be sync'd way before they get to 
worry about encryption.  But I don't know if that time value is 
accessible by the client computer.  If so, more frequent key changes 
should be workable.


Arnold Reinhold




Re: 802.11 Wired Equivalent Privacy (WEP) attacks

2001-02-09 Thread Arnold G. Reinhold

The draft paper by Borisov,  Goldberg, and Wagner 
http://www.isaac.cs.berkeley.edu/isaac/wep-draft.pdf presents a 
number of practical attacks on 802.11 Wired Equivalent Privacy (WEP). 
The right way to fix them, as the paper points out, is to rework the 
802.11 protocol to use better encryption and message authentication 
algorithms.  Unfortunately a huge infrastructure has grown up around 
802.11 and large numbers of transceiver/modems are installed. If I 
understand things correctly, the encryption and authentication are 
done in firmware, so any change to these algorithms requires new 
hardware.

Thus there is a need for a short term remedy that can work with the 
existing standard.  The BGW paper suggests changing keys more 
frequently. Here are a couple of suggestions for fairly simple ways to 
do this that I believe would significantly improve the security of 
802.11, without requiring changes to the protocol or obsoleting all 
existing equipment.  They consist of a couple of higher-security 
modes, which I'll call WF1 and WF2 (WF stands for WEP Fix).

WF1

In WF1 the 802.11 WEP keys would be changed many times each hour, say 
every 10 minutes. A parameter, P, determines how many times per hour 
the key is to be changed, where P must divide 3600 evenly. The WEP 
keys are derived from a master key, M, by taking the low-order N 
bits (N = 40, 104, whatever) of the SHA1 hash of the master key with 
the date and time (UTC) of the key change appended.

  WEPkey = Bits[0-N](SHA1(M | mmddhhmmss))

M can be any size, up to, say, 256 bytes. This allows direct entry of 
a passphrase.
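
As a concrete illustration, here is a minimal Python sketch of the WF1 
derivation (the ASCII encoding of the timestamp and byte-aligned N are 
my assumptions; the description above leaves those details unspecified):

import hashlib

def wf1_wep_key(master_key: bytes, change_time: str, n_bits: int = 40) -> bytes:
    # WEPkey = Bits[0-N](SHA1(M | mmddhhmmss)); change_time is the UTC
    # key-change time as "mmddhhmmss", e.g. "0213120000" for Feb 13 12:00:00.
    digest = hashlib.sha1(master_key + change_time.encode("ascii")).digest()
    return digest[:n_bits // 8]   # low-order N bits of the hash

print(wf1_wep_key(b"my long master passphrase", "0213120000").hex())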

WF1 would eliminate the dictionary attack described in the paper. 
Note that since the master key is not limited to 40 bits,  WF1 would 
also reduce the value of direct attacks on 40-bit keys.  In this 
regard, it is worth noting that IV collisions also facilitate a 
direct attack on the encryption.  If an attacker accumulates n 
packets with the same IV, he can attack all the packets at the same 
time, reducing the time required by a factor of n, if n isn't too 
big.  Since the time required to crack 40-bit RC4 on a single 
workstation is on the order of a week, even a factor of 3 reduction 
is significant.

WF1 does not completely eliminate the problem of IV collisions. With 
a 24-bit IV, some are inevitable.  Each IV collision has the 
potential of compromising the data in both packets. But WF1 does 
allow the rate at which they occur to be reduced and controlled. The 
rate of collisions varies linearly with the period between key 
changes. If there are R packets per second and the time between key 
changes is T (T = 3600/P), then the expected number of collisions in 
the time interval T is roughly (T*R)^2/2^25, so the rate of 
collisions is roughly T*R^2/2^25.
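
To make the estimate concrete, a quick back-of-the-envelope check in 
Python (the traffic rate R is an assumed figure for illustration):

R = 100                    # packets per second (assumed for illustration)
T = 600                    # seconds between key changes (10-minute interval)
n = T * R                  # packets encrypted under one key
print(n ** 2 / 2 ** 25)    # birthday estimate for a 24-bit IV: ~107 collisions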

WF1 also does not eliminate the authentication attacks described in 
part 4 of the paper. However most of the attacks described there 
require multiple attempts to succeed and the shortened key window 
might make them more difficult to mount.

Clearly good synchronization of the time-of-day clock on each node is 
essential in WF1,  but protocols already exist that can do this over 
a network. Small synchronization discrepancies can be handled by the 
802 retry mechanism and should look very much like a short RF outage. 


The BGW paper mentions that some 802.11 modems reset their IV counter 
when they are initialized. I don't know if a key change counts as an 
initialization. If so then my proposal runs the risk of creating 
additional collisions.  However it should be possible to test modems 
for this property and refuse to enter the key changing security mode 
if such a modem is installed. Manufacturers could eliminate this 
behavior with a firmware change and there would be no impact on other 
uses of the modem.  Similarly modems that do not change the IV for 
each packet could be barred.

Unless I have missed something, WF1 could be implemented as an option 
in 802.11 driver software. It might also be possible to implement WF1 
with currently available 802.11 software by using a scripting 
language. Note that a crude version of WF1 can be implemented today 
with no new software at all: just change the WEP key every night. A 
weekly WEP key list could be distributed to authorized users by paper 
mail or encrypted e-mail.

WF2

WF2 would change keys periodically just like WF1; however, the packet 
sender's address would also be incorporated in the hash.

  WEPkey = Bits[0-N](SHA1(M | Sender's address | mmddhhmmss))

WF2 requires that the hubs' encryption programming be changed. Assuming 
most hubs are programmed in firmware, this will generally require new 
hubs. However, existing client modems can still be used. WF2 will 
essentially eliminate IV collisions if the keys are changed at least 
every few hours.
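
In the style of the WF1 sketch earlier, a minimal Python rendering (the 
raw-bytes MAC-address format is again an assumption):

import hashlib

def wf2_wep_key(master_key: bytes, sender_mac: bytes,
                change_time: str, n_bits: int = 40) -> bytes:
    # WEPkey = Bits[0-N](SHA1(M | Sender's address | mmddhhmmss))
    digest = hashlib.sha1(master_key + sender_mac +
                          change_time.encode("ascii")).digest()
    return digest[:n_bits // 8]

# Each sender derives a distinct key because its MAC address is hashed in.
key = wf2_wep_key(b"my long master passphrase",
                  bytes.fromhex("00a0c914c829"), "0213120000")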

WF2 still does not eliminate the authentication attacks. However 
since we have to change the hub programming anyway, it might be 
possible to add 

Re: it's not the crypto

2001-02-06 Thread Arnold G. Reinhold

At 8:58 AM -0500 2/5/2001, Steve Bellovin wrote:
Every now and then, something pops up that reinforces the point that
crypto can't solve all of our security and privacy problems.  Today's
installment can be found at
http://www.privacyfoundation.org/advisories/advemailwiretap.html

For almost all of us, the end systems are the weak points, not the
transmission!



While I certainly agree with your general point, I don't think this 
case is a good exemplar.

"The exploit requires the person reading a wiretapped email
message to be using an HTML-enabled email reader that also
has JavaScript turned on by default."

The notion that e-mail should be permitted to contain arbitrary 
programs that are executed automatically by default on being opened 
is so over the top from a security standpoint that it is hard to 
find language strong enough to condemn it.  It goes far beyond the 
ordinary risks of end systems.

The closest analogy I can think of is the early days of the 20th 
century when some doctors began prescribing radium suppositories for 
a variety of ills.

Arnold Reinhold




Re: electronic ballots

2001-02-04 Thread Arnold G. Reinhold

At 1:01 PM -0500 2/4/2001, John Kelsey wrote:
-BEGIN PGP SIGNED MESSAGE-

At 11:02 PM 1/27/01 -0500, William Allen Simpson wrote:

...
"Arnold G. Reinhold" wrote:
 There are a lot of reasons why open source is desirable,
 but it does simplify the job for an attacker.

I disagree.  Security by obscurity is never desirable.

Right.  This is doubly important in this application, where
the big threat is insider fraud.  The people we're really
worried about doing some kind of large-scale fraud are
the ones being trusted to man voting stations, transport
ballots, count votes, and certify elections.  Outsiders
who've read through the source code looking for buffer
overflow bugs aren't likely to have the access needed to
mount an attack.


I feel like I am being quoted out of context here.  I was not 
suggesting closed source, but proposing a new type of compiler that 
produces obfuscated object code under a key. This could make an 
attacker's job more difficult, particularly in the narrow time window 
of an election.

In the attack model I am addressing, the people who man the voting 
stations would be supplied with malware tools based on just such an 
analysis of the source code. Under my scheme they could not rely on 
knowing the exact object code they will encounter. The compilation 
key or keys would be published after the election, allowing the 
object code used in the field to be compared with the source.


At 10:38 AM -0800 2/4/2001, David Honig wrote:
On Banning Video Cameras From Voting Places

The voting apparatus may keep a serial record of each vote, in order, for
auditing purposes.  This is also mentioned in WAS's legislative text.  Now,
if an evil vote buyer had someone recording who entered which booth
and also had access to the audit records, the correlation lets them
buy or blackmail votes.  Note that this requires only *one* conspirator if
that conspirator is a poll worker with a concealed camera.


One doesn't need a concealed camera. There is nothing to stop a poll 
watcher from keeping written notes of the time when each voter votes. 
In fact, here in Massachusetts the election officials are required to 
call out the name of each voter when they get their ballots and when 
they turn them in.

Arnold Reinhold




Re: issuing smartcards is likely to be cheap [Was: electronicballot s]

2001-02-01 Thread Arnold G. Reinhold

At 1:36 PM -0800 1/31/2001, Heyman, Michael wrote:
  -Original Message-
 From: William Allen Simpson [mailto:[EMAIL PROTECTED]]
 Subject: Re: electronic ballots
 [SNIP much]
 
  It seems that something like a smartcard would be the best scheme.

 Not likely.  Voting is very different from banking transactions.  And
 issuing smartcards with special software for voting is likely to be
 prohibitively expensive.


Hmmm, I have a "voter registration card" and I believe that is the case
across the USA. Current smartcards are not very protective of their private
data and I think the security requirements for vote-only cards would be even
less stringent. Finally, those folks at the MIT Media lab are printing
digital circuits onto plastic using "semiconductor ink".

Which state is that? Are you required to produce the card at the 
polls? Voter registration cards usually refer to the form you fill 
out when registering to vote. See for example 
http://www.lwvmn.org/voting.html#REG There are no voter ID cards in 
Massachusetts and weren't in New York when I lived there.  I was 
under the impression that requiring registered voters to produce 
identification at the polls was impermissible in the US.

Arnold Reinhold




Re: Leo Marks

2001-01-31 Thread Arnold G. Reinhold

At 9:58 PM -0500 1/30/2001, Steven M. Bellovin wrote:
The obituary has, at long last, prompted me to write a brief review of
Marks' book "Between Silk and Cyanide".  The capsule summary:  read it,
and try to understand what he's really teaching about cryptography,
amidst all the amusing anecdotes and over-the-top writing.

I generally agree with what you have to say, but I can't resist 
adding some comments. I liked the book a lot. My review is at 
http://world.std.com/~reinhold/silkandcyanide.html


The main lesson is about threat models.  If asked, I dare say that most
readers of this mailing list would say "of course keying material
should be memorized if possible, and never written down".  That seems
obvious, especially for agents in enemy territory.  After all, written
keys are very incriminating.  It's obvious, and was obvious to the SOE
before Marks.  It was also dead-wrong -- accent on the "dead".

I think it is also wrong advice for most civilian users of cryptography today.


The cipher that agents were taught was a complex transposition, keyed
by a memorized phrase.  The scheme had several fatal flaws.  The first
is the most obvious:  a guess at the phrase was easily tested, and if a
part of the key was recovered, it wasn't hard to guess at the rest, if
the phrase was from well-known source (and it generally was).

It was a tad worse than that. With enough effort, a message 
could be broken by cryptanalytic techniques given no knowledge of 
the key. Each break would reveal a few letters of the agent's poem, 
enabling the Germans to guess the rest, if it was a famous poem, and 
thereby easily break all that agent's traffic. Marks' first response 
was to supply agents with his own poems, which were far less likely 
to be guessed.


More subtly, doing the encryption was an error-prone process,
especially if done under field conditions without the aid of graph
paper.  Per protocol, if London couldn't decrypt the message, the agent
was told to re-encrypt and re-transmit.  But that meant more air time
-- a serious matter, since the Gestapo used direction-finding vans to
track down the transmitters.  Doing some simple "cryptanalysis" -- too
strong a word -- on garbles permitted London to read virtually all of
them -- but that was time-consuming, and really pointed to the
underlying problem, of a too-complex cipher.

I don't agree that cryptanalysis is too strong a word. It sounded 
like Marks developed some fairly sophisticated tools to speed up the 
process.  He had quite an operation going.


The duress code was another weak spot.  If an agent was being compelled
to send some message, he or she was supposed to add some signal to the
message.  But if the Gestapo ever arrested someone, they would torture
*everything* out of that person -- the cipher key, the duress code,
etc.  And they had a stack of old messages to check against -- they
made sure that the duress code stated by the agent wasn't present in
the messages.  The failure was not just the lack of perfect forward
secrecy; it was the lack of perfect forward non-verifiability of the
safe/duress indicators.

The problem with the duress codes (Marks calls them "agent security 
checks" -- these were patterned errors that were to be made in each 
message, but omitted after capture) was not that they were recovered 
under torture, though "worked-out keys" would make that even less 
likely. Agents were taught that capture meant torture and death and 
they should reveal everything but their security checks.  By Marks' 
account, most captured agents bravely followed  those instructions.

Philippe Ganier-Raymond gives the German side of the story in "The 
Tangled Web." The German commander was interested in getting the 
Agents to cooperate and did not attempt torture at first.  The 
agents, knowing that their security checks, once sent, would alert 
London to their captured status, generally complied.

The horrifying problem was that SOE management routinely ignored the 
duress codes when they were asserted.  They did not want to believe 
their operation in Holland was so badly compromised and chalked up 
the omitted checks to poor training.  As a result dozens more agents 
and tons of supplies were parachuted into waiting German hands.

Another big lesson for us today is that if you use authentication 
techniques you had better take them seriously and have clear 
procedures in place for what to do when an invalid signature is 
detected.  It calls into question practices like the routine signing 
of plaintext e-mail that is often garbled enough by mail handlers to 
render the signature invalid.  All that does is get people used to 
accepting invalid signatures. Half-hearted security measures may stop 
lesser threats, but can actually increase an organization's 
vulnerability to the most dangerous attackers.

Marks' solution was counter-intuitive:  give the agent a sheet of
"worked-out keys", printed on silk.  These were not one-time pad keys;

He did 

Re: electronic ballots

2001-01-30 Thread Arnold G. Reinhold

At 1:03 PM -0500 1/25/2001, William Allen Simpson wrote:
-BEGIN PGP SIGNED MESSAGE-

I've been working with Congresswoman Lynn Rivers on language for
electronic ballots.  My intent is to specify the security sensitive
information, and encourage widespread implementation in a competitive
environment.  We'd like feedback.

While it is good that you are taking the time to work with Congress 
on this, I have a number of problems with what you have proposed. 
I've indicated a few specifics below but here are some general 
objections.

First, and most important, it is far from a given that public key 
cryptography can be used to build a better voting system than the 
best paper systems that are presently in use (even assuming as true 
the unproven mathematical foundations of the technology).  There is 
much more room for undetectable shenanigans in an electronic system 
than in a paper system. Political leaders should understand that it 
is not just a question of issuing the right RFP.  In particular,  it 
is premature to start drafting a law.

Second, I find it unsatisfactory to review a proposed cryptosystem 
design presented in legal language. At the very least, a careful 
system design document, preferably with pseudocode, and a detailed 
threat model should be presented. A working model would be better.

You should separate the performance criteria a voting system must 
meet from the technical design.

It is not enough that a voting system be secure, or that it be 
reviewed by experts. Its security must be evident to the average 
voter. Otherwise it is possible to intimidate voters even if the 
system isn't breakable. ("The boss has computer experts working for 
him so you better vote for his candidate if you want to keep your 
job.")

Finally, there are those unproven mathematical foundations. Assuming 
them true may be acceptable for message privacy or financial 
transactions of modest size, but basing our entire political system 
on them is another matter.



Unlike last year's so-called "electronic signatures act", this one
specifies real digital signatures, with definitions culled from the
usual Menezes et alia Handbook.

I would much rather you specify specific technologies, such as the FIPS 
standards (SHA1, SHA2, AES (it will be out soon enough), DSA) and 
P1363. You can always add "or demonstrated equivalent" (though I 
wouldn't). The Handbook definitions are far too loose in legal hands. 
System security analysis is very dependent on the exact algorithms 
used, bit lengths, protocols, etc., so I wouldn't want every vendor 
making these choices.  That would complicate security review 
enormously. Plus, in my experience even demonstrated weaknesses are 
pooh-poohed by vendors.


Here's what it looks like so far (draft #1.2).

Summary:

Minimal requirements for conducting electronic elections.  Technology and
vendor neutral.  Promotes interoperability, robustness, uniformity, and
verifiability.  Easily integrated into existing equipment and practices.

Handle duplicate votes and/or denial of service through submission of
bogus votes.  Permit multiple persons to use the same machinery.  Inhibit
persons with access to the machine from fraud.  Provides penalties for
circumvention.

Education & telecommunications; all computing equipment purchased for
schools or libraries with federal money under "eRate" or other
assistance program [cite] shall be capable of use for federal elections.
States receiving such funds shall participate in electronic federal
elections.



Title __ -- Electronic Election Requirements

SEC. xx01. SHORT TITLE.

This title may be cited as the ``Electronic Election Requirements Act''.


SEC. xx02. DEFINITIONS. -- In this title:

(A) BASE64 ENCODING -- A standard method for compact display of
arbitrary numeric data, described in Multipurpose Internet Mail
Extensions (MIME), Internet RFC-2045 et seq.

(B) DIGITAL CERTIFICATE -- A verifiable means to bind the identification
and other attributes of a public key to an entity that controls the
corresponding private key using a digital signature.  In this
application, the certificate shall be self-signed, and signed by the
appropriate authorizing state server.

(C) DIGITAL SIGNATURE -- A verifiable means to bind information to an
entity, in a manner that is computationally infeasible for any
adversary to find any second message and signature combination that
appears to originate from the entity.  Any method used for an
election shall ensure integrity and non-repudiation for at least ten
years.

(D) ELECTION SOFTWARE -- Applications or browser applets that display an
electronic ballot and record the voter choices.

(E) ELECTRONIC ELECTION SYSTEMS -- A collection of electronic
components, including election software, hardware, and platform
operating system, on both local clients and remote servers, used in
the election.

(F) 

Spark gap digitizers (was NONSTOP Crypto Query)

2001-01-15 Thread Arnold G. Reinhold

I remember those. They were made by Summagraphics. We purchased a 
large format one (about 4 feet X 5 feet) to digitize apparel 
patterns. They had linear microphones along the top and left sides of 
the table.  You had to be careful not to put your free hand between 
the spark pen and the microphones. I recall reading about 3-D 
versions.

The tablets were accurate to a few hundredths of an inch but were not 
that reliable. I think they simply started two counters when the 
spark went off and stopped each when the microphone registered a 
sound.  If I remember right, they did around 5 points per second. We 
eventually switched to mechanical technology.

The noise from a spark probably has a much faster rise time than a 
keyboard click, but with modern signal processing it might well be 
feasible to resolve key presses. Of course, if one can get access to 
the room where the computer is used, it is probably easier to bug the 
keyboard directly.  Still it may be time to add mouse-based 
passphrase input as an option to programs like PGP.

Arnold Reinhold



At 10:24 AM -0500 1/15/2001, Trei, Peter wrote:
I've seen an existence proof which indicates that this is possible.
Back when I was first getting involved with computers (circa 1972),
some digitizer tablets worked by speed-of-sound measurements.
The stylus tip contained a small  spark gap which was energized
when the stylus pressed on the  tablet. This created a spark,
and the spark a minuscule roll of  thunder. Microphones situated
along the edges of the tablet recorded the arrival times of the sound,
and the location of the stylus was calculated to within a millimeter or two.

This was a peripheral for a DEC PDP-8E.

This was calculating a position over about 20 cm to a millimeter,
in real time, in 1972. Doing so to a resolution of a centimeter or
two, in 2001, over several meters sounds feasible.

Peter Trei

 --
 From:Ray Dillinger[SMTP:[EMAIL PROTECTED]]
 Sent:Friday, January 12, 2001 4:37 PM
 To:  John Young
 Cc:  [EMAIL PROTECTED]
 Subject: Re: NONSTOP Crypto Query



 On Fri, 12 Jan 2001, John Young wrote:

 Wright also describes the use of supersensitive microphones
 to pick up the daily setting of rotors on cryptomachines of the
 time, in particular the Hagelins made by CryptoAG.

 Hmmm.  That sounds like a trick that could be brought up to
 date.  If you get two sensitive microphones in a room, you
 should be able to do interferometry to get the exact locations
 on a keyboard of keystrokes from the sound of someone typing.
 I guess three would be better, but with some reasonable
 assumptions about keys being coplanar or on a surface of known
 curvature, two would do it.  Interesting possibilities.

  Bear

 [A quick contemplation of the wavelength of the sounds in question
  would put an end to that speculation I suspect. --Perry]
 





Re: NONSTOP Crypto Query

2001-01-14 Thread Arnold G. Reinhold

One interesting question is exactly how strong radio frequency 
illumination could cause compromise of information being processed by 
electronic equipment. I have an idea for a mechanism whereby such 
illumination could induce generation of harmonic and beat frequencies 
that are modulated by internal data signals.

This mechanism is based  on an effect that is familiar to ham radio 
operators, who are often bedeviled by neighbors complaining of 
television interference. Here is a quote from the chapter on 
interference in an old (1974) edition of the ARRL Radio Amateur's 
Handbook:

"Harmonics by  Rectification"

"Even though the transmitter is completely free from harmonic output 
it is still possible for interference to occur because of harmonics 
generated outside the transmitter. These result from rectification of 
fundamental-frequency currents induced in conductors in the vicinity 
of the transmitting antenna. Rectification can take place at any 
point where two conductors are in  poor electrical contact, a 
condition that frequently exists in plumbing, downspouting, BX cables 
crossing each other, ...It can also occur ... in power supplies, 
speech equipment, etc. that may not be enclosed in the shielding 
about the RF circuits."

In the case of computer equipment, the conductor could be a wire, 
external cable or even a trace on a printed circuit board. Now 
imagine that the source of rectification is not a poor connection, 
but a transistor junction in a logic gate or line driver. As that 
device is switched on and off, RF rectification may be switched on 
and off as well, modulating the generated harmonic with the input 
signal. If that signal carries sensitive information, all the 
information would be broadcast on the harmonic output. Keyboard 
interfaces, video output circuits and serial line drivers come to 
mind as excellent candidates for this effect, since they often carry 
sensitive information and are usually connected to long wires that 
can absorb the incident RF energy and radiate the harmonics.

All an attacker has to do is monitor a site transmitting at frequency 
f and analyze any signals at 2*f, 3*f, etc. If the site has more than 
one transmitter, say a command hut, or a naval ship,  there are also 
beat frequencies to consider f1+f2, f1-f2, 2*f1+f2, 2*f1-f2,  etc. 
Note that harmonics and beats radiated from the equipment under 
attack are vastly easier to detect that any re-radiation at the 
fundamental frequency, which would be swamped by the primary 
transmitter's signal.
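
For concreteness, a small Python sketch of the frequencies an attacker 
would watch (the transmitter frequencies are made-up values, not taken 
from any real site):

f1, f2 = 14.2, 7.1   # MHz; illustrative transmitter frequencies
harmonics = [k * f for f in (f1, f2) for k in (2, 3)]        # 2*f, 3*f
beats = [f1 + f2, abs(f1 - f2), 2*f1 + f2, abs(2*f1 - f2)]   # beat products
print(sorted(harmonics + beats))  # candidates for modulated leakage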

There is also a potential active attack where an adversary 
frequency-sweeps your equipment with RF hoping to find a parasitic 
harmonic generator. This might be the "resonance" technology Peter 
Wright referred to.  If the source illumination causes a resonance 
by, say, operating at 1/4 the electrical wavelength of the video 
output cable, any effect might be magnified greatly. (The even 
harmonics would be suppressed, but odd harmonics would not be.) 
Illumination could be done directly or over telephone, cable TV or 
power lines.

This might also explain "NONSTOP testing and protection being 
especially needed on vehicles, planes and ships," since they often 
carry multiple radio transmitters and are more easily exposed to 
monitoring and external illumination than a fixed site inside a 
secure perimeter.

The two code names (NONSTOP and HIJACK) might possibly refer to the 
passive and active modes.  Or NONSTOP may refer to radiated signals 
and HIJACK to signals over hardwire lines. Or one could cover all the 
effects I am proposing and the other something completely different. 
Whatever.

FWIW,

Arnold Reinhold


At 2:23 AM + 1/13/2001, David Wagner wrote:
In a paper on side channel cryptanalysis by John Kelsey, Bruce Schneier,
Chris Hall, and I, we speculated on possible meanings of NONSTOP and HIJACK:

   [...]
   It is our belief that most operational cryptanalysis makes use of
   side-channel information.  [...]  And Peter Wright discussed data
   leaking onto a transmission line as a side channel used to break a
   French cryptographic device [Wri87].

   The (unclassified) military literature provides many examples of
   real-world side channels.  [...]  Peter Wright's crosstalk anecdote
   is probably what the HIJACK codeword refers to [USAF98]. Along
   similar lines, [USAF98] alludes to the possibility that crosstalk from
   sensitive hardware near a tape player might modulate the signal on the
   tape; [USAF98] recommends that tapes played in a classified facility be
   degaussed before they are removed, presumably to prevent side channels
   from leaking. Finally, one last example from the military literature
   is the NONSTOP attack [USAF98, Chapters 3-4]: after a careful reading
   of unclassified sources, we believe this refers to the side channel
   that results when cryptographic hardware is illuminated by a nearby
   radio transmitter (e.g. a cellphone), thereby modulating 

Re: NSA abandons some cool stuff

2001-01-10 Thread Arnold G. Reinhold

At 6:09 PM -0800 1/8/2001, David Honig wrote:
At 07:51 PM 1/8/01 -0500, Arnold G. Reinhold wrote:
...
 By shielding the fixtures, they effectively
place the lights outside of the enclosure.

Yes.  But 1. you'd still want a filter on the power mains
inside your physically secured zone 2. The site had a
generator... and presumably a guarded perimeter (think
1/R^2) so emissions were probably less important than
listening sensitivity...

I suspect they would not rely on the guarded perimeter for TEMPEST, 
at least not back then.  The 1/R^2 attenuation applies to reception 
as well. One would put distance between the antennae and the 
buildings housing the computers and other sources of noise.


I'll bet the wiring to
 those fixtures is within carefully grounded conduit.

Building codes often require this, anyway, though probably
not grounded to the extent of someone concerned with emissions.

I doubt they require conduit in rural NC. And my guess is you'll see 
welded straps bridging each joint.

Again, it makes much more sense (cost, number of items to check
periodically) to put isolation centrally.

The kind of filtering you need for TEMPEST is pretty fancy (and 
expensive no doubt).  I have heard numbers like 100+ db.  The filters 
have to be located at boundary of the shielded enclosure. I don't 
believe you can do it centrally.

The more I think about it, the less convinced I am that this was a 
intercept receiving site.  If it were, why was it abandoned? Surely 
NSA does not have less need for that sort of thing in the post-cold 
war era? And why put one in North Carolina?

It may have been a site for operational control of NSA satellites. 
The large antennae and secluded location would make jamming more 
difficult. The dual systems and self-contained power would insure 
high availability and the shielding and fibre optics might also be 
directed to EMP protection. The 1995 abandonment might have been due 
to a realization that NSA could safely share satellite control 
facilities with other DOD satellite owners, once the 
money-is-no-object era ended.


It would be fun to take a tour!

It looks like those RF astronomers would be willing, if you
shut your cell phone off while visiting :-), though likely
miffed that you're more interested in the facility than in the
astronomy...

-

Another possibility is that they were so freaked by the static sensitivity
of early MOS devices that they grounded the carpets...




Re: NSA abandons some cool stuff

2001-01-09 Thread Arnold G. Reinhold

At 01:27 PM 1/7/01 -0500, Arnold G. Reinhold wrote:
"Every inch of floor in more than four buildings was covered with
two-by-two-foot squares of bleak brown carpet. When the astronomers
tried to replace it, they discovered it was welded with tiny metal
fibers to the floor. The result, they eventually realized, is that
the rugs prevent the buildings from conducting static electricity.

Even the regular lighting looks different, covered by sleek metal
grids that prevent the light bulbs from giving off static
interference. "

Sounds more like TEMPEST shielding.


It resembles TEMPEST, but shielding works both ways.  The spooks chose
the site because it was RF quiet, but had to run their computers in the
same area as sensitive dishes.  It makes sense that the shielding
was to quiet their own emissions to help their receiving.  After
all, fluorescent bulbs don't leak much intelligence :-) but they
sure cause electrical noise.

You may be right about their concern being to prevent interference 
with their listening equipment, but I don't agree with your last 
point.  As I understand it, all electrical wiring coming out of a 
TEMPEST enclosure has to be carefully (and expensively) filtered. 
The power wiring to lighting fixtures can pick up and re-radiate 
compromising signals. By shielding the fixtures, they effectively 
place the lights outside of the enclosure.  I'll bet the wiring to 
those fixtures is within carefully grounded conduit.

It would be fun to take a tour!

Arnold Reinhold





Re: Perfect compression and true randomness

2001-01-08 Thread Arnold G. Reinhold

I don't think Chaitin/Kolmogorov complexity is relevant here. In real 
world systems both parties have a lot of a priori knowledge. Your 
probably_perfect_compress program is not likely to compress this 
sentence at all, but PKZIP can.  The probably_perfect_compress 
argument would work (ignoring run time) if Alice first had to send 
Bob the entire PKZIP program, but in reality she doesn't. Also 
discussing "perfect compression" doesn't make sense in the absence of 
a space of possible messages and a probability distribution on that 
space.

I don't agree that the assumption of randomness in OTP's is on the 
same footing as "perfect" compression.  The laws of physics let you 
put a lower bound on the entropy per bit for practical noise 
generators.  You can then distill the collected bits to produce fewer 
bits which are completely random.

In any case, as I tried to point out before, perfect compression, 
whatever it may be, does not prevent a known-plaintext attack.  If 
Malfoy knows the plaintext and the compression algorithm, he has 
everything he needs to guess or exhaust keys. If he has a large 
number of plaintexts, or can choose plaintexts, he might be able to 
mount more sophisticated attacks.

Arnold Reinhold


At 9:20 PM -0800 1/4/2001, Nick Szabo wrote:
Anonymous wrote (responding to the idea of "perfect compression"):
 ... Once you have specified
 such a probability distribution, you can evaluate how well a particular
 compression algorithm works.  But speaking of absolute compression or
 absolute entropy is meaningless.

These ideas have on a Turing machine the same meaning as the idea of
"truly random numbers", and for the same reason.  The assumption of
randomness used in proving that OTPs and other protocols are
"unconditionally" secure is very similar to the assumption that a string
is "perfectly compressed".  The problem is that determining the absolute
entropy of a string, as well as the equivalent problem of determining
whether it is "real random", is both uncomputable and language-dependent.

Empirically, it seems likely that generating truly random numbers is much
more practical than perfect compression.  If one has access to certain
well-observed physical phenomena, one can make highly confident, if
still mathematically unproven, assumptions of "true randomness", but
said phenomena don't help with perfect compression.

If we restrict ourselves to Turing machines, we can do something *close*
to perfect compression and tests of true randomness -- but not quite.
And *very* slow.  From a better physical source there is still the problem
that if we can't sufficiently test them, how can we be so confident
they are random anyway?  Such assumptions are based on the extensive and
various, but imperfect, statistical tests physicists have done (has
anybody tried cryptanalyzing radioactive decay?  :-)

We can come close to testing for true randomness and doing perfect
compression on a Turing machine.   For example, here is an algorithm that,
for a sufficiently long but finite number of steps t, will *probably* give you
the perfect compression (I believe the probability converges on
a number related to Chaitin's "Omega" halting probability as t grows,
but don't quote me -- this would make an interesting research topic).

probably_perfect_compress(data,t) {
    shortest_program = data   /* initialize so the comparison below works */
    for all binary programs smaller than data {
        run program until it halts or it has run for time t
        if (output of program == data AND
            length(program) < length(shortest_program)) {
            shortest_program = program
        }
    }
    print "the data: ", data
    print "the (probably) perfect compression of the data", shortest_program
    return shortest_program
}

(We have to make some reasonable assumption about what the binary
programming language is -- see below).

We can then use our probably-perfect compression algorithm as a statistical
test of randomness as follows:

probably_random_test(data,t) {
   if length(probably_perfect_compress(data,t)) >= length(data)
   then print "data is probably random"
   else print "pattern found, data is not random"
}

We can't *prove* that we've found the perfect compression.  However,
I bet we can get a good idea of the *probability* that we've found the
perfect compression by examining this algorithm in terms
of the algorithmic probability of the data and Chaitin's halting
probability.

Nor is the above algorithm efficient.   Similarly, you can't prove
that you've found truly random numbers, nor is it efficient to
generate such numbers on a Turing machine.  (Pseudorandom
numbers are another story, and numbers derived from non-Turing
physical sources are another story).

We could generate (non-cryptographic) probably-random numbers as follows:

probably_random_generate(seed,t) {
   return probably_perfect_compress(seed,t)
}

For cryptographic applications there are two important ideas,
one-wayness and expanding rather than contracting the seed, that
are 

Re: Cryptographic Algorithm Metrics

2001-01-03 Thread Arnold G. Reinhold

At 10:38 PM + 1/3/2001, Peter Fairbrother wrote:
on 3/1/01 9:25 pm, Greg Rose at [EMAIL PROTECTED] wrote:

  At Crypto a
 couple of years ago the invited lecture gave some very general results
 about unconditionally secure ciphers... unfortunately I can't remember
 exactly who gave the lecture, but I think it might have been Oded
 Goldreich... forgive me if I'm wrong. The important result, though, was
 that you need truly random input to the algorithm in an amount equal to the
 stuff being protected, or you cannot have unconditional security.

Not so. Perfect compression with encryption works too.


How does perfect compression prevent a known plaintext attack?


Arnold Reinhold


PS I am also curious why Mr. Smith considers 1024-bit RSA to be 
"Conditionally Computationally Secure."




Big Number Calculator Applet

2000-12-17 Thread Arnold G. Reinhold

I've written a number calculator applet as a number theory teaching 
tool. It exposes most of the functionality in the Java 1.1 (and 
later) BigInteger package, including prime checking and modular 
arithmetic.  One of its goals is to let people try out various 
cryptographic calculations by hand. For example, I try to describe a 
manual D-H key exchange procedure.
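
For readers who want to see the shape of that calculation, here is a toy 
D-H exchange in Python (the parameters are illustrative values I picked, 
not ones from the applet's documentation):

# Toy Diffie-Hellman via modular exponentiation -- the kind of
# calculation the applet lets you try by hand.  Not secure parameters.
p = 2**127 - 1                 # a Mersenne prime, fine for illustration
g = 5                          # public base (assumed value)
a, b = 123456789, 987654321    # Alice's and Bob's secret exponents
A, B = pow(g, a, p), pow(g, b, p)      # the values they exchange
assert pow(B, a, p) == pow(A, b, p)    # both compute the same shared secret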

The applet and documentation are at 
http://world.std.com/~reinhold/BigNumCalc.html Unfortunately, 
Netscape 4.7 and earlier do not support Java 1.1, so you will need 
Internet Explorer 4.5 or later or the new Netscape 6.0. I've tested 
the applet using IE 5.0 and 5.5 on Windows 98 and IE 4.5 (MRJ) on the 
Mac.

I'd like to get some feedback before I try to distribute the applet 
more widely. Comments, bug reports and test results on other 
browser/platform combinations would be most helpful. Also I'd be 
interested in suggestions as to a suitable open source repository for 
the Java source code.

Enjoy,

Arnold Reinhold




Re: migration paradigm (was: Is PGP broken?)

2000-12-10 Thread Arnold G. Reinhold

At 3:35 PM -0600 12/7/2000, Rick Smith at Secure Computing wrote:
At 02:43 PM 12/7/00, Peter Fairbrother wrote:

In WW2 SOE and OSS used original poems which were often pornographic. See
"Between Silk and Cyanide" by Leo Marks for a harrowing account.

Yes, a terrific book. However, the book also contains an important 
lesson regarding human memory.

Marks was responsible for training agents in crypto procedures to 
use while operating behind enemy lines, and he was also responsible 
for decrypting the messages they sent back. Marks found himself 
organizing a cryptanalysis team (independent of Bletchley) primarily 
for the purpose of cracking of mis-encrypted messages received from 
their own agents. In short, the agents mis-remembered their poems 
and used their faulty recollection as the basis for their encryption.

The book is excellent. I wrote a review of it from a crypto 
perspective. It's online at 
http://world.std.com/~reinhold/silkandcyanide.html In this context it 
is worth noting that Marks gave up on memorized keys altogether, 
preferring one-use keys printed on silk and hidden in the agents' 
clothing.

Now, just how do we intend to address such concerns in our 
memory-based authentication systems? Our whole technology for using 
memorized secrets is built on the belief that people will remember 
and recite these secrets perfectly. Some applications could take 
more of a 'biometric pattern matching' strategy that measures the 
distance between the actual passphrase and a stored pattern. But 
this won't provide us with a secret we can use in crypto 
applications like PGP.


One simple thing we can do is to stop telling people that they must never 
write their passphrase down. For most users, memorization greatly 
increases the risk of losing valuable data while doing little to 
protect against real risks.  A written-down key can be kept in the 
user's possession or in a safe deposit box, and, unlike a hardware 
token, it can be backed up.  Does anyone have data on how often the 
average person loses their wallet or purse and has to replace their 
credit cards? From my friends' experiences I'd guess it averages at 
least once every 10 years, with some people seeming to lose theirs 
every couple of years. Lose a token and your (unescrowed) data is 
gone.

Another thing that would make passphrase use easier for most people 
would be to make passphrases case insensitive. Weird capitalization 
may have been a marginally useful way to add entropy when users were 
restricted to 8 character passwords, e.g. Unix Crypt(3), but it 
really makes no sense in longer passphrase systems if you think about 
entropy per keystroke, which I think is the right measure of user 
cost.
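
A quick back-of-the-envelope comparison makes the point (my arithmetic, 
assuming randomly chosen characters):

import math

# One more random lowercase letter buys log2(26) bits per keystroke:
lengthen = math.log2(26)   # ~4.70 bits per keystroke
# Randomly capitalizing an existing letter adds at most 1 bit of entropy
# but costs an extra (Shift) keystroke:
capitalize = 1.0           # 1 bit per extra keystroke
print(lengthen, capitalize)   # lengthening the passphrase wins easily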

Finally, I'd like to see software that employs passphrases offer to 
suggest a passphrase, rather than let the poor user sort through all 
the conflicting -- and often bad -- advice that is out there. After 
all, any public key system has to have a good source of true 
randomness.  And if you don't trust that software, you shouldn't be 
giving it your passphrase under any circumstances.

Arnold Reinhold




DOD rescues Iridium

2000-12-09 Thread Arnold G. Reinhold

From http://www.defenselink.mil/news/Dec2000/b12062000_bt729-00.html

The Department of Defense, through its Defense Information Systems 
Agency, last night awarded Iridium Satellite LLC of Arnold, Md., a 
$72 million contract for 24 months of satellite communications 
services. This contract would provide unlimited airtime for 20,000 
government users over the Iridium satellite network.

The contract includes options which, if exercised, would bring the 
cumulative value of this contract to $252 million and  extend the 
period of performance to December 2007.

The Department has taken this action because the Iridium system 
offers state-of-the-art technology. It features on-satellite signal 
processing and inter-satellite crosslinks allowing satellite-mode 
service to any open area on earth. It provides  mobile, 
cryptographically secure telephone services to small handsets 
anywhere on the globe, pole-to-pole, 24 hours a  day. The system and 
its DoD enhancements will provide handheld service currently not 
available.
   ...

"Iridium will not only add to our existing capability, it will 
provide a commercial alternative to our purely military systems. 
This may enable real civil/military dual use, keep us closer to the 
leading edge technologically, and provide a real  alternative for the 
future," said Dave Oliver, principal deputy undersecretary of Defense 
(Acquisition, Technology and  Logistics).

Iridium Satellite LLC is now purchasing the operating assets of 
Iridium LLC and its existing subsidiaries, pursuant to a Nov. 22, 
2000 order of the U.S. Bankruptcy Court for the Southern District of 
New York.
...

Early next year, Iridium will offer a classified capability. 
Classified service will not only be provided for users already 
registered to the DoD gateway, but will also be extended to new users 
from DoD, other federal agencies, and selected allied governments.

[Works out to $150/handset/month ($72 million / 20,000 users / 24 
months), not unreasonable for secure, 4*Pi coverage. I wonder how 
many units will end up in the hands of political appointees? It could 
become the status symbol of the next administration. -- agr]




Re: migration paradigm (was: Is PGP broken?)

2000-12-07 Thread Arnold G. Reinhold

At 3:43 PM -0600 12/6/2000, Rick Smith at Secure Computing wrote:
Does anyone have a citation as to the source of this 1.33 
bits/letter estimate? In other words, who computed it and how? It's 
in Stinson's crypto book, but he didn't identify its source. I 
remember tripping over a citation for it in the past 6 months, but 
can't find it in my notes.


Bruce Schneier has two cites in Applied Cryptography 2nd Ed. on p. 234:

Claude E. Shannon, "Prediction and Entropy in Printed English," Bell 
System Technical Journal, v. 30, n. 1, 1951, pp. 50-64.

Thomas Cover and R.M. King, "A Convergent Gambling Estimate of the 
Entropy of English," IEEE Trans. on Information Theory, v. IT-24, n. 
4, July 1978, pp. 413-421.

Arnold Reinhold




Re: migration paradigm (was: Is PGP broken?)

2000-12-05 Thread Arnold G. Reinhold

At 7:20 PM + 12/4/2000, lcs Mixmaster Remailer wrote:
William Allen Simpson [EMAIL PROTECTED] writes:
 My requirements were (off the top of my head, there were more):

  4) an agreed algorithm for generating private keys directly from
 the passphrase, rather than keeping a private key database. 
 Moving folks from laptop to desktop has been pretty hard, and
 public terminals are useless.  AFS/Kerberos did this with a
 "well-known" string-to-key algorithm, and it's hard to convince
 folks to use a new system that's actually harder to use.  We need
 to design for ease of use! 

This is a major security weakness.  The strength of the key relies
entirely on the strength of the memorized password.  Experience has
shown that keys will not be strong if this mechanism is used.

I agree that the average, untutored user is likely to select a 
passphrase too weak to achieve adequate security. On the other hand, 
storing high-quality keys on a typical server or Internet-connected 
PC presents security risks that are comparable in magnitude.

I believe there are applications where a passphrase-generated key is 
preferable. These include situations where keys must be retained for a 
very long time (we know paper lasts) and where people such as 
reporters or NGO workers have to travel to parts of the world where any 
physical keying material in their possession could get them in 
trouble.


There must be something more.  At a minimum it can be a piece of paper
with the written-down, long passphrase.  Or it can be a smart card
with your key on it.  Conceivably it could also be a secure server that
you trust and access with a short passphrase, where the server can log
incorrect passphrase guesses.  But if you can attack a public key purely
by guessing the memorized passphrase which generated the secret part,
the system will not be secure.

Writing down the passphrase is reasonable in many, but not all 
situations. Hardware tokens can be damaged or lost.  That risk may be 
unacceptable in some applications.  And is there really such a 
thing as a trustworthy, secure server?  Will Santa bring me one?

I think a standard such as Mr. Simpson suggests is a worthwhile idea. 
No one is forced to use a standard just because it exists. One size 
does not fit all. However I would propose including an option for key 
stretching in any such standard. Key stretchers can bridge the gap 
between what people are willing to memorize and reasonable levels of 
security. I have some ideas for methods that would be more effective 
than mere repeated hashing that I would be glad to contribute.
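
For reference, the "mere repeated hashing" baseline looks like this 
minimal Python sketch (the hash choice and iteration count are arbitrary 
examples, not a recommendation):

import hashlib

def stretch(passphrase: str, iterations: int = 2**16) -> bytes:
    # Iterated hashing: each doubling of the count adds roughly one
    # bit of work for an attacker testing passphrase guesses.
    h = hashlib.sha1(passphrase.encode("utf-8")).digest()
    for _ in range(iterations):
        h = hashlib.sha1(h).digest()
    return h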

Arnold Reinhold




AES (was Re: migration paradigm)

2000-12-05 Thread Arnold G. Reinhold

At 11:19 PM -0800 12/4/2000, Bram Cohen wrote:
On Mon, 4 Dec 2000, William Allen Simpson wrote:

 We could use the excuse of AES implementation to foster a move to a
 new common denominator.

AES is silly without an equivalently good secure hash function, which we
don't have right now.

[SHA-2 looks pretty good. What's your problem with it? --Perry]

We already have too many common denominators. I'm waiting for something to
stop looking like an experiment to actually start advocating use of a
particular crypto application.

-Bram Cohen

At the risk of adding yet another "common denominator," I think AES 
might be of use in breaking the PGP 2.6 deadlock.  As I understand 
things from this thread, the OpenPGP folks object on principle to 
supporting 2.6 message formats because they require patented IDEA. 
Since source is widely available, it should be easy to create new 
versions of PGP 2.6 with AES128 as a drop-in replacement for IDEA. A 
utility could be kludged up to convert encrypted key rings. If 
OpenPGP supported that format (the patent issue would be gone and I 
gather the code already exists) there might be a basis for compromise.

Arnold Reinhold




Re: migration paradigm (was: Is PGP broken?)

2000-12-05 Thread Arnold G. Reinhold

At 3:04 PM -0800 12/5/2000, Ray Dillinger wrote:
On Tue, 5 Dec 2000, Arnold G. Reinhold wrote:

...

 I believe there are applications where a passphrase generated key is
preferable.

I think a standard such as Mr. Simpson suggests is a worthwhile idea.
No one is forced to use a standard just because it exists. One size
does not fit all. However I would propose including an option for key
stretching in any such standard. Key stretchers can bridge the gap
between what people are willing to memorize and reasonable levels of
security.

Uh, no.  A dictionary attacker can stretch his guesses in exactly
the same way, so there is no security from a so-called "password
stretcher".

It is good that you raise this point, but I believe it is easily 
dealt with. The essence of the dictionary attack is that it allows 
the attacker to spread his investment in creating and storing the 
dictionary over multiple attacks. The standard way to break that up 
is to use salt. There is a straightforward way to apply the salt 
principle in this case. The user merely has to append a non-secret 
but relatively unique string to his secret passphrase before it is 
hashed. This can be something very familiar to him, such as his phone 
number, e-mail address, automobile license tag or social security 
number. Key generation software can prompt for this information 
separately.
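
A minimal sketch of that salting idea (SHA1 and simple concatenation are 
assumptions for illustration; in practice this combines naturally with 
the key stretching discussed earlier):

import hashlib

def salted_key(passphrase: str, personal_salt: str) -> bytes:
    # The salt is not secret, only relatively unique to this user, so a
    # precomputed dictionary cannot be reused against other victims.
    return hashlib.sha1((passphrase + personal_salt).encode("utf-8")).digest()

key = salted_key("my secret passphrase", "617-555-0123")  # phone number as salt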


On the other hand, long passphrases that are *not* random gibberish
are easy to remember.  As children, many of us (Americans in the
midwest) were called upon to memorize documents like the constitution,
word for word.  Even the "special" kids got through the Preamble to
the Declaration of Independence.  I remember standing up and reciting
"Annabell Lee" when I was a sixth-grader.  Now those documents, along
with all of Shakespeare, are too well known to serve as keys.  But
we are all capable of writing a piece of original prose or poetry and
memorizing the sucker.  Sixty, eighty words -- that's easy.  A thousand
is do-able with some time and effort.  A hundred words of verse, if it's
original and you've never spoken it or shown it to anyone, is a pretty
damn secure passphrase.

If you are comfortable memorizing 100 words of original verse and 
typing it in accurately each time you need to enter a passphrase, 
more power to you; but I believe you represent a tiny minority of 
users. My extended tirade on this subject is at 
http://www.diceware.com


So be conservative with how much entropy you get from the keyphrase
(my preferred standard is about 1 to 1.33 bits per character), ignore
spacing and punctuation, and let the text entry for the passphrase
be a big honkin' text block instead of a teeny little forty-character
line. If someone wants to enter "sex" as a password, s/he deserves
what s/he gets (although you may put up an "insecure passphrase"
warning box for him/her).  But if they want to use the entirety of
a poem in Latin that they made up about their job, the implementor
shouldn't stand in their way.


I don't trust that 1.33 bit per character estimate for made up 
passphrases (people are far more predictable than they like to 
believe), but I agree that users should be allowed to employ long 
passphrases if they wish.

Arnold Reinhold




Re: Is PGP broken?

2000-12-04 Thread Arnold G. Reinhold

At 9:55 AM +0100 11/29/2000, PA Axel H Horns wrote:
On 29 Nov 2000, at 7:07, Stephan Eisvogel wrote:

 Adam Back wrote:
  (And also without IDEA support for patent reasons even now
  that the RSA patent has expired.)

 Do you know when the IDEA patent will expire? I will hold a
 small party myself then. B)

The EP 0 482 154 of ASCOM TECH AG has been filed on May 16, 1991.
Add 20 Years. If ASCOM TECH AG pays annual renewal fees to the
respective national Patent Offices every year. Otherwise it might
lapse earlier.

Axel H Horns

There is also US patent 5214703 which was filed on Jan. 7, 1992.  See 
http://www.delphion.com/details?pn=US05214703__

Arnold Reinhold




Re: Lots of random numbers

2000-11-16 Thread Arnold G. Reinhold

At 10:19 PM -0500 11/15/2000, Rich Salz wrote:
I'm putting together a system that might need to generate thousands of RSA
keypairs per day, using OpenSSL on a "handful" of Linux machines.  What do
folks think of the following: take one machine and dedicate it as an entropy
source. After 'n' seconds turn the network card into promiscuous mode, scoop
up packets and hash them, dump them into the entropy pool. Do this for 'm'
seconds, then go back to sleep for awhile.  The sleep and wake times are
random numbers.  Other systems on the network periodically make an SSL
connection to the entropy box, read bytes, and dump it into their /dev/random
device.

Is this a cute hack, pointless, or a good idea?
   /r$

I think it is a bad idea for two reasons. First, it is hard to 
characterize the entropy in the packet stream. Second, being 
connected to a network makes the noise generating machine vulnerable 
to attack. Compromised noise generators are very difficult to detect 
and devastating to security.

I think you would be far better off using a true noise source, or, 
better, two of them.  See 
http://world.std.com/~reinhold/truenoise.html for some suggestions. 
Attach it to a Linux box dedicated to key pair generation and keep 
the machine off the network entirely. If the keys are going into 
tokens, load the tokens from the key gen machine. If the keys are 
being used in other software, encrypt them and transfer them via 
floppy or some simple serial link.
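
As a minimal sketch of the two-source idea (the device reads below 
stand in for the real noise hardware; /dev/urandom is used here only 
so the example runs):

    import hashlib

    def mix(block_a, block_b):
        # Hash the concatenation of reads from two independent noise
        # sources; a failure or compromise of one source alone then
        # does not expose the output.
        return hashlib.sha1(block_a + block_b).digest()

    # Stand-ins for two hardware noise generators; replace with the
    # serial or parallel ports the real devices are attached to.
    with open("/dev/urandom", "rb") as a, open("/dev/urandom", "rb") as b:
        seed = mix(a.read(64), b.read(64))
    print(seed.hex())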

The key gen machine should be physically isolated and secured as 
well, perhaps a laptop in a safe.

Arnold Reinhold




Re: Rijndael Hitachi

2000-10-11 Thread Arnold G. Reinhold

"Steven M. Bellovin" [EMAIL PROTECTED] writes:

 Precisely.  What is the *real* threat model?

 History does indeed show that believed-secure ciphers may not be, and
 that we do indeed need a safety margin.  But history shows even more
 strongly that there are many better ways to the plaintext, and that's
 the real goal.

Ciphers are components of security systems, not complete security 
systems. How best to improve a component is a legitimate engineering 
question even if there is reason to believe the component will often 
be misapplied. At present there is no serious threat to 3DES, so why 
did we bother with the whole AES exercise?

[Look at the benchmarks? --Perry]

Anyway, I think there is an interesting theoretical question here:

Design a cipher algorithm P that assumes as primitives 5 ciphers, C1, 
...,C5 (or more generally N ciphers for odd N > 1) with the same 
block size and key length.  P is to have the same block size and key 
length as the Ci and is to be provably secure against chosen 
plaintext attacks even under the following conditions:

1. One of the Ci is a strong cipher (i.e. there is no attack faster 
than trying all the keys)

2. An attacker gets to supply the other four Ci, subject to the 
condition that they be cipher-like: i.e. they must be bijections 
between the input and output domains, the bijection must be the same 
whenever the key value is the same, and there are no extra outputs.

3. The attacker knows the details of the secure algorithm.


P should be as simple as possible and not employ any additional 
cryptographic primitives (e.g. hashes, S-boxes or special constants).
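
To see why this is harder than it looks, here is a toy illustration 
in Python (not a solution): if P is a plain cascade that reuses one 
key, condition 2 lets the attacker submit the inverse of the strong 
cipher as another Ci and cancel it outright.

    # Toy 8-bit "ciphers."  All are bijections for a fixed key, so
    # they satisfy the cipher-like condition above.
    def c1(k, x):                      # stand-in for the strong cipher
        return (x * 5 + k) % 256
    def c1_inv(k, y):                  # its inverse (205 = 5^-1 mod 256)
        return ((y - k) * 205) % 256
    def weak(k, x):                    # attacker-supplied filler
        return x ^ (k & 0xFF)

    def cascade(k, x, ciphers):
        for c in ciphers:
            x = c(k, x)
        return x

    # With C2 = C1^-1 under the shared key, the cascade degenerates
    # to the weak ciphers alone, however strong C1 is.
    assert cascade(23, 170, [c1, c1_inv, weak, weak, weak]) == \
           cascade(23, 170, [weak, weak, weak])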

Derek Atkins adds:


Why try to pick a Medeco when it's locking a glass door?  :-)

The fact that some people put Medecos in glass doors doesn't mean 
Medeco should never develop a better lock.


Arnold Reinhold




Re: Non-Repudiation in the Digital Environment (was Re: First Monday August 2000)

2000-10-10 Thread Arnold G. Reinhold

At 12:12 PM -0700 10/7/2000, Ed Gerck wrote:
"Arnold G. Reinhold" wrote:

 In public-key cryptography "Non-Repudiation" means that the
 probability that a particular result could have been produced without
 access to the secret key is vanishingly small, subject to the
 assumption that the underlying public-key problem is difficult.  If
 that property had been called "the key binding property" or "condition
 Z," or some other math-ese name, we would all be able to look at this
 notion more objectively. "Non-repudiation" has too powerful an
 association with the real world.

Your definition is not standard. The Handbook of Applied Cryptography
by Menezes et al. defines non-repudiation as a service that prevents 
the denial of an act.  The
same is the current definition in PKIX, as well as in X.509.  This 
does not mean, however as some may suppose, that the act cannot be 
denied -- for example,
it can be denied by a counter authentication that presents an accepted proof.

Thus, non-repudiation is neither a stronger authentication nor a 
longer-lived one.  Authentication is an assertion that something is 
true. Non-repudiation is a negation that something is false. Neither 
is absolute.  And they are quite different when non-boolean variables 
(ie, real-world variables) are used. They are complementary concepts 
and *both* need to be used or we lose expressive power in protocols, 
contracts, etc.

Cheers,

Ed Gerck

You may well be right about the accepted definition of 
non-repudiation, but if you are then I would amend my remarks to say 
that known cryptographic technology cannot provide non-repudiation 
service unless we are willing to create a new legal duty for 
individuals and corporations to protect their secret key or accept 
whatever consequences ensue.  I don't think that is acceptable.

I find the rest of your comment a tad too opaque.  Could you give 
some examples of what you have in mind?


Arnold Reinhold




Re: human failings question

2000-10-05 Thread Arnold G. Reinhold

At 9:23 AM -0700 10/5/2000, David Honig wrote:
At 09:07 PM 10/3/00 -0400, Nina H. Fefferman wrote:


  Hi all,

  Does anyone know where (if at all) I can find statistics for the
predictable strings humans tend to produce when asked to create a
"random" sequence of zeros and ones? Maybe cognitive science papers?
  Has anyone seen these?

  Thanks,

  Nina Fefferman

I have no specific ref in mind, but I do remember that humans
find long runs (e.g., 0101100110) unrandom, when asked to
pick one excerpt vs. another.   There was a
recent paper on the perception of 'lucky streaks' in basketball,
which unearthed their superstitious nature (ie an artifact of learning
algorithms, like randomly-reinforced pigeons' "superstitions").
So come to think of it, there are more papers on (mis)perceiving randomness
than on (mis)generating it.

Here's an interesting question: could you train someone to give
more random sequences by merely giving them an entropy-measure as
feedback?  (Hmm, one could write a program which ran this experiment on
human subjects)


Many years ago I saw a demonstration of the Apollo guidance computer 
that included a 0/1 guessing game. You'd pick 0 or 1, the computer 
would predict your answer (it helped to have someone else 
supervising). Then your pick would be entered on the display/keyboard 
assembly. The computer kept score and would typically be right about 
65% of the time. Once while playing it, I decided to cheat by 
flipping a coin. I was chagrined by what I thought was a long string 
of "Heads," but the computer's score quickly dropped to about 50%.

Arnold Reinhold




Re: AES winner to be announced Monday.

2000-10-02 Thread Arnold G. Reinhold

The following information from the Rijndael Page 
http://www.esat.kuleuven.ac.be/~rijmen/rijndael/index.html may come 
in handy later today when NIST announces the new Advanced Encryption 
Standard (AES):

'Rijndael FAQ

 1.How is that pronounced ?
If you're Dutch, Flemish, Indonesian, Surinamer or 
South-African, it's pronounced like you think it should be. 
Otherwise, you could pronounce it  like "Reign Dahl", "Rain Doll", 
"Rhine Dahl". We're not picky. As long as you make it sound different 
from "Region Deal".

 2.Why did you choose this name ?
Because we were both fed up with people mutilating the 
pronunciation of the names "Daemen" and "Rijmen". (There are two 
messages in this  answer.)

 3.Can't you give it another name ? (Propose it as a tweak!)
Dutch is a wonderful language. Currently we are debating about 
the names "Herfstvrucht", "Angstschreeuw" and "Koeieuier". Other 
suggestions are welcome of course. Derek Brown, Toronto, Ontario, 
Canada, proposes "bob".'


At 9:50 PM +0200 9/30/2000, Nomen Nescio wrote:

Though NIST is being very secretive regarding the AES announcement,
they let the following rumors leak:

1. There is a single winner.
2. It is not an American design.

If so, this rules out MARS, RC6, and Twofish. But now comes the
third rumor:

3. The winner is not covered by any patent or patent claim
identified or disclosed to NIST by interested parties.

Assuming this is true, there is only one algorithm that is not
explicitly mentioned in Hitachi's claim: Rijndael.




Re: Oh for a decently encrypted mobile phone...

2000-09-15 Thread Arnold G. Reinhold

At 10:08 PM -0700 9/13/2000, Bram Cohen wrote:
On Thu, 14 Sep 2000, Enzo Michelangeli wrote:

 http://www.the-times.co.uk/news/pages/sti/2000/09/10/stinwenws01007.html

 SOLDIERS are having to use insecure mobile phones to communicate in
 battlefield exercises because, they say, the army's radio
communications system is so unreliable. Senior commanders believe
 that the reliability of mobile phones outweighs the increased risk of
 conversations being intercepted.

It is interesting to note that scanners capable of monitoring cell 
phone traffic are illegal in the US, making it hard for the Red team 
to go out and buy a unit at Radio Shack and use it to monitor the 
Black team's cell phone traffic.  Such scanners are available 
overseas, at least for analog cell phones, so potential adversaries 
could get them. Of course, most US cell phones won't work in the rest 
of the world anyway.



Wouldn't it be ironic if they resort to buying a bunch of stariums ...


-Bram Cohen

[That would require that Stariums actually appear on the market at
some point. --Perry]


A less ambitious project than Starium might be a line of cell phones 
with symmetric encryption.  You could load the key the same way you 
store speed-dial numbers.  Three 10-digit numbers would be more than 
enough.  Several keys could easily be stored.  Such phones would 
allow small groups to communicate in total secrecy with no additional 
infrastructure.
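
A quick sanity check on the key size that gives:

    import math
    # Three 10-digit numbers = 30 decimal digits of key material:
    print(math.log2(10 ** 30))   # about 99.7 bits, ample for a
                                 # symmetric cipher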


Arnold Reinhold





Re: More thoughts on Man in the Middle attacks and PGP

2000-09-13 Thread Arnold G. Reinhold

At 10:15 PM +0100 9/12/2000, Ben Laurie wrote:
"Arnold G. Reinhold" wrote:

 I had some more thoughts on the question of Man in the Middle attacks
 on PGP. A lot has changed on the Internet since 1991 when PGP was
 first released. (That was the year when the World Wide Web was
 introduced as well.)  Many of these changes significantly reduce the
 practicality of an MITM attack:

 1. The widespread availability of SSL.
 SSL might be anathema to the PGP community since it depends on a CA
 model for trust distribution, but it has become ubiquitous and every
personal computer sold these days includes an SSL-enabled browser 
 and a set of certs. If Bob fears he is under MITM attack, he can use
 SSL to tunnel out. Several companies, such as hushmail.com, are
 already using SSL to offer secure e-mail services. These can be used
 directly by Bob to ask people at random to verify the version of
 Bob's public key at the various PGP key servers.

   An even better approach would be to use SSL to secure connections to
 PGP key servers in different parts of the world.  This would force an
 MITM to subvert all the key servers as a minimum.

There's really nothing stopping an implementation of SSL that uses PGP
for key verification. All that's really required at the end of the day
is some ASCII (to check the server name) and a public key, verified
according to the requirements of the, err, verifier.


Allowing SSL to accept PGP keys might be handy in other contexts, but 
not here. If Bob wants to rule out a MITM attack and he somehow has 
an active PGP key (other than his own) that he trusts, he can simply 
send PGP-encrypted mail asking that key holder to verify Bob's public 
key at the key servers.

The value of SSL in this context is that every PC comes with a set of 
certs that can be used to validate an SSL link. (Mine came with 66 
certs) Bob can walk into any computer store and buy a PC or a Windows 
disk off the shelf.  Unless the MITM attacker has access to the 
private portion of these keys (perhaps a risk if your expected threat 
is United Spooks of Earth), and is willing to risk that compromise 
being exposed, his electronic bubble is pierced.

Arnold Reinhold




Re: More thoughts on Man in the Middle attacks and PGP

2000-09-13 Thread Arnold G. Reinhold

At 6:29 PM +0100 9/13/2000, Ben Laurie wrote:
"Arnold G. Reinhold" wrote:



  There's really nothing stopping an implementation of SSL that uses PGP
 for key verification. All that's really required at the end of the day
 is some ASCII (to check the server name) and a public key, verified
 according to the requirements of the, err, verifier.
 

 Allowing SSL to accept PGP keys might be handy in other contexts, but
 not here. If Bob wants to rule out a MITM attack and he somehow has
 an active PGP key (other than his own) that he trusts, he can simply
 send PGP-encrypted mail asking that key holder to verify Bob's public
 key at the key servers.

 The value of SSL in this context is that every PC comes with a set of
 certs that can be used to validate an SSL link. (Mine came with 66
 certs) Bob can walk into any computer store and buy a PC or a Windows
 disk off the shelf.  Unless the MITM attacker has access to the
 private portion of these keys (perhaps a risk if your expected threat
 is United Spooks of Earth), and is willing to risk that compromise
  being exposed, his electronic bubble is pierced.

I was addressing "SSL might be anathema to the PGP community since it
depends on a CA model for trust distribution".


And I guess what I meant by that was that the PGP community might 
not be happy relying on the PKI/CA's of the world to help PGP counter 
the MITM attack. But in fact the PKI/CA's as they exist today allow 
one to do just that.

Best,

Arnold





Java, zeroize and WW II

2000-09-13 Thread Arnold G. Reinhold

I was searching to see if anyone had done a Zeroize interface for 
Java and found a very interesting page 
http://www.maritime.org/ecm2.htm  on the US military's primary cipher 
machine from World War II, the ECM Mark II, aka CSP-989 aka SIGABA. 
(It turns out the term "zeroize" goes back to the electromechanical 
era and they have a Java version of the ECM-II.)

This page is unusual for the depth of supporting material it 
includes, wheel coding lists for the M-94 "Jefferson Wheel" cipher, 
detailed emergency procedures to use when your ECM-II has been 
compromised, explanations of NSA nomenclature system for 
cryptographic equipment, etc.

Worth a visit.

Arnold Reinhold




More thoughts on Man in the Middle attacks and PGP

2000-09-12 Thread Arnold G. Reinhold

I had some more thoughts on the question of Man in the Middle attacks 
on PGP. A lot has changed on the Internet since 1991 when PGP was 
first released. (That was the year when the World Wide Web was 
introduced as well.)  Many of these changes significantly reduce the 
practicality of an MITM attack:

1. The widespread availability of SSL.
SSL might be anathema to the PGP community since it depends on a CA 
model for trust distribution, but it has become ubiquitous and every 
personal computer sold these days includes an SSL-enabled browser 
and a set of certs. If Bob fears he is under MITM attack, he can use 
SSL to tunnel out. Several companies, such as hushmail.com, are 
already using SSL to offer secure e-mail services. These can be used 
directly by Bob to ask people at random to verify the version of 
Bob's public key at the various PGP key servers.

  An even better approach would be to use SSL to secure connections to 
PGP key servers in different parts of the world.  This would force an 
MITM to subvert all the key servers as a minimum.
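
A sketch of the cross-check in Python; the host names and the 
HKP-style lookup path are illustrative assumptions, not real 
endpoints:

    import hashlib, urllib.request

    SERVERS = ["https://keys.example.net", "https://keys.example.org"]

    def fetch_key(server, key_id):
        url = server + "/pks/lookup?op=get&search=" + key_id
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    def cross_check(key_id):
        # An MITM must now subvert every server (and the SSL certs)
        # consistently; any disagreement exposes him.
        digests = {hashlib.sha1(fetch_key(s, key_id)).hexdigest()
                   for s in SERVERS}
        return len(digests) == 1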

2. Instant messaging in its various guises: IRC, MOOs, MUDs, AOL's 
IM, ICQ,  Web-based chat services and virtual reality worlds. A MITM 
attacker has to be prepared for Bob to attempt to use any of these as 
a way to verify his key with Alice. Unlike e-mail, instant messaging 
gives the MITM almost no time to inspect and alter Bob's messages. In 
particular, the MITM cannot allow anything to pass whose meaning the 
MITM does not understand.

Networked video games may present another opportunity to subvert the 
MITM.  Not only might they have subtle ways to allow signalling, but 
they can be used to establish a shared secret (remember what my 
character did to you in Level 5?) that the MITM will have a hard time 
knowing, short of monitoring the game on a continuous basis.

3. CyberCafes.  It might be conceivable to imagine the combined 
forces of NSA and the rest of the world's spooks being able to detect 
Bob's attempt to log in from anywhere in the world on his own 
computer and then automatically redirect his traffic through the 
MITM. However nothing short of 24 hour surveillance is going to 
enable them to know when Bob enters a CyberCafe.  Even then, it is 
tricky to figure out which account he is logged in under.

4. The ubiquity of the Internet.
When PGP was first introduced, few people had even heard of the 
Internet. Today, at least in the US, a majority of homes are 
connected.  You can walk up to a stranger almost anywhere and ask them 
to send an e-mail message for you.  All it takes is one message to 
Alice.

5. Many PGP users now have a history.  There are thousands of PGP 
users that have been active for years. If Bob has saved even one 
e-mail or usenet message containing a PGP key fingerprint from long 
ago, he can use that information to build a secure link with the 
author, who can then tell Bob what the servers say is his key.

The one downside is near-total user apathy. One way to get users to 
actually verify keys might be to offer a reward for anyone who 
surfaces an actual MITM attack. It is unlikely to be collected.


Arnold Reinhold




Re: DeCSS and first sale

2000-09-07 Thread Arnold G. Reinhold

At 1:08 PM +0100 9/7/2000, Ben Laurie wrote:
John R Levine wrote:
 CSS is entirely about subverting first sale, since the only useful 
thing that
 the CSS crypto does is to assign each DVD a "region code" so that 
the DVD can
 only be played on players with the same region code.  (As has been widely
 noted, if you want to pirate a DVD, you just copy the bits, no crypto
 needed.) The reason that they use region codes is that movies may already be
 on DVD in the US while still in theatres in Europe, or vice versa, and they
 want to prevent people from sending DVDs from one place to the other and
 undermining theatre revenues.  If I were the movie industry, I'd want to
 prevent it, too, but if I were a judge interpreting the copyright law, I'd
 look to the first sale doctrine and say "tough noogies".

That's not quite the only reason for region codes: regional price
differentials are also important to revenue. :-)


I think the issue of enforcing foreign censorship is important too. 
Here is the main part of my comment to the LOC on this matter: 
http://www.loc.gov/copyright/reports/studies/dmca/reply/Reply014.pdf
 

"...The technical protection measures that DCMA addresses can also be 
used by foreign governments to prevent unwanted content from being 
viewed by its residents.  This is the digital-millennium equivalent 
of the jamming of Radio Free Europe during the Cold War. An attempt 
by a US Citizen to bypass those measures, for example by buying a DVD 
movie about Tibet and re-coding it so that it is playable by a 
Chinese-zoned DVD player, could be prosecuted under the DMCA as an act of 
circumvention. The tools for producing such a re-coded DVD are 
similarly proscribed under this law, as interpreted by its supporters 
and US District Judge Kaplan.

Here is the testimony of Dean Marks, Senior Counsel, Intellectual 
Property for Time Warner, given at the Stamford Library of Congress 
hearing on the DMCA (transcript page 262):

1  MR. MARKS:  Another reason why we need
2  regional coding, why we do regional coding is that
3  the law in various territories is different with
4  regard to censorship requirements.  So we cannot
5  simply distribute the same work throughout the world
6  in the same version.  Local laws impose censorship
7  regulations on us that require us to both exhibit
8  and distribute versions of the films that comply
9  with those censorship requirements.

The DMCA makes violations of the censorship laws of every 
dictatorship in the world enforceable against US Citizens in US 
Courts. This violates the 'first sale' doctrine and is an outrage in 
a country that professes to promote freedom throughout the world."


Arnold Reinhold




Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-06 Thread Arnold G. Reinhold

At 4:38 PM -0700 9/5/2000, David Honig wrote:
At 05:33 PM 9/3/00 -0400, Dan Geer wrote:

   How do they exchange public keys?  Via email I'll bet.


 Note that it is trivial(*) to construct a self-decrypting
 archive and mail it in the form of an attachment.  The
recipient will merely have to know the passphrase.  If

If you have a secure channel to exchange a passphrase in,
you have no need for PK.


I don't see any need for self-decrypting archives or passphrases. 
The public key can be sent un-encrypted.  All you need is a trusted, 
not secure, channel to send the key fingerprint. This channel can 
have very low bandwidth and need not be electronic.

Without key fingerprint verification, the primary attack against an 
open exchange of public keys is the Man in the Middle. Remember the 
burden on the Man in the Middle attacker against Bob:

1. The MITM must intercept every key exchange message that Bob sends 
or receives and then every message of any sort that Bob sends or 
receives thereafter.

2. The MITM must be prepared to detect attempts to verify key 
fingerprints in any message Bob sends or receives. These can involve 
foreign languages, anagrams, subtle phrasing, steganography, etc. In 
general this means that all messages must be screened by a well 
trained human, not automatically.

3. If Bob ever discovers he is being attacked, he can use the MITM to 
feed false information to his adversary.

4. If the attacker ever decides to stop,  Bob will immediately be 
alerted that something was wrong.

I think it is much cheaper and less risky to get one of the parties' 
private keys by planting a worm program or bugging their keyboard.


At 7:22 PM -0700 9/5/2000, Ed Gerck wrote:

PGP is based on an “introducer-model” which depends on
the integrity of a chain of authenticators, the users
themselves. The users and their keys are referred from one
user to the other, as in a friendship circle, forming an
authentication ring, modeled as a list or “web-of-trust”.
The web-of-trust model has some problems, to wit:

I would add one more problem with the web-of-trust model: the classic 
p**n reliability equation. If there is a 90% chance that any given 
introducer is reliable, then there is only a 34% chance that a chain 
of 10 introducers is reliable.  Would you give even a 90% trust 
rating to a bunch of strangers?  To really work, the web-of-trust 
requires multiple, independent paths between any two individuals so 
you can take the "or" of several chains. That level of density is not 
likely to happen with individuals.
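
The arithmetic, for concreteness:

    # Probability that a chain of n introducers is entirely reliable,
    # and the "or" of m independent chains between the same parties:
    p, n, m = 0.90, 10, 3
    chain = p ** n                    # 0.9**10 ~= 0.35
    any_of = 1 - (1 - chain) ** m     # ~0.72 with three chains
    print(round(chain, 2), round(any_of, 2))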

On the other hand, PGP does not depend on the web-of-trust model 
and I doubt very many people try to use it.  I suspect most users 
find other ways to exchange keys with their friends.  As Paul Crowley 
points out, what exactly does it mean to have trust in a stranger's 
public key?


Arnold Reinhold




Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-05 Thread Arnold G. Reinhold

At 3:48 PM -0700 9/1/2000, David Honig wrote:
At 09:34 AM 8/30/00 -0700, Ed Gerck wrote:

BTW, many lawyers like to use PGP and it is a good usage niche.  Here, in the
North Bay Area of SF, PGP is not uncommon in such small-group business users.

How do they exchange public keys?  Via email I'll bet.


So what if they do? A Man in the Middle attack is difficult to mount 
and expensive to maintain. It is also easy to detect if the parties 
ever use out-of-band means to verify keys. I would judge the risk of 
a MITM attack as much lower than the risk of keys being stolen from 
the lawyers' computers.

I think one reason that the web of trust has not caught on is that 
there is not much need in the real world for what it offers: the 
ability for strangers to trust each other's keys.  The one exception 
is in dealings with commercial organizations and the certificate 
authorities and SSL seem to handle that very well, at least in one 
direction. Individuals who already know each other have many ways of 
exchanging and verifying keys without resort to the web of trust.

That said, I do think web of trust is an important concept and one 
that could and should be strengthened. For example, I have managed to 
sneak my key fingerprint into my books (in the section where I 
explain public key cryptography) but I think authors who wish should 
be allowed and encouraged to do so in a more straightforward way, 
perhaps on their book's copyright page.  If only 10%, say, of 
computer authors did this, it would build a large pool of people 
whose keys would be very easy to verify. I'd also encourage PGP users 
to post their key fingerprint in a publicly accessible place, perhaps 
in a window near their front door or place of business.

Finally, I'd like to see large compilations of key fingerprints 
published on the web on, say, a quarterly basis. A master fingerprint 
for these files could then be widely distributed, both on the 
Internet and using other means such as billboards, display boards in 
university and public libraries, even blinked out in Morse code from 
a window in a tall building. (I call this the billboard defense.)

An MITM attack requires building an electronic balloon around its 
victim. A mere pin-prick, like the billboard defense, is all that is 
needed to burst that balloon.

Arnold Reinhold




Re: Tipster voluntary payment protocol

2000-08-28 Thread Arnold G. Reinhold

At 11:21 AM -0400 8/26/2000, Jeff Kandt wrote:
On or about 11:52 AM -0400 8/24/00, Arnold G. Reinhold wrote:
The design goals:  http://tipster.weblogs.com/designgoals
The crypto protocol:  http://tipster.weblogs.com/tipsterblock/

Both of these are open to debate.


First let me say something positive. I like your design goals. They 
are reasonable and clear.  I have a couple of quibbles, but they 
are minor. On the other hand, I must say that I do not see how your 
crypto protocol is justified by your design goals.  I'd like to see 
a justification for your protocol based on the design goals.

What would that look like?  A sentence or two after each goal which 
says "Tipster fulfills this goal by..."?  I was thinking of doing 
that.

I'd like to see a sentence or three after each element in the 
*protocol* describing which goal they are addressing, what type of 
attack they are intended to prevent and how they prevent it.   A 
separate threat model would also be helpful.

...



o The requirement of certificate cancellation seems to imply a 
central server. That violates goal 7.

Yes, goals 6 and 7 are in conflict here, I think. I'm not terribly 
happy with relying on a central registry, but I couldn't think of 
another way for an artist to cut off a server, given that the 
provider's URL may still be stamped on files which were created long 
ago.

I judged that since revocation certificates can't be created by 
anyone who doesn't possess the private key, the worst trouble a 
malicious central registry could cause would be to _fail_ to 
propagate a revocation cert.  Since this should be easy for the 
artist to discover, I thought this was a low risk.

As a practical matter there is a high likelihood that revocations 
simply won't happen. Since your goal is to prevent artists from 
being cheated, I would find this unacceptable. Again, I think there 
are better ways.



o Why does every server have to sign each piece of content? Why do 
you even care about the hash of the content? That seems to violate 
goal 2.

My goal #3, "difficult for thief/attacker to steal tips" should have 
added the word "successfully."

While it's impossible to prevent someone from attaching their 
contact info to someone else's file, my goal with Tipster's 
signatures was to make such naughtiness easy to discover and 
prosecute.

If the artist discovers one of his files posted somewhere with 
someone else's contact info on it, the signature makes it difficult 
for the persons who run the malicious server to deny that they were 
active participants.  If the signature from the thief's server 
matches the key used to sign the pirated file, that's a pretty 
effective smoking gun. Not only did he sign the file, but he's 
running a server which uses that same key to collect payments.

I expect that phony servers would be set up in jurisdictions where 
they will be difficult (and expensive) to pursue.  They may also have 
short lives -- take the money and run. Then start a new server, seed 
phony content and do it again.  In any case, if the fan knows the 
artist's sig then a phony server cannot provide the necessary cert to 
collect money.


And here's where we get into the trust issue.  Despite goal 7, 
decentralization, I'm afraid lately I'm leaning towards a 
certificate authority model, maybe similar to SSL.  Given the above 
threat model, all that is really needed is for the key to be signed 
by someone who has checked the artist's credentials as a legal 
entity, making sure they are someone who can be tracked down in 
meatspace by the cops and lawyers if they abuse their cert by using 
it to steal others' tips.

Artist and server certs have different needs. All artists require is 
a binding to the band name. Servers need the tie to meatspace. You 
can still avoid the centralization problem by allowing multiple CAs, 
and other sources of trust for artist sigs.


o Goal 4 suggests that only information that will never change 
should be included with content. Servers will come and go. All you 
really need is the artist's URL.  Why add more?

Yes, an artist could maintain a single, stable URL.  But most 
artists aren't technically savvy enough to administer their own 
server, and many are too low-budget to afford their own server, 
domain and Verisign/Thawte certificate.  So they're going to have to 
delegate to others, and this is where I really want to be careful.

Servers and domain names do not have to be expensive.  There is a lot 
of competition in the hosting business. Domain names cost as little 
as $8 these days, I'm told.

Cooperatives may be an alternative model. There is a whole legal 
structure in the US that gives them a lot of protection. It might be 
a worthwhile project to develop a musicians' cooperative web site 
hosting software package, along with a suitable draft legal 
agreement. Such coops could also handle payment receipts, 
eliminating any need for payment servers.

There is a

Re: PGP ADK Bug Fix

2000-08-27 Thread Arnold G. Reinhold

How hard would it be to filter the public key servers for unsigned 
ADKs and either notify the keyowner or just remove the unsigned ADKs? 
The cert containing the unsigned ADK could be moved to a separate key 
server, equipped with suitable warnings, so the forensic record would 
be preserved.
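
A sketch of the filtering pass over a toy in-memory keyring (the 
cert structure here is invented for illustration; a real tool would 
parse the actual RFC 2440 packets):

    certs = [
        {"owner": "alice", "adks": [{"signed": True}]},
        {"owner": "bob",   "adks": [{"signed": False}]},
    ]

    keep, quarantine = [], []
    for cert in certs:
        if any(not adk["signed"] for adk in cert["adks"]):
            quarantine.append(cert)   # move aside, preserving forensics
        else:
            keep.append(cert)
    print(len(keep), "kept,", len(quarantine), "quarantined")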

Arnold Reinhold




Re: Tipster voluntary payment protocol

2000-08-24 Thread Arnold G. Reinhold

At 11:50 PM -0400 8/23/2000, Jeff Kandt wrote:
On or about 12:49 PM -0400 8/23/00, Arnold G. Reinhold wrote:
Certificate revocation is one of the thorniest issues in public key 
cryptography. Maybe you can solve it in this narrow context, but I 
would avoid it if there is another way and I believe there is.

I agree, and as I really want Tipster to be as decentralized as 
possible, I wish there was a better way.  Someone suggested putting 
expiration dates on the keys, but that doesn't work because we want 
the signatures on the file to remain valid for a long time -- years 
or decades maybe, or as long as a file is likely to be traded around 
between fans.

The phrase "why not put in some crypto to give the fan some feeling 
of security" really gets my fur up.  There is no reason not to 
design a system that really works.  I support your overall goal, 
but you will severely damage your credibility and the credibility 
of voluntary payment models in general by abusing crypto in this 
way.

I'm sorry if I failed to take the proper reverential tone when 
referring to crypto; please be assured that I take the crypto behind 
Tipster very seriously, even if I'm not a professional at it.  I'm 
not sure what makes you say I'm "abusing" crypto.

My point was that one _could_ simply place an unsigned URL on the 
file, but both the fan and the artist would probably prefer if we 
could make it at least somewhat resistant to tip theft, and for that 
we can use the toolset provided by cryptography.

Anyway, and more importantly, when you say "There is no reason not 
to design a system that really works," are you saying that 
Tipster won't work?

Thanks for the invitation. I think I've said my piece on the 
philosophy. If you want a critique of your cryptographic design 
(and are prepared to listen) I prefer a forum where other 
cryptographers are present.

Yes please!  That's why I posted it here.

The design goals:  http://tipster.weblogs.com/designgoals
The crypto protocol:  http://tipster.weblogs.com/tipsterblock/

Both of these are open to debate.


First let me say something positive. I like your design goals. They 
are reasonable and clear.  I have a couple of quibbles, but they are 
minor. On the other hand, I must say that I do not see how your 
crypto protocol is justified by your design goals.  I'd like to see a 
justification for your protocol based on the design goals.

Will tipster as specified work? I don't think so, at least not well, 
based on your design goals:

o Having the artist sign the block does not prevent theft (Goal 3). 
If the fan does not know (i.e. have a reason to trust) the artist's 
signature, the thief can use any sig. If the fan does know the 
artist's signature, his client can ask the server for a certificate 
signed by the artist.  Either way, the artist's signature in the 
content is pointless (unless you are trying to snow the fans).

o Goals 1 and 7 and maybe 6 (not to mention common sense) would seem 
to require that new servers can be added. Your protocol does not 
allow this for content that is already published.

o The requirement of certificate cancellation seems to imply a 
central server. That violates goal 7.

o Why does every server have to sign each piece of content? Why do 
you even care about the hash of the content? That seems to violate 
goal 2.

o Goal 4 suggests that only information that will never change should 
be included with content. Servers will come and go. All you really 
need is the artist's URL.  Why add more?

I think one can design a much simpler system that meets your design 
goals. My suggestion would be to just have the artist's URL in the 
content and maybe a standard way for identifying the title of the 
work. If the artist obtains a URL that matches their group name 
exactly (www.moronenvy.com), that in itself could provide enough 
trust for transactions on the order of a dollar. The fan could check 
the artist's signature in other ways as well: from a commercial CA, 
from an artistic key server, from a key fingerprint printed on a 
concert program, from signed lists of artist keys circulated by self 
appointed notaries on music lists, etc.

The fan's client could download a list of acceptable servers and the 
artist's signature from the artist's URL. Each server would get a 
certificate from each artist when the artist agrees to let the server 
collect for them. This certificate would be signed by the artist and 
could have an expiration date. Artists would then have two ways to 
revoke a server's authorization: remove the server from the list of 
acceptable servers on the artist's web site or refuse to renew the 
server's authorization. No central revocation server or CRL is 
required. Adding a new server would simply require signing a cert and 
listing the new server on the artist web site.
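
In code, the client-side check might look like this (the data layout 
and check_sig() are invented stand-ins; a real client would fetch the 
list from the artist's URL and verify a PGP signature against the 
artist's known key):

    import time

    check_sig = lambda artist_key, listing: True   # stand-in only

    def trusted_servers(listing, artist_key, now=None):
        if not check_sig(artist_key, listing):
            raise ValueError("bad artist signature on server list")
        now = time.time() if now is None else now
        # A lapsed cert drops the server; no central CRL is needed.
        return [s["url"] for s in listing["servers"] if s["expires"] > now]

    listing = {"servers": [
        {"url": "https://tips.example.net/", "expires": 993945600},
        {"url": "https://pay.example.org/",  "expires": 946684800},
    ]}
    print(trusted_servers(listing, "<artist key>", now=978307200))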

There are details to be worked out of course, but I believe this would 
be a lot less complex and more effective than wh

Re: Tipster voluntary payment protocol

2000-08-23 Thread Arnold G. Reinhold

At 10:59 PM -0400 8/20/2000, Jeff Kandt wrote:
...
Tipster allows the artist to revoke any given key with a revocation 
certificate.  By allowing the artist to encode multiple 
URL/signature pairs onto the file, they can set up multiple, 
redundant revenue streams, and you encourage competition among 
service providers.  The ability to revoke individual server keys 
means that the artist can cut off any service provider for any 
reason without interrupting the revenue stream.

Of course, revocation certs will have to be kept in a central 
location, but that can be arranged.

Certificate revocation is one of the thorniest issues in public key 
cryptography. Maybe you can solve it in this narrow context, but I 
would avoid it if there is another way and I believe there is.


Under your scheme, each user will need a payment client or an MP3 
player that includes a payment feature. It would make more sense to 
have just the artist's URL included with the content and create a 
protocol to let the payment client download a list of servers from 
the artist's site.

If you're going to include a URL with the content, you need 
something which will parse the file and read that URL.  And if 
you're writing new code anyway, why not put in some crypto to give 
the fan some feeling of security (that they're paying the right 
person).  As a bonus we end up empowering the musicians to an 
unprecedented degree.

The phrase "why not put in some crypto to give the fan some feeling 
of security" really gets my fur up.  There is no reason not to design 
a system that really works.  I support your overall goal, but you 
will severely damage your credibility and the credibility of 
voluntary payment models in general by abusing crypto in this way.

...
The recording industry is not that stupid. They can see the threat 
almost as clearly as you can. Napster woke them up, and they have 
plenty of lawyers.  Expect any voluntary payment system to be sued.

Please.  On what grounds, counselor?

Get some lawyers on your team and ask them to look at what you are 
doing from the recording industry's perspective. Also ask what a 
defense will cost if you are sued.


(While I enjoy arguing these philosophical and economic points, 
these lists (esp. [EMAIL PROTECTED]) probably aren't the best 
place for it.  I invite you, and anyone else who's interested in 
these issues, to http://tipster.weblogs.com where we have a 
discussion group intended for just this sort of debate.)


Thanks for the invitation. I think I've said my piece on the 
philosophy. If you want a critique of your cryptographic design (and 
are prepared to listen) I prefer a forum where other cryptographers 
are present.

Arnold Reinhold




Re: Tipster voluntary payment protocol

2000-08-18 Thread Arnold G. Reinhold

Jeff,

I think a voluntary payment system is a fine idea, but I am not sure 
that your proposal address the right issues. If I understand what you 
are proposing correctly, your scheme allows a CD buyer to verify that 
a particular payment server is authorized by the recording artist to 
collect payments in their behalf. It does this by attaching server an 
artist URLs and sigs to the downloadable content.

First, why bother attaching all that info to the content? One can 
simply set up the servers and let them present signed credentials 
from the artists.  Content is certainly one way to publicize the 
servers, but there are many other ways.  Why depend on the content 
uploaders to do this?

Second, it would seem you require the artist's cooperation. Some may 
not want to cooperate. Maybe that's OK: they don't get paid. But 
others --perhaps most-- could be barred from cooperating by their 
record companies. Their contracts may allow the record companies to 
control all uses of their name and may even give them access to the 
voluntary payments (if the contracts don't, they soon will). The 
record companies may even sue the servers claiming they are 
interfering with the record companies contractual agreement with the 
artists.

A better approach might be to set up one or more servers that 
collect money as a way of voting for people's favorite artist. The 
funds collected would be placed in one of several audited escrow 
accounts: in the artist's name, if they give permission, in an 
account dedicated to a charity that the artist designates, or, if 
neither is available, one of several music-related charities (pension 
funds, libraries, museums, etc.) that the donor can select.  A small 
portion, say 5-10%, would go to pay for the server expenses.

A user could prepay money -- say $10 at a time, into an account to be 
disbursed in smaller increments to artists.  Individual payments 
would be charged a higher rate to cover expenses.  Each donor would 
get a statement at the end of the year showing what portion of their 
donations went to IRS approved charities for tax purposes.

The recording industry can be expected to try to shut down any 
voluntary payment system, so careful legal design is more of an issue 
IMHO than cryptographic protocols. A reputable bank as escrow holder 
and CPA firm should provide enough trust.

If a system like this takes off and a lot of money is collected in 
the artists' names, then future artists might bypass the recording 
companies altogether or refuse to sign contracts that bar them from 
accessing the voluntary system.

Arnold Reinhold

At 8:33 AM -0400 8/17/2000, Jeff Kandt wrote:
"Tipster" is the name I'm using for the voluntary payment scheme I 
posted to the coderpunks and cypherpunks lists (among others) a few 
weeks ago under the title "Kill the RIAA: a protocol."

http://www.inet-one.com/cypherpunks/dir.2000.07.24-2000.07.30/msg00387.html

Since that post, I've set up a weblog to track the development of 
the protocol and related voluntary payment issues, and just tonight 
I finished the first draft of the cryptographic protocol which 
enables Tipster's authenticated connection mechanism.

I would appreciate feedback.

http://tipster.weblogs.com

Thanks in advance.

-Jeff
--
--
|Jeff Kandt |  "When cryptography is outlawed, bayl bhgynjf  |
|[EMAIL PROTECTED] |   jvyy unir cevinpl!"  -Brad Templeton of ClariNet |
|[PGP Pub key: http://pgp.ai.mit.edu/pks/lookup?op=get&search=0x6CE51904 |
|  or send a message with the subject "send pgp key"]|
--





Re: Tipster voluntary payment protocol

2000-08-18 Thread Arnold G. Reinhold

At 8:28 PM -0400 8/17/2000, Jeff Kandt wrote:
On or about 12:57 PM -0400 8/17/00, Arnold G. Reinhold wrote:
I think a voluntary payment system is a fine idea, but I am not 
sure that your proposal address the right issues. If I understand 
what you are proposing correctly, your scheme allows a CD buyer to 
verify that a particular payment server is authorized by the 
recording artist to collect payments on their behalf. It does this 
by attaching server and artist URLs and sigs to the downloadable 
content.

Correct so far, except for the "CD buyer" part; this is for people 
who download their music from the net, even via peer-to-peer 
mechanisms like Napster.

Sorry. That was a slip on my part.


First, why bother attaching all that info to the content? One can 
simply set up the servers and let them present signed credentials 
from the artists.

The reason for attaching the info to the file is that it makes 
it a no-brainer to pay for a song.  Just right-click on the file in 
the Windows Explorer/Finder and choose "Tip Artist". Or alternately, 
my MP3 player software might support it directly so that I can pay 
based on who I'm actually listening to most.

One of my primary goals is to make this as easy as possible for the 
consumer to send a tip, since the system only works if people are 
willing to do it on a regular basis.

I agree that making it easy is essential.  But I still do not think 
attaching all the info to the content is needed to make things easy. 
First of all, there is no need to have the servers' keys attached. At 
most you need the artist's public key or key fingerprint.  When the 
client software contacts the server, it can get a copy of the 
server's key signed by the artist. That lets artists add servers 
after the content has been posted. Each artist's signature on the 
server key could also have an expiration date to allow artists to 
drop a server, say for non payment. You can't do that if the server 
keys are in the content.

Under your scheme, each user will need a payment client or an MP3 
player that includes a payment feature. It would make more sense to 
have just the artist's URL included with the content and create a 
protocol to let the payment client download a list of servers from 
the artist's site.  That might not require more than agreeing on a 
file naming convention and file format (e.g. 
www.myhotnewband.com/PaymentServerList.asc, which would contain a 
signed list of URLs).
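
The layout of such a file is a detail, but for illustration it could 
be as simple as a clearsigned list (this format is invented here, not 
part of any proposal):

    -----BEGIN PGP SIGNED MESSAGE-----

    # PaymentServerList.asc
    server: https://tips.example.net/   expires: 2001-06-30
    server: https://pay.example.org/    expires: 2001-03-31

    -----BEGIN PGP SIGNATURE-----
    [signature by the artist's key]
    -----END PGP SIGNATURE-----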


Content is certainly one way to publicize the servers, but there 
are many other ways.  Why depend on the content uploaders to do 
this?

It would be the content encoders. Once the payment info is attached 
to the file, it will be there no matter how many times it gets 
swapped around.  Given a voluntary model, there's no motivation for 
anyone to strip it.

People ripping their own MP3s from CDs is, I think, a temporary 
phenomenon which will go away as soon as everyone realizes what an 
inefficient way of moving bits they are.

It won't be long before music will come straight from the artist in 
a compressed, net-friendly form.  If it's the artists creating the 
file, then they might as well stamp their contact info on it 
before releasing it to the world.

My disagreement here is over the best way to effect change. There is 
significant inertia in the recording industry. New artists still 
dream of signing a record contract. Change is coming and I agree that 
an effective voluntary payment mechanism could speed change, but it 
is a form of circular reasoning to make that change a condition for 
introducing the payment system.  The likelihood of a new payment 
model succeeding must be judged on things as they are now, not as 
they will be once the payment system is in place.



Second, it would seem you require the artist's cooperation. Some 
may not want to cooperate. Maybe that's OK: they don't get paid. 
But others --perhaps most-- could be barred from cooperating by 
their record companies. Their contracts may allow the record 
companies to control all uses of their name and may even give them 
access to the voluntary payments (if the contracts don't, they soon 
will.). The record companies may even sue the servers claiming they 
are interfering with the record companies contractual agreement 
with the artists.

I address exactly this issue here:
http://tipster.weblogs.com/discuss/msgReader$31

In the above link you say: "Its a good bet that it will be the 
independent (aka small) bands which first adopt Tipster (or whatever 
the inevitable voluntary protocol turns out to be,  even if it's not 
Tipster). The ones with no existing recording contract to slow them 
down will be quickest to move to the new model. Whatever success they 
have will drive the rest of the industry ..."

Depending on new artists, as you propose, is a very slow and risky 
way to introduce a new recorded music payment model. Christine Lavin 
once lamented "you can

Re: RSA expiry commemorative version of PGP?

2000-08-04 Thread Arnold G. Reinhold

Another reason for PGP 2.x compatibility is that there are a lot of 
old computers out there that will not run more modern versions. Many 
of these machines find their way into 3rd-world countries and NGOs 
where there is a life-and-death need for security.

Also there is an argument that these old machines are significantly 
more secure than new equipment. The real threat to PGP security is 
clandestine software that captures and leaks your secret key. 
Bloatware (30-50 million lines of code in Windows 2000) has made any 
kind of independent OS security checking nearly impossible.  BIOSs 
and CPU firmware have also grown enormously and offer room for all 
sorts of mischief. An old 68000 Mac or 8086 PC with no hard drive is 
a lot more trustworthy in my opinion, and can make a very effective 
crypto box.

Arnold Reinhold


At 3:58 PM -0400 8/3/2000, Derek Atkins wrote:
The problem is not necessarily in getting users of PGP 2.x to upgrade.
That will happen on its own.  The problem is that users of PGP 2.x
have old keys and, worse, old DATA that is encrypted and signed in the
PGP 2.x formats using the PGP 2.x algorithms.

The point is not to be able to create new messages that older
implementation can read (although I certainly wouldn't complain if
that actually happened).  Rather, the point is to be able to access
all that old, encrypted data.  I still use PGP 2.6 because I have
years worth of data encrypted and signed using PGP 2.6 formats, and I
don't want to lose the information.  Some of the information is signed
by OTHER people, so just decrypting and re-encrypting isn't
sufficient.

-derek

Frank Tobin [EMAIL PROTECTED] writes:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Adam Back, at 12:01 -0400 on Thu, 3 Aug 2000, wrote:

  I beg to differ.  The fastest way to get people to upgrade is if the
  new version works with the old version.  There are still many pgp2.x
  users who don't upgrade because they then lose the ability to
  communicate with other 2.x users.

  Your proposal just perpetuates the problem.

 My proposal is realistic in the face that RFC 2440 is the standard to
 follow.  One problem that people face today is that they still only think
 there are 3 real classes of PGP implementations out there; PGP 2.x, PGP
 5.x and above, and GnuPG.  However, as more and more implementations
arise, the need for RFC 1991 users to abandon their implementations will
 become more obvious.

 People also think that the only difference between 2.x and OpenPGP
 implementations is the algorithms used.  Key formats have changed, the
 message format has changed, as have compression algorithms and a host
 of other things.  To think that maintaining compatibility is as simple
 as plugging in RSA and IDEA is ridiculous.

 Look at signed messages posted to BugTraq, or other widely-known lists. 
 The signatures are all made by OpenPGP-compatible implementations.  I would
 argue the pressure should be placed on 2.x users, not blaming PGP Inc. or
 GnuPG or the rest.

  The GNU ethic about not using IDEA, is counterproductive; that just
  means more people use IDEA, because they can't upgrade because it
  won't work if they do.

 (while this paragraph does not make much sense to me, I'll try to reply)
 Regardless, the GNU ethic is about creating and promoting Free(tm)
 software.  Period.  Any usage of IDEA would go contrary to it.

 - --
 Frank Tobin  http://www.uiuc.edu/~ftobin/

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.0.2 (FreeBSD)
 Comment: pgpenvelope 2.9.0 - http://pgpenvelope.sourceforge.net/
 
  iEYEARECAAYFAjmJnGwACgkQVv/RCiYMT6MwsACfbw27PLFXn8hJ/0WmoeMqpDlg
  be0AmgMLaZ7sCODr8DohZar0/qzJEwQt
  =91f9
  -END PGP SIGNATURE-
 
 

--
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/  PP-ASEL  N1NWH
   [EMAIL PROTECTED]PGP key available





Re: names to say in late september

2000-08-02 Thread Arnold G. Reinhold

 From http://www.yahoo.com  8/2/2000 1pm

WASHINGTON (Reuters) - A federal judge ordered an emergency hearing 
on Wednesday on a privacy rights group's request for the immediate 
release of details on Carnivore, the Federal Bureau of 
Investigation's e-mail surveillance tool.

The Electronic Privacy Information Center (EPIC), in its 
application to the judge, accused the FBI and the U.S. Justice 
Department of breaching the law by failing to act on a request for 
fast-track processing of a Freedom of Information Act query about the 
snooping system.

The FBI told Congress last month that Carnivore is designed to 
intercept data from the electronic mail of a criminal suspect by 
monitoring traffic at an Internet service provider. EPIC wants the 
FBI to disclose how it works.

U.S. District Judge James Robinson set the hearing for 3:30 p.m. 
EDT (1930 GMT) at the federal courthouse in Washington.

Attorney General Janet Reno said last week that technical 
specifications of the system would be disclosed to a ``group of 
experts.''





Re: names to say in late september

2000-07-31 Thread Arnold G. Reinhold

At 11:51 PM -0400 7/30/2000, dmolnar wrote:
On Sun, 30 Jul 2000, Arnold G. Reinhold wrote:

 By the way, I could not find the April 2000 RSA Data Security
 Bulletin on three primes at
 http://www.rsasecurity.com/rsalabs/bulletins/index.html  Is there a
 better link?

The link I had in mind was

ftp://ftp.rsasecurity.com/pub/pdfs/bulletn13.pdf

The discussion is an appendix to the discussion of RSA key lengths.
Note that it is actually more general than just 3 primes; various
combinations of number of primes and their length are discussed,
along with security against known factoring algorithms.

Thanks. I hadn't gotten that far. The bulletin is actually available 
in the link I cited, in both pdf and html forms.


Even if you may disagree with Silverman's assumptions about "safe"
security levels, this is a very good place to start when looking at
RSA with more than two factors. As for terminology, I would prefer to keep
the RSA name and just modify it (e.g. "polyprime RSA," or better
"3-384-prime RSA") to indicate that a modulus with more than two factors
is in use.

-David


It's not so much that I disagree with Silverman's assumptions about 
"safe" security levels, it's that they are just that: assumptions. 
Multiprime RSA is different from two-prime RSA, which is the version 
most researchers have studied over the years.  Silverman's numbers 
show that. Consumers have a right to know what they are getting, even 
in this arcane world of crypto (maybe especially in this world).

Suppose the 14 round version of Rijndael is adopted as AES and a few 
years down the road someone decides that he can make his encryption 
system a lot faster by using only 8 rounds. Would it be acceptable 
for him to call his cipher AES-8? I don't think so. On the other 
hand, "RSA" is RSA Security Inc.'s trademark and if they want to 
dilute it -- to whatever extent -- by allowing multiprime moudli, I 
suppose they can. That is why I think we need some nomenclature for 
each member of this class of algorithms that does not depend on RSA 
Security Inc.'s judgement, however informed it may be.

Arnold Reinhold









Re: names to say in late september

2000-07-30 Thread Arnold G. Reinhold

While the RSA/Security Dynamics second letter to the P1363 committee 
http://grouper.ieee.org/groups/1363/P1363/letters/SecurityDynamics2.jpg
pretty much alleviates my concerns about using the "RSA" name from a 
legal perspective, the two messages below demonstrate why I think an 
unambiguous generic name is also needed.

The RSA algorithm with a modulus that is the product of three primes 
is a different cryptographic algorithm from RSA with a modulus that 
is the product of two primes. In cryptography, a little bit different 
is like a little bit pregnant. In particular, the three prime 
approach appears more vulnerable to an advance in quadratic sieving 
than the two prime approach.  I am not saying three prime approach 
should never be used, just that its security must be evaluated 
separately.

That RSA Security Inc. is considering allowing the use of three prime 
moduli under the umbrella of the RSA name doesn't change the fact 
that this is a different design. I think it is important to have some 
nomenclature (triprime?) that reflects exactly which method is in 
use. If I had recommended to a client that they use a particular 
product based, in part, on the claim that they employed the RSA 
algorithm and it turned out later that they used a triprime modulus, 
I would be quite annoyed.

Also, someone sending a secret message using PKC depends on the 
security of the recipient's algorithm and keys.  With triprime 
moduli, there would not even be a change in algorithm to alert the 
sender. There needs to be some way to let people know what security 
they are getting. I am not aware of any efficient test to distinguish 
numbers with two factors from numbers with more than two. Does anyone 
know of one?

By the way, I could not find the April 2000 RSA Data Security 
Bulletin on three primes at 
http://www.rsasecurity.com/rsalabs/bulletins/index.html  Is there a 
better link?


Arnold Reinhold

At 1:06 PM -0700 7/28/2000, Steve Reid wrote:
On Thu, Jul 27, 2000 at 03:00:16PM -0400, Arnold G. Reinhold wrote:
 I like "Biprime Cryptography," or maybe "Biprime Public Key
 Cryptography," where a biprime is defined as the product of two prime
numbers.  It doesn't get close to any trademark and it is descriptive 
 of the algorithm.

Sounds like "composite modulus cryptography" which I think has been
mentioned on the crypto lists before.

"Biprime cryptography" is not really accurate, because RSA doesn't
require that the modulus be the product of two primes. I seem to
remember someone (I think it was Richard Schroeppel) a few years ago
advocating RSA with a three-prime modulus. The idea was that having
three primes instead of two would not weaken the algorithm in any
practical way, but it could make CRT operations even faster. It
wouldn't make the number field sieve any easier because the number of
primes doesn't affect NFS workfactor. It would make (I think) the
quadratic sieve more efficient, but at normal keysizes (1024 bits?) the
three primes would all be large enough that quadratic sieve would still
be less efficient than the number field sieve.
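
For concreteness, the CRT speedup being described works roughly as in
the Python sketch below. It is illustrative only: toy-sized primes, no
padding, and the helper name is my own rather than anything from PKCS #1.

    # Sketch: RSA decryption via the CRT with a three-prime modulus.
    # Toy parameters -- real moduli are 1024+ bits and real RSA needs
    # padding.  Requires Python 3.8+ for pow(x, -1, m).

    def crt_decrypt(c, primes, d):
        """Decrypt c = m^e mod n, n the product of `primes`, by working
        modulo each prime separately and recombining with the CRT."""
        n = 1
        for p in primes:
            n *= p
        m = 0
        for p in primes:
            n_p = n // p
            # Reduced exponent mod p-1 (Fermat); smaller exponents and
            # smaller moduli are where the speedup comes from.
            m_p = pow(c % p, d % (p - 1), p)
            # Standard CRT recombination.
            m += m_p * n_p * pow(n_p, -1, p)
        return m % n

    # Toy example: p, q, r and a valid (e, d) pair for n = p*q*r.
    p, q, r = 1009, 2003, 3019
    n = p * q * r
    phi = (p - 1) * (q - 1) * (r - 1)
    e = 65537
    d = pow(e, -1, phi)
    m = 123456
    c = pow(m, e, n)
    assert crt_decrypt(c, [p, q, r], d) == m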

At 6:26 PM -0400 7/28/2000, dmolnar added:
...
Note that Compaq is trying to push this under the name "Multiprime."
Bob Silverman has a nice analysis of the number of factors and size of
factors vs. security tradeoff in the April 2000 RSA Data Security
bulletin. It's only in the PDF version (or was), though.
PKCS #1 is also being amended to allow for multiple distinct primes.
...





Re: names to say in late september

2000-07-27 Thread Arnold G. Reinhold

At 7:05 AM -0700 7/27/2000, Rodney Thayer wrote:
What shall we call
that-public-key-algorithm-that-will-not-be-patent-protected in late
September?  we should not use a trademarked or copyrighted term, in my
opinion.
There was discussion of this a while ago, I think.  I don't recall what
was around.

I suggest "Rivest Public Key", or 'RPKey'.  It's not the prettiest
buzzword I've ever
suggested, but is there something better to call it?

I like "Biprime Cryptography," or maybe "Biprime Public Key 
Cryptography," where a biprime is defined as the product of two prime 
numbers.  It doesn't get close to any trademark and it is descriptive 
of the algorithm.

Arnold Reinhold




Re: Extracting Entropy?

2000-07-19 Thread Arnold G. Reinhold

At 12:31 AM +0100 7/18/2000, Paul Crowley wrote:
A variant on this question that we might see for lots of questions
soon: what's the best way to do this given only AES as a primitive?

Here's a simple way that uses all of the passphrase to control a
cryptographic PRNG that can be used to generate keys or whatever: use
the passphrase as the key to the block cipher, and run it in counter
mode.

If the passphrase is less than 256 bits (32 characters), this works
directly.  If it's less than 64 characters, use Triple-AES.  In
general, I assume that to use a key n times longer than the native key
length of the block cipher, you need to run it in 2n-1 mode; I'm
pretty sure this is so if the meet-in-the-middle attack is the only
one you have to worry about.  Append a 1 bit to the passphrase, then
fill to the next key boundary with zeroes as usual.

This takes O(mn) time, where n is the passphrase length and m is the
number of key bits you need.  I suspect any good solution will have
this property.  Still, you only have to keyschedule n times and things
should be pretty fast after that.

Any thoughts on the security or efficiency of this proposal?
--
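
For reference, the counter-mode construction proposed above amounts to
something like this sketch (Python, with the pycryptodome package as an
assumed AES implementation; only the single-key case is shown, and the
Triple-AES extension for longer passphrases is omitted):

    # Sketch of the proposal: use the padded passphrase as an AES key
    # and run the cipher in counter mode to generate key material.
    from Crypto.Cipher import AES

    def keymaterial_from_passphrase(passphrase: bytes, nbytes: int) -> bytes:
        if len(passphrase) >= 32:
            raise NotImplementedError("longer keys need the Triple-AES trick")
        # Append a 1 bit (0x80 at byte granularity), then zero-fill to
        # the 256-bit key boundary, as the proposal describes.
        key = (passphrase + b"\x80").ljust(32, b"\x00")
        # Counter mode with a zero starting counter; encrypting zeroes
        # yields the raw keystream, which serves as the derived key bits.
        cipher = AES.new(key, AES.MODE_CTR, nonce=b"")
        return cipher.encrypt(b"\x00" * nbytes)

    # e.g. derive a 16-byte key:
    # k = keymaterial_from_passphrase(b"correct horse battery staple", 16)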

I don't understand how a meet-in-the-middle attack applies to 
passphrase entropy extraction. Longer running time may be desirable 
from a key stretching perspective, but I don't see a security 
requirement.  Am I missing something?

Arnold Reinhold




Re: Electronic Signatures Yield Unpleasant Surprises

2000-07-04 Thread Arnold G. Reinhold

At 12:08 PM -0400 7/3/2000, William Allen Simpson wrote:
-BEGIN PGP SIGNED MESSAGE-

"Arnold G. Reinhold" wrote:
 Nothing new here. I often buy stuff on line and only get e-mail
 receipts. My credit card statements are a backup, I suppose. If
 anything the new law will strengthen our case with the IRS.

Possibly, but I also see language in the Act

   (iv) informing the consumer (I) how, after the consent, the
   consumer may, upon request, obtain a paper copy of an electronic
   record, and (II) whether any fee will be charged for such copy;

So, what happens when they "inform" you that your statement has a fee?
Saying, of course, that the new fee was authorized by Congress?

Will lack of a fee be a competitive pressure?  Remember how a few
years ago _every_ bank began adding statement fees?  Even some credit
unions began charging.  (I changed credit unions over this issue.)

I expect all ATMs to add a few words to the "Would you like a receipt?"
query, "for only 50 cents?"

It will be very hard for municipalities to outlaw such ATM charges,
as the Federal legislation explicitly supersedes state and local laws.


I'm not convinced that congressional language requiring that 
consumers be informed of any fee would be read by the courts as 
authorizing those fees and pre-empting local regulation. In fact 
section 101(b) of the new law specifically states:

"This title does not-- (1) limit, alter, or otherwise affect any 
requirement imposed by a statute, regulation, or rule of law relating 
to the rights and obligations of persons under such statute, 
regulation, or rule of law other than a requirement that contracts or 
other records be written, signed, or in nonelectronic form;..."

That said, unbundling of fees for hard copy statements and receipts 
may be a good idea. We always were paying for those statements, 
either in monthly account fees or in reduced interest on our 
deposits.  I am drowning in a sea of little pieces of paper and an 
all-electronic way of managing my financial information seems very 
attractive. Unbundling might move that along.

Arnold Reinhold




Re: random seed generation without user interaction?

2000-06-08 Thread Arnold G. Reinhold

At 8:52 PM -0400 6/7/2000, Don Davis wrote:
...

but, when SGI announced their lavarand patent
application in the press a few years ago, i
decided that it wasn't worth worrying about.
theirs is clearly a defensive patent, intended
only to make sure that noone can keep SGI from
using anything they build around the idea of
hashing analog inputs.

I am not a lawyer, but my understanding is that having a valid patent 
does not give you the right to use the invention disclosed 
therein; it merely gives you the right to stop someone else from using 
it. Parts of your invention may still infringe on someone else's 
earlier patent. If you are merely trying to establish prior invention 
in case someone else attempts to patent your idea, publication works 
about as well and is a lot cheaper. Of course, owning a bunch  of 
patents is very useful in horse trading with other companies that may 
come after you with theirs.

  [Wouldn't all the work done on things like hashing
 inputs in general to distil entropy, which was around
 for years before this patent, count? --Perry]

i'm sorry, but i don't agree;  back then, the
idea of "hashing various inputs" had not been
well-justified as providing true entropy per se,
afaik.  there was a "quasi-randomness" paper by
vazirani from around 1990, but that paper showed
only that biased i/o-derived bits could afford
a source of uniformly-distributed, pseudorandom
bits, whose prediction would cost more than
polynomial-time effort.

   - don davis, boston


Below is an excerpt from the Department Of Defense "Password 
Management Guideline," CSC-STD-002-85, dated 12 April 1985.  (This 
text is in FIPS-112 Appendix E as well.) It would seem to embody the 
idea of hashing chaotic inputs to seed a PRNG. Here the hash step and 
the PRNG employ the same algorithm, but I could live with that 
restriction. For example, AES (or one of the AES candidates) will 
serve perfectly well for both purposes.

Arnold Reinhold


=== Begin Quote===

A.3 Pseudo-Random Number Generator

   Using a random seed as input, the pseudo-random number generator that drives
a password generation algorithm should have the property that each bit in the
pseudo-random number that it generates is a complex function of all the bits
in the seed.  The Federal Data Encryption Standard (DES), as specified in FIPS
46, (9) is an example of a pseudo-random number generator with this property.
If DES is used, it is suggested that the 64-bit Output Feedback (OFB) mode be
used as specified in FIPS 81 (10).  In this case, the seed used as input could
consist of:

   * An initialization vector
   * A cryptographic key
   * Plaintext

   Factors that can be used as input to these parameters are:

 For the initialization vector:

   * System clock
   * System ID
   * User ID
   * Date and time

 For the cryptographic key:

   * System interrupt registers
   * System status registers
   * System counters

 The plain text can be an external randomly generated 64-bit value (8
characters input by the SSO). [System Security Officer]

   The resulting pseudo-random number that is output will be the 64 bits of
cipher text generated in the 64-bit OFB mode.  The password generation
algorithm can either format this pseudo-random number into a password or use
it as an index (or indices) into a table and use the contents from this table
to form a password or a passphrase.

=== End Quote===
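
A minimal sketch of the quoted construction, with pycryptodome's DES
standing in and hashed machine-state readings standing in for the
registers and counters the guideline lists (illustrative only; DES is
shown because the guideline specifies it, not as a recommendation):

    # Seed = (IV, key, plaintext); output = DES-OFB ciphertext, per
    # CSC-STD-002-85.  Assumes the pycryptodome package.
    import hashlib, os, time
    from Crypto.Cipher import DES

    def state_bytes(n, *sources):
        # Stand-in for "system registers/counters": hash whatever
        # machine-state readings are available down to n bytes.
        h = hashlib.sha256()
        for s in sources:
            h.update(repr(s).encode())
        return h.digest()[:n]

    iv = state_bytes(8, time.time(), os.getpid())    # clock, system/user IDs
    key = state_bytes(8, os.times(), os.urandom(8))  # counters, registers
    plaintext = os.urandom(8)   # stand-in for the SSO's external 64-bit value
    prn = DES.new(key, DES.MODE_OFB, iv=iv).encrypt(plaintext)
    # prn is the 64-bit pseudo-random number the guideline describes.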


Arnold Reinhold




Re: random seed generation without user interaction?

2000-06-07 Thread Arnold G. Reinhold

At 3:27 PM -0400 6/6/2000, Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], "Steven M. Bellovin" writes:
In message [EMAIL PROTECTED], Dennis 
Glatting writes:


 
 
There is an article (somewhere) on the net of digital cameras focused
on lava lamps. Photos are taken of the lava lamps and mixed into a
hash function to generate random data. I believe the author had some
algorithm for turning the lamps on and off, too.

See lavarand.sgi.com.


 I had thought it was patented, but a quick search of uspto.gov didn't
turn it up.

Following up on my own post...  My brain clearly wasn't in gear when I
did my previous search.  It's U.S. patent 5,732,138; see
http://patents.uspto.gov/cgi-bin/ifetch4?ENG+PATBIB-ALL+0+988124+0+6+31831+OF+1+1+1+PN%2f5%2c732%2c138


   --Steve Bellovin

The patent appears much broader than just focusing a camera on a Lava 
lamp. They claim digitizing the state of any chaotic system and then 
hashing it to seed a PRNG. The Lava lamp is given as a specific 
example (claim 3).

Arnold Reinhold





Re: random seed generation without user interaction?

2000-06-06 Thread Arnold G. Reinhold

At 3:15 AM -0500 6/6/2000, John Kelsey wrote:
-BEGIN PGP SIGNED MESSAGE-

At 07:08 PM 6/5/00 -0700, [EMAIL PROTECTED] wrote:
So I'm curious about what all methods do folks currently use (on NT
and unix)  to generate a random seed in the case where user
interaction (e.g. the ol'  mouse pointer waving or keyboard tapping
approaches) isn't a viable option? 

If the machine has a microphone, you can get some unpredictable bits
from internal noise in the circuit, and also from real noise in the
room the computer's in.  There's probably a tiny bit of entropy
available even in the worst case imaginable from network packet
arrival times, if you can get them.

I have a page listing inexpensive noise sources that can be use with 
a computer's sound input port:
http://world.std.com/~reinhold/truenoise.html
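
A first cut at harvesting such noise might look like this sketch (it
assumes the Python sounddevice package and an actual noise source on
the input port; estimating how much true entropy the samples carry is
the hard part and is not attempted here):

    # Sketch: read sound-port noise and distill it with a hash.
    import hashlib
    import sounddevice as sd

    def noise_seed(seconds: float = 1.0, fs: int = 44100) -> bytes:
        samples = sd.rec(int(seconds * fs), samplerate=fs,
                         channels=1, dtype="int16")
        sd.wait()  # block until the recording completes
        # Hash all the raw samples; even if each sample carries only a
        # fraction of a bit of entropy, a second of audio is plenty.
        return hashlib.sha256(samples.tobytes()).digest()

    # seed = noise_seed()   # 32 bytes to seed a CSPRNG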

Arnold Reinhold




Re: Electronic elections.

2000-05-30 Thread Arnold G. Reinhold

I'm not sure I care for the elitist tone in Dan's posting either, but 
he raises some points that deserve serious consideration. Sure we 
have mail-in absentee ballots now, but the number of people who 
choose to vote that way is small and an absentee ballot split that 
varied markedly from the regular vote would certainly stand out.

Today's headlines include concerns over the fairness of Peru's 
election, just ended. Elections in the US have been free from major 
ballot tampering for so long that most of us have forgotten the 
reasons for the complex voting procedures we use. These were hard 
fought reforms when they were introduced. We should look at Internet 
voting from every angle, including historical lessons, before 
employing it to select our governmental leaders.

Of course Internet voting has many applications besides political 
elections. And I don't think anyone would seriously consider its use 
in political elections until access to the Internet is nearly 
universal.  We have time. Let's err on the side of caution.

Arnold Reinhold



At 6:39 AM -0700 5/29/2000, David Honig wrote:
At 07:52 AM 5/29/00 -0400, Dan Geer wrote:
There is no doubt whatsoever that the sanctity of a vote once
cast can be absolutely preserved as it is moved from your house
to the counting house.  What cannot be done, now or ever, is to
ensure the sanctity of the voting booth anywhere but in a
physical and, yes, public location attended to by persons both
known to each other and drawn from those strata of society who
care enough to be present.

So I typically elect to vote by mail.  Is my vote worthless because of that?


There are no replacements for the
voting booth as a moment of privacy wrapped in inefficient but
proven isolation by unarguable witness, a place where we are
equal as in no other. 

'Sanctity'?  'Moment of privacy?'  Sorry, no sacred cows allowed
here, unless they're seeing eye cows, or nicely barbecued.

Move the dispatch of a vote to a remote
browser and $100 bills

So standing in line with the masses like some Russian waiting for
bread somehow immunizes against voter fraud?

Internet voting is anti-democracy and those who cannot bestir
themselves to be present upon that day and place which is never
a surprise to do that which is the single most precious gift of
all the blood of all the liberators can, in a word, shut up.

Yeah right...  real purty flame there, real Daughters of the American
Revolution material, blood of the liberators and all, but how about a real
argument?   Or is your retro dogma supposed to be lapped up
on the basis of your empty, inflammatory assertions?



Re: NSA back doors in encryption products

2000-05-28 Thread Arnold G. Reinhold

At 8:39 AM -0400 5/27/2000, Steven M. Bellovin wrote:
In message v04210109b5531fa89365@[24.218.56.92], "Arnold G. 
Reinhold" writes:

o There is the proposed legislation I cited earlier to protect these
methods from being revealed in court.  These are not aimed at news
reports (that would never get past the Supreme Court), but would 
allow backdoors to be used for routine prosecutions without fear of
revealing their existence.

That's tricky, too, since the Constitution provides the *defense* with
a guarantee of open trials.  At most, there are laws to prevent
"greymail", where the defense threatens to reveal something sensitive. 
In that case, the judge reviews its relevance to the case.  If it is
relevant -- and a back door used to gather evidence certainly would be
-- the prosecution can either agree to have it revealed or drop the 
case.

I'm not saying there aren't back doors that wouldn't fall into this
category; I am saying that such a law would have to be very narrowly
crafted to pass constitutional muster.

   --Steve Bellovin


My point in mentioning this legislation was not to suggest it was 
likely to become law or withstand constitutional scrutiny. I am 
saying that the mere fact that the administration proposed this 
legislation demonstrates that they expect a large number of cases 
which will rely on evidence decrypted using means that they do not 
wish to come to light.

Arnold Reinhold




Re: NSA back doors in encryption products

2000-05-26 Thread Arnold G. Reinhold

At 11:17 AM -0500 5/25/2000, Rick Smith wrote:

As usual with such discussions, lots of traffic hides substantial amounts
of agreement with touches of disagreement.

Agreed.  Let me summarize what I am trying to say.  Then maybe it is 
time to move on.

1. I think citizen access to strong cryptography is an important 
counter to a growing, seemingly unstoppable trend toward a 
surveillance society.

2. My central point was that commercial operating systems do not and 
will not protect the average user against a directed attack by a high 
resource attacker like NSA.

3. I am not suggesting that the NSA is out of control or exceeding 
its authority. If they do plant backdoors in commercial products, I 
believe they will have gotten the blessings of the executive branch and 
the intelligence committees of the Congress. I suspect the latter 
have been pressuring NSA to do more in this area.

4. I am not addressing the domestic/foreign jurisdiction issues in 
the US intelligence community. When I say NSA I am also encompassing 
the FBI the "Technical Advisory Center" and whomever else in the US 
government is in on this game.

5. Given the sorry state of Microsoft software security, it is 
entirely possible that NSA has not had to alter a single bit in any 
Microsoft product to accomplish its ends.  Or they may find firmware 
and processor chip designs a more lucrative target. My point is that 
commercial operating systems are a major target for them and they 
will do what they need to do to acquire means to attack them.

6. I am not suggesting that NSA has infiltrated covert agents into 
Microsoft. I am saying they could. It's more likely  they would just 
vet selected Microsoft employees (with Microsoft's knowledge) and 
that this would suffice for security.  The undercover programmer/spy 
you seem to find unbelievable probably does exist, but is working 
overseas.  The intelligence community can handle whatever level of 
training is needed to pull this off.

7. I agree that NSA has to worry that any backdoor it plants will be 
used against US government and industry. There is always a risk that 
your weapons will be used against you. NSA will try to minimize those 
risks, develop protections for mission critical government computers, 
and find ways to deploy backdoors selectively. In the end, they will 
weigh the risks against the likelihood that their stream of signals 
intelligence will dry up if they don't act.

8. Usually in discussions about what intelligence agencies might do, 
one is limited to citing what is possible and then saying "that's 
what I'd do if I were in charge." But in this case there is evidence 
of the US government's intentions:

o There have been many leaks indicating NSA's concerns about falling 
behind due to Internet technology.  (e.g. the Hersh New Yorker 
article about NSA's concerns over the impact of PCs and the 
Internet).  Leaks like these are often intended to prepare the public 
and congress for remedial proposals.

o The US government has not been shy about meeting with senior 
computer executives to discuss law enforcement's problems with 
encryption and announcing that they had received assurances of 
cooperation. This happened right around the time they announced 
liberalized crypto export rules.

o There is the proposed legislation I cited earlier to protect these 
methods from being revealed in court.  These are not aimed at news 
reports (that would never get past the Supreme Court), but would 
allow backdoors to be used for routine prosecutions without fear of 
revealing their existence.

o The Clinton administration is requesting a large budget for a new 
"Technical Assistance Center" as part of a counter terrorism act.


Arnold Reinhold




RE: Critics blast Windows 2000's quiet use of DES instead of 3DES

2000-05-19 Thread Arnold G. Reinhold

Someone made the comment in this thread (I can't seem to find it 
again) that a bug in MS security counts as a hole, not a 
backdoor. But a cooperative relationship between Microsoft and NSA 
(or any vendor and their local signals security agency) can be more 
subtle. What if Microsoft agreed not to fix that bug?  What if 
Microsoft gives NSA early access to source to look for bugs? The NSA 
may not need much more than an agreement that certain portions of, 
say, the RNG object code will never change (or only change 
infrequently, with lots of notice). That might be enough to ensure 
that NSA's viruses and Trojan horses can always find the right spot to 
insert a patch that weakens random number generation.

It may be time to question whether we should ever expect that mass 
market operating systems from commercial vendors will protect users 
against a targeted attack from a high resource operation such as the 
major signals intelligence agencies.  Users may have to rely on open 
source OS's and security tools that are lightweight, easy to audit 
and isolated from the OS. Perhaps the best we can expect from a 
commercial OS is enough protection to make it hard to scan data in 
transit for users who super encrypt with stronger tools.

Arnold Reinhold






Re: Pass phrases, Hushmail and Ziplip

2000-05-15 Thread Arnold G. Reinhold

At 2:56 PM -0400 5/12/2000, Peter Wayner wrote:
I think all crypto products rely on passphrases. Every wallet is 
locked with a passphrase. Every private key is locked away. Even the 
smart cards are usually sewn up with PINs. It's just a fact of life 
and it seems unfair to me to pick upon Hushmail.

-Peter

I'm not picking on Hushmail. Hushmail is a fairly good privacy 
product.  It should protect against the average office snoop or an 
employer that wants to monitor employee e-mail. In fact, I'd give 
their work a  95%. Unfortunately, 95% is not a passing grade in high 
security cryptography.  They have, however, opened their design to 
public critique and that is the only way I know to get close to 100%. 
So I'm just trying to help.

It's true that most encryption products rely on passphrases, however 
most do not rely on them to the same extent that Hushmail does. A 
well-designed smart card will only accept a limited number of PIN 
attempts before freezing up for some period of time. The primary 
security of PGP comes from keeping the private key secret; the 
passphrase is a secondary protection in case the encrypted private 
key is stolen. This is generally adequate to protect against random 
surveillance. Protecting a private key from a resourceful targeted 
attack is difficult, but it can be done, especially in the era of 
small laptops and  PDA's.

But Hushmail is different.  Your Hushmail private key is kept on a 
central server at Hushmail encrypted by your passphrase. If an 
attacker can figure out the passphrase they can simply log in to 
Hushmail and read your mail.  Even worse, the Hushmail central 
server stores a hash of your passphrase.  If an attacker can purloin 
a copy of the hash values, he can compare them to a pre-computed 
dictionary.

Many if not most Hushmail users will choose weak passphrases.  My 
survey of PGP passphrase usage 
http://world.std.com/~reinhold/passphrase.survey.asc found that 25% 
of PGP users chose passphrases of 14 characters or less. The median 
passphrase length was 21 characters. Hushmail users are likely to be 
less informed and motivated about the need for a strong passphrase 
than PGP users.

Suppose the attacker's dictionary yields up 40% of the passphrases. 
Each exchange of messages involves at least two different people, so 
the probability that at least one of them will have a cracked 
passphrase is 64% (1 - 0.6^2).  If the dictionary yields 60% of the 
passphrases, 84% of the traffic can be read.  Since many people quote the 
message they're responding to -- and even if they don't it's usually 
possible to follow a conversation by reading only one party's e-mail 
--  a majority of the traffic will be readable.  Remember the 
messages you send are protected by the other guy's passphrase.
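
The arithmetic, for anyone who wants to vary the assumptions (a
sketch; the two-correspondent model is of course a simplification):

    # Fraction of two-party conversations with at least one cracked
    # passphrase, assuming cracks are independent with probability p.
    for p in (0.40, 0.60):
        print(p, 1 - (1 - p) ** 2)   # -> 0.64 and 0.84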

I have no knowledge about the security procedures that Hushmail 
takes.  I'm sure they try very hard.  But I suspect that they are no 
match for the likes of the signals intelligence agency of any of the 
major powers: U.S., Russia, Britain, France, India, Israel, Japan, 
China, etc.  (I wonder if the intelligence operatives from various 
countries attempting to penetrate Hushmail know about each other and 
go out for beer every now and then.)

There are other ways to get hold of the hash value besides 
penetrating Hushmail's security.  Some users may log in using 40 bit 
browsers for example.  Quite a few users will select a passphrase 
that they already use on other accounts that are not secure.  Those 
are easy to get.

If you buy my analysis, Hushmail has built a system that concentrates 
all the e-mail from people who think they have something to hide in 
one place.  If an intelligence agency succeeds in in getting at 
Hushmail's files, the weak passphrases that  most users select will 
let them read much if not most of the mail stored there. That's a 
pretty good deal for the intelligence community.

Here are some things Hushmail could do to make things better:

o Advise against 40-bit browsers and put up an alert if a 
user attempts to log in on one
o Offer better passphrase advice: not one you already use, 
minimum length  14, offer to generate one for the user a la Diceware
o Add salt
o Use a key stretcher
o Report the last time a user logged in
o Develop an independent way for users to verify the Hushmail applets

Adding salt would at least break up the dictionary attack.  An 
intelligence agency could still attempt to crack the passphrases 
of individual targets one at a time, but mass surveillance would 
become much more expensive.  Getting people to select strong 
passphrases and to ensure that their correspondents also have 
selected strong passphrases would turn Hushmail into a fairly secure 
system instead of a trap for the unwary.  If they fear turning off 
the average user, they could offer an enhanced security package that 
enforced these rules.
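
To make the salt and key-stretcher suggestions concrete, the
server-side change is small. A minimal sketch using Python's standard
hashlib (the iteration count is an arbitrary illustration, not
Hushmail's design):

    # Replace a bare hash of the passphrase with a salted, stretched
    # one: salt kills the precomputed dictionary, iteration raises the
    # per-guess cost.
    import hashlib, hmac, os

    def enroll(passphrase: str):
        salt = os.urandom(16)            # unique per user
        check = hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                                    salt, 100_000)
        return salt, check               # store both on the server

    def verify(passphrase: str, salt: bytes, check: bytes) -> bool:
        attempt = hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                                      salt, 100_000)
        return hmac.compare_digest(attempt, check)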

I hope Hushmail heeds this advice and I wish 

Re: NYT reporter looking for advice re: encryption products

2000-05-12 Thread Arnold G. Reinhold

Here are my comments on Hushmail and ZipLip:

HUSHMAIL

Hushmail publishes their design and it seems to be generally well 
constructed. However it is extremely important for your readers to 
understand that the security of their HushMail account depends 
*entirely* on the strength of the passphrase they select. HushMail 
acknowledges this in their technical description: 
http://www.hushmail.com/tech_description.htm

"The user creates any passphrase he or she wishes. The strength of 
the system directly  correlates to how hard it would be to guess or 
brute force this passphrase. Users should be told clearly to create a 
strong passphrase."

However the advice that Hushmail actually gives users under "Choosing 
a Good Passphrase" is buried pretty far down in their help system and 
is not adequate in my opinion:

"The strength of the system is equivalent to the strength of your 
passphrase.  For
example,

"Mysistermary, Wasonce11"

would be a good example of a strong passphrase. The passphrase is an English
sentence, easy to remember; however, it includes both letters and numbers, thus
increasing the strength of the passphrase.

When choosing a passphrase, keep in mind that you will have to type 
it every time
you log into your HushMail address. Keep your passphrase in a safe 
place. If you
forget your passphrase, Team Hush cannot retrieve it for you! "

The average user who even bothers to read that text will still not 
have a clue as to how to create a strong passphrase.

Take a look at my Diceware page http://www.diceware.com which 
includes step by step advice on what users can do to create a strong 
passphrase -- one that they can be confident really is strong. It is 
a good idea for users to prepare their passphrase before they start 
the account creation process since it takes a while to do it right. 
Note that the standard password advice (6-8 characters with letters 
and numbers) is nowhere near good enough for use with Hushmail.
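
When physical dice are not at hand, the Diceware procedure is simple
to automate. A sketch (Python's secrets module standing in for dice;
the wordlist filename is a placeholder):

    # Pick Diceware words with a CSPRNG standing in for five dice
    # rolls per word.  Physical dice remove any doubt about the
    # generator; this is the software analogue.
    import secrets

    def load_wordlist(path="diceware.wordlist.asc"):
        words = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2 and parts[0].isdigit():
                    words[parts[0]] = parts[1]   # "11111 a", etc.
        return words

    def passphrase(nwords=6, words=None):
        words = words or load_wordlist()
        picks = []
        for _ in range(nwords):
            roll = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))
            picks.append(words[roll])
        return " ".join(picks)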

ZIPLIP

Unlike Hushmail, ZipLip does not make a technical description of 
their security approach available on their Web site (at least, I 
could not find one).  That is a red flag, in my opinion. They also 
rely on passwords, but give no advice about how to create strong 
ones. Indeed they seem to encourage the 6-8 character model which is 
totally inadequate in this kind of application.

Feel free to contact me if you need more info,

Arnold Reinhold



Hello,

I'm working on a story that mentions several encryption systems, and I've
heard that many companies often claim they have good products when in fact
they have the equivalent of snake oil. John Gilmore suggested that I check
in with the folks on this mailing list. I'd be interested to hear if any of
these companies/products have problems that I, and Circuits readers, should
be aware of. Here are the ones that I have looked at for this story:

Freedom/Zero Knowledge Systems
PGP
Anonymizer.com
Hushmail/HushCom
ZipLip.com
PrivacyX (I'm aware of the security problem publicized in November with the
Web browser system)

Thanks in advance for your time.

Best regards,
Lisa Guernsey








Lisa Guernsey
Reporter, Circuits
The New York Times
229 W. 43rd Street
New York, NY 10036
212-556-5905
[EMAIL PROTECTED]





Re: GPS integrity and proactive secure clock synchronization

2000-05-11 Thread Arnold G. Reinhold

At 12:43 PM +0300 5/11/2000, [EMAIL PROTECTED] wrote:
Thanks to all for the very interesting info. For people interested, here's
a summary of answers and ideas:

You left out my direction finding approach :(   I think it has merit. 
Electronically steerable antennas are quite practical at L band and 
they could also be used to null-out a single point jammer.

...

Some thoughts on research directions:
 ...
   Can one analyse a design which will involve communication between GPS
   receivers using local (wired or radio) communication which will provide
   `real` anti-spoofing (notice my criticism of the use only of encryption
   of the p-code for anti-spoofing)?

It seems to me that all we need is an independent, authenticated 
source for the GPS satellites' ephemerides. We would then know where 
the satellites are at any given time and we presumably know the exact 
location of our GPS receiver. Given that knowledge,  I don't think it 
is possible to generate spoofing signals that produce the correct 
location but the wrong time. (Even in the unlikely event that the 
exact satellite constellation repeated every few orbits, we can keep 
approximate time by other means well enough to avoid being fooled.)

I can think of three ways to get the ephemerides:

1. We could exchange received ephemerides with a number of other 
GPS-time users on a regular basis using PKC signed e-mail. This would 
delay detection of a problem until a little while after a spoof 
occurred, but that might suffice in many applications.

2. We could use long range predictions of the satellites' orbits. 
While these do not have full accuracy, it should be possible to 
perform an error analysis that sets an upper bound on how much our 
time could be distorted given the expected errors in the long term 
orbital projections.

3. The U.S. government or some other trusted source could be 
persuaded to broadcast GPS satellite ephemerides frequently via the 
Internet and/or special radio stations (e.g. WWV) along with public 
key signatures. Natural Resources Canada already does something like 
this on a daily basis (unsigned, I presume). See 
http://www.geod.nrcan.gc.ca/html-public/GSDproductsGuide/CACS/English/ephem.html
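
The verification step in option 3 is straightforward on the receiving
end, as in this minimal sketch (the Python cryptography package and
Ed25519 are stand-ins for whatever signature scheme the trusted source
would actually choose):

    # Verify a signed ephemeris broadcast before trusting it.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )
    from cryptography.exceptions import InvalidSignature

    def trusted_ephemerides(data: bytes, sig: bytes,
                            pubkey_bytes: bytes) -> bytes:
        pub = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
        try:
            pub.verify(sig, data)   # raises if tampered with
        except InvalidSignature:
            raise SystemExit("ephemeris broadcast failed verification")
        return data                 # safe to compare against GPS signals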

   Can we reverse the roles here... and use highly secure time services
   (thru wired sources) to detect tampering with GPS signals (also for
   location???)

I am not sure you even need independent time if you have enough GPS 
receivers in known locations and authenticated knowledge of the 
ephemerides.

It seems an interesting and challenging area.


Agreed. So OK, one more harebrained scheme: use a moving GPS 
receiver.  Run a long, opaque pneumatic tube on the roof of a 
building, say for several hundred feet. (Pneumatic tubes are still in 
wide use and available from many sources.) Place a GPS-equipped Visor 
in the pneumatic carrier and send it randomly to one of several 
stations. At each station, it would read out its position and the 
time via IR. Since an attacker would not know where the receiver was 
at any given moment, it would be impossible to construct a correct 
spoofing signal. (I had originally thought of model train technology. 
More fun, but I think it would require the ability to receive signals 
inside a building.)

Arnold Reinhold




Re: GPS integrity

2000-05-09 Thread Arnold G. Reinhold

Dorothy Denning wrote an interesting paper on authenticating location using
GPS signals... I think it's reachable from her home page as well as the
following citation:

D. E. Denning and P. F. MacDoran, "Location-Based Authentication: Grounding
Cyberspace for Better Security," Computer Fraud and Security, Feb. 1996

Ian :)

The article, at 
http://www.cs.georgetown.edu/~denning/infosec/Grounding.txt, 
describes a commercial product from International Series Research, 
Inc.  of Boulder, Colorado  called CyberLocator, for achieving
location-based authentication.  But it is short on details. 
Apparently a user to be authenticated sends a received GPS signal 
"signature" to the host which has its own GPS receiver and compares 
the signature with the GPS signal it received. The scheme took 
advantage of selective availability to some extent. I wonder if it 
being turned off has hurt them. The company has a white paper on 
CyberLocator: http://www.CyberLocator.com/WP_LBA.doc It is not clear 
if they have a shippable product yet.

Their scheme does not seem directly applicable to the problem of 
getting authenticated time from GPS since they assume a trusted host 
site. Also, if GPS had authentication features built into the 
unencrypted signals, I think they would have taken advantage of those 
features and mentioned them.

I can think of some non-cryptographic ways to authenticate GPS time. 
One way would be to use an electronically steerable antenna and track 
the satellites. A related approach might be to use two or more GPS 
receivers connected to directional antennas pointing in different 
directions. Given knowledge of the satellites orbits, it should be 
possible to predict the variations in received signal strength during 
each orbital pass. The antennas could be concealed in an RF 
transparent enclosure, preventing an attacker from knowing their 
orientation.

A third technique might be to use one or more local clocks. The 
various PC clocks on a network might do. Any attack other than a 
very slow time drift would trigger an alarm.

A fourth might be to use several GPS receivers scattered around a 
building, campus or city.  Creating a spoof that produced the correct 
location for all the receivers might be hard.

Arnold Reinhold






RE: Clinton signs bill to count wiretaps that encounter encryption

2000-05-08 Thread Arnold G. Reinhold

At 1:05 AM -0700 5/8/2000, Lucky Green wrote:
Arnold wrote:
 It will be interesting to see what the reports say. But it is worth
 noting that according to
  http://www.uscourts.gov/wiretap99/contents.html there were 1350
 wiretaps approved by state and federal judges in the US in 1999. 72%
 were for drug cases.  Over the last 10 years, wiretaps have accounted
 for an average of less than 2500 convictions per year. Hence wiretaps
 convict only a tiny fraction of the US prison population, which is
 now over 1.3 million.

While it is a popular myth that the USG counts wiretaps, the USG does not in
fact do so. The USG counts wiretap orders. There is a significant difference
between the number of wiretap orders issued and the number of wiretaps
performed. I am not even talking about the wiretaps that are being performed
without court order typically showing up at trials as a "confidential
informant" source.

Wiretap orders can, and virtually almost always do, cover multiple phone
lines. At a minimum, a wiretap order will cover a person's home and work
numbers. Even if you work at a small office, that's likely to be several
lines at least. But wiretap orders can and do go beyond that. The glimpse at
wiretap reality the cases in LA have afforded the public show that judges
will issue wiretap orders for entire cellular providers. One wiretap order
listed in the official statistics may well correspond to several hundred, or
even thousands, of wiretaps.

Statistics are good thing, but they need to be read carefully.
--Lucky


You are correct that a single wiretap order can cover several lines. 
However the DOJ report has a lot of information on this.  See. for 
example, http://www.uscourts.gov/wiretap99/table499.pdf  The average 
wiretap installed intercepted 1921 communications, of which 390 were 
considered incriminating. The average wiretap was installed for 50 
days, so that works out to 38 interceptions per day per tap 
installed.  There are numbers given for single vs multiple locations 
which suggest that single location taps predominate, but  a large 
"other" block makes that question hard to answer for sure.

More to my point, which is that authorized wiretaps catch only a 
small fraction of criminals, are the arrest and conviction numbers. 
http://www.uscourts.gov/wiretap99/table999.pdf  These are a little 
tricky to interpret because of time lags, but seem to run around 2500 
convictions per year. Even if the average criminal convicted on 
wiretap evidence spends 20 years in prison, that only accounts for 
50,000 prisoners, a drop in the bucket given a U.S. prison population 
of 1.3 million.


Arnold Reinhold




Re: Clinton signs bill to count wiretaps that encounter encryption

2000-05-07 Thread Arnold G. Reinhold

On Fri, 5 May 2000 08:58:45 -0400 "Arnold G. Reinhold" 
[EMAIL PROTECTED] writes:
 It's worse than that. The new reports are to cover "law enforcement
 encounters with encrypted communications in the execution of wiretap
 orders." http://www.politechbot.com/docs/clinton-crypto.050300.html
 "Encounters" suggests that there will be no distinction between
 encryption that hinders law enforcement access and encryption that
 does not. For example, any tap of a GSM cell phone could be reported
 even though the cipher GSM uses is relatively easy to break.  In 1999
 there were 676 authorized taps for cell phones and pagers vs. 399 for
 stationary phones. (1998: 576 vs 494, so the trend is toward cell
 phones)

Any tap on the GSM cell phone will _not_ be on the encrypted over-the-air
interface but simply on the plaintext leaving the base station on the fixed
network.

According to the White House press release the test was "encountered 
encryption" and they could well have counted GSM even if they could 
get around the encryption as you describe. Declan points out that the 
law was worded more carefully than the press release, so things are 
not as bad as I feared. Point for Congress.

It will be interesting to see what the reports say. But it is worth 
noting that according to 
http://www.uscourts.gov/wiretap99/contents.html there were 1350 
wiretaps approved by state and federal judges in the US in 1999. 72% 
were for drug cases.  Over the last 10 years, wiretaps have accounted 
for an average of less than 2500 convictions per year. Hence wiretaps 
convict only a tiny fraction of the US prison population, which is 
now over 1.3 million.

Furthermore, law enforcement has many ways to deal with encryption: 
traffic analysis, bugs, viruses, informers,...  If it gets the bad 
guys to talk more, encryption could be a boon for LE.

Arnold Reinhold





Perfect Forward Security def wanted

2000-05-04 Thread Arnold G. Reinhold

Can anyone point me to a good definition of "Perfect Forward Security"?

Arnold Reinhold




Re: IP: Gates, Gerstner helped NSA snoop - US Congressman

2000-04-14 Thread Arnold G. Reinhold

I am not a conspiracy nut. I think Oswald killed Kennedy all by 
himself; Roosevelt had no idea Pearl Harbor was about to be attacked; 
and Ben & Jerry only wanted to make great ice cream. But I think 
people are underestimating NSA if they think they would be afraid to 
introduce crypto vulnerabilities, especially with the cooperation of 
software (and hardware) manufacturers.  I can think of a number of 
ways that this could be done with relatively low risk of detection or 
exploit by adversaries. The fact that the Microsoft NSAKEY story blew 
over so quickly indicates they have little to fear from publicity.

There were several statements around the time the export rules were 
liberalized late last year saying large computer manufacturers had 
agreed to cooperate more closely with NSA. Also an early draft of the 
administration's bill to authorize intrusive measures to get keys had 
language that would make revealing built-in vulnerabilities a crime. 
Add that to all the stories about how NSA is losing ground because of 
the Internet and encryption and I think there is plenty of reason to 
suspect that real fire is making all this smoke.

Arnold Reinhold.




Re: PRNG State [was: KeyTool internal state]

2000-04-04 Thread Arnold G. Reinhold

Ben Laurie [EMAIL PROTECTED] wrote:

"Arnold G. Reinhold" wrote:

 I wonder if you are confusing the length in bits of a PKC key, e.g. a
 prime factor of an RSA public key, with the entropy of that private
 key. The prime factor may be 512 bits long, but it usually does not
 have anywhere near 512 bits of randomness. Usually a secret prime is
 generated by adding a 128 or 160-bit random quantity to some
 non-secret base and then selecting the next prime number. In such a
 scheme a 20-byte (160-bit) random pool is not unreasonable for
 generating one key or a small number of keys.

In what sense is this "usual"? Who does it this way?

It's been a while so maybe I should not have been so categorical, but 
the last time I looked PGP did this. They generated entropy by asking 
the user to type at the keyboard and timing key strokes. Accumulating 
1000 bits of entropy this way would take quite a while, especially on 
Windows, which has poor timer resolution.

Most implementations that I am familiar with (not a lot) assume 
entropy is expensive to generate. If you have a copious supply of 
randomness, there is no reason not to consume n bits of entropy to 
generate an n-bit prime. Does anyone know of systems that currently 
do this?

Arnold Reinhold




Re: PRNG State [was: KeyTool internal state]

2000-04-02 Thread Arnold G. Reinhold

I wonder if you are confusing the length in bits of a PKC key, e.g. a 
prime factor of an RSA public key, with the entropy of that private 
key. The prime factor may be 512 bits long, but it usually does not 
have anywhere near 512 bits of randomness. Usually a secret prime is 
generated by adding a 128 or 160-bit random quantity to some 
non-secret base and then selecting the next prime number. In such a 
scheme a 20-byte (160-bit) random pool is not unreasonable for 
generating one key or a small number of keys.
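
The pattern just described looks roughly like this sketch (sympy's
nextprime standing in for a real library's primality search; the
sizes are illustrative):

    # A 512-bit prime whose entropy is only the 160 random bits added
    # to a fixed, non-secret base.
    import secrets
    from sympy import nextprime

    BASE = 1 << 511                     # non-secret base, fixed by the code

    def lowish_entropy_prime() -> int:
        offset = secrets.randbits(160)  # all the actual randomness
        return nextprime(BASE + offset)
    # The result is ~512 bits long, but an attacker who knows BASE need
    # only search a 160-bit space (less the sparsity of primes in it).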

On general principles I would prefer a larger pool of randomness, 
especially if it is available in the operating system, but if the 20 
bytes are truly random, I don't think you can call the keytool scheme 
insecure.

Arnold Reinhold

At 10:10 PM +0100 4/2/2000, Andrew Cooke wrote:
Hi,

Can someone please correct the following?

I expect a PRNG with an internal state of n bits to produce output that
is predictable given n consecutive bits of output.  Is that correct?  If
so, then doesn't a PRNG used to generate a key require at least as large
an internal state as the key length (otherwise, given the first n bits,
the rest is predictable, reducing the effective length to n)?

This is the basis of my earlier post questioning the security of Sun's
keytool.  I've included the relevant parts of my post and the reply
below - I've also checked the FIPS document, which doesn't mention this
(I cannot find a public copy of the IEEE document).

My apologies if the reasoning above is incorrect - I would really
appreciate comments on this as it is important that I understand whether
or not the keytool is useful (and also, given the response, I suspect I
am labouring under some pretty major misconception about random PRNGs).

Thanks,
Andrew


Gary Ellison wrote:
  " " == Andrew Cooke [EMAIL PROTECTED] writes:
[...]
   Most importantly, as far as I can tell, keytool does not generate "fully
   random" keys when used "naively".
[...]
 The Sun provider for SecureRandom follows the IEEE P1363 standard,
 Appendix G.7: "Expansion of source bits", and uses SHA1 as the
 foundation of the PRNG. Additionally, we verify that the Sun
 SecureRandom provider complies with NIST's FIPS PUB 140-1 section 4.11.
[...]
   - The Java SecureRandom class contains only 20 bytes of random state and
   these are initialised by some kind of thread contention process (which
   may not generate "really random" values either).
[...]
 In 1996 when SecureRandom was implemented 20 bytes of state seemed
 sufficient for FIPS 140-1 compliance. Perhaps this is no longer a
 sufficient target (your suggestions are welcome).  The Sun provider
 for SecureRandom mixes in other sources of entropy with the results
 from the thread contention process. Among other things this includes
 the current time, the state of the VM's memory usage, system
  properties, and file system activity.
[...]





EU Echelon probe and Sony PS2 DVD zone oops

2000-03-17 Thread Arnold G. Reinhold

http://dailynews.yahoo.com/h/nm/2317/tc/eu_spying_1.html

EU to Set Up Major Probe Into U.S. 'Spy' Charges

BRUSSELS (Reuters) - The European Parliament is set to announce next 
Wednesday that it will set up a special inquiry committee into 
allegations that the United States uses an electronic surveillance 
system for industrial espionage.
...
The Parliament very rarely sets up such committees; the last time it 
did so was to probe mad cow disease.
...
===

http://dailynews.yahoo.com/h/nm/2317/tc/tech_sony_3.html
Sony Embarrassed by Another PlayStation2 Flaw
By Yuka Obayashi
...

Sony's game making unit Sony Computer Entertainment (SCE) said it had 
found users of PS2, launched two weeks ago in Japan amid huge 
publicity and frenzied demand, could manipulate it to watch digital 
video disk (DVD) software sold overseas.

That is in breach of an agreement among DVD player makers worldwide 
that stipulates machines can only play domestically sold disks 
equipped with disenabling codes.
...



Re: New York teen-ager win $100,000 with encryptionresearch(3/14/2000)

2000-03-16 Thread Arnold G. Reinhold

Arnold G. Reinhold writes:

  If you know the DNA sequences of alphabet letters, you can PCR probe
  for common words or word fragments like "the" or "ing" and avoid
  total sequencing.

That's true. Luckily, there is no such test for random base sequences,
though a pseudorandom sequence would certainly be very visible, but
only if the genome has been totally sequenced (currently, an expensive
and slow enterprise, despite Celera making large headways into
it). Hence the need for steganography, which is further worsened by
significant evolutionary conservation throughout the biological
kingdom. The payload will not be very high.

I am not sure I understand the difference between "random" and 
"pseudorandom" as you are using it here. In any case, I expect more 
sensitive cryptoanalytic tools for DNA can be developed if the need 
(and funding) arise.  For example,  has anyone done an n-tuple 
frequency analysis on natural DNA? Probes targeting n-tuples that are 
significantly less likely to occur in nature could be used to find 
human generated DNA strings without total sequencing.  It might even 
be possible to do something like autocorrelation by fragmenting the 
DNA, separating the strands, recombining and looking for 
complementary strands that bind inappropriately. (e.g. the first 
occurrence of "the" in strand A might bind to the second occurrence 
of "the" in strand A'.) You don't need the letter codes to do this.


  A recent Genetic Engineering News says the price for synthetic DNA is
  dropping from $1 per base to about $0.50 per base. That works out to
  $0.25 per bit. That's about 8 orders of magnitude more expensive than
  PC disk storage.

This only applies to short sequences. If you have to (PCR-) ligate
your sequences from shorter segments as output by the synthesizer
robot, the price will skyrocket. Hey, nobody said it's going to be
cheap, nor fast ;)

The problem seems to be error rates. Here is what one DNA synthesis 
company has to say:  http://www.alphadna.com/special.html#long 
oligonucleotides

Longer than 35-mer oligonucleotides.  Polyacrylamide gel 
electrophoresis (PAGE) purification, HPLC (high performance liquid 
chromatography) purification

   Let's assume that the efficiency of DNA synthesis is 99%. 
With the addition of each consecutive base, the proportion of the 
"aborted" oligonucleotides increases and at 40 bases the final 
reaction will contain 67% "true" oligos and 33% shorter products. At 
100 cycles only 36% of the products will be of the correct sequence. 
Therefore, the synthesis of long oligos necessitates purification 
by PAGE or HPLC,  the two reliable methods for purification of long 
oligonucleotides.  For oligos longer than 50 bases, PAGE gives much 
better results than HPLC.

   We offer PAGE purification of oligonucleotides at the price 
of additional $100 per oligo (35- to 70-mer) or $300 per oligo (70- 
to 200-mer). In addition to this fee, we require an extra 24-48 
hours to complete the PAGE purification.  Please note that PAGE 
purification, although the best currently available method, does not 
guarantee 100% error-free oligonucleotide products.  It was 
reported by others that a PAGE-purified 123-mer and 126-mer, when 
used for cloning, were proven to contain errors in about half   of 
the clones (Hecker KH, Rill R. Error analysis of chemically 
synthesized polynucleotides.  Biotechniques 1998 Feb;24:256-60).

A mer (as in polymer) is a DNA base pair.  There are four 
possibilities, so a mer encodes two bits. A 200-mer chain holds 400 
bits. That's long enough to start thinking about packet technology. 
You could use ECC to deal with the base errors, or just assume you 
will have enough copies of each packet to do majority voting.
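
Majority voting across copies is trivial to sketch (a per-position
vote over already-aligned copies; alignment in the presence of
insertion/deletion errors, the genuinely hard part, is assumed away):

    # Recover a packet by per-position majority vote over copies,
    # assuming substitution errors only.
    from collections import Counter

    def majority_vote(copies):
        length = min(len(c) for c in copies)
        return "".join(
            Counter(c[i] for c in copies).most_common(1)[0][0]
            for i in range(length)
        )

    # majority_vote(["ACGTA", "ACCTA", "ACGTA"]) -> "ACGTA"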

Anyway, I expect Moore's law will apply here as it does in 
electronics. Price per base might be a good number to chart over 
time. I don't think that Moore's time constant is due to the peculiar 
nature of semiconductors, but rather it results from the 
so-far-unlimited richness of the technology. DNA technology is just 
as rich in possibilities as semiconductors. I think Moore's 18 months 
is the limit as resources go to infinity of the time needed for 
humans to understand the limitations of the last innovation and come 
up with an approach to overcome them.


The good part is extremely dense storage (i dot can contain far more
than microfilm), and potential for destruction on demand: via
packaging in a container bisected with a breakable membrane, one part
containing the DNA (precipitated, or as solution) and the other a
strongly fragmenting chemical (DNAses probably too slow, something
strongly oxidizing like concentrated perchloric acid should do).

You are right, of course, about density, but I'd be reluctant to rely 
on DNA's destructibility. On the contrary, I am told that PCR can 
reliably detect ten molecules and has 

Re: New York teen-ager win $100,000 with encryption research(3/14/2000)

2000-03-15 Thread Arnold G. Reinhold

At 7:39 PM -0800 3/14/2000, Eugene Leitl wrote:
Of course it ain't actual encryption, only (high-payload)
steganography at best. Now, if you sneak a message into a living
critter (a pet ("the message is the medium"), or creating the ultimate
self-propagating chainletter, a pathogen), that would be an
interesting twist.

Interesting is that you can tag the message with a longish
pseudorandom base sequence, which allows you to fish for the fragment
(from digests) via a complementary sequence. Anyone not in posession
of that sequence would have to do a total sequencing.

If you know the DNA sequences of alphabet letters, you can PCR probe 
for common words or word fragments like "the" or "ing" and avoid 
total sequencing.


Of course, using real steganography (camouflaging messages in DNA
(say, "(c) by God, Inc.") as genes or ballast as highly repetitive
sequences) is also an option.

A recent Genetic Engineering News says the price for synthetic DNA is 
dropping from $1 per base to about $0.50 per base. That works out to 
$0.25 per bit. That's about 8 orders of magnitude more expensive than 
PC disk storage.

David G. Koontz writes:
  http://www.sjmercury.com/svtech/news/breaking/merc/docs/013955.htm


Arnold Reinhold



China Eases Rules on Encryption Software

2000-03-13 Thread Arnold G. Reinhold

By Matt Pottinger

  BEIJING (Reuters) - China has eased tough new restrictions on 
encryption technology,
  announcing that a vast category of consumer software and equipment 
-- including mobile
  phones and Microsoft Windows -- would be exempt from the rules.

  The government agency in charge of enforcing the rules sent a 
``clarification'' letter to U.S.
  business organizations last week which steps back from the hard 
position it had taken when the
  rules were adopted on January 31.
...

More at http://dailynews.yahoo.com/h/nm/2313/tc/china_encryption_1.html



Re: time dependant

2000-03-10 Thread Arnold G. Reinhold

At 12:55 AM -0600 3/10/2000, John Kelsey wrote:
[much deleted]

Actually, the subpoena threat means that we need to put the
entities holding shares of the secret in places where even
we can't find them.  In the extreme case, there's some
machine somewhere with e-mail access, which may carry some
cover traffic of some kind, and which holds some secret
until a specified date.  On that date, it sends it out.  The
setup procedure has to establish this machine (or a set of
such machines) in such a way that ideally nobody ends up
knowing where they are, and that there's no way for anyone
to figure out which time-delayed secret is being held on
which machine.

I agree that something like that would be desirable. The big problem 
is how to actually do it. A bounty or threat of legal action might 
get a lot of people to sweep their systems. One thought might be very 
small (cigarette package sized) lithium-battery powered computers 
that could be hidden in walls and clipped onto existing phone wires. 
They would be silent until the time came to release their key. Then 
they would call a phone number (or several) in the middle of the 
night and divulge their secret. The calls might be to computers or 
they might be to random individuals who would be read a list of 
passphrase words, and told to contact Time-Escrow Inc. for a reward.

I am also starting to like satellite approach more. There is a 
technology called nanosatellites that is essentially a small PC board 
dumped into orbit. Time escrow would be an ideal nanosatellite 
application.  Several groups could each be given a satellite to 
program.  The satellites would then be placed in the launch vehicle by 
each group and guarded until launch. Actual key generation could be 
deferred until after launch.  One way to ensure this would be to 
select the computation group (e.g. the prime p for DH over Zp or a 
particular elliptic curve for ECC) by some public process after the 
satellites are in orbit.  The nanosatellites would then generate the 
key pairs and communicate the public halves to earth.  The public 
keys would be signed by the nanosatellites using a secret key 
inserted by each group in their nanosatellite, insuring that they 
were actually computed in space. The private halves of the generated 
keys would of course be broadcast when their time came. I think all 
this could be done for tens of millions of dollars.  Is there a 
market that big for time-escrow service?



[stuff deleted]

You may be right in practice, but it seems to me that a
major goal of crypto research is to figure out how to do 
things in a way that does not rely on contract law and other
forms of "trust me."

I have mixed feelings about this.  On one hand, the legal
system in the US looks fundamentally broken to me.  On the
other, even massively overworked, corrupt, or incompetent
judges are *human*.  We are on the verge of building
computer systems which are intentionally outside the reach
of any human control.  We've done this to some limited
extent now with anonymous remailers and even the internet.

But this means that these systems are really outside human
control.  The trivial example of this is using PGP to
encrypt all your files with a long, hard-to-guess
passphrase, and then forgetting the passphrase.  If you do
this, you're just out of luck--your files are gone.  In one
sense, this is much better than storing your files
unencrypted in a safety deposit box on ZIP disks: you don't
have to trust that the bank won't drill out your box and get
at the contents, or that someone won't have made a copy of
the key before you got it, or that a court somewhere will
order the box opened so your ex-wife's lawyers can read
through your private files.  But it also means that there's
no human that can open your files for you when you forget
the passphrase.  It means that if you die, all the
information you encrypted is forever lost to the world.  It
means that no matter how good a reason exists, nobody can
get that information without the original passphrase.

In this context, I'm reasonably comfortable with things.
But when we talk about the general automated contract
enforcement schemes, I worry a lot about what weird
unforeseen interactions will happen.  This is especially
worrisome when the system is designed so that there's no
human in the loop to make a judgement about whether there's
something going wrong.  Does the car stop working when
your payment is a month late?  Does this happen even when a
major terrorist attack has taken down the whole payment
system for the last month, with the result that half the
cars on the road stop on the same date?  Does the car
suddenly become yours for free an hour after someone posts
the recently-compromised top-level key for the payment
system's CA hierarchy?  Do thousands of cars suddenly stop
an hour after someone starts using the recently-compromised
top-level key for the bank's e-repo-man division?


Scientific research is generally conducted 

Re: time dependant

2000-03-09 Thread Arnold G. Reinhold

At 10:56 AM -0500 3/8/2000, Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], "Matt Crawford" writes:

 If you're going to trust that CryptoSat, inc. hasn't stashed a local
 copy of the private key, why not eliminate all that radio gear and trust
  CryptoTime, inc. not to publish the private key associated with date D
 before date D?

The minor answer is that I could postulate that CryptoSat sells slots for
various parties (including senders of time-delayed messages) to install their
own tamper-resistant boxes.

Indeed, each box could have a share of each secret key and the 
satellite would simply broadcast the year's key once enough of the 
trusted boxes had released their shares.  Each box would have its own 
clock and release share information by modulating an LED with a light 
pipe through the box's skin.  The central computer would have no 
communication channel back to any of the trusted boxes. Then there is 
no need to mess with time delays. You'd need a couple of satellites 
in case one failed.
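
For concreteness, here is a minimal Shamir-style threshold sketch in
Python of the "enough of the trusted boxes" idea; this is an assumed
illustration of k-of-n share release, not the boxes' actual firmware:

import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret, k, n):
    # Random degree-(k-1) polynomial with the secret as constant term;
    # share i is the point (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(secret=123456789, k=3, n=5)   # any 3 of 5 boxes suffice
assert reconstruct(shares[:3]) == 123456789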


But the major answer is time scale -- I only have to trust CryptoSat for a
short period, while I have to trust CryptoTime for the entire delay period.

In particular a satellite is pretty much subpoena proof.  The 
subpoena threat is very real for CryptoTime, Inc. because courts tend 
to lean in favor of granting them, even if the underlying case 
presented is weak. E.g. Jones v. Clinton.  So someone with a fairly 
frivolous case can undermine confidence in the whole system even 
assuming CryptoTime has the best of intentions.

All that said, I still think a ground based system using multiple 
repositories in many jurisdictions is worth trying. One wrinkle might 
be to forget about secret sharing since it requires a central 
coordinated act. Instead each repository (A, B, C, D, ...) generates a 
separate set of public and private keys for each year. All the public 
keys are posted on a key server. A user that wishes to time escrow 
data can do m out of n by encrypting with subsets of the full set of 
posted public keys for any given year. For example, to achieve two 
out of three security ten years out, the user would super-encrypt his 
symmetric key three times: with A2010 and B2010, with A2010 and C2010, 
and with B2010 and C2010.
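
A short sketch of that bookkeeping, with the xor-stream toy_encrypt
standing in for real public-key encryption (the function and the
repository keys here are hypothetical stand-ins):

import hashlib, itertools

def toy_encrypt(key, data):
    # XOR with a SHA-1 keystream; a self-inverse toy, not a secure design.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha1(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def escrow(symmetric_key, repo_keys, m):
    # One copy of the key per m-subset, super-encrypted under every key
    # in that subset; any m released private keys open one copy.
    copies = {}
    for subset in itertools.combinations(sorted(repo_keys), m):
        blob = symmetric_key
        for name in subset:
            blob = toy_encrypt(repo_keys[name], blob)
        copies[subset] = blob
    return copies

repo_keys = {"A2010": b"key-A", "B2010": b"key-B", "C2010": b"key-C"}
copies = escrow(b"16-byte-secret!!", repo_keys, m=2)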
 

The real answer, though, is that you're probably right -- there's too much
temptation in this field to use technical mechanisms, when contract law will
suffice.

You may be right in practice, but it seems to me that a major goal of 
crypto research is to figure out how to do things in a way that does 
not rely on contract law and other forms of "trust me."

Arnold Reinhold



VERISIGN ACQUIRES NETWORK SOLUTIONS

2000-03-07 Thread Arnold G. Reinhold

VERISIGN ACQUIRES NETWORK SOLUTIONS TO FORM
WORLD'S LARGEST PROVIDER OF INTERNET TRUST SERVICES
 
Mountain View, CA, and Herndon, VA, March 7, 2000 -- VeriSign, Inc. 
(Nasdaq:VRSN), the leading
provider of Internet trust services, and Network Solutions, Inc. 
(Nasdaq: NSOL), the world's leading
  provider of Internet domain name registration and global registry 
services, today announced the
signing of a definitive agreement for VeriSign to acquire Network 
Solutions in an all-stock purchase
transaction. This transaction combines two infrastructure leaders 
whose trust services support
businesses and consumers from the moment they first establish an 
Internet presence through the
entire lifecycle of e-commerce activities.

Under the agreement, VeriSign will issue 2.15 shares of VeriSign 
common stock for each share of
Network Solutions stock as constituted prior to the 2-for-1 split of 
Network Solutions stock to be
completed on March 10, 2000. The transaction, valued at approximately 
$21 billion based on
yesterday's closing price of VeriSign common stock, has been approved 
by both companies' Boards of
Directors and is subject to approval by VeriSign and Network 
Solutions stockholders. The acquisition is
expected to close in the third quarter of 2000, subject to customary 
conditions, including obtaining
necessary regulatory approvals. The resulting company expects to add 
to its existing employee base to
exploit new market opportunities. At closing, Network Solutions will 
become a subsidiary of VeriSign.

[more at http://www.verisign.com/press/2000/corporate/netsol.html]



Re: Interesting point about the declassified Capstone spec

2000-02-13 Thread Arnold G. Reinhold

At 5:09 PM -0500 2/11/2000, Dan Geer wrote:
I agree with Peter and Arnold; in fact, I am convinced that
as of this date, there are only two areas where national
agencies have a lead over the private/international sector,
namely one-time-pad deployment and traffic analysis.  Of those,
I would place a bet that only traffic analysis will remain an
area of sustainable lead, that traffic analysis is the only
area where commercial interests will not naturally marshall
the resources to threaten the lead of the national agencies.

--dan


Um, I think you are agreeing with something Peter attributes to me 
but that [EMAIL PROTECTED] actually wrote. (C'mon Peter, I know it's summer 
down there, but...). That said, here is my list of areas where I 
think national agencies will enjoy a lead for some time to come:

1. Traffic analysis (as habs points out.)

2. Monitoring vast amounts of unclassified conversations and gleaning 
intelligence from them

3. Exploiting the large amount of weak encryption that is already out there.

4. Black bag jobs to plant bugs and steal keys. The NY Times quoted a 
source who said that the average jewelry store has better security 
than most foreign consulates. How many of you know for sure where 
your laptop spent last Thursday night?

5. Transmitting viruses and Trojan horses over networks to capture 
and leak keys or plaintext. (infowar)

6. Exploiting Tempest

7. Getting large chip and software manufacturers to incorporate 
exploitable hooks.

8. Penetrating secret organizations by bribes, brutality and 
blackmail (think of all those usenet alt.sex.whatever messages saved 
away for later use.)

9. Storing vast quantities of intercepted ciphertext so that they can 
exploit any crack retrospectively.

10. Exploiting technological breakthroughs: quantum computing, better 
factoring algorithms,... if and when they happen.

11. Exploiting small time screw ups like weak passwords, failure to 
log off terminals, inadequately erased media, poorly designed 
protocols, etc.

12. Waiting patiently for big time screw ups like Nikita Khrushchev's 
gabbing on an unclassified car phone or John Deutch's using the same 
laptop to store Top Secret reports and access the Internet from home.

In spite of strong encryption, the explosive growth of computing 
power and the ubiquity of digital communication may make the 21st 
century the golden age of SIGINT.


Arnold Reinhold





Re: Interesting point about the declassified Capstone spec

2000-02-11 Thread Arnold G. Reinhold

At 8:02 AM -0500 2/12/2000, Peter Gutmann wrote:
Late last year the Capstone spec ("CAPSTONE (MYK-80) Specifications",
R21-TECH-30-95) was partially declassified as the result of a FOIA lawsuit[0].
The document is stamped "TOP SECRET UMBRA" on every page.  UMBRA is a SIGINT
codeword, not an INFOSEC one, so the people who designed the thing were very
clear about what it was to be used for at a time when it was still 
being touted
as a privacy device (the fact that it's described in the abstract as "a SIGINT
friendly replacement for DES" probably doesn't help either).

Peter.

[0] I don't know if it's online, they were handed out at Crypto'99.

The Capstone spec is available at http://cryptome.org/capstone.htm

Clipper/Capstone was always advertised to the public as providing a 
higher level (80-bits) of security than DES while allowing access by 
law enforcement agencies.  I don't see anything in the spec that is 
at variance with this. The abstract says in full:

"(U) CAPSTONE started as an embodiment of a Law Enforcement SIGINT 
friendly replacement for the Data Encryption Standard (DES). This 
requirement would offer greater security than DES while permitting 
legitimate access to traffic. Given these restraints, the project 
goals were a commercially viable, single-chip solution offering data 
integrity, confidentiality, public key based key management and 
authentication. R21 undertook the development of an algorithm suited 
for CAPSTONE in support of the aforementioned requirements and goals. 
"

More tantalizing is the stuff that was censored, e.g.:

III. Anti-Reverse Engineering Circuitry and Techniques

1. (S) [Five lines redacted.]

2. (TSC-NF) [Six lines redacted.]

3. (TSC-NF) [Seven lines redacted.]

...
B. Random Power Fluctuations

1. (TSC-NF) [Six lines redacted.]

2. (S) [Four lines redacted.]

...
VI. Key Escrow Circuitry

1. (S U.S./Can Eyes Only) [Eight lines redacted.]

2. (S U.S./Can Eyes Only) [Fifty lines redacted.]

3. (S U.S./Can Eyes Only) [Three lines redacted.]

...

VIII. Randomization and Key Variable Generation

A.

1. (S-NF) [Forty-five lines redacted.]

etc.

The UMBRA code word may have been required due to SIGINT-sensitive 
references in the censored sections to vulnerabilities that NSA has 
exploited in the past.

Arnold Reinhold



Re: Interesting point about the declassified Capstone spec

2000-02-11 Thread Arnold G. Reinhold

At 12:38 PM -0800 2/11/2000, David Wagner wrote:
In article v04210102b4ca1b7a641f@[24.218.56.92],
Arnold G. Reinhold [EMAIL PROTECTED] wrote:
 Clipper/Capstone was always advertised to the public as providing a
 higher level (80-bits) of security than DES while allowing access by
  law enforcement agencies.

Law enforcement friendly is very different from SIGINT friendly.

I agree completely. That is why I copied the Capstone abstract 
verbatim. Peter Gutmann had written:

it's described in the abstract as "a SIGINT friendly replacement for DES"

What the abstract actually says is: "CAPSTONE started as an 
embodiment of a Law Enforcement SIGINT friendly replacement for the 
Data Encryption Standard (DES)."

"SIGINT friendly" might suggest an NSA back door. "Law Enforcement 
SIGINT friendly" doesn't imply anything more than what was originally 
advertised. I assume Peter was just careless in his quoting, but it 
is an important distinction.

Arnold Reinhold



Re: [PGP]: PGP 6.5.2 Random Number Generator (RNG) support

2000-02-04 Thread Arnold G. Reinhold

I'd like to tone this discussion down a bit and get back to basics. 
First of all, I am happy to thank Intel for finally releasing the 
hardware interface. I hadn't known about its release until this 
thread. I'm always grateful when someone does the right thing, even 
if it's late.  Second, I have to agree, reluctantly, that people 
building diskless nodes should use the Intel RNG if they have it and 
can't get anything better designed into their hardware.  The software 
alternatives are just not acceptable.

Anonymous asks what we want from Intel. OK, here is my list:

First, a principles of operation document for the RNG under Intel's 
name.  More details than Paul gave would be better, particularly 
design margins and test procedures, but at least the level of 
information he gives.  What difference would it make? It would put 
Intel's name and reputation squarely behind the claimed design being 
what is delivered, not just Paul's.

Second, I want access to the raw bits. Short out the on-chip 
whitener, if necessary.  There is no need for it and it prevents us 
from characterizing the RNG design ourselves. It also reduces the 
random bit rate for no good reason.  The danger associated with 
making the raw bits available is negligible. The few people that will 
use the raw bits are going to be clueful enough to whiten them with a 
hash (sketched below).  Intel can cover its backside by explaining the 
need to do so 
clearly in its manual.  (They now have to explain that the code for 
extracting the bytes has to be protected in a multithreaded 
environment.  Had Intel not been trying to produce "perfect" random 
bytes, they could have included a status bit in each random byte and 
avoided all that complexity.) And even if someone did use the raw 
bits without whitening, the added vulnerability is quite small, 
assuming the bias is at all reasonable.
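
A minimal sketch of that whitening step, assuming a conservative
4-to-1 folding margin (the margin and the function are illustrative,
not Intel's design):

import hashlib

def whiten(raw_bytes, margin=4):
    # Fold each (digest_size * margin) raw bytes down to one digest,
    # so every output byte consumes several raw input bytes.
    out = bytearray()
    block = hashlib.sha1().digest_size * margin
    for k in range(0, len(raw_bytes) - block + 1, block):
        out += hashlib.sha1(raw_bytes[k:k + block]).digest()
    return bytes(out)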

Third, I would like Intel (and other CPU and support chip vendors) to 
recognize that cryptographers need  designs that are transparent, 
verifiable and traceable. As a vendor it's Intel's job to win their 
customer's confidence.  If that means a more open design process and 
independent verification of random samples from the production line, 
so be it. Yes, we will always want more. Sorry. The reason 
cryptographers are hard customers is that we face very hard problems.

A more open process is in Intel's interest as well. Intel might get 
some good ideas if they talked to us first. And one of these days 
there is going to be a security screw-up big enough to attract the 
class action bar. Lawyers have a field day with unjustified secrecy, 
especially at defendants with deep pockets.


Arnold Reinhold



Re: [PGP]: PGP 6.5.2 Random Number Generator (RNG) support

2000-02-02 Thread Arnold G. Reinhold

At 9:00 PM + 2/2/2000, lcs Mixmaster Remailer wrote:
It may not have been mentioned here, but Intel has
released the programmer interface specs to their RNG, at
http://developer.intel.com/design/chipsets/manuals/298029.pdf.
Nothing prevents the device from being used in Linux /dev/random now.

As for the concerns about back doors, the best reference on
the design of the RNG remains cryptography.com's analysis at
http://www.cryptography.com/intelRNG.pdf.  Paul Kocher and his team
concluded that the chip was well designed and that the random numbers were
of good quality.  (Note, BTW that the RNG is extremely small, crammed
into the margins of the device.  An RNG which produced undetectably
backdoored random data would probably be an order of magnitude larger.)

I respect Paul, but there is a matter of principle here. Crypto is 
hard enough without having to rely on trusted experts to verify what 
should simply be made public. The business case for Intel's RNG 
secrecy is weak at best. I want to make it weaker. As for the RNG 
being crammed in, who knows what will happen in future chips?


Even if Intel wanted to put in a back door, it would be very difficult
to exploit it successfully.  There is no way for the chip to predict how
any given random bit will be used: it may go into a session key directly,
it may be hashed through some kind of mixing function along with other
sources of randomness, it may seed a PRNG which is then used to find
RSA primes.  There are a multitude of different possibilities and it
would be hard in general to design an effective backdoor without knowing
how the output will be used.

I don't agree. All that is needed is for the backdoored RNG to 
produce an output stream that is determined by some state with a 
relatively small number of bits. Then an otherwise infeasible search 
strategy would become feasible. An attacker would still have to know 
how the program-under-attack used the RNG output, but we do not rely 
on software obscurity. (Of course if the RNG output is first mixed 
with another source of high entropy randomness then there is no added 
vulnerability. I am positing that, over time, vendors who use the 
Intel RNG will neglect this step.)


And as pointed out before, this level of paranoia is ultimately self
defeating, as Intel could just as easily put back doors into its CPU.
Unless or until you are willing to use a self-designed and self-fabbed
CPU, you are fundamentally at the mercy of the hardware manufacturer.


CPU back doors are a different risk and are more subject to your 
first criticism than the RNG. Weak random number generation is a 
vulnerability common to almost all crypto systems. We should not 
lower standards in one area because there are risks in other areas. 
To paraphrase the Strategic Air Command, "paranoia is our profession."

Arnold Reinhold





Re: [PGP]: PGP 6.5.2 Random Number Generator (RNG) support

2000-02-02 Thread Arnold G. Reinhold

At 9:15 AM -0800 2/2/2000, Eric Murray wrote:
On Tue, Feb 01, 2000 at 09:00:33PM -0800, Dave Del Torto wrote:
  At 6:19 pm -0500 2000-01-26, Tom McCune wrote:
...
 
 (A) I'm not sanguine about it being a "default" in any version of
   PGP, knowing what I do and having been told more by others,
 (B) I strongly encourage the PGP engineering group to include an
   explicit checkbox preference/option for disabling PGP's use
   of the Intel RNG completely into v7.0,
 (C) I'm troubled that Intel has not yet --even at this late date--
   provided comprehensive technical data on how the RNG works
   for public review and,
 (D) I'm extremely glad there doesn't appear to be one in my Mac or
   SparcStation, and my hand-built PC's have AMD K2/3's in 'em. ;)

[..]


I've also received Intel security info under NDA (and nothing in
this post will violate same).  I do not think that your point D is
fair- even if the Intel RNG is totally and utterly compromised, it's
not a threat to your security just by being there on the chip.
Something has to call it and use its output in a protocol.
I do agree with point B however.

The threat to my security from Intel's RNG "just by being there on 
the chip" is that more and more encryption products will come to rely 
on the Intel RNG alone, or combined with some inadequate source of 
entropy like the system clock.  Worse, more and more software vendors 
will adopt Intel's "trust us" attitude, and refuse to divulge details 
of their randomness generation. Some may even attempt to block 
reverse engineering that would expose their weaknesses, a la CSS.

Intel's marketing department would love to have a long list of 
products that "take advantage" of their proprietary RNG scheme. The 
open cryptographic community should endeavor to keep that list as 
short as possible, at least until Intel repents and opens its design 
to public inspection and verification.

Arnold Reinhold



Re: The problem with Steganography

2000-01-27 Thread Arnold G. Reinhold

At 1:34 AM -0500 1/26/2000, Marc Horowitz wrote:
Rick Smith [EMAIL PROTECTED] writes:

 The basic notion of stego is that one replaces 'noise' in a document with
 the stego'ed information. Thus, a 'good' stego system must use a crypto
 strategy whose statistical properties mimic the noise properties of the
 carrying document. Our favorite off the shelf crypto algorithms do *not*
 have this property -- they are designed to generate output that looks
 statistically random. So, can't we detect the presence of stego'ed data by
 looking for 'noise' in the document that's *too* random?
 
 For example, many stego implementations involve embedding data in the low
 order bits of a graphical image. Those low order bits undoubtedly have some
 measurably non-random statistical properties. Once we replace those bits
with data, the bits will have seriously random statistical properties. So,
 we can detect stego'ed data if the implementation uses any well known
  strong encryption algorithm.


Closely matching the statistical properties of a physical device 
could be difficult. A different approach would be  encouraging large 
numbers of people with video Internet feeds to "pre-stego" their 
material.  This could be easily done by xor'ing low order bits with 
bits generated by some strong crypto algorithm, frequently rekeyed by 
/dev/random.  Perhaps Linux webcam and video chat packages could have 
this feature enabled as a default. Since it would be impossible to 
distinguish actual stego from pre-stegoed material, this would be a 
very effective way to protest against attempts to restrict the flow 
of information on the Internet. If enough people participated stego 
would be undetectable.
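
A minimal sketch of such pre-stego, using SHA-1 in counter mode as a
stand-in for whatever strong algorithm a webcam package might actually
ship, and os.urandom as a stand-in for rekeying from /dev/random:

import hashlib, os

def prestego(samples):
    # XOR only the low-order bit of each sample with a keystream bit.
    key = os.urandom(20)
    out = bytearray(samples)
    stream = b""
    counter = 0
    for n in range(len(out)):
        if not stream:
            stream = hashlib.sha1(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        out[n] ^= stream[0] & 1
        stream = stream[1:]
    return bytes(out)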

Arnold Reinhold



Re: NSA Declassified

2000-01-26 Thread Arnold G. Reinhold

John Young [EMAIL PROTECTED] responded:

Your points are valid for the AIA document. However, in the
Navy document, Number 9, image 3, there is the phrase,
"Maintain and operate an ECHELON site."

I had missed that reference. I agree that the capitalization here is 
consistent with a code name. On the other hand, the sentence 
"Maintain and operate an ECHELON site." is the first item in a list 
of specific functions and tasks that the commander of Sugar Grove is 
being ordered to carry out.  The dictionary meaning of "echelon" fits 
well in this context, i.e. the commander is instructed to operate a 
facility subordinate to headquarters in the overall Naval Security 
Group hierarchy. While a few items on the list are blacked out, most 
seem to be boilerplate. The main mission of Sugar Grove appears to be 
detailed in a classified "Enclosure 1."

I did a search on "echelon" at www.navsup.navy.mil (they had a search 
engine that actually worked) and found a number of examples of the 
word's ordinary usage in the Navy:

"Multi-echelon modeling optimizes spares requirements across the 
wholesale and consumer echelons, and provides the ability to compute 
wholesale requirement on a readiness-to-response time basis. " 
http://www.navsup.navy.mil/flash/1096.html

"All naval commanders will report through their immediate superior 
via the chain of command (ISIC) to second echelon commanders when 
this action has been complete. All second echelon commanders will 
report to DON CIO upon completion of this tasking by their claimancy 
NLT 15NOV98. " 
http://www.navsup.navy.mil/corpinfo/net-policy/alnav.html

"Equal Opportunity Assistants provide equal opportunity/sexual 
harassment subject matter expertise to second and third echelon 
commands." http://www.navsup.navy.mil/flash/1996.html

In light of these examples, the appearance of the term "Echelon 2" in 
the document fragment at http://jya.com/xechelon.jpg could also be 
interpreted as telling the recipient that he is responsible for 
documents coming from the second echelon level in the chain of 
command.

The ACLU Echelon watch page http://www.aclu.org/echelonwatch/ says 
"ECHELON is a code word for an automated global interception and 
relay system operated by the intelligence agencies in five nations: 
the United States, the United Kingdom, Canada, Australia and New 
Zealand (it is rumored that different nations have different code 
words for the project)." I have no doubt that NSA runs automated 
global interception and relay systems and has cooperative agreements 
with the nations listed and many others. Interception  is the 
essential first step in signals intelligence (SIGINT) which is a 
major mission of NSA. "Today, SIGINT continues to play an important 
role in maintaining the superpower status of the United States." 
http://www.nsa.gov:8080/about_nsa/

Do these interception capabilities include the monitoring and 
recording of individual phone calls? I am sure they do. I even 
remember press reports decades ago about whether NSA was restricted 
from monitoring intercepted down links from Soviet SIGINT satellites 
that were capturing the phone conversations of US citizens over 
microwave relays.

But I am not convinced that ECHELON is the overarching code word for 
this activity or even a major component.
I wonder why the code word question attracts so much interest. 
SIGINT is such a large part of NSA's mission that it must have spawned 
dozens or hundreds of code words. ECHELON might be better viewed as a 
press moniker for an important story a la Watergate or Whitewater. 
The activities are real enough. Why does the code name matter so much?

Arnold Reinhold




How old is TEMPEST? (was Re: New Encryption Regulations have other gotchas)

2000-01-24 Thread Arnold G. Reinhold

Regarding the question of how far back TEMPEST goes, I took a look at 
David Kahn's "The Codebreakers" which was copyrighted in 1967. 
TEMPEST is not listed in the index. However I did find the following 
paragraph in a portion of the chapter on N.S.A. that discusses 
efforts to improve the US State Department's communications security 
(p. 714):

"... the department budgeted $221,400 in 1964 for 650 KW-7's. ... The 
per-item cost of $4,500 may be due in part to refinements to prevent 
inductive or galvanic interaction between the key pulses and the 
plaintext pulses, which wire tappers could detect in the line pulse 
and use to break the unbreakable system through its back door. "

This would be the electro-mechanical equivalent of TEMPEST and 
suggests that NSA was well aware of the compromising potential of 
incidental emanations long before the computer communications era.

Another useful data point would be the earliest reports about the BBC's 
system for detecting unlicensed television receivers. That system 
used vans equipped to detect a TV's local oscillator, and may well be 
an offshoot of emanations intelligence research.

Arnold Reinhold



Re: NSA Declassified

2000-01-24 Thread Arnold G. Reinhold

I appreciate all the hard work that went into prying this 
material loose from NSA, but there is a case to be made that 
"Echelon" as used in these documents is being employed according to 
its dictionary meaning "A subdivision of a military force" rather 
than as a code word.

The text in the two paragraphs titled "Activation of Echelon Units" 
describes activities that fit the ordinary usage of the word 
"echelon," which is common military jargon. Also "Echelon" is always 
written in lower case in the text, while  code words generally in all 
caps, e.g. "LADYLOVE or COBRA DANE". (Echelon is capitalized in the 
title of one referenced report, but not in another.) Finally the 
titles "Activation of Echelon Units" are marked "(U)" for 
unclassified in the original text and the referenced reports.  I 
expect that such a sensitive  code word would itself have been 
classified.

I'm not convinced that this batch of documents proves ECHELON's existence.

Arnold Reinhold


At 3:44 PM -0500 1/24/2000, John Young wrote:
Noted intelligence author Jeffrey Richelson and the
National Security Archives have obtained some 17
declassified documents from the NSA tracing its history
and operations. One of them confirms for the first time
in an official document the existence of Echelon
(except for a thumbnail photo of  the word in Duncan
Campbell's EuroParl report):

Richelson's introduction:

   http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB23/index.html

The documents with annotations by Richelson:

   http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB23/index2.html

The lie put out by DoD for years that Echelon was only
a fabrication of journalists is now shown to be what it was.

And there's more good stuff, including a letters of
Stewart Baker and others with a need to know and never
ever tell.




Re: small authenticator

2000-01-19 Thread Arnold G. Reinhold

At 11:13 AM -0600 1/19/2000, Rick Smith wrote:
At 04:49 PM 01/18/2000 -0700, [EMAIL PROTECTED] wrote:
I've got something with around 100 bytes of ram and an 8-bit multiply.
Is there an authentication mechanism that can fit in this?

What types of attacks are you concerned with? That's the main question. If
you have a direct, unsniffable connection from the device to the person
being authenticated, then just stick some secret data in there, and make
the guy provide the secret. Be sure to give him/her a way to change the
secret.

If you're passing the authentication data across a sniffable connection,
then I doubt you have the resources to do unsniffable authentication. That
requires a reasonably strong crypto computation. You can throw some sand in
attackers' eyes by doing challenge/response authentication with weak
encryption, but a determined attacker should be able to recover the secret
from intercepted challenge/response pairs.


You might consider the RC4 algorithm with a 64-byte state array. That 
leaves enough space for a 90-bit secret, stored as 15 six-bit bytes, 
a similar-sized or slightly smaller challenge vector, and a few 
bytes for indexing. The secret and challenge form the key, of course. 
After the key setup, I would generate and discard a large number of 
cypher bytes, say 512, and use the next 15 cypher bytes as the 
response. The challenge array can be overwritten at this point.
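
A sketch of that variant in Python (the framing is an assumption; the
parameters follow the text: a 64-entry state, 15 + 15 six-bit key
values, 512 discarded outputs, a 15-value response):

def respond(secret15, challenge15):
    # RC4 scaled down to mod 64; every key entry is a six-bit value.
    S = list(range(64))
    key = secret15 + challenge15
    j = 0
    for i in range(64):                  # key setup
        j = (j + S[i] + key[i % len(key)]) % 64
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(512 + 15):            # discard 512, keep the next 15
        i = (i + 1) % 64
        j = (j + S[i]) % 64
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 64])
    return out[-15:]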


Arnold Reinhold



Re: US law makes it a crime to disclose crypto-secrets

1999-12-13 Thread Arnold G. Reinhold

Documents were being stamped Confidential, Secret, and Top Secret 
under the regulations of various US government departments long 
before the string of Executive Orders. (The first was 10290, 
"Prescribing Regulations Establishing  Minimum Standards for the 
Classification, Transmission, and Handling, by Departments and 
Agencies of the Executive Branch, of  Official Information which 
Requires Safeguarding in the Interest of the Security of the United 
States," issued by Harry Truman in 1951. The current one is 12958.) 
The Executive Orders standardized the rules across all departments.

I believe the executive branch takes the position that documents 
marked in this way are covered by the Espionage Laws, i.e. sections 
793 and 794, and that position probably goes back to 1917 when the 
laws were first passed. From my reading of the Supreme Court 
decisions in the Pentagon Papers, the courts have the same 
presumption. I suppose an attorney defending someone charged under 
these laws might attack that link, claiming the specific documents in 
question in fact had no bearing on the National Defense as defined in 
793 and 794.  But unless the misclassification was pretty blatant, it 
would be a tough sell.

Has anyone ever done a history of the security classification system?

Arnold Reinhold


At 1:13 PM -0500 12/12/99, Donald E. Eastlake 3rd wrote:
The law you cite is unaffected by whether the information is
classified.  Except for a few special laws, such as the Atomic Energy
Act which makes certain information "born classified" no matter who
comes up with it, and the previously cited Crypto info law which was,
perhaps, an attempt to make comsec and comint information "born
classified", as far as I known the entire classifcation system rests
on a continuing series of Presidential Executive Orders and it is not
clear to me how much they effect someone who is not a government
employee and who has not entered into an agreement regarding such
material.

Donald

From:  "Arnold G. Reinhold" [EMAIL PROTECTED]
X-Sender:  [EMAIL PROTECTED] (Unverified)
Message-Id:  v04210101b4787bb8c0e6@[24.218.56.92]
In-Reply-To:  [EMAIL PROTECTED]
References:  [EMAIL PROTECTED]
Date:  Sun, 12 Dec 1999 08:59:54 -0500
To:  Declan McCullagh [EMAIL PROTECTED], [EMAIL PROTECTED],
[EMAIL PROTECTED], [EMAIL PROTECTED]
Content-Type:  text/plain; charset="us-ascii" ; format="flowed"
Sender:  [EMAIL PROTECTED]
It's not just crypto. The US Espionage laws prohibit the disclosure
of classified information by anyone. See Title 18 Sec. 793(e):

(e) Whoever having unauthorized possession of, access to, or control
over any document, writing, code
book, signal book, sketch, photograph, photographic negative,
blueprint, plan, map, model, instrument,
appliance, or note relating to the national defense, or
information relating to the national defense which
information the possessor has reason to believe could be used
to the injury of the United States or to the
advantage of any foreign nation, willfully communicates,
delivers, transmits or causes to be
communicated, delivered, or transmitted, or attempts to
communicate, deliver, transmit or cause to be
communicated, delivered, or transmitted the same to any person
not entitled to receive it, or willfully
retains the same and fails to deliver it to the officer or
employee of the United States entitled to receive it;
or ...

As I recall, classified documents are required to carry a legend on
each page saying something like "This document contains information
affecting the national defense within the meaning of the espionage
laws, Title 18 793 and 794, the transmission or revelation of which
 to unauthorized persons is prohibited by law." In any case the
restrictions on classified material go far beyond a voluntary
agreement by those given access to keep the information secret.

People who have authorized access take on the additional burden that
negligent handling of classified information is a crime (793 (f)). I
presume this is the basis for prosecuting Dr.Lee of Los Alamos.

It's true that Section 798 specifically includes the word "publishes"
while 793(e)  does not. That distinction, along with legislative
history, was relied on by some of the Justices (e.g. Justice Douglas)
in the Pentagon Papers case. Still I don't think the question of
whether publishing classified material is criminal was clearly
settled. The issue then was prior restraint, not after-the-fact
prosecution. Some of the majority Justices indicated they might even
approve prior restraint if the Government had shown an immediate
danger comparable to publishing the departure time of transport ships
in war time.

Since the Pentagon Papers case, I don't think the Government has
dared to prosecute the press for publishing classified information.
Printing proof that NSA has br

Re: Semantic Forests, from CWD (fwd)

1999-12-02 Thread Arnold G. Reinhold

At 1:34 PM -0800 12/1/99, Udhay Shankar N wrote:
From: [EMAIL PROTECTED]
Date: Wed, 1 Dec 1999 15:18:43 -0500
To: undisclosed-recipients: ;

CyberWire Dispatch // (c) Copyright 1999 // November 30
Sender: [EMAIL PROTECTED]
Precedence: bulk
X-Loop: [EMAIL PROTECTED]

Jacking in from the "Sticks and Stones" Port:
...


Two important reports to the European Parliament, in 1998 and 1999, and
Nicky Hager's 1996 book "Secret Power" reveal that the NSA intercepts
international faxes and emails. At the time, this revelation upset a great
number of people, no doubt including the European companies which lost
competitive tenders to American corporations not long after the NSA found
its post-Cold War "new economy" calling: economic espionage.

Voice telephone calls, however, well, that is another story. Not even the
world's most technically advanced spy agency has the ability to do massive
telephone interception and automatically massage the content looking for
particular words, and presumably topics. Or so said a comprehensive recent
report to the European Parliament.

In April 1999, a report commissioned by the Parliament's Office of
Scientific and Technological Options Assessment (STOA), concluded that
"effective voice 'wordspotting' systems do not exist" and "are not in use".

I wonder about the European Parliament. They sometimes make our 
Congress look intelligent. The existence of speech recognition 
technology is hardly a secret. It's been on the market for years, has 
been improving steadily and is now being offered commercially for 
similar applications. I don't know how effective it is right now at 
telephone monitoring, but it will only get better. Here is an excerpt 
from one vendor's web site: 
http://www.dragonsystems.com/products/audiomining/

"New AudioMining Technology Uses Award-Winning Speech Recognition 
Engine to Quickly Capture and Index Information Contained in Recorded 
Video Footage, Radio Broadcasts, Telephone Conversations, Call Center 
Dialogues, Help Desk Recordings, and More

New advanced technology to retrieve specified information contained 
in hours of recorded  video footage, radio and television broadcasts, 
telephone conversations, call center dialogues,  help desk 
recordings, and more, was demonstrated today by Dragon Systems, Inc. 
of Newton, Mass. ...

The Dragon Systems AudioMining technology converts audio data into 
searchable text, which is  easily accessible by keyword searching. 
This new capability which eliminates the need to listen to hours of 
recordings to find necessary information, can save time and increase 
productivity. It gives users immediate random access to recorded 
materials and enables them to access material using its speech 
content. "

Dragon lists Law Enforcement as one of the potential applications.

I also wonder about stories like this one that might be summarized as 
"Large Government Agency with multi-billion dollar budget for 
monitoring communications is suspected of monitoring communications." 
I remember a story told during the cold war about some reporters in 
Moscow who got together for New Year's Eve in one of their hotel 
rooms. During the evening someone offered a toast to the poor KGB 
operatives who missed out on the holiday celebrations because they 
had to work monitoring their conversations. A short while later the 
phone rang. The person who picked it up heard the pop of a cork, the 
gurgle of a drink being poured and then the caller hung up.

Yes, they really are listening.


Arnold Reinhold



Re: a smartcard of a different color

1999-11-17 Thread Arnold G. Reinhold

At 10:02 AM -0500 11/17/99, Steven M. Bellovin wrote:
In message v04220814b457e31782c9@[204.167.101.35], Robert Hettinga writes:

 --- begin forwarded text


 To: [EMAIL PROTECTED]
 Subject: a smartcard of a different color
 Date: Tue, 16 Nov 1999 22:15:07 -0500
 From: Dan Geer [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]



 Yesterday I saw a smartcard of a different color.  In particular,
 it is the smartcard chip but in a key-ring thing that is more or
 less identical to the Mobil SpeedPass except that it has a USB
 connector on one end and a keyring hole on the other.  Total length
 circa 1.25"; color purple; maker Rainbow Technologies.  As my pal
 Peter Honeyman said in showing it to me, "There are already all
 the USB ports we'll ever need."  I'd point out that without the
 7816 requirement for flex a whole lot more memory is a trivial
 add-on and that USB is not a bandwidth bottleneck.

 --dan

 ref:  http://www.rainbow.com/ikey/graphics/iKey_DS.pdf

Folks I've talked to about products like that say that USB ports aren't
designed for that many insertion/removal cycles.


Per the USB 1.1 Specs, Table 6.7, p.95 
http://usb.org/developers/data/usb11.pdf
Durability Test Performance Requirement is: "1,500 insertion/extraction 
cycles at a maximum rate of 200 cycles per hour." That's 
four years of once-a-day use. They are a bit stiff: insertion force 
is up to 35 Newtons, extraction is 10 N minimum.

(We'll ignore, for now, all
of the PCs that have their USB ports in the back, where you can't get at
them easily.  One could always add on a hub.)

Or a cheap USB extension cable, which would also protect the jack on 
your computer from wearing out. However, many (most?) USB keyboards 
have two USB jacks on them to allow the mouse to be plugged in on 
either side.

USB seems to be the wave of the future in PCs. In introducing a new 
product like smart cards to consumers, there is a lot to be said for 
technology that works with what a large number of consumers already 
have.

Arnold Reinhold





Re: DEA says drug smugglers used crypto Net but cops got around it

1999-10-24 Thread Arnold G. Reinhold

At 10:49 AM -0400 10/22/99, Declan McCullagh wrote:
...

...

PRESS CONFERENCE
WITH U.S. ATTORNEY GENERAL JANET RENO
COLOMBIAN AMBASSADOR ALBERTO MORENO

SUBJECT: ARREST OF COLOMBIAN DRUG TRAFFICKERS
IN OPERATION MILLENNIUM
THE DEPARTMENT OF JUSTICE
WASHINGTON, D.C.
OCTOBER 13, 1999, WEDNESDAY

Acting Administrator Donnie Marshall of the Drug Enforcement Administration

...

In this case, the defendants used very sophisticated communications equipment,
including use of the Internet, encrypted telephones, and cloned cellular
telephones, in what was a vain attempt to avoid detection.  But in the end, it
was these very devices which led to the devastating evidence against them.
Through the use of judicial wiretaps and intercepts in both Colombia and in
the
United States, their communications were intercepted and recorded, thus
producing evidence which comes straight from the defendants' own mouths.

I have long doubted the very premise that encrypted communications 
are an asset to criminals and a threat to law enforcement. The 
standard way LE penetrates criminal organizations is to work from the 
bottom.  Someone at the retail level is caught and pressured to 
cooperate. He implicates a superior, and so on.

Remember that encrypted messages from the superior to the cooperating 
underling can be read with the underling's private key.  Providing that 
key to LE is in many ways less risky to the underling than other 
forms of cooperation. The key need only be provided once, and then there 
is no need for further meetings with agents. Only a few people in LE 
need to know where the key came from, reducing the risk of leaks and 
making them easier to trace.

Once they have that key, LE gets both an ongoing clear stream of 
communications and evidence that is much more damning in court than 
the traditional hard-to-hear and obscurely worded wiretap recording. 
And if encryption gets criminals to communicate more, it could be a 
boon to law enforcement.


Arnold Reinhold



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-21 Thread Arnold G. Reinhold

At 11:39 AM -0500 8/13/99, Jim Thompson wrote:
  This thread started over concerns about diskless nodes that want to
 run IPsec.  Worst case, these boxes would not have any slots or other
 expansion capability. The only source of entropy would be network
 transactions, which makes me nervous...

 An interesting alternative, I think, is an add-on RNG which could go on a
 serial or parallel port.  The bandwidth achievable without loading down
 the machine is limited, but we don't need tremendous speeds, and many PCs
 used as routers, firewalls, etc. have such ports sitting idle.  Even
 semi-dedicated diskless boxes would *often* have one of those.

Of course, such a box already exists.  The complete details of its design
are available, and purchasing the box gives you the right to reproduce
the design (once) such that you can, indeed, verify that you're getting
random bits out of the box.

I spent some time searching the Web for hardware randomness sources 
and I have summarized what I found at 
http://www.world.std.com/~reinhold/truenoise.html.  I located several 
serial port RNG devices and some good sources of white noise that can 
be plugged into a sound port. I don't think I found the box Mr. 
Thompson refers to, but I would be glad to add it to the list.  I 
also included serial and USB video cameras, which may be a good 
source of randomness due to digitization noise, if nothing else.

I still feel strongly that diskless machines that are likely to use 
IPsec or other security software (e.g. SSL) should have a built-in 
source of randomness, a la the Pentium III. If the other 
microprocessor manufacturers won't comply, a TRNG should be included 
on one of the support chips. Randomness generation is so critical to 
public key cryptography that we should insist it be engineered in, 
not pasted on.

Arnold Reinhold




Re: Summary re: /dev/random

1999-08-13 Thread Arnold G. Reinhold

At 12:25 PM -0400 8/11/99, Theodore Y. Ts'o wrote:
   Date: Tue, 10 Aug 1999 11:05:44 -0400
   From: "Arnold G. Reinhold" [EMAIL PROTECTED]

   A hardware RNG can also be added at the board level. This takes
   careful engineering, but is not that expensive. The review of the
   Pentium III RNG on www.cryptography.com seems to imply that Intel is
   only claiming patent protection on its whitening circuit, which is
   superfluous, if not harmful. If so, their RNG design could be copied.

I've always thought there was a major opportunity for someone to come up
with an ISA (or perhaps even a PCI) board which had one or more circuits
(you want more than one for redundancy) that contained a noise diode
hooked up to a digitizing circuit.  As long as the hardware interface
was open, all of the hard parts of a hardware RNG, could be done in
software.

This thread started over concerns about diskless nodes that want to 
run IPsec.  Worst case, these boxes would not have any slots or other 
expansion capability. The only source of entropy would be network 
transactions, which makes me nervous. That is why I feel we should 
pressure manufacturers of such boards to include hardware RNG 
capability in one form or another.

Generic PC's these days come with audio input or can have a sound 
card added easily. Open software that would characterize, monitor and 
whiten the output of an analog noise source connected to the audio-in 
port would meet a lot of needs.

Arnold Reinhold




Re: Summary re: /dev/random

1999-08-10 Thread Arnold G. Reinhold

I have found this discussion very stimulating and enlightening. I'd 
like to make a couple of comments:

1. Mr. Kelsey's argument that entropy should only be added in large 
quanta is compelling, but I wonder if it goes far enough. I would 
argue that entropy collected from different sources (disk, network, 
sound card, user input, etc.) should be collected in separate pools, 
with each pool tapped only when enough entropy has been collected in 
that pool.

Mixing sources gives an attacker added opportunities. For example, 
say entropy is being mixed from disk accesses and from network 
activity. An attacker could flood his target with network packets he 
controlled, insuring that there would be few disk entropy deposits in 
any given quanta release. On the other hand, if the entropy were 
collected separately, disk activity entropy would completely rekey 
the PRNG whenever enough accumulated, regardless of network 
manipulation.  Similarly, in a system with a hardware entropy source, 
adding disk entropy in a mixing mode would serve little purpose, but 
if the pools were kept separate, disk entropy would be a valuable 
backup in case the hardware source failed or were compromised.
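
A minimal sketch of the separate-pools idea (the names and the 128-bit
quantum are assumptions for illustration):

import hashlib

QUANTUM_BITS = 128  # entropy a pool must hold before it is tapped

class Pool:
    def __init__(self):
        self.state = hashlib.sha1()
        self.bits = 0
    def add(self, sample, estimated_bits):
        self.state.update(sample)
        self.bits += estimated_bits

pools = {"disk": Pool(), "network": Pool(), "user": Pool()}

def maybe_rekey(rekey):              # rekey: callback into the PRNG
    for pool in pools.values():
        if pool.bits >= QUANTUM_BITS:
            rekey(pool.state.digest())   # one source alone rekeys the PRNG
            pool.state = hashlib.sha1()
            pool.bits = 0

Flooding the network pool then starves only the network pool; the disk
pool still rekeys the PRNG on its own schedule.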

2. It seems clear that the best solution combines strong crypto 
primitives with entropy collection. I wonder how much of the 
resistance expressed in this thread has to do with concerns about 
performance. For this reason, I think RC4 deserves further 
consideration. It is very fast and has a natural entropy pool built 
in. With some care, I believe RC4 can be used in such a way that 
attacks on the PRNG can be equated to an attacks on RC4 as a cipher. 
The cryproanalytic significance of RC4's imperfect whiteness is 
questionable and can be addressed in a number of ways, if needed.  I 
have some thoughts on a fairly simple and efficient multi-pool PRNG 
design based on RC4, if anyone is interested.

3. With regard to diskless nodes, I suggest that the cryptographic 
community should push back by saying that some entropy source is a 
requirement and come up with a specification (minimum bit rate, 
maximum acceptable color, testability, open design, etc.). An entropy 
source spec would reward Intel for doing the right thing and 
encourage other processor manufacturers to follow their lead.

A hardware RNG can also be added at the board level. This takes 
careful engineering, but is not that expensive. The review of the 
Pentium III RNG on www.cryptography.com seems to imply that Intel is 
only claiming patent protection on its whitening circuit, which is 
superfluous, if not harmful. If so, their RNG design could be copied.


Arnold Reinhold



Re: depleting the random number generator -- repeated state

1999-07-28 Thread Arnold G. Reinhold

At 3:22 PM -0700 7/27/99, Jon Callas wrote:
I built a PRNG that used an RC4 variant as John Kelsey said. The thing is
also actually very Yarrow-like. I modified it later to use a state array
512 long instead of 256 long, just so it would have a larger entropy pool.

When I added more entropy, I added entropy using the same basic algorithm
as RC4 key setup. The difference was that the S-array was not 0..256, but
whatever the state of the array was. You simply *don't* use the input
mechanism that Anonymous described.

I'll also note that the state-loop that Anonymous described can easily be
detected and corrected. Given that this is a PRNG, not a cipher,
predictability is not a requirement (although you can algorithmically
correct in a way that will still make it a cipher).

Someday, I need to update the Entropy Manager (as I called it) and
re-release it.

   Jon

I believe the input mechanism Anonymous described *is* the RC4 key setup
mechanism. In any case, I take Anonymous' remarks about the brittle nature
of RC4 very seriously. I wouldn't mess with it just to double the entropy
pool. If you think more entropy is needed, build a side buffer or run two
copies of RC4.

There is a lot to be said for using a known cryptographic object like RC4
to build other tools. It is very valuable to be able to translate any
imagined attack on the system you are proposing into an equivalent attack
on RC4. You then incorporate all past and future analysis of RC4 into your
system.

Anyway, here is my latest nonce-maker proposal, based on the thread so far
(in Python):

import os, time

# PRNG state: two index bytes and the 256-entry RC4 permutation.
i = j = 0
S = list(range(256))

# Stand-ins for the platform's true entropy source and clock.
def get_a_true_random_byte():
    return os.urandom(1)[0]

def current_time():
    return int(time.time()).to_bytes(8, "big")

def mix(K):
    # One RC4 step: with K = 0 this is the RC4 cipher loop; with K taken
    # from key or entropy bytes it is the RC4 key-setup loop.
    global i, j
    i = (i + 1) % 256
    j = (j + S[i] + K) % 256
    S[i], S[j] = S[j], S[i]
    t = (S[i] + S[j]) % 256
    return S[t]

def setup():
    global i, j, S
    i = j = 0
    S = list(range(256))
    for m in range(256):
        mix(get_a_true_random_byte())
    i = j = 0
    for m in range(256):
        mix(0)                    # discard the first 256 cipher bytes

def deposit(string):
    # Entropy bytes are RC4-encoded before mixing in, to block
    # chosen-entropy attacks.
    global i, j
    prev = mix(0)
    for m in range(len(string)):
        prev = mix(string[m] ^ prev)
    i = j = 0

def getnonce(length):
    nonce = bytearray()
    deposit(current_time())
    for m in range(length):
        nonce.append(mix(0))
    return bytes(nonce)

mix is equivalent to the RC4 setup loop. It is also the RC4 cipher loop if
K=0. The indices i and j are reinitialized after every setup-mode pass
to keep out of the repeated state. All entropy deposits are
RC4-encoded to prevent any chosen-entropy attack.

You would call deposit at opportune times like key presses, mouse moves,
disk and network I/O. Because i and j are reset so often and because nonces
and deposits are likely to be short, the begining of the S arrray will get
more mixing than the rest of the array. Therefore, it might be desirable to
stir the S array throughly every so often, perhaps by calling mix(0)
repeatedly during idle time.


Arnold Reinhold




Re: depleting the random number generator -- repeated state

1999-07-28 Thread Arnold G. Reinhold

At 2:51 PM -0400 7/28/99, Steven M. Bellovin wrote:
In message v04011701b3c4f4fbabb1@[24.218.56.100], "Arnold G. Reinhold"
writes
 I'd spin it the other way. The best approach to making nonces -- DH
 exponents, symetric keys, etc -- is to use a true source of randomness.
 That eliminates one area of risk. However most computers do not come with
 random number sources, so one uses unpredictable events and so on to glean
 entropy. To harvest that entropy you use a whitener. If you use a
 cryptographic function to do your whitening you get the added advantage of
 shielding the randomness pool from an attacker.

Define "best approach".

Perhaps I should have said "The best approach ... is to use a
*theoretically perfect* source of randomness." I tried to point out such
things don't exist and come to the same conclusion you do, namely  "A sound
design mixes both."

At 11:16 AM -0700 7/28/99, Jon Callas wrote:
At 10:49 AM -0400 7/28/99, Arnold G. Reinhold wrote:

   I believe the input mechanism Anonymous described *is* the RC4 key setup
   mechanism. In any case, I take Anonymous' remarks about the brittle nature
   of RC4 very seriously. I wouldn't mess with it just to double the entropy
   pool. If you think more entropy is needed, build a side buffer or run two
   copies of RC4.

It doesn't double the entropy pool. It increases it from being order 256!
to being order 512!.

Good point, but the ratio of log2(512!) to log2(256!) is only 2.3, a little
more than double the number of bits.  That's not worth leaving the
accumulated body of RC4 analysis, IMHO.

That's one of the places where we differ. I never directly add in entropy
deposits. I run a separate entropy pool that is hash-based, and
periodically tap that pool to update the secondary pool. I get really
nervous about adding entropy directly into a single pool. I also like to
capitalize on the properties of hash functions for prepping the entropy.

Can you say what you fear might happen if you directly add entropy
deposits? I don't see the problem.

Arnold Reinhold



Re: depleting the random number generator -- repeated state

1999-07-27 Thread Arnold G. Reinhold

At 12:19 AM -0700 7/27/99, James A. Donald wrote:
--
At 08:44 PM 7/26/99 +0200, Anonymous wrote:
 Even aside from active attacks, there is a possible problem based on
 the fact that RC4 can "almost" fall into a repeated-state situation.
 RC4's basic iteration looks like:

 (1)  i += 1;
 (2)  j += s[i];
 (3)  swap (s[i], s[j]);
 (4)  output s[s[i] + s[j]];

 (everything is mod 256)

 The danger is that if it ever gets into the state j = i+1, s[j] = 1,
 then it will stay that way.  It will increment i, then add s[i] to j,
 which will also increment j.  Then which it swaps s[i] and s[j] it will
 make s[j] be 1 again.

 However in normal use this never happens, because this condition
 propagates backwards as well as forwards; if we ever are in this state,
 we always were in this state.  And since we don't start that way, we
 never get that way.

Why don't we start that way?

The initialization rule is  for i = 0 to 255
   j = j+ s[i] + input(i)
   swap s[i], s[j]
next i;

To go bad at the end of initialization it has to wind up in the
state j=1 (which can always be forced true by some suitable input) and
s[1] = 1.

What stops it from ending up with s[1]=1?


Nothing, but after RC4 key setup, i and j are reinitialized to zero. That
breaks up the conditions for the repeated state. I must confess that until
yesterday I never understood why these indices were reinitialized before
cipher generation. Now I know. What I said yesterday about the sequential
incrementing of i preventing repeated states is clearly wrong. (Anonymous'
posting arrived on the same mail check that posted my messages. Sigh.)

I am still not ready to give up on switching between extracting and
depositing entropy in RC4. I think there will be a need for a secure,
lightweight and fast nonce generator and RC4 could provide that.

One fix is to do what RC4 does and always reinitialize i and j after
running in deposit mode. An attacker who could choose the entropy to
deposit and some how figured out the right values to use could put you into
the repeated state during that entropy deposit. But this would do no harm
and you would exit the repeated state after the deposit.  The situation is
equivalent to an attack on vanilla RC4 where the attacker can force you to
choose the last part of your key.

It would also be wise to always extract some cypher bytes between entropy
deposits so an attacker who got you to make two deposits in a row would not
know the value of j at the start of the second deposit.

One other thought I had is to save the previous cipher byte and add it to
the entropy byte in each cycle.  I don't think this step is needed, but it
does force an attacker to break RC4 itself in order to mount a chosen
entropy attack. That might give people more confidence in this approach to
nonce generation.

Arnold Reinhold





Re: depleting the random number generator

1999-07-26 Thread Arnold G. Reinhold

At 1:49 PM -0700 7/25/99, David Wagner wrote:
In article v04011700b3c0b0807cfc@[24.218.56.100],
Arnold G. Reinhold [EMAIL PROTECTED] wrote:
 One nice advantage of using RC4 as a nonce generator is that you can easily
 switch back and forth between key setup and code byte generation. You can
 even do both at the same time. (There is no need to reset the index
 variables.) This allows you to intersperse entropy deposits and withdrawals
 at will.

Oh dear!  This suggestion worries me.
Is it reasonable to expect this arrangement to be secure
against e.g. chosen-entropy attacks?  [John Kelsey makes the same point]

You raise a good question, but I think I can demonstrate that it is safe.
Here is the inner loop of the algorithm I am proposing in its most extreme
case: generating cipher bytes and accepting entropy at the same time.
(using Bruce Schneier's notation from Applied Cryptography, 2nd ed.):

i = i + 1 mod 256
j = j + S[i] + K[n] mod 256
swap S[i] and S[j]
t = S[i] + S[j] mod 256
next cipher byte = S[t]

Here K[n] is the next byte of entropy.

Note that RC4 code generation is exactly the same except that K[n] = 0 for
all n.

Assume an attacker initially does not know the state of the S array or the
value of j (you used 256 bytes of strong entropy as your initial RC4 key
and then discarded the next 256 cipher bytes like your mama taught you), but
does know i. (The attacker has been counting, knows the length of your
initial key setup and was able to shut out all other activity.)  Also
assume the attacker gets to choose each K[n] and then gets to see each
cipher byte.

If you look at the last two lines of the loop, you can see that the
attacker needs to know something about the new value of j to learn any
information about the state of the S array from a cipher byte.  Now focus
on the second line of the algorithm.  To know anything about the new value
of j, he needs to know something about the old value of j AND something
about the value of S[i].  By assumption he knows neither. Therefore he
learns nothing about the new value of j and thus nothing about the state of
the S array.

Since addition mod 256 is a group, being able to choose K is no more
helpful in learning the new value of j than knowing K's value, which you
always know during code generation in RC4 (it's zero, as pointed out
above). You might think there could be a special situation that you could
wait for where you can use your ability to pick K to keep RC4 in a small
loop, but step 1 ensures that a new S[i] is brought into the calculations
each time.

I believe this shows that adding entropy as you go, even if it might be
chosen by an attacker, is no more risky than a known plaintext attack
against vanilla RC4.

Of course in the original situation I proposed, the attacker could at best
choose only some of the entropy added.

For extra insurance, someone using RC4 as a nonce generator might want to
discard a random number (< 256) of cipher bytes after the initial key setup.
This would deny an attacker any knowledge of the value of i beforehand.
Also, generating nonces and adding entropy in separate operations, which is
the natural thing to do from a programming perspective, results in
additional mixing and further complicates the problem for an attacker.


At 11:55 PM -0500 7/25/99, John Kelsey wrote:

[Arnold R] In particular, if you deposit the time of each entropy
withdrawal, the proposed denial of service attack that
started this thread would actually replenish a few bits of
entropy with each service request.

[John K] This isn't a bad idea, but I'd be careful about assuming
that those times hold much entropy.  After all, a given
piece of code which has thirty calls to the PRNG probably
runs in about the same amount of time every time, barring
disk or network I/O.


I was careful to say "a few bits of entropy with each service request."
The service requests I was referring to were the attacker's attempts to set
up an IPsec tunnel. These involve network traffic and so can be expected to
generate some entropy.  Here is John Denker's [EMAIL PROTECTED]
original description of the attack:

Step 1) Suppose some as-yet unknown person (the "applicant") contacts
Whitney and applies for an IPsec tunnel to be set up.  The good part is that
at some point Whitney tries to authenticate the Diffie-Hellman exchange (in
conformance with RFC2409 section 5) and fails, because this applicant is an
attacker and is not on our list of people to whom we provide service.  The
bad part is that Whitney has already gobbled up quite a few bits of entropy
from /dev/random before the slightest bit of authentication is attempted.

Step 2) The attacker endlessly iterates step 1.  This is easy.  AFAIK there
is no useful limit on how often new applications can be made.  This quickly
exhausts the entropy pool on Whitney.

Step 3a) If Whitney is getting key material from /dev/random, the result is
a denial of service ...

Re: depleting the random number generator

1999-07-25 Thread Arnold G. Reinhold

At 8:35 AM -0700 7/21/99, James A. Donald wrote:
--
At 09:24 PM 7/19/99 +0100, Ben Laurie wrote:
 So what you are saying is that you'd be happy to run your server
forever on an initial charge of 128 bits of entropy and no more
 randomness ever?

Yes, though I would probably prefer an initial charge of 1684 bits of
entropy.  (the number of possible internal states of an RC4 state
machine used as a pseudo random number generator.)


One nice advantage of using RC4 as a nonce generator is that you can easily
switch back and forth between key setup and code byte generation. You can
even do both at the same time. (There is no need to reset the index
variables.) This allows you to intersperse entropy deposits and withdrawals
at will.

In particular, if you deposit the time of each entropy withdrawal, the
proposed denial of service attack that started this thread would actually
replenish a few bits of entropy with each service request.
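
As a concrete illustration (a sketch of this editor's own, reusing the
deposit() and extract() routines sketched earlier in this archive), each
nonce request would fold a timestamp back into the pool before producing
output:

    #include <time.h>

    extern void deposit(const unsigned char *e, int n);
    extern unsigned char extract(void);

    /* each withdrawal deposits the time at which it was made */
    unsigned char nonce_byte(void)
    {
        time_t now = time(NULL);
        deposit((const unsigned char *)&now, (int)sizeof now);
        return extract();
    }

Only the low-order bits of the timestamp carry any entropy, of course,
but a few bits per request is all that is claimed above.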

In addition RC4 is simple, making the code easy to inspect, and about as
fast as you can get in software.


Arnold Reinhold



DES vs RC4 -- A correction (Re: so why is IETF still adding DES to protocols?)

1999-07-12 Thread Arnold G. Reinhold

At 1:29 PM -0400 7/1/99, I wrote:

How much of an improvement 56-bit DES actually gives over the customary
implementation of "40-bit" RC4 is open to question.  Naively the difference
is 16 bits or a factor of 64K. However, as I understand it, the "40-bit"
RC4 is actually 128 bit RC4 with 88 bits of key revealed, effectively
serving as 88 bits of salt. But there is no way to use salt with DES, so a
search engine can easily test for many keys at the same time. For a
surveillance operation one could imagine searching against hundreds of keys
at once.

Also I did a back-of-the-envelope estimate that suggests RC4 takes about
the same amount of silicon as DES for a custom logic search engine, but
runs about 200 times slower due to the key setup.  Together these effects
could eliminate most of that 64K improvement factor.

It might be better to use "56-bit" RC4 (i.e. 128 bit with 72 bits revealed)
if this would still be exportable.


I must retract part of what I wrote above. Using DES in feedback mode (e.g.
CBC) along with a random or unique IV prevents the attack I described, with
the IV providing essentially the same benefits as salt. Thus 56-bit DES-CBC
should be a major improvement over "40-bit" RC4. On the other hand, I still
contend DES-ECB would be a step backward. Does the IETF's DES proposal
include feedback and a suitable IV?

I think there is some relevance here to the more political question of
whether IETF should bless any DES implementation. Details matter. Well
thought out and publicly reviewed standards are vital, even for weak
encryption.

Arnold Reinhold




RE: DES vs RC4 -- A correction

1999-07-12 Thread Arnold G. Reinhold

At 6:17 PM +0300 7/12/99, Ivars Suba wrote:
In MS-CHAPv.1, the data encryption technique named MPPE (MS Point-to-Point
Encryption), which uses 40-bit RC4 in OFB mode (with a constant salt!), is
vulnerable to a resynchronization attack (http://www.counterpane.com) on two
sessions encrypted with the same key, because the initial session key is
obtained from the 64-bit LM hash, with the first three bytes fixed at 0x1226DE
(http://www.ietf.org/internet-drafts/draft-ietf-pppext-mschapv1-keys-00.txt).
If we replaced the 40-bit RC4 OFB with 40-bit DES-CBC under the same
provisions, the new DES-40 CBC would not be vulnerable to the same attack.


My comparison of RC4 and DES based systems assumed competent
implementations of both. A good implementation of DES-ECB beats a broken
implementation of anything else.  And MS-CHAPv.1 is clearly broken, as the
Counterpane folks point out. It is true that RC4, being a stream cipher, is
less tolerant of bad implementation than a block cipher like DES, but it
isn't that hard to get it right.  Was Microsoft under NSA pressure when
they designed this stuff, or did they come up with it all on their own? (I
can't decide which scenario scares me more.)

Also, I was assuming "40-bit RC4" meant 128 bit RC4 with 88 randomly
generated key bits revealed. For what it is worth, a friend of mine
recently attended a digital showing of "Star Wars Episode 1" at a theater
in NJ. The movie was stored on a 300GB RAID which he said was about the
size of a milk crate. A complete dictionary attack on a 40 bit code only
requires about a dozen times that much disk space. So when digital movie
distribution becomes commonplace, your favorite 12-screen suburban
cinemaplex will have enough computing capacity to break salt-free 40-bit
codes like MPPE in real time.
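
A rough check of that claim (this editor's arithmetic, assuming the
attacker stores about 4 bytes of keystream per key, enough to recognize a
hit against a known constant salt):

    2^40 keys x 4 bytes/key  ~=  4.4 x 10^12 bytes  ~=  4.4 TB
    4.4 TB / 300 GB per RAID ~=  15 milk crates

so "about a dozen times" the movie's storage is indeed the right order of
magnitude.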

All this only goes to show that terms like "56-bit" and "40-bit" and even
"128-bit" are not enough to specify what level of security a system
provides. The details matter and good standards are vital.

Arnold Reinhold




Book on Internet Security and SSL?

1999-07-09 Thread Arnold G. Reinhold

A friend of mine is looking for an introductory-level book that explains
internet security issues (SSL in particular). Any suggestions?



Re: hushmail security

1999-06-16 Thread Arnold G. Reinhold

At 4:51 PM + 5/31/15, [EMAIL PROTECTED] wrote:

Maybe you could make your own local html page and download the applet
JAR file once and for all, then refer to that when you wanted to use hushmail.
Or better still, build the applet file yourself, if they supply the
source.  I'm not
sure if the Java rules would allow a local applet loaded by a browser to do
internet access, though.
...

The applet source is available from the HushMail site.  I am not aware of
any additional restrictions on a local applet or any way for HushMail to
tell the difference. On the contrary, you could convert their source to a
Java application and then be free of all Java "sandbox" restrictions. You
would have to keep up with future changes HushMail makes in the applet, though.

The source code (1.03) confirms that HushMail does not use salt before
hashing the passphrase. That is a serious weakness, as we have been
discussing. Users can compensate by choosing a longer passphrase or by
appending a unique non-secret value, e.g. their phone number or hushmail
user name, to their passphrase. The latter approach still means more typing,
but not more memorizing.

HushMail should fix this, perhaps by appending the user name to the
passphrase automatically.  This would eliminate the need to store the salt
in their database. For backwards compatibility, the applet could simply try
both ways (with and without an appended user name).
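
In code, the fix amounts to something like this minimal sketch (not
HushMail's actual source; the sha1() routine and the names here are
assumptions for illustration):

    #include <string.h>

    /* assumed to be provided elsewhere */
    extern void sha1(const unsigned char *in, unsigned long len,
                     unsigned char out[20]);

    /* the user name serves as the salt; nothing extra to store */
    void derive_key(const char *passphrase, const char *username,
                    unsigned char key[20])
    {
        unsigned char buf[512];
        size_t lp = strlen(passphrase), lu = strlen(username);

        if (lp + lu > sizeof buf)
            return;                  /* real code must handle overflow */
        memcpy(buf, passphrase, lp);
        memcpy(buf + lp, username, lu);
        sha1(buf, (unsigned long)(lp + lu), key);
    }

For backwards compatibility the applet would simply try the hash both with
and without the appended user name, as suggested above.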

Arnold Reinhold



Re: Salt (was: ICSA certifies weak crypto as secure)

1999-06-04 Thread Arnold G. Reinhold

At 9:18 AM +1000 6/2/99, Greg Rose wrote:
At 16:38 1/06/99 -0400, it was written: [by Arnold Reinhold]
...

I would argue that UNIX is an excellent object lesson for John's point. 12
bits was a bad design decision, even in the 70's.

I take exception to this last statement. The design (of the salted
passwords) was done in the late 70s, and given the constraints of the time
was quite a reasonable decision. Remember that memory and long term storage
on PDP-11s was tight. The machines couldn't possibly support user
populations greater than hundreds (in particular, UIDs were only 8 bits!);
10-12 was typical, and 40 was a hugely overloaded university.

Implementing a longer salt would have required one or two bytes per user in
the passwd file. By your own numbers that would have totaled less than 100
bytes of disk space (40 users x 2 bytes = 80 bytes), hardly a constraint
even way back in '79.

...

In fact, I think that (intentionally or not) this choice makes a great
example of the right kind of engineering tradeoffs. As Bruce Schneier
recently said more eloquently than I, anyone can overengineer it...
designing to constraints is much harder and more interesting. People who
believe in day-to-day use of One Time Pads (I'm not accusing anyone here of
this :-) ) are merely the furthest away from practicality.

Overengineering leads to things like SET, multimegabyte key schedules (I
hope David Scott isn't listening), and software bloat in general. I *like*
the elegance of the UNIX salt scheme, and we all learned from it. Sure,
we'd do it differently today. That's partly because of what it taught us.
But, knowing what we know now, would we have done it differently *then*,
with that set of constraints? The answer to that is not at all obvious.


I am not knocking the designers of UNIX, who got an awful lot right the
first time, and, after all, were doing a research project.  Even
crypt() itself was novel and a big step forward.  But I strongly disagree
that leaving room for growth beyond immediately foreseeable requirements is
over-engineering. On the contrary, it is usually the committee that has to
fix an under-designed system that ends up producing the bloatware, since at
that point it is hard to say "no" to anyone.

The exponential relationship between field size and name space means that
the difference between a design that will last for a few years and a design
that will last forever is usually quite small. Compare the 32-bit IP
address space with the 48-bit scheme that the designers of Ethernet chose
in the same era. The former is now a pain in the butt and its replacement,
IPv6, is a candidate for the bloatware moniker. Nobody worries about
running out of Ethernet addresses and no one ever will.

(IMHO the design decision that would most profitably have been changed was
the limitation to 8-character passwords, not the salt.)

I agree with you here, though as Steve Bellovin pointed out, hashing hadn't
been invented yet. Sigmund Porter first came up with the passphrase idea in
1981 [1]. The hubris-laden decision to make the passwd file world-readable
is another candidate for when we get that time machine working.
...
If I design something like this that is still in very
widespread use in 2020, I'll consider myself to have done very well, or
society will be to blame, one or the other. ...

Unfortunately we never know which of our designs will end up in the bit
bucket and which will be cast in stone. And, when we are talking crypto, we
never know just how important the secrets will be that our designs end up
protecting. I'd rather nail each problem the first time.

Arnold Reinhold


[1] S. N. Porter, A Password Extension for Improved Human Factors,
Advances in Cryptology: A Report on CRYPTO 81, Allen Gersho, editor, volume
0, U.C. Santa Barbara Dept. of Elec. and Computer Eng., Santa Barbara,
1982. Pages 81--81. Also in Computers & Security, Vol. 1, No. 1, 1982,
North Holland Press.



Re: ICSA certifies weak crypto as secure

1999-05-28 Thread Arnold G. Reinhold

At 1:36 PM -0400 5/27/99, Kawika Daguio wrote:
 What I would like to know from you is whether you and others have been
able to construct a "duh" list of typical, but unacceptable current
practices that can easily be remediated.

Here are my top 10 candidates for a "duh" list:

1. Keys that are too short: Anything less than 80 bits for symmetric
ciphers (128 bits preferred), or 1024 bits for integer-based public key
systems. In particular this precludes use of 56-bit DES. (112-bit 3DES is
fine.)

2. Poor quality random number generation. Random quantities are needed at
many places in the operation of a modern cryptographic security system. If
the source of randomness is weak, the entire system can be compromised.

3. Use of short passwords or weak passphrases to protect private keys or,
worse, using them to generate symmetric keys. Bad passphrase advice
abounds. For example, both Netscape and Microsoft advise using short
passwords to protect private keys stored by their browsers. The simple fix
is to use randomly generated passphrases of sufficient length. See
http://www.hayom.com/diceware.html.

4. Re-use of the same key with a stream cipher. I have seen this done many
times with RC4.  Even Microsoft appears to have gotten this wrong with
their VPN (I do not know if it has been fixed). There are simple techniques
to avoid this problem but they are often ignored.  See
http://ciphersaber.gurus.com for one method; a sketch appears after this
list. The potential for slipping up in stream cipher implementation makes a
strong case for using modern block ciphers wherever possible.

5. Using systems based on encryption techniques that have not been
publicly disclosed and reviewed. There are more than enough ciphers and
public key systems out there that have undergone public scrutiny.  Many of
the best are now in the public domain: 3DES, Blowfish, Skipjack, Arcfour,
D-H, DSA. Others, e.g. RSA, IDEA can be licensed.

6. Ignoring physical security requirements for high value keys. In
particular, no secret key is safe if it is used on a personal computer to
which someone who is not trusted can gain physical access.

7. Lack of thorough configuration management for cryptographic software.
The best software in the world won't protect you if you cannot guarantee
that the version you approved is the version being executed.

8. Poor human interface design. Cryptographic systems that are too hard to
use will be ignored, sabotaged or bypassed.  Training helps, but cannot
overcome a bad design.

9. Failure to motivate key employees. Action or inaction, deliberate or
inadvertent, by trusted individuals can render any security system worse
than worthless.  David Kahn once commented that no nation's communications
are safe as long as their code clerks are at the bottom of the pay scale.

10. Listening to salesmen.  Any company that is selling cryptographic
products has a good story for why the holes in their product really do not
matter. Make sure the system you deploy is reviewed by independent experts.
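
Here is the sketch promised in item 4 (this editor's illustration, modeled
on the CipherSaber approach rather than copied from it): never key the
stream cipher with the bare secret. Append a fresh random IV for each
message and send the IV in the clear ahead of the ciphertext.

    #include <string.h>

    /* standard RC4 key setup */
    static void rc4_setup(unsigned char S[256],
                          const unsigned char *key, size_t len)
    {
        unsigned int i; unsigned char j = 0, t;
        for (i = 0; i < 256; i++) S[i] = (unsigned char)i;
        for (i = 0; i < 256; i++) {
            j = (unsigned char)(j + S[i] + key[i % len]);
            t = S[i]; S[i] = S[j]; S[j] = t;
        }
    }

    /* per-message key = secret || 10 random IV bytes */
    void key_for_message(unsigned char S[256],
                         const unsigned char *secret, size_t slen,
                         const unsigned char iv[10])
    {
        unsigned char k[256];
        if (slen > 246) slen = 246;   /* leave room for the IV */
        memcpy(k, secret, slen);
        memcpy(k + slen, iv, 10);
        rc4_setup(S, k, slen + 10);
    }

Since no two messages are ever enciphered under the same RC4 key, the
keystream is never reused.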


Arnold Reinhold



