Re: Russia Intercepts US Military Communications?

2003-04-03 Thread Arnold G. Reinhold
At 2:15 PM -0500 4/1/03, Ian Grigg wrote:
Some comments from about a decade ago.

The way it used to work in the Army (that I
was in) within a battalion, is that there was
a little code book, with a sheet for a 6 hour
stretch. Each sheet has a simple matrix for
encoding letters, etc.  Everyone had the same
sheet, and they were created centrally and
distributed from there.  If any sheets were
lost, it was a major disaster.
All soldiers were taught to code up the messages;
it was one of the more boring lessons.  In
practice, corporals and sergeants did most
of the coding, but it was still a slow and
cumbersome process.
The Army actually has a training course (from 1990) on-line that 
describes such a system in detail. The cipher system, called DRYAD, is 
covered in 
https://hosta.atsc.eustis.army.mil/cgi-bin/atdl.dll/accp/is1100/ch4.htm 


Re: Russia Intercepts US Military Communications?

2003-03-31 Thread Arnold G. Reinhold
At 2:10 PM -0500 3/31/03, reusch wrote:
...

Nosing around on the same site, one finds
How military radio communications are intercepted
http://www.aeronautics.ru/news/news002/news071.htm
Searching for SINCGARS indicates that all US military radios have
encryption capabilities, which can be turned off.  Several key
distribution systems now in use are mentioned.  Perhaps these systems,
or even encryption with infrequently changed keys, are, as you suggest,
too inconvenient to use under the conditions.  -MFR
There is a lot of material on SINCGARS available on line via Google. 
This is a low-VHF system used primarily by U.S. ground forces and 
those who want to talk to them.  It offers both frequency hopping and 
Type-1 encryption (at least in the newer models) and can also be used in 
single channel, unsecured mode to talk to older VHF-FM radios. 
According to one source, about 164,000 SINCGARS radios have been 
fielded and all older VRC-12 radios should have been replaced by 2001.

The key management systems (nightmare may be a better term) are 
described in considerable detail in 
http://www.fas.org/man/dod-101/sys/land/sincgars.htm . It's from 1996 
and makes very interesting reading. For example, radios have to have 
their time set to within 0.4 sec of GMT. It's easy to believe that 
units switch to un-encrypted modes under the stress of battle.

Even though the radios seem quite versatile, the usage is extremely 
hierarchical.  News reports have stated that one advance in this war 
is that the daily tasking order can now be distributed 
electronically.  This probably includes all the material needed to 
set up the SINCGARS (frequency hop list, frequency hopping keys, 
communications security keys, call sign lists, network IDs, etc.). 
That may make things a little better than in 1996.

I went to a lecture at MIT by someone from the US Army talking about 
the soldier of the future, an integrated body 
armor/backpack/electronics system. I asked about encryption and he 
said it was Army doctrine not to use it at the intra-squad level. 
Key management is one of the issues. That is consistent with the 
number of SINCGARS radios produced. So there should be plenty of open 
voice traffic to analyze.

Arnold Reinhold



Kashmir crypto

2003-03-31 Thread Arnold G. Reinhold
While Googling for material on SINCGARS, I found an article about 
crypto in the India/Pakistan conflict. Old style cryptanalysis isn't 
dead yet:

http://www.tactical-link.com/india_pakistan.htm

Arnold Reinhold



Re: Active Countermeasures Against Tempest Attacks

2003-03-11 Thread Arnold G. Reinhold
At 11:43 PM -0800 3/10/03, Bill Stewart wrote:
At 09:14 AM 03/10/2003 -0500, Arnold G. Reinhold wrote:
On the other hand, remember that the earliest Tempest systems
were built using vacuum tubes. An attacker today can carry vast amounts
of signal processing power in a briefcase.
And while some of the signal processing jobs need to scale with the 
target systems, as computer clock speeds get faster, the leakage gets
higher and therefore shielding becomes harder.
Most of the older shielding systems can do fine with the 70 MHz 
monitor speeds,
but the 3 GHz CPU clock speed is more leaky.  Millimeter wavelengths are
_much_ more annoying.
All in all I would not put much faith in ad hoc Tempest protection. 
Without access to the secret specifications and test procedures, I 
would prefer to see highly critical operations done using battery 
powered laptops operating in a Faraday cage, with no wires crossing 
the boundary (no power, no phone, no Ethernet, nada).  In that 
situation, one can calculate shielding effectiveness from first 
principles. 
http://www.cs.nps.navy.mil/curricula/tracks/security/AISGuide/navch16.txt 
suggests US government requirements for a shielded enclosure are 60 
dB minimum.
Back when most of the energy lived at a few MHz, it was easy to make 
enclosures
that had air vents that didn't leak useful amounts of signal.  It's 
harder today.
So take your scuba gear into your Faraday cage with you :-)
One of my pet ideas is to use older, 1990s-vintage laptops for 
secure processing, e.g. reading PGP mail, generating key pairs, 
signing submaster keys, etc.  They are cheap enough to dedicate to 
the task; they'd be off most of the time, thereby reducing 
vulnerability; older operating systems and firmware have fewer 
opportunities for mischief; and most viruses won't run on the old 
software.  Easier shielding due to the lower clock rate is an advantage I 
hadn't thought of before.

Basically, if you've got a serious threat of TEMPEST attacks,
you've got serious problems anyway...
You could say that about strong crypto in general. Anyone with 
valuable information stored on a computer has lots to worry about.

Arnold Reinhold



Re: Active Countermeasures Against Tempest Attacks

2003-03-10 Thread Arnold G. Reinhold
At 9:35 PM -0500 3/8/03, Dave Emery wrote:
On Fri, Mar 07, 2003 at 10:46:06PM -0800, Bill Frantz wrote:
 The next more complex version sends the same random screen over and over in
 sync with the monitor.  Even more complex versions change the random screen
 every-so-often to try to frustrate recovering the differences between
 screens of data on the monitor.
Five or six years ago I floated the suggestion that one could do
worse than phase lock all the video dot clock oscillators in a computer
room or office to the same master timing source. This would make it
significantly harder to recover one specific monitor's image by
averaging techniques as the interference from nearby monitors would have
exactly the same timing and would not average out as it does in the more
typical case where each monitor is driven from a video board with a
slightly different frequency dot clock (due to aging and manufacturing
tolerances).
The dot clock on a megapixel display is around 70 MHz, or 14 
nanoseconds per pixel. Syncing that over some distance is not 
trivial. Remember, light travels about one foot per nanosecond. On the 
other hand, I think syncing the sweep signals would be enough to 
implement your idea and that should not be hard to do, possibly even 
in software since they are created on the video card.
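
Just to sanity-check those numbers, here is a back-of-the-envelope 
calculation (a Python sketch; the 1024x768 resolution, 75 Hz refresh 
and 25% blanking overhead are illustrative assumptions of mine):

pixels = 1024 * 768        # a "megapixel" display, roughly
refresh_hz = 75            # assumed vertical refresh rate
blanking_overhead = 1.25   # extra time for horizontal/vertical retrace

dot_clock_hz = pixels * refresh_hz * blanking_overhead
print(f"dot clock  ~ {dot_clock_hz / 1e6:.0f} MHz")    # ~74 MHz
print(f"pixel time ~ {1e9 / dot_clock_hz:.1f} ns")     # ~13.6 ns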

Effectiveness is another matter. The attacker could use a directional 
antenna to separate out monitors. Even if his equipment was outside 
the building, the windows would act like an antenna whose radiation 
pattern would be different for the different monitors in the room. 
The attacker might be able to discriminate between different monitors 
just by driving his van around outside.

Even if he can't distinguish between different monitors, he still 
gets a signal that is the sum of the content on each monitor.  That 
is analogous to a book code and likely just as secure, i.e. not very.

Modifying existing video boards to support such master timing
references is possible, but not completely trivial - but would cost
manufacturers very little if it was designed in in the first place.
Modifying existing monitors to shield the video signal wouldn't cost 
that much either. As I understand it, the big expense in Tempest-rated 
equipment is the testing and the tight manufacturing control needed 
to ensure that the monitors produced are the same as the ones tested.

And of course one could improve the shielding on the monitor
with the dummy unimportant data so it radiated 10 or 20 db more energy
than the sensitive information monitor next to it.   In many cases this
might involve little more than scraping off some conductive paint or
removing the ground on a cable shield.
Simply buying some Class A monitors for the dummy data might do what 
you want, but I'm not sure 10-20 dB of reduced signal-to-background 
ratio buys you much.  I've heard numbers of 100 dB or more required for 
effective Tempest shielding, with Class B shielding (the stricter 
FCC requirement) buying you 40-50 dB. See for example 
http://www.cabrac.com/RFI_EMI_Tempest.html

I am sure that it would take little effort with a spectrum
analyzer and some hand tools to defeat most of the EMI suppression
in many monitors and whilst this would not be entirely legal under
FCC rules (at least for a manufacturer or dealer) it probably would
be closer to legal than deliberately creating rf interference
with an intentionally radiating jammer.
I imagine, however, that the usefulness of the RF radiated by a
modern TFT flat panel display fed with DVI digital video is already much
less as there is no serial stream of analog pixel by pixel video energy
at any point in such an environment.  Most TFTs do one entire row or
column of the display at a time in parallel which does not yield an
easily separated stream of individual pixel energy.   Thus extracting
anything resembling an image would seem very difficult.
The signal is still serialized in digital form at some point on a 
pixel-by-pixel basis.  Because flat panels do not have the high-power 
sweep signals of CRT monitors, the overall shielding needed to meet 
Class B may be less.  That might make life easier for attackers.

This does suggest one simple approach that might be useful for flat 
panels displaying sensitive text: choose foreground and background 
colors that have the same number of on and off bits in each color 
byte pair, e.g. foreground red and background red each have three 
bits on, both blues have four bits on, both greens have five bits on. 
That might make background and foreground more difficult to 
distinguish via RF radiation in an all-digital system.
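
To make that concrete, here is a minimal sketch of the color 
selection (Python; it assumes 8-bit color channels, and the byte pairs 
it prints are merely illustrative):

# Find foreground/background byte values with equal Hamming weight,
# so that, per channel, both colors drive the same number of data
# lines high in an all-digital link.
def popcount(b):
    return bin(b).count("1")

def balanced_pairs(weight, limit=4):
    candidates = [b for b in range(256) if popcount(b) == weight]
    return [(fg, bg) for fg in candidates for bg in candidates
            if fg != bg][:limit]

# e.g. both red bytes with three bits on:
for fg, bg in balanced_pairs(3):
    print(f"fg red = {fg:08b}, bg red = {bg:08b}")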

So perhaps the era of the simplest to exploit TEMPEST threats
is ending as both optical and rf TEMPEST is much easier with raster
scan pixel at a time CRT displays than it is with modern more parallel
flat panel display designs.
On the other hand, remember that the earliest Tempest systems were 
built using vacuum 

Re: Active Countermeasures Against Tempest Attacks

2003-03-09 Thread Arnold G. Reinhold
At 10:46 PM -0800 3/7/03, Bill Frantz wrote:
It has occurred to me that the cheapest form of protection from tempest
attacks might be an active transmitter that swamps the signal from the
computer.  Such a transmitter would still be legal if its power output is
kept within the FCC part 15 rules.
Take, for example, the signal from a CRT monitor.  The monitor signal
consists of large signals which are the vertical and horizontal sync
pulses, and smaller signals which are the levels of each of the phosphor
guns.
The simplest countermeasure would be random RF noise which is many orders
of magnitude stronger than the signal from the monitor.  However, with this
system, the attacker can average many fields from the monitor and perhaps
still recover the signal because any given pixel is the same, while the
noise is random.  (Or at least the pixels change slowly compared with the
fields, giving lots of data to average.)
The next more complex version sends the same random screen over and over in
sync with the monitor.  Even more complex versions change the random screen
every-so-often to try to frustrate recovering the differences between
screens of data on the monitor.
Can such a device be built and still stay within the Part 15 rules?

Cheers - Bill

Part 15 is pretty complex, but reading a summary at 
http://www.arrl.org/tis/info/part15.html suggests a number of 
problems. First, there are dozens of bands where intentional radiators 
are not permitted to operate (15.205). Designing a noise source that 
avoids all these bands might be difficult.

Second, the permitted signal levels associated with intentional 
radiators (15.209) are very similar to those permitted for 
unintentional radiators (15.109), including most consumer grade CRT 
monitors (Class B). Commercial monitors (Class A) are permitted 
higher levels of radiation, but I suspect most monitors made today 
are Class B.

Now the radiation from a monitor is mostly sweep signals and the 
like, which carry no information. The signals that drive the CRT guns 
are much weaker. But I suspect you will need the noise to be much 
more powerful to obliterate the signal carrying data. The situation 
is even worse if the attacker suspects what the data may contain. He 
can then use correlation techniques to find the data well below the 
noise level.

I'd also point out that the noise source has to be co-located with 
the data signal. Otherwise, the attacker can use a directional 
antenna to capture the noise signal without the data signal, allowing 
it to be subtracted from the data+noise signal.  Similarly, it will 
be vital to change the noise pattern whenever the content of the CRT 
changes, otherwise the attacker who had reason to suspect when the 
screen changed can subtract data1+noise from data2+noise to get 
data2-data1, which is likely to leak a lot of information.
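
A toy numerical illustration of that subtraction (a Python sketch; the 
sample values are arbitrary):

signal1 = [3, 1, 4, 1, 5]          # screen contents before the change
signal2 = [3, 1, 9, 1, 5]          # screen contents after the change
noise   = [7, 2, 8, 1, 8]          # jammer output, unchanged between them

rx1 = [s + n for s, n in zip(signal1, noise)]
rx2 = [s + n for s, n in zip(signal2, noise)]

diff = [a - b for a, b in zip(rx2, rx1)]
print(diff)    # [0, 0, 5, 0, 0] -- the noise cancels; data2-data1 leaks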

I suspect it would be cheaper to shield the CRT or operate in a Faraday cage.

Arnold Reinhold



Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period

2003-03-06 Thread Arnold G. Reinhold
At 4:57 PM -0500 3/5/03, John S. Denker wrote:
Tim Dierks wrote:

 In order to avoid overreaction to a nth-hand story, I've attempted to
 locate some primary sources.
 Konop v. Hawaiian Airlines:
http://laws.lp.findlaw.com/getcase/9th/case/9955106pexact=1
[US v Councilman:]
  http://pacer.mad.uscourts.gov/dc/opinions/ponsor/pdf/councilman2.pdf
Well done.  Thanks.

 I'd be interested in any opinions on how this affects the government's
 need to get specific wiretap warrants; I don't know if the law which
 makes illicit civilian wiretapping illegal is the same code which
 governs the government's ability (or lack thereof) to intercept
 communications.
0) IANAL.  But as to the question of same code, the
answer is clearly no.
I2ANAL, but I don't think that's clear at all, unless you are 
talking about specific paragraphs within the Wiretap Act and the 
Stored Communications Act.

1) As to government-authorized intercepts, see

http://www.eff.org/Privacy/Surveillance/Terrorism_militias/20011031_eff_usa_patriot_analysis.html

which gives a plain-language discussion of at least
eight different standards under which some sort of
authorization could be obtained.
Also note that neither Konop nor Councilman involved
government intercepts, so you can't learn anything about
authorized intercepts by studying them.  Also note that
post-9/11 laws have superseded everything you might
previously have known on the subject.
The Konop decision specifically talks about government intercepts. 
See section B7, for example. They even discuss the post 9/11 
situation in B6.

2) As to intercepts by civilians, it's wrong, and it
may be punishable under many different theories and
standards, including invasion of privacy, copyright
infringement, computer trespass, computer vandalism,
simple theft of things of value, and who-knows-what
else.
Add the Railway Labor Act in this case.



4) Crypto-related sidelight: I wonder what would
have happened if Konop had encrypted his sensitive
data. (eBook format or the like. :-)  Then could he
have used the draconian provisions of the DMCA
against his opponent (Hawaiian Airlines)?
There are some who would argue that the simple password protection 
scheme Konop used would be a technological protection measure covered 
under the DMCA.  However, the penalty for access to protected material, 
as opposed to trafficking in technology, is a $2000 fine, which may not 
seem draconian to an airline.

Arnold Reinhold



Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-21 Thread Arnold G. Reinhold
At 2:18 PM -0800 2/19/03, Ed Gerck wrote:

Anton Stiglic wrote:


  The statement was for a plaintext/ciphertext pair, not for a random-bit/
  random-bit pair. Thus, if we model it in terms of a bijection on random-bit
  pairs, we confuse the different statistics for plaintext, ciphertext, keys
 and
  we include non-AES bijections.

 While your reformulation of the problem is interesting, the initial question
 was regarding plaintext/ciphertext pairs, which usually just refers to the
 pair
 of elements from {0,1}^n, {0,1}^n, where n is the block cipher length.


The previous considerations hinted at, but did not take into account, the
fact that a plaintext/ciphertext pair is not just a random bit pair.

Also, if you consider plaintext to be random bits you're considering a very
special -- and least used -- subset of what plaintext can be. And, it's a
much easier problem to securely encrypt random bits.

The most interesting solution space for the problem, I submit, is in the
encryption of human-readable text such as English, for which the previous
considerations I read in this list do not apply, and provide a false sense of
strength. For this case, the proposition applies -- when qualified for  the
unicity.



Maybe I'm missing something here, but the unicity rule as I 
understand it is a probabilistic result.  The likelihood that two 
different keys both produce natural language plaintexts from the same 
ciphertext falls exponentially as the message length exceeds the 
unicity distance, but it never goes to zero. So unicity can't be used 
to answer the original question* definitively.
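
For concreteness, the standard unicity-distance arithmetic looks like 
this (a sketch; the 3.2 bits-per-character redundancy figure for 
English is a common textbook estimate, not something from the original 
question):

key_bits = 128      # AES-128 key entropy
redundancy = 3.2    # assumed redundancy of English, bits/character

print(f"unicity distance ~ {key_bits / redundancy:.0f} characters")  # ~40

# The expected number of spurious keys for an n-character message
# falls roughly like 2**(key_bits - n*redundancy): small, never zero.
for n in (40, 60, 80):
    print(n, 2.0 ** (key_bits - n * redundancy))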

I'd also point out that modern ciphers are expected to be secure 
against known plaintext attacks, which is generally a harsher 
condition than merely knowing the plaintext is in natural language. 
Furthermore, they are usually expected to withstand chosen plaintext 
attacks, which is a harsher condition still.

Arnold Reinhold


* Here is the original question. It seems clear to me that he is 
asking about all possible plaintext bit patterns:

At 2:06 PM +0100 2/17/03, Ralf-Philipp Weinmann wrote:
I was wondering whether the following is true:

For each AES-128 plaintext/ciphertext (c,p) pair there
 exists exactly one key k such that c=AES-128-Encrypt(p, k).

Of course we can look at the generalized case of Rijndael
with block size == key size and ask the same question. I'd
be happy with an answer for AES-128 nonetheless.

At first I thought this was a trivial question since the round
function minus AddRoundKey is bijective. But I haven't been
able to come up with anything thus far, so I thought I'd
ask the list.

Any ideas?

Cheers,
Ralf

p.s.: I am familiar with Wernsdorf's paper, but it hasn't
  helped me thus far.





Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-18 Thread Arnold G. Reinhold
At 1:09 PM +1100 2/18/03, Greg Rose wrote:

At 02:06 PM 2/17/2003 +0100, Ralf-Philipp Weinmann wrote:

For each AES-128 plaintext/ciphertext (c,p) pair there
 exists exactly one key k such that c=AES-128-Encrypt(p, k).


I'd be very surprised if this were true, and if it was, it might 
have bad implications for related key attacks and the use of AES for 
hashing/MACing.

Basically, block encryption with a given key should form a 
pseudo-random permutation of its inputs, but encryption of a 
constant input with a varying key is usually expected to behave like 
a pseudo-random *function* instead.


Here is another way to look at this question. Each 128-bit block 
cipher is a 1-1 function from the set S = {0,1,...,(2**128)-1} onto 
itself, i.e. a bijection. Suppose we have two such functions f and g 
that are randomly selected from the set of all possible bijections 
S -> S (not necessarily ones specified by AES). We can ask: what is the 
probability of a collision between f and g, i.e. that there exists 
some value, x, in S such that f(x) = g(x)?  For each possible x in S, 
the probability that f(x) = g(x) is 2**-128. But there are 2**128 
members of S, so we should expect an average of one collision for 
each pair of bijections.

If the ciphers specified by AES behave like randomly selected 
bijections, we should expect one collision on average for each of the 
roughly 2**255 unordered pairs of AES keys.  Just one collision violates 
Mr. Weinmann's hypothesis.  So it would be remarkable indeed if there 
were none.  Still, it would be very interesting to exhibit one.

For ciphers with smaller block sizes (perhaps a 32-bit model of 
Rijndael), counting collisions and matching them against the expected 
distribution might be a useful way to test whether the bijections 
specified by the cipher are randomly distributed among all possible 
bijections.
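
Such a test is easy to run in miniature. A Monte Carlo sketch (Python; 
the set size N = 4096 is an arbitrary small stand-in for 2**128):

import random
from collections import Counter

N, TRIALS = 4096, 2000
counts = Counter()
for _ in range(TRIALS):
    f = random.sample(range(N), N)   # random bijection as a lookup table
    g = random.sample(range(N), N)
    counts[sum(1 for x in range(N) if f[x] == g[x])] += 1

# Expect a mean of ~1 collision per pair, roughly Poisson distributed,
# so P(no collision) should come out near 1/e ~ 0.37.
print("mean:", sum(k * v for k, v in counts.items()) / TRIALS)
print("P(0):", counts[0] / TRIALS)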


Arnold Reinhold




Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-18 Thread Arnold G. Reinhold
At 5:45 PM -0600 2/18/03, Matt Crawford wrote:

  ... We can ask what is the

 probability of a collision between f and g, i.e. that there exists
 some value, x, in S such that f(x) = g(x)?


But then you didn't answer your own question.  You gave the expected
number of collisions, but not the probability that at least one
exists.

That probability is the sum over k from 1 to 2^128 of (-1)^(k+1)/k!,
or about as close to 1-1/e as makes no difference.

But here's the more interesting question. If S = Z/2^128 and F is the
set of all bijections S -> S, what is the probability that a set G of
2^128 randomly chosen members of F contains no two functions f1, f2
such that there exists x in S such that f1(x) = f2(x)?


In general, if G has n randomly chosen members of F, isn't the answer 
just about 1/e**(n**2/2)?  There are n*(n-1)/2 unordered pairs of 
functions in G, and the probability of no collision for each pair is 
1/e, as you point out above.


G is a relatively miniscule subset of F


Just plain minuscule:  |G| = 2**128,  |F| = (2**128)!  ~= 2**(2**135)


but I'm thinking that the
fact that |G| = |S| makes the probability very, very small.


Even if |G| << |F| that is true.  If G contains 5 functions, there 
are 10 unordered pairs and 1/e**10 ~= 4.5E-5.
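
In closed form (a sketch that treats the pairwise no-collision events 
as independent, which is only approximately true for a huge S):

import math

def p_no_collisions(n):
    """P(n random bijections of a huge set are pairwise collision-free)."""
    pairs = n * (n - 1) // 2          # unordered pairs of functions
    return math.exp(-pairs)           # each pair collision-free w.p. ~1/e

print(p_no_collisions(5))             # e**-10 ~ 4.5e-5
print(p_no_collisions(100))           # e**-4950 -- effectively zero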


Arnold Reinhold





Re: [IP] Master Key Copying Revealed (Matt Blaze of ATT Labs)

2003-01-29 Thread Arnold G. Reinhold
I took a look at the MIT Guide to Lock Picking, August 1991 revision, at
http://www.lysator.liu.se/mit-guide/mit-guide.html

It says:

9.10 Master Keys
Many applications require keys that open only a single lock and keys 
that open a group of locks. The keys that open a single lock are 
called change keys and the keys that open multiple locks are called 
master keys. To allow both the change key and the master key to open 
the same lock, a locksmith adds an extra pin called a spacer to some 
of the pin columns. See Figure 9.8. The effect of the spacer is to 
create two gaps in the pin column that could be lined up with the 
sheer line. Usually the change key aligns the top of the spacer with 
the sheer line, and the master key aligns the bottom of the spacer 
with the sheer line (the idea is to prevent people from filing down a 
change key to get a master key). In either case the plug is free to 
rotate.

The parenthetical comment suggests awareness of the general 
vulnerability Matt exploited, but I suspect that had the authors 
known the multiple partial copy trick Matt described, they would have 
published it.

Arnold Reinhold



Re: DOS attack on WPA 802.11?

2002-12-08 Thread Arnold G. Reinhold
At 10:48 PM -0500 11/29/02, Donald Eastlake 3rd wrote:

Arnold,

If you want to play with this as an intellectual exercise, be my guest. 
But the probability of changing the underlying IEEE 802.11i draft
standard, which would take a 3/4 majority of the voting members of IEEE
802.11, or of making the WiFi Alliance WPA profiling and subsetting of
802.11i incompatible with the standard, is close to zero.



Cryptographic standards should be judged on their merits, not on the 
bureaucratic difficulties in changing them. Specs have been amended 
before. Even NSA was willing to revise its original secure hash 
standard. That's why we have SHA1.  If I am right and WPA needlessly 
introduces a significant denial of service vulnerability, then it 
should be fixed. If I am wrong, no change is needed of course.

Check out the President's message for September 2002 at the 
Association of Old Crows web site (Serving the Electronic Warfare 
and Information Operations Community): http://www.aochq.org/news.htm


Arnold Reinhold



Re: DOS attack on WPA 802.11?

2002-11-29 Thread Arnold G. Reinhold
At 4:57 AM +0100 11/19/02, Niels Ferguson wrote:

At 21:58 18/11/02 -0500, Arnold G Reinhold wrote:

...




Third, a stronger variant of WPA designed for 11a could also run on
11b hardware if there is enough processing power, so modularization is
not broken.


But there _isn't_ enough processing power to run a super-Michael. If there
were, I'd have designed Michael to be stronger.


I'm not sure that is true for all existing 802.11b hardware. And 
vendors of new 802.11b hardware could certainly elect to support the 
stronger variant of WPA.


Maybe what you are suggesting is to add yet another cryptographic function: the
current Michael for existing hardware and a super-Michael for newer 802.11a
hardware. Developing super-Michael would cost a couple of months and a lot
of money. I would consider that a waste of effort that should have been
spent on the AES-based security protocols. That is where we are going, and
we need to get there ASAP. It is perfectly possible to design 802.11a
hardware today that will be able to implement the future AES-based security
protocols. That is what software updates are for.


That is what I am suggesting. If a stronger version of Michael is too 
expensive to develop, there is still the option of using a standard 
message authentication function, say an HMAC based on MD5 or an AES 
solution. I spoke to several 802.11a/g chip-set vendors at Comdex and 
they seem to be allowing extra processing power to support 11i. 
Intersil said they were using 20% of available MIPS.

...

[regarding my suggestion to rotate the Michael output words in a 
key-dependent way:]



[...]


Those are standard design questions. I looked at better mixing at the end
of the Michael function and decided against it. It would slow things down
and the attack that changes the last message word and the MIC value had
much the same security bound as the differential attack that does not
change the MIC value. There is no point in strengthening one link of the
chain if there is another weak link as well. Of course, this isn't how I
normally design cryptographic functions, but Michael is a severely
performance-limited design.

[...]


I have responses to your concerns about using SHA and the issue of 
re-keying, but you point out:

It would be easier just to ask
for 128 key bits from the key management system. It has a PRF and should be
able to do it.


That would be fine. You only need ten additional keying bits for 
arbitrary rotation of the two output words. Maybe an additional bit 
to optionally swap the words. This only adds a few instructions per 
packet.
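
A minimal sketch of what I have in mind (Python; the 5+5+1 bit layout 
for the extra keying bits is just one possible choice):

def rotl32(w, r):
    """Rotate a 32-bit word left by r bits."""
    r %= 32
    return ((w << r) | (w >> (32 - r))) & 0xFFFFFFFF

def scramble_mic(mic_lo, mic_hi, extra_bits):
    # extra_bits: 11 bits drawn from the key management PRF --
    # two 5-bit rotation amounts plus one swap bit (assumed layout).
    r1, r2 = extra_bits & 0x1F, (extra_bits >> 5) & 0x1F
    a, b = rotl32(mic_lo, r1), rotl32(mic_hi, r2)
    return (b, a) if (extra_bits >> 10) & 1 else (a, b)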




Re: DOS attack on WPA 802.11?

2002-11-19 Thread Arnold G Reinhold

[please ignore previous message, sent by mistake -- agr]
On Sat, 16 Nov 2002, Niels Ferguson wrote:

 At 18:15 15/11/02 -0500, Arnold G Reinhold wrote:
 I agree that we have covered most of the issues. One area where you have
 not responded is the use of WPA in 802.11a. I see no justification for
 introducing a crippled authentication there.

 From the point of the standard there is little difference between 802.11,
 802.11a, and 802.11b. The differences are purely in the PHY layer. That is,
 the exact radio modulations are different, but the whole MAC layer is
 identical. It would break modularisation to link a MAC layer feature to a
 PHY layer feature.

 The other reason is that 802.11a hardware is already being shipped, and the
 AES-based cryptographic protocol has not been finalised.


Modularization is a poor excuse for shipping a cryptographically weak
product. Second, in this case the PHY layer does affect a MAC layer
feature. 802.11a is much faster than 11b. That makes Michael
even more vulnerable to attack.  If Michael is subject to one forged
packet per year on 11b, it is vulnerable to one every 10 weeks or so in
11a. Third, a stronger variant of WPA designed for 11a could also run on
11b hardware if there is enough processing power, so modularization is
not broken.

As for shipped hardware, does anyone know that it could not run with a
stronger version of Michael? And a few shipped units is far less
justification than the tens of millions of 802.11b units out there.


 Also here is one more idea for possibly improving Michael.
 
 Scramble the output of Michael in a way that depends on the MIC key, K.
 This could be as simple as rotating each output word a number of bits
 derived from K. Or you could generate an 8-element permutation from K and
 apply it to the bytes in the Michael output. You might even be able to use
 the small cipher that is used to generate the individual per-packet
 encryption keys in WPA.
 
 This would break up an attack that depends on messing with the bits of the
 MIC in the message. It does nothing for attacks on parts of the message
 body. Any additional integrity check on the message would catch that,
 however.

 This would provide at most a very marginal security improvement. A
 differential attack can leave the final MIC value unchanged, and adding an
 extra encryption would not help. See the Michael security analysis for
 details.


A marginal improvement on a marginal algorithm can be worthwhile. It does
break up one attack mode at negligible cost. It might prevent other
attacks that have not been envisioned.

 Rotating the output in a key-dependent way is dangerous. You expose the
 rotation constants to discovery using a differential attack.

If the rotation constants are derived from the MIC key using a strong hash
(e.g. SHA1) there is little risk of recovering key bits. Since this only
needs to be done when the MIC key changes, the computation time should be
affordable.

There is a risk that an attacker who is doing an exhaustive key search
could use knowledge of the rotation bits to rule out most trial keys with
just a hash computation. But even if they could completely test all MIC
key candidates with just the hash, that would require 2**63 SHA1 trials to
recover the MIC key on average. That is a reasonable level of security
compared to WPA, and with 10 rotation bits we are very far from even that
situation.

Another cheap variant would be to derive the rotation constants from the
hash of the last two MIC keys. This eliminates even this minute risk.

 
 Additional integrity checks would require extra cycles, which we could also
 have spent on a more secure Michael version.


I wasn't suggesting they be done by 802.11, but by higher layers.

With greetings from Las Vegas,

Arnold Reinhold






Re: DOS attack on WPA 802.11?

2002-11-13 Thread Arnold G. Reinhold
At 11:40 PM +0100 11/11/02, Niels Ferguson wrote:

At 12:03 11/11/02 -0500, Arnold G. Reinhold wrote:
[...]

One of the tenets
of cryptography is that new security systems deserve to be beaten on
mercilessly without deference to their creator.


I quite agree.


I hope you won't mind another round then.


2. Refresh the Michael key frequently. This proposal rests on WPA's
[...]

This has no effect on the best attack we have so far. The attack is a
differential attack, and changing the key doesn't change the probabilities.


Tell me if I understand this attack correctly. Bob intercepts a 
packet he knows contains a certain message, even though it is WPA 
encrypted, say "Transfer one hundred dollars from Alice's account to 
Bob's account. Have a nice day."  (Maybe he knows what time it was 
sent, or the length, whatever.) Because WPA uses a stream cipher, Bob 
can create a message that will decrypt with the same key to "Transfer 
one million dollars from Alice's account to Bob's account. Have a 
nice day."  This was one of the problems with WEP.

WPA is designed to prevent this kind of forgery by adding a 64-bit 
MIC. Even so, I could send lots of packets containing the million 
dollars message but with random stuff in the MIC field (or in the 
"Have a nice day" part that Bob knows nobody reads), and if I do this 
enough times I will accidentally create a packet with a valid MIC.  If 
the MIC were really strong, this would take about 2**64 tries, a big 
enough number not to worry about.  But because Michael is puny, you 
were able to find some clever tricks for picking the randomizing data 
so that only about 2**29 (aka half a billion) tries are needed. 
Furthermore, you are worried that there might be a way that requires 
only 2**20 (about a million) tries.  And because we are trying MIC 
codes at random, the MIC key in use at the moment doesn't matter. 
Eventually Bob gets lucky and the packet goes through.

The logic behind your countermeasure is that forgery attempts are 
very easy to detect and by shutting down for a minute after 2 forgery 
attempts within one second, Bob needs an average of half a million 
minutes to get his packet through, or about one year. And that's an 
acceptable risk.
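
Making the arithmetic explicit (a sketch; 2**20 is the pessimistic 
bound you mention, and two tries per minute is the effective rate the 
countermeasure allows):

tries_needed = 2 ** 20        # pessimistic forgery bound
tries_per_minute = 2          # two failures trigger a one-minute shutdown

minutes = tries_needed / tries_per_minute
print(f"{minutes:,.0f} minutes ~ {minutes / (60 * 24):.0f} days")
# 524,288 minutes ~ 364 days -- about one year, as you say.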

If I got this right, here are a couple of observations. Assume for a 
moment WPA as is, but with your time-out countermeasure turned off.

1. Bob only gets that one packet through.  If he wants another packet 
he has to start all over with another million or more attempts. So 
that packet had better be worth the effort.

2. This forgery only affects the 802.11 layer. If the "Transfer one 
million dollars" message has an electronic signature or another layer 
of protection, this attack does nothing to defeat that.

3. The network will get and detect hundreds of thousands of copies of 
the forged message before a valid one gets through. If Bob is 
tampering with the MIC code, they will all be identical. If Bob is 
munging an unimportant section of the message, they will still be 
highly correlated. So we will have hours, maybe days, of warning that 
someone is attacking our system, and we will know exactly what Bob is 
trying to do.  Even if we were asleep and he succeeded, we would know 
about the attack and what message he was trying to send.

4. Bob has to do a lot of transmitting and we will have hours or days 
of warning to track him down with direction finding equipment.


This is not a very attractive attack from Bob's point of view.  He 
must find a single packet so valuable it is worth all the risk and time 
involved in mounting this attack. He telegraphs his scheme well in 
advance of its success. He risks being caught in the act, and he 
leaves a trail of evidence that can be used to catch him, say when he 
cleans out that bank account. It sounds like a Woody Allen movie 
scenario. ("What does this note mean, 'I have a bun'?" "It says 'gun'!" 
"Hey Charlie, does this look like a 'b' or a 'g' to you?")

Furthermore, if I got this right, a filter could be turned on that 
simply blocked the packet Bob is attempting to send when it finally 
gets a valid MIC. For extra credit, you could do the following: 
automatically detect forgery attempts and devise a filter for them 
(say, look for the constant region of the forgeries). When a valid 
packet comes through that matches the filter, reject it and force a 
key change.  The transport layer will request a retry. If, by chance, 
the packet was legit, the station that sent it can send it again and 
the Internet goes on. Bob on the other hand, needs another million 
tries, after which the same thing will happen.

Any security hole is a matter for concern, but if my understanding is 
correct, I am more convinced that a valid alternative to your 
time-out countermeasure is for WPA to tell us we are under attack and let 
us log the forgery attempts verbatim, which I suggested in my first 
message.


Regardless of whether my understanding of the differential attack is 
correct, I think the nub of our disagreement

Possible fixes for 802.11 WPA message authentication

2002-11-11 Thread Arnold G. Reinhold
Here are some thoughts that occur to me for improving the security of 
802.11 WPA message authentication (MIC), based on what I read in 
Jesse Walker's paper 
http://cedar.intel.com/media/pdf/security/80211_part2.pdf.

One approach is to second guess Niels Ferguson and try to find a 
different combination of operations that will produce greater 
security than his Michael algorithm. That is a worthy research idea 
and might even be automated, since there are relatively few 
possibilities given the tight computation time budget.  My guess is 
that Niels has done a good job and, in any case, revisiting the 
Michael design not likely to produce anything that can be implemented 
before WPA is introduced. So this doesn't seem the most productive 
place to look right now.

A different approach might be to select an MIC algorithm that is much 
stronger but breaks the bank on computing time for older access 
points, yet still works on existing cards. One could then have two 
variants, WPA and WPA-XS (extra strength). Sites that wanted the best 
security would have to junk older access points. XS could also be 
required for 802.11a, the new, faster standard for the 5 GHz band, 
which will presumably require beefier access points anyway.

A third approach for the short term would be to leverage Michael, 
i.e. use Michael as is and add stuff that makes the WPA MIC harder to 
break. Then all the cryptanalytic work done to date on Michael 
remains valid. Here are several approaches I have come up with. For 
this discussion, call the Michael key produced under WPA as it exists 
today K. I am not proposing any change in the way K is generated or 
distributed.

1. Shuffle the order of the message words stirred into Michael.  For 
example, divide the message payload into four blocks. Let L be the 
length of the payload in words (after padding). Compute M = L/4 (a 
shift).  Then the blocks are [0 to M-1], [M to 2M-1], [2M to 3M-1] 
and [3M to L-1]. At the time a new K is created, compute a randomized 
permutation of 4 elements and four randomized order-determining 
bits, all derived securely from K.  Then for each packet, compute the 
Michael hash of the blocks in the order of the permutation, with the 
additional wrinkle that each block is hashed in either ascending 
order or descending order, based on the value of the corresponding 
bit. Note that each word is hashed exactly once and the added 
overhead is modest and outside the Michael inner loop.
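
Here is a sketch of how the schedule might be derived from K (Python; 
using SHA1 as the derivation hash is an illustrative choice on my 
part, not anything in WPA):

import hashlib
from itertools import permutations

def block_schedule(K, nblocks=4):
    """Derive a block permutation and per-block direction bits from K."""
    seed = int.from_bytes(hashlib.sha1(b"block-order" + K).digest(), "big")
    perms = list(permutations(range(nblocks)))        # 4! = 24 orders
    perm = perms[seed % len(perms)]
    seed //= len(perms)
    dirs = [(seed >> i) & 1 for i in range(nblocks)]  # 0 = ascending
    return perm, dirs

# Recomputed only when a new K is distributed:
print(block_schedule(bytes(8)))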

A 4-element permutation has about 4.5 bits of entropy; with the four 
order-determining bits, that adds a total of about 8.5 bits to Michael's 
strength. The same concept with 8 blocks would add 23 bits. The 
source and destination addresses are also hashed. They can simply be 
considered part of the payload, or they can be hashed separately, 
before any of the blocks or at the end, again determined by K, to add 
additional variability.
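
The entropy figures are easy to check (a small Python sketch):

import math
for nblocks in (4, 8):
    perm_bits = math.log2(math.factorial(nblocks))
    print(nblocks, round(perm_bits, 2), round(perm_bits + nblocks, 1))
# 4 blocks: log2(4!) = 4.58, plus 4 direction bits -> ~8.6 bits
# 8 blocks: log2(8!) = 15.3, plus 8 direction bits -> ~23.3 bits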

Since the MIC generated here is exactly the same as the original 
Michael MIC of the permuted message, there is no reduction in Michael 
security.  This method breaks down for very short packets; however, 
computation time is presumably less of an issue there, so 
we should be able to come up with something in those cases. Perhaps 
we could apply the permutation to data word bytes and use the order 
determining bits to specify a shift.

2. Refresh the Michael key frequently. This proposal rests on WPA's 
need to keep packet order in sync for the IV counter.  I propose 
generating a sequence of 64-bit sub-keys derived from K using a 
reasonably secure algorithm and using them instead of K to key 
Michael.  Since each sub-key gets very little exposure, breaking 
Michael becomes much more difficult.

2a. Here is one way to generate the sub-key sequence: Create an 
instance of RC4 in software and initialize it using K as the RC4 key. 
Then generate 8 cipher bytes each time a new sub-key is needed.  One 
could do this for every MIC that is generated. This would require 
eight RC4 cipherbyte generations per packet.  A 258-byte RC4 state 
{i, j, S} will be required for each active K and the RC4 key setup 
will need to be performed each time K is changed. For extra credit, 
one can discard the first 256 cipherbytes, though I think that is 
overkill here.
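
A sketch of 2a (Python; the RC4 implementation is the textbook one, 
and the key value below is purely illustrative):

class RC4:
    def __init__(self, key):
        self.S, j = list(range(256)), 0
        for i in range(256):
            j = (j + self.S[i] + key[i % len(key)]) % 256
            self.S[i], self.S[j] = self.S[j], self.S[i]
        self.i = self.j = 0

    def keystream(self, n):
        out = bytearray()
        for _ in range(n):
            self.i = (self.i + 1) % 256
            self.j = (self.j + self.S[self.i]) % 256
            self.S[self.i], self.S[self.j] = self.S[self.j], self.S[self.i]
            out.append(self.S[(self.S[self.i] + self.S[self.j]) % 256])
        return bytes(out)

# Key RC4 once with the 64-bit Michael key K, then draw a fresh
# 8-byte Michael sub-key for every packet:
stream = RC4(bytes.fromhex("0123456789abcdef"))
for pkt in range(3):
    print(f"packet {pkt} sub-key: {stream.keystream(8).hex()}")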

2b. If that is too much overhead, one could generate one cipherbyte 
for each packet and change keys every time eight had been 
accumulated.  Each Michael key only gets used eight times.  This 
computation and storage load does not seem like a lot to me, but if 
it is too much, here is yet another approach:

2c. This one makes me a bit nervous, but it is worth putting on the 
table.  A new RC4 key is generated for every packet fragment sent. 
Borrow one bit from each such key. The bit number used might be 
derived from K. Accumulate the bits in a series of 32-bit words, say 8 
of them.  When you have accumulated them, use them to compute a new 
sub-key, either by adding them pair-wise, or, better, using 

DOS attack on WPA 802.11?

2002-11-07 Thread Arnold G. Reinhold
The new Wi-Fi Protected Access scheme (WPA), designed to replace the 
discredited WEP encryption for 802.11b wireless networks, is a major 
and welcome improvement. However it seems to have a significant 
vulnerability to denial of service attacks. This vulnerability 
results from the proposed remedy for the self-admitted weakness of 
the Michael message integrity check (MIC) algorithm.

To be backward compatible with the millions of 802.11b units already 
in service, any MIC algorithm must operate within a very small 
computing budget. The algorithm chosen, called Michael, is spec'd as 
offering only 20 bits of effective security.

According to an article by Jesse Walker of Intel 
http://cedar.intel.com/media/pdf/security/80211_part2.pdf :

This level of protection is much too weak to afford much benefit by 
itself, so TKIP complements Michael with counter-measures. The design 
goal of the counter-measures is to throttle the utility of forgery 
attempts, limiting knowledge the attacker gains about the MIC key. If 
a TKIP implementation detects two failed forgeries in a second, the 
design assumes it is under active attack. In this case, the station 
deletes its keys, disassociates, waits a minute, and then 
reassociates. While this disrupts communications, it is necessary to 
thwart active attack. The countermeasures thus limit the expected 
number of undetected forgeries such an adversary might generate to 
about one per year per station.

Unfortunately, the countermeasures' cure may invite a different 
disease. It would appear easy to mount a denial of service attack by 
simply submitting two packets with bad MIC tags in quick succession. 
The access point then shuts down for a minute or more. When it comes 
back up, one repeats the attack.  All the attacker needs is a laptop 
or hand held computer with an 802.11b card and a little software. 
Physically locating the attacker is made much more difficult than for 
an ordinary RF jammer by the fact that only a couple of packets per 
minute need be transmitted. Also the equipment required has innocent 
uses, unlike a jammer, so prosecuting an apprehended suspect would be 
more difficult.

The ability to deny service might be very useful to miscreants in 
some circumstances. For example, an 802.11b network might be used to 
coordinate surveillance systems at some facility or event.  With 
802.11b exploding in popularity, it is impossible to foresee all the 
mission critical uses it might be put to.

Here are a couple of suggestions to improve things, one easier, the 
other harder.

The easier approach is to make the WPA response to detected forgeries 
more configurable.  The amount of time WPA stays down after two 
forgeries might be a parameter, for example.  It should be possible 
to turn the countermeasures off completely. Some users might find the 
consequences of forgeries less severe than those of lost service. For a firm 
offering for-fee public access, a successful forgery attack might 
merely allow free riding by the attacker, while denied service could 
cost much more in lost revenue and reputation.

Another way to make WPA's response more configurable would be for the 
access point to send a standard message to a configurable IP address 
on the wire side whenever it detects an attack. This could alert 
security personnel to scan the parking lot or switch the access point 
to be outside the corporate firewall. The message also might quote 
the forged packets, allowing them to be logged.  Knowing the time and 
content of forged packets could also be useful to automatic radio 
frequency direction finding equipment. As long as some basic hooks 
are in place, other responses to forgery attack could be developed 
without changing the standard.

The harder approach is to replace Michael with a suitable but 
stronger algorithm (Michelle?).  I am willing to assume that 
Michael's designer, Niels Ferguson, did a fine job within the 
constraints he faced. But absent a proof that what he created is 
absolutely optimal, improving on it seems a juicy cryptographic 
problem. How many bits of protection can you get on a tight budget? 
What if you relaxed the budget a little, so it ran on say 80% of 
installed access points? A public contest might be in order.

Clearly, WPA is needed now and can't wait for investigation and 
vetting of a new MIC. But if a significantly improved MIC were 
available in a year or so, it could be included as an addendum or 
as part of the 802.11i specification.  Some might say that 802.11i's 
native security will be much better, so why bother? My answer is that 
802.11i will not help much unless WPA compatibility is shut off.  And 
with so many millions of 802.11 cards in circulation that are not 
.11i ready, that won't happen in most places for a long time. On 
the other hand, an upgraded MIC could  be adopted by an organization 
that wished improved security with modest effort. Backward 
compatibility could be maintained, with a 

Re: Windows 2000 declared secure

2002-11-07 Thread Arnold G. Reinhold
 
want to know whether an individual is still in the building before 
granting access through an inside terminal

o Training requirements and awareness maintenance for users, 
operators and administrators, including frequency and specific topics 
to be covered

o Legal forms -- security notices, employee agreements, acceptable 
use policies, etc.


I can envision this stuff evolving into something like the fire 
protection regulations that every architect has to either follow or 
request a waiver.

Arnold Reinhold






At 6:38 AM -0500 11/4/02, Jonathan S. Shapiro wrote:
I'm answering this publicly, because there is a surprise in the answer.


On Sun, 2002-11-03 at 13:12, Arnold G. Reinhold wrote:

Jonathan S. Shapiro [EMAIL PROTECTED] wrote:
... If a
reputable group of recognized computer scientists were to publish a well
thought out set of evaluation criteria...

If I may ask a naive question, couldn't such a set of evaluation
criteria be abstracted from the design goals of Eros?


Funny you should ask that. First, I need to correct my original
statement: one needs both evaluation criteria and an effective
requirement set for a secure OS. The Common Criteria evaluation process
needs to be augmented with quantitative tests on the actual software
artifact, but it's actually pretty good.

Requirements, on the other hand, is a tough problem. David Chizmadia and
I started pulling together a draft higher-assurance OS protection
profile for a class we taught at Hopkins. It was drafted in tremendous
haste, and we focused selectively on the portions of CC we would cover
in class, but it may provide some sense of how hard this is to actually
do:

	http://www.eros-os.org/assurance/PP/ASP-OS.pdf

Sorry about the formatting errors - it's an automatically generated
document that needs cleanup.

The difficulty in drafting a PP like this is to avoid specifying solutions.
A PP is supposed to be a requirements document. Unfortunately, you get
into quandaries. Some of the requirements we think are important can be
done in capability systems but not in non-capability systems (at least
based on published verifications to date). It becomes tempting at that
point to introduce requirements that can *only* be done by capability
systems.

Also, much is present only by reading between the lines. An annotated
document is needed in order to really make any headway on understanding
what is implied by some of the requirements.


Also there is no reason such a document need be as voluminous as
existing criteria.  It is high time we departed from the quality
industry's practice of focusing on tangential issues, ignoring
substance and generating mountains of paper as a proxy for
accomplishment.


Having read a number of existing protection profiles, I have to say that
people have done quite well on this. There *is* some unneeded bulk, but
this is primarily due to conventions that yield consistently styled
documents. Once you understand how to read one PP you can read pretty
much any PP. A modest amount of size expansion is a reasonable price to
pay.


shap







Re: New Protection for 802.11

2002-11-06 Thread Arnold G. Reinhold
See the following two Intel links with detailed discussions of TKIP 
and Michael which I found via Google:

Increasing Wireless Security with TKIP

Forwarded from: eric wolbrom, CISSP, from ISN...

http://www.secadministrator.com/Articles/Index.cfm?ArticleID=27064

Mark Joseph Edwards
October 23, 2002


For a more in-depth look at wireless encryption technology, especially
WEP and TKIP, be sure to read two articles from Intel. The first
article discusses encryption key management in both WEP and TKIP
protocols, and the second article discusses TKIP in considerable
detail.

--
http://cedar.intel.com/media/pdf/wireless/80211_1.pdf
http://cedar.intel.com/media/pdf/security/80211_part2.pdf


Gojko Vujovic
http://www.elitesecurity.org/







Re: Palladium -- trivially weak in hw but secure in software?? (Re: palladium presentation - anyone going?)

2002-10-22 Thread Arnold G. Reinhold
At 4:52 PM +0100 10/22/02, Adam Back wrote:

Remote attestation does indeed require Palladium to be secure against
the local user. 

However my point is while they seem to have done a good job of
providing software security for the remote attestation function, it
seems at this point that hardware security is laughable.


I think the most important phrase above is "at this point." Palladium 
is still being designed.  I'd argue that the software/firmware 
portion is the trickiest to get right. It seems rational for 
Microsoft to let that design mature, then analyze the remaining 
hardware threats and turn the hardware engineers loose to try to plug 
them.

Palladium has to be viewed in the larger context of a negotiation 
between Microsoft and Hollywood (I include here all the content 
owners: movie studios, recording industry, book publishers, etc. ). 
Hollywood would prefer a completely closed PC architecture, where 
consumers' use of the computer could be tightly monitored and 
controlled.  They perceive general purpose computing as we know and 
love it to be a mortal threat to their continued existence. Keeping 
the content of DVDs and future media locked up is not enough in their 
eyes. They want all material displayed to be checked for watermarks 
and blocked or degraded if the PC owner hasn't paid for the content.

Microsoft wants to preserve general purpose computing because it 
realizes that in a closed architecture, the OS would become a mere 
commodity component and the consumer electronics giants would 
eventually displace Microsoft. On the other hand, Microsoft needs 
Hollywood to provide the kind of content that will drive PC sales and 
upgrades. The baseline PC platform of today or even two years ago is 
powerful enough for most consumers and businesses. People are keeping 
their PCs longer and not upgrading them as often. Most everyone who 
wants a PC (at least in North America) already has one. Microsoft 
needs something new to drive sales.

I expect Microsoft and Hollywood to haggle over the final specs for 
Palladium PCs and no doubt additional hardware protection measures 
will be included.  The actual spec may well be kept secret, with NDA 
access only. Hollywood will hold two strong cards at the table: its 
content and the threat of legislation.  I'm sure Senator Hollings is 
watching developments closely.

The big question in my mind is how to get PC consumers a place at the 
bargaining table. It seems to me that PC consumers have three tools: 
votes, wallets and technology. The Internet is well suited to 
political organizing. Remember the amount of mail generated by the 
modem tax hoax? Consumer boycotts are another powerful threat, given 
how powerful and upgradable existing computers already are. Technology 
can provide an alternative way to gain the benefits that will be 
touted for controlled computing.  Anti-virus and anti-DDoS techniques 
come to mind. Also, since I expect an eventual push to ban 
non-Palladium computers from the Internet, alternative networking 
technology will be important.

The Palladium story is just beginning.

Arnold Reinhold



Re: palladium presentation - anyone going?

2002-10-21 Thread Arnold G. Reinhold
At 10:52 PM +0100 10/21/02, Adam Back wrote:

On Sun, Oct 20, 2002 at 10:38:35PM -0400, Arnold G. Reinhold wrote:

There may be a hole somewhere, but Microsoft is trying hard to get
it right and Brian seemed quite competent.


It doesn't sound breakable in pure software for the user, so this
forces the user to use some hardware hacking.

They disclaimed explicitly in the talk announce that:

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

However I was interested to know exactly how easy it would be to
defeat with simple hardware modifications or reconfiguration.

You might ask: if there is no intent for Palladium to be secure
against the local user, then why would they design it so that the local
user has to use (simple) hardware attacks?  Could they not instead
just make these functions available with a user-present test, in the
same way that the TOR and SCP functions can be configured by the user
(but not by hostile software)?


One of the services that Palladium offers, according to the talk 
announcement, is:

b. Attestation. The ability for a piece of code to digitally sign
or otherwise attest to a piece of data and further assure the
signature recipient that the data was constructed by an unforgeable,
cryptographically identified software stack.


It seems to me such a service requires that Palladium be secure 
against the local user. I think that is the main goal of the product.


For example, why not a local user-present function to lie about the
TOR hash to allow debugging.


Adam Back wrote:
- isn't it quite weak as someone could send different information to
the SCP and processor, thereby being able to forge remote attestation
without having to tamper with the SCP; and hence being able to run
different TOR, observe trusted agents etc.

There is also a change to the PC memory management to support a
trusted bit for memory segments. Programs not in trusted mode can't
access trusted memory.


A trusted bit in the segment register doesn't make it particularly
hard to break if you have access to the hardware.

For example you could:

- replace your RAM with dual-ported video RAM (which can be read using
alternate equipment on the 2nd port).

- just keep RAM powered-up through a reboot so that you load a new TOR
which lets you read the RAM.


Brian mentioned that the system will not be secure against someone 
who can access the memory bus.  But I can see steps being taken in 
the future to make that mechanically difficult. The history of the 
scanner laws is instructive. Originally one had the right to listen 
to any radio communication as long as one did not make use of the 
information received. Then Congress banned the sale of scanners that 
can receive cell phone frequencies. Subsequently the laws were 
tightened to require scanners be designed so that their frequency 
range cannot be modified.  In practice this means the control chip 
must be potted in epoxy.  I can see similar steps being taken with 
Palladium PCs. Memory expansion could be dealt with by finding a way 
to give Palladium preferred access to the first block of physical 
memory that is soldered on the mother board.



Also there will be three additional x86 instructions (in microcode)
to support secure boot of the trusted kernel and present a SHA1 hash
of the kernel code in a read only register. 


But how will the SCP know that the hash it reads comes from the
processor (as opposed to being forged by the user)?  Is there any
authenticated communication between the processor and the SCP?


Brian also mentioned that there would be changes to the Southbridge 
LPC (Low Pin Count) bus, which I gather is a local I/O bus in PCs. 
The SCP will sit on that, and presumably the changes are to ensure 
that the SCP can only be accessed in secure mode.

At 12:27 AM +0100 10/22/02, Peter Clay wrote:
I've been trying to figure out whether the following attack will be
feasible in a Pd system, and what would have to be incorporated to protect
against it.

Alice runs trusted application T on her computer. This is some sort of
media application, which acts on encoded data streamed over the
internet. Mallory persuades Alice to stream data which causes a buffer
overrun in T. The malicious code, running with all of T's privileges:

- abducts choice valuable data protected by T (e.g. individual book keys
for ebooks)
- builds its own vault with its own key
- installs a modified version of T, V, in that vault with access to the
valuable data
- trashes T's vault

The viral application V is then in an interesting position. Alice has two
choices:

- nuke V and lose all her data (possibly including all backups, depending
on how backup of vaults works)
- allow V to act freely


There are two cases here. One is a buffer overflow in one of the 
trusted agents running in Palladium. Presumably an attack here will 
only be able to damage vaults associated with the product

Re: palladium presentation - anyone going?

2002-10-20 Thread Arnold G. Reinhold
At 7:15 PM +0100 10/17/02, Adam Back wrote:

Would someone at MIT / in Boston area like to go to this [see end] and send a
report to the list?


I went. It was a good talk. The room was jam packed. Brian is very 
forthright and sincere. After he finished speaking, Richard Stallman 
gave an uninvited rebuttal speech,  saying Palladium was very 
dangerous and ought to be banned.  His concerns are legitimate, but 
the net effect, I think, was to make the Q&A session that followed 
less hostile.

Palladium sets up a separate trusted virtual computer inside the PC 
processor, with its own OS, called Nexus, and its own applications, 
called agents. The trusted computer communicates with a security 
co-processor on the mother board,  and has a secure channel to your 
keyboard and mouse and to a selected window on your CRT screen.

How to prevent the secure channel to the on-screen window from being 
spoofed is still an open problem. Brian suggested a secure mode LED 
that lights when that window has focus or having the secure window 
display a mother's-maiden-name type code word that you only tell 
Nexus.  Of course this doesn't matter for DRM since *your* trusting 
the window is not the issue.

All disk and network I/O is done thru the untrusted Windows OS on the 
theory that the trusted machine will encrypt anything it wants to 
keep private. Windows even takes care of Nexus scheduling.

A major design goal is that all existing software must run without 
change. Users are not required to boot Palladium at all, and are to 
be able to boot it long after Windows has booted.

Might help clear up some of the currently
unexplained aspects about Palladium, such as:

- why they think it couldn't be used to protect software copyright (as
the subject of Lucky's patent)


The specific question never came up. As Brian did say, Palladium is 
just a platform. People can build whatever they want on top of it. 
It seemed clear to me that the primary goal is DRM, but as someone 
else in the audience said (approximate quote): "We always hear that 
you can't do this or that without trusted hardware. Well, this is 
trusted hardware."  I don't see why anyone would think protecting 
software copyright could not be done.


- are there plans to move SCP functions into processor?  any relation
to Intel LaGrande


No. The SCP is based on a smart card core and is to be a lightweight, 
low-pin-count chip with a target cost of $1 in volume.  I 
presume future deals between MS and Intel are always possible.

The SCP will support several algorithms, including 2048-bit RSA, 
128-bit AES, SHA-1, and HMAC. They may include another cipher and 
another hash. There will also be a FIPS 140-2 random number generator 
and several monotonic counters, but no time-of-day clock. Each chip 
will have a unique RSA key pair, an AES key and an HMAC key. The only 
key that the SCP will reveal to the outside is the RSA public key, and 
it will only do that once per power-up cycle.
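
To make those key-handling rules concrete, here is a rough Python
model of the SCP behavior as described (my own sketch, assuming the
pyca/cryptography package; the class and method names are invented
for illustration and are not from any Microsoft spec):

# Hypothetical model of the SCP key-handling rules described above.
# Illustrative only; names are invented, not Microsoft's.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

class SCPModel:
    def __init__(self):
        # Each chip gets a unique RSA key pair, an AES key and an HMAC key.
        self._rsa = rsa.generate_private_key(public_exponent=65537,
                                             key_size=2048)
        self._aes_key = os.urandom(16)    # 128-bit AES key, never revealed
        self._hmac_key = os.urandom(32)   # HMAC key, never revealed
        self._counter = 0                 # one of the monotonic counters
        self._pubkey_read = False

    def public_key(self):
        # The RSA public key is the only key ever revealed, and only
        # once per power-up cycle.
        if self._pubkey_read:
            raise RuntimeError("public key already read this power cycle")
        self._pubkey_read = True
        return self._rsa.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo)

    def attest(self, stack_hash: bytes) -> bytes:
        # Sign a hash of the software stack, bound to a counter value.
        self._counter += 1
        return self._rsa.sign(self._counter.to_bytes(8, "big") + stack_hash,
                              padding.PKCS1v15(), hashes.SHA256())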


- isn't it quite weak as someone could send different information to
the SCP and processor, thereby being able to forge remote attestation
without having to tamper with the SCP; and hence being able to run
different TOR, observe trusted agents etc.


There is also a change to the PC memory management to support a 
trusted bit for memory segments. Programs not in trusted mode can't 
access trusted memory. Also there will be three additional x86 
instructions (in microcode) to support secure boot of the trusted 
kernel and present a SHA1 hash of the kernel code in a read only 
register.  There may be a hole somewhere, but Microsoft is trying 
hard to get it right and Brian seemed quite competent.


I notice at the bottom of the talk invite it says

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

but in this case how does it meet the BORA prevention.  Is it BORA
prevention _presuming_ the local user is not interested to reconfigure
his own hardware?


Near as I can see, the real trust comes from the RSA key pair stored 
in the SCP and a cert on that key from the SCP manufacturer.  There 
is no command to obtain the private key from the SCP.  Presumably 
they leverage smart card technology plus whatever tricks they think 
of to make it hard to get that key.   Differential power analysis or 
HNO3 might do the trick. We'll have to wait and see.


Will it really make any significant difference to DRM enforcement
rates?  Wouldn't the subset of the file sharing community who produce
DVD rips still produce Pd DRM rips if the only protection is the
assumption that the user won't make simple hardware modifications.


The real question from Microsoft's stand point is will the 
entertainment industry be satisfied with Palladium's level of 
security and release content that can play on Palladium equipped PCs? 
DVDs aren't Hollywood's main problem.  Movies are becoming available 
online long before the DVD is 

Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread Arnold G. Reinhold

I can see a number of problems with using mobile phones as a second 
channel for authentication:

1. It begs the question of tamper resistant hardware. Unless the 
phone contains a tamper resistant serial number or key, it is 
relatively easy to clone. And cell phones are merging with PDAs. If 
you have secure storage, why not implement a local solution on the 
PDA side?

2. Even if the phone is tamperproof, SMS messages can be intercepted. 
I can imagine a man-in-the-middle attack where the attacker cuts the 
user off after getting the SMS message, before the user has a chance 
to enter their code.

3. Cell phones don't work everywhere. Geographic coverage is limited. 
Most U.S. phones don't work overseas. Reception can fail inside 
buildings and cell phone use is prohibited on commercial airplanes 
in-flight (the airlines are planning to offer Internet access in the 
near future). And what happens if I choose to TEMPEST shield my 
facility?

4. The cell phone network can get clogged in times of high stress, 
e.g. a snow storm at rush hour, a natural disaster or a terrorist 
incident. Presumably some people who use two factor authentication 
have important work to do. Do you want them to be locked out of their 
computers at such critical times?

5. Cell phones are vulnerable to denial of service attacks. A simple 
RF jammer could prevent an individual or an entire building from 
accessing their computers.

6. People are generally cavalier about their cell phones. They wear 
them on belt pouches, leave them in cars and gym lockers, let 
strangers borrow them. I left mine in a coat pocket that I checked at 
a restaurant and ended up with a $40 long distance bill. Habits like 
that are hard to change. On the other hand, a token that goes on a 
key chain or is worn as jewelry taps into more security conscious 
cultural behavior.  Human factors are usually the weak link in 
security, so such considerations are important.

7. It's a tax on logins. SMS messages aren't free.

8. If I lose my token, I can use my cell phone to report it promptly. 
If I lose my cell phone...

9. Improved technology should make authentication tokens even more 
attractive. For one thing they can be made very small and waterproof. 
Connection modes like USB and Bluetooth can eliminate the need to 
type in a code, or allow the PIN to be entered directly into the 
token (my preference).

10. There is room for more innovative tokens. Imagine a finger ring 
that detects body heat and pulse and knows if it has been removed. It 
could then refuse to work, emit a distress code when next used or 
simply require an additional authentication step to be reactivated. 
Even implants are feasible.


Arnold Reinhold



At 8:56 AM -0700 10/9/02, Ed Gerck wrote:
Tamper-resistant hardware is out, second channel with remote source is in.
Trust can be induced this way too, and better. There is no need for 
PRNG in plain
view, no seed value known. Delay time of 60 seconds (or more) is fine because
each one-time code applies only to one page served.

Please take a look at:
http://www.rsasecurity.com/products/mobile/datasheets/SIDMOB_DS_0802.pdf

and http://nma.com/zsentry/

Microsoft's move is good, RSA gets a good ride too, and the door may open
for a standards-based two-channel authentication method.

Cheers,
Ed Gerck

Roy M.Silvernail wrote:

 On Tuesday 08 October 2002 10:11 pm, it was said:

  Microsoft marries RSA Security to Windows
  http://www.theregister.co.uk/content/55/27499.html

 [...]

  The first initiatives will centre on Microsoft's licensing of RSA SecurID
  two-factor authentication software and RSA Security's 
development of an RSA
  SecurID Software Token for Pocket PC.

 And here, I thought that a portion of the security embodied in a SecurID
 token was the fact that it was a tamper-resistant, independent piece of
 hardware.  Now M$ wants to put the PRNG out in plain view, along with its
  seed value. This cherry is just begging to be picked by some blackhat,
  probably exploiting a hole in Pocket Outlook.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread Arnold G. Reinhold

At 8:40 AM -0700 10/11/02, Ed Gerck wrote:
Arnold G. Reinhold wrote:

 I can see a number of problems with using mobile phones as a second
 channel for authentication:

Great questions. Without aspiring to exhaust the answers, let me comment.

 1. It begs the question of tamper resistant hardware. Unless the
 phone contains a tamper resistant serial number or key, it is
 relatively easy to clone. And cell phones are merging with PDAs. If
 you have secure storage, why not implement a local solution on the
 PDA side?

Cloning the cell phone has no effect unless you also have the credentials
to initiate the transaction. The cell phone cannot initiate the authentication
event. Of course, if you put a gun to the user's head you can get it all but
that is not the threat model.

If we're looking at high security applications, an analysis of a 
two-factor system has to assume that one factor is compromised (as 
you point out at the end of your response). I concede that there are 
large classes of low security applications where using a cell phone 
may be good enough, particularly where the user may not be 
cooperative. This includes situations where users have an economic 
incentive to share their login/password, e.g. subscriptions, and in 
privacy applications ("Our logs show you accessed Mr. Celebrity's 
medical records, yet he was never your patient." "Someone must have 
guessed my password." "How did they get your cell phone too?") Here 
the issue is preventing the user from cloning his account or denying 
its unauthorized use, not authentication.


A local solution on the PDA side is possible too, and may be helpful where
the mobile service may not work. However, it has less potential for wide
use. Today, 95% of all cell phones used in the US are SMS enabled.

What percentage are enabled for downloadable games? A security 
program would be simpler than most games.  It might be feasible to 
upload a new game periodically for added security.


 2. Even if the phone is tamperproof, SMS messages can be intercepted.
 I can imagine a man-in-the-middle attack where the attacker cuts the
 user off after getting the SMS message, before the user has a chance
 to enter their code.

Has no effect if the system is well-designed. It's possible to make 
it mandatory
(under strong crypto assurances) to enter the one-time code using the *same*
browser page provided in response to the authentication request -- which
page is supplied under server-authenticated SSL (no MITM).

You may be right here, though assuming SSL lets one solve a lot of 
security problems associated with traditional password login.


 3. Cell phones don't work everywhere. Geographic coverage is limited.
 Most U.S. phones don't work overseas. Reception can fail inside
 buildings and cell phone use is prohibited on commercial airplanes
 in-flight (the airlines are planning to offer Internet access in the
 near future). And what happens if I choose to TEMPEST shield my
 facility?

No solution works everywhere. Cell phones are no exception. But it is
possible to design the system in a such a way that the user can use 
a different
access class (with less privileges, for example) if the cell phone does
not work. After all, the user is authenticated before the message is sent to
the cell phone.

That said, cell phone coverage is becoming ubiquitous and the solution also
works with pagers (while they still exist), email accounts (blackberrys) and
other means of communication -- including voice.

Security tokens work everywhere I can think of.  I'm not sure the 
cell companies are spending much to push into rural areas given the 
current economy.  Might be a new market for Iridium, but that doesn't 
work well inside buildings.


 4. The cell phone network can get clogged in times of high stress,
 e.g. a snow storm at rush hour, a natural disaster or a terrorist
 incident. Presumably some people who use two factor authentication
  have important work to do. Do you want them to be locked out of their
 computers at such critical times?

Let's be careful with generalizations. During the tragic events of 9/11, cell
phones emerged as the solution for communication  under a 
distributed terrorist
attack.

The WTC collapse took out a major portion of lower Manhattan's 
landline capacity. Cell phones were better than nothing, but many 
people experienced difficulty placing calls.  It is simply too 
expensive to design a switched system to handle all the calls people 
want to make in a major crisis. Military systems include priority 
tags to deal with this.

This does raise an interesting possibility: giving SMS messages 
priority over voice could be very useful in an emergency. SMS 
messages take much less bandwidth than voice and the entry mechanism 
on most cell phones is very slow. So existing cell infrastructure 
might be able to handle all the SMS traffic generated in a crisis. 
Anyone know if cell phone companies are doing this?


Second, as I hint somewhere above

Re: unforgeable optical tokens?

2002-09-24 Thread Arnold G. Reinhold

It might be possible to get the same effect using a conventional 
silicon chip. I have in mind a large analog circuit, something like a 
multi-stage neural network. Random defects would be induced, either 
in the crystal growing process or by exposing the wafer at one or 
more stages with a spray of pellets or chemicals. The effect would be 
to cut wires and alter component values such as resistances, zener 
diode breakdown voltages, and transistor gains.

Critical parts of the circuit would be protected by a passivation 
layer or would simply be designed with larger geometries to make them 
less sensitive. Multiple inputs would be driven by D/A converters, 
either in parallel or through a charge-coupled analog shift register. 
There would be enough "stuff" in the middle to make it impractical to 
characterize the entire circuit from the inputs. One could use very 
small geometries for the network and still get high circuit yield 
since defects are something we want.
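
As a thought experiment, the challenge-response behavior of such a
defect-laden network can be mocked up in a few lines of Python (a toy
model of my own devising; the per-device random seed stands in for the
manufacturing defects, and the clipping stands in for diode and
transistor limits):

# Toy simulation of the defect-laden analog network sketched above.
import random, hashlib

class DefectNetwork:
    def __init__(self, device_seed, width=64, depth=8):
        rng = random.Random(device_seed)
        # Random component values, with ~10% of wires cut, fixed
        # forever at manufacture time.
        self.layers = [[[rng.uniform(-1, 1) if rng.random() > 0.1 else 0.0
                         for _ in range(width)]
                        for _ in range(width)]
                       for _ in range(depth)]

    def respond(self, challenge: bytes) -> bytes:
        # The D/A converters: drive the inputs from the challenge.
        state = [b / 255.0 for b in hashlib.sha256(challenge).digest()] * 2
        for layer in self.layers:
            # Saturating stage: each output clipped to [-1, 1].
            state = [max(-1.0, min(1.0,
                         sum(w * s for w, s in zip(row, state))))
                     for row in layer]
        bits = "".join("1" if s > 0 else "0" for s in state)
        return hashlib.sha256(bits.encode()).digest()[:8]

Two chips with different seeds give unrelated responses to the same
challenge, which is the property the induced defects are meant to
provide.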

The advantage of this approach over an optical system is that it would 
be very easy to interface with existing technology -- smart cards, RF 
ID, dongles, etc.

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: building a true RNG

2002-07-29 Thread Arnold G. Reinhold

At 12:20 PM -0700 7/29/02, David Honig wrote:

Whether there is a need for very high bandwidth RNGs was discussed
on cypherpunks a few months ago, and no examples were found.
(Unless you're using something like a one-time pad where you need
a random bit for every cargo bit.)  Keeping in mind that
a commerical crypto server can often accumulate entropy during
off-peak hours. 


It's been discussed here some time back as well. If you believe your 
crypto primitives are infeasible to break, a crypto-based PRNG with a 
long enough random seed should be indistinguishable from a true, 
perfect RNG. If you are only confident that your crypto primitives 
are expensive to break, then using a true RNG for keys and nonces, 
rather than deriving them all from one PRNG, adds security.

This suggests a continuum of solutions: construct a crypto PRNG and 
periodically (once enough has accumulated) stir your entropy source 
into its state in some safe way. If you extract entropy slower than 
you put it in, you can expect the equivalent of a true RNG. If you 
extract entropy faster than you put it in, the system degrades 
gracefully in the sense that someone who expends the effort to break 
the number generation scheme only gets to read messages since the 
last entropy update.
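
A minimal sketch of that continuum in Python, with SHA-256 standing in
for whatever primitive one trusts (my own illustration, not a vetted
design):

# A crypto PRNG that batches entropy and stirs it into its state
# periodically, degrading gracefully if output outpaces input.
import hashlib

class StirredPRNG:
    THRESHOLD = 32  # bytes of pooled entropy required before a reseed

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"init" + seed).digest()
        self.pool = b""
        self.counter = 0

    def add_entropy(self, sample: bytes):
        # Batch small samples and stir only once enough has
        # accumulated, so an attacker who broke the state once cannot
        # recover each small input by exhaustive search.
        self.pool += sample
        if len(self.pool) >= self.THRESHOLD:
            self.state = hashlib.sha256(self.state + self.pool).digest()
            self.pool = b""

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(self.state +
                                  self.counter.to_bytes(8, "big")).digest()
        return out[:n]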

The reason for batching entropy input is to prevent someone who has 
broken your system once from discovering each small entropy input by 
exhaustive search.  (There was a nice paper pointing this out. If 
someone has the reference...)

Arnold Reinhold

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: It's Time to Abandon Insecure Languages

2002-07-21 Thread Arnold G. Reinhold

Language wars have been with us since the earliest days of computing 
and we are obviously not going to resolve them here.  It seems to me 
though, that cryptographic tools could be used to improve the 
reliability and security of C++ by providing ways to manage risky 
usages.

I have in mind a modified development environment that detects 
dangerous programming instances like pointer arithmetic,  assignments 
in if statements, C (as opposed to C++) strings, char array 
declarations, malloc calls, etc.  Methods where such usage is necessary 
would be signed by the author and one or more reviewers, with the 
signature embedded inside a special comment statement.  The 
development environment would then check whether only approved usages 
are present and, if so, sign the executable file. Final versions of 
code would be built on trusted servers whose compilers could not be 
tampered with and whose private key is not accessible to the 
developers.

Implementing such an environment should not be difficult. No real 
language changes would be involved, beyond reserving a standardized 
comment prefix for signatures. Most programmers would only be able to 
employ safe objects and constructs.  The few instances where 
dangerous usages were really needed would be limited, visible and 
require authorization.
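
A toy illustration of the idea in Python (the regex patterns are crude
stand-ins for real parsing, and an HMAC stands in for the author and
reviewer signatures; everything here is invented for illustration):

# Scan C/C++ source for dangerous usages and demand a signed approval
# comment on each one, in the spirit of the environment described above.
import re, hmac, hashlib

DANGEROUS = [r"\bstrcpy\s*\(", r"\bgets\s*\(", r"\bmalloc\s*\(",
             r"if\s*\([^=!<>]*[^=!<>]=[^=]"]   # assignment inside an if

REVIEW_KEY = b"reviewer-signing-key"   # stand-in for a real signature key

def approval_tag(code: str) -> str:
    return hmac.new(REVIEW_KEY, code.encode(), hashlib.sha256).hexdigest()[:16]

def check_source(src: str):
    # Return dangerous lines that lack a valid approval comment.
    violations = []
    for line in src.splitlines():
        if any(re.search(p, line) for p in DANGEROUS):
            m = re.search(r"/\*\s*APPROVED:(\w+)\s*\*/", line)
            code = line.split("/*")[0].rstrip()
            if not (m and hmac.compare_digest(m.group(1), approval_tag(code))):
                violations.append(line)
    return violations

ok = "buf = malloc(n); /* APPROVED:%s */" % approval_tag("buf = malloc(n);")
print(check_source(ok + "\nstrcpy(dst, s);"))  # flags only the strcpy line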

Arnold Reinhold

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: crypto question

2002-03-29 Thread Arnold G. Reinhold

At 12:23 PM -0700 3/24/02, [EMAIL PROTECTED] wrote:
or just security proportional to risk ...

While that is a valid engineering truism, I have a number of issues with the dictum:

1.  It is too often used as an excuse for inaction by people who are 
poorly equipped to judge either risk or cost.  We've all encountered 
the "experts on tap, not on top" attitude of many managements.  There 
was a good reason the U.S. centralized all crypto in the NSA after WW 
II. Managers in organizations like the State Department simply 
ignored known security compromises.  Communications security never 
had a high priority with functional managers, so it was taken away 
from them.

2. Costs are often overstated or quoted out of context. A $1000 
coprocessor that can verify 100 keys per second ends up costing under 
a millicent per verification, even allowing a large factor for peak 
demand (see the back-of-the-envelope check after this list).  The 
added cost to store long keys is tiny. Good engineering 
(often the biggest cost) can be spread over many applications. Cost 
of keeping up with security patches is likely modest compared to 24/7 
watchman security for a physical location.

3. The nature of risk is very different in cyberspace. Many 
cryptographic techniques introduce single points of failure.  Bonnie 
and Clyde can't rob all the banks at once, but the wily hacker might. 
It may be cheaper to employ bullet-proof solutions than to really 
understand the risks in "good enough" approaches.

4. There is also the question of risk to whom. Many businesses seem 
to assume that the government will pick up the tab for a major cyber 
terrorism incident.  If business execs can say with a straight face 
that basic accounting principles are too difficult for them to grasp, 
imagine what they will say about a massive crypto failure. So in a 
sense taxpayers and  consumers are being asked to insure some of 
these risks.  I suspect they would gladly pay the added costs 
(pennies) to apply the best available technology.

5. There is a failure to distinguish between components and systems. 
It may be true that any real world system has holes, but that is no 
reason to give up on perfecting the tools used to build these 
systems. Incorporating known weaknesses into new designs is not 
justifiable, absent a compelling, fact-based, cost/security analysis.
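
The back-of-the-envelope check promised in point 2, assuming the
coprocessor is amortized over three years of continuous service:

# Cost per verification for a $1000 coprocessor at 100 keys/second.
cost_millicents = 1000 * 100 * 1000         # $1000 expressed in millicents
verifications = 100 * 3 * 365 * 24 * 3600   # ~9.5e9 over three years
print(cost_millicents / verifications)      # ~0.01 millicents each

Even allowing a factor of fifty for peak-demand idle time, the cost
stays comfortably under a millicent.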

Arnold Reinhold


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: crypto question

2002-03-23 Thread Arnold G. Reinhold

There are groups with lots of money and dedicated, trained agents who 
are willing to die that would dearly like to steal a nuclear weapon. 
So far, they have not succeeded (if they do, I fear we will know 
about it quickly).  So someone has been able to do physical security 
right.

The problem is doing it in a way that is affordable and doesn't 
require an army. Designing computers that can detect an attack seems 
worth exploring. FIPS-140 envisions such an approach when it talks 
about wrapping security modules in a mesh of insulated wire whose 
penetration tells the module to zeroize.

I'm not sure what changes in your argument if you delete the word 
"physical."  Perhaps we should all just give up on this security 
nonsense.


Arnold Reinhold



At 11:28 PM -0600 3/21/02, Jim Choate wrote:
As someone who spent 5 years doing all the physical security for a major
university I can say that ALL physical systems can be broken. No
exception. The three laws of thermodynamics apply to security systems as
well.

There is ALWAYS a hole.

On Thu, 21 Mar 2002, Arnold G. Reinhold wrote:

 It's not clear to me what having the human present accomplishes.
 While the power was out, the node computer could have been tampered
 with, e.g. a key logger attached.

 Who said you were allowed to lose power and stay secure? Laptops are
 pretty cheap and come with multi-hour batteries.  There should be
 enough physical security around the node to prevent someone from
 tripping power.

 One approach might be to surround a remote node with enough sensors
 so that it can detect an unauthorized attempt to physically approach
 it.


 --


 There is less in this than meets the eye.

 Tallulah Bankhead
 [EMAIL PROTECTED] www.ssz.com
 [EMAIL PROTECTED]  www.open-forge.org



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: crypto question

2002-03-21 Thread Arnold G. Reinhold

At 8:52 PM -0800 3/20/02, Mike Brodhead wrote:
  The usual good solution is to make a human type in a secret.

Of course, the downside is that the appropriate human must be present
for the system to come up properly.

It's not clear to me what having the human present accomplishes. 
While the power was out, the node computer could have been tampered 
with, e.g. a key logger attached.


In some situations, the system must be able to boot into a working
state.  That way, even if somebody accidentally trips the power-- I've
had this happen on production boxen --the system outage lasts only as
long as the boot time.  If a particular human (or one of a small
number of secret holders) must be involved, then the outage could be
measured in hours rather than minutes.

Who said you were allowed to lose power and stay secure? Laptops are 
pretty cheap and come with multi-hour batteries.  There should be 
enough physical security around the node to prevent someone from 
tripping power.

One approach might be to surround a remote node with enough sensors 
so that it can detect an unauthorized attempt to physically approach 
it. Web cams are pretty cheap. Several cameras and/or mirrors would 
be required to get 4Pi coverage.  Software could detect frame to 
frame changes that indicated an intrusion. The machine would be kept 
in a secure closet or cabinet. The machine would be set up in 
whatever location by a trusted person or team and would remain 
conscious from then on. Entry would be authorized via an 
authenticated link. Any unauthorized entry would result in the node 
destroying its secrets. It would then have to be replaced.
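
The frame-comparison part of that idea is simple enough to sketch (a
toy version; a real system would need lighting compensation and better
statistics than a single threshold):

# Toy frame-differencing intrusion detector for the sensor scheme above.
# Frames are flat lists of 0-255 grayscale pixel values.

THRESHOLD = 12.0   # mean per-pixel change that counts as an intrusion

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def watch(frames, zeroize):
    prev = None
    for frame in frames:
        if prev is not None and mean_abs_diff(prev, frame) > THRESHOLD:
            zeroize()   # unauthorized entry: destroy the node's secrets
            return
        prev = frame

quiet = [10] * 10000
intruder = [10] * 5000 + [200] * 5000     # someone steps into view
watch([quiet, quiet, intruder], lambda: print("secrets zeroized"))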


Don't forget that Availability is also an important aspect of
security.  It all depends on your threat model.


The approach I outlined offers very high availability.


Arnold Reinhold

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-02-26 Thread Arnold G. Reinhold

At 11:49 AM -0800 2/25/02, bear wrote:
...
The secure forever level of difficulty that we used to believe
we got from 2kbit keys in RSA is apparently a property of 6kbit
keys and higher, barring further highly-unexpected discoveries.

Highly-unexpected?   All of public key cryptography is built on 
unproven mathematical assumptions. Why should this be the last 
breakthrough? If you plot the curve of what key length was considered 
long enough as a function of time, it doesn't look very good.

Perhaps it is time to stop claiming "secure forever" altogether until 
solid mathematical proofs of security are available.

...
I predict that Elliptic-Curve systems are about to become more
popular.


I'm not completely comfortable with Elliptic-Curve systems. The 
mathematics is relatively young and has seen a lot of progress. Yet 
typical EC key length recommendations are based on the assumption 
that there is no way to calculate discrete logs in EC groups that is 
any faster than the general algorithm that applies to all finite 
groups. That sounds pretty aggressive to me.

If we are going to have to upgrade OpenPGP standards in light of the 
Bernstein paper, I would suggest a standard that combines RSA, EC 
and, if possible, a third PK system whose algorithm is based on an 
apparently independent problem.  The advantage of double or triple 
encryption is that a breakthrough in one problem area does not 
immediately compromise all your previously encrypted data. And you 
can upgrade the component key in question and distribute it signed 
with the old key, without having to start from scratch in establishing 
trust. Most personal computers are capable of this level of security. 
Why settle for less?
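
One simple way to combine two systems so that both must fall is to
XOR-split the session key, wrapping each share under a different
public-key algorithm. A sketch (the RSA and EC wrapping steps are only
indicated, not implemented):

# Double encryption via key splitting: breaking one PK algorithm
# reveals one share, which alone says nothing about the session key.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

session_key = secrets.token_bytes(32)
share_a = secrets.token_bytes(32)        # wrap this share under RSA
share_b = xor(session_key, share_a)      # wrap this share under EC

# ...each share is then encrypted with its own PK system (not shown)...

assert xor(share_a, share_b) == session_key   # recipient recombines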


Arnold Reinhold

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Report on a James Bamford Talk at Berkeley

2002-02-22 Thread Arnold G. Reinhold

At 4:42 PM -0500 2/17/02, R. A. Hettinga wrote:
http://www.lewrockwell.com/orig2/bamfordreport.html


Report on a
James Bamford Talk at Berkeley

James Bamford is the author of The Puzzle Palace and Body of Secrets, books
about the National Security Agency. He is visiting Berkeley in the School
of Public Policy, and gave a talk entitled Intelligence Failures that Led
to the September 11th Attacks.

...
NSA was created after WWII from the code breaking activity that had
proceeded during the war. At the time it was created, no one but a couple
of people even knew it had been created. NSA stands for no such agency, or
never say anything, or after Puzzle Palace, not secret anymore. To
illustrate how secret NSA is, after Puzzle Palace was published, Bamford
went on a book tour. At one point he was scheduled onto a PBS show where
the other guest was Sen. Bill Bradley. Prior to the show, the Senator asked
Bamford why he was on the show, and he explained that he had written a book
on the NSA. Bradley asked him what that was, and Bamford explained. Then
Bradley went on the show to explain his ideas for the economy, or whatever,
and then the interview switched to Bamford. Bamford explained that the NSA
was a secret agency. The interviewer said How secret? And, naturally,
Bamford did not pass on the opportunity to say that it was so secret that
not even Sen. Bradley knew about it. Bradley was not pleased.

Nonsense! The NSA's existence and purpose haven't been much of a 
secret since the early 60's, long before The Puzzle Palace was 
published in 1982.  Kahn has a chapter on NSA in The Codebreakers, 
which came out in 1967, and that chapter wasn't much of a revelation 
even back then. No wonder Bradley was not pleased. He'd been 
sandbagged.

...

About NSA: it is 38,000 people, 50 buildings, on a campus in Maryland, in
suburban Washington DC. They have the most powerful computers in the world,
1.6 million tapes in their tape library [tapes?].

The helical scan system, later commercialized as the video tape 
recorder, was invented for NSA so they could record whole swaths of 
the radio spectrum for later analysis. Instead of monitoring each 
station, they could go back and replay the tapes once they knew what 
to look for. I believe they attempted to record the entire HF 
spectrum continuously from multiple locations. 1.6 million tapes 
sounds low if anything.

Basically they do signals
intelligence, listening to phone calls, faxes, email, and any sort of
communication.

Other signals as well, such as missile telemetry, satellite control, 
unintended emissions and all types of radar. (I believe one of the 
arms control treaties prohibits the U.S. and Russia from encrypting 
missile test telemetry.)

To do this they have extensive facilities all around the
world. One technique that Bamford mentioned was how they capture microwave
signals. Microwave, unlike high frequency signals [HF are actually lower
frequency than microwave, in case you care], do not bounce off the
ionosphere and travel in a straight line. Towers must be line of sight from
each other. So how's the NSA going to listen to this? Answer is that some
of the radiation goes past the receiving station, and continues in its
straight line out into space. The NSA has satellites out there to grab the
signals. [Bamford described the satellites as geosynchronous, but that
wouldn't work.]

I think Bamford got it close to right on this one.  Aviation Week has 
reported on NSA geosynchronous satellite launches from time to time. 
See also 
http://users.ox.ac.uk/~daveh/Space/Military/milspace_sigint.html and 
http://www.fas.org/spp/military/program/sigint/androart.htm

There are several problems with low earth orbit satellites as 
listening systems. First, they are only able to monitor any given spot 
for a short time. Target countries can shut off systems of interest 
while the satellite is overhead.  The second is that low earth orbit 
satellites can be attacked more easily and quickly in time of war. 
The U.S. at one time had an air launched missile that could do this. 
Geosynchronous orbit takes more energy and a longer time to get to.

Finally, a directional antenna on a low earth orbit satellite has to 
be steered very rapidly as the satellite moves over its target. That 
is very hard to do mechanically, and electronically steered antennae 
have narrow bandwidths, not what NSA wants for monitoring.  A 
geosynchronous monitoring satellite can have a huge, light weight 
parabolic mesh pointed at Earth. It only needs to steer very slowly, 
if at all. Remember that while signal strength drops as the square of 
the distance, a parabolic antenna's gain grows as the square of its 
diameter.  Geosynchronous orbit is about 50 times higher than typical 
low earth orbits used by NSA, so a 50 times wider antenna gets you to 
break-even on signal strength.
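
A quick check of that break-even arithmetic (orbit altitudes are
approximate):

# Received signal ~ antenna gain / distance^2, and a parabolic dish's
# gain grows as the square of its diameter, so the two effects cancel.
leo_km, geo_km = 700.0, 36000.0       # rough altitudes
ratio = geo_km / leo_km               # ~51 times farther away
path_loss_penalty = ratio ** 2        # signal weaker by this factor
dish_gain_bonus = ratio ** 2          # a dish ~51x wider gains this back
print(round(ratio), dish_gain_bonus / path_loss_penalty)   # 51 1.0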

NSA also uses satellites in 12-hour semi-synchronous elliptical 
orbits for the same reason that the Soviets put their 

Re: Welome to the Internet, here's your private key

2002-02-08 Thread Arnold G. Reinhold

At 5:12 PM +0100 2/8/02, Jaap-Henk Hoepman wrote:
I think there _are_ good business reasons for them not wanting the users to
generate the keys all by themselves. Weak keys, and subsequent 
compromises, may
give the CA really bad press and resulting loss of reputation (and this
business is built on reputation anyway).

If the CA has nothing to do with key generation in the first place, 
I'm not sure how weak keys would affect the CA's reputation. "We had 
nothing to do with making that key, we just signed it" is a concept 
even the general public can understand. And the risk of weak keys 
seems small compared to the myriad ways a user's private key can be 
compromised.  If the CA has any access to private keys, any 
compromise can be blamed on the CA and diminish their reputation.

So: there are good reasons not to
let the CA generate the private key, but also good reasons to not let the user
generate the keys all by himself.

So the question is: are there key generation protocols for mutually 
distrustful
parties, that would give the CA the assurance that the key is generated using
some good randomness (coming from the CA) and would give the user 
the guarantee
that his private key is truly private. Also, the CA should be able to verify
later that the random data he supplied was actually used, but this should not
give him (too much) advantage to find the private key.

It's hard to see how to establish a secure protocol between the 
user's machine and the CA without a good source of randomness on the 
user's machine in the first place.  You can't presume there's a 
shared secret.

Simply providing an applet or plug-in to generate keys would seem 
sufficient.  The CA could maintain a list of approved smart cards 
based on inspecting their source code.  They might even let approved 
smart card vendors embed a signing key in the smart card to let the 
CA know that the user key had been generated by an approved device. 
Such a system could be defeated but it's not clear why anyone would 
have the motivation to do so. If someone wants to create a 
compromised key incident, they merely have to leak a key.


Arnold Reinhold

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Welome to the Internet, here's your private key

2002-02-07 Thread Arnold G. Reinhold

At 6:18 PM -0500 2/5/02, Ryan McBride wrote:
On Tue, Feb 05, 2002 at 11:16:40AM -0800, Bill Frantz wrote:
 I expect you could initialize the random data in that memory during
 manufacture with little loss of real security.  (If you are concerned about
 the card's manufacturer, then you have bigger problems.  If anyone does,
 the manufacturer has the necessary equipment to extract data from secret
 parts of the card, install Trojans etc.)

They say a secret is something you tell one other person
  -- U2, The Fly

While it is true that most users of smartcards will choose to simply
trust the manufacturer, paranoid users could use an "n choose m" type of
approach to achieve a certain level of assurance. In most cases
verifying that a card is trojan free is a destructive process, so the
user would test a relatively low percentage of cards and make the
penalty for cheating high enough to ensure that the manufacturer stays
honest.

One criterion for a cryptographic system that is rarely mentioned is 
auditability. To the maximum extent possible users should be able to 
verify every component of the system that affects security. We have 
gotten too used to systems so bloated that they no one can know 
what's in them. There are historic reasons for this but that is no 
excuse. Finding out how to simplify systems is far more important 
today than designing the next great cipher.  A great virtue of doing 
all crypto on a smart card is that they can be verified, at least 
with some effort.


Having the manufacturer provide the random data changes the burden of
proof drastically - there is no way to _prove_ that they did not
retain a copy of the random data, while it can be proved that they did
not try to cheat simply by testing all the cards.

And creates a potential legal liability  for the smart card 
manufacturer. This gets to the original question of this thread. I 
wonder why the CA's lawyers let them generate private keys 
themselves. If it ever came out that private keys were misused by CA 
employees or even someone who penetrated their security, they would 
be legally defenseless, all the gobbledygook in their practice 
statements notwithstanding. There is no good business reason for a 
CA to generate private keys and very powerful business reasons for 
them not to.


Arnold Reinhold

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Welome to the Internet, here's your private key

2002-02-05 Thread Arnold G. Reinhold

I'd argue that the RSA and DSA situations can be made equivalent if 
the card has some persistent memory. Some high quality randomness is 
needed at RSA key generation.  For the DSA case, use 256 bits of 
randomness at initialization to seed a PRNG using AES, say. Output 
from the PRNG could be then used to provide the nonces for DSA.  For 
extra credit, PRNG seed could be xor'd periodically with whatever 
randomness is available on chip.

The resulting DSA system requires about the same randomness at 
initialization as RSA. The additional vulnerability introduced 
requires breaking AES to exploit, even if no further randomness is 
available.  All things considered, I'd trust an AES PRNG more than a 
smart card RNG whose long term quality I cannot assess. Better to use 
both, of course.
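
A sketch of that construction in Python, assuming the pyca/cryptography
package for AES-CTR (the structure and names are mine, for illustration
only):

# 256-bit seed keys AES in counter mode; on-chip randomness, when
# available, is XORed into the key. Nonces for DSA come from keystream.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class NoncePRNG:
    def __init__(self, seed32: bytes):
        assert len(seed32) == 32
        self.key = seed32              # 256 bits of initialization entropy
        self.block = 0

    def stir(self, onchip_random: bytes):
        # XOR whatever randomness the card can offer into the seed.
        self.key = bytes(a ^ b for a, b in
                         zip(self.key, onchip_random.ljust(32, b"\0")))

    def next_nonce(self, nbytes: int = 20) -> bytes:
        self.block += 1
        # High 64 bits hold the call number so keystreams never overlap.
        ctr = (self.block << 64).to_bytes(16, "big")
        enc = Cipher(algorithms.AES(self.key), modes.CTR(ctr)).encryptor()
        return enc.update(b"\0" * nbytes)

prng = NoncePRNG(os.urandom(32))   # the 256-bit initialization seed
k = prng.next_nonce()              # a per-signature DSA nonce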

Arnold Reinhold



At 3:09 PM -0700 2/4/02, [EMAIL PROTECTED] wrote:
One could claim that one of the reasons for using RSA digital signatures
with smart cards rather than DSA or EC/DSA is the DSA & EC/DSA requirement
for quality random number generation as part of the signature process.

...


Cards with quality random numbers ... can

1) do on card key-gen
2) use DSA or EC/DSA
3) remove dependency on external source to include random number in message
to be signed.

DSA & EC/DSA, because they have a random number as part of the signing
process, precludes duplicate signatures on the same message ... multiple
messages with the same content & same exact signature is a replay. DSA &
EC/DSA doing multiple signings of the same content will always result in a
different signature value.

I've heard numbers on many of the 8bit smartcards ... power-cycle the card
each time it is asked to generate a random number & do random number
generation 65,000 times and look at results. For some significant
percentage of 8bit cards it isn't unusual to find 30 percent of the random
numbers duplicated.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-02-01 Thread Arnold G. Reinhold

At 7:38 AM -0800 1/29/02, Eric Rescorla wrote:
Ben Laurie [EMAIL PROTECTED] writes:
  Eric Rescorla wrote:

 BTW, I don't see why using a passphrase to a key makes you vulnerable to
 a dictionary attack (like, you really are going to have a dictionary of
 all possible 1024 bit keys crossed with all the possible passphrases?
 Sure!).
Unfortunately, dictionary attack is used differently by different
people. There are two different kinds of attacks here:

(1) A brute-force attack such as is used by Crack where you
successively try a small subset of the passphrase space in
the expectation that it is the space that people are likely
to populate. (This is what RFC 2828 calls a dictionary attack).

(2) A table-driven attack where you have an enormous table
(say of passphrases to keys) and just do a lookup in the table.

I was referring to the former, which is quite practical against
such a system. The latter probably consumes too much memory to
be practical.


I think there are significant advantages to a passphrase-derived 
public key system. It allows total portability and the encryption 
hardware can be totally zeroized between uses.  One of the biggest 
threats to modern cryptosystems is their large electronic footprint 
that leaves too much room to hide things.

Passphrase-derived public keys also allow very long term storage of 
keys (e.g. on acid free paper in a vault) without worries about 
deterioration of media or inability to read old formats.

Method 2 is totally impossible in systems that use long salt (48 bits 
or more) or probably-unique salt, e.g. an e-mail address or complete 
phone number.

Here are three very practical techniques to protect against Method 1:

The first is aggressive key stretching that burns up on the order of 
1 second of processing time and utilizes silicon-consuming resources 
like memory and 32-bit multiplies.

The second is for the system itself to suggest strong passphrases. 
Users could ignore the suggestion but nothing can protect a user who 
is not willing to follow recommended precautions. With good key 
stretching even a 5 word diceware passphrase (64-bit entropy) would 
provide strong protection.

The third would be to combine the password and salt with a secret 
stored in the encryption device. This makes the key dependent on the 
device, but requires the attacker to capture both the device and the 
passphrase.
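
A minimal sketch of the first and third techniques combined, with
PBKDF2 standing in for the kind of aggressive, memory-hungry stretching
described (the iteration count and example passphrase are illustrative
assumptions):

# Passphrase-derived keying with heavy stretching, long salt, and an
# optional device-stored secret.
import hashlib, os

def derive_key(passphrase, salt, device_secret=b"", iterations=1000000):
    # Long (48-bit or more) salt defeats precomputed tables; the device
    # secret, if used, forces an attacker to capture the hardware too.
    material = passphrase.encode("utf-8") + device_secret
    return hashlib.pbkdf2_hmac("sha256", material, salt, iterations, 32)

salt = os.urandom(8)                                 # 64-bit salt
key = derive_key("cleft cam synod lacy yr", salt)    # 5-word passphrase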


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Fingerprints (was: Re: biometrics)

2002-01-28 Thread Arnold G. Reinhold

There is some interesting information at http://www.finger-scan.com/ 
They make the point that finger scanning differs from finger printing 
in that what is stored is a set of recognition parameters much 
smaller than a complete fingerprint image.  So there is no need for a 
lengthy process to acquire an initial image.  Presumably this also 
makes finger scan data proprietary, since each vendor will use a 
different recognition algorithm.

Finger Scan also has a page on accuracy where they debunk other 
vendors' claims of 0.01% false reject/ 0.001% false accept, but tell 
you to e-mail them for the real numbers.

Arnold Reinhold


At 5:07 PM -0600 1/28/02, Rick Smith at Secure Computing wrote:
At 02:46 PM 1/28/2002, [EMAIL PROTECTED] wrote:

The process took about 20-30 minutes;

Have you been fingerprinted before? Did it take that long in that 
case? In my own experience, it only takes a few minutes to be 
fingerprinted on a standard card and, in theory, they should be able 
to build a database from high-res fingerprint card images. Some 
small percentage of the population has prints that are unusually 
hard to read. It might be time consuming to put such a person's 
prints onto a card.

Or perhaps it takes 20 minutes of ablutions and purifications to 
copy a fingerprint card, so they figure they might as well make the 
subject wait, too.


Rick.
[EMAIL PROTECTED]roseville, minnesota
Authentication in bookstores http://www.visi.com/crypto/




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: password-cracking by journalists... (long, sorry)

2002-01-22 Thread Arnold G. Reinhold

At 5:16 PM -0500 1/21/02, Will Rodger wrote:
Arnold says:

You can presumably write your own programs to decrypt your own 
files. But if you provide that service to someone else you could 
run afoul of the law as I read it. The DMCA prohibits trafficking 
in technology that can be used to circumvent technological 
protection measures. There is no language requiring proof that 
anyone's copyright was violated.  Traffic for hire and it's a 
felony.

I think there's a good argument to the contrary.

The DMCA only bans trafficking in devices whose _primary_ purpose is 
infringement.

No, DMCA bans trafficking in devices whose primary purpose is 
*circumvention.*   I'm not trying to nitpick; it's an important 
point. DMCA creates a whole new class of proscribed activity, 
circumvention, that does not require proof of infringement.

As for the phrase "primary purpose," I can easily see a judge 
accepting the argument that the primary purpose of a tool that breaks 
encryption is circumvention as defined in this act. In the 2600 case, 
the defense argued that DeCSS was also useful for playing purchased 
DVDs on Linux machines and for fair use. The courts dismissed this 
argument.

And it only applies to works protected by this Title, that is, 
Title 17, which is the collection of laws pertaining to copyright.

Right, but just about everything written today is copyrighted from 
the moment of creation. You have to go out of your way (or work for 
the U.S. government) to place new works in the public domain.


There was a very long, drawn out discussion of what would be banned 
and what not before passage. It included all sorts of people 
traipsing up to Capitol Hill to make sure that ordinary research and 
system maintenance, among other things, would not be prosecuted. 
Bruce Schneier was among those who talked to the committees and was 
satisfied, as I recall, that crypto had dodged a bullet. I'm not 
saying that Bruce liked the bill, just that this particular fear was 
lessened greatly, if not eliminated, by the language that finally 
emerged.

I've heard that story as well. I don't know if he saw the final 
language, how long he had to study it or what he based that opinion 
on.  Maybe there is some statement in the legislative history, which 
is only what the legislators said about the bill, that might be 
helpful in court. Absent that, we have to rely on what the law 
actually says. Bruce's opinion of what the law means would carry no 
weight in court.


Now a prosecutor probably wouldn't pursue the case of a 
cryptographer who decoded messages on behalf of parents of some kid 
involved in drugs or sex abuse. But what if the cryptographer was 
told that and the data turned out to be someone else's? Or if the 
kid was e-mailing a counselor about abuse by his parents? Or the 
government really didn't like the cryptographer because of his 
political views?

It all gets down to knowingly doing something, right? If our 
cryptographer acted in good faith, he wouldn't be prosecuted -- the 
person who set him up would be.

I see nothing in the law that exempts you from liability if you 
didn't know you acted without authorization of the copyright holder. 
There is a provision, 1203(c)(5), that lets a court reduce 
civil damages if you didn't know.  That presumably does not apply to 
the criminal provisions and prosecutors are notorious for doing 
whatever it takes if they want to get someone.  See, for example 
http://www.nytimes.com/2002/01/21/nyregion/21CLEA.html



There is also the argument that Congress only intended to cover 
tools for breaking content protections schemes like CSS and never 
intended to cover general cryptanalysis.   You might win with that 
argument in court (I think you should), but expect a 7 digit legal 
bill.  And if you lose, we'll put up a Free Will web site.

No argument there!

As for the legal situation before the DMCA,  the Supreme Court 
issued a ruling last year in a case, Bartnicki v. Vopper, of a 
journalist who broadcast a tape he received of an illegally 
intercepted cell phone conversation between two labor organizers. 
The court ruled that the broadcast was permissible.

The journalist received the information from a source gratis. 
That's different from paying for stolen goods, hiring someone to 
eavesdrop, or breaking the law yourself. The First Amendment 
covers a lot, in this case.

Correct. The Bartnicki opinion pointed out that the journalists were 
not responsible for the interception.  But journalists receive 
purloined data from whistle-blowers all the time. Suppose in the 
future it was one of those e-mail messages with a cryptographically 
enforced expiration date? A journalist who broke that system might 
be sued under DMCA.  That possibility might not frighten the WSJ, 
but what about smaller news organizations?


Fair enough. But what would the damages under copyright law be? They 
generally correspond to a harm in the market for a certain kind of 

Re: password-cracking by journalists...

2002-01-20 Thread Arnold G. Reinhold

At 4:12 PM -0500 1/18/02, Will Rodger wrote:
This law has LOTS of unintended consequences.  That is why many 
people find it so disturbing.  For example, as I read it, and I am 
*not* a lawyer, someone who offered file decryption services for 
hire to people who have a right to the data, e.g. the owner lost 
the password, or a disgruntled employee left with the password, or 
a parent wants to see what was stored on their child's hard drive, 
could still be charged with committing a felony.

If it's your copyright, it's still yours. The law recognizes that.

You can presumably write your own programs to decrypt your own files. 
But if you provide that service to someone else you could run afoul 
of the law as I read it. The DMCA prohibits trafficking in technology 
that can be used to circumvent technological protection measures. 
There is no language requiring proof that anyone's copyright was 
violated.  Traffic for hire and it's a felony.

Now a prosecutor probably wouldn't pursue the case of a cryptographer 
who decoded messages on behalf of parents of some kid involved in 
drugs or sex abuse. But what if the cryptographer was told that and 
the data turned out to be someone else's? Or if the kid was e-mailing 
a counselor about abuse by his parents? Or the government really 
didn't like the cryptographer because of his political views?

There is also the argument that Congress only intended to cover tools 
for breaking content protection schemes like CSS and never intended 
to cover general cryptanalysis.   You might win with that argument in 
court (I think you should), but expect a 7-digit legal bill.  And if 
you lose, we'll put up a "Free Will" web site.


As for the legal situation before the DMCA,  the Supreme Court 
issued a ruling last year in a case, Bartnicki v. Vopper, of a 
journalist who broadcast a tape he received of an illegally 
intercepted cell phone conversation between two labor organizers. 
The court ruled that the broadcast was permissible.

The journalist received the information from a source gratis. That's 
different from paying for stolen goods, hiring someone to eavesdrop, 
or breaking the law yourself. The First Amendment covers a lot, in 
this case.

Correct. The Bartnicki opinion pointed out that the journalists were 
not responsible for the interception.  But journalists receive 
purloined data from whistle-blowers all the time. Suppose in the 
future it was one of those e-mail messages with a cryptographically 
enforced expiration date? A journalist who broke that system might be 
sued under DMCA.  That possibility might not frighten the WSJ, but 
what about smaller news organizations?


 So the stolen property argument you give might not hold. The 
change wrought by the DMCA is that it makes trafficking in the 
tools needed to get at encrypted data, regardless of whether one has 
a right to it (there is an exemption for law enforcement), unlawful.

There's language governing that in the statute. Trafficking in tools 
specifically designed to break a given form of copy protection is 
one thing. The continued availability of legal tools for 
cryptanalysis and legitimate password cracking is another. As bad as 
the DMCA is, it's not _that_ bad.

Will

I've read the statute very carefully and I never found such language. 
(You can read my analysis at 
http://world.std.com/~reinhold/DeCSSamicusbrief.html) It's certainly 
possible that I overlooked something. Perhaps you could cite the 
language you are referring to?


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: password-cracking by journalists...

2002-01-20 Thread Arnold G. Reinhold

At 7:38 PM -0500 1/19/02, Steven M. Bellovin wrote:
In message 
[EMAIL PROTECTED], Sampo
 Syreeni writes:
On Thu, 17 Jan 2002, Steven M. Bellovin wrote:

For one thing, in Hebrew (and, I think, Arabic) vowels are not normally
written.

If anything, this would lead me to believe there is less redundancy in
what *is* written, and so less possibility for a dictionary attack.

Also, there are a few Hebrew letters which have different forms when
they're the final letter in a word -- my understanding is that there are
more Arabic letters that have a different final form, and that some have
up to four forms: one initial, two middle, and one final.

At least Unicode codes these as the same codepoint, and treats the
different forms as glyph variants. Normalizing for these before the attack
 shouldn't be a big deal.

Arabic Unicode is based on ISO 8859/6 so this was presumably the case 
before Unicode as well.
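
For what it's worth, here is a minimal sketch of that normalization 
in modern Python (the particular presentation form is just an 
example):

    import unicodedata

    # U+FEF4 is ARABIC LETTER YEH MEDIAL FORM; NFKC compatibility
    # normalization folds it back to the nominal letter U+064A, so
    # all positional glyph variants collapse to one codepoint.
    medial_yeh = "\uFEF4"
    assert unicodedata.normalize("NFKC", medial_yeh) == "\u064A"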

 
Finally, Hebrew (and, as someone else mentioned, Arabic) verbs have a
three-letter root form; many nouns are derived from this root.

This would facilitate the attack, especially if the root form is all that
is written -- it would lead us expect shorter passwords and a densely
populated search space, with less possibility for easy variations like
punctuation.



I'm not sure why someone would only write the root. I don't think 
it's any more natural for speakers of those languages than writing 
Latin roots would be for English speakers.

Right -- there are factors pushing in both directions, and I don't know
how it balances.

A few more factors:

1. Neither Hebrew nor Arabic have capitalization the way Latin does. 
This reduces opportunities for variation. The Hebrew final forms make 
up for that to a small degree.  They are treated as different code 
points in all encodings*, by the way.

2. Almost all Hebrew encodings* include the Latin letters as well. 
In 7-bit ASCII Hebrew, the Hebrew alphabet replaces the lowercase 
Latin letters. In IBM-PC and ISO 8859/8  encodings, the Hebrew 
alphabet is in the upper 128 characters, with the lower 128 printable 
characters being standard ASCII. So a Hebrew user could mix Latin and 
Hebrew characters if they wished.  I suspect most Arabic computer 
users have easy access to Latin characters too.

3. Arabic and Hebrew users might be counseled to selectively use 
vowels or diacritical marks in their passwords.

4. People outside the U.S. are less likely to be mono-lingual. 
Someone from Israel for example might be expected to know several 
languages among Hebrew, Arabic, Aramaic, English, Russian, Yiddish 
and Ladino.

5. Unicode includes an extended Arabic encoding with 96 additional 
letter/diacritic forms used in non-Arabic languages that use the 
Arabic alphabet, including 9 for Pashto. I don't know if these are 
available in consumer PCs yet.

6. Finally users of these or other non-Latin alphabet languages might 
well choose to transliterate their password into Latin characters to 
make them easy to enter on any computer.


Your mention of Unicode, though, brings up another point:  the encoding
that's used can matter, too.  If UCS-2 or UCS-4 (16 and 31-bit
encodings) are used, I believe that there are many constant bits per
character.  Even UTF-8 would have that effect.


I think the analysis depends on the type of password system employed. 
In a properly designed system that places no restriction on password 
length and applies a cryptographic hash to the password input + ample 
salt, the existence of constant bits per character in some encodings 
has no effect. The entropy of the password is determined by the 
symbol space the user is employing, not the internal encoding.
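
As a concrete sketch of such a system in modern Python (PBKDF2 and 
the parameters here are my choices for illustration, not anything 
mandated):

    import hashlib, os

    def hash_password(password, salt=None):
        # No length limit: the hash compresses whatever byte string
        # the user's symbols encode to. The choice of encoding adds
        # nothing; the entropy was fixed when the symbols were chosen.
        if salt is None:
            salt = os.urandom(16)   # ample salt
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode("utf-8"), salt, 100_000)
        return salt, digest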

Systems like these are probably best attacked by trying long lists of 
likely passwords, preferably guided by whatever personal information 
is known about the password creator.

If the password bit length is limited to a low number, e.g. the Unix 
56-bit limit,  switching to 16-bit or 32-bit per character encoding 
would be disastrous. As far as I know, no one does this. I don't know 
if any implementations attempt to accept UTF-8 encoding. There are 
clearly some pitfalls there.

On the other hand, the Unix password system, particularly those where 
the hashed password can be obtained by an attacker, is so broken that 
any natural language password is going to be weak.  Random 8 
character passwords from a 26 letter alphabet will only have 38 bits 
of entropy.  A dictionary attack is quite feasible at that size. A 
random password with 6 letters, one digit and one special character 
(typical of what users are counseled to choose) has 42 bits.  A 
random password using the full 96 printable ASCII character set only 
gets you to 53 bits of entropy. Stamping out the 8 character Unix 
password limit would be a good use of Homeland Defense money.
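
The arithmetic behind those figures, as a sketch (the count of 
roughly 30 special characters is my assumption):

    from math import log2

    print(8 * log2(26))    # ~37.6 bits: 8 random letters
    print(8 * log2(96))    # ~52.7 bits: 8 random printable ASCII
    # 6 letters, one digit, one special character, with the digit and
    # special landing in any of the 8*7 ordered position pairs:
    print(log2(26**6 * 10 * 30 * 8 * 7))   # ~42.2 bits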


Arnold Reinhold


*At least all those listed in Narshon and Rosenschein, The Many 
Faces of Hebrew, Kivun Ltd. (a developer of multilingual 

Re: password-cracking by journalists...

2002-01-18 Thread Arnold G. Reinhold

At 9:41 AM -0500 1/18/02, Will Rodger wrote:
Arnold writes:

Another interesting question is whether the reporters and the Wall 
Street Journal have violated the DMCA's criminal provisions. The al 
Qaeda data was copyrighted (assuming Afghanistan signed one of the 
copyright conventions--they may not have), the encryption is 
arguably a technological protection measure and the breaking was 
done for financial gain.

That, I think, is an unintended consequence of the law, but I bet 
there's a lawyer somewhere who'd take a crack at it. More important 
is the origin of the info. itself: were it peacetime you'd have a 
pretty clear case of receiving stolen property. Add to that certain 
trade-secret laws in various of the 50 United States, and you could 
do a long time in the slammer over this...

Will Rodger

This law has LOTS of unintended consequences.  That is why many 
people find it so disturbing.  For example, as I read it, and I am 
*not* a lawyer, someone who offered file decryption services for hire 
to people who have a right to the data, e.g. the owner lost the 
password, or a disgruntled employee left with the password, or a 
parent wants to see what was stored on their child's hard drive, 
could still be charged with committing a felony.

As for the legal situation before the DMCA,  the Supreme Court issued 
a ruling last year in a case, Bartnicki v. Vopper, of a journalist who 
broadcast a tape he received of an illegally intercepted cell phone 
conversation between two labor organizers.  The court ruled that the 
broadcast was permissible.  So the stolen property argument you give 
might not hold. The change wrought by the DMCA is that it makes 
trafficking in the tools needed to get at encrypted data unlawful, 
regardless of whether one has a right to the data (there is an 
exemption for law enforcement).

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: password-cracking by journalists...

2002-01-17 Thread Arnold G. Reinhold

At 9:15 AM -0500 1/16/02, Steve Bellovin wrote:
A couple of months ago, a Wall Street Journal reporter bought two
abandoned al Qaeda computers from a looter in Kabul.  Some of the
files on those machines were encrypted.  But they're dealing with
that problem:

   The unsigned report, protected by a complex password, was
   created on Aug. 19, according to the Kabul computer's
   internal record. The Wall Street Journal commissioned an
   array of high-speed computers programmed to crack passwords.
   They took five days to access the file.

Does anyone have any technical details on this?  (I assume that it's
a standard password-guessing approach, but it it would be nice to know
for certain.  If nothing else, are Arabic passwords easier or harder
to guess than, say, English ones?)


Outside of the good possibility that they might be quotations from 
Islamic religious texts, why would you think Arabic passwords are any 
easier to guess?

Another interesting question is whether the reporters and the Wall 
Street Journal have violated the DMCA's criminal provisions. The al 
Qaeda data was copyrighted (assuming Afghanistan signed one of the 
copyright conventions--they may not have), the encryption is arguably 
a technological protection measure and the breaking was done for 
financial gain.

17 USC 1204 (a) In General. - Any person who violates section 1201 
or 1202 willfully and for purposes of commercial advantage or private 
financial gain -(1) shall be fined not more than $500,000 or 
imprisoned for not more than 5 years, or both, for the first 
offense...

BTW: The 2600 Magazine defense team has filed an appeal for en banc 
review of the 2nd Circuit's DMCA opinion:

Brief: http://www.eff.org/IP/Video/MPAA_DVD_cases/20020114_ny_2600_appeal.html

Press Release: 
http://www.eff.org/IP/Video/MPAA_DVD_cases/20020114_ny_eff_pr.html


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Linux-style kernel PRNGs and the FIPS140-2 test

2002-01-15 Thread Arnold G. Reinhold

This result would seem to raise questions about SHA1 and MD5 as much 
as about the quality of /dev/random and /dev/urandom.  Naively, it 
should be difficult to create input to these hash functions that 
cause their output to fail any statistical test.
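
For readers who want to try this without the C attachment, here is a 
sketch of the FIPS 140-2 runs test in Python; the acceptance 
intervals are the ones published in the standard:

    import sys

    # Count maximal runs of 0s and 1s in a 20,000-bit sample, pooling
    # lengths of 6 and above, and check each count against its interval.
    INTERVALS = {1: (2343, 2657), 2: (1135, 1365), 3: (542, 708),
                 4: (251, 402), 5: (111, 201), 6: (111, 201)}

    def runs_test(sample):
        bits = [(b >> i) & 1 for b in sample for i in range(8)][:20000]
        counts = {0: dict.fromkeys(INTERVALS, 0),
                  1: dict.fromkeys(INTERVALS, 0)}
        cur, length = bits[0], 1
        for bit in bits[1:]:
            if bit == cur:
                length += 1
            else:
                counts[cur][min(length, 6)] += 1
                cur, length = bit, 1
        counts[cur][min(length, 6)] += 1
        return all(lo <= counts[v][n] <= hi
                   for v in (0, 1) for n, (lo, hi) in INTERVALS.items())

    if __name__ == "__main__":
        print(runs_test(sys.stdin.buffer.read(2500)))  # e.g. < /dev/urandom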

Arnold Reinhold

At 3:23 PM -0500 1/15/02, Thor Lancelot Simon wrote:
Many operating systems use Linux-style (environmental noise
stirred with a hash function) generators to provide random
and pseudorandom data on /dev/random and /dev/urandom
respectively.  A few modify the general Linux design by adding an
output buffer which is not stirred so that bits which have already
been output are not stirred into the pool of new random data
(IMO, not doing this is insane, but that's a different subject).

The enclosed implementation of the FIPS140-1/2 statistical test
appears to show that such generators fail the runs test quite
regularly.  Interestingly, the Linux generator seems to do better
the longer you let it run (which, perhaps, suggests that quite a
bit of data should be run through it at boot time and discarded)
but other, related generators do not.

The usual failure mode is too many runs of 1 1s.  Using MD5
instead of SHA1 as the mixing function, the Linux generator
also displays too many runs of 1 0s.  I have not yet seen
other failure modes from these generators.

To reproduce my results, just compile the enclosed and do
a.out < /dev/urandom on your platform of choice.

Thor

Attachment converted: Arnold's iMac:fips140.c (TEXT/ttxt) (0011EDDD)




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



PAIIN crypto taxonomy (was Re: CFP: PKI research workshop)

2002-01-03 Thread Arnold G. Reinhold

The PAIIN model (privacy, authentication, identification, integrity, 
non-repudiation) is inadequate to represent the uses of cryptography. 
Besides the distinction between privacy and confidentiality, I'd like 
to point out some additional uses of cryptography which either don't 
fit at all or are poorly represented in this model:

Anonymity - the ability to communicate without messages being 
attributed to the sender (e.g. remailers).

Confidential verification -- the ability to verify information 
without disclosing it (e.g. zero knowledge proofs).

Fragmentation -- dividing control over information among several parties.

Invisibility -- the ability to communicate or store information 
without being detected. This includes steganography, low probability 
of observation communication techniques such as low power spread 
spectrum, and measures against traffic analysis such as link 
encryption.

Proof of trespass -- The ability to demonstrate that anyone having 
access to data knew they were doing so without authorization (e.g. 
for trade secret and criminal evidence law).

Remote randomization -- the ability for separated parties to 
create fair and trusted random quantities.

Resource taxing -- techniques to prove a minimum expenditure of 
computing resources, e.g. hash-cash (see the sketch below).

Time delay -- making information available but not immediately.

Transmission assurance -- anti-jam and anti-censorship technology.

Use control -- the whole digital rights management scene.


I'm not suggesting this is a complete list or the best breakdown, but 
I hope it shows that the cryptographic imagination goes beyond PAIIN.
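
To make the resource-taxing entry concrete, here is a minimal 
hash-cash-style sketch in Python (SHA-256 and the 20-bit difficulty 
are arbitrary choices of mine, not Back's original scheme):

    import hashlib
    from itertools import count

    def mint(resource, bits=20):
        # Find a counter whose hash has `bits` leading zero bits.
        # Expected cost is 2**bits hashes; verifying takes one hash.
        target = 1 << (256 - bits)
        for counter in count():
            stamp = "%s:%d" % (resource, counter)
            digest = hashlib.sha256(stamp.encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return stamp

    def verify(stamp, bits=20):
        digest = hashlib.sha256(stamp.encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))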

Arnold Reinhold





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Steganography covert communications - Between Silk and Cyanide

2001-12-31 Thread Arnold G. Reinhold

At 2:59 PM -0800 12/30/01, John Gilmore wrote:

Along these lines I can't help but recommend reading one of the best
crypto books of the last few years:

   Between Silk and Cyanide
   Leo Marks, 1999

This wonderful, funny, serious, and readable book was written by the
chief cryptographer for the 'nefarious organization' in England which
ran covert agents all over Europe during WW2 -- the Special 
Operations Executive.

What makes this book so excellent is that Marks was not just the 
chief cryptographer at SOE, he was the *only* cryptographer. He got 
to do everything. A young amateur crypto enthusiast, he didn't make 
the cut for Bletchley after basic cryptanalysis training due to a bad 
case of smart ass and was sent down to SOE. He almost failed to get 
that job when it took him all day to decipher a test message. He 
didn't realize he had been supplied the key.

... He taught the receiving code
clerks in England how to decode even garbled messages, rather than
asking agents to re-send them.  (Re-sends of the same text gave the
enemy even more trivial ways to crack the codes.)

More important, it sharply increased their risk of being caught by 
German radio direction finders. Agents had been captured or shot in 
the middle of re-transmissions.

At 1:21 PM + 12/31/01, Ben Laurie wrote:
David Honig wrote:
 
  Unbeknown to the latter, Marks had already cracked General de Gaulle's
 private cypher in a spare moment on the lavatory. -from the obit of Leo
 Marks, cryptographer

But this was because it was, in fact, one of his own ciphers.

That's not quite fair to Mr. Marks. General de Gaulle used a double 
transposition cipher similar to the one the SOE had been using since 
before Marks got there, though Marks had to discover this on his own. 
Marks' codemaking efforts were directed toward improving that cipher 
and replacing it with a one-time pad. One advantage of the one time 
pad was that messages could be short, reducing the DF risk. The double 
transposition cipher required a minimum length, 200 letters, lest 
messages simply be anagrammed.

I have a review of the book at 
http://world.std.com/~reinhold/silkandcyanide.html

Arnold Reinhold





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Stegdetect 0.4 released and results from USENET search available

2001-12-28 Thread Arnold G. Reinhold

At 4:33 AM -0500 12/28/01, Niels Provos wrote:
In message v04210101b84eca7963ad@[192.168.0.3], Arnold G. Reinhold writes:
I don't think you can conclude much from the failure of your
dictionary attack to decrypt any messages.
We are offering various explanations.  One of them is that there is no
significant use of steganography.  If you read the recent article in
the New York Times [1], you will find claims that about 0.6 percent
of millions of pictures on auction and pornography sites had hidden
messages.

I certainly can't imagine any group or activity that would generate 
the hundreds of thousands of stego messages a 0.6 percent rate 
implies.


2. The signature graphs you presented for several of the stego
methods seemed very strong. I wonder if there is more pattern
recognition possible to determine highly likely candidates. I would
be interested in seeing what the graphs look like for the putative
false alarms you found. It also might be interesting to run the
detection program on a corpus of JPEGs known NOT to contain stego,
such as a clip art CD.
The following slides contain examples of false-positives

  http://www.citi.umich.edu/u/provos/papers/detecting-csl/mgp00023.html
  http://www.citi.umich.edu/u/provos/papers/detecting-csl/mgp00024.html

In my experience, eliminating false-positives is not quite that easy.
Some graphs look like they should have steganographic content even
though they do not.  Any test will have a false-positive rate, the
goal is to keep it very low.

In general you are of course correct. But this particular case may be 
an exception. I am not a stego maven, and before reading your paper, 
it never occurred to me that some stego software would be designed to 
place message bits in the first n available slots. Spreading them 
pseudo-randomly seems so easy and so obvious a win.  However, since 
much software out there does use first n slot message placement, 
detection of such messages may be possible with a very high signal to 
noise ratio. The graphs in your papers, with very flat tops and 
bottoms and steep skirts suggest that to me.  They are very different 
from the false-positive graphs in the slides above. It may be possible 
to distinguish them with high enough confidence to be able to assert 
the presence of stego messages even if they cannot be decrypted.
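
A sketch of the pseudo-random placement I have in mind, in Python 
(the key-to-seed derivation is just one plausible choice):

    import hashlib, random

    def slot_order(key, n_slots):
        # Derive a deterministic permutation of the available LSB
        # "slots" from the shared key, so message bits scatter across
        # the image instead of filling the first n slots in order.
        seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
        rng = random.Random(seed)
        slots = list(range(n_slots))
        rng.shuffle(slots)
        return slots

Sender and receiver compute the same permutation from the shared key, 
so nothing extra needs to be embedded in the image.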


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: CFP: PKI research workshop

2001-12-27 Thread Arnold G. Reinhold

It seems to me that a very similar argument can be made regarding the 
need (or lack there of) for a national identity card.  Organizations 
that require biometric identity can simply record that information in 
their own databases. The business most widely cited as needing 
national ID cards, the airlines, already maintain elaborate customer 
databases for their frequent flyer programs. Adding biometrics, with 
mileage points or faster check-in as incentive, would be easy enough. 
Your frequent flyer application would authorize the airline to 
compare your  identifying data with security databases at other 
airlines, credit bureaus, the government, etc.

If photo matching software is unable to validate two photos of you 
from different databases, both would be displayed next time you check 
in. If the clerk decides they match, that would be recorded. If the 
discrepancy is major, you would be taken aside and the matter 
investigated. Over time, the confidence in any individual's record 
would grow as more use is made of it. There is still the problem of 
protecting the database from alteration, but that applies to whatever 
database would be used to issue national ID cards as well.

Even if high speed data lines are not available at all gates, 
reservations are normally made well in advance, so a passenger list 
with biometrics could be prepared overnight and delivered to each 
gate in printed form (for photos) or on a CD-RW.  Last minute 
reservations could be handled by slower data links or the maker would 
simply be subject to a higher level of scrutiny. Passport numbers 
could be requested at the time of an international reservation and 
checked with the issuing government well before flight. Government's 
that don't cooperate would have their citizens subject to additional 
scrutiny. During times of heightened alert, additional cross checking 
can be implemented.

None of this is particularly hard and all the issues of forged, 
revoked, stolen or cursorily examined ID cards go away. So do the 
issues of abuse where petty officials confiscate your ID card leaving 
you helpless.

Arnold Reinhold


At 8:22 AM -0700 12/27/01, [EMAIL PROTECTED] wrote:
it isn't that you move it to a central authority  you move it to an
authority that typically is already established in association with
authorization ... aka in normal business operation, a business relationship
is established that typically consists of creating an account record that
has various privileges associated with that account/entity. For
authentication purposes a public key can be bound to that account and/or
set of privileges. This goes on in the world today  and would continue
to regardless of whether x.509 identity certificates were issued or not.
given that businesses have to play the registration function for
authorization  privileges ... aka normal procedure doesn't allow somebody
to walk into a random bank and withdraw funds from a random account ...
regardless of who they are ... aka identity doesn't magically enable a
person to withdraw funds from an arbitrary account ... ability to withdraw
funds typically is a privilege associated with whether or not some entity
is authorized to perform that function for a specific account. As such, the
financial institution has to register lots of information for the account
... also registering a public key is consistent with the existing business
processes, liability, administrative and management infrastructure.

In effect, large numbers of business processes already exist for
registration, administration, and management of authentication information
 and having a certificate in the loop doesn't eliminate those business
processes (whether or not I had a certificate  there still would have
to be something registered that some attribute of me has authorization to
do certain things). Doing business flow and information management
optimization just demonstrates that given existing business infrastructures
for registration, administration and management which also includes
certificates it is usually trivially possible to demonstrate that the
actual certificates are redundant, superfluous and extraneous ... aka
directly registering the public key and providing direct binding between
the authentication process and the authorization process  eliminating a
possibly huge number of extraneous and unnecessary business entities and
business processes associated with certificate-based operation.

There doesn't have to be any single central authority in a certificateless
model. There can be all sorts of authorities for all sorts of infomation
 which could be also hypothesized for a certificate-based certification
and authentication model. However, the certificateless exercise typically
trivially demonstrates that any certificate-based solution duplicates
existing business processes which aren't going to be eliminated. Therefore,
it is then possible to demonstrate business optimization 

Re: Stegdetect 0.4 released and results from USENET search available

2001-12-26 Thread Arnold G. Reinhold

This is an nice piece of work, but I have a couple of comments:

1. The paper asserts Even if the majority of passwords used to hide 
content were strong, there would be a small percentage of weak 
passwords ... and we should have been able to find them.  That might 
be true if there are a large number of stego users independently 
selecting passwords, but it's not a compelling argument if stego is 
being employed by a few sophisticated terrorist  organizations, as 
suggested by the February 2001 USA Today article, 
http://www.usatoday.com/life/cyber/tech/2001-02-05-binladen.htm . It 
is quite likely that such organizations  train users to select strong 
passwords or passphrases. Indeed, since the stego systems use 
symmetric keys, field cells would have to be assigned passwords prior 
to deployment. In all likelihood this would be done by a central 
communications group, with good crypto skills.

Even if some cells did use weak passwords, they are likely to derive 
them from languages and religious quotes  that I suspect are not well 
represented in your dictionary. There is also the possibility that 
the terrorist organizations modified published stego programs or 
built their own from scratch, perhaps to incorporate public key 
methods. In that case, a dictionary attack is hopeless.

I don't think you can conclude much from the failure of your 
dictionary attack to decrypt any messages.

2. The signature graphs you presented for several of the stego 
methods seemed very strong. I wonder if there is more pattern 
recognition possible to determine highly likely candidates. I would 
be interested in seeing what the graphs look like for the putative 
false alarms you found. It also might be interesting to run the 
detection program on a corpus of JPEGs known NOT to contain stego, 
such as a clip art CD.

3. If you did succeed in decrypting one of Osama Bin Laden's 
missives, wouldn't he have a case against you under the DMCA?

Arnold Reinhold

At 12:16 PM -0500 12/21/01, Niels Provos wrote:
I just released Stegdetect 0.4.  It contains the following changes:

 - Improved detection accuracy for JSteg and JPhide.
 - JPEG Header Analysis reduces false positives.
 - JPEG Header Analysis provides rudimentary detection of F5.
 - Stegbreak uses the file magic utility to improve dictionary
   attack against OutGuess 0.13b.

You can download the UNIX source code or windows binary from

  http://www.outguess.org/download.php

-
The results from analyzing one million images from the Internet Archive's
USENET archive are available at

  http://www.citi.umich.edu/u/provos/stego/usenet.php

[...]
  After scanning two million images from eBay without finding any
  hidden messages, we extended the scope of our analysis.

  This page provides details about the analysis of one million images
  from the Internet Archive's USENET archive.

  Processing the one million images with stegdetect results in about
  20,000 suspicious images. We launched a dictionary attack on the
  JSteg and JPHide positive images.  The dictionary has a size of
  1,800,000 words and phrases.  The disconcert cluster used to
  distribute the dictionary attack has a peak performance of roughly
  87 GFLOPS.

  However, we have not found a single hidden message.
[...]

Comments and feedback are welcome.  We have an FAQ at

  http://www.citi.umich.edu/u/provos/stego/faq.html

Regards and a merry Christmas,
  Niels Provos



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to 
[EMAIL PROTECTED]




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: FreeSWAN US export controls

2001-12-11 Thread Arnold G. Reinhold

At 12:18 AM -0600 12/11/01, Jim Choate wrote:
On Mon, 10 Dec 2001, John Gilmore wrote:

 NSA's export controls.  We overturned them by a pretty thin margin.
 The government managed to maneuver such that no binding precedents
 were set: if they unilaterally change the regulations tomorrow to
 block the export of public domain crypto, they wouldn't be violating
 any court orders or any judicial decisions.  I.e. they are not BOUND
 by the policy change.

That's not accurate. There have been several court rulings finding source
code and such protected by the 1st. This would provide a lever that was
not there previously.


In the most recent ruling, Universal v. Reimerdes/Eric Corley 2600.com 
(00-9185), http://cryptome.org/mpaa-v-2600-cad.htm , the US Court of 
Appeals for the Second Circuit declined to overturn an injunction 
against the posting of DeCSS on the Internet. The Court held that 
software was speech, but did not enjoy the level of First Amendment 
protection accorded to pure speech because it is functional with 
little human intervention. This is a very disturbing precedent which 
I hope will be reversed on appeal, but given the post-9/11 mood and 
the limited technological understanding of most judges, I wouldn't 
count on it. Also I believe the U.S. Supreme Court has upheld export 
controls in the past, the First Amendment notwithstanding.

Having a body of open source crypto software that is not entangled by 
any U.S. input is not a foolish idea.  Surely there are good 
programmers outside the U.S. who understand the importance of making 
FreeSWAN work seamlessly with Linux.


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: I-P: Papers Illuminate Pearl Harbor Attack

2001-12-08 Thread Arnold G. Reinhold

This story smells of revisionism.  The events leading up to Pearl 
Harbor are thoroughly chronicled in the first chapter of David Kahn's 
classic, The Codebreakers. In particular:

o The Tojo government, regarded as militarist, came into power in 
October 1941 (Togo was Tojo's foreign minister)

o The order to attack Pearl Harbor was promulgated on November 4

o The last ship of the Pearl Harbor strike force, the aircraft 
carrier Zuikaku, reached the fleet assembly point in the Kuriles on 
November 19

o Japan presented its ultimatum to the U.S. on November 20, which 
would have required the U.S. to acquiesce to Japan's conquests in 
Asia and supply her with oil

o On November 25, the Pearl Harbor strike force was ordered to 
leave on its mission at 6 am the next day

o The reply that the U.S. delivered to Japan on November 26 came 
after a week of frantic ... consultations ... (Kahn). These may 
have led to the Chinese cables reported in the story and created a 
flicker of hope among whatever doves remained in the Japanese 
government. It was not a close situation, however. Had the U.S. been 
willing to accede to Japan's terms, the strike could have been called 
off, but the die had been cast and Japan's war preparations were well 
under way. (Remember that Pearl Harbor was but one of many places 
that Japan attacked 60 years ago today).

As for the notion that Japan may have tried to warn the United 
States about the attack, this is no doubt the famous 14-part 
telegram breaking off negotiations that was to be delivered to the 
Secretary of State Cordell Hull at 1 pm Washington time (7:30 am 
Honolulu time), 25 minutes before the first bomb fell.  The U.S. had 
intercepted and decoded the first 13 parts of that telegram early on 
December 7. The last part and the portentous instruction to deliver 
it at 1 pm were recovered later that morning.  Secretary Hull was 
concerned about looking properly upset when he was handed the 
official copy by the Japanese ambassador, lest the ambassador suspect 
that Hull had already seen it. Kahn describes how Army attempts to 
deliver a final warning to Pearl Harbor were bollixed up by 
communications problems.

Arnold Reinhold



At 11:33 AM -0500 12/6/01, R. A. Hettinga wrote:
--- begin forwarded text


Status:  U
From: Arnell [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: I-P: Papers Illuminate Pearl Harbor Attack
Date: Thu, 6 Dec 2001 09:12:58 -0500
Sender: [EMAIL PROTECTED]
Reply-To: Arnell [EMAIL PROTECTED]

http://wire.ap.org/APnews/center_story.html?FRONTID=ASIA&STORYID=APIS7G7BPGO0

DECEMBER 05, 19:33 ET
Papers Illuminate Pearl Harbor Attack

By MARI YAMAGUCHI
Associated Press Writer

KOBE, Japan (AP) - Japan may have attacked Pearl Harbor because decoded U.S.
cables did not prepare its leaders for American demands that the imperial
army withdraw from China and Southeast Asia, a Japanese scholar said
Wednesday.

Previously classified Foreign Ministry documents reveal a turning point that
may have persuaded doves in the Japanese government that war with the United
States was necessary, Kobe University law professor Toshihiro Minohara said.

``The discovery will probably help reevaluate the history of this period,''
Minohara told The Associated Press before announcing his findings.

That turning point came in November 1941, just weeks before the Dec. 7
attack that killed 2,390 and plunged America into World War II.

Japan and the United States had been at odds for years over the imperial
army's march through Asia. On Nov. 22, 1941, Tokyo intercepted a Chinese
telegram saying the United States would propose allowing Japan to keep its
colonies if it abandoned further aggression, Minohara said. The telegram was
sent from the Chinese Embassy in Washington to Chinese government officials
in the wartime capital of Chungking, now Chongqing.

The sudden possibility of a compromise strengthened the position of Foreign
Minister Shigenori Togo, who opposed war with the United States and was
trying to persuade militarists in the government to back down, Minohara
said.

But the official U.S. position sent to Japan on Nov. 26 was entirely
different: Agree to withdraw from China and Southeast Asia or say goodbye to
a diplomatic solution.

That message, sent to Japan's embassy in Washington by then-Secretary of
State Cordell Hull, was interpreted as an ultimatum and convinced pacifists
in the Japanese government that war was inevitable.

``I was so shocked I even felt dizzy,'' Togo later wrote in his memoirs.
``At this point, we had no choice but to take action.''

Researchers also said Japan broke secret codes employed by the United
States, Britain, China and Canada between May 18, 1941, and Dec. 3, 1941,
Kyodo News Agency reported.

Kobe University professor Makoto Iokibe said that ``defies the common
belief...that Japan was behind in the information war against the U.S. and
others,'' the agency reported.

But Japan's extensive spying operations misguided it about 

More on Drivers' Licenses

2001-11-09 Thread Arnold G. Reinhold

Noah Silva recently brought this interesting 1994 article on DMV data 
exchange by Simson Garfinkel to the attention of the 
[EMAIL PROTECTED] list:

http://www.wired.com/wired/archive/2.02/dmv_pr.html

The article discusses the  AAMVAnet system and the extent to which 
the threat of revocation of driver's license is already being used as 
a tool for social control.  It's also clear that the state DMVs are 
in a unique position to provide identity information for a future PKI.

I did some poking around on Google to see what has been happening in 
this  area since then. I found the American Association of Motor 
Vehicle Administrators web site which announces:

On October 24, 2001, AAMVA's Executive Committee passed a resolution 
creating a  Special Task Force on Identification Security to develop 
a strategy on enhancing the issuance of secure identification 
credentials for driver licensing and  photo ID purposes, and to 
develop short- and long-term priorities and actions.
http://www.aamva.com/drivers/drvIDSecurityindex.asp

They already have a standard for Driver IDs that is available on-line

http://www.aamva.com/standards/stdAAMVADLIdStandard2000.asp

http://www.aamva.com/Documents/stdAAMVADLIDStandrd000630.pdf (full text)

It is a very thorough and detailed document that builds on a raft of 
existing international standards (smart cards, bar codes, JPEG, etc.) 
and US DMV and LE practices (data dictionaries, encodings, 
fingerprint and signature storage, etc.).  It does not prescribe any 
card technology, but sets standards to be used if a technology is 
selected.

What is striking to me about the document is the complete lack of 
cryptographic standards. The document specifically discourages 
encryption of machine readable data unless required by law. In a very 
interesting Appendix H on physical security measures, digital 
signatures are mentioned only in passing under Machine Readable Data:

Common techniques to ensure data integrity include:

   - Check digits and data encryption (presumably with public key encryption)

- For IC cards, tamper detection and chip disabling; and digital 
signatures for all data written to the chip.

That's it! There is a set of proposed revisions to the standard, but 
they are only accessible to AAMVA  members.  I don't know if the 
revisions  address crypto issues, but from the quote above,  I 
suspect they have a long way to go.


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Scarfo keylogger, PGP

2001-10-16 Thread Arnold G. Reinhold

At 12:09 AM + 10/16/2001, David Wagner wrote:
It seems the FBI hopes the law will make a distinction between software
that talks directly to the modem and software that doesn't.  They note
that PGP falls into the latter category, and thus -- they argue -- they
should be permitted to snoop on PGP without needing a wiretap warrant.

However, if you're using PGP to encrypt email before sending, this
reasoning sounds a little hard to swallow.  It's hard to see how such a
use of PGP could be differentiated from use of a mail client; neither
of them talk directly to the modem, but both are indirectly a part of
the communications path.  Maybe there's something I'm missing.

Reading between the lines, I think the FBI is taking the position 
that e-mail stored on your computer, either before or after you send 
it, is a business record and not an electronic communication. Thus 
they would also claim the right to key-log a mail client when it was 
off line under the authority of just a search warrant, without a wire 
tap order. In effect, they seem to be claiming that only instant 
messaging is protected under anti-wiretapping laws.


If you're using PGP to encrypt stored data only, though, then I can
see how one might be able to make a case that use of PGP should be
distinguished from use of a mail client.

Does anyone know what PGP was used for in this case?  Was it used only
for encrypting stored data, or was it also used from time to time for
encrypting communications?


Press reports said PGP was used to encrypt gambling records. The 
defense challenged the keylogging on the grounds that it must have 
intercepted electronic communications as well, and therefore went 
beyond the authority of the FBI's search warrant.

It also seems that the FBI used two separate tools on Scarfo's computer:

1. an only-when-the-modem's-off key logger

2. a tool to capture the passphrase when it was entered into the PGP 
dialog box.

One way to create the latter tool is to simply use the PGP source 
code to make a doctored version of PGP that saves the passphrase in a 
hidden file or even e-mails it and the secret key to a special 
address. This possibility suggests that it is a mistake to include 
the full PGP version number in plaintext, as is done in the present 
PGP message format. Doing so allows any attacker to prepare a 
doctored program that matches the target's version in advance, 
reducing the number of surreptitious entries needed. This may not 
matter much to the FBI (which apparently made five entries in this 
case) but could be significant to an attacker with fewer resources, 
e.g. a terrorist cell.

Transmitting the software version en clair may also help in creating a 
capture tool that knows where keying information is stored in memory. 
If there is a need to alert the receiving program as to the format of 
the encrypted message, a message format code should be used, not the 
software version number.
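
To illustrate the leak: the ASCII armor of a PGP message announces 
the version in the clear, so an eavesdropper learns it with no 
cryptography at all (the armored text below is a made-up example):

    import re

    armored = ("-----BEGIN PGP MESSAGE-----\n"
               "Version: PGPfreeware 6.5.8 for non-commercial use\n"
               "\n"
               "hQEMA0x...\n"
               "-----END PGP MESSAGE-----\n")

    match = re.search(r"^Version: (.+)$", armored, re.MULTILINE)
    print(match.group(1))  # the target's exact software version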


Arnold Reinhold
(who is not a lawyer)



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



NSA upgrade plans

2001-10-05 Thread Arnold G. Reinhold

There is an interesting article in Federal Computer Week 
http://www.fcw.com/fcw/articles/2001/0910/news-nsa-09-10-01.asp that 
says NSA is planning a major effort to modernize the nation's 
cryptosystems, which are rapidly growing obsolete and vulnerable. 
They quote Michael Jacobs, head of NSA's Information Assurance 
Directorate, as saying that the underlying encryption algorithms are 
nearing the end of their life expectancy.

There were hints in the past that NSA used 90-bit keys for some 
ciphers. I wonder if that is the issue or if they see the quantum 
computing handwriting on the wall and plan to go to 256-bit (or 
larger) keys.

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Historical PKI resources

2001-10-05 Thread Arnold G. Reinhold

At 11:10 AM -0800 1/5/2001, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] said:
  I have found significant information about PKI as it exists today,
 but am looking for some background information.  I'm looking for
 information about the history of PKI, how and where it started, how it
  developed, etc.


You might also look for information on the NSA's STU-III secure 
telephone system which I believe uses a form of PKI.  There is a fair 
amount of information about it available on the web.

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: AGAINST ID CARDS

2001-10-05 Thread Arnold G. Reinhold

I too am very nervous about the prospect of national ID cards.  I 
have an idea for a possible compromise, but I have not made up my 
mind on it. I'm interested in hearing other people's opinions.

The idea is a federal standard for secure  drivers' licenses. These 
would be cards containing a chip that stores an electronically signed 
and time stamped data file consisting of the driver's name, date of 
birth, height, address, photo, and scanned signature, as well as 
endorsements such as truck, school bus, motorcycle and hazmat 
operator licenses. All this information is contained in existing 
drivers' licenses, but in a way that is too easy to forge.

The licenses would still be issued by the states so there would be no 
new bureaucracy.  People who don't drive could get proof of age 
cards using the same technology. Many states now issue such cards in 
conventional formats for liquor purchase. There would be pressure to 
expand the use of these licenses to other uses. That has already 
happened for conventional DLs with liquor purchase and airline 
boarding. Some new uses might be acceptable, e.g. using the cards to 
contain  pilot or boating licenses. Limitations on new uses could be 
included in the enabling legislation.

The security model of the card would be privacy oriented, i.e. 
limiting who could access the cards to authorized users and the 
owner. The integrity of the information would come from the 
electronic signatures.  As I understand it, much of the forgery of 
DLs that now takes place involves unauthorized use of the equipment 
that produces legitimate cards. The secure DL would cut down on this 
because the information on the card would be signed by the 
operator of the equipment, making the forgery more traceable. The 
data would also be signed using a key that is only available at a 
central location and a copy of the signed info would be retained in 
the driver database (this information is already collected anyway). 
This would make it more difficult to change just the photo on the 
license, for example.
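
As a sketch of the signing step, using a modern signature library 
(Ed25519 via the Python cryptography package is my choice here; the 
proposal itself doesn't depend on any particular algorithm):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The central issuing authority holds the private key; card
    # readers need only the public key.
    authority_key = ed25519.Ed25519PrivateKey.generate()

    record = b"name=J. Doe|dob=1970-01-01|height=170cm|endorsements=motorcycle"
    signature = authority_key.sign(record)  # stored on the card with the record

    # Any verifier can detect alteration (e.g. a swapped photo field);
    # verify() raises InvalidSignature on tampered data.
    authority_key.public_key().verify(signature, record)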

The main difference between a secure driver's license and a national 
ID is that there would be no new requirement to obtain or carry the 
card.  One can look at it as the nose in the camel's tent or as a way 
to deflect pressure for more Draconian solutions.

Thoughts?

Arnold Reinhold


At 1:47 PM -0400 10/3/2001, R. A. Hettinga wrote:
--- begin forwarded text


Status:  U
To: [EMAIL PROTECTED]
From: National Review D.C. [EMAIL PROTECTED]
Subject: AGAINST ID CARDS
Date: Wed,  3 Oct 2001 13:58:40 +
Reply-To: [EMAIL PROTECTED]
List-Help: http://topica.com/lists/WashingtonBulletin/
List-Subscribe: mailto:[EMAIL PROTECTED]
List-Archive: http://topica.com/lists/WashingtonBulletin/read

Washington Bulletin: National Review's Internet Update for
October 3, 2001
http://www.nationalreview.com

AGAINST ID CARDS
[The worse way to fight terrorism]

Only a bare majority of Americans--51 percent--support the creation of a
national identity card, according to a new poll by Fabrizio, McLaughlin
 Associates. This is a substantial loss of support since the Pew
Research Center found 70 percent endorsing the concept in a survey it
conducted immediately after the September 11 attacks.

Yet plenty of warning signs remain. Westerners are the only demographic
group with a majority opposing ID cards (53 percent) and senior citizens
are the only segment with a plurality against it (47 percent).
Republicans and men are evenly split on the issue, with Democrats and
women likely to favor it. Most troubling, however, may be that the poll
shows overall support jumping to 61 percent when the ID card is
described as "a measure to combat terrorism and make the use of false
identities more difficult."

If ever the American public was primed to accept an ID card, the time is
now. A recent Washington Post survey reports that 64 percent of
Americans say they trust the federal government to do the right thing
"nearly always" or "most of the time"--the highest level of trust
recorded since 1966 and twice the level measured just a year ago. "This
is the most collective mood we've seen in America for a long time,"
Democratic pollster Celinda Lake told the New York Times. "And it's
coming off one of the most individualistic eras in American history."

The Bush administration already has signaled through a spokesman that it
does not support the idea, though several members of Congress have
embraced it and House immigration subcommittee chairman George Gekas, a
Pennsylvania Republican, says ID cards will definitely receive
consideration. Oracle CEO Larry Ellison has said his company, a leader
in databases, would donate the software to make it happen.

Conservatives must oppose these internal passports with vigor. They may
be promoted now as tools for combating terrorism, but their potential
for abuse is enormous. How long before the federal government also
starts tracking gun sales through them? Or auditing income-tax 

Re: New encryption technology closes WLAN security loopholes

2001-09-21 Thread Arnold G. Reinhold

At 10:34 AM -0400 9/20/2001, Perry E. Metzger wrote:
R. A. Hettinga [EMAIL PROTECTED] writes:
 [1] New encryption technology closes WLAN security loopholes
 Next Comm has launched new wireless LAN security technology called
 Key Hopping. The technology aims to close security gaps in Wired
 Equivalent Privacy (WEP). It uses the MD5 (message digest, version 5)
 algorithm that allows for rapid changes in encryption keys used, some
 as often as every three seconds, denying hackers the time they need
 to piece together an encryption pattern.

We don't need a new proprietary technology. IPSec tunnels from the
wireless node to the base station work just fine, and are actually
secure on top of it!


This sounds a lot like a proposal I made to improve 802.11 WEP 
security after the first round of attacks in February. 
http://world.std.com/~reinhold/airport.html#wf1  I've been working on 
updating the proposal in light of the Shamir, et al, paper. One 
difficulty is getting a good upper bound on the number of packets 
transmitted per second. None the less, it's clear that at least with 
the 128-bit versions of 802.11b, you can get reasonable security by 
frequent key changes. With 40-bit it's hard to avoid at least one 
byte being compromised, which would reduce the problem to attacking a 
32-bit encryption every few seconds.  On the other hand, the original 
40-bit WEP encryption could be brute forced with an office full of 
desktop PCs.
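
Back-of-the-envelope arithmetic for that last claim (the per-PC trial 
rate is purely my assumption):

    # Assume each desktop tests ~100,000 40-bit RC4 keys per second.
    keys = 2**40
    rate_per_pc = 1e5
    pcs = 20
    avg_seconds = (keys / 2) / (rate_per_pc * pcs)  # search half the space
    print(avg_seconds / 86400)                      # ~3 days for the office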

As I understand things, and please correct me if I am misinformed, 
IPSec is still quite complex to install and setup. Many 802.11b users 
are individuals or small offices. Until IPSec is user friendly enough 
for them, a solution that restores WEP to a reasonable level of 
privacy is worthwhile.

While we are on the topic, it seems to me that the other implication 
of 802.11 is that the Ethernet backbone in most offices can no longer 
be considered secure. It is too easy for someone to install a 802.11 
base station without permission inside the corporate firewall. It may 
be that the only way to maintain corporate security is for every 
computer in an organization to use IPSec, with keys authorizing 
connection to the network transmitted out-of-band, (e.g. by hand).

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: The tragedy in NYC

2001-09-13 Thread Arnold G. Reinhold

At 9:20 AM +0300 9/13/2001, Amir Herzberg wrote:
...

In fact, if giving up crypto completely would help substantially to protect
against terror, I'll support it myself. But...

The real argument is simple: there is no evidence or convincing argument why
shutting down crypto will substantially help defend against terrorism. It is
a popular, easy solution, good for politicians as it is an easy `sell` to
the public, but not effective. That's why we should defend against it; the
negligible help it may provide to law-enforcement is not worth its cost in
loss of privacy and commerce, in the loss of freedom, and in the dangers of
abuse by government.

Best, Amir Herzberg


I would go one step further: the U.S. Government's misguided effort 
to suppress crypto is a root cause of the massive vulnerability of 
the United States information infrastructure.  Manufacturers of 
commercial operating systems and application software have sharply 
limited the security features they include out of fear that their 
products will be subject to export controls.  If security isn't built 
into foundation products, it can't be bolted on later.

Some say the reason security is lacking is that no one wants to pay 
for it, but the software we use is bloated with features most people 
don't need or want.  Absent export controls I believe free markets 
would have produced good security solutions because companies need 
any competitive edge they can find.

In addition, many of the anti-crypto measures the government has 
suggested in the past, such as key escrow, only create new 
vulnerabilities. In time the security at escrow storage sites would 
have degenerated to the joke level we saw at our airports.

The Pandora's box of strong crypto was opened long ago. The bad guys 
already have it. The question is when will the good guys start using 
it for real?

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: moving Crypto?

2001-08-03 Thread Arnold G. Reinhold

At 9:25 AM -0400 8/1/2001, Derek Atkins wrote:
There are many alternative conferences to Crypto, and many of them
are already outside the US.  Indeed, the IACR already runs EuroCrypt
and AsiaCrypt.

Personally, I think that trying to move Crypto is just an
over-reaction to the current situation.  If the situation really does
get worse (and the egg-in-face arrest wasn't an aberration) then
perhaps the conference should get moved to a more friendly North
American country.


I don't think it is an overreaction. Would you hold a political 
science conference in China right now?  Moving to Canada would be an 
important statement.  I'd prefer Toronto because it's a fairly 
central location, but Vancouver is cool too.

Arnold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Effective and ineffective technological measures

2001-07-29 Thread Arnold G. Reinhold

At 11:20 AM +0200 7/29/2001, Alan Barrett wrote:
The DMCA said:
  1201(a)(1)(A):
No person shall circumvent a technological measure that effectively
controls access to a work protected under this title.

What does effectively mean here?

The law attempts to define it:

'1201(a)(3)(B) a technological measure ''effectively controls access to a
   work'' if the measure, in the ordinary course of its operation,
   requires the application of information, or a process or a
   treatment, with the authority of the copyright owner, to gain
   access to the work.'

If it has its plain english meaning, then one could argue that ROT13,
CSS (and anything else that can easily be broken) are *ineffective*
technological measures, so circumventing them is not prohibited by this
clause.  Distinguishing effective measures from ineffective measures
might reduce to measuring the resources required to break them.

Or does the clause really mean No person shall circumvent a
technological measure that *purports to control* access to a work
protected under this title?

I suspect most judges would interpret "the ordinary course of its 
operation" the latter way.  Clearly Judge Kaplan was not impressed by 
the fact that CSS was broken by a high school kid.  There is also the 
argument that if a measure is really effective in plain English 
meaning, you don't *need* an anti-circumvention law.

Whether the anti-circumvention provision is constitutional, since it 
eliminates fair use, is another question. There is an excellent 
Twiki site at Harvard Law School that has many of these arguments 
and also allows others to contribute: 
http://eon.law.harvard.edu/twiki/bin/view/Openlaw/OpenlawDVD


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Criminalizing crypto criticism

2001-07-27 Thread Arnold G. Reinhold

At 1:56 AM -0400 7/27/2001, Declan McCullagh wrote:
On Thu, Jul 26, 2001 at 10:53:02PM -0400, David Jablon wrote:
 With these great new laws, there is no longer any risk of being legally
 criticised for using even the most glaringly flawed cryptography 
-- just use it
 for Copy Protection, and TADA!  Negative criticism magically disappears.
 Almost by definition.

 Flaws can only be exposed by those who won't show their work,
 or from anonymous sources, who nobody will trust without confirmation [...]
[...]
 We seem to be entering the twilight zone -- the end of an exciting,
 but brief era -- of public cryptography.

The DMCA may be bad, but it's not *that* bad. It contains a broad
prohibition against circumvention (No person shall circumvent a
technological measure that effectively controls access) and then has
a bunch of exceptions.

One of those -- and you can thank groups like ACM for this, if my
legislative memory is correct -- explicitly permits encryption
research. You can argue fairly persuasively that it's not broad
enough, and certainly 2600 found in the DeCSS case that the judge
wasn't convinced by their arguments, but at least it's a shield of
sorts. See below.

If you read the language carefully, you will see that 1201g only 
permits *circumvention* as part of cryptographic research (and then 
only under limited circumstances). There is nothing in the law that 
allows publication of results.

Even the recent Shamir et al. paper on RC4 and WEP could arguably 
violate the DMCA. WEP could be considered a TPM since it protects 
copyrighted works (e.g. e-mail). More importantly, RC4 could be used 
in some other copy protection system that we don't know about -- its 
use might even be a trade secret.  There is simply no way to 
guarantee that a given cryptanalytic result doesn't compromise some 
TPM. Even software that breaks Caesar ciphers could be actionable. 
The DMCA is *that* bad.

Arnold Reinhold



-Declan

PS: Some background on Sklyarov case:
http://www.politechbot.com/cgi-bin/politech.cgi?name=sklyarov

PPS: Note you only get the exemption if you make a good faith effort
to obtain authorization before the circumvention. Gotta love
Congress, eh?



http://thomas.loc.gov/cgi-bin/query/z?c105:H.R.2281.ENR:

`(g) ENCRYPTION RESEARCH-

`(1) DEFINITIONS- For purposes of this subsection--

`(A) the term `encryption research' means activities necessary to
identify and analyze flaws and vulnerabilities of encryption
technologies applied to copyrighted works, if these activities are
conducted to advance the state of knowledge in the field of encryption
technology or to assist in the development of encryption products; and

`(B) the term `encryption technology' means the scrambling and
descrambling of information using mathematical formulas or algorithms.

`(2) PERMISSIBLE ACTS OF ENCRYPTION RESEARCH- Notwithstanding the
provisions of subsection (a)(1)(A), it is not a violation of that
subsection for a person to circumvent a technological measure as
applied to a copy, phonorecord, performance, or display of a published
work in the course of an act of good faith encryption research if--

`(A) the person lawfully obtained the encrypted copy, phonorecord,
performance, or display of the published work;

`(B) such act is necessary to conduct such encryption research;

`(C) the person made a good faith effort to obtain authorization
before the circumvention; and

`(D) such act does not constitute infringement under this title or a
violation of applicable law other than this section, including section
1030 of title 18 and those provisions of title 18 amended by the
Computer Fraud and Abuse Act of 1986.

`(3) FACTORS IN DETERMINING EXEMPTION- In determining whether a person
qualifies for the exemption under paragraph (2), the factors to be
considered shall include--

`(A) whether the information derived from the encryption research was
disseminated, and if so, whether it was disseminated in a manner
reasonably calculated to advance the state of knowledge or development
of encryption technology, versus whether it was disseminated in a
manner that facilitates infringement under this title or a violation
of applicable law other than this section, including a violation of
privacy or breach of security;

`(B) whether the person is engaged in a legitimate course of study, is
employed, or is appropriately trained or experienced, in the field of
encryption technology; and

`(C) whether the person provides the copyright owner of the work to
which the technological measure is applied with notice of the findings
and documentation of the research, and the time when such notice is
provided.

`(4) USE OF TECHNOLOGICAL MEANS FOR RESEARCH ACTIVITIES-
Notwithstanding the provisions of subsection (a)(2), it is not a
violation of that subsection for a person to--

`(A) develop and employ technological means to circumvent a
technological measure for the sole purpose of that person performing
the acts of good faith encryption research described in paragraph 
(2); and

`(B) provide the technological means to another person with whom he 
or she is working collaboratively for the purpose of conducting the 
acts of good faith encryption research described in paragraph (2) or 
for the purpose of having that other person verify his or her acts of 
good faith encryption research described in paragraph (2).

Re: Crypto hardware

2001-07-16 Thread Arnold G. Reinhold

At 11:09 AM -0700 7/12/2001, Jurgen Botz wrote:
...

Set up a PC with CA software and a smart card reader and put
your CA cert/key on a smart card and you have your tamperproof
CA master... the only weak link in the certificate generation
process is the CA's secret key, so that's really the only thing
you need to protect.  From a security standpoint everything
else should be as transparent as possible, so ideally you want
a box running open source software rather than a proprietary
appliance and isolate the critical part of the process to
something that can be made very tamperproof and has well known
specs/interfaces... i.e. a smart card.

The CA's secret key is not the only weak link. There is also the 
software that submits certs to be signed to the tamper-proof smart 
card. If I can gain control of that software, it is a simple matter 
to have your smart card sign any cert I want. And if I get root on 
your off-the-shelf PC, such an attack would not be hard to mount.

At the very least, one needs some audit trail maintained inside the 
tamper-proof module and a tamper-proof means to display that audit 
trail.
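
A minimal sketch of what I mean, in Python. The names and structure 
are illustrative only; in a real design the key and the log would 
live inside the smart card or HSM boundary, and the signature would 
be a real public-key operation rather than the HMAC stand-in here:

import hashlib
import hmac

class AuditedSigner:
    """Signing module that hash-chains a record of everything it signs."""

    def __init__(self, signing_key):
        self._key = signing_key        # would live inside the card
        self._chain = b"\x00" * 32     # running hash over all log entries
        self.log = []                  # append-only; read via trusted display

    def sign(self, cert_request):
        # Extend the chain *before* signing, so every signature leaves
        # a permanent, tamper-evident record.
        self._chain = hashlib.sha256(self._chain + cert_request).digest()
        self.log.append((cert_request, self._chain))
        # Stand-in for the real signature operation (e.g. RSA in the card).
        return hmac.new(self._key, cert_request, hashlib.sha256).digest()

An auditor who trusts the module's display can recompute the chain 
and spot any rogue cert the attacker slipped in. The attack still 
succeeds, but it no longer goes unnoticed.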

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: septillion operations per second

2001-06-21 Thread Arnold G. Reinhold

At 12:16 PM +0200 6/20/2001, Barry Wels wrote:
Hi,

In James Bamford's new book 'Body of Secrets' he claims the NSA is 
working on some FAST computers.
http://www.randomhouse.com/features/bamford/book.html
---
The secret community is also home to the largest collection of 
hyper-powerful computers, advanced mathematicians and skilled 
language experts on the planet.
Within the city, time is measured in femtoseconds -- one million 
billionth of a second -- and scientists work in secret to develop 
computers capable of performing more than one septillion 
(1,000,000,000,000,000,000,000,000) operations every second.
---

If they ever build such a computer (or 1,000,000 of them) what would 
that mean for today's key lengths?
I am curious how long a computer capable of a septillion operations 
per second would take to crack one 128 bit or 256 bit key.
Or a RSA 1024 or 2048 bit key for that matter ...


One septillion =  10**24 or about 2**80. If you assume 1000 
operations to test a key, a septillion ops per second machine tests 
about 2**70 keys per second. For a 128 bit key, that means you need 
about 2**57 seconds on average to find a key, or about 4.6 billion 
years, the age of the Earth.  A million of them (not likely) would do 
the job in only 4600 years.
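
The arithmetic, spelled out as a quick Python check using the same 
roundings as above:

keys_per_second = 2**70    # 10**24 =~ 2**80 ops/s, 1000 =~ 2**10 ops per test
avg_tests = 2**127         # half the 128-bit keyspace, on average
seconds = avg_tests // keys_per_second        # = 2**57
years = seconds / (365.25 * 24 * 3600)
print(seconds, years)      # ~1.4e17 s, ~4.6e9 years
print(years / 1e6)         # a million such machines: ~4600 years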

Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Thermal Imaging Decision Applicable to TEMPEST?

2001-06-13 Thread Arnold G. Reinhold

At 8:57 AM -0700 6/12/2001, John Young wrote:
The Supreme Court's decision against thermal imaging appears
to be applicable to TEMPEST emissions from electronic devices.
And is it not a first against this most threatening vulnerability
in the digital age? And long overdue.

Remote acquisition of electronic emissions, say from outside a
home, are not currently prohibited by law as far as I know. And
the language of the thermal imaging decision makes it applicable
to any technology not commonly in use.

...


This decision (Kyllo v. US) is important and very welcome, but I am 
not sure you are right about the prior status of TEMPEST. There was 
an earlier decision (Katz v. US, 1967), cited in the Kyllo decision, 
that involved eavesdropping by means of an electronic listening 
device placed on the outside of a phone booth. The court held back 
then that doing this without a warrant violated the Fourth Amendment. 
I can't see how this would fail to apply to TEMPEST.

TEMPEST is not shut down by any means. This decision applies to homes 
and places where there is a reasonable expectation of privacy (like 
a phone booth). The status of computers in offices, cars, and public 
places is less clear. Your data stored on someone else's computer 
outside your home is apparently not protected (they got Kyllo's 
electric bills legally without a warrant). In any event, the NSA can 
still use TEMPEST against foreign nationals and overseas, the FBI can 
use it against US nationals with a warrant, and the government can, 
de facto, use it secretly, as many people believe they now use 
wiretapping, to develop information that leads to other evidence that 
is admissible.

The other interesting thing about Kyllo is that the Court clearly 
needed the help of a good physicist.  If you read the oral arguments, 
http://www.supremecourtus.gov/oral_arguments/argument_transcripts/99-8508.pdf 
you'll see that no one in the court had a basic understanding 
of the science. The case involved a bust for growing marijuana. The 
police had obtained Kyllo's electric bills (no warrant required) and 
found he used a lot of power.  Since power usage varies a lot among 
houses, this was not considered sufficient to get a search warrant. 
They then used the thermal imager. The government claimed they only 
used the imager to verify that a lot of heat was being produced in 
the house. No one pointed out that, except for highly unlikely 
circumstances (e.g. someone running a lighthouse or charging a LOT of 
batteries in the basement), essentially all the electricity consumed 
by a house is converted to heat.  Discovering that the house radiated 
a lot of heat added no new information to what the utility bills 
said. The defense claimed it was the presence of specific hot spots 
in the image that made the warrant issuable and that these revealed 
what was happening inside the house.
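
A back-of-the-envelope check of the physics point, with made-up 
numbers (none of these figures are from the case):

# Conservation of energy: essentially every watt a house draws from
# the grid ends up as heat, so the electric bill already fixes the
# total heat output the imager can confirm.
monthly_kwh = 1500                      # hypothetical heavy usage
avg_power_kw = monthly_kwh / (30 * 24)  # kWh per month -> average kW
print("average dissipation: %.1f kW" % avg_power_kw)   # ~2.1 kW

All the imager adds is the *location* of the hot spots -- which was 
precisely the defense's point.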

There is also some physically unrealistic stuff in the dissenting 
opinion. Justice Stevens suggests that "the rare homeowner who wishes 
to engage in uncommon activities that produce a large amount of heat 
[can] make sure that the surrounding area is well insulated." Unless 
the homeowner is planning to set her house on fire, that won't work. 
The heat has to escape somewhere. A system that spread the heat so 
evenly that a thermal imager couldn't detect the source is far beyond 
the abilities of a homeowner to construct.

This is a great science and law case.


Arnold Reinhold



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]