Re: Logging of Web Usage

2003-04-05 Thread Bill Stewart
At 11:32 AM 04/03/2003 -0800, Bill Frantz wrote:
Ah yes, I haven't updated my timings for the new machines that are faster
than my 550MHz.  :-)
The only other point of importance is that the exhaustive search time isn't
the time to reverse one IP, but the time to reverse all the IPs that have
been recorded.
Also, until recently, there was the problem that storing a hash value
for every IP address took 8-10 bytes * 2**32, and the resulting 32-40GB
was an annoyingly large storage quantity, requiring a deck of Exabyte tapes
or corporate-budget quantities of disk drive, which also meant that
sorting the results was awkward.  These days, disk drive prices
are $1/GB at Fry's for 3.5" IDE drives, so there's no reason not to have
120GB on your desktop.
This does mean that if you're keeping hashed logs you should probably
use some sort of keyed hash - even if you don't change the keys often,
you've at least prevented pre-computed dictionary attacks over the
entire IPv4 address space, and the key should be long enough (e.g. 128 bit)
so that dictionary attacks on the IP addresses of Usual Suspects
also can't be precomputed.
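A minimal sketch of the keyed-hash approach in Python's standard library
(the key size and token truncation here are illustrative choices, not from
the thread):

```python
import hashlib
import hmac
import secrets

# A secret key of 128+ bits means an attacker can't precompute
# hash(ip) for all 2**32 IPv4 addresses in advance (the ~32-40GB
# dictionary discussed above) -- they'd need the key first.
key = secrets.token_bytes(16)

def log_token(ip: str) -> str:
    """Keyed hash of an IP address for privacy-preserving log entries."""
    return hmac.new(key, ip.encode(), hashlib.sha256).hexdigest()[:16]

# The same IP always yields the same token, so entries can still be
# correlated within one key's lifetime; reversing a token requires
# the key plus a fresh exhaustive search over the address space.
assert log_token("192.0.2.1") == log_token("192.0.2.1")
assert log_token("192.0.2.1") != log_token("192.0.2.2")
```

Rotating the key periodically limits how long any one correlation window
lasts, at the cost of breaking cross-window correlation.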
A related question is keeping lists of public information,
e.g. don't-spam lists, in some form that isn't readily abusable,
such as hashed addresses.  The possible namespace there is much larger,
but the actual namespace isn't likely to be more than a couple of billion,
in spite of the number of spammers selling their lists of 9 billion names.
There's the question of how exact a match do you need -
if mail is for [EMAIL PROTECTED], you'd ideally like to be able to check
[EMAIL PROTECTED], [EMAIL PROTECTED], and @example.com,
which makes the lookup process more complex.
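One way to sketch that hierarchical lookup: hash each progressively less
specific form of the address and check each against the hashed list. The
normalization rules (case-folding, the subdomain-stripping and @domain
fallbacks) are illustrative assumptions:

```python
import hashlib

def h(s: str) -> str:
    """Hash one normalized address form."""
    return hashlib.sha256(s.lower().encode()).hexdigest()

# A don't-spam list published as hashes, so raw addresses
# aren't directly harvestable from the list itself.
dont_spam = {h("user@example.com"), h("@spammy.example.net")}

def variants(addr: str):
    """Yield progressively broader match keys for an address."""
    local, _, domain = addr.partition("@")
    yield addr                      # exact: user@host.example.com
    parts = domain.split(".")
    for i in range(1, len(parts) - 1):
        yield local + "@" + ".".join(parts[i:])   # strip subdomains
    yield "@" + domain              # whole domain
    for i in range(1, len(parts) - 1):
        yield "@" + ".".join(parts[i:])

def is_listed(addr: str) -> bool:
    return any(h(v) in dont_spam for v in variants(addr))

assert is_listed("user@example.com")
assert is_listed("user@mail.example.com")   # matched via subdomain stripping
assert not is_listed("someone@example.org")
```

The cost is one hash lookup per variant, so the list stays unreadable but
the matcher does a handful of probes instead of one.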
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Run a remailer, go to jail?

2003-03-31 Thread Bill Stewart
At 06:06 PM 03/28/2003 -0500, Steven M. Bellovin wrote:
What's unclear to me is who is behind this.  Felten thinks it's content
providers trying for state-level DMCA; I think it's broadband ISPs who
are afraid of 802.11 hotspots.
It looked to me like it was the cable TV industry trying to ban
possession or sale of illegal cable descramblers as well as
connection-sharing things like NAT, but it was a bit hard to tell
how much of the language was new as opposed to older,
so this may have been extending existing cable descrambler laws
to also cover 802.11 or Napsterizing your Tivo.
I don't think that banning remailers or crypto was the intent,
but the cable industry has never been above using nuclear weaponry
to discourage cable service theft, regardless of collateral damage.


Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Bill Stewart
At 11:10 PM 03/23/2003 -0500, Ian Grigg wrote:
Consider this simple fact:  There has been no
MITM attack, in the lifetime of the Internet,
that has recorded or documented the acquisition
and fraudulent use of a credit card (CC).
(Over any Internet medium.)
One of the major reasons for this, of course,
is the requirement for certificates,
which give at least some vague level of authentication
that you're talking to the site you wanted,
as well as some much vaguer level of authentication
that the web site might correspond to some actual business
that at least had enough capital to buy a cert.
Sure, there are a variety of subtle and entertaining ways
to pull off MITM attacks, but one crude and obvious one
is to forge either an entire site or at least the parts of it
that ask for your credit card number,
and use something like DNS hacking or minor name misspellings
to get people to visit your site instead of the real one.
If you need to forward some of the requests on to the real site,
that's a bit more work, and makes you easier to trace,
so if you can be a MITM without bothering with the back half, great.
And of course the cruder and more obvious attack was to
create a site for a company that wasn't actually on the web yet,
so nobody's watching the site, and then fly-by-night out of there.
Is it perfect?  No, but it does tend to raise the bar on attacks
to the point that keeps out lots of the anklebiters
and makes it more effective to attack a badly-administered server
instead of forging a better-administered server.
Oh, and it also let merchants who desperately wanted the public
to trust them enough to give them credit card numbers
tell their potential customers "See, we've got *cryptography*!"
instead of "See, we've got servers sitting exposed to the net",
which is a social engineering problem, and it also let them say
"See, the certificates let you know you're talking to the
REAL Example Inc. instead of some faker putting up example.com."
Because the real economics is whether you can get customers to show up.
(Well, ok, and whether you can make money if they do show up :-)


Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Bill Stewart
I get the impression that we're talking at cross-purposes here,
with at least two different discussions.  Let's look at several cases:
1 - Sites that have SSL and Expensive Certs that need them and need MITM 
protection
1a - 	These sites, but with other security holes making it easy to break in.
1b - 	These sites, broken by SSL bugs or browser bugs
2 - Sites that have SSL and Expensive Certs that don't need them,
	as long as they've got some crypto like self-signed certs,
	which don't give MITM protection
3 - Sites that don't have SSL today because it's too annoying,
	for which crypto would be useful,
	and ADH or self-signed certs would be good enough,
	because MITM isn't a big threat for them.
4 - Sites that don't need crypto.

Some people are arguing Many Sites with SSL Certs are Type 2, Not Type 1
(No they're not!  Yes, they are!)
Some people are arguing There are lots of Type 3, so we should support them
better than we do today instead of requiring them to do Type 1
(I suspect that's what Ian was really trying to say,
but most of the replies have been to the other question, e.g.
There are lots of Type 3!  No, there aren't many Type 2!
Yes there *are* lots of Type 3!  No there AREN'T many Type 2!
Yes, there are lots of 1a, but that doesn't imply 2!
Type 1+2 is 1% and 3+4 is 99%!  No, 1b was fixed!)
One of the big reasons for DNSSEC was MITM protection,
at least before virtual hosting took over,
because it gave you a way to trust that the IP address you used
was the correct IP address for the domain name you wanted,
so you were probably talking to the right machine.
Of course that doesn't get you ARP-spoofing protection,
or eavesdropping protection unless you also use it as a crypto key
or at least a signature key for DH parts,
and doesn't protect you against other users on your machine
(but a shared machine doesn't have much protection anyway,
at least from root, so that was already part of your threat model,
and that's another 1-vs-1a variant, like the heavy-duty lock on your
apartment building front door when your own apartment door has a wimpy lock.)


Re: Face-Recognition Technology Improves

2003-03-24 Thread Bill Stewart
At 12:39 PM 03/16/2003 +0100, Eugen Leitl wrote:
On Sat, 15 Mar 2003, Bill Stewart wrote:

 They're probably not independent, but they'll be influenced by lighting,
 precise viewing angles, etc., so they're probably nowhere near 100%
 correlated either.
I notice the systems mentioned in the study rely on biometrics extracted
from flat images.  The recent crop of systems actually scan the face geometry
using patterned light (apparently cheaper than using a laser scanner),
resulting in a much richer and standardized (lighting and facial
orientation are irrelevant) biometric fingerprint.
But there are two sides to the problem -
recording the images of the people you're looking for,
and viewing the crowd to try to find matches.
You're right that airport security gates are probably a pretty good
consistent place to view the crowd, but getting the target images
is a different problem - some of the Usual Suspects may have police mugshots,
but for most of them it's unlikely that you've gotten them to sit down
while you take a whole-face geometry scan to get the fingerprint.




Re: Brumley Boneh timing attack on OpenSSL (fwd)

2003-03-24 Thread Bill Stewart
At 09:51 AM 03/22/2003 +0100, Eugen Leitl wrote:
Some clarification by Peter Gutmann [EMAIL PROTECTED] on why
cryptlib doesn't do timing attack resistance by default:
Peter Gutmann [EMAIL PROTECTED]:
cryptlib was never intended to be a high-performance SSL server (the docs are
fairly clear on this), and I don't think anyone is using it to replace Apache
or IIS.  OTOH it is used in a number of specialised environments such as 
closed
...
For this reason, cryptlib makes the use of sidechannel-
attack-protection an optional item, which must be selected by the user
(via use of the blinding code; admittedly I should probably make this a bit
easier to do in future releases than having to hack the source :-).  This is
not to downplay the seriousness of the attack, merely to say that in some
cases the slowdown/CPU consumption vs. attack risk doesn't make it
worthwhile to defend against.
If it's not meant to be a high-performance server, then slowing it down
another 20% by doing RSA timing things is probably fine for most uses,
and either using compiler flags or (better) friendlier options of some sort
to turn off the timing resistance is probably the better choice.
I'm not sure how flexible things need to be - real applications of the
openssl code include non-server things like certificate generation,
and probably some reasonable fraction of the RSA or DH calculations
don't need to be timing-protected, but many of them are also things
that aren't CPU-consumption-critical either.
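The blinding countermeasure Gutmann mentions can be sketched in a few lines:
multiply the ciphertext by r^e for a fresh random r before the private-key
exponentiation, then strip the factor off afterwards, so the timing of the
modular exponentiation is decorrelated from the attacker-chosen input. The
toy key below is purely illustrative (real keys are 1024+ bits):

```python
import secrets
from math import gcd

# Toy RSA key with tiny primes -- illustration only, never for real use.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def decrypt_blinded(c: int) -> int:
    """RSA private-key operation with blinding: the timing of the
    pow(c_blind, d, n) step no longer depends directly on c."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    c_blind = (c * pow(r, e, n)) % n      # blind: c * r^e = (m*r)^e
    m_blind = pow(c_blind, d, n)          # the timing-sensitive step
    return (m_blind * pow(r, -1, n)) % n  # unblind with r^-1

m = 42
c = pow(m, e, n)
assert decrypt_blinded(c) == m
```

The cost is one extra modular exponentiation per operation, which is roughly
the slowdown the thread is weighing against the attack risk.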




Re: Face-Recognition Technology Improves

2003-03-16 Thread Bill Stewart
At 09:01 AM 03/15/2003 -0500, Derek Atkins wrote:
Sidney Markowitz [EMAIL PROTECTED] writes:

  In addition, only one subject in 100 is falsely linked
  to an image in the data base in the top systems.

 Wow, 99% accuracy for false positives! That means only a little more than
 7,500,000 people a year mistakenly detained for questioning at Atlanta's
 Hartsfield Airport (ATL), and even fewer at the less busy airports (source:
 Airports Council International, 10 Busiest Airports in US by Number of
 Passengers, 2001).
Were there really 750 Million Passengers flying through ATL???  That
number seems a bit high...
750,000 * 100 = 75,000,000 usually (:-), which sounds more credible.
No idea how many of those are unique passengers, but there are probably
a lot of frequent business travellers going through there many times.
Also, I'm not convinced that multiple trials for a single individual
are independent.  Indeed, one could easily assume that multiple trials
for a single individual are highly correlated -- if the machine isn't
going to recognize the person on the first try it's highly unlikely
it will recognize the person on subsequent tries.  It's not like there
is a positive feedback mechanism.
They're probably not independent, but they'll be influenced by lighting,
precise viewing angles, etc., so they're probably nowhere near 100% 
correlated either.
There could be some positive feedback, if they keep photographs
of near matches.
Another mechanism they could use is the set of names of people expected
to fly in and out of the airport, but of course that only works for people
who use their real names on airline tickets - it's better for tracking
Green Party members than for tracking Carlos the Jackal.
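The thread's arithmetic, checked under its own assumptions (75M annual ATL
passengers, a one-in-100 false-link rate):

```python
passengers = 75_000_000        # ~ATL annual passengers (2001 figure in thread)
false_link_rate = 1 / 100      # "one subject in 100 is falsely linked"

false_alarms = int(passengers * false_link_rate)
assert false_alarms == 750_000             # the credible figure
assert false_alarms * 100 == passengers    # Bill's "750,000 * 100 = 75,000,000"

# Even at 99% "accuracy", that's over 2,000 mistaken flags per day,
# before accounting for any correlation between a traveller's trials.
assert false_alarms // 365 > 2_000
```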



Re: Microsoft: Palladium will not limit what you can run

2003-03-16 Thread Bill Stewart
Anish asked for references to Palladium.
Using a search engine to search for "palladium cryptography wasabisystems"
or "palladium cypherpunks" will find a bunch of pointers to articles,
some of them organized usefully.


On Thursday, Mar 13, 2003, at 21:45 US/Eastern, Jay Sulzberger wrote:
The Xbox will not boot any free kernel without hardware modification.
The Xbox is an IBM style peecee with some feeble hardware and software DRM.
But is the Xbox running Nag-Scab or whatever Palladium was renamed?
Or is it running something of its own, perhaps using some similar components?
At 12:38 AM 03/14/2003 -0500, Jeroen C. van Gelderen wrote:
and sold by Microsoft below cost (aka subsidized).
With the expectation that you will be buying Microsoft games
to offset the initial loss. (You don't have a right to this subsidy,
it is up to Microsoft to set the terms here.)
It doesn't need to be below cost; Walmart was selling machines
with capabilities fairly similar to the Xbox for less,
and they certainly don't do anything below cost.
(These were the ~$200 Linux PCs.)  Now, the amortized development cost
of those PCs is probably less than that of X-box,
and their hardware was a bit less compact (though the Xbox is pretty
much of a porker compared to most of the other gamer boxes),
and of course the cost of the Xbox might include some amortized
cost of developing whichever Windows variation it uses,
while Walmart didn't have that cost.






Re: Diffie-Hellman 128 bit

2003-03-14 Thread Bill Stewart
At 01:48 PM 03/13/2003 -0800, NOP wrote:
I am looking at attacks on Diffie-Hellman.

The protocol implementation I'm looking at designed their Diffie-Hellman
using 128-bit primes (generated each time, with (p-1)/2 also prime, so no
go on a Pohlig-Hellman attack), so what attacks are there that I can look
at to come up with either the logarithm x from (a = g^x mod p) or the
session key that is calculated?  A brute force wouldn't work, unless I
know the starting range.  Are there any realistic attacks on DH parameters
of this size, or are such attacks only theoretical, based on the financial
cost of the computation?
Google for "Odlyzko Diffie-Hellman" and look at the various papers.
Unless you're talking about elliptic curve versions of Diffie-Hellman
(and even then 128 bits probably isn't enough), 128 bits is way too weak.
DH is similar in strength to RSA, so don't think about using less than 1024,
and realistically go for 2048 or more.
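To see why 128-bit primes are hopeless: even generic discrete-log algorithms
like baby-step/giant-step or Pollard rho need only about sqrt(p) work, i.e.
~2^64 operations for a 128-bit p, and index-calculus methods do far better
at that size. A toy baby-step/giant-step sketch (the small prime and
generator are illustrative):

```python
from math import isqrt

def bsgs(g: int, a: int, p: int) -> int:
    """Solve g^x = a (mod p) in O(sqrt(p)) time and space.
    For a 128-bit p that's ~2^64 steps; index calculus at that
    size is easier still for a serious attacker."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p
    gamma = a
    for i in range(m):                           # giant steps a*g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * giant) % p
    raise ValueError("no discrete log found")

p, g = 1000003, 2        # toy group; a real 128-bit p falls the same way
x = 123457
a = pow(g, x, p)
assert pow(g, bsgs(g, a, p), p) == a
```

The sqrt(p) scaling is the whole point: doubling the bit length of p squares
the generic attack cost, which is why 1024+ bits is the sane floor.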




Brumley Boneh timing attack on OpenSSL

2003-03-14 Thread Bill Stewart
From Slashdot:
http://slashdot.org/article.pl?sid=03/03/14/0012214&mode=thread&tid=172
David Brumley and Dan Boneh write:
Timing attacks are usually used to attack weak computing devices such as 
smartcards.
We show that timing attacks apply to general software systems.
Specifically, we devise a timing attack against OpenSSL.
Our experiments show that we can extract private keys from an
OpenSSL-based server such as Apache with mod_SSL and stunnel
running on a machine in the local network. Our results demonstrate that
timing attacks against widely deployed network servers are practical.
Subsequently, software should implement defenses against timing attacks.
Our paper can be found at Stanford's Applied Crypto Group.
http://crypto.stanford.edu/~dabo/abstracts/ssl-timing.html  

Shmoo Group response on cryptonomicon.net:
http://www.cryptonomicon.net/modules.php?name=News&file=article&sid=263&mode=&order=0&thold=0
Apparently OpenSSL has code to prevent the timing attack,
but it's often not compiled in (I'm not sure how much of that is for
performance reasons as opposed to general ignorance).
They also comment (as did somebody on Slashdot) that
this is distinct from the timing attack described in the paper
by Canvel, Hiltgen, Vaudenay, and Vuagnoux last month.
That one's an implementation problem and hard to exploit.
http://lasecwww.epfl.ch/memo_ssl.shtml
http://slashdot.org/article.pl?sid=03/02/20/1956229


Re: Active Countermeasures Against Tempest Attacks

2003-03-11 Thread Bill Stewart
At 09:14 AM 03/10/2003 -0500, Arnold G. Reinhold wrote:
On the other hand, remember that the earliest Tempest systems
were built using vacuum tubes. An attacker today can carry vast amounts
of signal processing power in a briefcase.
And while some of the signal processing jobs need to scale with the
target systems, as computer clock speeds get faster the leakage gets
higher, and therefore shielding becomes harder.
Most of the older shielding systems can do fine with 70 MHz monitor
speeds, but a 3 GHz CPU clock is leakier, and millimeter wavelengths
are _much_ more annoying.

All in all I would not put much faith in ad hoc Tempest protection. 
Without access to the secret specifications and test procedures, I would 
prefer to see highly critical operations done using battery powered 
laptops operating in a Faraday cage, with no wires crossing the boundary 
(no power, no phone, no Ethernet, nada).  In that situation, one can 
calculate shielding effectiveness from first principles. 
http://www.cs.nps.navy.mil/curricula/tracks/security/AISGuide/navch16.txt 
suggests US government requirements for a shielded enclosure are 60 dB minimum.
Back when most of the energy lived at a few MHz, it was easy to make
enclosures with air vents that didn't leak useful amounts of signal.
It's harder today.
So take your scuba gear into your Faraday cage with you :-)

Basically, if you've got a serious threat of TEMPEST attacks,
you've got serious problems anyway...


Re: Scientists question electronic voting

2003-03-08 Thread Bill Stewart
At 01:33 PM 03/07/2003 -0800, Ed Gerck wrote:
David Howe wrote:
 This may be the case in france - but in england, every vote slip has a
 unique number which is recorded against the voter id number on the
 original voter card. any given vote *can* be traced back to the voter
 that used it.
This is true in the UK, but legal authorization is required to do so.
No, legal authorization is only required to do so _legally_.
We're talking about different threat models here,
since we're talking about stuffing ballot-boxes and bribing people -
what does it take to get the information without getting caught?
Can it be traced in real time, or after the fact, or both,
and how much is the voter's cooperation required?
How long is the data stored after the election?
(For instance, if the election isn't close enough to be contested
within N days, do they burn all the ballots?)
The two usual scenarios are
- Real-time: "Thank you for your receipt, here's your bottle of whiskey,
and the Democratic Party invites you to vote again this afternoon!"
- Later: "Mr. Smith, we've been auditing the ballots and we see that
you voted for Emmanuel Goldstein.  We're taking you in for therapy."








Re: Scientists question electronic voting

2003-03-08 Thread Bill Stewart
Barney Wolff wrote:
 This is a perfect example of what I'm complaining about:  You're holding
 electronic voting to a much higher standard than you are paper ballots.
If it's going to replace paper ballots, it needs to offer advantages
that make up for its disadvantages, and if it gives us the opportunity to
make a significantly better system, might as well try to do that too.
The two main disadvantages of paper systems are slow speed and the
cost of counting.
Problems with speed are really problems with lack of patience :-)

But electronic systems have the major disadvantage that unless you have
some kind of independently auditable record created at the time of voting,
there's no way to tell that the system hasn't been set to cheat,
whereas most of the easy ways to cheat paper and lever-machine systems
are obvious, and can either be prevented by watching the materials
at the right times, or audited by counting the holes and hanging chads
and unused supplies afterwards.
The primary complaint everybody had with Florida's paper ballot system
was that the layout was confusing,
making it hard to tell if you were voting for Gore or Buchanan,
and any of you who've never seen a confusing layout on a computer
interface can let me know...
At 12:39 PM 03/08/2003 -0800, Ed Gerck wrote:
Bill Stewart wrote:
 No, legal authorization is only required to do so _legally_.
 We're talking about different threat models here,
 since we're talking about stuffing ballot-boxes and bribing people -
 what does it take to get the information without getting caught?
 Can it be traced in real time, or after the fact, or both,
 and how much is the voter's cooperation required?
 How long is the data stored after the election?
 (For instance, if the election isn't close enough to be contested
 within N days, do they burn all the ballots?)
The UK is still a sovereign nation and, thus, they can choose to have
an election system where the ability to verify eligibility to vote
after the election trumps the voter's right to privacy, fraud
possibilities notwithstanding. The US and other countries have
a different model for public elections, where voter privacy is absolute.
Well, of course they can, if they want; they can also go back to
strange women lying in ponds distributing swords for all I care...
But the context of the discussion isn't whether the system will do
the things it's supposed to when nobody's trying to cheat,
and if they've got different rules, they've got different ways to cheat.
 The two usual scenarios are
 - Real-time: "Thank you for your receipt, here's your bottle of whiskey,
   and the Democratic Party invites you to vote again this afternoon!"

Not in the UK -- there is no Democratic party there ;-)
What's the traditional bribe for a vote in the UK?





RE: Columbia crypto box

2003-02-15 Thread Bill Stewart
At 11:08 AM 02/13/2003 -0500, Trei, Peter wrote:

 Pete Chown[SMTP:[EMAIL PROTECTED]]
 As a footnote to those times, 2 ** 40 is 1,099,511,627,776.  My PC can
 do 3,400,000 DES encryptions per second (according to openssl).  I
 believe DES key setup is around the same cost as one encryption, so we
 should halve this if a different key is being used each time.  Brute
 force of a 40-bit DES key will therefore take about a week.  In other
 words 40-bit DES encryption is virtually useless, as brute force would
 be available to anyone with a modern PC.

You can actually do much better than that for key setup. To toot my own
horn, one of the critical events in getting software DES crackers running
at high speed was my realization that single-bit-set key schedules can
be OR'd together to produce any key's schedule. Combining this with
the use of Gray codes to choose the order in which keys were tested
(Perry's idea) led to key scheduling taking about 5% of the time budget.


But to further toot Peter's horn here (:-), before Peter's discovery,
or maybe some work by Biham (?) around that time,
at least as far as the public literature knew,
DES key scheduling was substantially slower than the S-box phases of DES,
so not only were general-purpose-computer attacks Moore's-Lawfully slower,
but add another factor of 10 or so, and custom hardware crackers
would also need to burn resources on both parts of the algorithm
and therefore take at least twice as much ASIC space unless
extremely carefully managed.  So while modern technology has
made it severely useless, and while it was crippled back then,
it was at least not _as_ crippled as it looks from today's viewpoint.
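The quoted brute-force estimate can be sanity-checked numerically, using
Pete Chown's own rate figure and Peter Trei's ~5% key-setup budget for the
improved schedule (the exact percentages are from the thread, not measured):

```python
# Pete Chown's estimate: 2**40 keys at 3.4M DES encryptions/sec,
# with key setup costing about one extra encryption per key.
keyspace = 2 ** 40                 # 1,099,511,627,776 keys
rate = 3_400_000                   # encryptions/sec on a 2003 PC (openssl)

naive = keyspace / (rate / 2)      # naive setup halves the effective rate
assert 7 < naive / 86_400 < 8      # "about a week" for the whole keyspace

# With the single-bit-schedule OR trick plus Gray-code key ordering
# (successive trial keys differ in one bit), setup drops to ~5% of
# the time budget instead of ~50%:
fast = keyspace / (rate * 0.95)
assert fast < naive / 1.8          # nearly twice as fast overall
```

(The average find is half the full-keyspace time, so in practice "a week"
is the worst case under these assumptions.)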





Re: Columbia crypto box

2003-02-08 Thread Bill Stewart


On Sat, Feb 08, 2003 at 01:36:46PM -0500, Adam Fields wrote:
 On Sat, Feb 08, 2003 at 01:24:14PM -0500, Tim Dierks wrote:
  There may be more valid reasons for treating the device as secret; some
  categories that come to mind include protecting non-cryptographic
  information, such as the capabilities of the communication channel. 
Also,
  many systems on the shuttle are obsolete by modern standards, and it's
  possible that the communications security is similarly aged.

 Isn't it also possible that the device contains a physical key of some 
kind?

Mom, can I borrow the keys to the Space Shuttle?

From a cryptographic perspective,
a physical key is just a ROM containing some bits,
or else a smart-card containing some bits it doesn't tell you directly,
but either way the only thing magic about the physical container
is whether the operator needs to know the bits or not.

These days nobody *has* a better cryptosystem than you do.
They might have a cheaper one or a faster one,
but for ten years the public's been able to get free 
planet-sized-computer-proof crypto,
and if you don't like it, you can switch from 3DES and 1024-bit RSA to
5DES and/or 4096-bit RSA.

That doesn't mean that the space shuttle has that quality crypto
for its critical operational communications - its computers were antique
compared to commercial-off-the-shelf, non-radiation-hardened,
non-shock-proofed PCs, so it could be running on really lame 60s NSA
hardware crypto.
The tradeoff with that kind of equipment was using good key hygiene
(it doesn't matter too much if the key gets stolen, as long as you know,
and as long as you can wait for the guy with the briefcase handcuffed
to his wrist), but also using Obscurity to make cryptanalysis difficult.

So it's possible that they're running crypto that's lame enough that
if somebody recovers it, they'll be able to crack the algorithms,
which might let them crack the keys for some other shuttle;
or it's possible that it will let them learn enough about old NSA crypto
that maybe the KGB can decode some old messages from somebody,
which might still have some value to somebody (learning 60s/70s
military tactics?).  It'd be lame, but it's possible.






Re: [IP] Master Key Copying Revealed (Matt Blaze of ATT Labs)

2003-01-28 Thread Bill Stewart
At 09:12 PM 01/26/2003 -0500, Donald Eastlake 3rd wrote:

It's just silly to spend, say, $50 more, on a more secure lock unless
you are really willing, in the forseeable future, to spend hundreds or
thousands of dollars or even more on other weaknesses to make most of
them approximately as strong.


Defense in depth is certainly important for physical security,
for serial attacks as well as parallel attacks.
A long long time ago, in a phone company far far away,
about two floors down from where Matt Blaze was working,
I ran the computers and some other operations
for a workroom that did classified government processing.
The higher-security data lived in safes when we weren't actively using it,
as did any classified backup magtapes.  (Computers were still big then,
and the removable disk packs were roughly 14" in diameter, 8" high, 250MB.)
The TEMPEST room they lived in didn't have locks on it,
just annoyingly unreliable electrical airlock doors.
It lived inside a room that had several inches of sheetrock and wiremesh walls,
and a door that had two locks - a classified-rated Sargent & Greenleaf
mechanical combination lock, which we used when the room was unattended,
and an electronic-pushbutton combination lock which was enough when
the room wasn't attended by a guard at the front desk,
plus there were motion-detector alarms set when it wasn't attended.
Army Reg 380-380 didn't require that the room be impregnable to
people with sawzalls and dynamite - just that it be hard to break into,
and extremely hard to break into without leaving an obvious mess,
and a guard schedule appropriate for the level of difficulty breaking in.


There are also other factors in planning physical security. I've had to
actually break through a wall because an electronic lock's battery back
up power died because the transformer for a building was being replaced
and it had absolutely no power feed for a few days. The repair of such
wall damage is an expense. Mechanical devices do not have the problem of
requiring power (PS: Brass is self lubricating).


One of the screws holding the SG lock to the doorframe came loose
and jammed the lock.  We had to call a locksmith to drill it out,
and it took him about the required two hours to do it.
(If there'd been an emergency, we'd have sawzalled the door.)
The electronic lock jammed a couple of times, and it wasn't hard to
jimmy the door enough with a fireman's prybar to use a screwdriver to
open the latch, but we let the guards know before we started.

The real security problem was when somebody built another secure lab
next door, with what was supposed to be a high-spookiness-quality alarm system;
it took a long time to figure out that most of the false alarms were from
the guards' walkie-talkies causing electrical interference,
and got them instructed not to press talk in that hallway unless
there was something seriously suspicious going on...
and got them instructed to call the other guy, not me, if there was an 
alarm :-)





Re: JILT: New Rules for Anonymous Electronic Transactions? An Exploration of the Private Law Implications of Digital Anonymity

2003-01-27 Thread Bill Stewart
At 07:56 AM 01/24/2003 -0500, Bob Hettinga wrote:

http://elj.warwick.ac.uk/jilt/01-2/grijpink.html


There's some interesting discussion about the ability of the
Dutch legal culture to provide useful tools for regulating transactions
in anonymous or semi-anonymous environments - if you can't find somebody,
can you speak of enforcing contracts, etc.  Not surprisingly,
this has been discussed extensively by the Cypherpunks and other people
exploring applications for cryptographically-protected communications.
Some of the standard references are Tim May's Cyphernomicon paper
(on the web), Orson Scott Card's novel Ender's Game, and Vernor Vinge's
story True Names.
(As the JILT paper says, systems like this may be quite complex to actually
implement in practice, and fiction provides a good tool for exploring the
social implications without doing the difficult detail work.)

I do want to comment on the concept of pseudonymity and semi-anonymity.
The paper appears to be using a definition in which a Trusted Third Party
provides a pseudonym service, which knows the True Name behind each pseudonym
and can provide it when required for a limited number situations,
such as collecting unpaid debts or prosecuting ThoughtCrime,
but otherwise the pseudonym is adequate for many activities,
and the user can protect his privacy and conduct various activities
under different pseudonyms without them being linked to each other
or to his True Name.  Unfortunately, the definitions of ThoughtCrime
have been radically expanded in recent years, primarily due to
intellectual property concerns from the music and movie publishers and
the Church of Scientology, so the usefulness of these pseudonyms has
decreased, even for pure communications applications without the
anonymous digital payment systems that can enable anonymous business.

An alternative definition of pseudonymity, which is more common in the
Cypherpunks discussions, is the use of a persistent identity,
verified by digital signatures, which permits the development of
reputations without the need for True Names.  The types of businesses
that can be supported in this environment are more limited,
because there's no way to throw somebody in jail if they default,
but much of European merchant law evolved without this ability.
For some applications, Reputation Capital provides enough protection -
a name that's used for months or years of good transactions
or writing good essays or making good investment recommendations
has a value that will be lost if it's abused,
but for other applications, escrow services substantially increase
the types and values of transactions that are possible.
Escrow can be used on a per-transaction basis, or the escrow service
may be part of establishing a pseudonym, providing an amount of money
that can be seized in a dispute resolution process
without needing the True Name of the pseudonym-holder.

Pseudonymity is becoming increasingly common in practice.
AOL screen names were primarily intended to
allow multiple family members to share an account, but are also
useful for protecting privacy, especially of children in chat rooms.
There's no explicit requirement for a True Name, though most accounts
use credit cards which do provide some tracing ability,
but the depth of credit checking performed by AOL is
"did their credit card company approve paying for their service this month",
rather than "how big a transaction can their assets cover" or
"where do they sleep, in case the police want to arrest them".
Yahoo Mail and Hotmail systems are relatively untraceable, however.
EBay accounts have an organized reputation capital system,
allowing buyers and sellers to rate whether the other party has
met their obligations, and to allow prospective buyers and sellers
to see the ratings and estimate whether they'll be defrauded or not.
Unfortunately, EBay recently bought Paypal, so the privacy of
Paypal users is no longer protected by the separation between
the auction system and the payment system, since Paypal uses
credit cards and therefore semi-traceable identities to pay people.

Julf Helsingius's original Anonymous Remailer was originally intended
to provide the stronger form of pseudonymity, but unfortunately
he was forced to reveal the information he had about a user
(because of the intellectual property ThoughtCrime problem),
though in fact that identity was another disposable email address.

In order to respond to a growing need for anonymity in legal transactions, 
the regulations for organised semi-anonymity could also be extended (e.g. 
under property law), so that it will be possible to break through a 
person's anonymity retrospectively if necessitated by court order or by 
the law. Organised semi-anonymity (or pseudonymity) in legal transactions 
is therefore a useful weapon against a number of disadvantages of acting 
absolutely anonymously or spontaneously semi-anonymously, while retaining 
the envisaged protection of privacy. It is only with the 

Re: DeCSS, crypto, law, and economics

2003-01-10 Thread Bill Stewart
At 08:45 AM 01/08/2003 -0800, Eric Rescorla wrote:

Maybe. Not necessarily if that meant that no new movies ever got
made. Now, the UK isn't a big enough market for this, but consider
what would happen if the US said listen, free drugs would be great
for consumers so let's get rid of all drug patents. This would
probably dramatically increase social welfare at the moment, since
there are quite a few people who would buy drugs if they were
cheaper. (It's of course not Pareto dominant). However, it seems
likely that this would have such a negative effect on future
production that it would lower social welfare in the future.


In the case of medicinal drugs (as opposed to recreational),
the legal barriers to development and sale of new drugs
have raised the cost to about $500-800 million,
as well as adding a significant delay to availability dates,
and there are fairly convincing arguments that those have
at least as large a negative effect.  It certainly focuses
drug development in directions that can sustain big-hit
marketing campaigns, plus a small amount that's covered by
orphan-disease-drug loopholes.

It's fairly well-known that far more people died from
regulation-caused delays in deployment of several heart-attack drugs
than from active damage by failures such as misuse of thalidomide,
though some people still believe that we're better off because
the regulators also prevented wide deployment of
SideEffectOMycin and DidntWorkATol.

But back to the DVD issue - it's not an issue of public safety;
this stuff is just television.  While I'm not particularly convinced that
copyright and patent legislation actually accomplishes the goals of
advancing science and the arts, or that the time periods that those
protections give exactly count as limited, it's certainly important
to have Fair Use protections for what the public can do with the
information.  On the other hand, legislating against DRM because
it prevents the public from exercising fair use seems wrong
(especially because I prefer technical means of protection for
that kind of material than legal protections),
though legislation that bans public attempts to work around DRM
also seems wrong as well, and failing to ban it just means that
people who want to build DRM systems will just have to do a better job of it.





Re: DBCs now issued by DMT

2002-12-09 Thread Bill Stewart
At 02:17 AM 12/05/2002 +, Peter Fairbrother wrote:

OK, suppose we've got a bank that issues bearer money.
Who owns the bank? It should be owned by bearer shares, of course.


Why?


Or the propounders wanting to: make a profit/control the bank?


There are two main reasons honest people start banks -
- either they want to make a profit / gain control / etc.
- or else they want to get banking services with some
predictability they're not finding in the commercial market,
e.g. in the US, this is a Credit Union,
or in many cultures, this is some family or private
group that lends money to each other.










Re: Public Key Addressing?

2002-11-19 Thread Bill Stewart
Abstract: Maybe he's saying that phone calls could be implemented
like remailers or onion routers, or at least like ipsec tunnels,
where the contents of the call are kept separate from the
signalling information, so the ISPs only see what they need to.

At 01:05 PM 11/13/2002 +0100, Hadmut Danisch wrote:

  When doing a phone call, phone numbers must be
  transmitted, and signals about the state of the
  connection as well.
Now a German professor of computer science, who
claims to be a cryptographer, denied this in
a way which I translate into English like this:
  This is a wrong statement about the technical details.
  It is wrong to claim, that, when doing phone
  calls, phone numbers must be transmitted. The author
  seems to take only the currently practiced ISDN protocols
  into consideration and ignored that, e.g. in particular
  for Packet Switched Networking with Public Key Addressing,
  as researched by Donald Davies as the original fundament
  for the introduction of Packet Switched Networks, especially
  this problem was to be bypassed/avoided.

...

Does anybody have any idea, even an absurd one, what could
have driven the professor to this conclusion, and what he
could have meant by Public Key Addressing?


I can think of a couple of things, some of which I even understand :-)

Please excuse the brief explanation of telephony terms first:
There have been several popular approaches to telephone signalling
over the years, which have different security levels against
eavesdropping and manipulation by different users
- Step-by-step transmits the signalling along with the call,
and each piece of equipment uses a digit to route the
audio channel for the call to the next piece of equipment,
but ignores everything else except call tear-down signals.
(Nobody does this any more)  Phone Phreaks liked this.
Eavesdroppers can listen to future signalling and audio.
- Stored-program-control in-band signalling sends the call setup
information in the same channel as the call (either as
audio tones or electrical dial pulses, or robbed bits in the US),
but the first switch receives all the digits,
makes some decision about where to send the call,
and if the next step is a stored-program switch,
sends the (possibly translated) signalling information
to the next switch, followed by the audio call.
(If the next step is a phone, it sends ring tones,
and if the next step is step-by-step, it sends
individual step signals at a standard speed.)
Phone Phreaks liked this also!
- Common-channel signalling sends call-setup instructions along
a data network, which tells the control interfaces of
voice switches to connect an audio channel.
This obviously requires stored-program-control switches.
Phone phreaks didn't like this unless they were really expert.
Signalling System 7 (SS7), CCIS, and CCS were versions of this.
Most modern telephone company switches work this way.
- ISDN has signalling protocols that use data carried along with a
group of audio-or-user-data channels.  (1 or 2 data + 2, 23 or 30 
voice.)
In telephone company networks, ISDN is commonly used as an
interface from the user to the telephone company,
which uses common-channel signalling to complete the call
to its destination (or at least to the last intelligent
common-channel signalling switch in the path,
then either ISDN or in-band audio or step-by-step,
depending on how obsolete the phone switches at the destination are.)
In customer-owned networks, such as business PBXs,
the trunks between switches might also be ISDN,
which would carry the signalling in the data channels
in the same group as the voice channels.

Another digression - in US wiretapping law, a pen register is
a device that detects the signalling information on a customer's
telephone line, and records the signalling but not the audio.
(Originally, this used moving pens and paper to show the electrical
impulses from pulse dials.)  Unfortunately, US courts decided that
pen registers don't record private information, because the user
is telling the telephone company who they want to talk to,
which is therefore public information, so it should not receive
the same legal protection as wiretaps that actually listen to the
speech part of a telephone call.  Another unfortunate consequence
is that every time somebody develops a new technology for
eavesdropping or wiretapping, the police try to claim that it is
like a pen register, not a real wiretap, and every time somebody
develops a new communication medium, the police try to claim that
it's not like a private telephone call that has some legal protection,
or a person-to-person conversation or personal papers that have
more legal protection, but instead is only like radio 

Re: What email encryption is actually in use?

2002-10-02 Thread Bill Stewart

At 09:05 AM 10/01/2002 -0700, Major Variola (ret) wrote:
So yes Alice at ABC.COM sends mail to Bob at XYZ.COM and
the SMTP link is encrypted, so the bored upstream-ISP netops
can't learn anything besides traffic analysis.
But once inside XYZ.COM, many unauthorized folks could
intercept Bob's email.  Access Control is sorely lacking folks.

I'm running Win2000 in "You're Not The Administrator" mode.
Since somebody else is root and I'm not, the fact that
my network admins could eavesdrop on my link traffic
isn't a big deal, especially when they set up my PC's software.
And if I do pretend to trust my machine against some insiders,
I can use SSH, SSL, and PGP to reduce risks from others...
Also, STARTTLS can reduce eavesdropping at Alice's ABC.COM.

If your organization is an ISP, the risks are letting them
handle your email at all (especially with currently proposed
mandatory eavesdropping laws), and STARTTLS provides a
mechanism for direct delivery that isn't as likely to be blocked
by anti-spamming restrictions on port 25.
Now to get some email *clients* using it.

On the other hand, if your recipient is at a big corporation,
they're highly likely to be using a big shared MS Exchange server,
or some standards-based equivalent, so the game's over on that end
before you even start.  Take the STARTTLS and run with it...

Link encryption is a good idea, but rarely sufficient.

Defense in depth is important for real security.
STARTTLS can be a link-encryption solution,
but it can also be part of a layered solution,
and if you don't bother with end-to-end,
it's a really good start, and isolates your risks.
It also offers you some possibility of doing certificate management
to reduce the risk of man-in-the-middle attacks from
outside your organization, and does reduce some traffic analysis.

 at Tuesday, October 01, 2002 3:08 AM, Peter Gutmann
 [EMAIL PROTECTED] was seen to say:
  For encryption, STARTTLS, which protects more mail than all other
  email encryption technology combined.

If your goal is to encrypt 20% of the net by Christmas,
STARTTLS will get a lot closer to that than a perfect system.
Similarly, IPSEC using the shared key "open secret"
would have been a much-faster-deployed form of opportunistic
encryption than the FreeSWAN project's more complex form
that wants some control over DNS that most users don't have.

In the absence of a real Public Key Infrastructure,
neither is totally man-in-the-middle-proof,
so if the Feds are targeting *you* it's clearly not enough,
but reducing mass-quantity fishing expeditions increases
our security and reduces the Echelon potential -
especially if 90% of the encrypted material is
routine corporate email, mailing lists, Usenet drivel, etc.

At 01:20 PM 10/1/02 +0100, David Howe wrote:
 I would dispute that - not that it isn't used and useful, but unless you
 are handing off directly to the home machine of the end user (or his
 direct spool) odds are good that the packet will be sent unencrypted
 somewhere along its journey. with TLS you are basically protecting a
 single link of a transmission chain, with no control over the rest of
 the chain.

You can protect most of the path if your firewalls don't interfere,
and more if your recipients' don't.







Re: Real-world steganography

2002-10-01 Thread Bill Stewart
At 09:38 PM 09/30/2002 -0700, Bram Cohen wrote:

Peter Gutmann wrote:
 I recently came across a real-world use of steganography which hides extra
 data in the LSB of CD audio tracks to allow (according to the vendor) the
 equivalent of 20-bit samples instead of 16-bit and assorted other features.

I don't think that's really 'steganography' per se, since no attempt is
made to hide the fact that the information is in there. The quasi-stego
used is just to prevent bad audio artifacts from happening.


Traditional digital telephone signalling uses a robbed-bit method that
steals the low-order bit from every sixth voice sample to carry information
like whether the line is busy or idle or wants to set up a connection.
(That's why you only get 56kbps and not 64kbps in some US formats,
since it doesn't want to keep track of which low bits got robbed.)

In a sense both of these are steganography, because they're trying to
hide the data channel from the audio listener by being low level noise
in ways that equipment that isn't looking for it won't notice.

That's not really much different from encoding Secret Data in the LSB
of uncompressed graphics or audio - it's about the second-crudest
form of the stuff, and if you think there are Attackers trying to
decide if you're using stego, you need more sophisticated stego -
at minimum, encoding the stegotext so it looks like random noise,
or encoding the stegotext with statistics resembling the
real noise patterns, or whatever.  The definition of hidden writing
doesn't specify how hard you tried to hide it or how hard the
Attacker is looking - you need to Bring Your Own Threat Model.
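The crude LSB scheme described above is easy to sketch. Assuming audio samples represented as Python ints (the function names and sample data are mine, and a real system would first encrypt or whiten the payload so the LSBs keep noise-like statistics, as the paragraph above suggests):

```python
# Sketch of the second-crudest stego described above: hide message bits
# in the low-order bit of each audio sample.

def embed_lsb(samples, payload):
    """Return a copy of `samples` with `payload` bits in the LSBs."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("payload too large for cover signal")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return out

def extract_lsb(samples, n_bytes):
    """Recover n_bytes of payload from the LSBs of `samples`."""
    data = []
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Each sample changes by at most 1, which is exactly why it is inaudible to the listener and obvious to any Attacker who thinks to look at the LSB statistics.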


Since I don't speak Audiophile Engineering / Human perceptual modelspeak,
which the paper was written in, I wasn't able to figure out where the
HDCD stuff hides the extra bits.  Are they really there (in the CDROM's
error-correction bits or something)?  It sounded like they were either
saying that they make part-time use of the one LSB bit to somehow encode
the LSB and 4 more bits, which sounded really unlikely given that there
weren't any equations there about the compression models, or else that they
had some perceptual model and were using that to make a better choice of LSB
than a simple 50% cut-off of the A-to-D converter (more absolute distortion,
but better-sounding distortion.)  Or did I miss the implications of the
reference to oversampling and the real difference is that HDCD disks
really have more pixels on the disk with only the LSB different,
so a conventional reader reads it fine but needs the ECC to get the LSB?

A separate question is - so is there some internet-accessible list of
disks using HDCD, or do I just have to look at the labels for a logo?





Re: Microsoft's Palladium transforms Internet from Wild West to suburban neighborhood

2002-06-30 Thread Bill Stewart

At 03:35 PM 06/28/2002 -0400, R. A. Hettinga wrote:
http://worldtechtribune.com/worldtechtribune/asparticles/buzz/bz06282002.asp
WorldTechTribune/Buzz___

Microsoft's Palladium transforms Internet from Wild West to suburban 
neighborhood

Stepford CT?


  Special to WorldTechTribune
Scott McCollum
June 28, 2002






Re: DOJ proposes US data-rentention law.

2002-06-29 Thread Bill Stewart

At 06:38 PM 06/22/2002 -0400, Steve Fulton wrote:
At 17:37 22/06/2002 -0400, [EMAIL PROTECTED] wrote:

Not arguing, but the hardware cost curve for storage has a shorter
halving time than the cost curve for CPU (Moore's Law) and the
corresponding halving time for bandwidth is shorter still.

You've got a point.  Storage is becoming less and less expensive per 
gigabyte, especially for IDE drives.  If you're using a RAID set up, IDE 
doesn't cut it, SCSI is the way to go (for now).  SCSI is a lot cheaper 
than it used to be, but it's still over $1000 for a single 70gig drive in 
Canada.  For maximum redundancy in one rack-mount server, RAID 10 is the 
way to go.  That means for every 1 drive, there must be an exact 
duplicate.  Costs can increase exponentially.

[more examples of expensiveness deleted; fibre channel, etc.]

You're not making appropriate technology choices,
so your costs are off by a factor of 5-10.

IDE is just fine, especially in RAID configurations,
because if you're making a scalable system, you can use as many spindles
as you need, and you don't need to run fully mirrored systems - RAID5 is fine.
Almost any technology you get can run 5MB/sec, which is T3 speeds,
so that RAID5 system can keep up with an OC3 with no problem.
Disk drive prices here in the US are about $1/GB for IDE.
The problem is that's about 200 seconds of T3 time, so your 5 100GB drives
will last about a day before you take them offline for tape backup.
The real constraints become how fast you can copy to tape,
i.e. how many tape drives you need to buy, and what fraction of data you keep.
If it's 1%, you can afford it - adding $5/day = $150/month per T3 is just 
noise.
Keeping 10% of the bits - $50/day = $1500/month/T3 -
is a non-trivial fraction of your cost, so you have to go for tape.
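The arithmetic above can be checked directly, using the round numbers in the text (a T3 at roughly 5 MB/s, disk at $1/GB, decimal gigabytes):

```python
# Back-of-envelope check of the log-retention figures above.
# Assumed rates from the text: T3 ~ 5 MB/s, disk ~ $1/GB.
T3_MB_PER_SEC = 5.0
SECONDS_PER_DAY = 86400
DOLLARS_PER_GB = 1.0

gb_per_day = T3_MB_PER_SEC * SECONDS_PER_DAY / 1000.0  # ~432 GB/day of raw T3 traffic
drive_days = 500.0 / gb_per_day                        # five 100GB drives last ~1.2 days
cost_keep_1pct = 0.01 * gb_per_day * DOLLARS_PER_GB    # ~$4.3/day (text rounds to $5/day, ~$150/month)
cost_keep_10pct = 0.10 * gb_per_day * DOLLARS_PER_GB   # ~$43/day (text rounds to $50/day, ~$1500/month)
```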

Fibre channels are useful for cutting-edge databases on mainframes,
and have the entertaining property that they can go 10-20km,
so you've got more choices for offsite backup, but GigE is fine here.

Make sure you also keep a couple of legacy media devices so you can
give the government the records they want in FIPS-specified formats,
such as Hollerith cards and 9-track tape.






Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-05-12 Thread Bill Stewart

At 08:52 AM 04/24/2002 +0800, Enzo Michelangeli wrote:
In particular, none of the naysayers explained to me clearly why it should be
reasonable to use 256-bit ciphers like AES with 1024-bit PK keypairs. Even
before Bernstein's papers it was widely accepted that bruteforcing a 256-bit
cipher requires computing power equivalent to ~16Kbit RSA or DH keys (and
~~512-bit ECC keys). Given that a cipher protects only one session,

*Something* has to be the weakest link; calls for balance really come down to
"If Algorithm A is already the stronger part of the system,
why should I waste extra time/work strengthening it instead of Algorithm B?".
It doesn't hurt to make parts of the system stronger than necessary,
unless there are other costs like limiting the sizes of the other keys
that can fit in a UDP packet or whatever.   And making the AES algorithm
use longer-than-needed keys gives you some extra insurance against
mathematical breakthroughs or other sorts of discovered weaknesses.

The important issue about whether you can use X-bit block cyphers with
Y-bit public-key cyphers is whether Y bits of PK can give you X good key bits.
For Diffie-Hellman, the conventional wisdom seems to be that
Y bits of public key gives you Y/2 bits of usable keying material,
which means that 1024-bit DH is good enough for 256-bit AES.
For RSA, I think you can protect almost up to pubkeylength bits of message,
since the key generation happens separately from the public-key parts,
but you're definitely into overkill range.

So the question falls back to "Is 1024 bits long enough?".







Re: 40 teraflops (fwd)

2002-03-28 Thread Bill Stewart

Unfortunately, the article that Bob Hettinga excerpted from the
South China Morning Post is a pay-only article.

http://www.es.jamstec.go.jp/ - Japanese government site.
http://www.es.jamstec.go.jp/esc/eng/ - Good page
http://www.es.jamstec.go.jp/esrdc/eng/menu.html - The ES center
http://www.es.jamstec.go.jp/esc/gallary/index_e.html - Pictures.
(This sucker appears to be *big*.  Some pictures want Flash.)

Here are a couple of articles from 2000 about how cool the machine will be:
http://www.nec.co.jp/press/en/0005/3001.html  - NEC press release
http://www.ess.nec.de/hpc/HPCwire/17830.htm   - Some technical detail

Cool lecture by Jack Dongarra (a name you should know)
overview of high-performance computing.  Spring 2002 CS594 UTenn.
http://www.cs.utk.edu/~dongarra/WEB-PAGES/SPRING-2002/lect01.pdf
Most Important Slide is the pointer to http://www.netlib.org

The other reference site for this stuff:  http://www.top500.org

Article about Google doing work on parallel projects
http://www.cosmiverse.com/tech03250202.html







Re: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Bill Stewart

At 05:38 PM 03/23/2002 -0800, Lucky Green wrote:
While the latter doesn't warrant comment, one question to ask
spokespersons pitching the former is what key size is the majority of
your customers using with your security product? Having worked in this
industry for over a decade, I can state without qualification that
anybody other than perhaps some of the HSM vendors would be misinformed
if they claimed that the majority - or even a sizable minority - of
their customers have deployed key sizes larger than 1024-bits through
their organization. Which is not surprising, since many vendor offerings
fail to support larger keys.

While SSL implementations are mostly 1024 bits these days,
aren't PGP Diffie-Hellman keys usually 1536 bits?






Announce: San Francisco Cypherpunks, Sat 2/16/02, 6pm - 225 11th, SF

2002-02-13 Thread Bill Stewart

This announcement will be at 
http://cryptorights.org/cypherpunks/meetingpunks.html
and is being sent to several cypherpunks-related mailing lists.
===

The San Francisco Bay Area Cypherpunks Meeting will be
Saturday, February 16, 2002, at Don Ramon's Restaurant, 225 11th St, San 
Francisco.


As usual, this is an open public meeting on US soil.
Everyone's invited, including non-US-citizens, and several suspected Canadians
will be present :-)

Our agenda is a widely-held secret.
Predicted topics include Intrepid Traveller Bill Scannell's trip to Cuba,
discussion of the crypto and data-sharing presentations from Codecon,
potentially a couple of projects that were too late for Codecon,
and generally what everybody's been doing and working on.

The unusual time and place are because the Codecon conference www.codecon.org
will be a block away at the DNA Lounge www.dnalounge.com Friday-Sunday 
afternoons,
and the RSA conference will be at the San Jose Convention Center the 
following week.
A number of cypherpunks will be speaking at Codecon (See people.html and 
schedule.html.)

The RSA Conference appears to have decided to protect their agenda through 
Obscurity;
you can obtain PDFs of the agenda from their website if you have Multimedia 
Flash :-)
Codecon is a low-priced conference; RSA has high-priced talks, low-priced 
exhibits,
and the usual vendor parties.

=== DIRECTIONS AND PARKING 
Don Ramon's Restaurant - Look for Usual Suspects, probably upstairs.
The restaurant serves family-style Mexican food, as well as
caffeine and ethanol, and is said to be pretty good.

Directions: 225 11th St. is a block north of the DNA Lounge,
for which directions are available at http://www.dnalounge.com/directions/ .

From the South Bay:
Take 101 North to the 9th Street / Civic Center exit.
Take 9th Street, and turn left onto Harrison, then right onto 11th.

From the East Bay:
From the Bay Bridge, take I-80 West to the 9th Street / Civic Center exit.
Keep right at the fork in the ramp. Turn left onto Harrison, then right 
onto 11th.

By Public Transit:
The nearest BART stations are at 16th and Mission and Civic Center (Market 
and Hyde),
both about 3/4ths of a mile away.
There are bus stops at 11th and Harrison (right outside the club),
and at Market and Van Ness (four blocks away).
From Caltrain, it appears that the #47 bus is probably the best choice.

Parking is easy for once:  Ample public parking is available at the Costco 
parking lot,
at the corner of 11th and Harrison. The parking lot entrance is on 11th.
You can also find free street parking, but beware of the street cleaning 
signs.

=






Re: Welcome to the Internet, here's your private key

2002-02-09 Thread Bill Stewart

At 05:12 PM 02/08/2002 +0100, Jaap-Henk Hoepman wrote:
I think there _are_ good business reasons for them not wanting the users to
generate the keys all by themselves. Weak keys, and subsequent 
compromises, may
give the CA really bad press and resulting loss of reputation (and this
business is built on reputation anyway). So: there are good reasons not to
let the CA generate the private key, but also good reasons to not let the user
generate the keys all by himself.

So the question is: are there key generation protocols for mutually 
distrustful
parties, that would give the CA the assurance that the key is generated using
some good randomness (coming from the CA) and would give the user the 
guarantee
that his private key is truly private. Also, the CA should be able to verify
later that the random data he supplied was actually used, but this should not
give him (too much) advantage to find the private key.

There are three different cases
- Active malicious tampering by the user
- Inadequate key generation capabilities on the user's machine,
 e.g. weak randomness, too little memory, too slow CPU
- User's machine is compromised by some untrustable third party.

In the third case, you were toast anyway.
In the first case, what can malice accomplish that's different for a
 user-generated key as opposed to a CA-generated key?
 The malicious user could already give away his private key,
 and giving the CA a key generated by someone else to certify
 is pretty much equivalent to giving away the key.

The second case appears to be the interesting one.  There may be limited
 environments where the user's system doesn't have enough CPU or RAM
 (though someone else replied that even smartcards are often smart 
enough),
 but certainly for anything as smart as a Palm3,
 it's basically just a one-time startup delay.

 The way to fix inadequate randomness is not to have the CA 
generate the key,
 it's to have the CA generate a bunch of randomness and
 send it to the user's system to input it in the key generation 
program.
 If you're worried that the user might not bother to enter the
 randomness into the CA-supplied key generation program,
 have the program check that the user entered enough characters,
 or even use a checksum on the user-entered characters.

 If you're worried about Man In The Middle attacks on the
 CA-supplied randomness, you could digitally sign it, though
 that's a bit long-winded for user-typed strings
 unless you use Elliptic Curves or some similar short-signature
 method or use an HMAC or something.
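One naive way to realize the mixing idea above (my illustration, not a vetted protocol) is to hash the CA's randomness together with locally gathered entropy into the key-generation seed, and keep an HMAC over the CA's bytes as evidence they were actually used:

```python
# Hedged sketch of CA-assisted key-generation seeding. All names are
# hypothetical. The user seeds key generation from a hash of the
# CA-supplied randomness plus local entropy, so neither party alone
# controls the seed; the commitment lets the user later demonstrate
# (by revealing the seed) that the CA's bytes went in, without the
# commitment itself leaking the seed.
import hashlib
import hmac

def make_seed(ca_random: bytes, local_random: bytes) -> bytes:
    return hashlib.sha256(b"keygen-seed" + ca_random + local_random).digest()

def commitment(seed: bytes, ca_random: bytes) -> bytes:
    return hmac.new(seed, ca_random, hashlib.sha256).digest()
```

This addresses the weak-randomness case; it does nothing for a compromised machine, which, as noted above, was toast anyway.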

Are there any other cases in which the CA needs to generate the key
for legitimate reasons, as opposed to because the CA wants access to
the user's private key for later purposes?





Attacks using Pure Text (Was: Re: Results, not Resolutions)

2002-01-28 Thread Bill Stewart

At 10:17 PM 01/26/2002 -0800, Bill Frantz wrote:
At 7:42 PM -0800 1/25/02, R. A. Hettinga quoted Schneier and Shostack:
 Here's one example: Originally, e-mail was text only, and e-mail viruses
 were impossible. ...

Well, the line between code and data is fuzzier than that.  That 7 bit
ASCII email is properly thought of as a series of instructions for a text
rendering engine which is implemented in software on modern machines.  If
there is a bug in that rendering software, then it may be possible to
design a sequence of text which executes arbitrary code on the receiving 
machine.

Email viruses were not impossible with text-only email.
ASCII text is probably much safer today than it was 20 years ago,
because there are far more systems that try to render it,
and almost all of them do less interpretation than they did back then,
when we usually read email on dumb terminals, preferably the
smartest dumb terminals we could find.  Before everything standardized on ANSI,
lots of terminals, particularly the HP 262x series and the DEC VT100s,
had features that would let you hand escape sequences to the terminal
that would not only change fonts, move the cursor, clear the screen, etc.,
but could also program function keys and execute function keys.
So it was possible to send somebody email that would cause their terminal
to transmit a limited number of characters to the computer as if the
user had typed them, which opened a variety of security holes.
Also, one of the talk programs would write directly to another user's
terminal if the permissions were set to the default world-writable.

There was an article in the SF Chron or Oakland Trib in spring 1979
saying that hackers at Berkeley had broken the security of
"the Unix, a computer made by DEC", which was really about one of these
escape-sequence exploits, probably for VT100s.  It was tough to do much,
but if you guessed what mail reader the person was using,
you could fake keystrokes to exit the mail reader and run something dangerous,
especially if you first sent the victim a file using UUCP.
I don't think anybody built a virus this way, since we weren't
really virus-aware at the time, but some people probably sent
password-stealers this way.

And it was easy to do a "shut down or hose up the victim's terminal" attack.
Unfortunately, the escape sequence for bold-face on one popular terminal
would hose up another popular terminal, so it wasn't always deliberate -
there were often flames on Usenet about people posting formatted articles
(commonly recipes on rec.cooking) with dangerous escape sequences in them.

A much cruder denial-of-service attack was to send +++ or other
modem-control characters, which could disconnect a Hayes-style modem.

These days, almost all of the terminal emulation programs
either just run totally dumb text, or run some subset implementation
of ANSI or the very similar DEC VT100, and most of those emulators
haven't bothered implementing the fancier features.

Another text-only attack was magic sequences for text editors like vi
(and perhaps emacs?) which would look for option-setting commands
in the beginning or end lines of files.  The purpose was to allow
comments in C code that would turn on C-related editor options,
comments in Lisp code that would turn on lisp-related options, etc.
This was eventually realized to be a security risk, so
nomodelines became the default in vi.
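
The mechanism is easy to sketch: the editor scans the first and last few
lines of a file for a marker like "vi:" and applies whatever options follow,
which is exactly why opening an untrusted file was risky.  A rough Python
illustration (not vi's actual parser; the regexp is a simplification):

```python
import re

# vi-style modelines look like:  /* vi:set ts=4 sw=4: */
# (a simplified pattern; real vi also accepted "ex:" and other forms)
MODELINE = re.compile(r'\bvi:\s*set?\s+([^:]+):')

def find_modelines(lines, window=5):
    """Return option strings from modelines in the first and last few
    lines of a file, roughly as a modeline-enabled editor would."""
    head = lines[:window]
    tail = lines[max(window, len(lines) - window):]
    found = []
    for line in head + tail:
        match = MODELINE.search(line)
        if match:
            found.append(match.group(1).strip())
    return found

code = ['/* vi:set ts=8 sw=4: */', 'int main(void) { return 0; }']
print(find_modelines(code))  # ['ts=8 sw=4']
```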

Of course, the most successful text-only attack was the Good Times virus,
which worked by infecting the wetware of the operator :-)

There is really no substitute for limiting the authority of code which
processes potentially hostile input, such as email and web pages, so that
the consequences of flaws are limited.  One way to limit authority in
current systems is to use an operating system that provides a measure of
real security between users, and then have an account which is only used
for email, web surfing etc.

Yup.  Designing code for hostile and bogus input was about the
third lesson in CS100, after "Comment everything" and "The One True
Indent Style", and well before anything fancier than arrays and loops.
But enough code skips that lesson that it's important to use
operating system protections as well.







-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RSA Attacks - Talk at Stanford - 1/28/2002 4PM (fwd)

2002-01-25 Thread Bill Stewart

Looks like an interesting talk!

-- Forwarded message --
Date: Thu, 24 Jan 2002 16:52:35 -0800 (PST)
From: Glenn Durfee [EMAIL PROTECTED]
Subject: Ph.D. Oral Exam: Monday, January 28, 4PM

Algebraic Cryptanalysis
 Glenn Durfee

Department of Computer Science
  Stanford University
Gates Building, Room 498
Monday, Jan. 28th, 2002
   4:00 PM - 5:00 PM


In this talk we study the security of the widely-used RSA public key
cryptosystem.  RSA is used in the SSL protocol for security on the
Internet, and the SET protocol used by Visa for secure credit card
transactions.  This talk outlines several cryptanalytic results on the RSA
public key cryptosystem and variants.  We obtain our results using tools
from the theory of integer lattices.

We begin by introducing a novel algorithm for the factorization of a
class of integers related closely to RSA moduli, showing a new class
of integers can be efficiently factored.  We go on to introduce
new attacks on the RSA public key cryptosystem which take advantage of
partial knowledge of a user's secret key, showing that in low public
exponent RSA, leaking a quarter of the least significant bits of the secret key
is sufficient to compromise RSA.  Similar results (though not as strong)
hold for larger values of the public exponent.  Next we describe a new attack on
the RSA public key cryptosystem when a short secret exponent is used,
extending previous bounds for short secret exponent vulnerability.  Lastly,
we describe the Sun-Yang-Laih RSA key generation schemes, and introduce
attacks to break two out of three of these schemes.

++
| This message was sent via the Stanford Computer Science Department |
| colloquium mailing list.  To be added to this list send an arbitrary   |
| message to [EMAIL PROTECTED]  To be removed from this list,|
| send a message to [EMAIL PROTECTED] For more information,|
| send an arbitrary message to [EMAIL PROTECTED] For directions|
| to Stanford, check out http://www-forum.stanford.edu   |
++




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: CFP: PKI research workshop

2001-12-28 Thread Bill Stewart

SST is the SuperSonic Transport; I think the term was specific
to US attempts to build something like the Concorde, but it may have
been more generic.  Among other problems (making it work, sonic booms,
economics in general), use of fast airplanes in non-military airspace
was limited by the capabilities of the air-traffic control systems,
which couldn't really handle airplanes that fast.
It's much easier to build supersonic airplanes for the military,
where you're not concerned about price per passenger-mile.

Except for airports and amusement parks, the only place I've seen
a monorail is in Seattle.  (I'm counting Las Vegas as an amusement park :-)
Airports similarly don't follow normal economic rules,
because they can often scam money out of government authorities,
who will often do stuff because it Looks Cool.
There may be economic niches where monorails make sense
(streets that are too narrow to add pillars for conventional
elevated railways, perhaps), but they're pretty limited.

Until recently I was the Regional ATM Specialist for
one of the offshoots of The Phone Company that did the
PicturePhones at the World's Fair back in the 60s :-)
Web cams are widely available, but they're still not how
most people make their phone calls, and it did take
30-40 years before they finally became economical.
ATM also has a fairly wide economic niche, though routers
have caught up with the big end of the performance curve,
and it always was too complex to win at the desktop end.

PKIs are quite simple and low cost to implement -
the problems are finding a way to make them widely useful.
Unfortunately, that hasn't matched most PKI companies'
business plans that promised World Domination to their VCs :-)
And even among the people who adopt crypto because it Looks Cool,
the last time I looked through the Web Of Trust on the PGP keyservers,
most keys were either unsigned or only signed by a couple of people,
not enough to build a big connected graph.

 Bill


At 07:34 PM 12/28/2001 +, Phillip Hallam-Baker wrote:
Let us see.

 Monorails are commonplace in airports these days.
 Web cams for online chat are used by millions of teenagers
 SST ? What is that

 Phill

-Original Message-
From: Peter Gutmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]; [EMAIL PROTECTED]
[EMAIL PROTECTED]; [EMAIL PROTECTED]
[EMAIL PROTECTED]
Date: 27 December 2001 21:42
Subject: Re: CFP: PKI research workshop


 As I never tire of saying, PKI is the ATM of security.
 
 Naah, it's the monorail/videophone/SST of security.  Looks great at the
World
 Fair, but a bit difficult to turn into a reality outside the fairgrounds.
 
 Peter (who would like to say that observation was original, but it was
actually
stolen from Scott Guthery).
 
 
 -
 The SPKI Mailing List
 Unsubscribe by sending unsubscribe spki to [EMAIL PROTECTED]
 









-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Stegdetect 0.4 released and results from USENET search available

2001-12-28 Thread Bill Stewart

At 01:59 PM 12/28/2001 -0800, David Honig wrote:
A.A.M + PGP = covert radio transmitter which sends coded messages.  Obviously
interesting, so you direction-find to defeat the anonymity.

And Perry replied:
[Moderator's note: And how would you possibly do that? --Perry]

Back in the old days, it was easy - Usenet messages carried a
bang-path route to the original sender.  You could forge parts of it
easily enough, as the Kremvax hoax demonstrated,
but the only real untraceability was because there were lots of
pre-Honey-Danber UUCP sites which would accept incoming messages
from unknown senders.  These days, most of them are gone -
you're really depending on how long sites keep logfiles.

[Moderator's note: That's not the point. You can post without any
authentication via many web sites, or over the net via accounts you
can get with little or no identification in a dozen countries, which
you can log in to anonymously from web cafes, airport kiosks,
etc. around the world. If you decide not to be found, you won't be
found. --Perry]

Reader anonymity depends a lot on how many people actually read A.A.M,
and on how many sites keep NNTP logs - it probably has a lot fewer readers
than the largest binary porn spam groups, but a lot also depends on
how many small ISPs around the world still spool their own news
rather than buying access from news services.  It's certainly harder
to trace than senders.

So tracing a single transmission may be hard, but tracing an ongoing pattern
is easier, unless there's a trusted Usenet site in some
country where you don't have jurisdiction problems.
That means that A.A.M + PGP is fine for an occasional
"Attack at Dawn" message, but not necessarily for routine traffic.

So it helps to add an extra step - posting the anonymous message
through a web2news gateway through an anonymizer,
or a mail2news gateway from a webmail account from a cybercafe,
or mail2news through an open relay somewhere in the world
(since open relays are usually people who haven't bothered
configuring their mail systems, and are less likely to keep logs
unless that's the default, plus you can spread your messages
among lots of different relays.)






-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Speaker Wanted - This Wednesday, Pulver Conference - Presence Instant Messaging

2001-10-29 Thread Bill Stewart

(Forwarded for [EMAIL PROTECTED] )
=
This is Brad Templeton from the EFF.  This Wednesday I'm moderating a
panel at Jeff Pulver's semi-annual conference on Presence and Instant
Messaging.   It's a smallish (couple of hundred) conference where you'll
see most of the commercial players in instant messaging, with the very
notable exception of AOL.

However, having attended this conference I have found that most of the
people there pay little attention to issues of security and privacy in
the IM world.  Sometimes for real reasons (most IM is forced by NAT and
firewalls to be routed through central servers) but often simply
because they haven't bothered.

The panel I am moderating is on these topics of Presence and Instant
Messaging, and due to various circumstances, right now I have only 2
other speakers on it, who will speak about the privacy and security work
being done by two standards bodies, the PAM Forum, and the IETF SIP working
group.  I have my own talk on the design and political issues, but I can
move a lot of that into my plenary talk later in the day where I want to
get those issues out.

In particular I am interested in technologically interesting projects or
research to allow privacy, encryption and anonymity in instant messaging,
and also in presence data and location-aware devices.  (Part of the
conference is also on location aware services, E911 mandated location-aware
phones etc.)

So I apologize for not asking until today, but if you have done any work
in these areas you would like to talk about briefly, I could have a slot
for you, and get you free attendance at this normally $2,000 conference.
Last time Lenny Foner gave a great talk on his work.

The conference info is at http://www.pulver.com/pim/ and my session is
This Wednesday, Oct 31, at 9:45am.  It is at the Marriott in Santa Clara.

Sorry as well for posting without regularly reading cypherpunks, but I
need to keep my email load down.




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Dilbert Random Number Generator

2001-10-25 Thread Bill Stewart

Dilbert's been visiting the Trolls In Accounting,
who have been spitting all over his data.
Now he's on a tour, and the troll is showing him their random number generator.

http://www.dilbert.com/comics/dilbert/archive/images/dilbert2001182781025.gif




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Field slide attacks and how to avoid them.

2001-09-19 Thread Bill Stewart

But XDR is so BORING compared to a REAL standard like ASN.1!
It doesn't have infinite possibilities for object definitions
requiring help from standards committees, multiple incompatible
data representations with different kinds of ambiguity,
or ugly API packages that are too large to believe that the
implementers debugged them adequately.  That's just no fun at all!

 (I realize it doesn't do everything in the world,
 or have all the power, expressiveness, or bit-twiddling
 that ASN.1 or even PGP/OpenPGP data formats have,
 but there's a lot to be said for something that's minimal and works.)

At 04:50 PM 09/10/2001 -0400, Kevin E. Fu wrote:
We use hashes of marshalled XDR representations of data in the SFS
read-only file system [2].  This allows us to protect the integrity of
public, read-only content without having to worry about simple
splicing attacks.  The eXternal Data Representation Standard is
popular for implementing things like NFS [2, 4].
...
[0] 6.033 Spring 2001 Quiz 2.  http://web.mit.edu/6.033/www/handouts/s01_2.ps
[1] Dos and Dont's of Client Authentication on the Web, USENIX Security 2001,
  http://cookies.lcs.mit.edu/
[2] NFS Version 3 Protocol Specification, RFC 1813
[3] SFS Read-only File System, USENIX OSDI 2000, http://www.fs.net/
[4] XDR: External Data Representation, RFC 1014
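
The anti-splicing property comes from hashing a canonical, length-prefixed
marshalled form rather than raw concatenated bytes: moving a byte from one
field to the next changes the encoded lengths, so the digest changes.
A rough Python sketch of XDR-style opaque marshalling (big-endian 4-byte
length, zero-padded to a multiple of 4, per RFC 1014) feeding a hash -
an illustration, not SFS's actual code, and SHA-256 is just a stand-in
for whatever hash SFS used:

```python
import hashlib
import struct

def xdr_opaque(data: bytes) -> bytes:
    """XDR variable-length opaque: 4-byte big-endian length, then the
    data zero-padded to a multiple of 4 bytes (RFC 1014)."""
    pad = (4 - len(data) % 4) % 4
    return struct.pack('>I', len(data)) + data + b'\x00' * pad

def digest(fields):
    """Hash the marshalled representation of a sequence of byte fields."""
    h = hashlib.sha256()
    for field in fields:
        h.update(xdr_opaque(field))
    return h.hexdigest()

# Splicing a byte across field boundaries changes the marshalled form,
# so the digest changes - unlike hashing a plain concatenation:
assert digest([b'ab', b'c']) != digest([b'a', b'bc'])
```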








-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: NYC events and cell phones

2001-09-15 Thread Bill Stewart

At 07:59 AM 09/13/2001 -0400, Angelos D. Keromytis wrote:

An interesting bit of information: on Tuesday afternoon, to the extent that
cellphones operated, GSM encryption was turned off throughout Manhattan. My
GSM phone would repeatedly warn me of this on every call I made (or tried
to make). As of Wednesday morning, things were back to normal.

Interesting.  For the most part, TDMA encryption in the US isn't turned on;
my Nokia phone always starts off calls by telling me
"Voice Privacy Not Active", even though the encryption is even lamer than
the GSM encryption.





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Stealth Computing Abuses TCP Checksums

2001-08-30 Thread Bill Stewart


http://fyi.cnn.com/2001/TECH/internet/08/29/stealth.computing/index.html
http://slashdot.org/article.pl?sid=01/08/29/199205mode=thread

A group of researchers at Notre Dame figured out how to use the
TCP Checksum calculations to get other computers to do number-crunching for 
them.

Below, we present an implementation of a parasitic computer
using the checksum function.  In order for this to occur,
one needs to design a special message that coerces a target server
into performing the desired computation.
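
The checksum being abused is the ordinary ones'-complement Internet checksum
(RFC 1071) that TCP receivers compute anyway: the parasite encodes a
candidate solution in a segment so that only a correct candidate produces a
valid checksum and gets answered.  A sketch of the checksum itself in Python
(an illustration, not the researchers' code):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b'\x00'  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

# A receiver accepts a segment only when the data plus the transmitted
# checksum folds to all-ones; a wrong "candidate solution" embedded in
# the payload fails this test and the segment is silently dropped.
print(hex(internet_checksum(b'\x00\x01\xf2\x03\xf4\xf5\xf6\xf7')))  # 0x220d
```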

The article has about the mathematical depth you'd expect from CNN :-)
But it does say that the paper will be published in Nature this week.

It's a really cool hack, though not especially efficient for real work.

Of course, the Slashdot discussion follows typical structure -
there's an interesting technical suggestion (ICMP checksums may be usable
and are probably more efficient than TCP), some trolls and flamers,
the obligatory "Imagine a Beowulf Cluster of those!" comment,
and some speculation about the potential legalities and other uses for it.




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



ANNOUNCE CYPHERPUNKS Saturday, Aug 11, 1-5pm, Stanford

2001-08-09 Thread Bill Stewart

SF Bay Area Cypherpunks August 2001 Physical Meeting Announcement

General Info
DATE:   Saturday 11 August 2001
TIME:   1:00 - 5:00 PM (Pacific Time)
PLACE:  Tressider Student Union Courtyard,
 Stanford University Campus
 Palo Alto, California, USA
Agenda
Our agenda is a widely-held secret.

As usual, this is an Open Meeting on US Soil, and everyone's invited.
Some events and topics that have been happening recently
include Sklyarov, Defcon, Worms From The Borg, and New FBI directors.
PGP is ten years old!  Several of us were at Phil Z's talk.
The 802.11 WEP Wireless Encryption Protocol has been cracked
even more thoroughly.  The Fedz couldn't crack Nicky Scarfo's PGP key,
but they could steal his passphrase by cracking his computer.

Location
The Stanford meeting location will be familiar to those who've been to our
outdoor summer meetings before, but for those who haven't been, it's on the
Stanford University campus (in Palo Alto, California), at the end of
Santa Theresa, at the tables outside Tressider Union, just west of
Dinkelspiel Auditorium.

We meet at the tables on the West side of the building, inside the
horseshoe U formed by the Tresidder building. Ask anyone on campus
where Tresidder or the Student Union is and they'll help you find it.

If the weather is bad, we'll meet inside.

Food and beverages are available at the cafe inside Tresidder.

Location Maps:
http://www.stanford.edu/home/map/search_map.html?keyword=ACADEMIC=Tresidder+Union (overview)
http://www.stanford.edu/home/map/stanford_zoom_map.html?234,312 (zoomed detail view)
http://www.stanford.edu/home/visitors/campus_map.pdf (printable Stanford map, 407k)

GPS Coordinates: 37d23:40 N 122d04:49 W

If you get lost on the way, you can try calling:
  +1.415.307.7119 (Bill)

If you have questions, comments or last-minute agenda requests, please
contact the meeting organizers:
 Bill Stewart [EMAIL PROTECTED]
 Dave Del Torto [EMAIL PROTECTED]




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: tapping undersea fibers?

2001-06-13 Thread Bill Stewart

At 12:55 PM 06/04/2001 -0400, Lenny Foner wrote:
So we now have at least two people who've confirmed my expectation,
namely that one can feasibly encrypt the entire cable.  (After all,
I know what's involved in making fast, special-purpose chips to do
various sorts of digital operations, and this isn't any different.)


I'm not particularly convinced of this -
there's OC12 hardware available now (622Mbps, aka 12 T3s plus overhead),
but most telco fibers run at multiples of OC48 or OC192
(48 or 192 T3s, aka 2.4 or 10 Gbps.)  Some cables run small numbers of
wavelengths - often 8-16 of one of those two speeds,
but some of the newer fiber technology can run 80 or 160 wavelengths
if you want to buy the electronics to put on the ends.

As a telco, your end users may be able to encrypt their data streams
fast enough, if they care, but you're not going to.
It costs way too much, and there's no demand.
And as Lenny mentions - politicians, intelligence agencies, etc.,
aren't stopped by telco-provided encryption,
because what a telco can encrypt, a bureaucrat can tell them to decrypt.







-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]