Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread John Gilmore

I'll be the first to admit that I don't understand this paper.  I'm
just an engineer, not a mathematician.  But it looks to me like the
authors are academics, who create an imaginary construction method for
a random number generator, then prove that /dev/random is not the same
as their method, and then suggest that /dev/random be revised to use
their method, and then show how much faster their method is.  All in
all it seems to be a pitch for their method, not a serious critique of
/dev/random.

They labeled one of their construction methods "robustness", but it
doesn't mean what you think the word means.  It's defined by a mess of
Greek letters like this:

  Theorem 2. Let n > m, ℓ, γ* be integers. Assume that G :
  {0, 1}^m → {0, 1}^(n+ℓ) is a deterministic (t, ε_prg)-pseudorandom
  generator. Let G = (setup, refresh, next) be defined as above. Then
  G is a ((t', q_D, q_R, q_S), γ*, ε)-robust PRNG with
  input, where t' ≈ t, ε = q_R(2·ε_prg + q_D·ε_ext + 2^(-n+1)),
  as long as γ* ≥ m + 2·log(1/ε_ext) + 1 and
  n ≥ m + 2·log(1/ε_ext) + log(q_D) + 1.

Yeah, what he said!

Nowhere do they seem to show that /dev/random is actually insecure.
What they seem to show is that it does not meet the robustness
criterion that they arbitrarily picked for their own construction.

Their key test is on pages 23-24, and begins with "After a state
compromise, A (the adversary) knows all parameters."  The comparison
STARTS with the idea that the enemy has figured out all of the hidden
internal state of /dev/random.  Then the weakness they point out seems
to be that in some cases of new, incoming randomness with
mis-estimated entropy, /dev/random doesn't necessarily recover over
time from having had its entire internal state somehow compromised.
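The failure mode they describe can be illustrated with a toy sketch (this is not the paper's construction, and not /dev/random's actual pool mixing): if the entropy estimator credits more entropy than the inputs really contain, an adversary who already knows the state can simply enumerate the possible inputs and keep tracking it.

```python
import hashlib
import itertools

def refresh(pool, data):
    """Mix an input sample into the pool (toy model: hash chaining)."""
    return hashlib.sha256(pool + data).digest()

def next_output(pool):
    """Derive an output block from the current pool."""
    return hashlib.sha256(b"out" + pool).digest()

compromised = b"\x00" * 32      # adversary knows this entire state

# Estimator credits, say, 8 bits per sample, but each sample really
# carries 1 bit (e.g. a biased timer LSB): only 4 bits of true entropy.
samples = [bytes([b]) for b in (0, 1, 1, 0)]
pool = compromised
for s in samples:
    pool = refresh(pool, s)
out = next_output(pool)

# The adversary enumerates all 2^4 possible input sequences and wins:
guess = None
for cand in itertools.product([b"\x00", b"\x01"], repeat=4):
    st = compromised
    for s in cand:
        st = refresh(st, s)
    if next_output(st) == out:
        guess = next_output(st)
        break

print(guess == out)   # True: the "reseeded" output was predictable
```

The paper's robustness game formalizes exactly this: after a compromise, outputs must become unpredictable once enough *true* entropy has been absorbed, regardless of what the estimator claimed.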

This is not very close to what "/dev/random is not robust" means in
English.  Nor is it close to what others might assume the paper
claims, e.g. "/dev/random is not safe to use."


PS: After attending a few crypto conferences, I realized that
academic pressures tend to encourage people to write incomprehensible
papers, apparently because if nobody reading their paper can
understand it, then they look like geniuses.  But when presenting at
a conference, if nobody in the crowd can understand their slides, then
they look like idiots.  So the key to understanding somebody's
incomprehensible paper is to read their slides and watch their talk,
80% of which is often explanations of the background needed to
understand the gibberish notations they invented in the paper.  I
haven't seen either the slides or the talk relating to this paper.
The cryptography mailing list

Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread John Gilmore
 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?

It's just a practice.  I agree that building a small amount of automation
for key signing parties would improve the web of trust.

I have started on a prototype that would automate small key signing
parties (as small as 2 people, as large as a few dozen) where everyone
present has a computer or phone that is on the same wired or wireless
network.

 I am specifically thinking of ways that key signing parties might be made
 scalable so that it was possible for hundreds of thousands of people...

An important user experience point is that we should be teaching GPG
users to only sign the keys of people who they personally know.
Having a signature that says, "This person attended the RSA conference
in October 2013" is not particularly useful.  (Such a signature could
be generated by the conference organizers themselves, if they wanted
to.)  Since the conference organizers -- and most other attendees --
don't know what an attendee's real identity is, their signature on
that identity is worthless anyway.

So, if I participate in a key signing party with a dozen people, but I
only personally know four of them, I will only sign the keys of those
four.  I may have learned a public key for each of the dozen, but that
is separate from me signing those keys.  Signing them would assert to
any stranger that I know that this key belongs to this identity, which
would be false and would undermine the strength of the web of trust.
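For what it's worth, GPG already distinguishes the two acts above. A sketch of the workflow (the key IDs are hypothetical placeholders; substitute the real user IDs from the party list):

```shell
# 1. Fetch the key and display its fingerprint for in-person comparison:
gpg --recv-keys 0xDEADBEEFDEADBEEF
gpg --fingerprint 0xDEADBEEFDEADBEEF

# 2. Make an exportable signature ONLY if you personally know the
#    keyholder (--ask-cert-level lets you record how well):
gpg --ask-cert-level --sign-key 0xDEADBEEFDEADBEEF

# 3. For people you merely met, a local (non-exportable) signature
#    records the key for your own use without asserting their
#    identity to strangers:
gpg --lsign-key 0xCAFEF00DCAFEF00D
```

The local-signature option maps directly onto "I learned a public key for each of the dozen, but that is separate from me signing those keys."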



[Cryptography] System level security in low end environments

2013-10-05 Thread John Gilmore
 b.  There are low-end environments where performance really does
 matter.  Those often have rather different properties than other
 environments--for example, RAM or ROM (for program code and S-boxes)
 may be at a premium.

Such environments are getting very rare these days.  For example, an
electrical engineer friend of mine was recently working on designing a
cheap aimable mirror, to be deployed by the thousands to aim sunlight
at a collector.  He discovered that connectors and wires are more
expensive than processor chips these days!  So he ended up deciding to
use a system-on-chip with a built-in radio that eliminated the need to
have a connector or a wire to each mirror.  (You can print the antenna
on the same printed circuit board that holds the chip and the radio.)

What dogs the security of our systems these days is *complexity*.  We
don't have great security primitives to just drop into place.  And the
ones we do have, have complicated tradeoffs that come to the fore
depending on how we compound them with other design elements (like
RNGs, protocols, radios, clocks, power supplies, threat models, etc).
This is invariant whether the system is low end or high end.

That radio controlled mirror can be taken over by a drive-by attacker
in a way that would take a lot more physical labor to mess up a
wire-controlled one.  And if the attack aimed two hundred mirrors at
something flammable, the attacker could easily start a dangerous fire
instead of making cheap solar energy.  (Denial of service is even
easier - just aim the mirrors in random directions and the power goes
away.  Then what security systems elsewhere were depending on that
power?  This might just be one cog in a larger attack.)  Some of the
security elements are entirely external to the design.  For example,
is the radio protocol one that's built into laptops by default, like
wifi or bluetooth?  Or into smartphones?  Or does it require custom
hardware?  If not, a teenager can more easily attack the mirrors --
and a corrupt government can infect millions of laptops and phones
with malware that will attack mirror arrays that they come near to.

For products that never get made in the millions, the design cost
(salaries and time) is a significant fraction of the final cost per
unit.  Therefore everybody designs unencrypted and unauthenticated
stuff, just because it's easy and predictable.

For example it's pretty easy to make the system-on-chip above send or
receive raw frames on the radio.  Harder to get it to send or receive
UDP packets (now it needs an IP address, ARP, DHCP, more storage, ...).
Much harder to get it to send or receive *authenticated* frames or UDP
packets (now it needs credentials; is it two-way authenticated, if so
it needs a way to be introduced to its system, etc).  Much harder
again to get it to send or receive *encrypted* frames or UDP packets
(now it needs keys too, and probably more state to avoid replays,
etc).  And how many EE's who could debug the simple frame sending
firmware and hardware, can debug a crypto protocol they've just
implemented (even making the dubious assumption that they compounded
the elements in a secure way and have just made a few stupid coding
errors)?

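To make the jump in difficulty concrete, here is roughly what the "authenticated frames" step alone involves, as a minimal sketch (the pre-shared key and frame layout are invented for illustration):

```python
import hashlib
import hmac
import struct

KEY = b"shared-secret-from-provisioning"   # hypothetical pre-shared key

def make_frame(counter, payload):
    """Frame = 4-byte counter || payload || 16-byte truncated HMAC tag."""
    body = struct.pack(">I", counter) + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()[:16]
    return body + tag

def check_frame(frame, last_counter):
    """Verify the tag and reject replays; return (counter, payload) or None."""
    body, tag = frame[:-16], frame[-16:]
    expect = hmac.new(KEY, body, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expect):
        return None                        # forged or corrupted frame
    counter = struct.unpack(">I", body[:4])[0]
    if counter <= last_counter:
        return None                        # replayed frame
    return counter, body[4:]

f = make_frame(7, b"aim:+12.5deg")
print(check_frame(f, last_counter=6))      # accepted
print(check_frame(f, last_counter=7))      # rejected as a replay
```

And this sketch already glosses over key provisioning, key rotation, and counter persistence across reboots, which is exactly where the engineering cost lives.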


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread John Gilmore
  Here's a crazy idea: instead of using one of these formats, use a
  human readable format that can be described by a formal grammar
  which is hopefully regular, context-free, or context-sensitive in a
  limited manner

If only we could channel the late Jon Postel.  Didn't you ever notice
how almost all the early Arpanet/Internet standards use plain text
separated by newlines, simply parsed, with a word at the front of each
line that describes what is on the line?  Like, for example, the
header of this email message.  And the SMTP exchange that delivered it
to your mailbox.

It makes everything so easy to debug...and extend...and understand.
And it turns out to often be more compact than binary formats.
Much better than binary blobs that not even their mother could love.
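The parsing style being praised here really is a few lines of code. A minimal sketch of an RFC-822-style header parser:

```python
def parse_headers(text):
    """Parse 'Word: value' header lines; stop at the first blank line."""
    headers = {}
    for line in text.splitlines():
        if not line.strip():
            break                      # blank line ends the header block
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers

msg = "From: gnu@example.net\nSubject: encoding formats\n\nbody here"
print(parse_headers(msg))
# {'From': 'gnu@example.net', 'Subject': 'encoding formats'}
```

Unknown header names pass through harmlessly, which is how these formats stayed extensible for decades.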


Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread John Gilmore
 And the problem appears to be compounded by dofus legacy implementations
 that don't support PFS greater than 1024 bits. This comes from a
 misunderstanding that DH keysizes only need to be half the RSA length.
 So to go above 1024 bits PFS we have to either
 1) Wait for all the servers to upgrade (i.e. never do it because they won't)
 2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
 or above'.

Can the client recover and do something useful when the server has a
buggy (key length limited) implementation?  If so, a new cipher suite
ID is not needed, and both clients and servers can upgrade asynchronously,
getting better protection when both sides of a given connection are
running the new code.

In the case of (2) I hope you mean "yes we really do PFS with an
unlimited number of bits".  1025, 2048, as well as 16000 bits should work.
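One sketch of how asynchronous upgrade could work without a new cipher suite ID: each side advertises the group sizes it supports and the connection uses the strongest intersection. This illustrates the idea only; it is not TLS's actual negotiation mechanism.

```python
def negotiate_dh_bits(client_supported, server_supported):
    """Pick the largest DH group size both sides support, or None."""
    common = set(client_supported) & set(server_supported)
    return max(common) if common else None

# Old server capped at 1024 bits; an upgraded client still interoperates:
print(negotiate_dh_bits([1024, 2048, 4096], [1024]))        # 1024
# Two upgraded peers automatically get the stronger group:
print(negotiate_dh_bits([1024, 2048, 4096], [1024, 2048]))  # 2048
```

Each side gets better protection the moment its peer upgrades, with no flag day.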


[Cryptography] An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities

2013-09-17 Thread John Gilmore
Forwarded-By: David Farber
Forwarded-By: Annie I. Anton Ph.D.

NSA cryptanalyst: We, too, are Americans

Summary: ZDNet Exclusive: An NSA mathematician shares his from-the-trenches 
view of the agency's surveillance activities.

By David Gewirtz for ZDNet Government | September 16, 2013 -- 12:07 GMT

An NSA mathematician, seeking to help shape the ongoing debate about the 
agency's foreign surveillance activities, has contributed this column to ZDNet 
Government. The author, Roger Barkan, also appeared in last year's National 
Geographic Channel special about the National Security Agency.

The rest of this article contains Roger's words only, edited simply for
format.

Many voices -- from those in the White House to others at my local coffee shop 
-- have weighed in on NSA's surveillance programs, which have recently been 
disclosed by the media.

As someone deep in the trenches of NSA, where I work on a daily basis with data 
acquired from these programs, I, too, feel compelled to raise my voice. Do I, 
as an American, have any concerns about whether the NSA is illegally or 
surreptitiously targeting or tracking the communications of other Americans?

The answer is emphatically, No.

NSA produces foreign intelligence for the benefit and defense of our nation. 
Analysts are not free to wander through all of NSA's collected data 
willy-nilly, snooping into any communication they please. Rather, analysts' 
activity is carefully monitored, recorded, and reviewed to ensure that every 
use of data serves a legitimate foreign intelligence purpose.

We're not watching you. We're the ones being watched.

Further, NSA's systems are built with several layers of checks and redundancy 
to ensure that data are not accessed by analysts outside of approved and 
monitored channels. When even the tiniest analyst error is detected, it is 
immediately and forthrightly addressed and reported internally and then to 
NSA's external overseers. Given the mountains of paperwork that the incident 
reporting process entails, you can be assured that those of us who design and 
operate these systems are extremely motivated to make sure that mistakes happen 
as rarely as possible!

A myth that truly bewilders me is the notion that the NSA could or would spend 
time looking into the communications of ordinary Americans. Even if such 
looking were not illegal or very dangerous to execute within our systems, given 
the monitoring of our activities, it would not in any way advance our mission. 
We have more than enough to keep track of -- people who are actively planning 
to do harm to American citizens and interests -- than to even consider spending 
time reading recipes that your mother emails you.

There's no doubt about it: We all live in a new world of Big Data.

Much of the focus of the public debate thus far has been on the amount of data 
that NSA has access to, which I feel misses the critical point. In today's 
digital society, the Big Data genie is out of the bottle. Every day, more 
personal data become available to individuals, corporations, and the 
government. What matters are the rules that govern how NSA uses this data, and 
the multiple oversight and compliance efforts that keep us consistent with 
those rules. I have not only seen but also experienced firsthand, on a daily 
basis, that these rules and the oversight and compliance practices are 
stringent. And they work to protect the privacy rights of all Americans.

Like President Obama, my Commander-in-Chief, I welcome increased public 
scrutiny of NSA's intelligence-gathering activities. The President has said 
that we can and will go further to publicize more information about NSA's 
operating principles and oversight methodologies. I have every confidence that 
when this is done, the American people will see what I have seen: that the NSA 
conducts its work with an uncompromising respect for the rules -- the laws, 
executive orders, and judicial orders under which we operate.

As this national dialogue continues, I look to the American people to reach a 
consensus on the desired scope of U.S. intelligence activities. If it is 
determined that the rules should be changed or updated, we at NSA would 
faithfully and effectively adapt. My NSA colleagues and I stand ready to 
continue to defend this nation using only the tools that we are authorized to 
use and in the specific ways that we are authorized to use them. We wouldn't 
want it any other way.

We never forget that we, too, are Americans.

Roger Barkan, a Harvard-trained mathematician, has worked as an NSA 
cryptanalyst since 2002. The views and opinions expressed herein are those of 
the author and do not necessarily reflect those of the National Security 
Agency/Central Security Service.

[Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-17 Thread John Gilmore
 not volunteer, thus I
don't use Facebook, Google, etc.  When collection is involuntary, like
with NSA's Big Data, I work to limit their power, both to collect, and
to use; and then I don't believe they will follow the rules anyway,
because of all the historical evidence.  So I arrange my life to not
leave a big data trail: I don't use ATMs, I pay with cash, don't carry
identification, don't use Apple or Google or Microsoft products, etc.

Your government will not make a big announcement when it has become a
police state.  So if you're a patriot, you'd better practice now: how
to avoid stupid mistakes that would let a police state catch you when
telling the truth to your fellow citizens becomes a crime -- like it
did for Mr. Snowden, Ms. Manning, Mr. Ellsberg, Mr. Nacchio,
Mr. Assange, and Ms. Mayer (who claims she's been dragged silently
kicking and screaming to spy on her customers rather than be
prosecuted for telling them the truth).  NSA and its Big Data will not
be defending you when the secret police come to bust you for
publishing secrets.  NSA will be on the cops' and prosecutors' side.
They have recently filed legal memos declaring that they don't have to
help the defense side in any criminal trials, even when NSA has
exculpatory data, and even when NSA provided wiretapped Big Data that
led the prosecutors to you.  Defending the citizens from the excesses
of government isn't their job.  Defending their turf, their budget,
and their powers is their job.

John Gilmore

Re: [Cryptography] An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities

2013-09-17 Thread John Gilmore
Techdirt takes apart his statement here:

  NSA Needs To Give Its Rank-and-File New Talking Points Defending
  Surveillance; The Old Ones Are Stale
  from the that's-not-really-going-to-cut-it dept
  by Mike Masnick, Tue, Sep 17th 2013

  It would appear that the NSA's latest PR trick is to get out beyond
  the top brass -- James Clapper, Keith Alexander, Michael Hayden and
  Robert Litt haven't exactly been doing the NSA any favors on the PR
  front lately -- and get some commentary from the rank and file.
  ZDNet apparently agreed to publish a piece from NSA mathematician/
  cryptanalyst Roger Barkan in which he defends the NSA using a bunch
  of already debunked talking points. What's funny is that many of
  these were the talking points that the NSA first tried out back in
  June and were quickly shown to be untrue. However, let's take a
  look. It's not that Barkan is directly lying... it's just that he's
  setting up strawmen to knock down at a record pace.


[Cryptography] FISA court releases its Primary Order re telephone metadata

2013-09-17 Thread John Gilmore
The FISA court has a web site (newly, this year):

Today they released a Memorandum Opinion and Primary Order in 
case BR 13-109 (Business Records, 2013, case 109), which lays
out the legal reasoning behind ordering several telephone companies
to prospectively give NSA the calling records of every subscriber.
That document is here:

I am still reading it...


Re: [Cryptography] Perfection versus Forward Secrecy

2013-09-12 Thread John Gilmore
  I wouldn't mind if it had been called Pretty Good Forward Secrecy instead,
  but it really is a lot better than regular public key.
 My point was that the name is misleading and causes people to look for more
 than is there.

There doesn't seem to be much downside to just calling it "Forward
Secrecy" rather than "Perfect Forward Secrecy".  We all seem to agree
that it isn't perfect, and that it is a step forward in security, at a
moderate cost in latency and performance.


Re: [Cryptography] Matthew Green on BULLRUN: briefly censored

2013-09-12 Thread John Gilmore

Johns Hopkins University censored this exact blog post by Prof. Green,
because of a complaint from its local defense contractor affiliated
with NSA, the Applied Physics Laboratory.

The university gets slight credit for backtracking one day after the
censorship story hit Twitter and the press.  So the blog post is now
back (and is still worth reading).

Here's the story:

Now, why is it that so many folks with links to NSA think like
totalitarians?  It's wonderful seeing them crawl out of the woodwork
and try to give orders to the public about what it is allowed to
think, what it is allowed to read, and what it is allowed to write.
It's only wonderful because the huge public counter-reaction protects
us -- the totalitarians reveal their true colors, but they don't
actually get to tell us what to do.  Thank you, fellow denizens of the
world, for creating your own freedom, by making a lot of noise when
some NSA-affiliated idiot tries to take it away.


PS: How much NSA tax money does JHU's Applied Physics Lab get?  I don't
know, but here's a guy on LinkedIn who worked at NSA in the past,
works at the Lab today, and brags that he's managing a $120M contract
from NSA:

Re: [Cryptography] Points of compromise

2013-09-09 Thread John Gilmore
Phillip Hallam-Baker wrote:
 5) Protocol vulnerability that IETF might have fixed but was discouraged
 from fixing.

By the way, it was a very interesting exercise to actually write out
on graph paper the bytes that would be sent in a TLS exchange.  I did
this with Paul Wouters while working on how to embed raw keys in TLS
(that would be authenticated from outside TLS, such as via DNSSEC).

Or, print out a captured TLS packet exchange, and try to sketch around
it what each bit/byte is for.  The TLS RFCs, unlike most Jon Postel
style RFCs, never show you the bytes -- they use a high level
description with separate rules for encoding those descriptions on
the wire.

There is a LOT of known plaintext in every exchange!

Known plaintext isn't the end of the world.  But it makes a great crib
for cryptanalysts who have some other angle to attack the system with.
Systems with more known plaintext are easier to exploit than those
with less.  Is that why TLS has more known plaintext than average?
Only the NSA knows for sure.
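A toy demonstration of why a crib matters, using deliberately misused (reused) keystream rather than anything TLS actually does: once known plaintext pins down keystream bytes, other traffic under the same keystream falls with them.

```python
import os

def xor(a, b):
    """XOR two byte strings, truncating to the shorter one."""
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)            # stands in for stream-cipher output

known_pt = b"\x16\x03\x01GET /"       # predictable protocol bytes (a crib)
secret_pt = b"password=hunter2"

c1 = xor(known_pt, keystream)
c2 = xor(secret_pt, keystream)        # keystream reuse: the fatal mistake

recovered_ks = xor(c1, known_pt)      # crib recovers keystream bytes
print(xor(c2, recovered_ks))          # leaks the start of the secret
```

Real protocols avoid keystream reuse, but the general principle stands: every byte an analyst can predict is a free equation against whatever weakness they are attacking.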



Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread John Gilmore
  First, DNSSEC does not provide confidentiality.  Given that, it's not
  clear to me why the NSA would try to stop or slow its deployment.

DNSSEC authenticates keys that can be used to bootstrap
confidentiality.  And it does so in a globally distributed, high
performance, high reliability database that is still without peer in
the world.

It was never clear to me why DNSSEC took so long to deploy, though
there was one major moment at an IETF in which a member of the IESG
told me point blank that Jim Bidzos had made himself so hated that the
IETF would never approve a standard that required the use of the RSA
algorithm -- even despite a signed blanket license for use of RSA for
DNSSEC, and despite the expiration of the patent.  I thought it was an
extreme position, and it was very forcefully expressed -- but it was
apparently widely enough shared that the muckety-mucks did force the
standard to go back to the committee and have a second algorithm added
to it (which multiplied the interoperability issues considerably and
caused several years of further delay).


PS: My long-standing domain registrar STILL doesn't support
DNSSEC records -- which is why my domain doesn't have DNSSEC
protection.  Can anybody recommend a good, cheap, reliable domain
registrar who DOES update their software to support standards from ten
years ago?

Re: [Cryptography] IA side subverted by SIGINT side

2013-09-06 Thread John Gilmore
 I have a small amount of raised eyebrow because the greatest bulwark
 we have against the SIGINT capabilities of any intelligence agency are
 that agency's IA cousins. I don't think that the Suite B curves would
 have been intentionally weak. That would be a shock.

Then be shocked, shocked that the muscular exploitation side of an
intelligence agency would overrule the weak Information Assurance
side.  It happens over and over.

It even happens in companies that have no SIGINT side, like Crypto AG,
when somebody near the top is corrupted or blackmailed into submission.

As late as 1996, the National Academy of Sciences CRISIS panel was
tasked by the US Congress with coming up with a US crypto policy that
would be good for the whole nation, updating the previous policy that
was driven by spy agency and law enforcement excesses to sacrifice the
privacy and security of both people and companies.  After taking a
large variety of classified and unclassified input, the panel's
unanimous consensus suggested that everybody standardize on 56-bit
DES, which they KNEW was breakable.

Diffie, Hellman and Baran persuasively argued in the 1970s when DES
was up for standardization that a brute force DES cracker was
practical; they recommended longer keys than 56 bits.  See for example
this contemporaneous 1976 cassette recording / transcript:

Subsequent papers in 1993 (Wiener, "Efficient DES Key Search") and in
1996 (Goldberg & Wagner, "Architectural Considerations for
Cryptanalytic Hardware") provided solid designs for brute-force DES
key crackers.  Numerous cryptographers and cypherpunks provided input
to the CRISIS panel as well.  They even cited these papers and input
on page 288 of their report.

I have never seen a subsequent accounting by the CRISIS panel members
for this obviously flawed recommendation.  It was rapidly obsoleted by
subsequent developments when in June 1997 Rocke Verser coordinated a
team to publicly crack DES by brute force in months; when in 1998 EFF
revealed its DES Cracker hardware that cost $250K and could crack DES
in a week; and when in 2000 the export regs were effectively removed
on any strength encryption in mass market and free software, a change
forced upon them by EFF's success in Dan Bernstein's First Amendment

The panel members included substantial information-assurance folks
like Marty Hellman and Peter Neumann, Lotus Notes creator Ray Ozzie,
and Willis Ware (an engineer on WW2 radars and the Johnniac, who later
spread computers throughout aviation design and the Air Force, ended
up at RAND, and served on the 1974 Privacy Act's Privacy Protection
Study Commission).  But several of those people (and others on the
panel such as Ann Caracristi, long-term NSA employee and 2-year deputy
director of NSA) also have a long history involved with classified
military work, which makes their publicly-uttered statements unlikely
to reflect their actual beliefs.


PS: The CRISIS panel also recommended that encryption of any strength
be exportable "if the proposed product user is willing to provide
access to decrypted information upon a legally authorized request."
They assumed the ongoing existence of a democratic civilian government
and a functioning independent court system in the United States -- an
assumption that is currently questionable.  I don't think the panel
foresaw that a single legally authorized request would come with a
gag order from a secret court, would purport to target a single
unnamed individual, but would nevertheless require that information
about every person making a phone call in the United States be turned
over to a classified government agency for permanent storage and
exploitation.  Nor did they see that the government they were part of
would be committing serious international war crimes including political
assassination, torture, indefinite detention without trial, and wars
of aggression, on an ongoing basis.  Either that, or maybe NSA
blackmailed the committee members into these recommendations, just as
J. Edgar Hoover blackmailed his way through 40 years of unchecked
power.  Trouble is, Hoover eventually had to die; NSA, not being
human, does not have that natural limit.

Re: [Cryptography] NSA hates sunshine

2013-09-06 Thread John Gilmore
  As of Jan-2014 CAs are forbidden from issuing/signing anything less than 
  2048-bit certs.  
 For some value of "forbidden". :-)

Yeah, just like employees at big companies are forbidden to reveal
how they are collaborating with NSA.

Years ago I heard what happened when George Davida filed a patent on
something related to encryption, all the way back in 1978, and
eventually received a communication from the government telling him
that his patent was subject to patent secrecy, that it would never
issue, and that he could not even tell anyone that it had been
suppressed, nor could he ever tell anyone how his invention worked.
In theory, the law was all on the NSA's and the patent office's side.
But in fact, they were in a very weak position.

Instead of acquiescing, Davida shouted it to the housetops, engaged
the press and his university about censorship of academic freedom,
involved his Congressperson, etc.  Within months, the secrecy order
was rescinded.

NSA hates sunshine.  NSA secrecy relies on the cowardice of most
people.  Courage is all it takes to beat them.

If NSA tries to shut you up, just shine a lot of attention on their
attempt to shut you up.  Spread the information that they are trying
to suppress, far and wide.  Send copies to a dozen random post-office
boxes in different cities, asking the recipient to physically bring it
in to their local newspaper.  Leave your cellphone at home, then stash
copies in places that you don't frequent, so that government agents
can't come raid your house and office and steal all copies of what
they're trying to suppress.  In my case I posted something like this
(a suppressed paper by Ralph Merkle) to Usenet, and it was suddenly on
thousands of servers overnight.

NSA habitually decides that the publicity that their activities get
from any continued effort to suppress the information is FAR worse
than the damage caused by the initial release of the info.  Any
efforts they make to shut you up, prosecute you, jail you, etc give
you a perfect soapbox, and the attention of the news media and the
public.  Keep repeating the info, from your jail cell if necessary,
and you're likely to win.  Because if NSA relents, your revelations
become last week's news and get a lot less public attention.  When
NSA found out I had copies of an early encryption tutorial that they
considered classified (I was suing them under FOIA to get a copy, but
then found copies in a public library), they first tried to persuade
my lawyer to bring in all the copies "so we can secure them in a safe
place."  That's NSA-ese for "throw them down a deep hole where you'll
never see them again."  When we refused, and instead contacted the New
York Times, which printed a story about the attempted suppression, NSA
and DoJ buckled within one day.  (Indeed, the way I found out they had
suddenly declassified the document is that they called the NYT
reporter to tell him.  They never did tell me; I got the news from the
reporter.)

As part of suing the government, the Al Haramain foundation
accidentally received a government report making it clear that the
government had illegally wiretapped their phone calls.  They noticed
this but it took the government 60 days to notice.  Unfortunately,
instead of making hundreds of copies of the document, and spreading
them all over the world and to the press, they did what the government
asked, and destroyed all their copies of the document.  Once all
copies of the document were gone, NSA went to the court and claimed
first that the whole thing was a state secret and couldn't proceed,
and then second that the group didn't have any standing to challenge
the wiretaps in court because Al Haramain (now) had zero evidence that
the taps had even occurred.  The foundation and their lawyers have
literally spent years of work recovering from that one mistake, and
only the kind indulgence of a smarter than average judge enabled their
lawsuit to survive at all.  See this story by one of their lawyers:

Don't make the same mistake when NSA, or their minions at the FBI or
FISA or DoJ come to threaten YOU to suppress information that came to you
through no fault of your own.

John Gilmore


[Cryptography] Snowden fabricated digital keys to get access to NSA servers?

2013-06-28 Thread John Gilmore

The Daily Beast

Greenwald: Snowden's Files Are Out There if 'Anything Happens' to Him
by Eli Lake Jun 25, 2013 1:36 PM EDT

Snowden has shared encoded copies of all the documents he took so that they 
won't disappear if he does, Glenn Greenwald tells Eli Lake.

As the U.S. government presses Moscow to extradite former National Security 
Agency contractor Edward Snowden, America's most wanted leaker has a plan B. 
The former NSA systems administrator has already given encoded files containing 
an archive of the secrets he lifted from his old employer to several people. If 
anything happens to Snowden, the files will be unlocked.

Glenn Greenwald, the Guardian journalist who Snowden first contacted in 
February, told The Daily Beast on Tuesday that Snowden "has taken extreme 
precautions to make sure many different people around the world have these 
archives to insure the stories will inevitably be published." Greenwald added 
that the people in possession of these files cannot access them yet because 
"they are highly encrypted and they do not have the passwords." But, Greenwald 
said, "if anything happens at all to Edward Snowden, he told me he has arranged 
for them to get access to the full archives."

The fact that Snowden has made digital copies of the documents he accessed 
while working at the NSA poses a new challenge to the U.S. intelligence 
community that has scrambled in recent days to recover them and assess the full 
damage of the breach. Even if U.S. authorities catch up with Snowden and the 
four classified laptops the Guardian reported he brought with him to Hong Kong, 
the secrets Snowden hopes to expose will still likely be published.

A former U.S. counterintelligence officer following the Snowden saga closely 
said his contacts inside the U.S. intelligence community think Snowden "has 
been planning this for years and has stashed files all over the Internet." This 
source added, "At this point there is very little anyone can do about this."

The arrangement to entrust encrypted archives of his files with others also 
sheds light on a cryptic statement Snowden made on June 17 during a live chat 
with The Guardian. In the online session he said, "All I can say right now is 
the U.S. government is not going to be able to cover this up by jailing or 
murdering me. Truth is coming, and it cannot be stopped."

Last week NSA Director Keith Alexander told the House Permanent Select 
Committee on Intelligence that Snowden was able to access files inside the NSA 
by "fabricating digital keys" that gave him access to areas he was not allowed to 
visit as a low-level contractor and systems administrator. One of those areas 
included a site he visited during his training that Alexander later told 
reporters contained one of the Foreign Intelligence Surveillance Act (FISA) 
Court orders published by The Guardian and The Washington Post earlier this 
year.
[John here.  Let's try some speculation about what this phrase,
"fabricating digital keys," might mean.]


Re: Computer health certificate plan: Charney of DoJ/MS

2010-10-07 Thread John Gilmore
 it that way.

Security measures should report to the system owner -- not to the ISP
or the manufacturer.  The owner of the machine should determine which
software it's appropriate for it to run.  This whole idea of
collectivist approval of your computing environment gives me the
willies.  In their model, you'd be perfectly free to write a new piece
of software, sort of the way you are perfectly free to design and
build a new house.  First you spend tens of thousands of dollars on a
government-licensed architect and a similarly licensed structural
engineer.  Then you submit your plans to a bureaucrat, and wait.  And
wait.  And they demand changes.  And you negotiate, but they really
don't care what you want; you NEED their approval.  So you wait some
more, then give in to whatever they want.  Don't forget to use union
labor to build your software -- it'll be required.  And any bureaucrat
can come by after an alcoholic lunch to inspect your software -- and
if you don't properly kiss their ass and/or bribe them, their "red
tag" will physically keep your software from being usable on every
computer.  Periodically politicians will write bizarre new
requirements into the rules (e.g. you can't use PVC pipe because that
would put local plumbers out of work; or you can't use portable
languages because then your software might run on competing platforms),
and you'll just have to follow orders.  At least that's how the
Planning Department and Building Inspection Department work here in
San Francisco.  I don't see why a software monopoly enforced from the
top would work any differently.  Writing software for any Apple platform
except the Mac is already like that.

John Gilmore

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to

Re: Something you have, something else you have, and, uh, something else you have

2010-09-27 Thread John Gilmore
 I don't know how NZ banks do it; in the US, they use the phone
 number you're calling from.  Yes, it's spoofable, but most folks (a)
 don't know it, and (b) don't know how.

No, they don't use the phone number to validate anything.  I routinely
ignore the instructions to "call from your home phone."  I call in from
random payphones to activate my cretin cards, and they activate just
fine.
Perhaps there's a database record made somewhere with the phone number
of that payphone -- but the card is active, and I could be stealing 
money from it immediately.

Note also that their ability to get that phone number depends on the
FCC exemption that allows 800-numbers to bypass caller-ID blocking.
If the FCC ever comes to its senses (I know, unlikely) then making
somebody call an 800-number will not even produce a phone number.



Re: 2048-bit RSA keys

2010-08-18 Thread John Gilmore
 It's worth a quote from the paper at CRYPTO '10 on factorization of a
 768-bit number:

A good paper by top academics.

 Another conclusion from
 our work is that we can confidently say that if we restrict ourselves to
 an open community, academic effort such as ours and unless something
 dramatic happens in factoring, we will not be able to factor a 1024-bit
 RSA modulus within the next five years [27]. After that, all bets are off.

The 768-bit team started crunching in early 2007 and completed three
years later in December 2009.  They used fewer than a thousand
commercially available unspecialized computers, connected by
commercially available interconnects.  Their intermediate results fit
on less than a dozen $150 2TB disk drives.  And one of their results
is that it's better to scale up the part of the process that scales
linearly with minimal communication (sieving), to reduce the complexity
of the nonlinear parts.

(Given their prediction that they won't be done with a 1024-bit number
within 5 years, but they will be done well within the next decade,
which 1024-bit number are they starting to factor now?  I hope it's a
major key that certifies big chunks of the Internet for https today,
rather than one of those silly challenge keys.)

Their reported time and difficulty results are great lower bounds
on the capabilities of the covert or criminal -- but don't mistake
them for upper bounds!

No open-community academic has ever designed, built and deployed
special-purpose hardware for factoring numbers of this size.  Yet they
have published designs that claim order-of-magnitude speedups or
better on time-consuming parts of the process.  EFF read similar
published paper designs for DES cracking.  When a few years later we
built the actual device, we discovered that the basic structure of the
academics' designs really did work.  There are good reasons to believe
that the covert community *has* built RSA cracking hardware as good or
better than what's been publicly designed.  And in some places covert
agencies and organized crime are partners, thus merely stealing large
amounts of money, as opposed to military objectives, might motivate a
covert key crack.

Here is Europe's consensus report on recommended key sizes, also
co-authored by Lenstra: 

  ECRYPT2 Yearly Report on Algorithms and Keysizes (2010).

  For RSA, we recommend |N| = 1024 for legacy systems and |N| = 2432
  for new systems.

A more accessible table of ECRYPT2-2010 recommendations:

  Bits    Security level
  1008:   Short-term protection against medium organizations,
          medium-term protection against small organizations
  1248:   Very short-term protection against agencies,
          long-term protection against small organizations;
          smallest general-purpose level
  1776:   Legacy standard level
  2432:   Medium-term protection
  3248:   Long-term protection;
          generic application-independent recommendation,
          protection from 2009 to 2040
  15424:  Foreseeable future;
          good protection against quantum computers,
          unless Shor's algorithm applies



Re: non 2048-bit keys

2010-08-15 Thread John Gilmore
  ... 2048-bit keys performing
 at 1/9th of 1024-bit. My own internal benchmarks have been closer to
 1/7th to 1/8th. Either way, that's back in line with the above stated
 90-95% overhead. Meaning, in Dan's words, "2048 ain't happening."

Can I abuse a phrase and call this "binary thinking"?

There is no reason that the next step after 1024 bits has to be 2048 bits.
How about 1032 bits?  Or 1040?  Or 1104?
How about 1200 bits?  How about 1536?  How about 1600?  1808?

I have a theory that if everyone picked a pseudo-random key size above
1024 and below 2048, rather than standardizing on Yet Another Fixed
Keysize, we'd avoid making a powerful incentive for bad guys to build
a key-cracker optimized for one size.  Which incentive we have
currently created at 1024 bits.  It's the Microsoft Windows of key
sizes -- the target that gets you 90+% of the market.  So pick a
larger size than 1024 that your server load can live with, even if it
isn't 2048.  And don't tell anybody else what size you picked :-).
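The suggestion can be sketched in a few lines (an illustrative sketch, not a vetted recommendation; the exact range and the byte-boundary rounding are my assumptions):

```python
import secrets

def pick_rsa_modulus_bits(low=1032, high=2048, step=8):
    # Choose a pseudo-random modulus size strictly between 1024 and
    # 2048 bits, on a byte boundary so common toolkits accept it.
    # A per-server size denies attackers a single fixed target to
    # optimize a key-cracker for.
    return secrets.choice(range(low, high, step))  # 1032, 1040, ..., 2040

bits = pick_rsa_modulus_bits()
```

And, per the post, keep the size you picked to yourself.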



Re: lawful eavesdropping by governments - on you, via Google

2010-08-03 Thread John Gilmore
 There is no guarantee, once an eavesdropping system is
 implemented, that it will be used only for legitimate purposes -- see,
 for example, the scandal in which Greek government ministers were
 listened to using the lawful intercept features of cellphone

And, by the way, what ever happened with the Google lawful access
system and China?  Inside the Google internal network is a whole
wiretapping subsystem designed to answer orders and requests from cops
and governments all over the globe (including US warrants, subpoenas,
National Security Letters, and court orders, as well as those of other
countries).  The trigger that gave Google the sudden courage to tell
the Chinese where to stuff it was that they analyzed the malware
which had succeeded in penetrating their internal network, and
discovered that it was designed to specifically try to break into
Google's internal wiretapping system -- presumably so China could do
covert wiretaps into the mountain of up-to-the-minute personal data
that is Google -- wiretaps that wouldn't get reported to the US
government or to Google management or to anybody else.

So, six months later, Google and the Chinese government had a nicely
staged negotiated moment where each of them could claim victory, and
things have gone more or less back to normal on the surface.  But
nobody on either side has said anything about what kind of access the
government of China is getting to Google's internal network.  My guess
is that their detente also involved some negotiation about that, not
just about censored or non-censored searches.  Anybody know more?


PS: One of the great things about having a big global company that
collects and retains massive data about individuals is that
governments can get that data with simple subpoenas.  Most of the time
they could never get a judge to sign a warrant, or the legislature to
pass a law, to collect the same information directly from the data
subject (i.e. you).  Why?  A terrible US Supreme Court decision
(California Bankers Association) from decades ago decided that you
have zero Fourth Amendment protection for data that third parties have
collected about you.  The government can't collect it themselves, by
watching you or searching your house or your communications, but they
can grab it freely from anybody who happens to collect it.  (In a
classic blow-a-hole-in-the-constitution-and-They-Will-Come maneuver,
numerous laws now *require* businesses to collect all kinds of data
about their customers, employees, etc, IN ORDER that governments can
later look at it with no Fourth Amendment protection for the victims.)
Google, of course, needed no law from the Feds to inspire them to make
a database entry every time you move your mouse from one side of the
screen to the other.  Or open your Google phone.  Or call their free
411 service.  Or read your email.  Or visit any web site (free Google
Analytics is on most, even if there are no ads).  Or ...


Re: What if you had a very good patent lawyer...

2010-07-23 Thread John Gilmore
It's pretty outrageous that anyone would try to patent rolling barcoded
dice to generate random numbers.

I've been generating random strings from dice for years.  I find that
gamers' 20-sided dice are great; each roll gives you a hex digit, and
anytime you roll a 17 thru 20, you just roll again.  One die will do;
you just roll it as many times as you need hex digits.
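The procedure is simple rejection sampling; a sketch (the die is simulated here with `random.randint` purely for illustration, since the whole point is to use a physical die):

```python
import random

def hex_digits_from_d20(n, roll=lambda: random.randint(1, 20)):
    # One fair d20 roll of 1..16 yields one uniform hex digit; a roll
    # of 17..20 is discarded and re-rolled, which keeps the digits
    # unbiased (rejection sampling).
    out = []
    while len(out) < n:
        r = roll()
        if r <= 16:
            out.append("0123456789abcdef"[r - 1])
    return "".join(out)

digits = hex_digits_from_d20(32)  # 32 hex digits = 128 bits
```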

Presumably pointing a camera at ordinary dice could automate the data
collection -- hey, wait, let me get my patent lawyer!



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread John Gilmore
 designed 25 years ago would not scale to today's load.  There was a  
 crucial design mistake: DNS packets were limited to 512 bytes.  As a  
 result, there are 10s or 100s of millions of machines that read *only*  
 512 bytes.

Yes, that was stupid, but it was done very early in the evolution of
the Internet (when there were only a hundred machines or so).

Another bizarre twist was that the Berkeley socket interface to UDP
packets would truncate incoming packets without telling the user
program.  If a user tried to read 512 bytes and a 600-byte packet came
in, you'd get the first 512 bytes and no error!  The other 88 bytes
were just thrown away.  When this incredible 1980-era design decision
was revised for Linux, they didn't fix it!  Instead, they return the
512 bytes, throw away the 88 bytes, and also return an error flag
(MSG_TRUNC).  There's no way to either receive the whole datagram, or
get an error and try again with a bigger read; by the time you see the
error, the kernel has already thrown away some of the data.
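The behavior is easy to demonstrate over loopback (a Linux-specific sketch; `recvmsg()` reports the truncation in its flags, but the excess bytes are still unrecoverable):

```python
import socket

# Send a 600-byte datagram, then read it with a 512-byte buffer.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"x" * 600, rx.getsockname())

# recvmsg() hands back only 512 bytes; MSG_TRUNC in the returned
# flags is the sole hint that 88 bytes were silently discarded --
# there is no way to retry the read and get the rest.
data, ancdata, msg_flags, addr = rx.recvmsg(512)
truncated = bool(msg_flags & socket.MSG_TRUNC)
rx.close()
tx.close()
```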

When I looked into this in December '96, the BIND code (the only major
implementation of a name server for the first 20 years) was doing
512-byte reads (which the kernel would truncate without error).  Ugh!
Sometimes the string and baling wire holding the Internet together
becomes a little too obvious.

 It is possible to have larger packets, but only if there is prior  
 negotiation via something called EDNS0.

There's no prior negotiation.  The very first packet sent to a root
name server -- a query, about either the root zone or about a TLD --
now indicates how large a packet can be usefully returned from the
query.  See RFC 2671.  (If there's no OPT field in the query, then
the reply packet size is 512.  If there is, then the reply size is
specified by a 16-bit field in the packet.)
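The OPT pseudo-record is tiny; here is a sketch of its wire format (field layout per RFC 2671; note the repurposed CLASS field carrying the payload size):

```python
import struct

def edns0_opt_rr(payload_size=4096):
    # OPT pseudo-RR, RFC 2671: an empty (root) NAME, TYPE 41, and the
    # requester's maximum UDP payload size carried in the 16-bit
    # CLASS field.  TTL is repurposed for extended RCODE, version,
    # and flags (the DNSSEC-OK bit lives there); RDLENGTH 0 means
    # no variable options follow.
    return (b"\x00"                            # NAME: root
            + struct.pack("!H", 41)            # TYPE: OPT
            + struct.pack("!H", payload_size)  # CLASS: max UDP payload
            + struct.pack("!I", 0)             # TTL: ext-RCODE/version/flags
            + struct.pack("!H", 0))            # RDLENGTH: no options

rr = edns0_opt_rr(4096)
advertised = struct.unpack("!H", rr[3:5])[0]  # read back the size field
```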

In 2007, about 45% of DNS clients (who sent a query on a given day to
some of the root servers) specified a reply size.  Almost half of
those specified 4096 bytes; more than 80% of those specified 2048 or
4096 bytes.  The other ~55% of DNS clients didn't specify, so are
limited to 512 bytes.

For a few years, there was a foolish notion from the above RFC that
clients should specify arbitrarily low numbers like 1280, even if they
could actually process much larger packets.  4096 (one page) is, for
example, the size Linux allows client programs to reassemble even in
the presence of significant memory pressure in the IP stack.  See:

 That in turn means that there can be at most 13 root  
 servers.  More precisely, there can be at most 13 root names and IP  

Any client who sets the bit for "send me the DNSSEC signatures along
with the records" is by definition using RFC 2671 to tell the server
that they can handle a larger packet size (because the DNSSEC bit is
in the OPT record, which was defined by that RFC).

"dig . ns" doesn't use an OPT record.  It returns
a 496-byte packet with 13 server names, 13 IPv4 glue addresses, and
2 IPv6 glue addresses.

"dig +nsid . ns" uses OPT to tell the name server
that you can handle up to 4096 bytes of reply.  The reply is 643 bytes
and also includes five more IPv6 glue addresses.

Older devices can bootstrap fine from a limited set of root servers;
almost half the net no longer has that restriction.

 The DNS is working today because of anycasting;  
 many -- most?  all? -- of the 13 IP addresses exist at many points in  
 the Internet, and depend on routing system magic to avoid problems.   

Anycast is a simple, beautiful idea, and I'm glad it can be made to
work in IPv4 (it's standard in IPv6).

 At that, you still *really* want to stay below 1500 bytes, the Ethernet MTU.

That's an interesting assumption, but is it true?  Most IP-based
devices with a processor greater than 8 bits wide are able to
reassemble two Ethernet-sized packets into a single UDP datagram,
giving them a limit of ~3000 bytes.  Yes, if either of those datagrams
is dropped en route, then the datagram won't reassemble, so you've
doubled the likely failure rate.  But that's still much lower overhead
than immediately falling back to an 8-to-10-packet TCP connection,
particularly in the presence of high packet drop rates that would
also cause TCP to use extra packets.

  As it is today, if NSA (or any major country, organized crime
  group, or civil rights nonprofit) built an RSA key cracker, more
  than 50% of the RSA keys in use would fall prey to a cracker that
  ONLY handled 1024-bit keys.  It's probably more like 80-90%,
  actually.  Failing to use 1056, 1120, 1168-bit, etc, keys is just
  plain stupid on our (the defenders') part; it's easy to automate
  the fix.

 That's an interesting assumption, but is it true?

I've seen papers on the prevalence of 1024-bit keys, but don't have a 
ready URL.  It's a theory.  Any comments, NSA?

 In particular, is it really that useful to tune a cracking engine

Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread John Gilmore
 It's a fun story, but... RFC 4034 says RSA/SHA1 is mandatory and DSA is

I was looking at RFC 2536 from March 1999, which says "Implementation
of DSA is mandatory for DNS security." (Page 2.)  I guess by March 2005
(RFC 4034), something closer to sanity had prevailed.



EFF Warns Texas Instruments to Stop Harassing Calculator Hobbyists (for cracking public keys)

2009-10-14 Thread John Gilmore
FYI.  As I understand it, TI calculator boot ROMs use a 512 bit RSA
public key to check the signature of the software they're loading.
When hobbyists who wanted to run their own alternative OS software on
their calculator calculated the corresponding private key and were
thus able to sign their own software, TI sent them DMCA takedowns
claiming they had cracked TI's DRM.  As with the CSS keys, a
publish/takedown chase ensued.  Wikileaks has had the censored keys up
since August.  EFF is now representing the hobbyists, and may stand to
collect legal fees from TI.  Here's Schneier's take:


Electronic Frontier Foundation Media Release

For Immediate Release: Tuesday, October 13, 2009


Jennifer Stisa Granick
   Civil Liberties Director
   Electronic Frontier Foundation
   +1 415 436-9333 x134

EFF Warns Texas Instruments to Stop Harassing Calculator Hobbyists

Baseless Legal Threats Squash Free Speech, Innovation

San Francisco - The Electronic Frontier Foundation (EFF)
warned Texas Instruments (TI) today not to pursue its
baseless legal threats against calculator hobbyists who
blogged about potential modifications to the company's
programmable graphing calculators.

TI's calculators perform a signature check that allows
only approved operating systems to be loaded onto the
hardware.  But researchers were able to reverse-engineer
signing keys, allowing tinkerers to install custom operating
systems and unlock new functionality in the calculators'
hardware.  In response to this discovery, TI unleashed a
torrent of demand letters claiming that the
anti-circumvention provisions of the Digital Millennium
Copyright Act (DMCA) required the hobbyists to take down
commentary about and links to the keys.  EFF represents
three men who received such letters.

"The DMCA should not be abused to censor online discussion
by people who are behaving perfectly legally," said Tom
Cross, who blogs at "It's legal to engage
in reverse engineering, and it's legal to talk about reverse
engineering."

In fact, the DMCA explicitly allows reverse
engineering to create interoperable custom software like
the programs the hobbyists are using.  Additionally, TI
makes its software freely available on its website, so
there is no connection between the use of the keys and
unauthorized distribution of the code.

"This is not about copyright infringement.  This is about
running your own software on your own device -- a
calculator you legally bought," said EFF Civil Liberties
Director Jennifer Granick.  "Yet TI still issued empty
legal threats in an attempt to shut down discussion of this
legitimate tinkering.  Hobbyists are taking their own tools
and making them better, in the best tradition of American
innovation."

For the full letters sent to Texas Instruments by EFF on
behalf of their clients:

For this release:

About EFF

The Electronic Frontier Foundation is the leading civil
liberties organization working to protect rights in the
digital world. Founded in 1990, EFF actively encourages and
challenges industry and government to support free
expression and privacy online. EFF is a member-supported
organization and maintains one of the most linked-to
websites in the world at



Re: Certainty

2009-08-21 Thread John Gilmore
 Getting back towards topic, the hash function employed by Git is showing 
 signs of bitrot, which, given people's desire to introduce malware 
 backdoors and legal backdoors into Linux, could well become a problem in 
 the very near future.

 James A. Donald

 I believe attacks on Git's use of SHA-1 would require second pre-image
 attacks, and I don't think anyone has demonstrated such a thing for
 SHA-1 at this point. None the less, I agree that it would be better if
 Git eventually used better hash functions. Attacks only get better with
 time, and SHA-1 is certainly creaking.
 Emphasis on "eventually," however. This is an "as soon as convenient," not
 "as soon as possible," sort of situation -- more like within a year than
 within a week.
 Yet another reason why you always should make the crypto algorithms you
 use pluggable in any system -- you *will* have to replace them some day.
 Perry E. Metzger

 Of course, I still believe in hash algorithm agility: regardless of how 
 preimage attacks will be found, we need to be able to deal with them 
 --Paul Hoffman, Director

I tried telling this to Linus within a few weeks of the design, while
he was still writing git.  He rejected the advice.  Perhaps a
delegation of cryptographers should approach him -- before it's too late.

His biggest argument was that the important git trees would be off-net
and would not depend on public trees.  I think git is getting enough
use (e.g. by thousands of development projects other than the Linux
kernel) that those assumptions are probably no longer valid.

His secondary argument was that git only uses the hash as a
collision-free oracle, not a cryptographic hash.  But that's exactly
the problem.  If malicious people can make his oracle produce
collisions, other parts of the git code will make false assumptions
that can be exploited.

His final argument is the same one I heard NSA make to Diffie and
Hellman about DES in 1976: the crypto will never be the weakest link
in the system, so it doesn't really have to be that strong.  That
argument was wrong then and it's wrong now.  The cost of using a
strong cryptosystem isn't significantly greater than the cost of using
a weak cryptosystem; and cracking the crypto HAS become the weakest
link in the overall security of many systems (CSS is an obvious one).


Subject: SHA1 is broken; be sure to parameterize your hash function
Date: Sat, 23 Apr 2005 15:21:07 -0700
From: John Gilmore

It's interesting watching git evolve.  I have one comment, which is
that the code and the contributors are throwing around the term SHA1
hash a lot.  They shouldn't.  SHA1 has been broken; it's possible to
generate two different blobs that hash to the same SHA1 hash.  (MD5
has totally failed; there's a one-machine one-day crack.  SHA1 is
still *hard* to crack.)  But as Jon Callas and Bruce Schneier said:
"Attacks always get better; they never get worse.  It's time to walk,
but not run, to the fire exits.  You don't see smoke, but the fire
alarms have gone off."  It's time for us all to migrate away from
SHA-1.  See the summary with bibliography at:

Since we don't have a reliable long-term hash function today, you'll
have to change hash functions a few years out.  Some foresight now
will save much later pain in keeping big trees like the kernel secure.
Either that, or you'll want to re-examine git's security assumptions
now: what are the implications if multiple different blobs can be
intentionally generated that have the same hash?  My initial guess is
that changing hash functions will be easier than making git work in
the presence of unreliable hashing.

In the git sources, you'll need to install a better hash function when
one is invented.  For now, just make sure the code and the
repositories are modular -- they don't care what hash function is in
use.  Whether that means making a single git repository able to use
several hash functions, or merely making it possible to have one
repository that uses SHA1 and another that uses some future
WonderHash, is a system design decision for you and the git
contributors to make.  The simplest case -- copying a repository with
one hash function into a new repository using a different hash
function -- will change not only all the hashes, but also the contents
of objects that use hash values to point to other objects.  If any of
those objects are signed (e.g. by PGP keys) then those signatures will
not be valid in the new copy.

Adding support now for SHA256 as well as SHA1 would make it likely
that at least git has no wired-in dependencies on the *names* or
*lengths* of hashes, and let you explore the system-level issues.  (I
wouldn't build in the assumption that each different hash function
produces a different length output.)
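A minimal sketch of that modularity (my own illustration, not git's actual object format): tag every object ID with the algorithm that produced it, so two hash functions can coexist in one store.

```python
import hashlib

def object_id(data: bytes, algo: str = "sha256") -> str:
    # Prefix the ID with the algorithm name, so IDs from different
    # hash functions can never collide by accident and a reader
    # always knows how to verify each object.
    return algo + ":" + hashlib.new(algo, data).hexdigest()

store = {}

def put(data: bytes, algo: str = "sha256") -> str:
    oid = object_id(data, algo)
    store[oid] = data
    return oid

old = put(b"blob contents", "sha1")    # legacy objects keep working
new = put(b"blob contents", "sha256")  # a future WonderHash slots in the same way
```

Same bytes, two coexisting names; migrating then means re-hashing objects, not redesigning the repository format.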

2 serving time in UK prisons for refusing to decrypt on demand

2009-08-18 Thread John Gilmore
[But we don't know who they are!   --gnu]

Two convicted for refusal to decrypt data
Up to five years in jail after landmark prosecutions

By Chris Williams

Posted in Policing, 11th August 2009 13:17 GMT

Two people have been successfully prosecuted for refusing to provide 
authorities with their encryption keys, resulting in landmark convictions that 
may have carried jail sentences of up to five years.

The government said today it does not know their fate.

The power to force people to unscramble their data was granted to authorities 
in October 2007. Between 1 April, 2008 and 31 March this year the first two 
convictions were obtained.

The disclosure was made by Sir Christopher Rose, the government's Chief 
Surveillance Commissioner, in his recent annual report.

The former High Court judge did not provide details of the crimes being 
investigated in the case of either individual - neither of whom were 
necessarily suspects - nor of the sentences they received.

The Crown Prosecution Service said it was unable to track down information on 
the legal milestones without the defendants' names.

Failure to comply with a section 49 notice carries a sentence of up to two 
years jail plus fines. Failure to comply during a national security 
investigation carries up to five years jail.

Sir Christopher reported that all of the 15 section 49 notices served over the 
year - including the two that resulted in convictions - were in counter 
terrorism, child indecency and domestic extremism cases.

The Register has established that the woman served with the first section 49 
notice, as part of an animal rights extremism investigation, was not one of 
those convicted for failing to comply. She was later convicted and jailed on 
blackmail charges.

Of the 15 individuals served, 11 did not comply with the notices. Of the 11, 
seven were charged and two convicted. Sir Christopher did not report whether 
prosecutions failed or are pending against the five charged but not convicted 
in the period covered by his report.

To obtain a section 49 notice, police forces must first apply to the National 
Technical Assistance Centre (NTAC). Although its web presence suggests NTAC is 
part of the Home Office's Office of Security and Counter Terrorism, it is in 
fact located at the government's secretive Cheltenham code breaking centre, GCHQ.

GCHQ didn't immediately respond to a request for further information on the 
convictions. The Home Office said NTAC does not know the outcomes of the 
notices it approves.

NTAC approved a total of 26 applications for a section 49 notice during the 
period covered by the Chief Surveillance Commissioner's report, which does not 
say if any applications were refused. The judicial permission necessary to 
serve the notices was then sought in 17 cases. Judges did not refuse permission 
in any case.

One police force obtained and served a section 49 notice without NTAC approval 
while acting on "incorrect information" from the Police National Legal 
Database, according to Sir Christopher. The action was dropped before it 
reached court.

Readers with further information about the convictions can contact the reporter 
in confidence here.


Re: The latest Flash vulnerability and monoculture

2009-07-27 Thread John Gilmore
  While I agree with the sentiment and the theory, I'm not sure that it
  really works that way.  How many actual implementations of typical
  protocols are there?

For Adobe Flash, there are three separate implementations -- Adobe's
proprietary one, GNU Gnash, and Swfdec.

Gnash is focused on long-term reliability and compatibility, like the
rest of the GNU programs.  Its browser plugin executes the flash
interpreter in a separate process (which draws in the browser's
subwindow).  It can use either gstreamer or ffmpeg to play video or
audio.  This summer, the development focus is on implementing Flash
10's new class library (they redid it all).

Swfdec is focused on playing popular video sites well (sooner than
gnash).  They share some common regression-testing infrastructure, but
the implementations came from different code bases.



Re: Fast MAC algorithms?

2009-07-24 Thread John Gilmore
 2) If you throw TCP processing in there, unless you are consistently going to
 have packets on the order of at least 1000 bytes, your crypto algorithm is
 almost _irrelevant_.

This is my experience, too.  And I would add "and lots of packets."
The only crypto overhead that really mattered in a real application
was the number of round-trip times it took to negotiate protocols and
keys.  Crypto's CPU time is very very seldom the limiting factor in
real end-user application performance.

 Could the lack of support for TCP offload in Linux have skewed these figures
 somewhat?  It could be that the caveat for the results isn't so much "this was
 done ten years ago" as "this was done with a TCP stack that ignores the
 hardware's advanced capabilities."

I have never seen a network card or chip whose "advanced capabilities"
included the ability to speed up TCP.  Most such advanced designs
actually ran slower than merely doing TCP in the Linux kernel using an
uncomplicated chip.  I saw a Patent Office procurement of Suns in the
'80s that demanded these slow TCP offload boards (I had to write the
bootstrap code for the project) even though the motherboard came with
an Ethernet chip and software stack that could run TCP *at wire speed*
all day and night -- for free.  The super whizzo board couldn't even
send back-to-back packets, as I recall.  Some government contractor
had added the TCP offload requirement, presumably to inflate the
price that they were adding a percentage markup to.

As a crypto-relevant aside, last year I looked at using the crypto
offload engine in the AMD Geode cpu chip to speed up Linux crypto
operations in the OLPC.  There was even a nice driver for it.
Summary: useless.  It had been designed by somebody who had no idea of
the architecture of modern software.  The crypto engine used DMA for
speed, used physical rather than virtual addresses, and stored the
keys internally in its registers -- so it couldn't work with virtual
memory, and couldn't conveniently be shared between two different
processes.  It was SO much faster to do your crypto by hand in a
shared library in a user process, than to cross into the kernel, copy
the data to be in contiguous memory locations (or manually translate
the addresses and lock down those pages into physical memory), copy
the keys and IVs into the accelerator, do the crypto, copy the results
back into virtual memory, and reschedule the user process.  In typical
applications (which don't always use the same key) you'd need to do
this dance once for every block encrypted, or perhaps if you were
lucky, for every packet.  Even kernel crypto wasn't worth doing
through the thing.  And the software libraries were not only faster,
they were also portable, running on anything, not just one obsolete chip.
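
The per-block "dance" described above is easy to make concrete with a toy
measurement.  This is a hedged, stdlib-only illustration (not the Geode
driver, whose interface isn't shown here): it uses HMAC-SHA256 as the
stand-in crypto operation and a write() to /dev/null as the stand-in
kernel crossing forced on every block.

```python
import hashlib
import hmac
import os
import time

KEY = b"k" * 16
BLOCKS = [bytes([i % 256]) * 16 for i in range(2000)]  # 16-byte blocks

def mac_in_process(blocks):
    # All work stays in the user process: one library call per block.
    return [hmac.new(KEY, b, hashlib.sha256).digest() for b in blocks]

def mac_with_kernel_crossing(blocks, fd):
    # Same MAC, plus one syscall per block to model the forced kernel
    # entry and copy-in/copy-out an offload engine would require.
    out = []
    for b in blocks:
        os.write(fd, b)  # stand-in for shipping data/keys to the engine
        out.append(hmac.new(KEY, b, hashlib.sha256).digest())
    return out

fd = os.open(os.devnull, os.O_WRONLY)
t0 = time.perf_counter()
r1 = mac_in_process(BLOCKS)
t1 = time.perf_counter()
r2 = mac_with_kernel_crossing(BLOCKS, fd)
t2 = time.perf_counter()
os.close(fd)

assert r1 == r2  # identical results; only the overhead differs
print(f"in-process: {t1 - t0:.4f}s   with per-block syscall: {t2 - t1:.4f}s")
```

The point is the shape, not the exact numbers: the crossing cost is paid
once per block, so it dominates exactly when blocks are small.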

Hardware guys are just jerking off unless they spend a lot of time
with software guys AT THE DESIGN STAGE before they lay out a single
gate.  One stupid design decision can take away all the potential gain.
Every TCP offloader I've seen has had at least one.



Re: consulting question.... (DRM)

2009-05-29 Thread John Gilmore
Their product inserts program code into 
 existing applications to make those applications monitor and report
 their own usage and enforce the terms of their own licenses, for 
 example disabling themselves if the central database indicates that 
 their licensee's subscription has expired or if they've been used 
 for more hours/keystrokes/clicks/users/machines/whatever in the 
 current month than licensed for.
 The idea is that software developers could use their product instead
 of spending time and programming effort developing their own license-
 enforcement mechanisms...

Many people have had the same idea before.  The software license
manager field is pretty full of little companies (and divisions of
big ones).  Your prospect might be able to find a niche in there
somewhere, if they study their competition to see what's missing and
how they can build up an edge.  But customers tend to hate software
that comes managed with license managers, so it takes an exceptional
company to fight the uphill sales battle to impose them.  (And having
a company switch from License Manager A to License Manager B requires
reissuing licenses to every customer, an extraordinary customer-
support hassle.)  Only in markets where the customer has no effective
choice (of a competing DRM-free product) does it tend to work.

My last startup, Cygnus, sold un-license-managed compilers,
competing with some entrenched companies that sold license-managed
compilers.  We kept seeing how our own automated overnight software
builds would fail using our competitors' compilers because the license
manager would screw up -- or merely because the local net or Internet
was down.  Or it would hang overnight awaiting an available license,
and doing no work in the meantime.  Our compiler always ran when you
asked it to.

We got tens of thousands of people to switch to our (free) GNU C and
C++ compilers, and enough of them paid us for support and development
that our company kept growing.  Our best selling point against Sun's
compilers, for example, was that ours didn't use any license manager.
Once you bought or downloaded it, it was yours.  It would run forever,
on as many machines as you liked, and you were encouraged to share it
with as many friends as you could.  It was simple for us to invade
their niche when they had deliberately forsworn a feature set like that.

John Gilmore

PS:  Our trade-show giveaway button one year was "License Managers Suck";
 it was very popular.

PPS: On a consulting job one time, I helped my customer patch out the
license check for some expensive Unix circuit simulation software they
were running.  They had bought a faster, newer machine and wanted to
run it there instead of on the machine they'd bought the node-locked
license for.  The faster their simulation ran, the easier my job was.
Actually, I think we patched the Unix kernel or C library that the
program depended upon, rather than patch the program; it was easier.


Re: consulting question.... (DRM)

2009-05-26 Thread John Gilmore
It's a little hard to help without knowing more about the situation.
I.e. is this a software company?  Hardware?  Music?  Movies?
Documents?  E-Books?  Is it trying to prevent access to something, or
the copying of something?  What's the something?  What's the threat
model?  Why is the company trying to do that?  Trying to restrain
customers?  Competitors?  Trying to build a cartel?  Being forced to
do it by a cartel?  Is their product embedded?  Online?  Hardware?
Software?  Battery powered?  Is it on a phone network?  On the
Internet?  On no network?  What country or countries does the company
operate in?  What jurisdictions hold its main customer bases?  How
much hassle will its customers take before they switch suppliers?
What kind of industry standards must the company adhere to?  What
other equipment or data formats do they have/want to interoperate with?

Most DRM is probably never cracked, because the product it's in
never gets popular enough that anyone talented wants to crack it.
If they only sell a thousand units, will they be happy?  Or do they
hope/plan/need to sell millions of units?

Most DRM exists to build a cartel -- to make an artificial monopoly --
not to prevent *customers* from copying things, but to prevent
*competitors* from being able to build compatible or interoperable
equipment.  This is largely because US reverse-engineering law makes
such a cartel unenforceable in court, unless you use DRM to make it.

 Can anyone point me at good information sources I can use to help prove 
 the case to a bunch of skeptics who are considering throwing away 
 their hard-earned money on a scheme that, in light of security
 experience, seems foolish?

Why should we bother?  Isn't it a great idea for DRM fanatics to
throw away their money?  More, more, please!  Bankrupt yourselves
and drive your customers away.  Please!

It's only the DRM fanatics whose installed bases of customers
are mentally locked-in despite the crappy user experience (like
the brainwashed hordes of Apple users, or the Microsoft victims)
who are troublesome.  In such cases, the community should
intervene on behalf of the users -- not to prevent the company
from wasting its time and money.



Chinese hackers break iTunes gift certificate algorithm

2009-04-30 Thread John Gilmore

Chinese hackers crack iTunes Store gift codes, sell certificates
By Charles Starrett
Senior Editor, iLounge
Published: Tuesday, March 10, 2009

A group of Chinese hackers has succeeded in cracking Apple’s algorithm  
for encoding iTunes Store Gift Certificates, and are creating  
discounted certificates using a key generator. Outdustry reports that  
a number of the codes are available on the site Taobao, with $200  
cards selling for as little as $2.60. The owner of the Taobao shop  
offering the cards admitted that the codes are created using key  
generators, and that he paid to use the hackers’ service. He also said  
that while the price of the codes has dropped steadily, store owners  
make more money as the number of customers grows.


Re: Activation protocol for car-stopping devices

2009-03-03 Thread John Gilmore
 * Is there any standard cryptographic hash function with an output  
 of about 64 bits? It's OK for our scenario if finding a preimage for  
 a particular signature takes 5 days. Not if it takes 5 minutes.

This is a protocol designed for nasty guys who want to steal your car,
which would forcibly stop the car regardless of the wishes of the
driver, remotely from anywhere on the Internet?  And it's mandated by
the government?

These are not "tracking devices," as your subject line said; they
actively intervene in driving -- much more dangerous.

As usual, it sounds like a great tool when used responsibly -- against
stolen cars, though it will probably cause collisions, which could
hardly be called accidents since they are easily foreseeable.  And
it's a terrible tool when used any other way (by criminals against cop
cars, for example; or by Bulgarian virus authors against random cars;
or by breaking into the DENATRAN and stealing and posting all the
secrets; or by an invading army).

It reminds me of the RFID passport design process: One entity figures
out what would make ITS life easier (reading your passport while
you're in line at the border), mandates a change, and ignores the
effects on the rest of society that result.

Why would you limit anything to 64 bits, or think it's OK that with 5
days of calculation *anyone* could do this to your mother's or
daughter's car?
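
On the quoted question itself: there is no widely standardized hash with
a 64-bit output; the usual practice is to truncate a full-width hash.
A hedged sketch (function names mine) of why 64 bits is thin protection:
the preimage search is a trivial loop whose expected cost is about 2^63
hashes, demonstrated here on a toy 16-bit truncation where the identical
loop finishes instantly.

```python
import hashlib

def h64(data: bytes) -> bytes:
    """First 8 bytes (64 bits) of SHA-256 -- a truncated hash, not a MAC."""
    return hashlib.sha256(data).digest()[:8]

def find_preimage(target: bytes, nbytes: int) -> bytes:
    # Brute force: the same loop works at any width; only the expected
    # iteration count changes (~2**(8*nbytes - 1) tries on average).
    i = 0
    while True:
        guess = i.to_bytes(8, "big")
        if hashlib.sha256(guess).digest()[:nbytes] == target:
            return guess
        i += 1

# Toy demo at 16 bits (2 bytes): ~65,000 tries, instant.  At 64 bits the
# very same loop needs ~2**63 tries -- days on serious hardware, not never.
target = hashlib.sha256(b"stop car ABC-1234").digest()[:2]
guess = find_preimage(target, 2)
assert hashlib.sha256(guess).digest()[:2] == target
```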

Shouldn't tracking or disabling the car require the active cooperation
of the car's owner, e.g. by the owner supplying a secret known only to
them, and not recorded in a database anywhere (in the government, at
the dealer, etc)?  That way, if the protocol is actually secure, most
of the evil ways to use it AGAINST the owner would be eliminated.
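
One way to sketch that owner-supplied-secret idea (entirely hypothetical;
all names mine, not any proposed DENATRAN protocol): the registry stores
only a salted hash of a secret the owner picks, so a breach of the
database -- at the government, the dealer, anywhere -- never yields the
value needed to trigger a stop.

```python
import hashlib
import hmac
import os

def enroll(owner_secret: bytes):
    # The registry keeps (salt, digest); the secret itself is never stored.
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + owner_secret).digest()

def authorize_stop(salt: bytes, stored: bytes, presented: bytes) -> bool:
    # A stop command is honored only if the presenter knows the secret.
    digest = hashlib.sha256(salt + presented).digest()
    return hmac.compare_digest(digest, stored)  # constant-time compare

salt, stored = enroll(b"owner-chosen passphrase")
assert authorize_stop(salt, stored, b"owner-chosen passphrase")
assert not authorize_stop(salt, stored, b"guess from a stolen database")
```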



Re: Judge orders defendant to decrypt PGP-protected laptop

2009-03-03 Thread John Gilmore
 I would not read too much into this ruling -- I think that this is a
 special situation, and does not address the more important general
 question.  In other cases, where alternative evidence is not available to the
 government, and where government agents have not already had a look at
 the contents, the facts (and hence perhaps the ruling) would be
 different.
Balls.  This is a straight end-run attempt around the Fifth Amendment.
The cops initially demanded a court order making him reveal his
password -- then modified their stance on appeal after they lost.  So
he can't be forced to reveal it, but on a technicality he can be
forced to produce the same effect as revealing it?  Just how broad is
this technicality, and how does it get to override a personal
constitutional right?

If the cops bust down your door and you foolishly left your computer
turned on, are they entitled to make you reveal your encryption
passwords anytime later, because your encrypted drive was accessible
when they ran in screaming at your family and shooting your dog?
Suppose they looked it over and typed a few things to the screen?
Suppose they didn't?  Suppose they used a fancy power-transfer plug to
keep it running as they walked it out the door, but they tripped and
dropped it and it powered off?  That's a technicality, isn't it?

Don't forget, this is a nuisance case.  It's about a harmless Canadian
citizen who's a permanent US resident, who crossed the Canadian border
with his laptop.  A guy smart enough to encrypt his drive.  On the
drive, among other things, were a few thousand porn images downloaded
from the net.  Legal porn.  The border guards, who had no business
even looking at his laptop's contents, trolled around in it until they
found some tiny fraction of the images that (they allege) contained
underage models.  (How would *he* know the ages of the models in
random online porn?  Guess he'd better just store no porn at all,
whether or not porn is legal.  That's the effect that the bluenoses
who passed the child porn laws want, after all.)  That's the crime
being prosecuted here.  This isn't the Four Horsemen's
torture-the-terrorist-for-the-password hostage situation where lives
are at stake and the seconds are ticking away.  This is a pointless
search containing the only evidence of a meaningless censorship
non-crime.  If the feds can force you to reveal your password in this
hick sideshow, they can force it anytime.

Suppose the guy had powered off his laptop rather than merely
foolishly suspending it.  If the border guards had DRAM key recovery
tools that could find a key in the powered-down RAM, but then lost
the key or it stopped working, would you think he should later be
forced to reveal his password?

Suppose they merely possessed DRAM key recovery software, but never
deployed it?  "Hey, we claim that you crossed the border with that key
in decaying RAM; fork over that password, buddy!"

Don't give them an inch, they'll take a mile.  Drug users can now not
safely own guns, despite the Second Amendment.  Not even guns locked
in safes in outbuildings, because the law passed against using a gun
in a drug crime has been expanded by cops and judges to penalize
having a gun anywhere on the property even though it was never
touched, and even when the only drug crime was simple possession.
Five year mandatory minimum sentence enhancement.  (Don't expect NRA
to help -- their motto is "screw the criminals, leave us honest people
alone."  That's no good when everybody's a criminal, especially the
honest people like this guy, who had nothing to hide from the border
guards and helped them search his laptop.)

   Sessions wrote: "Boucher's act of producing an unencrypted
   version of the Z drive..."

There is no such document as "an unencrypted version of the Z drive."
It does not exist.  It has never existed.  One could in theory be
created, but that would be the creation of a new document, not the
production of an existing one.  The existing one is encrypted, and
the feds already have it.

I'm still trying to figure out what the feds want in this case if the
guy complies.  They'll have a border guard testify that he saw a
picture with a young teen in it?  They'll show the jury a picture of a
young teen, but won't authenticate it as a picture that came off the
hard drive?  It can just be any random picture of a young teen, that
could've come from anywhere?  How will that contribute to prosecuting
this guy for child porn?

Maybe they're just bored from training themselves by viewing official
federal child porn images (that we're not allowed to see), or
endlessly searching gigabytes of useless stuff on laptops.  Instead
they want the thrill of setting a precedent that citizens have no
right to privacy in their encrypted hard drives.  Let's not help them
by declaring this guy's rights forfeit on a technicality.



Re: full-disk subversion standards released

2009-01-31 Thread John Gilmore
 Given such solutions, frameworks like what TCG is chartered to build are
 in fact good and useful.  I don't think it's right to blame the tool (or
 the implementation details of a particular instance of a particular kind
 of tool) for the idiot carpenter.

Given the charter of TCG, to produce DRM standards, it's pretty clear
what activity their tool is designed to be used for.

The theory that we should build good and useful tools capable of
monopoly and totalitarianism, but use social mechanisms to prevent
them from being used for that purpose, strikes me as naive.  Had you
not noticed obvious indications like the corruption of the Executive
Branch by NSA, RIAA and MPAA (including the shiny new president), the
concurrence of the Legislative Branch in that corruption, and the
toothlessness of the States and the Judicial Branch in failing to
actually rein in major federal constitutional violations?

Yes, I'm analogizing DRM to wiretaps and jiggered voting machines.
But isn't DRM like a wiretap deep inside your computer -- a foreign
agent that spies on you and reports back whatever it chooses, against
your will?  Worse, it's like a man-in-the-middle attack, buried inside
your computer.  If Hollywood succeeded in injecting DRM into all our
infrastructure, who among us would seriously believe the government
would not muscle its way in and start also using the DRM capabilities
against the citizens?  The Four Horsemen of the Infopocalypse are
alive and well.  Are you one of those guys in *favor* of sex offenders
being allowed free access to children on the Internet, buddy?  It's
so simple, everyone will just prove they aren't a sex offender before
being granted access.  It's just like getting on a plane.

(TCG has excised all mention of DRM from recent publications -- but I
have the original ones, which had DRM examples explaining the
motivation for why they were doing this work.  I'll append one such
example, for those who can't readily search the archives back to 2003.
Skip down to TCPA in the body below.)


To: Jerrold Leichter
cc:, gnu
Subject: Re: Difference between TCPA-Hardware and other forms of trust
In-reply-to: pine.gso.4.58.0312151831570.3...@frame 
Date: Tue, 16 Dec 2003 13:53:24 -0800
From: John Gilmore

 | means that some entity is supposed to trust the kernel (what else?). If
 | two entities, who do not completely trust each other, are supposed to both
 | trust such a kernel, something very very fishy is going on.

 Why?  If I'm going to use a time-shared machine, I have to trust that the
 OS will keep me protected from other users of the machine.  All the other
 users have the same demands.  The owner of the machine has similar demands.

I used to run a commercial time-sharing mainframe in the 1970's.
Jerrold's wrong.  The owner of the machine has desires (what he calls
"demands") different than those of the users.

The users, for example, want to be charged fairly; the owner may not.
We charged every user for their CPU time, but only for the fraction that
they actually used.  In a given second, we might charge eight users
for different parts of that fraction.

Suppose we charged those eight users amounts that added up to 1.3
seconds?  How would they know?  We'd increase our prices by 30%, in
effect, by charging for 1.3 seconds of CPU for every one second that
was really expended.  Each user would just assume that they'd gotten a
larger fraction of the CPU than they expected.  If we were tricky
enough, we'd do this in a way that never charged a single user for
more than one second per second.  Two users would then have to collude
to notice that they together had been charged for more than a second
per second.
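
The arithmetic of that trick is easy to check.  A toy model (the
fractions are invented for illustration): inflate every user's share of
a real second by 30%; no single bill ever exceeds one second per second,
so only users who compare bills can notice.

```python
# Eight users' true shares of one real CPU second.
fractions = [0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05]
assert abs(sum(fractions) - 1.0) < 1e-9

# The operator bills each share with a hidden 30% markup.
charged = [f * 1.3 for f in fractions]

# No individual user is ever charged more than 1 second per second...
assert all(c <= 1.0 for c in charged)

# ...but the charges for the one real second add up to 1.3 seconds --
# detectable only by two or more users colluding to sum their bills.
assert abs(sum(charged) - 1.3) < 1e-9
```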

(Our CPU pricing was actually hard to manage as we shifted the load
among different mainframes that ran different applications at
different multiples of the speed of the previous mainframe.  E.g. our
Amdahl 470/V6 price for a CPU second might be 1.78x the price on an
IBM 370/158.  A user's bill might go up or down from running the same
calculation on the same data, based on whether their instruction
sequences ran more efficiently or less efficiently than average on the
new CPU.  And of course if our changed average price was slightly
different than the actual CPU performance, this provided a way to
cheat on our prices.

Our CPU accounting also changed when we improved the OS's timer
management, so it could record finer fractions of seconds.  On average,
this made the system fairer.  But your application might suffer, if its
pattern of context switches had been undercharged by the old algorithm.)

The users had to trust us to keep our accounting and pricing fair.
System security mechanisms that kept one user's files from access by
another could not do this.  It required actual trust, since the users
didn't have access to the data required to check up on us.

Re: full-disk subversion standards released

2009-01-30 Thread John Gilmore
If it comes from the Trusted Computing Group, you can pretty much
assume that it will make your computer *less* trustworthy.  Their idea
of a trusted computer is one that random unrelated third parties can
trust to subvert the will of the computer's owner.



Re: Proof of Work - atmospheric carbon

2009-01-26 Thread John Gilmore
  If POW tokens do become useful, and especially if they become money,
  machines will no longer sit idle. Users will expect their computers to
  be earning them money (assuming the reward is greater than the cost to

Computers are already designed to consume much less electricity when
idle than when running full tilt.  This trend will continue and
extend; some modern chips throttle down to zero MHz and virtually zero
watts at idle, waking automatically at the next interrupt.

The last thing we need is to deploy a system designed to burn all
available cycles, consuming electricity and generating carbon dioxide,
all over the Internet, in order to produce small amounts of bitbux to
get emails or spams through.
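
The objection becomes concrete with a hashcash-style token (a sketch of
the general scheme, not of any particular proposal): minting burns an
expected 2^k hashes of pure electricity, and all of that work is then
discarded -- only the one-hash verification is ever useful.

```python
import hashlib

def mint(resource: bytes, bits: int) -> int:
    # Burn cycles until the hash has `bits` leading zero bits.
    nonce = 0
    while True:
        h = hashlib.sha256(resource + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >> (256 - bits) == 0:
            return nonce
        nonce += 1

def verify(resource: bytes, nonce: int, bits: int) -> bool:
    # One hash: the mint/verify asymmetry is the whole design.
    h = hashlib.sha256(resource + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - bits) == 0

# ~2**12 hashes to mint this token; every one of them is waste heat.
n = mint(b"mail-to:alice@example.com", 12)
assert verify(b"mail-to:alice@example.com", n, 12)
```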

Can't we just convert actual money in a bank account into bitbux --
cheaply and without a carbon tax?  Please?



Re: data rape once more, with feeling.

2008-10-27 Thread John Gilmore
Usability research about how to track web users?  How Google-like.
Can't you just dump a 25-year cookie on them from twelve different 
directions, and be done with it?

 Federated Login has been a holy grail in the identity community
 for a long time.  We have known how to do the technical part for a
 long time.  However the industry has constantly tried, and failed,
 to find a model that was (1) simple for end users, and (2) had a
 reasonable trust model between the RP (the relying party, which is
 the site you want to log into) and the IDP (the identity provider,
 who will identify you to the RP).

Explicitly ignoring the trust model between the end users and the RP,
and the trust between the end users and the IDP.  Why should end users
trust your web site?  Why should they trust an IDP like Google?

It's not that every website that requires a login is a privacy swamp.
But the big ones pretty much all are, and those are the ones who want
to impose this new model without bothering the end user's little head
about whether he should trust them.

And if every little wiki that just uses logins to slightly limit spam
today, began using federated identity, then ALL of them would become
privacy swamps.

 For example, the site might require users to agree to a Terms of Service.

Let's see an example of how you're automating how the USER might
require the SITE to agree to a Terms of Service.  Doesn't seem to be
part of the model, which is that the SITE has something valuable it
needs lawyers to protect, while the USER is just an
eyeball-with-attached-wallet to be sold to the highest bidder.

 When users are presented with a traditional signup page that asks
 for E-mail, password,  password confirmation, it is quite common
 for 30-50% of users to not finish the process.

I wonder why not!  Perhaps they do not want to be tracked, numbered,
wiretapped, monitored, herded, logged, datamined, folded, spindled,
and mutilated.  Perhaps they just want to look at a web site without
tying their reading habits to their social security number and
their medical records.

Similar percentages describe how many people lie through their teeth
to get into random websites.  So half won't even login, half of those
will lie like hell; a quarter of the people either think you're
trustworthy, or are too stupid to care.  Which fraction is federated
identity aimed at?  Catching the liars, i.e. fencing in the people
who actually take care to protect their privacy?  Yeah, those are
the guys this community wants you to screw as hard as you can. :-(

 In this scenario, the RP could detect that the domain name is for
 an IDP that it trusts.  It could then redirect the user to AOL to
 verify their identity.  Assuming the user approves sharing their
 identity, then the user will be redirected back to the RP, which can
 automatically create an account for them, and log them in.

That's an interesting assumption.  Why would you assume that AOL would
give users the choice?  AOL is not famous for choice.  Wouldn't AOL
just read the user's 25-year AOL cookie, and redirect the browser back
to the RP with full account information supplied, without any
interaction with the user at all?  AOL could probably even charge the
RP a few bucks for doing so.  How simple.  How evil.  Franchising your
privacy violations.
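
A toy simulation of the flow being criticized (all names and functions
hypothetical): the protocol leaves it entirely to the IDP whether the
"verify their identity" step involves the user at all, which is the
hole the cookie-franchising scenario walks through.

```python
def idp_assert_identity(rp: str, cookie: dict, ask_user=None):
    # A privacy-respecting IDP asks the user first; nothing forces it to.
    if ask_user is not None and not ask_user(rp):
        return None
    return {"rp": rp, "identity": cookie["account"]}

def rp_login(assertion, accounts: dict):
    # The RP auto-creates an account from whatever the IDP asserts.
    if assertion is None:
        return None
    user = assertion["identity"]
    accounts.setdefault(user, {"auto_created": True})
    return user

cookie = {"account": "user@aol.example"}  # a 25-year login cookie
accounts = {}

# "Evil" IDP: no consent hook, so the identity is handed over with zero
# user interaction -- the browser just bounces IDP -> RP.
who = rp_login(idp_assert_identity("shop.example", cookie), accounts)
assert who == "user@aol.example"
assert accounts[who]["auto_created"]

# The same code with a consent hook that declines: no login happens.
assert rp_login(idp_assert_identity("shop.example", cookie,
                                    ask_user=lambda rp: False),
                accounts) is None
```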

 The RP can control what IDPs it trusts, and even switch their users
 back to legacy logins if the IDP becomes untrustworthy

You can pretty well guarantee that RP websites will somehow decline to
trust any IDP that provides privacy to the end user -- like
mailinator.com, for example.  A few web sites that send you an email
to verify you are who you say you are already blacklist mailinator,
though it's usually easy to bypass the blacklist by using one of its
alternative domain names.



Chip-and-pin card reader supply-chain subversion 'has netted millions from British shoppers'

2008-10-24 Thread John Gilmore
[British shoppers were promised high security by switching from credit
cards to cards that have a chip in them and require that a PIN be entered
for each transaction.  That was the reason for changing everything over,
at high cost in both money and inconvenience to shops and shoppers.  Perhaps
chip-and-pin HAS reduced overall fraud -- but check out this elaborate 
scheme that beat their security for tens of millions of UK pounds.  

Now, why is this being announced by the US National
Counterintelligence Executive, Joel Brenner?  Because none of the banks or
stores is willing to admit it?  Still, why publicize it at all?  I
find his quote very telling: "Previously only a nation state's
intelligence service would have been capable of pulling off this type
of operation."  How would he know this?  Which nation-states have done
similar types of operation, and why isn't he telling the public about
THEM instead of about these other criminals?

I've long suspected that NSA's (still secret) budget (approved by a
tiny number of manipulated Congressmen) has been, uh, augmented, by
its ability to manipulate financial markets using inside information
obtained from domestic and global mass wiretaps.  You don't suppose
NSA is behind the recent market volatility, do you?  It's easiest to
skim off billions when trillions are hurriedly sloshing around in a
panic.  --gnu]

Forwarded-By: Kurt Albershardt [EMAIL PROTECTED]

Clever (and a tad frightening)


WPost: Cybersecurity Will Take A Big Bite of the Budget

2008-07-21 Thread John Gilmore
[News report below.]

This highly classified, little-publicized, multi-billion-dollar, vague
program to secure Federal computers seems doomed to failure.  People
like you and me, in the unclassified private sector, design and build
and program all those computers and networks.

But of course we've never heard of this initiative.  And we probably
don't share its goals.

NSA's occasional public efforts to secure the civilian infrastructure
have been somewhat interesting.  Not that they've succeeded: they
crippled DES, wouldn't admit it was broken, and tried to force us all
to use it; the IPSEC they designed was painfully complex, impossible
to administer, easy to penetrate, and wouldn't scale; the export
controls they championed torpedoed civilian efforts to secure
ANYTHING; and Secure Linux seems to be no more secure than any other
Linux.  Do we know of *any* honest and successful NSA effort to raise
the integrity and security of the public infrastructure (even at the
expense of their ability to illegally tap it)?

Now that NSA, the President, and Congress have gone totally to the
Dark Side, we'd better assume that any such initiative does not have
the public's best interests at heart.  The theory is that the public's
computers will be easy for the government to break into, while
Wiretapper-General McConnell can shield every unconstitutional thing
he does from the prying eyes of the public and the courts?  It'd be
better for private-sector engineers to follow our own muses, rather
than become the rats following government-contractor Pied Pipers into
a totalitarian sewer.

Let's guess why they would classify this effort at all.  For security
through obscurity?  So that foreigners won't find out how to secure
their own computers against NSA intrusions (ahem, foreigners build ALL
our computers)?  Merely to hide their own incompetence?  Or because
the effort would be quickly identified as malfeasance, like trying to
impose a national ID system and routine suspicionless checkpoint
searches on a free people?


Forwarded-By: Melissa Ngo [EMAIL PROTECTED]

Cybersecurity Will Take A Big Bite of the Budget
By Walter Pincus
Monday, July 21, 2008; A13

President Bush's single largest request for funds and most important
initiative in the fiscal 2009 intelligence budget is for the
Comprehensive National Cybersecurity Initiative, a little-publicized
but massive program whose details remain vague and thus open to
question, according to the House Permanent Select Committee on
Intelligence.

A highly classified, multiyear, multibillion-dollar project, CNCI --  
or Cyber Initiative -- is designed to develop a plan to secure  
government computer systems against foreign and domestic intruders and  
prepare for future threats. Any initial plan can later be expanded to  
cover sensitive civilian systems to protect financial, commercial and  
other vital infrastructure data.

"It is no longer sufficient for the U.S. Government to discover cyber
intrusions in its networks, clean up the damage, and take legal or
political steps to deter further intrusions," Director of National
Intelligence Mike McConnell noted in a February 2008 threat
assessment. "We must take proactive measures to detect and prevent
intrusions from whatever source, as they happen, and before they can
do significant damage." His conclusions echoed those of a 2007
interagency review that led to CNCI's creation.

During debate on the intelligence authorization bill last week, Rep.  
Jim Langevin (D-R.I.), a member of the House intelligence committee  
and chairman of the Homeland Security subcommittee on emerging  
threats, described cybersecurity as "a real and growing threat" that
the federal government has been slow in addressing.

Without specifying funding figures, which are classified, Langevin
said the panel approved 90 percent of the funds requested for CNCI but
warned that the committee "does not intend to write the administration
a blank check."

The committee's report recognized that as the initiative develops, "it
will be imperative that the government also take into account the
interests and concerns of private citizens, the U.S. information
technology industry, and other elements of the private sector."

"Such a public-private partnership will be unlike any model that
currently exists," said the committee, which recommended a White House
study leading toward establishment of an oversight panel of lawmakers,  
executive branch officials and private-sector representatives. The  
panel would review the intelligence community's development of the  

The committee said it expects the policy debates over the initiative  
to extend into the next administration, and major presidential  
candidates have addressed the issue.

On the same day the intelligence bill passed the House, Sen. Barack  
Obama (D-Ill.) told an 

Re: Why doesn't Sun release the crypto module of the OpenSPARC? Crypto export restrictions

2008-06-12 Thread John Gilmore
 I would expect hardware designs to be treated more like hardware than 

A hardware design is not hardware.  Only a naive parsing of the
words would treat it so.  A software design is not treated like
software; you are free to write about how ATM machine crypto is
designed, even if you can't export ATM machine crypto software without
a license (because it's proprietary and not mass-market).

A hardware design is a lot like software.  It's human written and
human readable, it's trivial to reproduce, it's compiled automatically
into something that can execute, and if you write it into hardware,
then it does something.

The court case that EFF won against the export controls was won on
those grounds: the government can't suppress the publication of
human-written and human-readable text, on the grounds that somebody
somewhere might put it into a machine that does things the government
doesn't like.

Sun may be chicken on the point, and the government did a sneaky trick
to technically avoid having a Ninth Circuit precedent set on the
topic, but a similar precedent was set by Peter Junger's case in
another circuit.  I think Sun would be well within its rights to ship
VHDL or Verilog source code that implements crypto under an open
source license.  And I'd be happy to point them at good lawyers who'd
be happy to be paid to render a more definitive opinion.

John Gilmore

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Wikileaks: NSA funding of academics

2007-11-21 Thread John Gilmore

Grant code 'MDA904' - National Security Agency

The NSA has pushed tens or hundreds of millions into the academy
through research grants using one particular grant code.  ...



NSA solicited illegal Qwest mass wiretaps right after Bush inauguration

2007-10-23 Thread John Gilmore

Nacchio affects spy probe
His court filings point to government surveillance months before 9/11
By Andy Vuong
The Denver Post
Article Last Updated: 10/20/2007 11:38:08 PM MDT


  Previously sealed documents filed by former Qwest chief executive Joe Nacchio 
in connection with his top-secret defense strategy, in which he argued that he 
was privy to classified information that led him to believe the company was in 
line to receive lucrative government contracts, were released 
Wednesday at the request of The Denver Post.

  [Follow the link above to get the links to these court documents.  --gnu]
  * Read file 1, Sept. 29, 2006 (PDF, 46 pages).
  * Read file 2, Oct. 31, 2005 (PDF, 84 pages).
  * Read file 3, Jan. 4, 2007 (PDF, 10 pages).
  * Read file 4, Oct. 31, 2006 (PDF, 25 pages).
  * Read file 5, Jan. 22, 2007 (PDF, 3 pages).
  * Read file 6, Feb. 20, 2007 (PDF, 25 pages).
  * Read file 7, May 25, 2007 (PDF, 12 pages).

Recent revelations about former Qwest chief executive Joe Nacchio's 
classified-information defense, which went unheard during his insider-trading 
trial, are feeding the furor over the government's warrantless-wiretapping 
program.

Nacchio alleges the National Security Agency asked Qwest to participate in a 
program the phone company thought was illegal more than six months before the 
Sept. 11, 2001, terrorist attacks, according to court documents unsealed at the 
request of The Denver Post.

Nacchio also maintains that when he refused to participate, the government 
retaliated by not awarding lucrative contracts to Qwest.

Previously sealed transcripts released at the same time as the court documents 
indicate the government was prepared to counter Nacchio's claims.

Though specifics about the wiretapping program were redacted from the court 
documents, Nacchio's attorney Herbert Stern said in May 2006 that Nacchio 
rejected requests from the government for customers' phone records in fall 2001.

The recently unsealed documents push that time frame back to February 2001 and 
indicate the NSA may have also sought to monitor customers' Internet traffic 
and fax transmissions.

Nacchio's claims could affect President Bush's controversial efforts to grant 
legal immunity to large telecommunications companies such as ATT, which has 
been sued in connection with the surveillance program.

"The Nacchio materials suggesting that the NSA had sought telco cooperation 
even before 9/11 undermines the primary argument for letting the phone 
companies off the hook, which is the claim that they were simply acting in good 
faith to help the president fight the terrorists after 9/11," said Kevin 
Bankston, a staff attorney for the Electronic Frontier Foundation, a 
civil-liberties group.

"The fact that these materials suggest that cooperation with the program was 
tied to the award of certain government contracts also contradicts their (phone 
companies') claims that they were simply acting in good faith to help fight the 
terrorists when it appears that they may have been motivated by financial 
concerns instead," Bankston said.

Up to this point, discussions on Capitol Hill over telecom immunity have 
focused on government surveillance efforts spurred by the Sept. 11 terrorist 
attacks.

"This is, sooner or later, going to be the stuff of congressional hearings 
because a new starting point has been established for this controversy. A new 
starting point seven months before 9/11," said Ron Suskind, author of "The One 
Percent Doctrine," which reported examples of how companies worked with the 
government in its fight against terrorism after Sept. 11.

"The idea that deals were getting cut between the government and telecom 
companies in secret in the early part of 2001 creates a whole new discussion as 
to intent, motivation and goals of the government," Suskind said.

Last week, Rep. John Conyers Jr., D-Mich., chairman of the House Judiciary 
Committee, asked federal intelligence officials for more information about 
Nacchio's allegations.

"The extent to which this is true could shed light on the efficacy of this 
program and raise questions about the reasons behind its implementation," 
Conyers wrote on his blog.

For his part, Nacchio wanted to introduce the claims to show he didn't sell 
Qwest stock illegally in early 2001. The government alleged Nacchio dumped 
Qwest stock because he had inside information that the Denver company's 
financial health was deteriorating. He was convicted on 19 counts of insider 
trading in April after a month-long trial and sentenced to six years in prison.

He remains free on $2 million bond pending his appeal, which, among other 
charges, is challenging rulings U.S. District Judge Edward Nottingham made 
related to the classified-information defense.

Nacchio has maintained he was upbeat about Qwest because he had top-secret 
information that the company 

LA Times: US funds super wiretap system for Mexico

2007-06-09 Thread John Gilmore

Mexico to boost tapping of phones and e-mail with U.S. aid
Calderon is seeking to expand monitoring of drug gangs; Washington also may 
have access to the data.
By Sam Enriquez, Times Staff Writer
May 25, 2007

MEXICO CITY - Mexico is expanding its ability to tap telephone calls and e-mail 
using money from the U.S. government, a move that underlines how the country's 
conservative government is increasingly willing to cooperate with the United 
States on law enforcement.

The expansion comes as President Felipe Calderon is pushing to amend the 
Mexican Constitution to allow officials to tap phones without a judge's 
approval in some cases. Calderon argues that the government needs the authority 
to combat drug gangs, which have killed hundreds of people this year.

Mexican authorities for years have been able to wiretap most telephone 
conversations and tap into e-mail, but the new $3-million Communications 
Intercept System being installed by Mexico's Federal Investigative Agency will 
expand their reach.

The system will allow authorities to track cellphone users as they travel, 
according to contract specifications. It includes extensive storage capacity 
and will allow authorities to identify callers by voice. The system, scheduled 
to begin operation this month, was paid for by the U.S. State Department and 
sold by Verint Systems Inc., a politically well-connected firm based in 
Melville, N.Y., that specializes in electronic surveillance.

Although information about the system is publicly available, the matter has 
drawn little attention so far in the United States or Mexico. The modernization 
program is described in U.S. government documents, including the contract 
specifications, reviewed by The Times.

They suggest that Washington could have access to information derived from the 
surveillance. Officials of both governments declined to comment on that 

"It is a government of Mexico operation funded by the U.S.," said Susan 
Pittman, of the State Department's Bureau of International Narcotics and Law 
Enforcement Affairs. Queries should be directed to the Mexican government, she 
said.
Calderon's office declined to comment.

But the contract specifications say the system is designed to allow both 
governments to "disseminate timely and accurate, actionable information" to each 
country's respective "federal, state, local, private and international partners."

Calderon has been lobbying for more authority to use electronic surveillance 
against drug violence, which has threatened his ability to govern. Despite 
federal troops posted in nine Mexican states, the violence continues as rival 
smugglers fight over shipping routes to the U.S.-Mexico border, as well as for 
control of Mexican port cities and inland marijuana and poppy growing regions.

Nonetheless, the prospect of U.S. involvement in surveillance could be 
extremely sensitive in Mexico, where the United States historically has been 
viewed by many as a bullying and intrusive neighbor. U.S. government agents 
working in Mexico maintain a low profile to spare their government hosts any 
political fallout.

It's unclear how broad a net the new surveillance system will cast: Mexicans 
speak regularly by phone, for example, with millions of relatives living in the 
U.S. Those conversations appear to be fair game for both governments.

Legal experts say that prosecutors with access to Mexican wiretaps could use 
the information in U.S. courts. U.S. Supreme Court decisions have held that 4th 
Amendment protections against illegal wiretaps do not apply outside the United 
States, particularly if the surveillance is conducted by another country, 
Georgetown University law professor David Cole said.

Mexico's telecommunications monopoly, Telmex, controlled by Carlos Slim Helu, 
the world's second-wealthiest individual, has not received official notice of 
the new system, which will intercept its electronic signals, a spokeswoman said 
this week.

"Telmex is a firm that always complies with laws and rules set by the Mexican 
government," she said.

Calderon recently asked Mexico's Congress to amend the country's constitution 
and allow federal prosecutors free rein to conduct searches and secretly record 
conversations among people suspected of what the government defines as serious 
crimes.

His proposal would eliminate the current legal requirement that prosecutors 
gain approval from a judge before installing any wiretap, the vetting process 
that will for now govern use of the new system's intercepts. Calderon says the 
legal changes are needed to turn the tide in the battle against the drug gangs.

"The purpose is to create swift investigative measures against organized 
crime," Calderon wrote senators when introducing his proposed constitutional 
amendments in March. At times, turning to judicial authorities hinders or 

Re: Was a mistake made in the design of AACS?

2007-05-09 Thread John Gilmore
 Well, there's an idea: use different physical media formats for
 entertainment and non-entertainment content (meaning, content created by
 MPAA members vs. not) and don't sell writable media nor devices capable
 of writing it for the former, not to the public, keeping very tight
 controls on the specs and supplies.  

This approach was rejected by the computer industry, in particular
with respect to DVDs.  Computer companies like Intel, HP, Dell, and
Sony wanted to be able to compete to be a consumer electronics
platform, playing music, video, photos, etc.  Indeed, many of the
advances in consumer electronics have come from computerization, such
as digital music (DATs and CDs), MP3 players, digital video, fax
machines, digital cameras and digital photo storage, color photo
printers, ...

I do recall that it took most of a decade for computer CD-ROM drives
to be able to digitally read audio CDs, and then later to record them.
Silicon Graphics gets major kudos for breaking that artificial barrier.

 Then finding, say, a Disney movie
 on an HD-DVD of the data format would instantly imply that it's pirated.

False.  It's like saying "Then finding a record album on a cassette tape
would instantly imply that it's pirated."  No, it would instantly imply
that it's been copied onto a medium of the consumer's choice.  Consumers
are (and should be) free to record copyrighted works onto media of their
own choice, for their own convenience, without needing the permission or
concurrence of the copyright owner.

Congratulations, Nico, you fell into Hollywood's favorite word:
"pirated."  It takes discipline to stop thinking in the grooves that
they have worn in your brain.



Man sues Microsoft for snake oil security that lets the FBI in

2007-03-07 Thread John Gilmore
Forwarded-By: Brad Templeton [EMAIL PROTECTED]

The plaintiff is suing Microsoft (and already got a settlement from
Compaq and Circuit City) because in spite of the security tools they
sold him, the FBI forensic lab was able to get at his data in a criminal


Intel finally plans to add the NSA instruction

2007-02-15 Thread John Gilmore

Page 7 of the PDF describes the POPCNT application-targeted accelerator.


PS:  They don't give much detail, but they seem to be adding a grep
instruction too (at least fgrep), and a zlib accelerator.  Anybody know
more, while it's still early enough to get them to change the most bogus

The Cryptography Mailing List
IBM donates new privacy tool to open-source Higgins

2007-01-30 Thread John Gilmore

IBM donates new privacy tool to open-source
  By  Joris Evers
  Staff Writer, CNET
  Published: January 25, 2007, 9:00 PM PST

IBM has developed software designed to let people keep personal  
information secret when doing business online and donated it to the  
Higgins open-source project.

  The software, called Identity Mixer, was developed by IBM  
researchers. The idea is that people provide encrypted digital  
credentials issued by trusted parties like a bank or government agency  
when transacting online, instead of sharing credit card or other  
details in plain text, Anthony Nadalin, IBM's chief security architect,  
said in an interview.

  "Today you traditionally give away all of your information to the man  
in the middle and you don't know what they do with it," Nadalin said.  
"With Identity Mixer you create a pseudonym that you hand over."

  For example, when making a purchase online, buyers would provide an  
encrypted credential issued by their credit card company instead of  
actual credit card details. The online store can't access the  
credential, but passes it on to the credit card issuer, which can  
verify it and make sure the retailer gets paid.

  "This limits the liability that the storefront has, because they don't  
have that credit card information anymore," Nadalin said. "All you hear  
about is stores getting hacked."

  Similarly, an agency such as the Department of Motor Vehicles could  
issue an encrypted credential that could be used for age checks, for  
example. A company looking for such a check won't have to know an  
individual's date of birth or other driver's license details; the DMV  
can simply electronically confirm that a person is of age, according to  
Nadalin.

  The encrypted credentials would be for one-time use only. The next  
purchase or other transaction will require a new credential. The  
process is similar to the one-time-use credit card numbers that  
Citigroup card holders can already generate on the bank's Web site.
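The flow the article describes -- an opaque, one-time credential the store cannot read but the issuer can verify -- can be sketched in a few lines. To be clear, this is not Identity Mixer's actual zero-knowledge cryptography; the HMAC construction, key, and names below are illustrative stand-ins only:

```python
import hmac
import hashlib
import os

ISSUER_KEY = b"issuer-only-secret"   # hypothetical key held only by the card issuer
redeemed = set()                     # issuer's cache of already-spent credentials

def issue_credential(account: str) -> bytes:
    """Issuer gives the customer a one-time token; to the store it is
    just an opaque blob that reveals no card or account details."""
    nonce = os.urandom(16)
    tag = hmac.new(ISSUER_KEY, nonce + account.encode(), hashlib.sha256).digest()
    return nonce + tag

def verify_credential(account: str, token: bytes) -> bool:
    """Store forwards the blob to the issuer, which checks it so the
    retailer still gets paid.  Each token verifies exactly once."""
    nonce, tag = token[:16], token[16:]
    if nonce in redeemed:
        return False
    expected = hmac.new(ISSUER_KEY, nonce + account.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    redeemed.add(nonce)
    return True

token = issue_credential("alice")
assert verify_credential("alice", token)        # first use succeeds
assert not verify_credential("alice", token)    # replay is rejected
```

The real system replaces the shared-secret HMAC with credentials the issuer can verify without even learning which transaction they came from; the sketch only shows the opaque, single-use shape of the exchange.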

  IBM hopes technology such as its Identity Mixer helps restore trust in  
the Web. Several surveys in past years have shown that the seemingly  
incessant stream of data breaches and threats such as phishing scams  
are eroding consumer confidence in online shopping and activities such  
as banking on the Web.

  To get Identity Mixer out of the lab and into the real world, IBM is  
donating its work to Higgins project, a broad, open-source effort  
backed by IBM and Novell that promises to give people more control of  
their personal data when doing business online. Higgins also aims to  
make the multiple authentication systems on the Net work together,  
making it easier for people to manage Internet logins and passwords.

  "We expect Higgins to get wide deployment and usage. You'll get the  
ability by using Higgins to anonymize data," Nadalin said.

  Higgins is still under development. A first version of the project's  
work is slated to be done sometime midyear, said Mary Ruddy, a Higgins  
project leader. "We were thrilled to get this donation to Higgins; IBM  
has done a lot of good work."


Big NSA expansion in Augusta, GA

2006-12-24 Thread John Gilmore

This comes from an interesting SIGINT and more blog from
the Augusta Metro Spirit, a local weekly newspaper.  Excerpts:

... Augusta is about to get a $340-million taste of "Sweet Tea."

The National Security Agency is building a massive new operations
facility, dubbed project Sweet Tea. It will come complete with all the
amenities: a workout room, nursing areas, a mini-shopping center, a
credit union, an 800-seat cafeteria and thousands of exclusive parking
spaces. Secret parking spaces.

There are, of course, actual operational national security-type
elements to the project. For example, it will include a new shredder
facility (for all those classified documents) and an antenna farm (to
help listen in on enemy combatants like Osama bin Laden and Princess
Di).  ...

The document says the main new structure, a 525,000- square-foot
Regional Security Operations Center, should be complete by May 2010.

The NSA and its allies in the U.S. Congress have been pushing this
project for years. The Defense Department requested a $340.8 million
appropriation for the Georgia Regional Security Operations Center back
in February. And a construction award was scheduled for Sept. 25, NSA
documents show.

Maybe the deal was awarded on schedule. Maybe there was a
delay. Either way, it wasn't announced until Dec. 8, one day after the
Metro Spirit started calling around with questions. The announcement
was one of only eight press releases that the usually silent spy
agency had issued all year.  ...

Indeed, there is reason to believe that the NSA-Georgia project's
actual cost will be even higher than the $340 million that's
known to have been appropriated.

A military source familiar with cost analysis told the Metro Spirit
that the facilities may wind up costing more than $1 billion.  ...

Clyde Taylor, military legislative assistant to Georgia Sen. Saxby
Chambliss, said his office spent a couple of years obtaining the
appropriation. Taylor also gave credit to Georgia Rep. Charlie
Norwood, whose office issued its own press release last Friday.

The need for the new NSA facility is driven by the growth in overseas
surveillance activities, Taylor said. He said that the agency plans to
move linguists and analysts down from its Fort Meade, Md.,
headquarters to the Augusta listening station, which targets the
Middle East.


Re: TPM disk crypto

2006-10-12 Thread John Gilmore
 What we want is that a bank client can prove to the bank
 it is the real client, and not trojaned.  What the evil
 guys at RIAA want is that their music player can prove
 it is their real music player, and not hacked by the end
 user. Having a system that will only boot up in a known
 state is going to lead to legions of unhappy customers
 who find their system does not come up at all.

Having remote attestation that provides signed checksums of every
stage of the startup process, which are checked by guys at the RIAA or
guys at the bank, will lead to legions of unhappy customers who find
their system boots fine, but is denied access to both the bank and the
music store.  (Seventy thousand totally valid configurations are not
going to be checked and confirmed by either one.)  But their system
will access the Darknet just fine.
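For context on why seventy thousand valid configurations is fatal: in measured boot, each startup stage is hashed into a running register, so the value the remote verifier finally sees commits to every stage exactly. A minimal sketch of that hash chain (a simplified PCR-style extend, not any particular TPM's on-wire format):

```python
import hashlib

def extend(register: bytes, stage_image: bytes) -> bytes:
    """TPM-style extend step: new value = H(old value || H(stage))."""
    return hashlib.sha256(register + hashlib.sha256(stage_image).digest()).digest()

def measure_boot(stages):
    """Fold every boot stage, in order, into one 32-byte summary."""
    register = b"\x00" * 32
    for image in stages:
        register = extend(register, image)
    return register

baseline = measure_boot([b"firmware", b"bootloader", b"kernel", b"initrd"])
patched  = measure_boot([b"firmware", b"bootloader", b"kernel-update", b"initrd"])

# Change any stage -- a routine security patch included -- and the
# summary the verifier sees no longer matches.
assert baseline != patched
```

A verifier that whitelists final values therefore needs a precomputed entry for every approved combination of stages, which is exactly the combinatorial problem complained of above.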



National Security Agency ex-classified publication indexes now online

2006-09-28 Thread John Gilmore
[The Memory Hole also publishes an interesting list of FOIA logs,
 listing who asked NSA for what, across many years.  I see a lot of
 friends in there. -- gnu]

By Michael Ravnitzky, [EMAIL PROTECTED]

This page, just published online at Russ Kick's site, The Memory Hole:

Or you can get there from his home page at

contains indexes of four periodicals published by the National Security
Agency, plus a listing of publications from the NSA's Center for Cryptologic
History. These indexes haven't been publicly released until now, and many of
the Cryptologic History publications weren't previously known to the public:

Cryptologic Quarterly Index

NSA Technical Journal Cumulative Index

Cryptologic Spectrum Index

Cryptologic Almanac Index

Center for Cryptologic History Publications

You can request from NSA a copy of any of the reports listed in these
hundreds of pages of indexes, using the instructions provided.


SSL Cert Prices Notes

2006-08-08 Thread John Gilmore
Date: Sun, 6 Aug 2006 23:37:30 -0700 (PDT)
Subject: SSL Cert Notes

Howdy Hackers,

Here is the latest quick update on SSL Certs. It's interesting that 
generally prices have risen. Though ev1servers are still the best commercial 
deal out there.

The good news is that CAcert seems to be positioned for a prime-time debut, 
and you can't beat *Free*. :-)

SSL Certificate Authorities   Verification          Subdomains Too
                              Low        High       Low        High
Thawte                        $149       $199       $799       $1349
Comodo / instantssl           $49 $62.50 $449.95 $17.99 $74.95 $179.99 $269.99 $69 $99 $199 $349
ev1servers                    $14.95     $49
CAcert                        Free       Free       Free       Free


Inclusion Status:

Latest news

Thanks for your concerns - getting the CAcert root cert included into 
Mozilla is indeed probably our largest challenge right now, but one we are 
actively engaged in.

Philipp is working on a CPS; see PolicyDrafts.

To see where we are with Mozilla, see: [Summary: Mozilla has 
established a fair and firm policy which CAcert should be able to meet. 
Then they threw the ball back to CAcert.]

Background of the story

We found when we first started down this road that the typical way a 
vendor got included into a major browser's root store was simply by paying 
whatever fees were demanded. In Microsoft's case with Internet Explorer, 
they don't care really who's included or who is not, but insist only that 
you pass a Webtrust audit and if so, you're eligible for inclusion. Our 
problem is that the audit costs in the neighborhood of $75,000 with a 
yearly +$10,000 fee. For CAcert, as a non-profit organisation that is 
simply out of the question (at least in the foreseeable future).

The folks at Mozilla have a rather different strategy currently. They are 
willing to include our cert providing we conform to reasonable guidelines 
that everyone agrees to. And there's the rub - getting all involved to 
decide which guidelines are and are not necessary and how best to conform 
to them.

This is an ongoing process and we are documenting our Certificate Policy 
Statement (CPS) at: (not always up-to-date) 
see CPS

(Many thanks to Christian Barmala for his work on this 
and to everyone who has helped us with shaping these policies!)




Hayden's statement from Oct 2002 on liberty and security

2006-05-28 Thread John Gilmore

While testifying to a joint hearing of the House and Senate
intelligence committees a year after 9/11, Michael Hayden, as NSA
Director, testified about NSA's response to 9/11.  In closing, he said:

38. When I spoke with our workforce shortly after the September 11th
attacks, I told them that free people always had to decide where
to draw the line between their liberty and their security, and I
noted that the attacks would almost certainly push us as a nation
more toward security. I then gave the NSA workforce a challenge:
We were going to keep America free by making Americans feel safe

39. Let me close by telling you what I hope to get out of the national
dialogue that these committees are fostering. I am not really
helped by being reminded that I need more Arabic linguists or by
someone second-guessing an obscure intercept sitting in our files
that may make more sense today than it did two years ago. What I
really need you to do is to talk to your constituents and find out
where the American people want that line between security and
liberty to be.

40. In the context of NSA's mission, where do we draw the line between
the government's need for CT information about people in the
United States and the privacy interests of people located in the
United States?

Practically speaking, this line-drawing affects the focus of NSA's
activities (foreign versus domestic), the standard under which
surveillances are conducted (probable cause versus reasonable
suspicion, for example), the type of data NSA is permitted to
collect and how, and the rules under which NSA retains and
disseminates information about U.S. persons.

41. These are serious issues that the country addressed, and resolved
to its satisfaction, once before in the mid-1970's. In light of
the events of September 11th, it is appropriate that we, as a
country, readdress them. We need to get it right. We have to find
the right balance between protecting our security and protecting
our liberty. If we fail in this effort by drawing the line in the
wrong place, that is, overly favoring liberty or security, then
the terrorists win and liberty loses in either case.

42. Thank you. I look forward to the committees' questions.

Now we know a small part of what he was really talking about.  At
least he had the balls to mention it.  But who among us could suspect
that when Congress responded by Patriot Act tune-ups making many kinds
of wiretapping easier, NSA's reaction was to ignore the laws, treating
the illegality of its operations as a classified technique for
surprising the secret enemy under our beds.  Anyone who had said NSA
was a rogue that ignored the laws, before or after 9/11, was either
called paranoid, unrealistically cynical, or against us and for the 
terrorists.

Read this again:

Practically speaking, this line-drawing affects the focus of NSA's
activities (foreign versus domestic), the standard under which
surveillances are conducted (probable cause versus reasonable
suspicion, for example), the type of data NSA is permitted to
collect and how, and the rules under which NSA retains and
disseminates information about U.S. persons.

Now we find out that NSA has crossed each of these lines.  It is now
focusing domestically.  It now uses a reasonable suspicion standard
adjudicated by its own staff.  It is collecting all types of data (and
how!), apparently retaining that data indefinitely, and disseminating
it as it sees fit (to the FBI, at least).

In the open crypto community, we noticed this curious part of his
speech, but generally didn't engage with him.  Personally I felt that
whatever I said would be ignored, just as my concerns were ignored
during the entirety of the 1990's, in the Clipper Chip debacle and the
Export Control madness.  We were ignored until we forced change upon
NSA with the courts and, in partnership with business, in Congress.
We are having to take the same routes today (though business is now
against us, since business is up to its eyeballs in spying).

Did anyone else respond to Mr. Hayden at that time, and if so, what
reaction did you get?


PS: NSA's web site SIGINT FAQ still says they don't
unconstitutionally spy on Americans.  It raises some guff about the
Fourth Amendment and strictly following the laws.
But I hear that if you're
discussing something classified, it's not only acceptable to lie, but
it's actually required.


May 24: National Day of Outrage at NSA/Telco surveillance

2006-05-22 Thread John Gilmore
Some alternative media groups have called for a national day of protests
against the telcos' latest sleazy activities, including their cooperation
in NSA's illegal surveillance of innocent citizens.

Events are already scheduled in Boston, Chicago, San Francisco, and
NYC.  You can register your own local event by sending mail to

Curiously, nobody in Washington, DC or Baltimore is protesting yet.
Perhaps a contingent should form outside NSA, with signs showing the
NSA employees on their way to/from work just what we think of their
disrespect for the constitution, the law, and the public.  Do we have
a local volunteer to organize it?


PS: I don't agree with all the things these people are protesting, but
I admire their energy.  I haven't seen cryptographers and cypherpunks
with protest signs -- yet.  But I hope to see you out there on May 24th.


Re: Encrypted disk storage

2006-05-05 Thread John Gilmore
  I guess perhaps the reason they don't do integrity checking is that it
  involves redundant data, so the encrypted volume would be smaller, or
  the block offsets don't line up, and perhaps that's trickier to handle
  than a 1:1 correspondence.
 Exactly, many file systems rely on block devices with atomic single block
 (sector) writes.

I'm sure I've seen modern disk drives that allow reformatting to use
sectors of 516 or 520 or 524 bytes rather than 512 bytes.  This would
require some generalization in the low-level I/O buffering code, but
would permit both integrity and transparency at the filesystem level.

It might also throw fits with forensic software (or even Live CDs
inserted by a thief or intruder) that expect 512 byte sectors.
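To make the 520-byte idea concrete, here is a sketch of one way the 8 spare bytes per physical sector could carry a truncated per-sector MAC, with the logical block address mixed in so a valid sector can't simply be relocated. The key, tag length, and layout are assumptions for illustration, not any real driver's on-disk format:

```python
import hmac
import hashlib

SECTOR_DATA = 512            # payload bytes the filesystem sees
TAG_LEN = 8                  # spare bytes in a 520-byte physical sector
KEY = b"per-volume-secret"   # hypothetical key, e.g. derived from a passphrase

def pack_sector(lba: int, data: bytes) -> bytes:
    """Build a 520-byte physical sector: 512 data bytes plus a truncated
    HMAC.  Binding the LBA stops an attacker swapping sectors around."""
    assert len(data) == SECTOR_DATA
    tag = hmac.new(KEY, lba.to_bytes(8, "big") + data, hashlib.sha256).digest()
    return data + tag[:TAG_LEN]

def unpack_sector(lba: int, physical: bytes) -> bytes:
    """Verify and strip the tag on read; raise on any mismatch."""
    data, tag = physical[:SECTOR_DATA], physical[SECTOR_DATA:]
    expected = hmac.new(KEY, lba.to_bytes(8, "big") + data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected[:TAG_LEN]):
        raise IOError(f"integrity check failed at LBA {lba}")
    return data

sector = pack_sector(7, b"A" * SECTOR_DATA)
assert unpack_sector(7, sector) == b"A" * SECTOR_DATA
```

An 8-byte tag is of course far weaker than a full-length MAC; the 524-byte variant, or a separate hash tree, trades space against strength differently, but all preserve the 1:1 block correspondence the filesystem wants.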



Re: Unforgeable Blinded Credentials

2006-04-05 Thread John Gilmore
 I am aware of, Direct Anonymous Attestation proposed for the Trusted
 Computing group, .

 DAA provides
 optionally unlinkable credential showing and relies on blacklisting to
 counter credential sharing.

Hmm, why doesn't this blacklisting get mentioned in IBM's DAA page?

What sort of blacklist would this be?  What actions would being listed
on it trigger?



HDCP support in PCs is nonexistent now?

2006-02-14 Thread John Gilmore

HDCP is Intel-designed copy prevention that uses strong crypto to
encrypt the digital video signal on the cable between your video card
(or TV or DVD player) and your monitor.  There is no need for it --
you are seeing the signal that it is encrypting -- except for DRM.

Despite a bunch of PC graphics chips and boards having announced HDCP
support, according to the above article, it turns out that none of
them will actually work.  It looks like something slipped somewhere,
and an extra crypto-key chip needed to be added to every existing
board -- at manufacturing time.  My wild ass guess is that the
original design would have had software communicate the keys to the
board, but Hollywood has recently decided not to trust that design.

This is going to make life very interesting for the HD-DVD crowd.
Intel's grand scheme was to corrupt the PC to an extent that Hollywood
would trust movies, music, etc, to PCs.  Intel decided to learn from
an oligopoly what they know about extending a monopoly into the
indefinite future, by combining legislative bribery with technological
tricks.  Now it appears that even though they have largely succeeded
in pushing all kinds of crap into PC designs, Hollywood doesn't trust
the results enough anyway.  The result may well be that HD-DVDs that
contain movies can only be played on dedicated equipment (standalone
HD-DVD players), at least for the first few years.  Or, you'll need a
new video board, which nobody sells yet, when you buy your first
HD-DVD drive.  Or the DRM standards involved will have to be somehow
revised.

Anybody know anything more about this imbroglio?


PS:  Of course, the whole thing is foolish.  DVD encryption has been
cracked for years, and circumvention tools widely distributed
worldwide, despite being too illegal to appear in out-of-the-box
products.  DVD encryption has provided exactly zero protection for DVD
revenues -- yet DVD revenues are high and rising.  In short, unless
Hollywood was lying about its motivations, DRM has so far been useless
to Hollywood.  Yet it has done great violence to consumers, to
computer architecture, to open competition, and to science.


GnuTLS 1.2.10 - Security release

2006-02-10 Thread John Gilmore
From: Simon Josefsson [EMAIL PROTECTED]
Date: Thu, 09 Feb 2006 16:46:28 +0100
Subject: GnuTLS 1.2.10 - Security release

We are pleased to announce the availability of GnuTLS version 1.2.10,
a security bug-fix release on the stable 1.2.x branch.

This release fixes several serious bugs that would make the DER
decoder in libtasn1 crash on invalid input.  The problems were
reported by Evgeny Legerov on the 31st of January.

We invite more detailed analysis of the problem, following our general
security advisory approach explained on:

Particularly, it would be useful to answer the question of whether
these bugs can be exploited remotely.  It is certainly possible
to cause the server to crash.  We currently don't have the resources
to investigate this problem further ourselves.

To make it easier for you to review this problem, I have prepared a
self-test that triggers three bugs in the old libtasn1.  It will be
part of GnuTLS 1.3.4, in tests/certder.c.  A diff between libtasn1
0.2.17 and libtasn1 0.2.18 is also available, for those wishing to
analyze the changes made to address the problems.  It contains a few
unrelated fixes too, but it is not too large.  It is available from:

Please send your analysis to [EMAIL PROTECTED] and I'll update the
security advisory web page pointing to it.

GnuTLS is a modern C library that implements the standard network
security protocol Transport Layer Security (TLS), for use by network
applications.

Noteworthy changes since version 1.2.9:
- Fix read out-of-bounds bug in DER parser.  Reported by Evgeny Legerov
  [EMAIL PROTECTED], and debugging help from Protover SSL.
- Libtasn1 0.2.18 is now required (contains the previous bug fix).
  The included version has been updated too.
- Fix gnutls-cli STARTTLS hang when SIGINT is sent too quickly, thanks to
  Otto Maddox [EMAIL PROTECTED] and Nozomu Ando [EMAIL PROTECTED].
- Corrected a bug in certtool for 64 bit machines. Reported
  by Max Kellermann [EMAIL PROTECTED].
- Corrected bugs in gnutls_certificate_set_x509_crl() and
  gnutls_certificate_set_x509_trust(), that caused memory corruption if
  more than one certificate was added. Report and patch by Max Kellermann.
- Fixed bug in non-blocking gnutls_bye(). gnutls_record_send() will no
  longer invalidate a session if the underlying send fails, but it will
  prevent future writes. That is to allow reading the already received data.
  Patches and bug reports by Yoann Vandoorselaere [EMAIL PROTECTED]

Improving GnuTLS is costly, but you can help!  We are looking for
organizations that find GnuTLS useful and wish to contribute back.
You can contribute by reporting bugs, improving the software, or
donating money or equipment.

Commercial support contracts for GnuTLS are available, and they help
finance continued maintenance.  Simon Josefsson Datakonsult, a
Stockholm based privately held company, is currently funding GnuTLS
maintenance.  We are always looking for interesting development
projects.

If you need help to use GnuTLS, or want to help others, you are
invited to join our help-gnutls mailing list, see:

The project page of the library is available at: (updated fastest)

Here are the compressed sources: (2.7MB)

Here are GPG detached signatures signed using key 0xB565716F:

The software is cryptographically signed by the author using an
OpenPGP key identified by the following information:
  1280R/B565716F 2002-05-05 [expires: 2006-02-28]
  Key fingerprint = 0424 D4EE 81A0 E3D1 19C6  F835 EDA2 1E94 B565 716F

The key is available from:

Here are the build reports for various platforms:

Here are the SHA-1 checksums:

18140bebae006e019deb77962836bcd775256aab  gnutls-1.2.10.tar.bz2
19d200ce04dc54b55d609a091500d1a2aee6e368  gnutls-1.2.10.tar.bz2.sig

NSA director on NSA domestic wiretaps (to Cong in Oct 2002)

2005-12-19 Thread John Gilmore
Paragraph 40, below, is about as bald a statement as an NSA director
could make, saying he needs help to decide what he should be allowed
to wiretap about US persons.  We, the privacy community, did not
respond.  We were a bit surprised, but that was about the extent of
the support we offered.

Of course, we were living in a time where being anti-paranoia or
anti-war or anti-president was considered treasonous by the president,
and by most of the people who elected him, and many who worked for
him.  And we were living in the lost time when we expected the
government to follow clearly written laws, until such time as they
were rewritten.  And nobody had ever gotten NSA to stop doing ANYTHING
corrupt, without either suing them, beating them in the legislature,
or shining some bright sunlight on one of their secrets -- in some
cases it took all three.  The door of the NSA Director's office has
never been open for privacy activists to come in and review their
secret programs for sanity and constitutionality, though it should be.

His challenge to the NSA work force -- to keep America free by making
Americans feel safe again -- is as bogus as TSA's We're upholding
the right to travel by making travel feel safe, even while we keep
innocent YOU off the plane.  It raises the question: who do we need
to feel safe FROM?  Governments are historically thousands of times
more likely to injure you than 'terrorists'.  Do you feel safe from Bush
and NSA and TSA today?  Are you really sure your government isn't
tapping and tracing you, building databases about who you call and who
you travel with, with or without a warrant from some rubber stamp
court?

Indeed, what good would it have done if the whole privacy and crypto
community had risen up to say, You should follow the law!?  Bush was
intent on breaking it in secret ANYWAY, and rather than exposing his
treason, NSA followed his orders.  Mr. Hayden did not pose the
question as, We are now wiretapping the foreign communications of US
persons without warrants, in violation of the FISA; do you think this
is OK?, though he was doing so at the time he made this speech.  But
that's the question that he and his successor will have to face civil
and criminal charges over.

Statement for the record by Lieutenant General Michael V. Hayden, USAF,
Director, National Security Agency... 17 October 2002

2.  We know our responsibilities for American freedom and security at
NSA. Our workforce takes the events of September 11, 2001 very
personally. By the very nature of their work, our people deeply
internalize their mission. This is personal.


25. The final issue - what have we done in response - will allow me to
give some specifics although I may be somewhat limited by the
demands of classification. I will use some of the terms that
Congress has used with us over the past year.

26. It was heartening, for example, to hear Congress echo the phrase
of our SIGINT Director, Maureen Baginski, in the belief that we
need to be hunters rather than gatherers. She believed and
implemented this strategy well before September 11th, and then she
applied it with a vengeance to al-Qa'ida after the attacks.


36. There is a certain irony here. This is one of the few times in the
history of my Agency that the Director has testified in open
session about operational matters. The first was in the mid 1970s
when one of my predecessors sat here nearly mute while being
grilled by members of Congress for intruding upon the privacy
rights of the American people. Largely as a result of those
hearings, NSA is governed today by various executive orders and
laws and these legal restrictions are drilled into NSA employees
and enforced through oversight by all three branches of
government.

37. The second open session was a little over two years ago and I was
the Director at that time. During that session the House
intelligence committee asked me a series of questions with a
single unifying theme:

How could I assure them that I was safeguarding the privacy rights
of those protected by the U.S. constitution and U.S. law? During
that session I even said - without exaggeration on my part or
complaint on yours - that if Usama bin Laden crossed the bridge
from Niagara Falls, Ontario to Niagara Falls, New York, U.S. law
would give him certain protections that I would have to
accommodate in the conduct of my mission. And now the third open
session for the Director of NSA: I am here explaining what my
Agency did or did not know with regard to 19 hijackers who were in
this country legally.

38. When I spoke with our workforce shortly after the September 11th
attacks, I told them that free people always had to decide where
to draw the line between their liberty and their security, and I
noted that the attacks would almost 

Live Tracking of Mobile Phones Prompts Court Fights on Privacy

2005-12-13 Thread John Gilmore
[See the details at EFF:
 including the three court orders, and EFF's argument to the first court.

 The real story is that for years prosecutors have been asking
 magistrates to issue court orders to track cellphones in real time
 WITHOUT WARRANTS.  They're tracking people for whom they can't get
 warrants because they have no probable cause to believe there's any
 crime.  They're fishing.  The public never knew, because it all
 happens under seal.  One judge who had previously issued such orders
 got an attack of conscience, and surprisingly PUBLISHED a decision
 against such a secret DoJ request.  EFF noticed and offered legal
 analysis, and that judge and two others started publicly refusing
 such requests.  DoJ won't appeal, because without an appeals court
 precedent against them, they can keep secretly pulling the wool over
 the eyes of other magistrates, and keep tapping the locations of
 ordinary people in realtime without warrants.  --gnu]

No cookies or login required:

Published Saturday, December 10, 2005
Live Tracking of Mobile Phones Prompts Court Fights on Privacy

New York Times

Most Americans carry cellphones, but many may not know that government
agencies can track their movements through the signals emanating from
the handset.

In recent years, law enforcement officials have turned to cellular
technology as a tool for easily and secretly monitoring the movements
of suspects as they occur. But this kind of surveillance - which
investigators have been able to conduct with easily obtained court
orders - has now come under tougher legal scrutiny.

In the last four months, three federal judges have denied prosecutors
the right to get cellphone tracking information from wireless
companies without first showing probable cause to believe that a
crime has been or is being committed. That is the same standard
applied to requests for search warrants.

The rulings, issued by magistrate judges in New York, Texas and
Maryland, underscore the growing debate over privacy rights and
government surveillance in the digital age.

With mobile phones becoming as prevalent as conventional phones (there
are 195 million cellular subscribers in this country), wireless
companies are starting to exploit the phones' tracking abilities. For
example, companies are marketing services that turn phones into even
more precise global positioning devices for driving or allowing
parents to track the whereabouts of their children through the
handset.

Not surprisingly, law enforcement agencies want to exploit this
technology, too - which means more courts are bound to wrestle with
what legal standard applies when government agents ask to conduct such
surveillance.

Cellular operators like Verizon Wireless and Cingular Wireless know,
within about 300 yards, the location of their subscribers whenever a
phone is turned on. Even if the phone is not in use it is
communicating with cellphone tower sites, and the wireless provider
keeps track of the phone's position as it travels. The operators have
said that they turn over location information when presented with a
court order to do so.

The recent rulings by the magistrates, who are appointed by a majority
of the federal district judges in a given court, do not bind other
courts. But they could significantly curtail access to cell location
data if other jurisdictions adopt the same reasoning. (The
government's requests in the three cases, with their details, were
sealed because they involve investigations still under way.)

It can have a major negative impact, said Clifford S. Fishman, a
former prosecutor in the Manhattan district attorney's office and a
professor at the Catholic University of America's law school in
Washington. If I'm on an investigation and I need to know where
somebody is located who might be committing a crime, or, worse, might
have a hostage, real-time knowledge of where this person is could be a
matter of life or death.

Prosecutors argue that having such information is crucial to finding
suspects, corroborating their whereabouts with witness accounts, or
helping build a case for a wiretap on the phone - especially now that
technology gives criminals greater tools for evading law enforcement.

The government has routinely used records of cellphone calls and
caller locations to show where a suspect was at a particular time,
with access to those records obtainable under a lower legal
standard. (Wireless operators keep cellphone location records for
varying lengths of time, from several months to years.)

But it is unclear how often prosecutors have asked courts for the
right to obtain cell-tracking data as a suspect is moving. And the
government is not required to report publicly when it makes such
requests.

Legal experts say that such live tracking has tended to happen in
drug-trafficking cases. 

Re: [Clips] Banks Seek Better Online-Security Tools

2005-12-03 Thread John Gilmore
 How many people on this list use or have used online banking?
 To start the ball rolling, I have not and won't.

Dan, that makes two of us.



Re: An overview of cryptographic protocols to prevent spam

2005-09-26 Thread John Gilmore
 I wrote an overview of Cryptographic Protocols to Prevent Spam, 

I stopped reading on page V -- it was too painfully obvious that Amir
has bought into the whole censorship-list based anti-spam mentality.

It was hard to get from paragraph to paragraph without finding
approving mentions of blacklists.  I am a victim of many such
blacklists.  May Amir never appear on one, or his unthinking
acceptance of blacklisting might change.  His analysis made me think
of clinical reviews of experiments done on human subjects in prison
camps -- careful to focus on the facts while ignoring the obvious
moral problems.

Interspersed were discussions of various kinds of port blocking.  The
Internet is too good for people who'd censor other peoples'
communications, whether by port number (application) or by IP address
(person).  It saddens me to see many of my friends among that lot.

John Gilmore


Re: Defending users of unprotected login pages with TrustBar

2005-09-20 Thread John Gilmore
Perhaps the idea of automatically redirecting people to alternative
pages goes a bit too far:

 1. TrustBar will automatically download from our own server,
 periodically, a list of all of the unprotected login sites, including
 any alternate protected login pages we are aware of. By default,
 whenever a user accesses one of these unprotected pages, she will be
 automatically redirected to the alternate, protected login page.

How convenient!  So if I could hack your server, I could get all
TrustBar users' accesses -- to any predefined set of pages on the
Internet -- to be redirected to scam pages.

A redirect to an untrustworthy page is just as easy as a redirect to a
trustworthy page.  The question is who you trust.
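One way to narrow (not eliminate) that exposure is for the client to apply a downloaded redirect list only after verifying it against a key pinned at install time. A minimal sketch, with all names hypothetical and an HMAC standing in for the asymmetric signature a real deployment would use:

```python
import hmac, hashlib, json

# Hypothetical key pinned into the client at install time.  A real
# deployment would pin a public key and verify an asymmetric signature,
# so that compromising a client never yields the signing key.
PINNED_KEY = b"client-pinned-verification-key"

def verify_and_load(list_blob: bytes, tag: bytes) -> dict:
    """Apply a downloaded redirect list only if its tag verifies."""
    expect = hmac.new(PINNED_KEY, list_blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("redirect list failed verification; refusing to apply")
    return json.loads(list_blob)

blob = json.dumps(
    {"http://bank.example/login": "https://bank.example/login"}).encode()
good_tag = hmac.new(PINNED_KEY, blob, hashlib.sha256).digest()
redirects = verify_and_load(blob, good_tag)
assert redirects["http://bank.example/login"].startswith("https://")
```

Note that this only relocates the trust: whoever holds the signing key still controls where every user is redirected, which is exactly the "who do you trust" problem above.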

 BTW, TrustBar is an open-source project, so if some of you want to
 provide it to your customers, possibly customized (branded) etc., there
 is no licensing required.

Also providing a handy platform for slightly modified versions that will
take their cues from a less trustworthy list of redirects.



Re: [Clips] Contactless payments and the security challenges

2005-09-19 Thread John Gilmore

Interesting article, but despite the title, there seems to be no
mention of any of the actual security (or privacy) challenges involved
in deploying massive RFID payment systems.  E.g. I can extract money
from your RFID payment tag whenever you walk past, whether you
authorized the transaction or not.  And even assuming you wanted it
this way, if your Nokia phone has an RFID chip in it, who's going to
twist the arms of all the transit systems and banks and ATM networks
and vending machines and parking meters and supermarkets and
libraries?  Their first reaction is going to be to issue you an RFID
themselves, and make you juggle them all, rather than agreeing that
your existing Nokia RFID will work with their system.  If you lose
your cellphone, you can report it gone (to fifty different systems),
and somehow show them your new Motorola RFID, but how is each of them
going to know it's you, rather than a fraudster doing denial of
service or identity theft on you?

Then there's the usual tracking people via the RFIDs they carry
problem, which was not just ignored -- they claimed the opposite:
This kind of solution provides privacy, because the token ID is
meaningless to anyone other than the issuing bank which can map that
ID to an actual account or card number.  That is only true until
someone correlates that token ID blob with your
photo on the security camera, your license plate number (and the RFIDs
in each of your Michelin tires), the other RFIDs you're carrying, your
mobile phone number, the driver's license they asked you to show, the
shipping address of the thing you just bought, and the big database on
the Internet where Equifax will turn a token ID into an SSN (or vice
versa) for 3c in bulk.

The article seems to have a not-so-subtle flavor of boosterspice.
Anybody got a REAL article on contactless payments and security
challenges?



Re: Clearing sensitive in-memory data in perl

2005-09-17 Thread John Gilmore
 Generally speaking, I think software with a security impact should not
 be written in C.


The C language is not the problem.  The C library is not the problem.
Both of these things were fixed during ANSI standardization, so that
standard-conforming programs will not fail runtime checks for
overrunning arrays (strings are just arrays of characters).

There have been various C implementations that did these checks,
including both compilers and interpreters.  Some are free, some are
proprietary.  (I proposed to fund adding these checks to GCC, but the
price I was quoted was too high for me.)  I fault the people who don't
use such tools -- not the C language.

(Aside: What ever happened to Saber C?  Oh, it was renamed to
Centerline CodeCenter, never made it out of the Unix workstation
market, used FlexLM per-cpu licensing crap, has gone morbid, and was
acquired a year ago by, a graphics library company, with a
promise to port it to Linux.  There's no evidence of such a port, and
the product support matrix was last updated in June 2001.  The
product doesn't appear on ICS's product pages.  I wonder how cheaply
the source could be bought and freed up, to bring it back to life?  It
was a nice product, fifteen years ago.)

The reason there are fewer security bugs in PL/1 programs than C
programs is that almost nobody has written programs in PL/1 since
about 1985.  Google did find me a compiler you can download -- it runs
on VMS, on Vaxes or Alphas.  Anybody still running those space-heaters
is welcome to program in PL/1.  The rest of us have real work to do,
and it's likely to get done in C or C++.



Re: [Clips] Venona Ten Years Later: Lessons for Today

2005-07-22 Thread John Gilmore
  one that is all too relevant today. The pertinent question is no longer
  whether Americans spied, but rather how highly educated, intelligent men
  and women failed to comprehend the true nature of Stalinist communism, and
  why they were willing to risk their lives and imperil the security of their
  families, neighbors and friends to commit crimes on behalf of a foreign
  power opposed to the basic tenets of modern society. 

This was a good observation, but the next sentence muddled it with 
typical American self-blindness. 

  Answers to similar
  questions, regarding educated Muslims with experience of life in Europe and
  the U.S. like those who led the 9-11 and Madrid attacks, are essential to
  constructing a defense against 21st century terrorism.

I want the same answer about how not just the Washington elite, but
even army kids from Iowa, fail to comprehend WHY we prohibit torture,
provide fair trials and legal representation, due process of law, and
why we have a constitution or civil rights at all.  Do they not
comprehend the true nature of a United States with arbitrary searches,
travel papers, pervasive surveillance, no effective Leg. or
Jud. checks on arbitrary executive power, no federalism checks on
unlimited federal power, indefinite imprisonment of US citizens at the
will of the President, indefinite imprisonment without trial of
non-citizens seized by force anywhere in the world, and wars of
occupation?  It's called an expanding totalitarian state, kiddies, and
every totalitarian state tells its citizens how they are the freest
country in the world.  Get out and compare for yourself!

Then tell me what the basic tenets of modern society are.

John Gilmore (posting from Greece)

PS:  Add in a lapdog press too.  Try reading the foreign press on the web.
They actually ask hard questions of pols and slam them for evading.  And 
all their sources aren't anonymous highly placed govt officials.


Re: Digital signatures have a big problem with meaning

2005-06-03 Thread John Gilmore
 That cuts both ways though.  Since so many systems *do* screw with data (in
 insignificant ways, e.g. stripping trailing blanks), anyone who does massage
 data in such a way that any trivial change will be detected is going to be
 inundated with false positives.  Just ask any OpenPGP implementor about
 handling text canonicalisation.

Even mere hash checks are turning up obscure data corruptions.  Some
people reported that BitTorrent would never finish certain files,
getting to 99.9% and stalling.  The problem is that their NAT box was
replacing its external IP address with its internal address --
anywhere in a packet.  This is called Game mode in some NAT boxes.
Their router was corrupting random binary data (and altering the TCP,
UDP, and Ethernet packet checksums!).  They never noticed until
BitTorrent used end-to-end application-level SHA1 hash checks and
retransmission to detect and correct it.
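The end-to-end check that caught this is simple: the receiver recomputes the hash over each piece it has assembled and compares it against the hash shipped in the (separately obtained) metadata, retrying the piece on mismatch. A sketch of the idea:

```python
import hashlib

def verify_piece(piece: bytes, expected_sha1: bytes) -> bool:
    """Recompute SHA-1 over the received piece and compare with the
    hash taken from trusted metadata (the .torrent file)."""
    return hashlib.sha1(piece).digest() == expected_sha1

good = b"some piece of the file"
digest = hashlib.sha1(good).digest()
assert verify_piece(good, digest)

# A NAT box rewriting bytes mid-stream fixes up the TCP checksum, so
# the transport layer sees nothing wrong -- but the application-level
# hash still catches the same-length substitution:
corrupted = good.replace(b"piece", b"10.0.0.1"[:5])
assert not verify_piece(corrupted, digest)
```

The point is that link- and transport-level checksums only protect against accidental damage between the boxes that compute them; only an end-to-end check over data the sender actually hashed can catch a middlebox that "helpfully" rewrites the stream.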



DRM comes to digital cameras: Lexar LockTight

2005-05-20 Thread John Gilmore
Lexar Media has come up with a Compact Flash card that won't actually
work until you do a nonstandard, proprietary handshake with it.  They
worked with a couple of camera makers (and built their own CF reader
and Windows software) to implement it.  Amazingly, it doesn't actually
store the photos encrypted on the flash; it just disables access to
the memory until you do something secret (probably answer a
challenge/response with something that shows you have the same secret
key that those cameras do).  I don't know of anyone competent who's
taken one apart and figured out what the actual security properties
are.
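A toy sketch of the kind of shared-key challenge/response this guess describes (Python, all names hypothetical; the real LockTight protocol is undocumented here): the card issues a random challenge, and unlocks only for a host that can MAC it under the shared secret.

```python
import hmac, hashlib, os

# Secret provisioned into both the camera and the card (illustrative).
SHARED_KEY = os.urandom(16)

def card_challenge() -> bytes:
    """Card side: emit a fresh random challenge per unlock attempt."""
    return os.urandom(16)

def host_response(key: bytes, challenge: bytes) -> bytes:
    """Host/camera side: prove possession of the key via an HMAC."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def card_unlock(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Card side: enable memory access only if the response verifies."""
    expect = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expect)

c = card_challenge()
assert card_unlock(SHARED_KEY, c, host_response(SHARED_KEY, c))
assert not card_unlock(SHARED_KEY, c, host_response(b"wrong key value!", c))
```

Since the photos sit on the flash in the clear, such a gate only stops attackers unwilling to desolder the memory and read it directly, which is exactly why the actual security properties matter.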

They also have Active Memory which appears to be another idea for
what can be done by making a separate memory on the CF card that can't
be accessed by the standard protocols.  Idle hands are the devil's work.
They haven't figured out anything useful for it to do: at the moment
their custom software copies copyright notices off the secret memory
onto the photos, after you transfer them to a PC.  Of course, the 
software could've done that WITHOUT the secret memory, just keeping the
copyright info in a file in the standard flash file system.

What Lexar gets out of it is to charge twice as much for these CF cards,
raising them out of the commodity market.  (Assuming anybody buys.)
They're pitching it to cops, who are spending somebody else's money.



Export controls kill Virgin SpaceShipTwo

2005-05-20 Thread John Gilmore

First crypto, now space travel.  The lunatics in Washington are
working hard to drive another industry that's critical to US interests
offshore.

Did they think that after collecting $20M in prepayments from
passengers, Sir Richard Branson would give up, on orders from DC?  No,
he'll clone Rutan's work somewhere else, as best he can, and build a
space industry where it's welcome.  Either that, or Rutan will take
his head and export it to where he can run a business without
government interference.


  Red Tape For SpaceShipTwo
  by Irene Mona Klotz
  Cape Canaveral (UPI) Apr 26, 2005

  The problem is U.S. export controls issues ...
  At this point, due to uncertainty about possible licensing
  requirements, we are not able to even view Scaled Composites' designs
  for the commercial space vehicle, Whitehorn said. After U.S.
  government technology-transfer issues are clarified and addressed if
  deemed necessary, we hope to place a firm order for the spacecraft.

  Despite a price tag of $200,000, about 100 people have signed contracts
  for rides on Virgin Galactic's spaceliner and agreed to pay the money
  upfront, while another 29,000 or so aspiring astronauts have agreed to
  put down deposits of $20,000 each.


Network World: 10-node Quantum Crypto net under Boston streets

2005-05-20 Thread John Gilmore

Today's focus:  Hooked on photonics

By Amy Schurr

CAMBRIDGE, MASS. - Chip Elliott is every hacker's worst nightmare.

Elliott, principal scientist at BBN Technologies, leads a team 
building the world's first continuously operating quantum 
cryptography network, a 12-mile snoop-proof glass loop under the 
streets of Boston and Cambridge.

Quantum cryptography uses single photons of light to distribute 
keys to encrypt and decrypt messages. Because quantum particles 
are changed by any observation or measurement, even the simplest 
attempt at snooping on the network interrupts the flow of data 
and alerts administrators.
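The canonical protocol behind such systems is BB84-style key sifting: sender and receiver each choose random measurement bases, then keep only the bit positions where their bases happened to match. A toy simulation of just the sifting step (no eavesdropper or channel noise modeled; function names are illustrative):

```python
import secrets

def bb84_sift(n: int = 256):
    """Toy BB84 sifting: the sender picks random bits and bases, the
    receiver measures in random bases, and both discard positions where
    the bases differ (a real measurement there yields a random bit)."""
    bits    = [secrets.randbelow(2) for _ in range(n)]
    a_basis = [secrets.randbelow(2) for _ in range(n)]
    b_basis = [secrets.randbelow(2) for _ in range(n)]
    key_a = [bits[i] for i in range(n) if a_basis[i] == b_basis[i]]
    key_b = list(key_a)  # matching bases give identical results absent noise
    return key_a, key_b

key_a, key_b = bb84_sift()
assert key_a == key_b
assert 0 < len(key_a) < 256  # roughly half the positions survive sifting
```

What the simulation cannot show is the physics that makes this interesting: an eavesdropper measuring the photons disturbs them, raising the error rate the two ends see when they compare a sample of their sifted bits.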

While the technology is still in the pilot stage, Elliott 
envisions a day when quantum cryptography will safeguard all 
types of sensitive traffic. It's not going to overnight replace 
everything we have, he says. But it will be used to augment 
current technologies.

Defense funding

BBN's research is funded by the Pentagon's Defense Advanced 
Research Projects Agency , so it's likely the government would 
be first in line to roll out the super-secure technology. 
Elliott predicts financial firms will deploy quantum 
cryptography within a few years and estimates that businesses in 
general will deploy within five years. The technology also could 
move to the consumer market - for example, in a 
fiber-to-the-home scenario to protect the network between a home 
and service provider.

People think of quantum cryptography as a distant possibility, 
but [the network] is up and running today underneath Cambridge, 
Elliott says. The team of nine researchers from BBN, four from 
Boston University and two from Harvard University, have put 
together a set of high-speed, full-featured quantum 
cryptography systems and has woven them together into an 
extremely secure network, he says.

The system is essentially two networks - one for quantum key 
distribution and one that carries the encrypted traffic. And 
although it's probably the world's most secure network, it's not 
protecting any real secrets, at least not yet. For this pilot 
phase, BBN encrypts normal Internet traffic such as Web pages, 
Webcam feeds and e-mail.

The network has 10 nodes. Eight are at BBN's offices in 
Cambridge, one is at Harvard in Cambridge, and another is across 
the Charles River at BU's Photonics Center.

In keeping with the traditional naming convention that IT 
security professionals use, the nodes are named Alice, Bob, Ali, 
Baba, Amanda, Brian, Anna, Boris, Alex and Barb.

For the complete story, please go to:
To contact: Amy Schurr

Amy Schurr is an editor for Network World's Management 
Strategies and Features sections. If you have any career topics 
you'd like her to cover or want to comment on this newsletter, 
you can reach her at mailto:[EMAIL PROTECTED].

Copyright Network World, Inc., 2005

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Export controls: US wants to export-license fundamental research again

2005-05-20 Thread John Gilmore

The vast majority of the deemed exports have been in computers (661
in 2003), telecomm and information security (357), and electronics
(338).  In 2003, no other category had more than 80 licenses applied
for.  This is all about computers and crypto -- it's not about nuclear
weapons (0 in 2003).

The vast majority of the foreigners involved came from China (367),
Russia (238), and India (83), out of 846 total.  (These stats are on
page 15.)

The inspectors, and the Commerce Dept., also propose a rule that says
the country where you were born trumps your current citizenship.  The
export regs are different for each country, so someone who fled Hong
Kong and took up Canadian citizenship would be considered Chinese
under these rules.  The racist implications seem to be strongly
focused on denying access to high-tech equipment to people of Chinese
and Muslim descent when they're studying or working in the United States.

John Gilmore
Electronic Frontier Foundation

Forwarded-By: David Farber [EMAIL PROTECTED]
Forwarded-By: Jennifer Mankoff [EMAIL PROTECTED]
Subject: Urgent: Proposed Dept. of Commerce rules pose threat to research

Dear Chairs of PhD-granting Physics Departments,

I am writing to alert you to a possible threat to research in your
department and to urge you and your faculty to write to the Department
of Commerce (DOC) in response to its "Advance notice of proposed
rulemaking" published in the Federal Register on March 28, 2005. The
notice calls for comments that must be received by May 27, 2005. As
discussed below, the leadership of the American Physical Society feels
this issue is so important that you should seek to provide thoughtful
and accurate responses by your university administration, your
department and individual faculty who might be affected by the
recommended changes. We believe that your comments can make a difference.

The proposed rulemaking by the DOC is a response to recommendations
presented by the Department's Inspector General. Implementation of
these recommendations would cause two major changes:

1) The operation of export-controlled instrumentation by a foreign
national working in your department would be considered a deemed
export, even if that person were engaged in fundamental research. As
a consequence, a license would be required for each affected foreign
national (student, staff or faculty member) and for each export
controlled instrument. Typical export-controlled instruments are
high-speed oscilloscopes, high-resolution lithography systems,
high-end computers and GPS systems. The situation is complicated by
the fact that the list of instruments is different for each country.

2) U.S. organizations would be required to apply for a deemed export
license for students, employees or visitors who are foreign nationals
(but not U. S. naturalized citizens or permanent residents) and have
access to controlled technology if they were born in a country where
the technology transfer in question would require an export license,
regardless of their most recent citizenship or permanent residency.
For example, transfer of technology to a Chinese scientist who has
established permanent residency or citizenship in Canada would be
treated, for export licensing purposes under the proposed guidelines,
as a deemed export to a Chinese foreign national. (The list of
export-controlled instruments for Chinese nationals is particularly long.)

The Department of Commerce officials who have the responsibility for
developing new policies and practices in response to the Inspector
General's recommendations are anxious to determine what the impact of
implementing those recommendations would be. They must seek a balance
between increases in national security that might result from the
implementation of the new rules and the decrease in national security
that would result from negative impacts to US research and education.

In initial discussions by the APS Panel on Public Affairs (POPA) it
was thought likely that consequences would be: a) research would slow
down significantly due to the need to obtain licenses for each foreign
national and, particularly, Chinese student, staff member, postdoc, or
faculty member using export controlled instrumentation. We believe
that a separate license would have to be obtained for each instrument.
In this regard, it should be noted that the relevant DOC office has
the staff to handle about 800-1000 license requests per year.  Present
times to process a license request are typically 2-3 months. b)
instruments would have to be secured to ensure that those who do not
have the required license could not use them. c) the number of Chinese
and other foreign national students would decrease markedly as their
second-class status on campus became apparent, thus ultimately
weakening the nation's science and technology workforce. d) the
administrative costs of research would rise markedly. e) national
security would ultimately be weakened.

DOT neg rulemaking re ID standardization (call for membership of advisory committee)

2005-03-25 Thread John Gilmore
[Here's where an unconstitutional National ID will get created by the
back door.  Do we have anybody in this community who cares?  I can't
participate, because I can't travel to Washington for meetings,
because I don't have the proper ID documents.  I note that they did
not think to include a representative of undocumented people...
  -- John]

[Federal Register: February 23, 2005 (Volume 70, Number 35)]
[Proposed Rules]
[Page 8756-8761]
 From the Federal Register Online via GPO Access []


Office of the Secretary
49 CFR Subtitle A

[Docket No. OST-2005-20434]

Driver's Licenses and Personal Identification Cards
AGENCY: Office of the Secretary (OST), DOT.
ACTION: Notice of intent to form a negotiated rulemaking advisory committee.


SUMMARY: Pursuant to the portion of the Intelligence Reform and
Terrorism Prevention Act of 2004 known as the 9/11 Commission
Implementation Act of 2004, the Office of the Secretary, DOT, is
establishing a committee to develop, through negotiated rulemaking
procedures, recommendations for minimum standards to tighten the
security for driver's licenses and personal identification cards issued
by States, in order for these documents to qualify for use by Federal
agencies for identification purposes. The committee will consist of
persons who represent the interests affected by the proposed rule,
i.e., State offices that issue driver's licenses or personal
identification cards, elected State officials, the Departments of
Transportation and Homeland Security, and other interested parties. The
purpose of this document is to invite interested parties to submit
comments on the issues to be discussed and the interests and
organizations to be considered for representation on the committee.

DATES: You should submit your comments or applications for membership
or nominations for membership on the negotiated rulemaking committee
early enough to ensure that the Department's Docket Management System
(DMS) receives them not later than March 25, 2005. Late-filed comments
will be considered to the extent practicable.

ADDRESSES: You should mention the docket number of this document in
your comments or application/nomination for membership and submit them
in writing to: Docket Management System (DMS), Room PL-401, 400 Seventh
Street, SW., Washington, DC 20590. Commenters may also submit their
comments electronically. Instructions for electronic submission may be
found at the following Web address:
 You may call the Docket at 202-366-9324, and visit it from 10 a.m.
to 5 p.m., Monday through Friday. Interested persons may view docketed
materials on the Internet at any time. Instructions for doing so are
found at the end of this notice.
 You may read the comments received by DMS at the address given
above under ADDRESSES. The hours of the Docket are indicated above in
the same location.
 You may also review all documents in the docket via the internet.
To read docket materials on the internet, take the following steps:
 1. Go to the DMS Web page of the Department of Transportation
 2. On that page, click on ``search.''
 3. On the next page, type in the four-digit docket number shown at
the beginning of this document. Example: If the docket number were
``OST-2005-1234,'' you would type ``1234.'' After typing the docket
number, click on ``search.''
 4. On the next page, which contains docket summary information for

[[Page 8757]]

docket you selected, click on the desired comments. You may download
the comments. The comments are word searchable.
 Please note that even after the comment closing date, we will
continue to file relevant information in the Docket as it becomes
available. Further, some people may submit late comments. Accordingly,
we recommend that you periodically check the Docket for new material.

FOR FURTHER INFORMATION CONTACT: Robert C. Ashby, Deputy Assistant
General Counsel for Regulation and Enforcement, Office of the General
Counsel, at 202-366-9310 ([EMAIL PROTECTED]), or Steve Wood, Assistant
Chief Counsel for Vehicle Safety Standards and Harmonization, Office of
the Chief Counsel, National Highway Traffic Safety Administration, 202-
366-2992 ([EMAIL PROTECTED]). Their mailing addresses are at the
Department of Transportation, 400 7th Street, SW., Washington, DC
20590, at rooms 10424 and 5219, respectively.


I. Background

 On December 17, 2004, the President signed into law the
Intelligence Reform and Terrorism Prevention Act of 2004. (Public Law
No. 108-458). Title VII of that Act is known as the 9/11 Commission
Implementation Act of 2004 (the ``Act'').

SSL Cert prices ($10 to $1500, you choose!)

2005-03-05 Thread John Gilmore
For the privilege of being able to communicate securely using SSL and a
popular web browser, you can pay anything from $10 to $1500.  Clif
Cox researched cert prices from various vendors:



Network World: NIST dubious about 802.11 TKIP; wants AES

2005-01-26 Thread John Gilmore
NIST mulls new WLAN security guidelines
By Ellen Messmer

The National Institute of Standards and Technology, the federal 
agency responsible for defining security standards and practices 
for the government, plans to issue new guidelines pertaining to 
wireless LANs in the near future.

The decisions NIST reaches, possibly as early as this month, 
will broadly affect federal agency purchases of WLAN equipment, 
because federal agencies are required to follow NIST 
recommendations. According to William Burr, manager of NIST's 
security technology group, the agency is focusing on whether to 
approve the IEEE's 802.11i WLAN security standard for encryption 
and authentication as a government standard. The IEEE approved 
802.11i last July, but Burr says NIST is not keen on some 
aspects of it.

Specifically, NIST has reservations about the so-called Temporal 
Key Integrity Protocol (TKIP), which is the key management 
protocol in 802.11i that uses the same encryption engine and RC4 
algorithm that was defined for the Wired Equivalent Privacy 
protocol (WEP).

The 40-bit WEP, used in many early WLAN products, was criticized 
widely in the past two years as having too short a key length 
and a poor key management scheme for encryption. TKIP is a 
wrapper that goes around WEP encryption and ensures that TKIP 
encryption is 128 bits long.

TKIP was designed to ensure it could operate on WLAN hardware 
that used WEP. In contrast, the 128-bit Advanced Encryption 
Standard (AES), which NIST already has approved, requires a 
hardware change for most older WLAN equipment.

"We just don't feel that the TKIP protocol cuts the grade for
government encryption," Burr says. He adds that the RC4
encryption algorithm is not a Federal Information Processing 
(FIPS) standard and probably won't ever be because network 
professionals see RC4 as rather weak in terms of message 
authentication and integrity.

NIST is more inclined to approve AES for WLAN security, and in 
fact Burr pointed to the NIST document 800-38C, published last 
summer, for encryption that includes the AES algorithm.

As far as the key management scheme for key exchange and setup 
is concerned, NIST might introduce a new key-management 
technology that's been jointly developed with the National 
Security Agency.
Senior Editor Ellen Messmer covers security for Network World. 
Contact her at mailto:[EMAIL PROTECTED].


Re: Gov't Orders Air Passenger Data for Test

2004-11-22 Thread John Gilmore
> ... they can't really test how effective the system is ...

Effective at what?  Preventing people from traveling?

The whole exercise ignores the question of whether the Executive Branch
has the power to make a list of citizens (or lawfully admitted non-citizens)
and refuse those people their constitutional right to travel in the United States.

Doesn't matter whether there's 1, 19, 20,000, or 100,000 people on the
list.  The problem is the same: No court has judged these people.
They have not been convicted of any crime.  They have not been
arrested.  There is no warrant out for them.  They all have civil
rights.  When they walk into an airport, there is nothing in how they
look that gives reason to suspect them.  They have every right to
travel throughout this country.  They have every right to refuse a
government demand that they identify themselves.

So why are armed goons keeping them off airplanes, trains, buses, and
ships?  Because the US constitution is like the USSR constitution --
nicely written, but unenforced?  Because the public is too afraid of
the government, or the terrorists, or Emmanuel Goldstein, or the
boogie-man, to assert the rights their ancestors died to protect?

John (under regional arrest) Gilmore

PS: Oral argument in Gilmore v. Ashcroft will be coming up in the
Ninth Circuit this winter.


Re: MCI set to offer secure two-way messaging with strong encryption

2004-10-28 Thread John Gilmore
> MCI Inc. will offer secure two-way messaging through its SkyTel
> Communications subsidiary next month, encrypting wireless text
> with the Advanced Encryption Algorithm.

Note that they don't say it's end-to-end encryption:

> Messages are encrypted between the device and an encryption server
> at SkyTel's secure network operations center.

And presumably wiretappable there.



Interesting report on Dutch non-use of traffic data

2004-10-06 Thread John Gilmore
 From EDRI-gram via Wendy Seltzer:

4. Dutch police report: traffic data seldom essential

Telephone traffic data are only necessary to solve crimes in a minority of
police investigations. Most cases can be solved without access to traffic
data, with the exception of large fraud investigations.

These are the conclusions of a Dutch police report produced at the request
of the Dutch ministry of Justice. The report was recently obtained by the
Dutch civil liberties organisation Bits of Freedom through a public access

The report undermines the Dutch government's support for the EU draft
framework decision on data retention. The report makes no case for the
proposed data retention, as Dutch police already use traffic data in 90%
of all investigations. The police can already obtain, with a warrant, the
traffic data that telecommunication companies store for their own billing
and business purposes. The report also shows that the use of traffic data
is a standard tool in police investigations and is not limited to cases of
organised crime or terrorism.

The report is the result of an evaluation of past investigations by the
Dutch police of Rotterdam. Two-thirds of all investigations could have
been solved even if no traffic data had been available at all. The three
main purposes of traffic data in police investigations are: network
analysis (searching for associations of a person to other individuals),
tactical support for surveillance and checking of alibis (through GSM
location data).

Police investigators can compensate for a possible lack of traffic data by
other investigative methods such as wiretapping, surveillance, a
preservation order for traffic data and a longer investigative period. The
report states that police officers seldom ask for traffic data older than
six months.

The report was never sent to the Dutch parliament although members of
parliament previously asked for research results about the effectiveness
of mandatory data retention. After Bits of Freedom published the report,
new questions were raised in the Dutch parliament about the reason
for withholding the report.

The use of (historic) traffic data in investigations (April 2003, in Dutch)

(Contribution by Maurice Wessling, EDRI-member Bits of Freedom)


Re: Linux-based wireless mesh suite adds crypto engine support

2004-10-04 Thread John Gilmore
> - sufficient documentation and really transparent provable details so that
>   users could trust and verify that the hardware and software were doing what
>   they claimed to be doing and weren't doing anything evil that they didn't
>   admit to, such as including backdoors or bad random number generators.
> Tinfoil hat stuff - why trust any crypto hardware then?

I don't -- do you?

Crypto hardware that does algorithms can be tested by periodically
comparing its results to a software implementation.  Production
applications should probably be doing this -- maybe 1% of the time.

Crypto hardware that generates random numbers can't be tested in
production in many useful ways.  My suggestion would be to XOR a
hardware-generated and a software-generated random number stream.  If
one fails, whether by accident, malice, or design, the other will
still randomize the resulting stream.  Belt AND suspenders will keep
your source of randomness from being your weakest link.
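The XOR suggestion can be sketched in a few lines. This is a hypothetical illustration, not production code: `os.urandom` stands in for the hardware RNG and `secrets.token_bytes` for the software generator, since the actual sources would be platform-specific.

```python
import os
import secrets

def combined_random(n):
    """XOR an n-byte 'hardware' stream with an independent software stream.

    os.urandom stands in here for a hardware RNG and secrets.token_bytes
    for a software generator -- both are stand-ins for illustration.
    """
    hw = os.urandom(n)           # stand-in for the hardware RNG output
    sw = secrets.token_bytes(n)  # stand-in for the software RNG output
    return bytes(a ^ b for a, b in zip(hw, sw))

# If one source fails completely (say, the hardware starts emitting all
# zeros), the XOR output equals the surviving stream, so the combined
# result is never weaker than the better of the two sources.
dead_hw = bytes(16)              # simulated dead hardware RNG
sw = secrets.token_bytes(16)
assert bytes(a ^ b for a, b in zip(dead_hw, sw)) == sw
```

The caveat, of course, is that the two streams must be generated independently; a hardware device that can observe and cancel the software stream defeats the construction.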



Re: EZ Pass and the fast lane ....

2004-07-09 Thread John Gilmore
> It would be relatively easy to catch someone
> doing this - just cross-correlate with other
> information (address of home and work) and
> then photograph the car at the on-ramp.

Am I missing something?

It seems to me that EZ Pass spoofing should become as popular as
cellphone cloning, until they change the protocol.  You pick up a
tracking number by listening to other peoples' transmissions, then
impersonate them once so that their account gets charged for your toll
(or so that it looks like their car is traveling down a monitored
stretch of road).  It should be easy to automate picking up dozens or
hundreds of tracking numbers while just driving around; and this can
foil both track-the-whole-populace surveillance, AND toll collection.
Miscreants would appear to be other cars; tracking them would not
be feasible.

The rewriteable parts of the chip (for recording the entry gate to
charge variable tolls) would also allow one miscreant to reprogram the
transponders on hundreds or thousands of cars, mischarging them when
they exit.  Of course, the miscreant's misprogrammed transponder would
just look like one of the innocents who got munged.

[I believe, by the way, that the EZ Pass system works just like many
other chip-sized RFID systems.  It seems like a good student project
to build some totally reprogrammable RFID chips that will respond to a
ping with any info statically or dynamically programmed into them by
the owner.  That would allow these hypotheses to be experimentally tested.]



Re: EZ Pass and the fast lane ....

2004-07-09 Thread John Gilmore
[By the way, [EMAIL PROTECTED] is being left out of this conversation,
 by his own configuration, because his site censors all emails from me.  --gnu]

> Well, I am presuming that ... the EZ Pass does have an account
> number, right?  And then, the car does have a licence place?  So,
> just correlate the account numbers with the licence plates as they
> go through the gates.

If they could read the license plates reliably, then they wouldn't
need the EZ Pass at all.  They can't.  It takes human effort, which is
in short supply.

> The thing about phones is that they have no licence plates and no
> toll gates.  Oh, and no cars.

Actually, cellphones DO have other identifying information in them,
akin to license plates.  And their toll gates are cell sites.

It's not clear what your remark about phones having no cars has to do
with the issue of whether EZ Pass is likely to be widely spoofed.

> What incentive does a miscreant have to reprogram hundreds or
> thousands of other cars???

(1) Same one they have for releasing viruses or breaking into
thousands of networked systems.  Because they can; it's a fun way to
learn.  Like John Draper calling the adjacent phone booth via
operators in seven countries.  (2) The miscreant gets a cheap toll
along with hundreds of other people who get altered tolls.

[Cory Doctorow's latest novel (Eastern Standard Tribe, available free
online, or in bookstores) hypothesizes MP3-trading networks among
moving cars, swapping automatically with whoever they pass near enough
for a short range WiFi connection.  Sounds plausible to me; there are
already MP3 players with built-in short range FM transmitters, so
nearby cars can hear your current selection.  Extending that to faster
WiFi transfers based on listening preferences would just require a
simple matter of software.  An iPod built by a non-DRM company might
well offer such a firmware option -- at least in countries where
networking is not a crime.  Much of the music I have is freely redistributable.]



Re: A National ID: AAMVA's Unique ID

2004-06-17 Thread John Gilmore
> The solution then is obvious, don't have a big central database. Instead use
> a distributed database.

Our favorite civil servants, the Departments of Motor Vehicles, are about
to do exactly this to us.

They call it "Unique ID" and their credo is: "One person, one license,
one record."  They swear that it isn't national ID, because national
ID is disfavored by the public.  But it's the same thing in
distributed-computing clothes.

The reason they say it isn't a national ID is because it's 50 state
IDs (plus US territories and Canadian provinces and Mexican states) --
but the new part is that they will all be linked by a continent-wide
network.  Any official who looks up your record from anywhere on the
continent will be able to pull up that record.  Anyplace you apply for
a state license or ID card, they will search the network, find your
old record (if you have one) and transfer it to that state.  So
there's no way to escape your past record, and no way to get two cards
(in the absence of successful fraud, either by citizens or DMV employees).

This sure smells to me like national ID.

This, like the MATRIX program, is the brainchild of the federal
Department of inJustice.  But those wolves are in the sheepskins of
state DMV administrators, who are doing the grassroots politics and
the actual administration.  It is all coordinated in periodic meetings
by AAMVA, the American Association of Motor Vehicle Administrators
(  Draft bills to join the Unique ID Compact, the
legally binding agreement among the states to do this, are already
being circulated in the state legislatures by the heads of state DMVs.
The idea is to sneak them past the public, and past the state
legislators, before there's any serious public debate on the topic.

They have lots of documents about exactly what they're up to.  See  Unfortunately for us, the real
documents are only available to AAMVA members; the affected public is
not invited.

Robyn Wagner and I have tried to join AAMVA numerous times, as FreeToTravel.  We think that we have something to say about the
imposition of Unique ID on an unsuspecting public.  They have rejected
our application every time -- does this remind you of the Hollywood
copy-prevention standards committees?  Here is their recent
rejection letter:

  Thank you for submitting an application for associate membership in AAMVA.
  Unfortunately, the application was denied again. The Board is not clear as
  to how FreeToTravel will further enhance AAMVA's mission and service to our
  membership. We will be crediting your American Express for the full amount.

  Please feel free to contact Linda Lewis at (703) 522-4200 if you would like
  to discuss this further.

  Dianne E. Graham 
  Director, Member and Conference Services 
  4301 Wilson Boulevard, Suite 400 
  Arlington, VA 22203 
  T: (703) 522-4200 | F: (703) 908-5868  

At the same time, they let in a bunch of vendors of high security ID
cards as associate members.

AAMVA, the 'guardians' of our right to travel and of our identity
records, doesn't see how listening to citizens concerned with the
erosion of exactly those rights and records would enhance their
mission and service.  Their mission appears to be to ram their
secret policy down our throats.  Their service is to take our tax
money, use it to label all of us like cattle with ear-tags, and deny
us our constitutional right to travel unless we submit to being tagged.

We protest.  Do you?

John Gilmore


Re: Passwords can sit on disk for years

2004-06-09 Thread John Gilmore
> Really, a red page needs to be red all the way through all levels of
> virtualization.  Very low level, or even hardware, support might even prove
> useful - e.g., if for whatever reason the data in the physical page frame
> needs to be copied (after a soft ECC error?), zero the previous page frame.)

Intel, Microsoft and Hollywood are solving this for us.  Their new
hardware can't be virtualized, so it can't leak the
monopolists/oligopolists' keys.  In their scheme, of course, OUR keys
don't get the same level of protection as monopolist keys.



Re: digsig - when a MAC or MD is good enough?

2004-01-03 Thread John Gilmore
> Sarbanes-Oxley Act in the US.  Section 1102 of that act:
>  Whoever corruptly--
>     (1) alters, destroys, mutilates, or conceals a
>     record, document, or other object, or attempts to
>     do so, with the intent to impair the object's
>     integrity or availability for use in an official
>     proceeding; ...
>  shall be fined under this title or imprisoned not
>  more than 20 years, or both.

The flaw in this ointment is the intent requirement.  Corporate
lawyers regularly advise their client companies to shred all
non-essential records older than, e.g. two years.  The big reason to
do so is to impair their availability in case of future litigation.
But if that intent becomes illegal, then the advice will be to shred
them to reduce clutter or to save storage space.

> Can we surmise that a digital record with an MD attached and
> logged would fall within "object"?

What's the point of keeping a message digest of a logged item?  If the
log can be altered, then the message digest can be altered to match.
(Imagine a sendmail log file, where each line is the same as now, but
ends with the MD of the line in some gibberish characters...)
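The point can be demonstrated in a toy sketch. The log format and field names here are hypothetical, with SHA-256 standing in for whatever message digest is used:

```python
import hashlib

def log_line(msg):
    # Append the line's own digest -- the scheme being questioned above.
    digest = hashlib.sha256(msg.encode()).hexdigest()[:16]
    return f"{msg} MD={digest}"

def verify(line):
    # Recompute the digest from the line's content and compare.
    msg, _, digest = line.rpartition(" MD=")
    return hashlib.sha256(msg.encode()).hexdigest()[:16] == digest

original = log_line("from=alice to=bob status=sent")
# Anyone who can alter the log can simply recompute the digest, so a
# rewritten line verifies exactly as well as the original did:
forged = log_line("from=alice to=mallory status=sent")
assert verify(original) and verify(forged)
```

A digest only adds value if the attacker cannot recompute it: keyed with a secret the attacker lacks (a MAC), or anchored somewhere outside the writable log.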



Re: hiding attestation from the consumer

2003-12-31 Thread John Gilmore
> There isn't really any security benefit obtained by hiding
> the content of the attestation _from the party providing it_!

This statement reveals confusion between the parties.  There are at least
three parties involved in an attestation:

  *  The DRM'd product vendor (somewhere on the net)
  *  The consumer (sitting at their PC)
  *  The PC hardware and software vendors (building attestation in)

There are strong reasons to hide the content of the attestation -- or
even its mere existence -- from the consumer party.  If consumers knew
their PCs were spying on them and letting vendors say, "Sorry, our
server is down today" not because the server is down, but because the
consumer's PC is blacklisted, then consumers would be upset.  It's a
much simpler customer relations problem if it just doesn't happen to
work, without the consumer ever finding out that they live in a
redlined neighborhood and it will NEVER work for them.

It's really easy to infer that DRM problems are going to be
deliberately inscrutable.  You don't see DRM vendors advertising the
restrictions on their products.  These restrictions aren't in boldface
in the table of contents.  They're hidden deep in the guts of the
manual, if they appear at all.  (In the list of error messages is
where you usually find 'em, with a very brief mention.)  It's the
consumer's fault, or their ISP's fault, or somebody else's, if the
site doesn't work for you.  If your DAT recorder won't record, you
must have cabled it up wrong.  If your HDTV won't work, you ran it
through your VCR by mistake.  And if your music site won't download to
you, you must have installed your software wrong, or there's a
firewall problem, or your codecs are incompatible, or something.  When
the entire goal is to covertly change consumer behavior, by making
things that are utterly legal simply NOT WORK, plain language about
the restrictions has no place.  Consumer problems caused by DRM are
seldom advertised, documented, or reported as the DRM's fault.

You can get a similar effect merely by turning off cookies and
JavaScript today.  (You *do* use a browser that has simple switches to
turn these off, right?  Mozilla is your friend, and it runs on your
platform.)  Web sites will start to fail at random, in inscrutable
ways.  Only about 1% of them will tell you "This site requires
JavaScript" -- and of those that do, only about a quarter of them
actually do require it.

John Gilmore


Re: Difference between TCPA-Hardware and other forms of trust

2003-12-18 Thread John Gilmore
> | means that some entity is supposed to trust the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | trust such a kernel, something very very fishy is going on.
>
> Why?  If I'm going to use a time-shared machine, I have to trust that the
> OS will keep me protected from other users of the machine.  All the other
> users have the same demands.  The owner of the machine has similar demands.

I used to run a commercial time-sharing mainframe in the 1970's.
Jerrold's wrong.  The owner of the machine has desires (what he calls
"demands") different from those of the users.

The users, for example, want to be charged fairly; the owner may not.
We charged every user for their CPU time, but only for the fraction that
they actually used.  In a given second, we might charge eight users
for different parts of that fraction.

Suppose we charged those eight users amounts that added up to 1.3
seconds?  How would they know?  We'd increase our prices by 30%, in
effect, by charging for 1.3 seconds of CPU for every one second that
was really expended.  Each user would just assume that they'd gotten a
larger fraction of the CPU than they expected.  If we were tricky
enough, we'd do this in a way that never charged a single user for
more than one second per second.  Two users would then have to collude
to notice that they together had been charged for more than a second
per second.
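The overbilling trick above is easy to sketch. This is my own illustration, not the actual billing code: the 1.3x markup and the eight users come from the example in the text, everything else (function name, the particular shares) is assumed.

```python
# Sketch of the overcharging scheme: scale every user's true CPU
# fraction by 1.3, but never bill any single user more than one
# second per wall-clock second, so no user sees anything impossible.

def bill(true_fractions, markup=1.3):
    """true_fractions: each user's real share of one second of CPU."""
    assert abs(sum(true_fractions) - 1.0) < 1e-9
    return [min(f * markup, 1.0) for f in true_fractions]

# Eight users share one wall-clock second of CPU.
shares = [0.55, 0.38, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01]
charges = bill(shares)

# No single user is billed more than one second per second...
assert all(c <= 1.0 for c in charges)
# ...yet the machine bills 1.3 seconds of CPU per wall-clock second,
assert abs(sum(charges) - 1.3) < 1e-9
# and only colluding users (here, the two heaviest) can notice that
# together they were charged more than a second per second.
assert charges[0] + charges[1] > 1.0
```

The point of the cap is exactly the collusion requirement: any single bill, viewed alone, is consistent with an honest machine.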

(Our CPU pricing was actually hard to manage as we shifted the load
among different mainframes that ran different applications at
different multiples of the speed of the previous mainframe.  E.g. our
Amdahl 470/V6 price for a CPU second might be 1.78x the price on an
IBM 370/158.  A user's bill might go up or down from running the same
calculation on the same data, based on whether their instruction
sequences ran more efficiently or less efficiently than average on the
new CPU.  And of course if our changed average price was slightly
different than the actual CPU performance, this provided a way to
cheat on our prices.

Our CPU accounting also changed when we improved the OS's timer
management, so it could record finer fractions of seconds.  On average,
this made the system fairer.  But your application might suffer, if its
pattern of context switches had been undercharged by the old algorithm.)

The users had to trust us to keep our accounting and pricing fair.
System security mechanisms that kept one user's files from access by
another could not do this.  It required actual trust, since the users
didn't have access to the data required to check up on us (our entire
billing logs, and our accounting software).

TCPA is being built specifically at the behest of Hollywood.  It is
built around protecting content from subscribers for the benefit
of a service provider.  I know this because I read, and kept, all
the early public design documents, such as the white paper

(This is no longer available from the web site, but I have a copy.)
It says, on pages 7-8:

  The following usage scenarios briefly illustrate the benefits of TCPA

  Scenario I: Remote Attestation

  TCPA remote attestation allows an application (the challenger) to
  trust a remote platform. This trust is built by obtaining integrity
  metrics for the remote platform, securely storing these metrics and
  then ensuring that the reporting of the metrics is secure.

  For example, before making content available to a subscriber, it is
  likely that a service provider will need to know that the remote
  platform is trustworthy. The service provider's platform (the
  challenger) queries the remote platform. During system boot, the
  challenged platform creates a cryptographic hash of the system BIOS,
  using an algorithm to create a statistically unique identifier for the
  platform. The integrity metrics are then stored.

  When it receives the query from the challenger, the remote platform
  responds by digitally signing and then sending the integrity
  metrics. The digital signature prevents tampering and allows the
  challenger to verify the signature. If the signature is verified, the
  challenger can then determine whether the identity metrics are
  trustworthy. If so, the challenger, in this case the service provider,
  can then deliver the content. It is important to note that the TCPA
  process does not make judgments regarding the integrity metrics. It
  merely reports the metrics and lets the challenger make the final
  decision regarding the trustworthiness of the remote platform.
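The quoted scenario maps onto a short sketch. This is my own illustration, not TCPA code: the function names are invented, and an HMAC over a shared key stands in for the TPM's real public-key signature; only the flow (hash at boot, sign on challenge, challenger decides) follows the white paper.

```python
import hashlib
import hmac
import secrets

def boot_measure(bios_image: bytes) -> bytes:
    # At boot, the platform hashes its BIOS to form an integrity metric.
    return hashlib.sha256(bios_image).digest()

def respond_to_challenge(metric: bytes, platform_key: bytes):
    # The platform signs the stored metric before reporting it, so the
    # challenger can detect tampering in transit.
    return metric, hmac.new(platform_key, metric, hashlib.sha256).digest()

def challenger_verify(metric, signature, platform_key, known_good) -> bool:
    # Verify the signature, then judge the metric itself.  Note that the
    # reporting mechanism makes no judgment; deciding which metrics are
    # "trustworthy" is entirely the challenger's (service provider's) call.
    expected = hmac.new(platform_key, metric, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False
    return metric in known_good

key = secrets.token_bytes(32)
bios = b"vendor BIOS v1.0"
metric = boot_measure(bios)
metric, sig = respond_to_challenge(metric, key)

# The challenger accepts the known-good BIOS...
assert challenger_verify(metric, sig, key, {hashlib.sha256(bios).digest()})
# ...and rejects a metric that doesn't match the reported signature.
assert not challenger_verify(hashlib.sha256(b"patched").digest(), sig, key, {metric})
```

The last line is the business end for a content provider: a platform whose measurements don't match the approved list simply doesn't get the content.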

They eventually censored out all the sample application scenarios like
DRM'd online music, and ramped up the level of jargon significantly,
so that nobody reading it can tell what it's for any more.  Now all
the documents available at that site go on for pages and pages saying
things like "FIA_UAU.1 Timing of authentication. Hierarchical to: No
other components. FIA_UAU.1.1 The TSF ..."

The RIAA Succeeds Where the CypherPunks Failed

2003-12-18 Thread John Gilmore
Sent: Wednesday, December 17, 2003 12:29 PM
Subject: [NEC] #2.12: The RIAA Succeeds Where the CypherPunks Failed

NEC @, a mailing list about Networks, Economics, and Culture

Published periodically / #2.12 / December 17, 2003
Subscribe at
   Archived at
   Social Software weblog at

In this issue:

  - Introduction
  - Essay: The RIAA Succeeds Where the Cypherpunks Failed
  Also at
  - Worth Reading:
 - GrokLaw: MVP of the SCO Wars
 - Tom Coates Talks With A Slashdot Troller

* Introduction ===

The end of another year. Thank you all for reading. See you in January.


* Essay ==

The RIAA Succeeds Where the Cypherpunks Failed

For years, the US Government has been terrified of losing surveillance
powers over digital communications generally, and one of their biggest
fears has been broad public adoption of encryption. If the average user
were to routinely encrypt their email, files, and instant messages,
whole swaths of public communication currently available to law
enforcement with a simple subpoena (at most) would become either
unreadable, or readable only at huge expense.

The first broad attempt by the Government to deflect general adoption of
encryption came 10 years ago, in the form of the Clipper Chip.  The
Clipper Chip was part of a
proposal for a secure digital phone that would only work if the
encryption keys were held in such a way that the Government could get to
them. With a pair of Clipper phones, users could make phone calls secure
from everyone except the Government.

Though opposition to Clipper by civil liberties groups was swift and
extreme [1] the thing that killed it was work by Matt Blaze, a Bell Labs
security researcher, showing that the phone's wiretap capabilities could
be easily defeated [2], allowing Clipper users to make calls that even
the Government couldn't decrypt. (Ironically, AT&T had designed the
phones originally, and had a contract to sell them before Blaze sunk the


The Government's failure to get the Clipper implemented came at a heady
time for advocates of digital privacy -- the NSA was losing control of
cryptographic products, Phil Zimmerman had launched his Pretty Good
Privacy (PGP) email program, and the Cypherpunks, a merry band of
crypto-loving civil libertarians, were on the cover of
the second
issue of Wired. The floodgates were opening, leading to...

...pretty much nothing. Even after the death of Clipper and the launch
of PGP, the Government discovered that for the most part, users didn't
_want_ to encrypt their communications. The single biggest barrier to
the spread of encryption has turned out to be not control but apathy.
Though business users encrypt sensitive data to hide it from one
another, the use of encryption to hide private communications from the
Government has been limited mainly to techno-libertarians and a small
criminal class.

The reason for this is the obvious one: the average user has little to
hide, and so hides little. As a result, 10 years on, e-mail is still
sent as plain text, files are almost universally unsecured, and so on.
The Cypherpunk fantasy of a culture that routinely hides both legal and
illegal activities from the state has been defeated by a giant
distributed veto. Until now.

It may be time to dust off that old issue of Wired, because the RIAA is
succeeding where 10 years of hectoring by the Cypherpunks failed. When
shutting down Napster turned out to have all the containing effects of
stomping on a tube of toothpaste, the RIAA switched to suing users
directly. This strategy has worked much better than shutting down
Napster did, convincing many users to stop using public file sharing
systems, and to delete MP3s from their hard drives. However, to sue
users, they had to serve a subpoena, and to do that, they had to get
their identities from the user's internet service providers.

Identifying those users has had a second effect, and that's to create a
real-world version of the scenario that drove the invention of
user-controlled encryption in the first place. Whitfield Diffie,
inventor of public key encryption, the
strategy that underlies most of today's cryptographic products, saw the
problem as a version of "Who will guard the guardians?"

In any system where a user's 

Re: US antispam bill is death to anonymity

2003-11-24 Thread John Gilmore
 No, it only makes it illegal to use false or misleading information to
 send commercial e-mail.  That's a rather important distinction.

So, I get non-commercial emails all the time, from topica mailing
lists and from people forwarding New York Times articles and such.
They come with embedded ads, that the sender cannot turn off.  These
ads are for the benefit of the helper site (e.g. topica).  Are these
messages commercial email, or not?  Is the sender penalized if their
email address or domain name was registered with privacy-protecting
circumlocutions (like addresses and cities of "123 Main St, Smallville")?

So, I get emails at various times from people I've never met, saying,
"I hear that you give money for drug policy reform, would you give
some to my nonprofit X for project Y?"  Is that a commercial email?
It proposes a financial transaction.  Are these people subject to the
anti-spam bill?  Do they have to do anything different in their lives
if it passes?  I think they will.

The larger point is that people in the United States don't generally
have to closely examine the content of their daily communications,
to censor out any possible mention of commerce, money, business, finance,
products, services, etc, to avoid legal liability.  We have a First
Amendment right to communicate without being penalized for our
communications.  We also have a right to speak without the government
putting words in our mouth (like requiring us to put in keywords,
or include a postal return address.  That last requirement was
deliberately knocked down by the Supreme Court within the last few years,
building on existing precedents that protected anonymous speech.)

 Don't take my word for what the bill says, read it yourself.  It's not
 that long.

He's right.  Congress should be commended for only spending 55 pages
on the details of this important topic.

 There's plenty of things wrong with it, but outlawing all
 anonymous mail isn't one of them.

No, but outlawing anonymizers *is* one of them.  Anyone who wants to
get an anonymizer shut down can just send a commercial email through it.



US antispam bill is death to anonymity

2003-11-22 Thread John Gilmore
This bill makes it a crime to use any false or misleading information
in a domain name or email account application, and then send an email.
That would make a large fraction of hotmail users instant criminals.

It also makes it a crime to remove or alter information in message
headers in ways that would make it harder for a police officer
to determine who had sent the email.  Anonymizers will be illegal
as soon as this bill becomes law.
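The header alteration the bill targets is trivially simple, which is what makes the criminalization so broad. A hypothetical sketch of what any remailer/anonymizer does (the addresses and hostnames are invented for illustration):

```python
import email

# A message as it would arrive at the anonymizer, with the headers
# that let anyone trace it back to the sender.
raw = """\
Received: from sender.example by relay.example; Sat, 22 Nov 2003 10:00:00 -0800
From: alice@sender.example
To: bob@dest.example
Subject: hello

body text
"""

msg = email.message_from_string(raw)

# Remove the tracing headers -- exactly the alteration that would
# "make it harder for a police officer to determine who had sent it".
for header in ("Received", "From"):
    del msg[header]
msg["From"] = "nobody@anonymizer.example"

# The sender's identity no longer appears anywhere in the message.
assert "alice" not in msg.as_string()
```

Every remailer in operation performs some variant of these few lines on every message it forwards.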

There are MANY, MANY other things wrong with it -- including the fact
that most of its provisions apply to *ALL* commercial email, not just
BULK commercial email -- and that it takes zero account of the First
Amendment, attempting to list what topics someone can validly send
messages about, while outlawing all other topics that relate to
commercial transactions.

If it passes, I think I can make a criminal out of just about any
company.  Companies are liable for spam that helps them, even if they
had no part in sending it.

Read the bill yourself:
And weep.  And then call your Congressman.

Everyone's common sense goes out the window when the topic is spam.
They're willing to sacrifice whatever principles they have.  And
you already know how few principles Congress had left.  


Congress Poised for Vote on Anti-Spam Bill
Declan McCullagh
Published: November 21, 2003

Congress has reached an agreement on antispam legislation and could
vote on it as early as Friday afternoon, a move that would end more
than six years of failed attempts to enact a federal law restricting
unsolicited commercial e-mail.

Negotiators from the U.S. Senate and House of Representatives said
Friday that the legislation was "a historic accomplishment" with
support from key Democrats and Republicans in both chambers. "For the
first time during the Internet era, American consumers will have the
ability to say no to spam," House Energy and Commerce Committee
Chairman Billy Tauzin, R-La., said in a statement. [...]

If the measure becomes law, certain forms of spam will be officially
legalized. The final bill says spammers may send as many commercial
electronic mail messages as they like--as long as the messages are
obviously advertisements with a valid U.S. postal address or P.O. box
and an unsubscribe link at the bottom. Junk e-mail essentially would
be treated like junk postal mail, with nonfraudulent e-mail legalized
until the recipient chooses to unsubscribe. [...]

One hotly contested dispute has been resolved: The bill would pre-empt
more restrictive state laws, including one that California enacted in
September. That law established an opt-in standard and was scheduled
to take effect on Jan. 1. With final passage of this bill, the core of
California's law would never take effect. [...]

---

Politech mailing list
Archived at
Moderated by Declan McCullagh (


Re: Monoculture / Guild

2003-10-03 Thread John Gilmore
 ... it does look very much from the outside that there is an
 informal Cryptographers Guild in place...

The Guild, such as it is, is a meritocracy; many previously unknown
people have joined it since I started watching it in about 1990.

The way to tell who's in the Guild is that they can break your protocols
or algorithms, but you can't break theirs.

While there are only hundreds of serious members of the Guild -- a
comfortable number for holding conferences on college campuses -- I
think just about everyone in it would be happier if ten times as many
people were as involved as they are in cryptography and security.
Then ten times as many security systems that everybody (including the
Guild members) depends on would be designed properly.  They certainly
welcomed the Cypherpunks to learn (and to join if they were serious).

I consider myself a Guild Groupie; I don't qualify but I think
they're great.  I follow in their footsteps and stand on their shoulders.

Clearly there are much larger numbers of Guild Groupies than Guild
members, or Bruce Schneier and Neal Stephenson wouldn't be able to
make a living selling books to 'em.  :-)


PS: Of course there's a whole set of Mystic Secret Guilds of
Cryptography.  We think our openness will defeat their closedness,
like the free world eventually beat the Soviet Union.  There are some
good examples of that, such as our Guild's realization of the
usefulness of public-key crypto (we reinvented it independently, but they
hadn't realized what a revolutionary concept they already had).  Then
again, they are better funded than we are, and have more exemptions
from legal constraints (e.g. it's hard for us to do production
cryptanalysis, which is really useful when learning to design good
ciphers).


DirecTV Hacker Is First Person Convicted Under DMCA

2003-09-24 Thread John Gilmore
DirecTV Hacker Is First Person Convicted Under Digital Millennium Copyright Act
Man Faces 30 Years In Prison, Millions In Fines For Selling Illegal Hardware

UPDATED: 1:51 p.m. PDT September 22, 2003

Spertus said Whitehead -- also known as "Jungle Mike" -- paid a 
co-conspirator $250 a month to continually update software to circumvent 
the latest DirecTV security measures. Whitehead then used the software to 
create and sell modified DirecTV access cards, the prosecutor said.

The conduct violated the DMCA, which bars trafficking in technology 
primarily designed to get around security measures to access a 
copyrighted work.

Copyright 2003 by All rights reserved. This material may not be 
published, broadcast, rewritten or redistributed.  No fair uses of this
material may be made.  (I added that last sentence myself.)


Re: Who needs secure wireless / tappable wireless infrastructure

2003-09-09 Thread John Gilmore
   And this says nothing at all about the need for tactical
 military wiretaps on GSM systems under battlefield conditions when
 soldiers lives may depend on determining what the enemy is saying over
 cellphones used to direct attacks against friendly forces.

Or when innocent civilians need secure wireless infrastructures, to be
able to coordinate to avoid murderous US military forces.  See, for

which I found via SF writer James P. Hogan's blog:

Prudent citizens should now know that before stepping into the street
to hail a taxi, they should use a secure phone to determine whether
any American tanks are in the area.  But beware of American
direction-finding equipment -- make those calls short!



Re: Code breakers crack GSM cellphone encryption

2003-09-09 Thread John Gilmore
 See their paper at CRYPTO 2003 for more details.  I am disappointed that
 you seem to be criticizing their work before even reading their paper.
 I encourage you to read the paper -- it really is interesting.

OK, then, where is it?  I looked on:

   under Crypto 2003 -- no papers there.
   under Conference Proceedings -- Crypto 2003 -- not there.
   under Cryptology ePrint archive -- no Biham or GSM papers there.
   -- latest paper is from 2000.
   -- access denied.
   -- a news item about the GSM crack, but no paper.

The title of the paper, presented in Session 15, is:

Instant Ciphertext-Only Cryptanalysis of GSM Encrypted Communication
Elad Barkan, Eli Biham, Nathan Keller

I'm even a dues-paying IACR member, but I don't seem to have online
access to the papers from recent conferences.  I'm sure a copy will
show up in the mail a few months from now.  Let me guess -- the devils
at Springer-Verlag have stolen IACR's copyrights and the researchers
were dumb enough to hand their copyright to IACR...

