Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-11 Thread Chris Palmer
On Tue, Sep 10, 2013 at 2:04 PM, Joe Abley jab...@hopcount.ca wrote:

 As an aside, I see CAs with Chinese organisation names in my browser list.

I wouldn't pick on/fear/call out the Chinese specifically.

Also, be aware that browsers must transitively trust all the issuers
that the known trust anchors have issued issuing certificates for.
That's a much bigger set, and is not currently fully knowable.
(Another reason to crave Certificate Transparency.)


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Chris Palmer
On Sat, Sep 7, 2013 at 1:33 AM, Brian Gladman b...@gladman.plus.com wrote:

 Why would they perform the attack only for encryption software? They
 could compromise people's laptops by spiking any popular app.

 Because NSA and GCHQ are much more interested in attacking communications
 in transit rather than attacking endpoints.

So they spike a popular download (security-related apps are less
likely to be popular) with a tiny malware add-on that scans every file
that it can read to see if it's an encryption key, cookie, password
db, whatever — any credential-like thing. The malware uploads any hits
to the mothership, then exits (possibly cleaning up after itself).
Trivial to do, golden results.

But really, why not leave a little C&C pinger behind? Might as well;
you never know when it will be useful.

Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-06 Thread Chris Palmer
 Q: Could the NSA be intercepting downloads of open-source encryption 
 software and silently replacing these with their own versions?

Why would they perform the attack only for encryption software? They
could compromise people's laptops by spiking any popular app.


Re: [Cryptography] People should turn on PFS in TLS

2013-09-06 Thread Chris Palmer
On Fri, Sep 6, 2013 at 5:34 PM, The Doctor dr...@virtadpt.net wrote:

 Symmetric cipher RC4 (weak 10/49)
 Symmetric key length 128 bits (weak 8/19)
 Cert issued by Google, Inc, US SHA-1 with RSA @ 2048 bit (MODERATE 2/6)

First time I've heard of 128-bit symmetric called weak... Sure, RC4
isn't awesome but they seem to be saying that 128-bit keys per se are
weak.

 Let's contrast this with ChaosPad:
 Symmetric cipher Camellia (STRONG 39/39)
 Symmetric key length 256 bits (STRONG 19/19)
 Cert issued by CAcert, Inc. SHA-1 with RSA @ 4096 bit (MODERATE 2/6)

Without good server authentication, the other stuff doesn't matter.
With Chrome, you get key pinning when talking to some sites (including
Google sites, Tor, and Twitter); I'd much rather have that and only
128-bit symmetric. Also, I don't know why you weren't getting forward
secrecy; check your Firefox configuration.


Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Chris Palmer
Thor Lancelot Simon writes:

 a significant net loss of security, since the huge increase in computation
 required will delay or prevent the deployment of SSL everywhere.

That would only happen if we (as security experts) allowed web developers to
believe that the speed of RSA is the limiting factor for web application
performance.

That would only happen if we did not understand how web applications work.

Thankfully, we do understand how web applications work, and we therefore
advise our colleagues and clients in a way that takes the whole problem
space of web application security/performance/availability into account.

Sure, 2048 is overkill. But our most pressing problems are much bigger and
very different. The biggest security problem, usability, rarely involves any
math beyond rudimentary statistics...


-- 
http://noncombatant.org/



Re: Hashing algorithm needed

2010-09-08 Thread Chris Palmer
f...@mail.dnttm.ro writes:

 The idea is the following: we don't want to secure the connection,

Why not?

Using HTTPS is easier than making up some half-baked scheme that won't work
anyway.


-- 
http://noncombatant.org/



Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread Chris Palmer
Richard Salz writes:

 A really knowledgeable net-head told me the other day that the problem
 with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
 are now more prohibitive than the crypto costs.  I was quite surprised to
 hear this; he was stunned to find it out.

Cryptographic operations are measured in cycles (i.e. nanoseconds now);
network operations are measured in milliseconds. That should not be a
stunning surprise.

What is neither stunning nor surprising, but continually sad, is that web
developers don't measure anything. Predictably, web app performance is
unnecessarily terrible.

I once asked some developers why they couldn't use HTTPS. "Performance!" was
the cry.

"OK," I said. "What is your performance target, and by how much does HTTPS
make you miss it? Maybe we can optimize something so you can afford HTTPS
again."

"As fast as possible!!!" was the response.

When I pointed out that their app sent AJAX requests and responses that were
tens or even hundreds of KB every couple seconds, and that as a result their
app was barely usable outside their LAN, I was met with blank stares.

Did they use HTTP persistent connections, TLS session resumption, or text
content compression? Did they maximize HTTP caching? I think you can guess. :)

Efforts like SPDY are the natural progression of organizations like Google
*WHO HAVE ALREADY OPTIMIZED EVERYTHING ELSE*. Until you've optimized the
content and application layers, worrying about the transport layers makes no
sense. A bloated app will still be slow when transported over SPDY.

Developers are already under the dangerous misapprehension that TLS is too
expensive. When they hear security experts and cryptographers mistakenly
agree, the idea will stick in their minds forever; we will have failed.

The problem comes from insufficiently broad understanding: the sysadmins
fiddle their kernel tuning knobs, the security people don't understand how
applications work, and the developers call malloc 5,000 times and perform
2,500 I/O ops just to print "Hello, World". The resulting software is
unsafe, slow, and too expensive.



Re: Has there been a change in US banking regulations recently?

2010-08-14 Thread Chris Palmer
Anne & Lynn Wheeler writes:

 subset ... was based on computational load caused by SSL cryptography 
 in the online merchant scenario, it cut thruput by 90-95%; alternative to
 handle the online merchant scenario for total user interaction would have
 required increasing the number of servers by factor of 10-20.

When was this *ever* true? Seriously.

N-tier applications are I/O-bound in at least N ways. Common development
practice leads to excessive message sizes in most of the N tiers. Rare is
the web site with a low N and small message sizes. Fire up your HTTP proxy
and/or packet sniffer and compare Google to, well, just about everyone else.

Now that your packet sniffer and/or proxy is started up, it is also a fun
drinking game to play Spot The SSL Server Misconfiguration. When you see
HTTP persistence turned off, take a shot. When you see TLS session
resumption turned off or sessions destroyed after a short period of time,
take a shot. When you see a new TLS session for each new HTTP request, take
three shots (in addition to the shot you probably took when noticing HTTP
persistence turned off, because, let's face it, it probably was). If the
misconfiguration is at a CDN, take three shots and smash your face with a
frying pan.

The game is usually played with Jaegermeister, but if you are testing a bank
site, you can afford good scotch. The adjudicating committee rarely
disqualifies a contestant on the basis of liquor, although I'd still advise
against the use of vodka.
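
If you want a quick way to check the session resumption part of the game from
the client side, a rough sketch like the following works: connect twice with
the same default SSLSocketFactory and compare session IDs. The host name here
is just a placeholder, and a matching ID is only an indicator; whether
resumption actually happens also depends on the server and on the TLS stack.

  import java.util.Arrays;
  import javax.net.ssl.SSLSocket;
  import javax.net.ssl.SSLSocketFactory;

  public class ResumptionCheck {
      public static void main(String[] args) throws Exception {
          // Hypothetical example host; substitute the server you are testing.
          String host = args.length > 0 ? args[0] : "www.example.com";
          SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
          byte[] first = handshake(factory, host);
          byte[] second = handshake(factory, host);
          System.out.println(Arrays.equals(first, second)
              ? "Same session ID twice: resumption looks enabled."
              : "Different session IDs: full handshake each time.");
      }

      // Handshake once and return the negotiated session ID.
      private static byte[] handshake(SSLSocketFactory factory, String host) throws Exception {
          SSLSocket socket = (SSLSocket) factory.createSocket(host, 443);
          try {
              socket.startHandshake();
              return socket.getSession().getId();
          } finally {
              socket.close();
          }
      }
  }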



Re: phpwn: PHP cookie PRNG flawed (Netscape redux)

2010-08-05 Thread Chris Palmer
travis+ml-cryptogra...@subspacefield.org writes:

 https://media.blackhat.com/bh-us-10/whitepapers/Kamkar/BlackHat-USA-2010-Kamkar-How-I-Met-Your-Girlfriend-wp.pdf

He doesn't mention the php.ini variables session.entropy_length and
session.entropy_file. Last I checked, their default settings were unsafe,
but setting them to 16 and /dev/urandom should solve the problem he
describes in the paper.

Unless not.



EFF/iSEC's SSL Observatory slides available

2010-08-04 Thread Chris Palmer
http://www.eff.org/observatory

We have downloaded a dataset of all of the publicly-visible SSL
certificates, and will be making that data available to the research
community in the near future.

So, keep an eye on that page. The data is very useful. Many more interesting
conclusions remain to be drawn from the data; once it's out (I'm told Real
Soon Now), you can chew on it yourself and find things out that Eckersley
and Burns haven't gotten to yet.

Highlights from the slide deck are troubling:

* In addition to the implausibly large and diverse group of CAs you trust,
you also completely trust all the intermediary signers they've signed.
Including, of course, DHS (slide 42). (See also: Soghoian and Stamm,
"Certified Lies.") Windows and Firefox trust 1,482 CA certs (651
organizations).

* Of 16 M IPs listening on 443, 10.8 M started an SSL handshake; of those, 4.3
M used CA-signed cert chains (slide 14). Thus, the majority of servers use
invalid or self-signed certs.

* The invalid certs contain all kinds of bad stuff (see slide 16).

* The valid certs contain all kinds of bad stuff (see the rest of the slide
deck).

* CAs re-use keypairs in new certs to prolong the effective life (slide 28).

* Many CAs sign reserved/private names. Several CAs have signed e.g.
192.168.1.2. That host is certified to live in many countries by many CAs.
One CA thinks its identity is the same as a public/routable IP.

* The single most often signed name is "localhost" (6K distinct certs for
that subject name). Many CAs have signed that name many times; a few CAs
only signed it once. This suggests many CAs don't even track the names
they've signed to make sure they don't get tricked into signing a name
twice. Never mind the fact that they shouldn't be signing private names in
the first place... A colleague of mine got a CA-signed cert for the name
"mail". Could that be a problem? :)

* Your browser trusts two signing certs that use a 512-bit RSA key (slide 32).

* The bad Debian keys are not dead: 530 are CA-signed, and 73 of those 530
are revoked.

I am, as you know, predisposed to interpret Eckersley's and Burns's findings
as damning for the entire "trusted third party with no accountability" idea
--- Trent Considered Harmful. But even CA/TTP proponents must admit that
our current system has failed hard, both in principle and empirically. Any new
system must include a substantial answer to the numerous fatal problems
Eckersley, Burns, and Ristic have observed.



Re: EFF/iSEC's SSL Observatory slides available

2010-08-04 Thread Chris Palmer
They tell me they will be releasing the data both raw and as a MySQL
database, so you can learn interesting things just by writing SQL queries.

 So, keep an eye on that page. The data is very useful. Many more
 interesting conclusions remain to be drawn from the data; once it's out
 (I'm told Real Soon Now), you can chew on it yourself and find things out
 that Eckersley and Burns haven't gotten to yet.



Re: Five Theses on Security Protocols

2010-07-31 Thread Chris Palmer
Usability engineering requires empathy. Isn't it interesting that nerds
built themselves a system, SSH, that mostly adheres to Perry's theses? We
nerds have empathy for ourselves. But when it comes to a system for other
people, we suddenly lose all empathy and design a system that ignores
Perry's theses.

(In an alternative scenario, given the history of X.509, we can imagine that
PKI's woes are due not to nerd un-empathy, but to
government/military/hierarchy-lover un-empathy. Even in that scenario, nerd
cooperation is necessary.)

The irony is, normal people and nerds need systems with the same properties,
for the same reasons.



Re: A mighty fortress is our PKI

2010-07-28 Thread Chris Palmer
Paul Tiemann writes:

 I like the idea of SSL pinning, but could it be improved if statistics
 were kept long-term (how many times I've visited this site and how many
 times it's had certificate X, but today it has certificate Y from a
 different issuer and certificate X wasn't even near its expiration
 date...)

That's along the lines of what EFF and I propose, yes. As I state in the
slides, a key problem is how to smooth over the adaptation problem by
various heuristics. We don't necessarily think that our mechanism is best,
just that it's one of a family of likely approaches.

 Another thought: Maybe this has been thought of before, but what about
 emulating the Sender Policy Framework (SPF) for domains and PKI?  Allow
 each domain to set a DNS TXT record that lists the allowed CA issuers for
 SSL certificates used on that domain.  (Crypto Policy Framework=CPF?)

Even if anyone other than spammers had adopted SPF, we should still be
seeking to reduce cruft, not increase it.

 Thought: Could you even list your own root cert there as an http URL, and
 get Mozilla to give a nicer treatment to your own root certificate in
 limited scope (inserted into some kind of limited-trust cert store, valid
 for your domains only)

Sure, or simply put the cert in the DNS itself. But DNS is not secure, so in
doing so we would not actually be solving the secure introduction problem.
Some people think that DNSSEC can fill in here, but it hasn't yet.

 Is there a reason that opportunistic crypto (no cert required) hasn't been
 done for https?

As you can see, I am a firm advocate that we should emulate and improve on
SSH's success. On one of my computers I use the HTTPS Everywhere and
Perspectives plugins for Firefox; the latter renders CAs pretty much moot
and the former gets me HTTPS by default at least some of the time. It's a
fine thing.

Remember when we all dropped telnet like a hot potato and migrated to SSH
pretty much overnight? Let's do that again. Browsers should use secure
transport by default in a way that is meaningful to humans and cheap to
deploy.

 Would it give too much confidence to people whose DNS is being spoofed?

I believe it would be a vast improvement in such a scenario. It would be
hard to do worse than the status quo.

 Great slides!  The TOFU/POP is nice, and my favorite concept was to
 translate every error message into a one sentence, easy-to-understand
 statement.

Thank you.



Re: A mighty fortress is our PKI

2010-07-27 Thread Chris Palmer
Perry E. Metzger writes:

 All major browsers already trust CAs that have virtually no security to
 speak of,

...and trust any of those CAs on any (TCP) connection in the (web app)
session. Even if your first connection was authenticated by the right CA,
the second one may not be. Zusman and Sotirov suggested SSL pinning (like
DNS pinning, in which the browser caches the DNS response for the rest of
the browser process' lifetime), but as far as I know browsers haven't
implemented the feature.

A presentation I've given at a few security gatherings may be of interest. I
cover some specific security, UI/UX, and policy problems, as well as some
general observations about incentives and barriers to improvement. Our
overall recommendation is to emulate the success of SSH, but in a browser-y,
gentle-compliance-with-the-status-quo-where-safe way.

https://docs.google.com/present/view?id=df9sn445_206ff3kn9gs

Eckersley's and Burns' presentation at Defcon (coming right up) will present
their findings from a global survey of certs presented by hosts listening on
port 443. Their results are disturbing.

Ivan Ristic is also presenting his results of a survey at Black Hat on the
29th. I don't know anything about his findings.



Re: A mighty fortress is our PKI

2010-07-27 Thread Chris Palmer
Paul Tiemann writes:

 Since this is a certificate we (DigiCert) have issued, I'm trying to
 understand if there is a vulnerability here that's more apparent to others
 than to me,

If an attacker can steal the cert's private key by any means, perhaps by means
particular to one of the hosted sites, he can now forge the identities of all 100+
sites. It gives the attack a multiplier. (It appears 100+ is not even the
largest number of entities in a single cert.) Potential attacks:

* Attack the server (e.g. buffer overflow in FooHTTPD or some other bug in
a web app the CDN runs (I know not all CDNs run cloud app hosting services,
but some do)). Note that even though all sites are served by the same
server, all sites suffer the risk profile of the highest-profile site. If a
CDN server is serving tiny.unknown.com and also mega.often-attacked.net,
tiny.unknown effectively endures attacks on mega.often-attacked.

* Questionable reseller. Although reselling a CDN might normally give you
access only to a subset of the CDN's subscribers' sites, you can get many in
one go because these certs have so many subjects. See below.

 The bulk of the FQDNs included in the certificate are for subdomains like
 media., www-cdn., static., and the like.  Apply a different test: Is it
 bad for various organizations to use the same CDN for services over http?
 Is it bad for all those different FQDNs to CNAME to the same DNS entry and
 point to the same IP address?  Is it bad for a CDN to host multiple
 individual SSL certificates for its customers on the same set of hardware?

Let's just say I'd rather get the advantages of a CDN by other means, but
that I recognize that using a CDN can be a reasonable economic trade-off in
many situations.

 If not, then what is so abhorrent about their various FQDNs being included
 in a single SSL certificate?

I wouldn't say abhorrent, but the increased size of the cert could be a
concern.

I just wiresharked an HTTPS connection to https://ne.edgecastcdn.net/. The
cert is 7,044 bytes. Admirably small given how many names are in it, but
still 6 KB larger than another cert I observed containing only one subject.
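
If you want to repeat that measurement without a packet sniffer, a rough
sketch like the following prints the DER-encoded size of a server's
certificate chain. The host name is just an example, and the byte count
covers only the certificates, not the rest of the handshake.

  import java.security.cert.Certificate;
  import javax.net.ssl.SSLSocket;
  import javax.net.ssl.SSLSocketFactory;

  public class CertChainSize {
      public static void main(String[] args) throws Exception {
          // Hypothetical example host; substitute the server you are measuring.
          String host = args.length > 0 ? args[0] : "ne.edgecastcdn.net";
          SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
          SSLSocket socket = (SSLSocket) factory.createSocket(host, 443);
          try {
              socket.startHandshake();
              Certificate[] chain = socket.getSession().getPeerCertificates();
              int total = 0;
              for (Certificate cert : chain) {
                  total += cert.getEncoded().length;  // DER-encoded size of this cert
              }
              System.out.println(chain.length + " certs, " + total + " bytes (DER)");
          } finally {
              socket.close();
          }
      }
  }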

I have a hard enough time convincing people that HTTPS is not the root of
their web app performance problems and that therefore they CAN afford to use
it; the last thing we need is a certificate that big increasing latency at a
critical time in the page load. TLS sessions to that server don't seem to
last very long either, increasing the frequency of cert delivery; but maybe
that is necessary due to the high traffic such a server handles. (Gotta have
a limit on the size of the session store.)

I know it's a small thing, especially relative to the general content layer
heft of most sites, but still. When trying to convince developers to use
HTTPS, I need every rhetorical advantage I can get. :)

 Considering the business incentives landscape, is it safe to assume a
 strong incentive for a CDN running all those sites to be vigilant about
 their own server security?  Are there any inherently skewed incentives
 that I'm just not seeing which would lead a CDN to be negligent in its
 management of its network security and the SSL certificates it manages?

As Peter noted, http://www.webhostingtalk.com/showthread.php?t=873555.



Re: A mighty fortress is our PKI

2010-07-27 Thread Chris Palmer
Ralph Holz writes:

  Eckersley's and Burns' presentation at Defcon (coming right up) will
  present their findings from a global survey of certs presented by hosts
  listening on port 443. Their results are disturbing.
 
 Have these results already been published somewhere, or do you maybe even
 have a URL?

Defcon is the publishing event, and Black Hat for Ristic's material. It's in
a few days (Friday evening for Eckersley and Burns). Also keep an eye on the
eff.org site; I bet they'll say something there too. Possibly also at
isecpartners.com.



Re: A mighty fortress is our PKI

2010-07-27 Thread Chris Palmer
Sampo Syreeni writes:

 I am not sure what quantitative measurement of vulnerability would even
 mean. What units would said quantity be measured in?
 
 I'm not sure either. This is just a gut feeling.

See also:

http://nvd.nist.gov/cvsseq2.htm



Re: A mighty fortress is our PKI

2010-07-27 Thread Chris Palmer
Perry E. Metzger writes:

 Unless you can perform an experiment to falsify the self-declared
 objective quantitative security measurement, it isn't science. I can't
 think of an experiment to test whether any of the coefficients in the
 displayed calculation is correct. I don't even know what correct
 means. This is disturbing.

I can recommend a good single-malt scotch or tawny port if you like. Have
you tried the Macallan 18?

False metrics are rampant in the security industry. We really need to do
something about them. I propose that we make fun of them.



Re: MITM attack against WPA2-Enterprise?

2010-07-25 Thread Chris Palmer
Perry E. Metzger writes:

 All in all, this looks bad for anyone depending on WPA2 for high security.

Luckily, that describes nobody, right?

;D

I used to think that non-end-to-end security mechanisms were wastefully
pointless, but adorably harmless. However, in my experience people keep
using link-layer garbage (and network-layer trash, and support protocol
junk) as a way to put off the hard work of real (i.e. E2E) security.
Non-E2E stuff hurts usability, availability, and security (by creating a
false sense of it).

Of course, we E2E fans have to get our usable security ducks in a row first.



Re: Encryption and authentication modes

2010-07-24 Thread Chris Palmer
Florian Weimer writes:

 I just want to create a generic API which takes a key (most of the time, a
 randomly generated session key) and can encrypt and decrypt small blobs.
 Application code should not need to worry about details (except getting
 key management right, which is difficult enough).  More popular modes such
 as CBC lack this property, it's too easy to misuse them.

I wrote such a thing for web developers, and other people have too.

I can't see a reason to do anything other than

  ciphertext = aes_cbc(enc-key, random-iv, plaintext)

  sealed-blob = hmac(mac-key, ciphertext + timestamp) +
ciphertext + timestamp

You wrap this magic up in a trivial interface:

  byte [] seal(byte [] macKey, byte [] encKey, byte [] plaintext)
throws GeneralSecurityException
  { ... }

  byte [] unseal(byte [] macKey, byte [] encKey, byte [] ciphertext,
 long expirationInterval)
throws UnexplodedCowException
  { ... }

You can find my Java code with a Google search, but it's not special.  You
can write it yourself in your favorite language with little effort.

This gives you expiration, integrity, and confidentiality. You can make the
keys implicit by getting them from the server config variables or something.
In case of mangled or expired ciphertext, the checking function can fail
fast (no need to decrypt, since you do the hmac check first).
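
For concreteness, here is a rough sketch of that interface on top of plain
javax.crypto. It is illustrative only, not the library I mentioned above; the
class name, the field layout, and the maxAgeMillis parameter are all made up
for the example.

  import java.nio.ByteBuffer;
  import java.security.MessageDigest;
  import java.security.SecureRandom;
  import java.util.Arrays;
  import javax.crypto.Cipher;
  import javax.crypto.Mac;
  import javax.crypto.spec.IvParameterSpec;
  import javax.crypto.spec.SecretKeySpec;

  public class Sealer {
      private static final int IV_LEN = 16;   // AES block size
      private static final int MAC_LEN = 32;  // HMAC-SHA256 output
      private static final int TS_LEN = 8;    // 64-bit timestamp
      private static final SecureRandom RNG = new SecureRandom();

      // sealed-blob = hmac(macKey, ciphertext || timestamp) || ciphertext || timestamp
      // where ciphertext = iv || aes_cbc(encKey, iv, plaintext)
      public static byte[] seal(byte[] macKey, byte[] encKey, byte[] plaintext) throws Exception {
          byte[] iv = new byte[IV_LEN];
          RNG.nextBytes(iv);
          Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
          cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(encKey, "AES"), new IvParameterSpec(iv));
          byte[] ciphertext = concat(iv, cipher.doFinal(plaintext));
          byte[] timestamp = ByteBuffer.allocate(TS_LEN).putLong(System.currentTimeMillis()).array();
          byte[] body = concat(ciphertext, timestamp);
          return concat(hmac(macKey, body), body);
      }

      // Check the MAC first, then the timestamp, and only then decrypt.
      public static byte[] unseal(byte[] macKey, byte[] encKey, byte[] sealed, long maxAgeMillis) throws Exception {
          byte[] tag = Arrays.copyOfRange(sealed, 0, MAC_LEN);
          byte[] body = Arrays.copyOfRange(sealed, MAC_LEN, sealed.length);
          if (!MessageDigest.isEqual(tag, hmac(macKey, body)))
              throw new SecurityException("bad MAC");
          long stamp = ByteBuffer.wrap(body, body.length - TS_LEN, TS_LEN).getLong();
          long age = System.currentTimeMillis() - stamp;
          if (age < 0 || age > maxAgeMillis)
              throw new SecurityException("expired");
          byte[] iv = Arrays.copyOfRange(body, 0, IV_LEN);
          byte[] encrypted = Arrays.copyOfRange(body, IV_LEN, body.length - TS_LEN);
          Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
          cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(encKey, "AES"), new IvParameterSpec(iv));
          return cipher.doFinal(encrypted);
      }

      private static byte[] hmac(byte[] key, byte[] data) throws Exception {
          Mac mac = Mac.getInstance("HmacSHA256");
          mac.init(new SecretKeySpec(key, "HmacSHA256"));
          return mac.doFinal(data);
      }

      private static byte[] concat(byte[] a, byte[] b) {
          byte[] out = new byte[a.length + b.length];
          System.arraycopy(a, 0, out, 0, a.length);
          System.arraycopy(b, 0, out, a.length, b.length);
          return out;
      }
  }

Note that the MAC key and the encryption key are distinct, and that the MAC
check happens before anything else touches the blob.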



Re: A mighty fortress is our PKI

2010-07-22 Thread Chris Palmer
Peter Gutmann writes:

 Readers are cordially invited to go to https://edgecastcdn.net and have a
 look at the subjectAltName extension in the certificate that it presents.

Also, keep your eye on:

https://www.defcon.org/html/defcon-18/dc-18-speakers.html#Eckersley



Re: Question w.r.t. AES-CBC IV

2010-07-10 Thread Chris Palmer
Ralph Holz writes:

 He wanted to scrape off some additional bits when using AES-CBC because
 the messages in his concept are very short (a few hundred bit). So he

I'd rather have a known-safe design than save 12 bytes.

Seriously: what the hell.

Say you have 1-byte messages, and that the cryptography will expand them to
128 bytes (...you use a MAC, right?). If this overhead factor is really bad
for you, for example because you expect to send thousands of messages per
second, your problem is a bad protocol design. Don't break the safety
mechanism to support an inefficient protocol.

Alternatively, if you send messages only rarely, the overhead doesn't matter.

My point is, since you have tiny messages, throughput must not be your goal.
And yet, even with 128-byte messages, your messages are so small that
latency and bloat are not problems. You get confidential and MAC'd
communications for less than the cost of a tweet or SMS.



Re: NY Times reports: Documents show link between ATT and NSA

2006-04-13 Thread Chris Palmer
lorenzo writes:

 Am I wrong or if we were living in a DRM- or Trusted Computing- World,
 those documents probably would be unreadable, if they were digital
 documents? Also they could have prevented printing of the documents,
 and so on.

Consider the massive effort Daniel Ellsberg undertook when leaking the
Pentagon Papers.  He had to photocopy tons of stuff.  If you had a DRM'd
document that you couldn't print or email out, you could use your camera
to take a picture of the document on your computer's display.  (For
example.)


-- 
https://www.eff.org/about/staff/#chris_palmer





Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-03-10 Thread Chris Palmer
Peter Saint-Andre writes:

 http://www.saint-andre.com/blog/2006-02.html#2006-02-27T22:13

1. Anonymity does matter. You might have heard of a little thing called
the First Amendment. ;) It's great that you're proud of what you say,
but no matter how proud you are, there could be bad, unfair consequences
if you say certain things and/or if you have a certain identity. A
little wisely-used anonymity can further an honest debate (such as
debating what should be in the Constitution!) and protect people in
low-power groups.

2. Email signing, alone, gives you only pseudonymity.

3. I see on your site you use and advertise for CACert. I hope CACert's
signing cert(s) are never trusted by my browser, because then my browser
would trust any cheap-ass random pseudonym in the world. Which brings us
to my next point...

4. Identity is not, and can never be, a substitute for a real judgement
about goodness. That I sign my messages doesn't make them any smarter;
many good and helpful comments come from such forgeable identities as
"Steven Bellovin" and "Ben Laurie". Even fake names that look
ridiculously fake, like "StealthMonger", sometimes send useful
information. When you immediately discount what such a person says, you
are doing yourself a disservice.


-- 
https://www.eff.org/about/staff/#chris_palmer





Re: A small editorial about recent events.

2005-12-23 Thread Chris Palmer
[EMAIL PROTECTED] writes:

 You know, as a security person, I say all the time that the greatest
 threat is internal threat, not external threat.  In my day job, I/we
 make surveillance tools to prevent data threat from materializing, and
 to quench it if it does anyhow.  I tell clients all day every day that
 when the opponent can attack location independently, and likely
 without self identification, your only choice is pre-emption, which
 requires intell, which requires surveillance, which requires listening
 posts.

Are you saying we need to carefully surveil our government and pre-empt
it from attacking us, or that our government should carefully surveil us
and pre-empt us from attacking it?

 And I'm just talking about intellectual property in the Fortune 1000,
 not the freaking country.

Let's hope society as a whole never resembles life inside the Fortune
1000 too closely.


-- 
https://www.eff.org/about/staff/#chris_palmer





Fwd: Tor security advisory: DH handshake flaw

2005-08-18 Thread Chris Palmer
- Forwarded message from Roger Dingledine [EMAIL PROTECTED] -

From: Roger Dingledine [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: Thu, 11 Aug 2005 21:31:32 -0400
Subject: Tor security advisory: DH handshake flaw

Versions affected: stable versions up through 0.1.0.13 and experimental
versions up through 0.1.1.4-alpha.

Impact: Tor clients can completely lose anonymity, confidentiality,
and data integrity if the first Tor server in their path is malicious.
Specifically, if the Tor client chooses a malicious Tor server for
her first hop in the circuit, that server can learn all the keys she
negotiates for the rest of the circuit (or just spoof the whole circuit),
and then read and/or modify all her traffic over that circuit.

Solution: upgrade to at least Tor 0.1.0.14 or 0.1.1.5-alpha.


The details:

In Tor, clients negotiate a separate ephemeral DH handshake with each
server in the circuit, such that no single server (Bob) in the circuit
can know both the client (Alice) and her destinations. The DH handshake
is as follows. (See [1,2] for full details.)

Alice -> Bob: E_{Bob}(g^x)
Bob -> Alice: g^y, H(K)

Encrypting g^x to Bob's public key ensures that only Bob can learn g^x,
so only Bob can generate a K=g^{xy} that Alice will accept. (Alice, of
course, has no need to authenticate herself.)

The problem is that certain weak keys are unsafe for DH handshakes:

Alice -> Mallory: E_{Bob}(g^x)
Mallory -> Bob:   E_{Bob}(g^0)
Bob -> Mallory:   g^y, H(1^y)
Mallory -> Alice: g^0, H(1^y)

Now Alice and Bob have agreed on K=1 and they're both happy. In fact,
we can simplify the attack:

Alice -> Mallory: E_{Bob}(g^x)
Mallory -> Alice: g^0, H(1)

As far as we can tell, there are two classes of weak keys. The first class
(0, 1, p-1=-1) works great in the above attack. The new versions of Tor
thus refuse handshakes involving these keys, as well as keys < 0 and >= p.

The second class of weak keys are ones that allow Mallory to solve for y
given g^y and some guessed plaintext. These are rumored to exist when the
key has only one bit set [3]. But in Tor's case, Mallory does not know
g^x, so nothing she can say to Alice will be acceptable. Thus, we believe
Tor's handshake is not vulnerable to this second class of weak keys.

Nonetheless, we refuse those keys too. The current Tor release refuses
all keys with fewer than 16 zero bits set, with fewer than 16 one bits set,
with values less than 2**24, and with values more than p - 2**24. This
is a trivial piece of the overall keyspace, and might help with next
year's weak key discoveries too.
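
(Illustration only, and not Tor's actual code: a check of this shape could
look like the following in Java, counting zero bits relative to the
bit-width of p.)

  import java.math.BigInteger;

  public class DhPublicValueCheck {
      // Reject DH public values of the kinds described above: values outside
      // (0, p), values with very few zero or one bits, and values within
      // 2**24 of either end of the range.
      public static boolean looksSafe(BigInteger gy, BigInteger p) {
          if (gy.signum() <= 0 || gy.compareTo(p) >= 0)
              return false;
          int ones = gy.bitCount();               // number of one bits
          int zeros = p.bitLength() - ones;       // zero bits in a p-sized field
          if (ones < 16 || zeros < 16)
              return false;
          BigInteger margin = BigInteger.ONE.shiftLeft(24);  // 2**24
          if (gy.compareTo(margin) < 0)
              return false;                       // too close to zero
          if (gy.compareTo(p.subtract(margin)) > 0)
              return false;                       // too close to p
          return true;
      }
  }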

Yay full disclosure,
--Roger

[1] http://tor.eff.org/doc/tor-spec.txt section 0 and section 4.1
[2] http://tor.eff.org/doc/design-paper/tor-design.html#subsec:circuits
[3]
http://www.chiark.greenend.org.uk/ucgi/~cjwatson/cvsweb/openssh/dh.c?rev=1.1.1.7&content-type=text/x-cvsweb-markup
and look for dh_pub_is_valid()




- End forwarded message -




Re: Simson Garfinkel analyses Skype - Open Society Institute

2005-01-26 Thread Chris Palmer
People may already have seen this, but maybe not. Another Skype 
analysis:

http://www.cs.columbia.edu/~library/TR-repository/reports/reports-2004/cucs-039-04.pdf


-- 
Chris Palmer
Technology Manager, Electronic Frontier Foundation
415 436 9333 x124 (desk), 415 305 5842 (cell)

81C0 E11D CE73 4390 B6C7  3415 B286 CD8F 68E4 09CD





Re: Al Qaeda crypto reportedly fails the test

2004-08-12 Thread Chris Palmer
Steven M. Bellovin writes:

 http://www.petitcolas.net/fabien/kerckhoffs/index.html for the actual
 articles.)

Does there exist an English translation (I'd be surprised if not)? If
not, I'd be happy to provide one if there were sufficient interest.


-- 
Chris Palmer
Staff Technologist, Electronic Frontier Foundation
415 436 9333 x124 (desk), 415 305 5842 (cell)
