Re: Obama administration seeks warrantless access to email headers.

2010-07-30 Thread Steven Bellovin

On Jul 30, 2010, at 3:58:08 PM, Perry E. Metzger wrote:

> On Fri, 30 Jul 2010 09:38:44 +0200 Stefan Kelm  wrote:
>> Perry,
>> 
>>>  The administration wants to add just four words -- "electronic
>>>  communication transactional records" -- to a list of items that
>>> the law says the FBI may demand without a judge's approval.
>>> Government
>> 
>> Would that really make that much of a difference? In Germany,
>> at least, the so-called "judge's approval" often isn't worth
>> a penny, esp. wrt. phone surveillance. It simply is way too
>> easy to get such an approval, even afterwards.
> 
> It is significantly harder here in the US.

Actually, no, it isn't.  Transaction record access is not afforded the same 
protection as content.  I'll skip the detailed legal citations; the standard 
now for transactional records is "if the governmental entity offers specific 
and articulable facts showing that there are reasonable grounds to believe that 
the contents of a wire or electronic communication, or the records or other 
information sought, are relevant and material to an ongoing criminal 
investigation."  This is much less than the "probable cause" and specificity 
standards for full-content wiretaps, which do enjoy very strong protection.

> Equally importantly, it is
> much simpler to determine what warrants were issued after the fact.
> 

Not in this case.  Since the target of such an order is not necessarily the 
suspect, the fact of the information transfer may never be introduced in open 
court.  Nor is there a disclosure requirement here, the way there is for 
full-content wiretaps.


--Steve Bellovin, http://www.cs.columbia.edu/~smb

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-07-30 Thread Anne & Lynn Wheeler

On 07/28/2010 11:52 PM, Pat Farrell wrote:

A lot of the smart card development in the mid-90s and beyond was based
on the idea that the smart card, in itself, was the sole authorization
token/algorithm/implementation.


some ssl, payment, smartcard trivia ...

those smartcards were used for offline authorization (not just authentication) ... which, in at least one 
major product, led to the "YES CARD" ... it was relatively trivial to skim and replicate the static digital 
certificate for a counterfeit card ... the counterfeit card was then programmed to answer "YES" to 1) 
was the correct PIN entered, 2) should the transaction be performed offline, and 3) was the transaction approved. 
Once the static digital certificate was skimmed, it was no longer even necessary to know the PIN, since the 
counterfeit card accepted every possible PIN as valid. misc. past posts mentioning "YES CARD"
http://www.garlic.com/~lynn/subintegrity.html#yescard

In 2003, at an ATM Integrity task force meeting ... there was a presentation by some LEO explaining the "yes 
card" ... and how there was little or no countermeasure once a "YES CARD" was in existence ... somebody 
in the audience loudly observed that billions were spent on proving smartcards are less secure than magstripe. In the 
"YES CARD" timeframe there was even a rather large pilot of the cards in the US ... but it seemed to disappear 
after the "YES CARD" scenario was publicized (it was actually explained to the people doing the pilot, before 
the pilot started ... but apparently they didn't appreciate the significance).
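The failure mode is easy to model: the terminal verifies a static certificate (which can be skimmed) and then trusts the card's own answers to the questions that matter. A minimal sketch in Python (purely illustrative; the names and flow are invented, not an actual EMV or terminal implementation):

```python
# Illustrative model of offline static-data authorization as exploited by
# the "YES CARD": the terminal checks only a static certificate, which can
# be skimmed and replayed, then trusts the card's own answers.

class YesCard:
    """Counterfeit card carrying a skimmed static certificate."""
    def __init__(self, skimmed_static_cert):
        self.static_cert = skimmed_static_cert

    def verify_pin(self, pin):
        return True   # answers YES to every possible PIN

    def approve_offline(self, amount):
        return True   # answers YES: do it offline, transaction approved

def offline_terminal(card, pin, amount, known_issuer_certs):
    # The static certificate checks out, because it was copied verbatim
    # from a genuine card...
    if card.static_cert not in known_issuer_certs:
        return "declined"
    # ...and every remaining check is answered by the card itself.
    if card.verify_pin(pin) and card.approve_offline(amount):
        return "approved"
    return "declined"

genuine_cert = "static-cert-from-genuine-card"
card = YesCard(skimmed_static_cert=genuine_cert)
# Any PIN "works", because the PIN check never leaves the counterfeit card:
print(offline_terminal(card, pin="0000", amount=500,
                       known_issuer_certs={genuine_cert}))   # prints: approved
```

Online authorization avoids this failure mode by having the issuer, not the card, give the final answer.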

much earlier, we had been working on our ha/cmp product and cluster scaleup. we 
had a meeting on cluster scaleup during jan92 sanfran usenix (in 
ellison's conference room) ... past posts mentioning the jan92 meeting
http://www.garlic.com/~lynn/95.html#13

this was just a few weeks before cluster scaleup was transferred (announced as 
supercomputer for numerical intensive only) and we were told we couldn't work 
on anything with more than four processors. some old email from the period on 
cluster scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

we then leave a couple months later. two of the other people named in the jan92 meeting also leave and show 
up at small client/server startup responsible for something called "commerce server". we get 
brought in to consult because they want to do payment transactions on the server ... the small client/server 
startup has also invented some technology called "SSL" that they want to use. The result is now 
frequently called "electronic commerce".

Then apparently because of the work on electronic commerce ... we also get 
invited to participate in the x9a10 financial standard working group ... which 
had been given the requirement to preserve the integrity of the financial 
infrastructure  for all retail payments.

About the same time there is a pilot program for magstripe-based online 
stored-value cards  (uses existing POS magstripe terminals but the payment 
network routes the transactions to different backend processor, original 
program of its kind in the US). At the time, the US didn't have the telco 
connectivity availability and cost issues that many places in the rest of the 
world were dealing with ... and therefore didn't have the requirement to move 
to an offline smartcard payment paradigm. However, it turns out their backend, 
high-availability, no-single-point-of-failure platform developed a glitch ... 
and even though it was from a different vendor (than our ha/cmp product) we were 
asked to investigate the various failure modes.

Somewhat as a result of all of the above, when one of the major offline, 
smartcard, european, stored-value payment operators was looking at making an 
entry into the US in the 90s ... we were asked to design, size, and cost their 
backend dataprocessing infrastructure. Along the way, we took an in-depth look 
at the business process and cost structure of such payment products. Turns out 
that the major financial motivation for that generation of smartcard 
stored-value payment products ... was that the operators got to keep the float 
on the value resident in the stored-value cards. Not too long later ... several 
of the major european central banks announced that the smartcard, stored-value 
operators would have to start paying interest on value in the smartcards 
(eliminating the float financial incentive to those operators). It wasn't too 
long after that most of the programs disappeared.

The major difference between that generation of smartcard payment products and the AADS chip 
strawman ... was that rather than attempting to be a complex, loadable, multi-function issuer card 
 the objective was changed to being a person-centric, highest-possible integrity, 
lowest-possible cost, hard-to-counterfeit authentication ... which could be registered (publickey) 
for arbitrary number of different environments ("something you have" authentication 
registered in 

Re: Obama administration seeks warrantless access to email headers.

2010-07-30 Thread Perry E. Metzger
On Fri, 30 Jul 2010 09:38:44 +0200 Stefan Kelm  wrote:
> Perry,
> 
> >   The administration wants to add just four words -- "electronic
> >   communication transactional records" -- to a list of items that
> > the law says the FBI may demand without a judge's approval.
> > Government
> 
> Would that really make that much of a difference? In Germany,
> at least, the so-called "judge's approval" often isn't worth
> a penny, esp. wrt. phone surveillance. It simply is way too
> easy to get such an approval, even afterwards.

It is significantly harder here in the US. Equally importantly, it is
much simpler to determine what warrants were issued after the fact.

However, lets say you were right and there was no significant
impediment. It would be disturbing to see even the small protections
currently afforded removed without any apparent benefit to the
removal.

Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: A mighty fortress is our PKI, Part II

2010-07-30 Thread Peter Gutmann
Steven Bellovin  writes:

>When I look at this, though, little of the problem is inherent to PKI.
>Rather, there are faulty communications paths.

"Oh no my Lord, I assure you that parts of it are excellent!" :-).

>[...] how should the CA or Realtek know about the problem? [...]

That was the whole point, that the whole system doesn't work, and it's the 
system as a whole that has to work, not just some parts of it.  Here's another 
description, without the possibly confusing 't +/- x' stuff:

Shortly after the malware appeared (or at least got noticed) it was added to
anti-virus vendor databases and pushed out to users via updates.  Some time
later when it made headlines because of its use of the Realtek certificate,
the CA that had issued it read about it in the news, contacted the certificate
owner, and revoked it.  However, due to the dysfunctional nature of revocation
handling, the certificate was still regarded as valid by Windows systems after
it had been revoked.  Of a range of machines running Windows 7, Vista, and
XP, all with the latest updates and hotfixes applied and with automatic
updates turned on, the first machine to notice the revocation and treat the
signature as invalid didn't do so until a full week after the revocation had
occurred, and some machines still regard the signature as valid even now.  (I've
heard this a number of times before in software developer forums and mailing
lists: plaintive complaints from users to the effect that "I know this
certificate is revoked, but no matter what I do I can't get the software to
stop using it!")

So while PKI and code-signing promise the user a fairly deterministic series
of events in which A occurs, then B occurs, then C occurs, and then the user is
safe, what actually happens in practice is that A occurs, then a comedy of
errors ensues [0], and then the user is still unsafe while possibly being
under the mistaken impression that they're actually safe.

[0] I've never understood why this is a comedy of errors, it seems more like
a tragedy of errors to me.

A real-world demonstration of the relative effectiveness of various protection
mechanisms occurred when I wanted to evaluate the ability of code-signing to
protect users.  A researcher sent me a copy of the signed malware (thanks!),
and because of its nature encrypted it with AES using the RAR archiver.
Because RAR (and indeed most other archivers) doesn't protect file metadata, the
message was blocked by email scanners that identified the overall contents
from the metadata even though the file contents themselves were encrypted.
After some discussion with the IT people ("yes, I am certain what the file is,
it's a rather nasty piece of Windows malware, and I trust the sender to have
sent me malware") they forwarded the email to the PowerPC Linux machine on
which I read email, and which is rather unlikely to be affected by x86 Windows
malware.

Unfortunately I never could check it on the Windows system that I wanted to
test it on because the instant it appeared on there the resident malware
protection activated and deleted it again, despite various attempts to bypass
the protection.  Eventually I got it onto a specially-configured Windows
system, which reported that both the signature and its accompanying
certificate were valid (this is now two weeks after the CA had declared the
certificate revoked).  So it actually proved quite difficult to see just how
ineffective PKI and code-signing actually were in protecting users from malware
because the real protection mechanisms were so effective at doing their job.

(It's also rather an eye-opener about the effectiveness, at least in some
cases, of malware-protection software: no matter what I did, I couldn't get the
malware files onto a Windows PC in order to have the code-signing declare them
valid and, by implication, perfectly safe.)

>What's interesting here is the claim that AV companies could respond much
>faster.

I'd say it's more than just a claim, the malware was first detected around 1
1/2 months ago and added to AV vendor databases, a full month later the
certificate was declared revoked by the CA, and currently the majority of
Windows systems still regard the signature as valid (I've had a report from
someone else of one machine that records it as revoked, so at least one
machine has been belatedly protected by the code signing, assuming the user
doesn't just click past the warning as pretty much all of them will).

So yes, I'd say the AV companies respond a helluva lot faster, and a helluva
lot more effectively. The bigger lesson, for people who ever believed this to
be the case, is "don't rely on code signing to protect you from malware".

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Hashing messages with lengths between 32 and 128 bytes is one of the most important practical issues (was Re: the skein hash function)

2010-07-30 Thread Paul
Bill Stewart wrote:
Sent: Thursday, October 30, 2008 7:30 AM
To: Cryptography List
Subject: Re: the skein hash function


> So if Skein becomes popular, ASIC accelerator hardware
> may be practical for higher-speed applications.


I see another strong point for Skein:

Deterministically generated, cryptographically strong random numbers
are used in dozens of NIST-approved algorithms. They are constructed
using an approved hash algorithm, and there hashing is performed over
relatively short messages, from 32 to 128 bytes.
Some examples where approved hash algorithms are used (directly or
indirectly):
1. Approved algorithms for digital signatures.
2. FIPS 196, Entity Authentication Using Public Key Cryptography.
3. Special Publication 800-108. Recommendation for Key Derivation Using
Pseudorandom Functions
4. SP 800-57, Part 3 Recommendation for Key Management - Part 3:
Application-Specific Key Management Guidance (especially recommendations
for selected set of applications: PKI, IPsec, TLS, S/MIME, Kerberos,
OTAR, DNSSEC and Encrypted File Systems)

Additionally, millions of secure web servers are constantly producing
cryptographically strong random numbers that are generated by Fortuna or
similar algorithms, where hashing is also performed over short messages
of 32 to 128 bytes.

While the performance of a future SHA-3 over long messages is very
important, the performance of SHA-3 for hashing messages with lengths
between 32 and 128 bytes is even more important from a practical point of
view.

Analyzing eBASH measurements for hashing messages of just 64 bytes gives
us a totally different picture of the usefulness of the proposed SHA-3
candidates than the picture that we have for hashing long messages.

Take, for example, the measurements on the cobra system (measurements from
supercop-20100726) in 64-bit mode, and for 64-byte messages (measurements
are actually very similar on all 64-bit machines).
The ranking of the 14 SHA-3 candidates (26 implementation variants) is:

1.  17.44   skein512
2.  18.94   bmw512
3.  21.38   bmw256
4.  23.81   blake32
5.  24.75   blake64
6.  28.31   simd256
7.  30.38   keccakc512
8.  30.56   keccak
9.  31.88   luffa256
10. 35.25   jh384
11. 35.62   jh256
12. 35.62   jh224
13. 35.62   jh512
14. 38.25   shabal512
15. 42.38   hamsi
16. 43.69   luffa384
17. 48.75   shavite3256
18. 56.25   simd512
19. 57.38   groestl256
20. 66.00   luffa512
21. 87.56   cubehash1632
22. 88.69   echo256
23. 93.56   shavite3512
24. 100.69  groestl512
25. 106.69  fugue256
26. 111.38  echo512
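The short-message effect is easy to reproduce with any hash library. A rough timing sketch using Python's hashlib (SHA-2 here, since the SHA-3 candidates above aren't in the standard library; absolute numbers will vary by machine):

```python
import hashlib
import time

def time_hash(algo: str, msg_len: int, iters: int = 100_000) -> float:
    """Rough per-message cost, in microseconds, of hashing msg_len bytes."""
    msg = b"\x00" * msg_len
    start = time.perf_counter()
    for _ in range(iters):
        hashlib.new(algo, msg).digest()
    return (time.perf_counter() - start) / iters * 1e6

# At 64 bytes the fixed per-call overhead (setup, padding, finalization)
# dominates, so long-message throughput rankings say little about this case.
for n in (64, 4096, 65536):
    print(f"sha512, {n:6d}-byte msg: {time_hash('sha512', n, 20_000):8.2f} us/msg")
```

Comparing the 64-byte line against the 65536-byte line shows how different the two regimes are, which is the same point the eBASH 64-byte rankings make for the SHA-3 candidates.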



Regards,
-- 
  Paul
  paulcrossb...@123mail.org




Re: Obama administration seeks warrantless access to email headers.

2010-07-30 Thread Stefan Kelm

Perry,


  The administration wants to add just four words -- "electronic
  communication transactional records" -- to a list of items that the
  law says the FBI may demand without a judge's approval. Government


Would that really make that much of a difference? In Germany,
at least, the so-called "judge's approval" often isn't worth
a penny, esp. wrt. phone surveillance. It simply is way too
easy to get such an approval, even afterwards.

Cheers,

Stefan.

--
Stefan Kelm   
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstrasse 100 Tel: +49-721-96201-1
D-76133 Karlsruhe Fax: +49-721-96201-99



Re: A mighty fortress is our PKI

2010-07-30 Thread Peter Gutmann
Paul Tiemann  writes:

>What if... Firefox (or other) could introduce a big new feature (safety
>controls) and ask you up front: "Do you want to be safer on the internet?"

The problem is that neither the browser vendor nor the users will see it like
this.  For the user it's:

  "Do you want to have lots of sites that you normally use break?"

For the browser vendor it's:

  "Do you want lots of your users to become frustrated when things stop
  working for them so that they switch to another browser?"

>Is there a good reason Firefox and other browsers shouldn't just get tough
>about [various sensible security measures]

None of the non-IE browsers can afford to do this because people will just
switch back to IE.  This has been observed in usability testing of proposed
browser security features by HCI researchers: as soon as anything goes wrong,
the users switch back to IE, which allows pretty much anything through.  You
can even get IE as a plugin for other browsers (shudder) in order to "make
things work".

So you'd need to get the change made in IE (or at least get it made in such a
manner that fallback-to-IE is no longer an option).  I don't know what size
hammer you'd need to wield in order to get that done.

>This isn't true for all OCSP services.  For example, DigiCert's is not CRL
>based, so it really can say "Yes"

It can't say "yes" because the only thing OCSP can say is "not revoked" (and
in more general terms the only thing a blacklist can say is "not on the
blacklist").  "Not revoked" doesn't mean "valid", it just means "not in the
blacklist".

>and it really can say "Unknown" meaningfully.

"Unknown" is generally treated by client apps as "good", because if "revoked"
maps to "bad" then anything else must map to "good" (OCSP's muddle of non-
orthogonal response types is yet another perpetual motion-machine debate topic
among PKI people).

>It might not be hard to determine whose OCSP responders are CRL based and
>whose are database backed instead.

How can you do this?  Note that the various timestamps in OCSP responses are
as big a mess as the rest of OCSP, and can't be relied upon for any decision-
making.

More importantly, how can you possibly make any meaningful decisions in time-
critical protocols based on a system for which your responses can have come
from any time in the past?  As one security architect commented some years
ago, "learning in 80ms that the certificate was good as of a week ago and to
not hope for fresher information for another week seems of limited, if any,
utility to us or our customers".
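The "good as of a week ago" complaint falls out of ordinary validity-window checking. A minimal sketch of that client-side logic (illustrative; the field names follow OCSP's thisUpdate/nextUpdate, but the code is not any real implementation):

```python
from datetime import datetime, timedelta

def response_is_acceptable(this_update: datetime,
                           next_update: datetime,
                           now: datetime) -> bool:
    """Typical client freshness check: any response inside its validity
    window is accepted, even one produced days ago."""
    return this_update <= now <= next_update

# A response produced a week ago with a two-week validity window is still
# treated as "fresh" today, so revocation news can be up to two weeks stale.
produced = datetime(2010, 7, 23)
now = datetime(2010, 7, 30)
print(response_is_acceptable(produced, produced + timedelta(days=14), now))  # prints: True
```

Nothing in that check distinguishes an 80ms-old answer from a week-old one, which is precisely the architect's complaint.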

The problem here is best seen by looking at certificates as capabilities.

1. You have an arbitrary and unknown number of capabilities floating around out
   there.

2. Some of those capabilities (CA certs) have the ability to mint new
   capabilities.

2a. These capabilities can impersonate existing capabilities, and because of
(1) the real issuer of the capabilities has no idea that they exist.

And the means of dealing with these unknown numbers of arbitrarily-identified
capabilities is... a blacklist.

There's no way this can possibly ever work.  It's the 1960s credit-card model
that Perry mentioned with the added twist that there are an unknown number of
cards and issuers involved, and some of the cards can invent new cards
whenever they feel like it.

Peter.



Re: A slight modification of my comments on PKI.

2010-07-30 Thread Stephan Neuhaus

On Jul 29, 2010, at 22:23, Anne & Lynn Wheeler wrote:

> On 07/28/2010 10:34 PM, d...@geer.org wrote:
>> The design goal for any security system is that the number of
>> failures is small but non-zero, i.e., N>0.  If the number of
>> failures is zero, there is no way to disambiguate good luck
>> from spending too much.  Calibration requires differing outcomes.
>> Regulatory compliance, on the other hand, stipulates N==0 failures
>> and is thus neither calibratable nor cost effective.  Whether
>> the cure is worse than the disease is an exercise for the reader.
> 
> another design goal for any security system might be "security proportional 
> to risk". 

Warning:  self-promotion (well, rather: project promotion) ahead.

This is exactly what we are trying to do in an EU project in which I'm 
involved. The project, called MASTER, is more concerned with regulatory 
compliance than security, even though security of course plays a large role.

The insight is that complex systems will probably never have N = 0 (in Dan's 
terms), so we will have to calibrate the controls so that the N becomes 
commensurate with the risk.  To do this, we have two main tools:

First, there is a methodology that describes in detail how to break down your 
high-level regulatory goals (which we call control objectives) into actionable 
pieces. This breakdown tells you exactly what you need to control, and how. It 
is controlled by risk analysis, so you can say at any point why you made 
certain decisions, and conversely, if a regulation changes, you know exactly 
which parts of your processes are affected (assuming the risk analysis doesn't 
have to be completely redone as part of the regulatory change).

Second, as part of this breakdown process, you define, for each broken-down 
control objective, indicators.  These are metrics that indicate (1) whether the 
process part you are currently looking at is  compliant (i.e., has low enough 
N), and (2) whether this low N is pure luck or the result of well-placed and 
correctly functioning controls.

One benefit of having indicators at every level of breakdown is that you get 
metrics that mean something *at this level*. For example, at the lowest level, 
you might get "number of workstations with outdated virus signatures", while at 
the top you might get "money spent in the last year on lawsuits asserting a 
breach of privacy". This forces one to do what Andrew Jaquith calls 
"contextualisation" in his book, and prevents the approach sadly taken by so 
many risk analysis papers, namely simply propagating "risk values" from the 
leaves of a risk tree to the root using some propagation rule, leaving the root 
with a beautifully computed, but sadly irrelevant, number. Another benefit is 
that if some indicator is out of some allowed band, the remedy will usually be 
obvious to a person working with that indicator. In other words, our indicators 
are actionable.
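The indicator idea can be made concrete with a small sketch (illustrative only; the names, values, and allowed bands below are invented and are not MASTER's actual model):

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A metric tied to one broken-down control objective, with an allowed
    band; an out-of-band value is meant to be directly actionable at the
    level where the indicator lives."""
    name: str
    value: float
    low: float
    high: float

    def in_band(self) -> bool:
        return self.low <= self.value <= self.high

indicators = [
    # Lowest level: operational metric.
    Indicator("workstations with outdated virus signatures", 3, 0, 10),
    # Top level: business metric, in the same framework.
    Indicator("privacy-lawsuit spend, last year (kEUR)", 250, 0, 100),
]
for ind in indicators:
    print(f"{ind.name}: {'OK' if ind.in_band() else 'ACTION NEEDED'}")
```

The point of keeping indicators at every level, rather than propagating one number to the root, is that each out-of-band indicator already names the thing to fix.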

The question of whether the cure is worse than the disease can't be settled 
definitively by us.  We have done some evaluation of our approach, and 
preliminary results seem to indicate that users like it. (This is said with all 
the grains of salt usually associated with preliminary user studies.) How much 
it costs to deploy is unknown, since the result of our project will be a 
prototype rather than an industrial-strength product, but our approach allows 
you to deploy only parts.

Best,

Stephan


Re: Persisting /dev/random state across reboots

2010-07-30 Thread Thomas
Am Donnerstag 29 Juli 2010, 21:47:01 schrieb Richard Salz:
> At shutdown, a process copies /dev/random to /var/random-seed which is
> used on reboots.
> Is this a good, bad, or "shrug, whatever" idea?
> I suppose the idea is that "all startup procs look the same" ?

Indeed. The boot process of a machine is very deterministic,
and if you do not have any hardware RNG you need to seed
/dev/random.
At least old Linux kernels (2.4) also overestimated the entropy
in the pool by about 30%, which is especially a problem when you
generate ssh host keys during system installation.
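The usual countermeasure is the seed-file dance Richard describes. A minimal Python sketch of what init scripts typically do (illustrative; the path is invented, real scripts use something like /var/lib/random-seed and run as root):

```python
import os

SEED_FILE = "/tmp/random-seed"   # real init scripts use e.g. /var/lib/random-seed
SEED_SIZE = 512

def save_seed():
    """At shutdown: save pool output for the next boot (read urandom,
    which never blocks or drains /dev/random)."""
    seed = os.urandom(SEED_SIZE)
    fd = os.open(SEED_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, seed)
    finally:
        os.close(fd)

def restore_seed():
    """At boot: write the saved bytes back into the pool.  On Linux, writing
    to /dev/urandom mixes the bytes in but does NOT credit entropy (only the
    RNDADDENTROPY ioctl can do that).  Then overwrite the seed file so the
    same bytes are never replayed on a later boot."""
    with open(SEED_FILE, "rb") as f, open("/dev/urandom", "wb") as pool:
        pool.write(f.read())
    save_seed()   # immediately replace the seed

save_seed()
restore_seed()
```

Without the final rewrite step, a crash between boots would replay the same seed, making the "deterministic boot" problem worse rather than better.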

Bye
Thomas


-- 
 Thomas Biege , SUSE LINUX, Security Support & Auditing
 SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nuernberg)
--
  Wer aufhoert besser werden zu wollen, hoert auf gut zu sein.
-- Marie von Ebner-Eschenbach
