Re: Five Theses on Security Protocols

2010-08-02 Thread Anne Lynn Wheeler

On 08/01/2010 01:51 PM, Jeffrey I. Schiller wrote:

I remember them well. Indeed these protocols, presumably you are
talking about Secure Electronic Transactions (SET), were a major
improvement over SSL, but adoption was killed not only by failing to
give the merchants a break on the fraud surcharge, but also by requiring
the merchants to pick up the up-front cost of upgrading all of their
systems to use these new protocols. And there was the risk that it
would turn off consumers, because it required that consumers set up
credentials ahead of time. So if a customer arrived at my SET-protected
store-front, they might not be able to make a purchase if
they had not already set up their credentials. Many would just go to a
competitor that doesn't require SET rather than establish the
credentials.


The SET specification predated these (it was also internet-specific, from the 
mid-90s, and ran concurrently with the X9A10 financial standards work ... which had a 
requirement to preserve the integrity of *ALL* retail payments) ... the 
past decade's efforts came later, were much simpler and more practical ... and 
tended to be various kinds of something you have authentication. I'm 
unaware of any publicity and/or knowledge about these payment products (from a 
decade ago) outside the payment industry and select high-volume merchants.

The mid-90s PKI/certificate-based specifications tended to hide behind a large 
amount of complexity ... and provided no effective additional benefit over and 
above SSL (aka, with all the additional complexity ... they did little more than 
hide the transaction during transit on the internet). They also would strip all 
the PKI gorp off at the internet boundary (because of the 100-times payload-size 
and processing bloat that the certificate processing represented) and send the 
transaction thru the payment network with just a flag indicating that certificate 
processing had occurred (end-to-end security was not feasible). Various past posts 
mentioning the 100-times payload-size and processing bloat that certificates added 
to typical payment transactions:
http://www.garlic.com/~lynn/subpubkey.html#bloat

In the time-frame of some of the pilots, there were presentations by 
payment-network business people at ISO standards meetings saying that they were 
seeing transactions come thru the network with the certificate-processed flag 
on ... but it could be proved that no certificate processing had actually 
occurred (there was financial motivation to lie, since turning the flag on 
lowered the interchange fee).

The certificate processing overhead also further increased merchant 
processing overhead ... and was in large part responsible for the low uptake ... 
even with some benefit of a lowered interchange fee. The associations looked at 
providing additional incentive (somewhat similar to the more recent 
point-of-sale hardware-token incentives in Europe), effectively changing the 
burden of proof in disputes (rather than the merchant having to prove the 
consumer was at fault, the consumer would have to prove they weren't at fault; 
of course this would have met with some difficulty in the US with regard to 
Regulation E).

Old thread interchange with members of that specification team regarding how 
the specification was (effectively) never intended to do more than hide the 
transaction during transmission:
http://www.garlic.com/~lynn/aepay7.htm#norep5 non-repudiation, was re: crypto 
flaw in secure mail standards

aka the high-overhead, convoluted, complex processing of the specification 
provided little practical added benefit over and above what was already being 
provided by SSL.

oblique reference to that specification in a recent post in this thread, 
regarding having done both a PKI-operation benchmark profile (using the BSAFE 
library) as well as a business-benefit profile of the specification (when it 
was initially published ... before any operational pilots):
http://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI

with regard specifically to the BSAFE processing bloat referenced in the above 
... there is folklore that one of the people working on the specification 
admitted to adding a huge number of additional PKI operations (and message 
interchanges) to the specification ... effectively for no other reason than the 
added complexity and use of PKI operations.

--
virtualization experience starting Jan1968, online at home since Mar1970

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread D. K. Smetters

Jonathan Katz wrote:

On Sat, 31 Jul 2010, Jakob Schlyter wrote:


On 31 jul 2010, at 08.44, Peter Gutmann wrote:

Apparently the DNS root key is protected by what sounds like a five-of-seven
threshold scheme, but the description is a bit unclear.  Does anyone know
more?


The DNS root key is stored in HSMs. The key backups (maintained by 
ICANN) are encrypted with a storage master key (SMK), created inside 
the HSM and then split among 7 people (aka Recovery Key Share 
Holders). To recover the SMK in case of all 4 HSMs going bad, 5 of 7 
key shares are required. (https://www.iana.org/dnssec/icann-dps.txt 
section 5.2.4)


According to the FIPS 140-2 Security Policy of the HSM, an AEP Keyper, 
the M-of-N key split is done using a Lagrange interpolating polynomial.



I'd be happy to answer any additional questions,

jakob (part of the team who designed and implemented this)


This is just Shamir secret sharing, not real threshold cryptography. 
(In a threshold cryptosystem, the shares would be used in a protocol to 
perform the desired cryptographic operation [e.g., signing] without ever 
reconstructing the real secret.) Has real threshold cryptography never 
been used anywhere?
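For reference, the plain Shamir reconstruction being contrasted here can be sketched as follows (a minimal illustration over a small prime field with toy parameters, not the Keyper's actual implementation):

```python
import random

P = 2**127 - 1  # Mersenne prime field; a real HSM would use vetted parameters

def split(secret: int, k: int, n: int):
    """Shamir k-of-n: each share is a point on a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the free coefficient (the secret)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = split(0xC0FFEE, 5, 7)           # 5-of-7, as in the ICANN setup
assert recover(shares[:5]) == 0xC0FFEE   # any 5 shares suffice
assert recover(shares[2:]) == 0xC0FFEE
```

Note that `recover` materializes the secret in one place, which is exactly Katz's point: in a true threshold cryptosystem the shares would be used without ever reconstructing it.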


CertCo (RIP) built a Certification Authority software product that used 
real threshold cryptography. Key shares for a k of n scheme (where k 
and n were chosen at key split time) were stored in hardware crypto 
tokens, and signatures were generated by having k tokens generate 
partial signatures and then combining them into a regular RSA sig. The 
system was deployed as the SET root CA for some time; we did try to sell 
it as a regular software product, but (corporate) political issues made 
that somewhat challenging. I honestly don't remember whether it was 
deployed by anyone else for anything other than SET, but it may be that 
one of the (many) other CertCo alumni on this list might.
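The combine-partial-signatures idea can be shown with the simplest possible variant: an additive split of the RSA private exponent. This is an illustrative toy 2-of-2 sketch only, not CertCo's actual k-of-n scheme, and the parameters are deliberately tiny:

```python
import random

# Toy parameters: real deployments use 2048+ bit keys held inside hardware tokens.
p, q = 999983, 1000003
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

d1 = random.randrange(phi)       # share held by token 1
d2 = (d - d1) % phi              # share held by token 2; no party ever holds d

m = 424242                       # message representative, gcd(m, n) == 1
partial1 = pow(m, d1, n)         # each token computes a partial signature alone
partial2 = pow(m, d2, n)
sig = partial1 * partial2 % n    # combining step: m^(d1+d2) == m^d (mod n)

assert sig == pow(m, d, n)       # identical to an ordinary RSA signature
assert pow(sig, e, n) == m       # verifies with the unchanged public key
```

The key property is that the combined result is a regular RSA signature, so verifiers need no knowledge that thresholding was used.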


The original CertCo CA software used pretty simple threshold crypto (I 
can provide paper refs for the particular schemes we used if anyone 
really wants them), but by the time I left we had worked on verifiable 
schemes (where you can verify the partial sigs independently), 
proactivization (re-sharing to change k or n, or remove bad players), 
and so on. The deployed system did not implement distributed key 
generation, which had just appeared in the literature at that time -- 
the key was generated on one token, and then split; the key generation 
token was then intended to be destroyed.


Although the system was designed to be used in a globally distributed 
fashion, with automated systems for sending work (things to sign) to 
sites holding key shares (where each signing request was signed by an RA 
to authorize it), and then collecting and recombining the partial sigs, 
it turned out to be way too hard to use that way. I don't know if it was 
ever deployed in a truly geographically distributed configuration, 
rather than having all the shares (except for backups) kept at one site. 
(And as a result, I started working on usability of security :-).


Shortly after the last CertCo CA, Victor Shoup published a new threshold 
RSA scheme that made them much simpler to incorporate into deployable 
systems; building a system that uses real threshold crypto would be 
pretty easy these days if one wanted to. If nothing else, it's a great 
example for cryptographers of how small changes in the algebraic 
formulation of something can have large impact on how easy it is to 
build into systems.


--Diana Smetters



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Thierry Moreau

Peter Gutmann wrote:

Thierry Moreau thierry.mor...@connotech.com writes:


With the next key generation for DNS root KSK signature key, ICANN may have
an opportunity to improve their procedure.


What they do will really depend on what their threat model is.  I suspect that
in this case their single biggest threat was lack of display of sufficient
due diligence, thus all the security calisthenics (remember the 1990s Clipper
key escrow procedures, which involved things like having keys generated on a
laptop in a vault with the laptop optionally being destroyed afterwards, just
another type of security theatre to reassure users).  Compare that with the
former mechanism for backing up the Thawte root key, which was to keep it on a
floppy disk in Mark Shuttleworth's sock drawer because no-one would ever look
for it there.  Another example of this is the transport of an 1894-S dime
(worth just under 2 million dollars) across the US, which was achieved by
having someone dress in somewhat grubby clothes and fly across the country in
cattle class with the slabbed coin in his pocket, because no-one would imagine
that some random passenger on a random flight would be carrying a ~$2M coin.
So as this becomes more and more routine I suspect the accompanying
calisthenics will become less impressive.

(What would you do with the DNSSEC root key if you had it?  There are many 
vastly easier attack vectors to exploit than trying to use it, and even if you 
did go to the effort of employing it, it'd be obvious what was going on as 
soon as you used it and your fake signed data started appearing, c.f. the 
recent Realtek and JMicron key issues.  So the only real threat from its loss 
seems to be acute embarrassment for the people involved, thus the due-diligence 
exercise).




I fully agree with the general ideas above, with one very tiny exception 
explained in the next paragraph. The DNSSEC root key ceremonies remain 
nonetheless an opportunity to review the practical implementation details.


The exception lies in a section of a paranoia scale where few 
organizations would position themselves. So let me explain it with an 
enemy of the USG, e.g. the DNS resolver support unit in a *.mil.cc 
organization. Once their user base relies on DNSSEC for traffic encryption 
keys, they become vulnerable to spoofed DNS data responses. I leave it 
as an exercise to write the protocol details of a hypothetical attack, 
given that Captain Pueblo in unito-223.naval.mil.cc routinely relies on 
a web site secured by DNSSEC to get instructions about where to sail his 
warship on June 23, 2035 (using the unrealistic assumption that 
Pueblo's validating resolver uses only the official DNS root trust anchor).


Regards,


Peter.




--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: A mighty fortress is our PKI, Part II

2010-08-02 Thread Bill Frantz

On 7/28/10 at 8:52 PM, pfarr...@pfarrell.com (Pat Farrell) wrote:


When was the last time you used a paper Yellow Pages?


Err, umm, this last week. I'm in a place where cell coverage 
(AT&T; Verizon has a better reputation) is spotty and internet 
is a dream due to a noisy land line. I needed to find a ceramic 
tile store. The paper yellow pages had survived being left in 
the driveway in the rain, and I used it.


However, I agree that this is the 2% case for many parts of the world.

Cheers - Bill

---
Bill Frantz        | Web security is like medicine - trying to do good for
408-356-8506       | an evolved body of kludges - Mark Miller
www.periwinkle.com |



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Jerry Leichter

On Aug 1, 2010, at 7:10 AM, Peter Gutmann wrote:

Thanks to all the folks who pointed out uses of m-of-n threshold schemes,
however all of them have been for the protection of one-off, very high-value
keys under highly controlled circumstances by trained personnel.  Does anyone
know of any significant use by J. Random Luser?  I'm interested in this from a
usability point of view.

As a corollary, has anyone gone through the process of recovering a key from
shares held by different parties at some time after the point at which they
were created (e.g. six months later), the equivalent of running a restore from
your backups at some point to make sure that you can actually recover your
system from it?  How did it go?

It'll be interesting to see the responses, but ... in this particular  
case, we do actually have plenty of experience from physical  
applications.  Vaults that can be opened only by multiple people each  
entering one part of the combination have been around for some time.   
For that matter, that requires mechanisms for having multiple people  
set their part of the combination.  Even requirements for dual  
signatures on checks beyond a certain size are precedents.


One could certainly screw up the design of a recovery system, but one  
would have to try.  There really ought not be that much of a difference  
between recovering from m pieces and recovering from one.


Of course, one wonders how well even the simpler mechanisms work in  
practice.  The obvious guess:  At installations that actually exercise  
and test their recovery mechanisms regularly, they work.  At  
installations that set up a recovery mechanism and then forget about  
it until a lightning strike takes out their KDC 5 years later ...  
well, I wouldn't place big bets on a successful recovery.  This isn't  
really any different from other business continuity operations:   
Backup systems that are never exercised, redundant power systems that  
are simply kept in silent reserve for years ... none of these are  
likely to work when actually needed.

-- Jerry






After spyware fails, UAE gives up and bans Blackberries

2010-08-02 Thread David G. Koontz
http://arstechnica.com/tech-policy/news/2010/08/after-spyware-failed-uae-gives-up-and-bans-blackberries.ars
By John Timmer


The article discusses, in general terms, the resistance of RIM's BlackBerry 
enterprise email connections (to RIM's servers in Canada) to United Arab 
Emirates monitoring efforts when used by enterprise customers (bankers).

From the article:

  Why the apparent ire is focused on the devices themselves rather than
  the general approach isn't clear. An SSL connection to an offshore e-mail
  server would seem to create just as much trouble as RIM's approach, but
  there don't seem to be any efforts afoot to clamp down on other
  smartphone platforms.

The first thing that comes to mind is SSL MITM interception.  Has the UAE
compelled Etisalat to aid in MITM?

You might expect a government to be a bit more subtle dancing around
plausible deniability.  Enough concerns and the 'marks' just may develop
alternative means.




Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Peter Gutmann
Jerry Leichter leich...@lrw.com writes:

One could certainly screw up the design of a recovery system, but one  
would have to try.  There really ought not be that much of a difference  
between recovering from m pieces and recovering from one.

There's a *huge* difference, see my previous posting on this the last time the 
topic came up, 
http://www.mail-archive.com/cryptography@metzdowd.com/msg07671.html:

  the cognitive load imposed is just so high that most users can't cope with 
  it, particularly since they're already walking on eggshells because they're 
  working on hardware designed to fail closed (i.e. lock everything out) if 
  you as much as look at it funny.

The last time I went through this exercise for a high-value key, after quite 
some time going through the various implications, by unanimous agreement we 
went with lock an encrypted copy in two different safes (this was for an 
organisation with a lot of experience with physical security, and their threat 
assessment was that anyone who could compromise their physical security would 
do far more interesting things with the capability than stealing a key).

For the case of DNSSEC, what would happen if the key was lost?  There'd be a 
bit of turmoil as a new key appeared and maybe some egg-on-face at ICANN, but 
it's not like commercial PKI with certs with 40-year lifetimes hardcoded into 
every browser on the planet is it?  Presumably there's some mechanism for 
getting the root (public) key distributed to existing implementations, could 
this be used to roll over the root or is it still a manual config process for 
each server/resolver?  How *is* the bootstrap actually done, presumably you 
need to go from no certs in resolvers to certs in resolvers through some 
mechanism.

Peter.



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Jerry Leichter

On Aug 2, 2010, at 2:30 AM, Peter Gutmann wrote:


Jerry Leichter leich...@lrw.com writes:


One could certainly screw up the design of a recovery system, but one
would have to try.  There really ought not be that much of a difference
between recovering from m pieces and recovering from one.


There's a *huge* difference, see my previous posting on this the last time 
the topic came up,
http://www.mail-archive.com/cryptography@metzdowd.com/msg07671.html:

 the cognitive load imposed is just so high that most users can't cope with
 it, particularly since they're already walking on eggshells because they're
 working on hardware designed to fail closed (i.e. lock everything out) if
 you as much as look at it funny

Well ... we do have a history of producing horrible interfaces.

Here's how I would do it:  Key segments are stored on USB sticks.   
There's a spot on the device with m USB slots, two buttons, and red  
and green LEDs.  You put your USB keys into the slots and push the  
first button.  If the red LED lights - you don't have enough sticks,  
or they aren't valid.  If the green LED lights, you have a valid key.   
If the green LED lights, you push the second button (which is  
otherwise disabled), and the device loads your key.  (The device could  
also create the USB sticks initially by having a save key setting -  
probably controlled by a key lock.  Voting out and replacing a  
segment requires a bit more, but could be designed along similar lines.)


You can use some kind of secure USB stick if you like.  The content of  
a USB stick is standard - there has to be a file with a known name and  
some simple format, so it's easy to re-create a USB stick from a paper  
copy of the key.


Since specialized hardware is expensive, you can approximate this  
process with software (assuming you get a competent designer).  You  
can get by with only one USB slot, but given the tiny cost of USB hubs  
- I can buy a complete 10-port USB hub, power adapter included,  
shipped free, for less than $16 at mertiline.com, for example (and  
that's gross overkill) - it's probably worth it to give users a nice  
physical feel of inserting multiple keys into multiple locks.


I just don't see the great cognitive load involved, if the problem is  
presented properly.

-- Jerry




Re: init.d/urandom : saving random-seed

2010-08-02 Thread John Denker
On 07/31/2010 09:00 PM, Jerry Leichter wrote:

 I wouldn't recommend this for high-value security, but then if you're
 dealing with high-value information, there's really no excuse for not
 having and using a source of true random bits.

Yes indeed!

 On the question of what to do if we can't be sure the saved seed file
 might be reused:  Stir in the date and time and anything else that might
 vary - even if it's readily guessable/detectable - along with the seed
 file.  [1]

  This adds minimal entropy, but detecting that a seed file has
 been re-used will be quite challenging.  A directed attack can probably
 succeed, but if you consider the case of a large number of nodes that
 reboot here and there and that, at random and not too often, re-use a
 seed file, then detecting those reboots with stale seed files seems like
 a rather hard problem.  (Detecting them *quickly* will be even harder,
 so active attacks - as opposed to passive attacks that can be made on
 recorded data - will probably be out of the question.)

I've been thinking about that.  That approach might be even *better*
than it first appears.

By way of background, recall that a good design for the central part
of a PRNG is:
   output = hash(key, counter)          [2]
where
  -- the hash is your favorite cryptographically strong hash function;
  -- the counter is just a plain old counter, with enough bits
   to ensure that it will never wrap around; and
  -- the key is unique to this instance of the PRNG, is unknown to
   the attackers, and has enough bits to rule out dictionary attacks.
  -- There should be some scheme for reseeding the key every so
   often, using real entropy from somewhere.  This is outside of what
   I call the central part of the PRNG, so let's defer discussion
   of this point for a few moments.

Note that this works even though the counter has no entropy at all.
It works even if the attacker knows the counter values exactly.
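The construction in [2] can be sketched with HMAC-SHA256 standing in for the keyed hash (the key value and block count here are illustrative):

```python
import hmac, hashlib

def prng_block(key: bytes, counter: int) -> bytes:
    """output = hash(key, counter), with HMAC-SHA256 as the strong keyed hash."""
    return hmac.new(key, counter.to_bytes(16, "big"), hashlib.sha256).digest()

key = b"per-machine secret with >= 128 bits of entropy"   # unique per instance
stream = b"".join(prng_block(key, i) for i in range(4))   # 128 bytes of output

# The counter is public and predictable; security rests entirely on the key.
assert prng_block(key, 0) != prng_block(key, 1)
assert len(stream) == 128
```

As the text says, an attacker who knows every counter value learns nothing without the key, which is why a date/time string works fine as the counter even with zero entropy.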

This is crucial to an analysis of idea [1], because I am not sure
that the date/time string has any useful amount of entropy.  Let's
be clear: if the attacker knows what time it is, the date/time
string contains no entropy at all.

Now, if all we need is a /counter/ then the merit of idea [1] goes
up dramatically.

I reckon date +%s.%N makes a fine counter.

Note that date is /bin/date (not /usr/bin/date) so it is usable
very early in the boot process, as desired.

This requires that all boxes have a working Real Time Clock, which
seems like a reasonable requirement.
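Concretely, the seed-plus-counter idea might look like this in an early-boot script (a hedged sketch: the 512-byte size is illustrative, a temp file stands in for the persistent /var/lib random-seed file, and a real init.d/urandom script varies by distribution):

```shell
# Stand-in for the persistent random-seed file, for illustration only.
SEED=$(mktemp)
head -c 512 /dev/urandom > "$SEED"          # pretend this was saved at last shutdown

# Early boot: mix the saved seed _and_ a date-based counter into the pool.
{ cat "$SEED"; /bin/date +%s.%N; } > /dev/urandom

# Rewrite the seed file as soon as possible, so a crash cannot cause reuse.
{ /bin/date +%s.%N; head -c 512 /dev/urandom; } > "$SEED.new"
mv "$SEED.new" "$SEED"
```

Note the update happens immediately after seeding, matching the "as soon thereafter as convenient" rule below.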

Security demands that the key in equation [2] be unique on a
machine-by-machine  basis.  This means that if I want my live CD 
to be secure, I cannot simply download a standard .iso image and 
burn it to CD.  I need to
 -- download the .iso image
 -- give it a random-seed file with something unique, preferably 
  from the output of a good TRNG, and
 -- then burn it to disk.

I have a very preliminary draft of a script to install a random-seed
file into an Ubuntu live-CD image.
  http://www.av8n.com/computer/randomize-live-cd
Suggestions for improvement would be welcome.

=

So, let's summarize the recommended procedure as I understand it.
There are two modes, which I call Mode A and Mode B.

In both modes:

 *) there needs to be a random-seed file.  The contents must be
  unique and unknown to the attackers.
 *) /dev/urandom should block or throw an error if it is used before
  it is seeded
 *) early in the boot process, the PRNG should be seeded using
  the random-seed file _and_ date +%s.%N.  This should happen
   -- after the random-seed file becomes readable
   -- after the Real Time Clock is available
   -- as soon thereafter as convenient, and
   -- before there is any need to use the output of /dev/urandom
 *) This is all that is necessary for Mode B, which provides a
  /modest/ level of security for a /modest/ level of exposure.  
  As a first rough guess I suggest limiting exposure to 1000 
  hours of operation or 1000 reboots, whichever comes first.
 *) Mode A is the same as Mode B, but has no exposure limits
  because the random-seed is replaced before the Mode-B limits
  are reached.  
 *) It is nice to update the  random-seed on every reboot.  
  This should happen
   -- after the random-seed file becomes writable
   -- as soon thereafter as convenient, to minimize the chance
that the system will crash before the update occurs.
 *) The random-seed file should be updated again during shutdown.
  This allows recovery from a situation where the random-seed
  file might have been compromised.
 *) Updating fails if some wiseguy mounts a filesystem in such a
  way that the random-seed file that gets updated is not the one
  that will be used for seeding the PRNG.  AFAICT Mode A depends
  on having the random-seed file in local writeable persistent
  storage, not on (say) a networked remote file system.  In some
  cases the init.d/urandom script would have to be customized to 
  locate the random-seed file on a 

Re: Five Theses on Security Protocols

2010-08-02 Thread Ian G

On 1/08/10 9:08 PM, Peter Gutmann wrote:

John Levine jo...@iecc.com writes:


Geotrust, to pick the one I use, has a warranty of $10K on their cheap certs
and $150K on their green bar certs.  Scroll down to the bottom of this page
where it says Protection Plan:

http://www.geotrust.com/resources/repository/legal/

It's not clear to me how much this is worth, since it seems to warrant mostly
that they won't screw up, e.g., leak your private key, and they'll only pay
to the party that bought the certificate, not third parties that might have
relied on it.


A number of CAs provide (very limited) warranty cover, but as you say it's
unclear that this provides any value because it's so locked down that it's
almost impossible to claim on it.


Although distasteful, this is more or less essential.  The problem is 
best seen like this: take all the potential relying parties for a large 
site / large CA, and multiply that by the damages in a (hypothetical) 
fat-ass class action suit.  Think phishing, or an MD5 crunch, or a 
random debian code downsizing.


What results is a Very Large Number (tm).

By fairly standard business processes one ends up at the sad but 
inevitable principle:


   the CA sets expected liabilities to zero

And must do so.  Note that there is a difference between expected 
liabilities and liabilities stated in some document.  I use the term 
expected in the finance sense (c.f. Net Present Value calculations).


In practice, this is what could be called best practices, to the extent 
that I've seen it.


http://www.iang.org/papers/open_audit_lisa.html#rlo says the same thing 
in many many pages, and shows how CAcert does it.




Does anyone know of someone actually
collecting on this?


I've never heard of anyone collecting, but I wish I had (heard).


Could an affected third party sue the cert owner


In theory, yes.  This is expected.  In some sense, the certificate's 
name might be interpreted as suggesting that because the name is 
validated, then you can sue that person.


However, I'd stress that's a theory.  See the above paper for my trashing of 
that (What's in a Name?) at an individual level.  I'd speculate that 
the problem will be some class action suit, because of the enormous 
costs involved.




who can
then claim against the CA to recover the loss?


If the cause of loss is listed in the documentation . . .


Is there any way that a
relying party can actually make this work, or is the warranty cover more or
less just for show?


We are facing Dan Geer's disambiguation problem:

  The design goal for any security system is that the
  number of failures is small but non-zero, i.e., N > 0.
  If the number of failures is zero, there is no way
  to disambiguate good luck from spending too much.
  Calibration requires differing outcomes.


Maybe money can buy luck ;)



iang



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Peter Trei

On 7/31/2010 2:54 PM, Adam Shostack wrote:

On Sat, Jul 31, 2010 at 06:44:12PM +1200, Peter Gutmann wrote:
| Apparently the DNS root key is protected by what sounds like a five-of-seven
| threshold scheme, but the description is a bit unclear.  Does anyone know
| more?
|
| (Oh, and for people who want to quibble over practically-deployed, I'm not
|  aware of any real usage of threshold schemes for anything, at best you have
|  combine-two-key-components (usually via XOR), but no serious use of real n-
|  of-m that I've heard of.  Mind you, one single use doesn't necessarily count
|  as practically deployed either).

We had a 3 of 7 for the ZKS master keys back in the day. When we
tested, we discovered that no one had written the secret-combining
code, and so Ian Goldberg wrote some and posted it to usenet for
backup.
   
At RSA Security back in the early 2000s, I devised protection schemes, 
and wrote product code using 5 of 7 Shamir secret sharing for certain 
products.


Peter Trei





Re: After spyware fails, UAE gives up and bans Blackberries

2010-08-02 Thread Perry E. Metzger
On Mon, 02 Aug 2010 15:10:51 +1200 David G. Koontz
david_koo...@xtra.co.nz wrote:
 http://arstechnica.com/tech-policy/news/2010/08/after-spyware-failed-uae-gives-up-and-bans-blackberries.ars

See also:

https://www.nytimes.com/2010/08/02/business/global/02berry.html

The BBC did a story on this today in which (pretty shockingly) they
talked to a security expert who talked only about how bad the
security problems are because the government can't read the messages,
especially because theoretical terrorists could use the blackberries
to discuss criminal activity. No discussion at all of alternate
viewpoints or the security risks associated with built-in
eavesdropping technology.

Even the New York Times story discussed the issue entirely in privacy
terms, and did not discuss the security risks that GAK systems
pose. There is no guarantee, once an eavesdropping system is
implemented, that it will be used only for legitimate purposes -- see,
for example, the scandal in which Greek government ministers were
listened to using the lawful intercept features of cellphone
equipment.

Perry
-- 
Perry E. Metzger pe...@piermont.com



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Jeffrey Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

OK. I'm being a bit lazy but...

I've read through the ceremony script and all that, but I have a
simple question which the script documents didn't really answer:

Does the root KSK exist in a form that doesn't require the HSM to
re-join, or more to the point if the manufacturer of the HSM fails, is
it possible to re-join the key and load it into a different vendor's
HSM?

In other words, is the value that is split the raw key, or is it in
some proprietary format or encrypted in some vendor internal key?

Back in the day we used an RSA SafeKeyper to store the IPRA key (there
is a bit of history, we even had a key ceremony with Vint Cerf in
attendance). This was the early to mid '90s.

The SafeKeyper had an internal tamper key that was used to encrypt all
exported backups (in addition to the threshold secrets required). If
the box failed, you could order a replacement with the same internal
tamper key. However, you could not obtain the tamper key itself, and
therefore could not choose to switch HSM vendors.

-Jeff


- -- 

Jeffrey I. Schiller
Information Services and Technology
Massachusetts Institute of Technology
77 Massachusetts Avenue  Room W92-190
Cambridge, MA 02139-4307
617.253.0161 - Voice
j...@mit.edu
http://jis.qyv.name

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFMVtt98CBzV/QUlSsRAvCRAJ0esya4xAMEXsFOFUF0kcBaue40owCfRsjZ
Ep+hF6LLzEcS+BDQYPvNbfg=
=qzNb
-END PGP SIGNATURE-



GSM eavesdropping

2010-08-02 Thread Bill Squier
...In his presentation at the Black Hat Conference, German GSM expert Karsten 
Nohl presented a tool he calls Kraken, which he claims can crack the A5/1 
encryption used for cell phone calls within seconds.

http://www.h-online.com/security/news/item/Quickly-decrypting-cell-phone-calls-1048850.html

-wps


Re: GSM eavesdropping

2010-08-02 Thread Perry E. Metzger
On Mon, 2 Aug 2010 11:02:54 -0400 Bill Squier g...@old-ones.com
wrote:
 ...In his presentation at the Black Hat Conference, German GSM
 expert Karsten Nohl presented a tool he calls Kraken, which he
 claims can crack the A5/1 encryption used for cell phone calls
 within seconds.

 http://www.h-online.com/security/news/item/Quickly-decrypting-cell-phone-calls-1048850.html

This is a really important development. I'll quote a bit more of the
article so people can understand why:

   In his presentation at the Black Hat Conference, German GSM expert
   Karsten Nohl presented a tool he calls Kraken, which he claims can
   crack the A5/1 encryption used for cell phone calls within
   seconds. But first, you have to record the GSM call with a GSM
   catcher, which you can build yourself based on a Universal Software
   Radio Peripheral (USRP), which costs just under $1500, and the
   open source GNURadio software.

   To crack the key, Kraken uses rainbow tables, which Nohl calculated
   with ATI graphics processors (GPUs). During a live demonstration,
   the tool cracked the key for a recorded phone call within about 30
   seconds. Nohl then decoded the file with Airprobe and converted it
   into an audio file using Toast.

   "Today, recording and cracking GSM is as easy as attacking WiFi was
   a few years ago," the security expert told The H's associates at
   heise Security.

Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: Five Theses on Security Protocols

2010-08-02 Thread Adam Fields
On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
[...]
 3 Any security system that demands that users be educated,
   i.e. which requires that users make complicated security decisions
   during the course of routine work, is doomed to fail.
[...]

I would amend this to say which requires that users make _any_
security decisions.

It's useful to have users confirm their intentions, or notify the user
that a potentially dangerous action is being taken. It is not useful
to ask them to know (or more likely guess, or even more likely ignore)
whether any particular action will be harmful or not.

-- 
- Adam
--
If you liked this email, you might also like:
Some iPad apps I like 
-- http://workstuff.tumblr.com/post/680301206
Sous Vide Black Beans 
-- http://www.aquick.org/blog/2010/07/28/sous-vide-black-beans/
Sous Vide Black Beans 
-- http://www.flickr.com/photos/fields/4838987109/
fields: Readdle turns 3: Follow @readdle, RT to win an #iPad. $0.99 for any 
ap... 
-- http://twitter.com/fields/statuses/20072241887
--
** I design intricate-yet-elegant processes for user and machine problems.
** Custom development project broken? Contact me, I can help.
** Some of what I do: http://workstuff.tumblr.com/post/70505118/aboutworkstuff

[ http://www.adamfields.com/resume.html ].. Experience
[ http://www.morningside-analytics.com ] .. Latest Venture
[ http://www.confabb.com ]  Founder



Re: GSM eavesdropping

2010-08-02 Thread Frank A. Stevenson
On Mon, 2010-08-02 at 11:02 -0400, Bill Squier wrote:
 ...In his presentation at the Black Hat Conference, German GSM expert 
 Karsten Nohl presented a tool he calls Kraken, which he claims can crack the 
 A5/1 encryption used for cell phone calls within seconds.
 
 http://www.h-online.com/security/news/item/Quickly-decrypting-cell-phone-calls-1048850.html
 

A quick list of bullet points on what is new here:

* 2 TB (1.7 TB compressed) of GSM A5/1 rainbow tables have been created.
* These tables leverage the fact that A5/1 suffers from keyspace
convergence: after the initial 100 warm-up clockings, only 16% of the
keyspace remains valid.
* The rainbow tables sample only the converged space; each such sample
is equivalent to sampling the (on average) 13 initial states that
converge to the sampled point.
* Efficient ATI GPU code has been written that allowed us to compute
the tables in 8 GPU-months; they were effectively completed in just 4
weeks, using 4 computers and 850 kWh of power.
* Depending on the random-access speed of the storage medium, the
64-bit key for a particular conversation can be cracked in minutes or
seconds.
* We have made all software and the tables freely available.
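The convergence effect behind these tables can be illustrated with a toy experiment: iterate a fixed pseudorandom function (standing in here for A5/1's warm-up clocking; the 16% figure is specific to A5/1's structure, and a generic random function shrinks further) over a small state space and watch the reachable set collapse.

```python
import hashlib

BITS = 14                  # toy state space of 2**14 states (A5/1's is 2**64)
N = 1 << BITS

def step(state: int) -> int:
    # A fixed pseudorandom function standing in for one warm-up clocking.
    digest = hashlib.sha256(state.to_bytes(2, "big")).digest()
    return int.from_bytes(digest[:2], "big") % N

reachable = set(range(N))
for _ in range(100):       # analogous to the 100 warm-up clockings
    reachable = {step(s) for s in reachable}

print(len(reachable) / N)  # only a few percent of the space survives
```

Because only the surviving fraction of states can ever occur after warm-up, the tables need only cover that fraction, which is the saving described above.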

Frank A. Stevenson




Re: [Pkg-sysvinit-devel] init.d/urandom : saving random-seed

2010-08-02 Thread Henrique de Moraes Holschuh
On Mon, 02 Aug 2010, Christoph Anton Mitterer wrote:
 On Sat, 2010-07-31 at 13:36 -0700, John Denker wrote:
   And we should move the seed file to somewhere inside /etc or /lib.  It is 
   as
   simple as that.  /var cannot be used for any data you need at early
   userspace.
  
  There are strong arguments for _not_ putting the random-seed in /etc
  or /lib.  There are lots of systems out there which for security 
  reasons and/or performance reasons have /etc and /lib on permanently
  readonly partitions.

In this case, the requirement is that the seed MUST be available to early
userspace.  There are *absolute* requirements for early userspace, which
trump anything else, including the FHS.  When you cannot meet those
requirements, you have to abandon the idea of doing it in early userspace.
It is THAT simple.

Early userspace means you have a read-only / and some virtual filesystems
mounted, a partial /dev (probably on tmpfs), and that's about it.

So anything used by early userspace *must* go in /.  The only hierarchies
that are always in / (for *Debian*) are:

/etc
/bin
/sbin
/lib* (/lib32, /lib64...)

We can certainly create some other toplevel hierarchy which is required to
be in /.  But that is generally considered a worse option than adding the
hierarchy inside /lib or /etc.

Also, if you want read-only /lib or /etc, you want read-only /.  This is
completely incompatible with file-based random seed persistence done right.
You will have to do away with the file-based approach, and use a hardware
TRNG that can work in early boot, just like a Live-DVD/Live-CD would.

Or you can edit the scripts to have / remounted read-write for the window
required to refresh the seed files at late boot and shutdown.
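That remount-and-refresh window can be sketched as follows (paths are illustrative: a temporary directory stands in for the real seed location on /, and a real init script would do the remount dance around the writes):

```shell
#!/bin/sh
# Sketch of late-boot/shutdown seed handling. SEED_DIR stands in for a
# writable spot on / (after "mount -o remount,rw /"); POOLSIZE would
# normally be read from /proc/sys/kernel/random/poolsize.
SEED_DIR=$(mktemp -d)
SEED_FILE="$SEED_DIR/random-seed"
POOLSIZE=512

save_seed() {
    # Capture pool output so the next boot starts from an unpredictable
    # state; keep the file unreadable to other users.
    umask 077
    dd if=/dev/urandom of="$SEED_FILE" bs="$POOLSIZE" count=1 2>/dev/null
}

restore_seed() {
    # Early boot: mix the saved seed back in (writing to /dev/urandom
    # mixes without crediting entropy), then replace the seed file
    # immediately so the same seed is never used twice.
    cat "$SEED_FILE" > /dev/urandom
    save_seed
}

save_seed       # at shutdown
restore_seed    # at next boot
```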

read-only / is also not a supported configuration, so it is your problem to
make sure your system behaves sanely with read-only /.  We *do* support
Live-CD-style schemes, though (where writes are lost upon reboot), so we can
certainly get rng-tools migrated to / to help those.

 I'm not sure whether it's really strictly the case that /var is
 completely local. It might be in Debian, but AFAIU the FHS

That doesn't matter.  It is not in /, so it is not available for early
userspace.  Therefore, it doesn't meet the requirements.

 /lib/ doesn't fit either IMO,... /boot sounds perhaps ok?!

No. /boot can (and often is) a separate partition.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh



Re: GSM eavesdropping

2010-08-02 Thread Adrian Hayter
In a related story, hacker Chris Paget created his own cell-phone base station 
that turned off encryption on all devices connecting to it. The station then 
routed the calls through VoIP.

http://www.wired.com/threatlevel/2010/07/intercepting-cell-phone-calls/

-Adrian

On 2 Aug 2010, at 16:02, Bill Squier wrote:

 ...In his presentation at the Black Hat Conference, German GSM expert 
 Karsten Nohl presented a tool he calls Kraken, which he claims can crack the 
 A5/1 encryption used for cell phone calls within seconds.
 
 http://www.h-online.com/security/news/item/Quickly-decrypting-cell-phone-calls-1048850.html
 
 -wps



Re: GSM eavesdropping

2010-08-02 Thread Adam Fields
On Mon, Aug 02, 2010 at 04:55:04PM +0100, Adrian Hayter wrote:
 In a related story, hacker Chris Paget created his own cell-phone base 
 station that turned off encryption on all devices connecting to it. The 
 station then routes the calls through VoIP.
 
 http://www.wired.com/threatlevel/2010/07/intercepting-cell-phone-calls/

Apropos the theses thread, this article contains mention of an
interesting security feature:

'Although the GSM specifications say that a phone should pop up a
warning when it connects to a station that does not have encryption,
SIM cards disable that setting so that alerts are not displayed'

That would be an example of a bad security tradeoff with the intended
result of not bugging the user about something over which they have
neither control nor recourse, but with the actual result of opening a
significant security hole. The incentives are also all misaligned
here. Presumably the right thing to do is refuse to connect to any
unencrypted towers, but assuming that there are some legitimate ones
out in the wild, the net effect is probably just worse service for the
end user. The user has no way to tell the difference, which is of
course the point of using encryption in the first place.

-- 
- Adam



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Peter Gutmann
Jerry Leichter leich...@lrw.com writes:

Here's how I would do it:  Key segments are stored on USB sticks. There's a
spot on the device with m USB slots, two buttons, and red and green LED's.
You put your USB keys into the slots and push the first button.  If the red
LED lights - you don't have enough sticks, or they aren't valid.  If the
green LED lights, you have a valid key. If the green LED lights, you push the
second button (which is otherwise disabled), and the device loads your key.

That's a good start, but it gets a bit more complicated than that in practice
because you've got multiple components, and a basic red light/green light
system doesn't really provide enough feedback on what's going on.  What you'd
need in practice is (at least) some sort of counter to indicate how many
shares are still outstanding to recreate the secret ("We still need two more
shares, I guess we'll have to call Bob in from Bratislava after all").  Also
the UI for recreating shares if one gets lost gets tricky, depending on how
much metadata you can assume if a share is lost (e.g. "We've lost share 5 of
7" vs. "We've lost one of the seven shares"), and suddenly you get a bit
beyond what the UI of an HSM is capable of dealing with.

With a two-share XOR it's much simpler, two red LEDs that turn green when the
share is added, and you're done.  One share is denoted 'A' and the other is
denoted 'B', that should be enough for the shareholder to remember.
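For illustration, the two-share XOR scheme is small enough to sketch completely (a toy model of the data flow only; a real HSM keeps the key material inside its tamper boundary):

```python
import os

def split_xor(secret: bytes) -> tuple[bytes, bytes]:
    # Share A is uniformly random; share B = secret XOR A. Either share
    # alone is statistically independent of the secret.
    a = os.urandom(len(secret))
    b = bytes(x ^ y for x, y in zip(secret, a))
    return a, b

def join_xor(a: bytes, b: bytes) -> bytes:
    # (secret XOR A) XOR A == secret
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)              # the secret to be escrowed
share_a, share_b = split_xor(key)
assert join_xor(share_a, share_b) == key
```

This also shows why the red/green feedback stays simple: with exactly two shares there is no "how many are still outstanding" state to display.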

If you really wanted to be rigorous about this you could apply the same sort
of analysis that was used for weak/strong links and unique signal generators
to see where your possible failure points lie.  I'm not sure if anyone's ever
done this [0], or whether it's just a case of "build in enough redundancy and
we should be OK".

Peter.

[0] OK, I can imagine scenarios where it's quite probably been done, but
anyone involved in the work is unlikely to be allowed to talk about it.



Re: GSM eavesdropping

2010-08-02 Thread Perry E. Metzger
On Mon, 2 Aug 2010 12:12:25 -0400 Adam Fields
cryptography23094...@aquick.org wrote:
 
 Apropos the theses thread, this article contains mention of an
 interesting security feature:
 
 'Although the GSM specifications say that a phone should pop up a
 warning when it connects to a station that does not have encryption,
 SIM cards disable that setting so that alerts are not displayed'
 
 That would be an example of a bad security tradeoff with the
 intended result of not bugging the user about something over which
 they have neither control nor recourse, but with the actual result
 of opening a significant security hole. The incentives are also all
 misaligned here. Presumably the right thing to do is refuse to
 connect to any unencrypted towers, but assuming that there are some
 legitimate ones out in the wild, the net effect is probably just
 worse service for the end user. The user has no way to tell the
 difference, which is of course the point of using encryption in the
 first place.

The GSM situation is an example of many problems at once -- bad UI
decisions, the bad decision to allow unencrypted traffic, bad crypto
algorithms even when you get crypto, susceptibility to downgrade
attacks, etc.

Looking forward, the "there should be one mode, and it should be
secure" philosophy would claim that there should be no insecure mode
for a protocol. Of course, virtually all protocols we use right now
had their origins in the days of the Crypto Wars (when we often added
too many knobs) or before (in the days when people assumed no crypto
at all), and thus come in encrypted and unencrypted varieties of all
sorts.

For example, in the internet space, we have http, smtp, imap and other
protocols in both plain and ssl flavors. (IPSec was originally
intended to mitigate this by providing a common security layer for
everything, but it failed, for many reasons. Nico mentioned one that
isn't sufficiently appreciated, which was the lack of APIs to permit
binding of IPSec connections to users.)


Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: GSM eavesdropping

2010-08-02 Thread John Kemp
On Aug 2, 2010, at 11:08 AM, Perry E. Metzger wrote:

 On Mon, 2 Aug 2010 11:02:54 -0400 Bill Squier g...@old-ones.com
 wrote:
 ...In his presentation at the Black Hat Conference, German GSM
 expert Karsten Nohl presented a tool he calls Kraken, which he
 claims can crack the A5/1 encryption used for cell phone calls
 within seconds.
 
 http://www.h-online.com/security/news/item/Quickly-decrypting-cell-phone-calls-1048850.html
 
 This is a really important development.

Others have previously cracked A5/1, and Mr Nohl's efforts are not news: 
http://www.pcworld.com/businesscenter/article/185552/gsm_encryption_cracked_showing_its_age.html
 but the main thing here appears to be the compilation of the rainbow tables.

Also, it's worth noting that the GSMA has had A5/3 GSM encryption available 
(http://gsmworld.com/documents/a5_3_and_gea3_specifications.pdf -- PDF) since 
2008, but that the improved technology has apparently not yet seen large-scale 
adoption by mobile operators. 

Regards,

- johnk

 -- 
 Perry E. Metzger  pe...@piermont.com
 



Re: GSM eavesdropping

2010-08-02 Thread Paul Wouters

On Mon, 2 Aug 2010, Perry E. Metzger wrote:


For example, in the internet space, we have http, smtp, imap and other
protocols in both plain and ssl flavors. (IPSec was originally
intended to mitigate this by providing a common security layer for
everything, but it failed, for many reasons. Nico mentioned one that
isn't sufficiently appreciated, which was the lack of APIs to permit
binding of IPSec connections to users.)


If that were a major issue, then SSL would have been much more successful
than it has been.

I have good hopes that soon we'll see use of our new biggest cryptographically
signed distributed database. And part of the signalling can come in via the
AD bit in DNSSEC (e.g. by adding an EDNS option to ask for special additional
records signifying "SHOULD do crypto with this pubkey").

The AD bit might be a crude signal, but it's fairly easy to implement at
the application level. Requesting specific additional records will remove
the need for another latency driven DNS lookup to get more crypto information.
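For concreteness, the AD ("Authentic Data") bit sits in the flags word of the DNS header, so checking it in a raw response takes only a few lines (a sketch over hand-built headers, not a full resolver; the RFC 4035 bit position is the only real fact used):

```python
import struct

AD_BIT = 0x0020  # "Authentic Data" flag in the DNS header flags word (RFC 4035)

def ad_is_set(response: bytes) -> bool:
    # The DNS header is the first 12 bytes; the flags word is bytes 2-3.
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & AD_BIT)

# Hand-built 12-byte headers: a typical response (QR|RD|RA = 0x8180),
# with and without AD set.
with_ad    = struct.pack("!6H", 0x1234, 0x8180 | AD_BIT, 1, 1, 0, 0)
without_ad = struct.pack("!6H", 0x1234, 0x8180, 1, 1, 0, 0)
```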

And it would obsolete the broken CA model, while gaining improved support for
SSL certs by removing all those end-user warnings.

Paul



Re: GSM eavesdropping

2010-08-02 Thread Perry E. Metzger
On Mon, 2 Aug 2010 12:45:46 -0400 John Kemp j...@jkemp.net wrote:
 On Aug 2, 2010, at 11:08 AM, Perry E. Metzger wrote:
 
  On Mon, 2 Aug 2010 11:02:54 -0400 Bill Squier g...@old-ones.com
  wrote:
  ...In his presentation at the Black Hat Conference, German GSM
  expert Karsten Nohl presented a tool he calls Kraken, which he
  claims can crack the A5/1 encryption used for cell phone calls
  within seconds.
  
  http://www.h-online.com/security/news/item/Quickly-decrypting-cell-phone-calls-1048850.html
  
  This is a really important development.
 
 Others have previously cracked A5/1, and Mr Nohl's efforts are not
 news:

I think demonstrating the whole thing being done for a
negligible amount of money is indeed news.

 Also, it's worth noting that the GSMA has had A5/3 GSM encryption
 available
 (http://gsmworld.com/documents/a5_3_and_gea3_specifications.pdf --
 PDF) since 2008, but that the improved technology has apparently
 not yet seen large-scale adoption by mobile operators. 

Perhaps they hadn't seen a low cost demonstration of the threat yet.

-- 
Perry E. Metzger  pe...@piermont.com



Re: GSM eavesdropping

2010-08-02 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 12:32:23PM -0400, Perry E. Metzger wrote:
 Looking forward, the there should be one mode, and it should be
 secure philosophy would claim that there should be no insecure
 mode for a protocol. Of course, virtually all protocols we use right
 now had their origins in the days of the Crypto Wars (in which case,
 we often added too many knobs) or before (in the days when people
 assumed no crypto at all) and thus come in encrypted and unencrypted
 varieties of all sorts.
 
 For example, in the internet space, we have http, smtp, imap and other
 protocols in both plain and ssl flavors. [...]

Well, to be fair, there is much content to be accessed insecurely for
the simple reason that there may be no way to authenticate a peer.  For
much of the web this is the case.

For example, if I'm listening to music on an Internet radio station, I
could care less about authenticating the server (unless it needs to
authenticate me, in which case I'll want mutual authentication).  Same
thing if I'm reading a random blog entry or a random news story.

By analogy to the off-line world, we authenticate business partners, but
in asymmetric broadcast-type media, authentication is very weak and only
of the broadcaster to the receiver.  If we authenticate broadcasters at
all, we do it by such weak methods as recognizing logos, broadcast
frequencies, etcetera.

In other words, context matters.  And the user has to understand the
context.  This also means that the UI matters.  I hate to demand any
expertise of the user, but it seems unavoidable.  By analogy to the
off-line world, con-jobs happen, and they happen because victims are
naive, inexperienced, ill, senile, etcetera.  We can no more protect the
innocent at all times online than off, not without their help.

"There should be one mode, and it should be secure" is a good idea, but
it's not as universally applicable as one might like.  *sadness*

SMTP and IMAP, then, definitely require secure modes.  So does LDAP,
even though it's used to access -mostly- public data, and so is more
like broadcast media.  NNTP must not even bother with a secure mode ;)

Another problem you might add to the list is tunneling.  Firewalls have
led us to build every app as a web or HTTP application, and to tunnel
all the others over port 80.  This makes the relevant context harder, if
not impossible, to resolve without the user's help.

HTTP, sadly, needs an insecure mode.

Nico
-- 



Re: GSM eavesdropping

2010-08-02 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 01:05:53PM -0400, Paul Wouters wrote:
 On Mon, 2 Aug 2010, Perry E. Metzger wrote:
 
 For example, in the internet space, we have http, smtp, imap and other
 protocols in both plain and ssl flavors. (IPSec was originally
 intended to mitigate this by providing a common security layer for
 everything, but it failed, for many reasons. Nico mentioned one that
 isn't sufficiently appreciated, which was the lack of APIs to permit
 binding of IPSec connections to users.)
 
 If that was a major issue, then SSL would have been much more successful
 then it has been.

How should we measure success?  Every user on the Internet uses TLS
(SSL) on a daily basis.  None uses IPsec for anything other than VPN
(the three people who use IPsec for end-to-end protection on the
Internet are too few to count).

By that measure TLS has been so much more successful than IPsec as to
prove the point.

Of course, TLS hasn't been successful in the sense that we care about
most.  TLS has had no impact on how users authenticate (we still send
usernames and passwords) to servers, and the way TLS authenticates
servers to users turns out to be very weak (because of the plethora of
CAs, and because transitive trust isn't all that strong).

 I have good hopes that soon we'll see use of our new biggest
 cryptographically signed distributed database. And part of the
 signalling can come in via the AD bit in DNSSEC (eg by adding an EDNS
 option to ask for special additional records signifying SHOULD do
 crypto with this pubkey)
 
 The AD bit might be a crude signal, but it's fairly easy to implement
 at the application level. Requesting specific additional records will
 remove the need for another latency driven DNS lookup to get more
 crypto information.
 
 And obsolete the broken CA model while gaining improved support for
 SSL certs by removing all those enduser warnings.

DNSSEC will help immensely, no doubt, and mostly by giving us a single
root CA.

But note that the one bit you're talking about is necessarily a part of
a resolver API, thus proving my point :)

The only way we can avoid having such an API requirement is by ensuring
that all zones are signed and all resolvers always validate RRs.  An API
is required in part because we won't get there from day one (that day
was decades ago).

The same logic applies to IPsec.  Suppose we'd deployed IPsec and DNSSEC
back in 1983... then we might have many, many apps that rely on those
protocols unknowingly, and that might be just fine...

...but we grow technologies organically, therefore we'll never have a
situation where the necessary infrastructure gets deployed in a secure
mode from the get-go.  This necessarily means that applications need
APIs by which to cause and/or determine whether secure modes are in
effect.

Nico
-- 



Re: Five Theses on Security Protocols

2010-08-02 Thread Anne Lynn Wheeler

minor addenda about speeds & feeds concerning the example of the mid-90s payment 
protocol specification that had enormous PKI/certificate bloat ... and SSL.

The original SSL security was predicated on the user understanding the 
relationship between the webserver they thought they were talking to, and the 
corresponding URL. They would enter that URL into the browser ... and the 
browser would then establish that the URL corresponded to the webserver being 
talked to (both parts were required in order to create an environment where the 
webserver you thought you were talking to was, in fact, the webserver you were 
actually talking to). This requirement was almost immediately violated when 
merchant servers found that using SSL for the whole operation cost them 90-95% 
of their throughput. As a result, the merchants dropped back to just using SSL 
for the payment part and having the user click on a check-out/payment button. 
The (potentially unvalidated, counterfeit) webserver now provides the URL ... 
and SSL has been reduced to just validating that the URL corresponds to the 
webserver being talked to (or validating that the webserver being talked to is 
the webserver that it claims to be; i.e. NOT validating that the webserver is 
the one you think you are talking to).
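The check being described, i.e. that the certificate presented by the server matches the host in the user-entered URL, corresponds to what a modern TLS stack does with hostname checking enabled (a sketch of today's configuration, not of 1990s SSL; the connect helper is illustrative):

```python
import socket
import ssl

# A default context enforces both halves of the original SSL promise:
# the certificate must chain to a trusted root (CERT_REQUIRED) and must
# match the hostname taken from the URL the user entered.
ctx = ssl.create_default_context()
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED

def tls_connect(url_host: str, port: int = 443) -> ssl.SSLSocket:
    # server_hostname ties the handshake to the user's URL; a cert for
    # any other name makes the handshake fail.
    raw = socket.create_connection((url_host, port))
    return ctx.wrap_socket(raw, server_hostname=url_host)
```

Note that this still only validates that the server matches the name it was given; if a counterfeit page supplies the URL, the same gap Lynn describes remains.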

Now, the backend of the SSL payment process was an SSL connection between the 
webserver and a payment gateway (which sat on the internet and acted as gateway 
to the payment networks). Under moderate to heavy load, the average transaction 
elapsed time (at the payment gateway, through the payment network) was under 
1/3rd of a second round-trip. The average round-trip at merchant servers could 
be a little over 1/3rd of a second (depending on the internet connection between 
the webserver and the payment gateway).

I've referenced before doing BSAFE benchmarks for the PKI/certificate bloated 
payment specification ... and using a speeded up BSAFE library ... the people 
involved in the bloated payment specification claimed the benchmark numbers 
were 100 times too slow (apparently believing that standard BSAFE library at 
the time ran nearly 1000 times faster than it actually did).

When pilot code (for the enormously bloated PKI/certificate specification) was 
finally available, using the BSAFE library (the speedup enhancements had been 
incorporated into the standard distribution) ... dedicated pilot demos took 
nearly a minute of elapsed time per transaction round trip ... effectively all 
of it BSAFE computations (using dedicated computers doing nothing else).

Merchants that found using SSL for the whole consumer interaction would have 
required ten to twenty times the number of computers ... to handle equivalent 
non-SSL load ... were potentially being faced with needing hundreds of 
additional computers to handle just the BSAFE computational load (for the 
mentioned extremely PKI/certificate bloated payment specification) ... and 
still wouldn't be able to perform the transaction anywhere close to the elapsed 
time of the implementation being used with SSL.

--
virtualization experience starting Jan1968, online at home since Mar1970



/dev/random and virtual systems

2010-08-02 Thread Yaron Sheffer

Hi,

the interesting thread on seeding and reseeding /dev/random did not 
mention that many of the most problematic systems in this respect are 
virtual machines. Such machines (when used for cloud computing) are 
not only servers, and so have few sources of true and hard-to-observe 
entropy; often they are also cloned from snapshots of a single virtual 
machine, i.e. many VMs start life with one common RNG state and don't 
even know they are clones.


In addition to the mitigations that were discussed on the list, such 
machines could benefit from seeding /dev/random (or periodically 
reseeding it) from the *host machine's* RNG. This is one thing that's 
guaranteed to be different between VM instances. So my questions to the 
list: is this useful? Is it doable with popular systems (e.g. Linux 
running on VMware or VirtualBox)? Is it actually being done?
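On Linux the uncredited half of this is doable today: any process may write bytes into /dev/random or /dev/urandom and they are mixed into the pool (crediting the entropy estimate additionally requires the root-only RNDADDENTROPY ioctl). A sketch, with os.urandom standing in for bytes delivered from the host's RNG (e.g. over virtio-rng or a guest agent):

```python
import os

def feed_guest_pool(host_bytes: bytes, dev: str = "/dev/urandom") -> int:
    # Writes are mixed into the kernel pool but do NOT raise the entropy
    # estimate; that requires the RNDADDENTROPY ioctl as root.
    with open(dev, "wb") as pool:
        return pool.write(host_bytes)

written = feed_guest_pool(os.urandom(64))  # host-supplied bytes (simulated)
```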


Thanks,
Yaron
