ADMIN: slowly shutting down the high level security discussion, and formatting

2010-08-03 Thread Perry E. Metzger
The discussion spurred by Peter Gutmann's original mail on
astonishingly widely authoritative certs has gone on for quite a
while, and much of what is now being said is repetitive. I'll be
using a pretty heavy hand on moderating the messages for the moment,
unless people come up with particularly interesting things to say.

On another note: please, please, please trim replies to messages,
avoid top posting, and do not send HTML multiparts.

I also note a new trend, in which people are failing to format their
messages very effectively or break them into paragraphs. This makes
reading extremely difficult. Please take the few moments necessary to
assure your messages are readable before clicking send.

Perry
-- 
Perry E. Metzger  pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: /dev/random and virtual systems

2010-08-03 Thread Henrique de Moraes Holschuh
On Mon, 02 Aug 2010, Paul Wouters wrote:
> On Mon, 2 Aug 2010, Yaron Sheffer wrote:
> >In addition to the mitigations that were discussed on the list,
> >such machines could benefit from seeding /dev/random (or
> >periodically reseeding it) from the *host machine's* RNG. This is
> >one thing that's guaranteed to be different between VM instances.
> >So my question to the list: is this useful? Is this doable with
> >popular systems (e.g. Linux running on VMWare or VirtualBox)? Is
> >this actually being done?
> 
> Both xen and kvm do not do this currently. It is problematic for servers.

The virtio-rng driver does it almost out-of-the-box, but it is sort of
new.

Both Xen and KVM let you create communication channels between the
Hypervisor and a specific VM, which you can use to distribute entropy
from the hypervisor to rng-tools inside the VM.
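As a sketch of that relay (device paths are illustrative; on Linux guests virtio-rng typically appears as /dev/hwrng, and in practice rngd does this job and also credits entropy via an ioctl):

```python
# Hedged sketch of relaying entropy from a hypervisor-provided channel
# into the guest's pool. A plain write to /dev/random mixes bytes into
# the kernel pool without crediting entropy; crediting needs RNDADDENTROPY.
import io

def relay_entropy(src, dst, chunk=64, limit=4096):
    """Copy up to `limit` bytes from src to dst in chunk-sized reads."""
    moved = 0
    while moved < limit:
        data = src.read(min(chunk, limit - moved))
        if not data:
            break
        dst.write(data)
        moved += len(data)
    return moved

if __name__ == "__main__":
    # Stand-ins for open("/dev/hwrng", "rb") and open("/dev/random", "wb")
    src, dst = io.BytesIO(b"\x55" * 256), io.BytesIO()
    assert relay_entropy(src, dst) == 256
```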

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh



Re: /dev/random and virtual systems

2010-08-03 Thread Perry E. Metzger
On Mon, 2 Aug 2010 20:17:42 -0300 Henrique de Moraes Holschuh
 wrote:
> Desktops with live-CDs and half-assed embedded boxes that lack a
> TRNG are the real problem.

I'm not sure what to do about the live CD problem, but in a previous
iteration of this discussion a couple of years ago, I proposed that
using a strong cipher (like AES) with a key installed at the factory
was probably the right solution to the $40 embedded device problem. I
can dig up my much longer exposition on that if anyone wishes.
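A rough sketch of that idea, with SHA-256 in counter mode standing in for the cipher (Python's stdlib has no AES; a real device would run AES-CTR keyed with the unique per-device factory key, and would have to persist the counter so a (key, counter) pair is never reused across reboots):

```python
# Sketch only: a deterministic generator keyed by a per-device factory key.
# SHA-256-in-counter-mode stands in for AES-CTR here.
import hashlib

def keystream(factory_key: bytes, nbytes: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        block = hashlib.sha256(
            factory_key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:nbytes])
```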

-- 
Perry E. Metzger  pe...@piermont.com



Re: GSM eavesdropping

2010-08-03 Thread Perry E. Metzger
On Tue, 3 Aug 2010 17:49:00 +0200 Eugen Leitl  wrote:
> Encryption is cheap enough (especially if you cache keys from
> previous sessions). Why not encrypt everything?

I'm not sure it is actually cheap enough in all cases. Imagine the
state explosion problem that DNS root servers would face, for
example, in providing pairwise cryptographic sessions for all
queries, especially in a situation where for the most part one only
wants to get a response that is authenticated but which is not per se
secret.

Also, as a practical matter, we don't really have protocol
infrastructure for encrypting absolutely everything at this point.
There is, for example, no protocol by which anonymous DNS queries
could be easily encrypted.

-- 
Perry E. Metzger  pe...@piermont.com



Re: GSM eavesdropping

2010-08-03 Thread Eugen Leitl
On Mon, Aug 02, 2010 at 03:46:24PM -0500, Nicolas Williams wrote:

> > "The default mode for any internet communication is encrypted"
> 
> That's... extreme.  There are many things that will not be encrypted,

Extreme? I don't see why my ISP should be able to inspect and monetize
my data stream.

> starting with the DNS itself, and also most public contents (because

Encryption is cheap enough (especially if you cache keys from
previous sessions). Why not encrypt everything?

> their purveyors won't want to pay for the crypto; sad but true).

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-03 Thread Thierry Moreau

Peter Gutmann wrote:

> That's a good start, but it gets a bit more complicated than that in practice
> because you've got multiple components, and a basic red light/green light
> system doesn't really provide enough feedback on what's going on.  What you'd
> need in practice is (at least) some sort of counter to indicate how many
> shares are still outstanding to recreate the secret ("We still need two more
> shares, I guess we'll have to call Bob in from Bratislava after all").  Also
> the UI for recreating shares if one gets lost gets tricky, depending on how
> much metadata you can assume if a share is lost (e.g. "We've lost share 5 of
> 7" vs. "We've lost one of the seven shares"), and suddenly you get a bit
> beyond what the UI of an HSM is capable of dealing with.



There is more than the UI at stake here, i.e. the basic functionality of 
the scheme. Say you distribute shares in a 4 out of 7 scheme (ABCDEFG) 
and share A is published on the web. How do you recover from the 
remaining 3 out of 6 scheme into a 4 out of 6 scheme without having a 
key ceremony? In an ad-hoc multi-party scheme, you request 4 of the 
remaining compliant parties to destroy key material allowing them to 
participate in a group with the traitor A, but no other key material. No 
system UI, but admittedly a coordination nightmare!



--
- Thierry Moreau



> With a two-share XOR it's much simpler, two red LEDs that turn green when the
> share is added, and you're done.  One share is denoted 'A' and the other is
> denoted 'B', that should be enough for the shareholder to remember.
>
> If you really wanted to be rigorous about this you could apply the same sort
> of analysis that was used for weak/strong links and unique signal generators
> to see where your possible failure points lie.  I'm not sure if anyone's ever
> done this [0], or whether it's just "build in enough redundancy that we
> should be OK".
>
> Peter.
>
> [0] OK, I can imagine scenarios where it's quite probably been done, but
> anyone involved in the work is unlikely to be allowed to talk about it.






Using file-hiding rootkits for good

2010-08-03 Thread Peter Gutmann
I recently came across an example of a file-hiding rootkit for Windows that's
used for good instead of evil: It's a minifilter that hides (or at least
blocks, the files are still visible) access to executables on removable media,
with user-configurable options to block autorun.inf and/or all executables, as
well as making files on the media non-executable (although you could still map
them into memory and then execute them from there if you really wanted to).
This is a neat idea, since it stops a pile of exploits that take advantage of
the autorun capability.  More at http://blog.didierstevens.com/programs/ariad/.

Peter.



Re: GSM eavesdropping

2010-08-03 Thread Jerry Leichter

On Aug 2, 2010, at 1:25 PM, Nicolas Williams wrote:

> On Mon, Aug 02, 2010 at 12:32:23PM -0400, Perry E. Metzger wrote:
>> Looking forward, the "there should be one mode, and it should be
>> secure" philosophy would claim that there should be no insecure
>> mode for a protocol. Of course, virtually all protocols we use right
>> now had their origins in the days of the Crypto Wars (in which case,
>> we often added too many knobs) or before (in the days when people
>> assumed no crypto at all) and thus come in encrypted and unencrypted
>> varieties of all sorts.
>>
>> For example, in the internet space, we have http, smtp, imap and
>> other protocols in both plain and ssl flavors. [...]
>
> Well, to be fair, there is much content to be accessed insecurely for
> the simple reason that there may be no way to authenticate a peer. For
> much of the web this is the case.
>
> For example, if I'm listening to music on an Internet radio station, I
> could care less about authenticating the server (unless it needs to
> authenticate me, in which case I'll want mutual authentication). Same
> thing if I'm reading a random blog entry or a random news story.
>
> By analogy to the off-line world, we authenticate business partners,
> but in asymmetric broadcast-type media, authentication is very weak
> and only of the broadcaster to the receiver. If we authenticate
> broadcasters at all, we do it by such weak methods as recognizing
> logos, broadcast frequencies, etcetera.

And, indeed, there are movie cons - and many episodes of Mission:
Impossible - that turn on the ability to plant false information by
convincingly impersonating such a medium.

> In other words, context matters.  And the user has to understand the
> context.  This also means that the UI matters.  I hate to demand any
> expertise of the user, but it seems unavoidable.  By analogy to the
> off-line world, con-jobs happen, and they happen because victims are
> naive, inexperienced, ill, senile, etcetera.  We can no more protect
> the innocent at all times online as off, not without their help.

It's not as if identification solves all problems anyway. A completely
secure link to use when sending your money to Bernie Madoff still
leaves you with nothing.

> "There should be one mode, and it should be secure" is a good idea,
> but it's not as universally applicable as one might like.  *sadness*

I think it's an oversimplification. Cryptography can, in effect,
guarantee the syntax: You receive the right bytes from a source whose
identity matches some purely internal description, and no one else
could have seen those bytes. But any real-world binding - of those
bytes to semantics, of that identity to some actual actor, of "no one
else" to trust that the sender didn't send it to Wikileaks because he
doesn't actually trust you ... all of this is entirely outside the
cryptographic envelope. There's no escaping the need to understand the
semantics, not just the syntax - as valuable as the syntax is.

> SMTP and IMAP, then, definitely require secure modes.  So does LDAP,
> even though it's used to access -mostly- public data, and so is more
> like broadcast media.  NNTP must not even bother with a secure mode ;)

Are you sure? How about maintaining the privacy of the groups to which
a user subscribes? (Granted, whoever you get your feed from obviously
knows - but why should anyone else be in a position to see the message
stream?)

> Another problem you might add to the list is tunneling.  Firewalls
> have led us to build every app as a web or HTTP application, and to
> tunnel all the others over port 80.

Indeed. Consider the all-too-well-named SOAP, which lets arbitrary
commands and data slip right through your firewall. If you step back a
moment and look at the whole notion of using firewalls to control port
numbers, you see just what an absurd corner we've gotten ourselves
into. We have an OS that can run multiple independent programs at
different port numbers, and is actually very competent at keeping them
isolated from each other, relying at base on hardware protection.
We've replaced it with a browser which nominally allows only one kind
of connection, but then runs multiple independent programs at
different HTTP addresses - and, with no hardware protection to help,
has proved quite incompetent at keeping them isolated from each other.
This is progress?

> This makes the relevant context harder, if not impossible to resolve
> without the user's help.

...but it opens the door to generations of improved DPI and other
technologies that try to do it for you. With limited success at the
original mission, but all kinds of interesting privacy-invading and
censorship-enabling new missions discovered along the way.

> HTTP, sadly, needs an insecure mode.

Hmm. I'm not exactly sure how that follows.

-- Jerry


Re: /dev/random and virtual systems

2010-08-03 Thread Thomas
Hi,
we are using haveged in our VMs to feed the random pool and
it seems to work well (meaning: statistical verification of
the output looks good, nearly 0 entropy overestimation, but
we never correlated output from cloned VMs).
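The kind of statistical verification Thomas mentions can be approximated crudely; real verification would use a battery such as dieharder or the NIST test suite. A minimal byte-frequency Shannon-entropy estimate (illustrative only, and passing it says very little by itself):

```python
# Crude sanity check: Shannon entropy of the byte-frequency distribution,
# in bits per byte (maximum 8.0 for uniformly distributed bytes).
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())
```

Uniform data scores near 8.0 bits/byte; a constant stream scores 0.0. A strong score is necessary but nowhere near sufficient for cryptographic quality.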

I assume feeding the VMs from the host system can be problematic
because the host system itself often doesn't have enough entropy.
Much entropy is needed today for protocols, session IDs and the
elf_loader(!).

Cheerio
Thomas

Am Montag 02 August 2010, 21:38:10 schrieb Yaron Sheffer:
> Hi,
> 
> the interesting thread on seeding and reseeding /dev/random did not
> mention that many of the most problematic systems in this respect are
> virtual machines. Such machines (when used for "cloud computing") are
> not only servers, so have few sources of true and hard-to-observe
> entropy. Often they are cloned from snapshots of a single virtual
> machine, i.e. many VMs start life with one common RNG state, that
> doesn't even know that it's a clone.
> 
> In addition to the mitigations that were discussed on the list, such
> machines could benefit from seeding /dev/random (or periodically
> reseeding it) from the *host machine's* RNG. This is one thing that's
> guaranteed to be different between VM instances. So my question to the
> list: is this useful? Is this doable with popular systems (e.g. Linux
> running on VMWare or VirtualBox)? Is this actually being done?
> 
> Thanks,
>  Yaron
> 



customizing Live CD images (was: urandom etc.)

2010-08-03 Thread John Denker
We have been discussing the importance of a unique random-seed
file for each system.  This is important even for systems that boot 
from read-only media such as CD.

To make this somewhat more practical, I have written a script
to remix a .iso image so as to add one or more last-minute files.
The leading application (but probably not the only application)
is adding random-seed files.

The script can be found at
  http://www.av8n.com/computer/fixup-live-cd

This version is literally two orders of magnitude more 
efficient than the rough pre-alpha version that I put up
yesterday ... and it solves a more general problem, insofar
as random-seed files are not the only things it can handle.

Early-boot software is outside my zone of comfort, let
alone expertise, so I reckon somebody who is friends with
Casper could make further improvements ... but at least 
for now this script serves as an "existence proof" to show 
that 
 a) the PRNG situation is not hopeless, even for read-only
   media; and
 b) it is possible to remix Live CD images automatically
   and somewhat efficiently.

I think by taking two steps we can achieve a worthwhile
improvement in security:
 -- each system should have its own unique random-seed
  file, with contents not known to the attackers; and
 -- the init.d/urandom script should seed the PRNG 
  using "date +%s.%N"  (as well as the random-seed file).

Neither step is worth nearly as much without the other,
but the two of them together seem quite worthwhile.
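A sketch of how the two inputs might combine (names are mine; the real init.d/urandom script is shell, and `date +%s.%N` is approximated here with `time.time_ns()`):

```python
# Sketch: derive boot-time seed material from the per-system random-seed
# file plus a high-resolution timestamp (the "date +%s.%N" step).
import hashlib
import time

def boot_seed(seed_file_bytes, stamp=None):
    if stamp is None:
        t = time.time_ns()
        stamp = "%d.%09d" % (t // 10**9, t % 10**9)  # like date +%s.%N
    return hashlib.sha256(seed_file_bytes + stamp.encode()).digest()
```

As the text argues, neither input alone suffices: identical clones share the seed file, and the timestamp by itself is guessable by an attacker.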



Re: /dev/random and virtual systems

2010-08-03 Thread Paul Wouters

On Mon, 2 Aug 2010, Yaron Sheffer wrote:

> In addition to the mitigations that were discussed on the list, such machines
> could benefit from seeding /dev/random (or periodically reseeding it) from
> the *host machine's* RNG. This is one thing that's guaranteed to be different
> between VM instances. So my question to the list: is this useful? Is this
> doable with popular systems (e.g. Linux running on VMWare or VirtualBox)? Is
> this actually being done?


Both xen and kvm do not do this currently. It is problematic for servers.

Paul



Re: lawful eavesdropping by governments - on you, via Google

2010-08-03 Thread John Gilmore
> There is no guarantee, once an eavesdropping system is
> implemented, that it will be used only for legitimate purposes -- see,
> for example, the scandal in which Greek government ministers were
> listened to using the "lawful intercept" features of cellphone
> equipment.

And, by the way, what ever happened with the Google "lawful access"
system and China?  Inside the Google internal network is a whole
wiretapping subsystem designed to answer orders and requests from cops
and governments all over the globe (including US warrants, subpoenas,
National Security Letters, and court orders, as well as those of other
countries).  The trigger that gave Google the sudden courage to tell
the Chinese where to stuff it, was that they analyzed the malware
which had succeeded in penetrating their internal network, and
discovered that it was designed to specifically try to break into
Google's internal wiretapping system -- presumably so China could do
covert wiretaps into the mountain of up-to-the-minute personal data
that is Google -- wiretaps that wouldn't get reported to the US
government or to Google management or to anybody else.

So, six months later, Google and the Chinese government had a nicely
staged negotiated moment where each of them could claim victory, and
things have gone more or less back to normal on the surface.  But
nobody on either side has said anything about what kind of access the
government of China is getting to Google's internal network.  My guess
is that their detente also involved some negotiation about that, not
just about censored or non-censored searches.  Anybody know more?

John

PS: One of the great things about having a big global company that
collects and retains massive data about individuals is that
governments can get that data with simple subpoenas.  Most of the time
they could never get a judge to sign a warrant, or the legislature to
pass a law, to collect the same information directly from the data
subject (i.e. you).  Why?  A terrible US Supreme Court decision
(California Bankers Association) from decades ago decided that you
have zero Fourth Amendment protection for data that third parties have
collected about you.  The government can't collect it themselves, by
watching you or searching your house or your communications, but they
can grab it freely from anybody who happens to collect it.  (In a
classic blow-a-hole-in-the-constitution-and-They-Will-Come maneuver,
numerous laws now *require* businesses to collect all kinds of data
about their customers, employees, etc, IN ORDER that governments can
later look at it with no Fourth Amendment protection for the victims.)
Google, of course, needed no law from the Feds to inspire them to make
a database entry every time you move your mouse from one side of the
screen to the other.  Or open your Google phone.  Or call their "free"
411 service.  Or read your email.  Or visit any web site (free "Google
Analytics" is on most, even if there are no ads).  Or ...



Re: /dev/random and virtual systems

2010-08-03 Thread Henrique de Moraes Holschuh
On Mon, 02 Aug 2010, Yaron Sheffer wrote:
> the interesting thread on seeding and reseeding /dev/random did not
> mention that many of the most problematic systems in this respect
> are virtual machines. Such machines (when used for "cloud

Any decent hypervisor can supply entropy to the VMs.  For about
US$100/hypervisor you add a slow speed (less than 1Mbit/s) TRNG, or you
can get a high-speed one for around US$ 1000/hypervisor, and distribute
the entropy for all VMs.  It is very cost-effective.

Datacenters are easy, you can just buy a few low power VIA PadLock boxes
and have them distribute several Mbit/s of entropy over the network.
You can have at least 2 of them per 1U, or a lot more for custom
designs or piled up in 2U using a shelf.

You don't need entropy to use asymmetric crypto to authenticate, receive
an encrypted session key, and proceed to receive an encrypted stream, so
the network and a cluster of entropy boxes is usable for initial seeding
as well.

Desktops with live-CDs and half-assed embedded boxes that lack a TRNG
are the real problem.

> In addition to the mitigations that were discussed on the list, such
> machines could benefit from seeding /dev/random (or periodically
> reseeding it) from the *host machine's* RNG. This is one thing
> that's guaranteed to be different between VM instances. So my
> question to the list: is this useful? Is this doable with popular
> systems (e.g. Linux running on VMWare or VirtualBox)? Is this
> actually being done?

It is done, yes.  I am not sure how out-of-the-box that is, but there
are Linux kernel drivers to get entropy from the hypervisor.

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh



Re: /dev/random and virtual systems

2010-08-03 Thread Paul Hoffman
At 10:38 PM +0300 8/2/10, Yaron Sheffer wrote:
>the interesting thread on seeding and reseeding /dev/random did not mention 
>that many of the most problematic systems in this respect are virtual 
>machines. Such machines (when used for "cloud computing") are not only 
>servers, so have few sources of true and hard-to-observe entropy. Often they 
>are cloned from snapshots of a single virtual machine, i.e. many VMs start 
>life with one common RNG state, that doesn't even know that it's a clone.
>
>In addition to the mitigations that were discussed on the list, such machines 
>could benefit from seeding /dev/random (or periodically reseeding it) from the 
>*host machine's* RNG. This is one thing that's guaranteed to be different 
>between VM instances. So my question to the list: is this useful? Is this 
>doable with popular systems (e.g. Linux running on VMWare or VirtualBox)? Is 
>this actually being done?

It is certainly doable: put a "file" on the host whose contents are random and 
change every second. On the VM, read that file on wakeup or boot and mix it 
into /dev/random. This guarantees a different value for each wakeup/boot, but 
not that every cloned machine that starts will have a unique state (because 
they might start within the same refresh). If you need that, you probably want 
to automatically mix in a microsecond-accurate time at the same time.
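The host side of this could be as small as the following sketch (the path is illustrative; the atomic replace keeps guests from ever reading a partially written file):

```python
# Sketch of the host-side refresher: overwrite a shared "seed file" with
# fresh random bytes. Run once per second from a loop or timer.
import os
import tempfile

def refresh_seed_file(path: str, nbytes: int = 64) -> None:
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(nbytes))
    os.replace(tmp, path)  # atomic: readers see old or new, never partial
```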

--Paul Hoffman, Director
--VPN Consortium



Re: GSM eavesdropping

2010-08-03 Thread Perry E. Metzger
On Mon, 2 Aug 2010 16:19:38 -0400 (EDT) Paul Wouters
 wrote:
> [Speaking here about DNSSEC...]
> Yes, but in some the API is pretty much done. If you trust your
> (local) resolver, the one bit is the only thing you need to check.
> You let the resolver do most of the bootstrap crypto. Once you have
> that, your app can rip out most of the X.509 nonsense and use the
> public key obtained from DNS for its further crypto needs.

I would like to note that this is not sufficient for the sort of
security I've been talking about.

If, for example, users are still authenticating to web sites by typing
in passwords over an encrypted channel, DNSSEC based keys don't
help. The user still has to actively make sure that they're not giving
their key away to the wrong web site.

You still need to re-engineer the system so that the user cannot
give away their credentials without serious effort. Simply changing
where the opaque third party certified key comes from doesn't help.

What DNSSEC really gives you is the ability to trust the replies the
DNS gives you -- to trust that a DNS label and IP address really are
bound together. It doesn't change the fact that the current user
authentication models are broken.

> > ...but we grow technologies organically, therefore we'll never
> > have a situation where the necessary infrastructure gets deployed
> > in a secure mode from the get-go.  This necessarily means that
> > applications need APIs by which to cause and/or determine whether
> > secure modes are in effect.
> 
> But by now, upgrades happen more automatically and more quickly. Adding
> something new to DNS won't take 10 years to get deployed. We've
> come a long way. It's time to reap the benefits from our new
> infrastructure.

I disagree that we can deploy new systems quickly. See, for example,
the large fraction of IE6 users in the world. Indeed, I suspect it
will be another 10 years before over 95% of machines are even paying
attention to DNSSEC.

Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: Five Theses on Security Protocols

2010-08-03 Thread Perry E. Metzger
On Mon, 2 Aug 2010 16:20:01 -0500 Nicolas Williams
 wrote:
> But users have to help you establish the context.  Have you ever
> been prompted about invalid certs when navigating to pages where
> you couldn't have cared less about the server's ID?  On the web,
> when does security matter?  When you fill in a field on a form?
> Maybe you're just submitting an anonymous comment somewhere.  But
> certainly not just when making payments.
>
> I believe the user has to be involved somewhat.  The decisions the
> user has to make need to be simple, real simple (e.g., never about
> whether to accept a defective cert).  But you can't treat the user
> as a total ignoramus unless you're also willing to severely
> constrain what the user can do (so that you can then assume that
> everything the user is doing requires, say, mutual authentication
> with peer servers).

There are decisions, and there are decisions.

If, for example (and this is really just an example, not a worked
design), your browser authenticates the bank website using a USB
attached hardware token containing both parties credentials, which
also refuses to work for any other web site, it is very difficult for
the user to do anything to give away the store, and the user has very
little scope for decision making (beyond, say, deciding whether to
make a transfer once they're authenticated).

This is a big contrast to the current situation, where the user needs
to figure out whether they're typing their password in to the correct
web site etc., and can be phished into giving up their credentials.

You can still be attacked even in an ideal situation, of course. For
example, you could still follow instructions from con men telling you
to wire money to them. However, the trick is to change the system from
one where the user must be constantly on the alert lest they do
something wrong, like typing in a password to the wrong web site, to
one in which the user has to go out of their way to do something
wrong, like actively making the decision to send a bad guy all their
money.

Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-03 Thread Jakob Schlyter
On 2 aug 2010, at 08.30, Peter Gutmann wrote:

> For the case of DNSSEC, what would happen if the key was lost?  There'd be a 
> bit of turmoil as a new key appeared and maybe some egg-on-face at ICANN, but 
> it's not like commercial PKI with certs with 40-year lifetimes hardcoded into 
> every browser on the planet is it?  Presumably there's some mechanism for 
> getting the root (public) key distributed to existing implementations, could 
> this be used to roll over the root or is it still a manual config process for 
> each server/resolver?  How *is* the bootstrap actually done, presumably you 
> need to go from "no certs in resolvers" to "certs in resolvers" through some 
> mechanism.

Initial bootstrap is done by

- distribution of the key by ICANN (via http://data.iana.org/root-anchors/)
- distribution of the key by the vendors themselves

Authentication of the root key can be achieved as part of the distribution 
mechanisms above, or by transitive trust through people who attended the key 
generation ceremony. We've already seen public attestations from participants 
(e.g., [1], [2] and [3]).

Key rollovers are performed as specified in RFC 5011, i.e. a new key is 
authenticated by the current key. This does of course not work when the 
existing private key material is inaccessible (one form of "lost"). It could 
work if the key is "lost" by compromise, but one has to take into consideration 
how the key was compromised in such cases (key misuse, cryptanalysis, etc).

For the generic end user, I would expect vendors to ship the root key as part 
of their software and keep the key up to date using their normal software 
update scheme.


jakob


[1] http://www.kirei.se/en/2010/06/20/root-ksk/
[2] http://www.trend-watcher.org/archives/dnssec-root-key-declaration/
[3] http://www.ask-mrdns.com/2010/07/root-dnssec-key-attestation/



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-03 Thread Jakob Schlyter
On 2 aug 2010, at 16.51, Jeffrey Schiller wrote:

> Does the root KSK exist in a form that doesn't require the HSM to
> re-join, or more to the point if the manufacturer of the HSM fails, is
> it possible to re-join the key and load it into a different vendor's
> HSM?

With the assistance of the vendor (or their employees), it would be possible to 
reassemble the storage master key (SMK) by combining 5 of 7 key shares, then 
decrypting the key backup. There is nothing in the HSM units themselves that is 
needed for a key restore.
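For readers unfamiliar with the mechanics, a minimal sketch of a 5-of-7 split over a prime field (the math only; real HSM backup formats, share encodings and randomness sources differ, and a demo must not use Python's non-cryptographic `random` module in production):

```python
# Minimal Shamir secret sharing, illustrating a 5-of-7 split like the SMK
# backup described above. Demo only.
import random

P = 2**127 - 1  # prime modulus; the secret must be smaller than P

def split(secret: int, k: int, n: int):
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc
```

Any 5 of the 7 shares reconstruct the secret; 4 or fewer reveal nothing about it.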

> In other words, is the value that is split the "raw" key, or is it in
> some proprietary format or encrypted in some vendor internal key?

The value that is split is the SMK, used to encrypt the actual key. The actual 
key is not split and, once in production, is never to be transported outside 
the ICANN Key Management Facility.

> Back in the day we used an RSA SafeKeyper to store the IPRA key (there
> is a bit of history, we even had a key ceremony with Vint Cerf in
> attendance). This was the early to mid '90s.

Aha, that's why Vint was so on top of things during the East Coast key ceremony 
:-)

> The SafeKeyper had an internal tamper key that was used to encrypt all
> exported backups (in addition to the threshold secrets required). If
> the box failed, you could order one with the same internal tamper
> key. However you could not obtain the tamper key and you therefore
> could not choose to switch HSM vendors.

In this case, the SMK == the tamper key.


jakob



Re: Five Theses on Security Protocols

2010-08-03 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 11:29:32AM -0400, Adam Fields wrote:
> On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
> [...]
> > 3 Any security system that demands that users be "educated",
> >   i.e. which requires that users make complicated security decisions
> >   during the course of routine work, is doomed to fail.
> [...]
> 
> I would amend this to say "which requires that users make _any_
> security decisions".
> 
> It's useful to have users confirm their intentions, or notify the user
> that a potentially dangerous action is being taken. It is not useful
> to ask them to know (or more likely guess, or even more likely ignore)
> whether any particular action will be harmful or not.

But users have to help you establish the context.  Have you ever been
prompted about invalid certs when navigating to pages where you couldn't
have cared less about the server's ID?  On the web, when does security
matter?  When you fill in a field on a form?  Maybe you're just
submitting an anonymous comment somewhere.  But certainly not just when
making payments.

I believe the user has to be involved somewhat.  The decisions the user
has to make need to be simple, real simple (e.g., never about whether to
accept a defective cert).  But you can't treat the user as a total
ignoramus unless you're also willing to severely constrain what the user
can do (so that you can then assume that everything the user is doing
requires, say, mutual authentication with peer servers).

Nico
-- 



Re: GSM eavesdropping

2010-08-03 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 04:19:38PM -0400, Paul Wouters wrote:
> On Mon, 2 Aug 2010, Nicolas Williams wrote:
> >How should we measure success?
> 
> "The default mode for any internet communication is encrypted"

That's... extreme.  There are many things that will not be encrypted,
starting with the DNS itself, and also most public contents (because
their purveyors won't want to pay for the crypto; sad but true).

> >By that measure TLS has been so much more successful than IPsec as to
> >prove the point.
> 
> I never claimed IPsec was more successful. It was not.

No, but you claimed that APIs weren't a major issue.  I believe they are.

> >But note that the one bit you're talking about is necessarily a part of
> >a resolver API, thus proving my point :)
> 
> Yes, but in some cases the API is pretty much done. If you trust your
> (local) resolver, the one bit is the only thing you need to check. You let the
> resolver do most of the bootstrap crypto. Once you have that, your app
> can rip out most of the X.509 nonsense and use the public key obtained
> from DNS for its further crypto needs.

You missed the point.  The point was: do not design security solutions
without designing their interfaces.

IPsec has no user-/sysadmin-/developer-friendly interfaces -> IPsec is
not used.  DNS has interfaces -> when DNSSEC comes along we can extend
those interfaces.
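For concreteness, the "one bit" in this exchange is the AD (Authenticated Data) flag in the DNS header, which a validating resolver sets when DNSSEC validation succeeded. A stub that trusts its local resolver only has to test that flag — an illustrative sketch, not a complete validator:

```python
# Extract the AD ("authenticated data") flag from a raw DNS response.
import struct

def ad_bit(dns_message: bytes) -> bool:
    """Return True iff the AD flag is set in a DNS message header."""
    (flags,) = struct.unpack_from("!H", dns_message, 2)  # bytes 2-3: flags word
    return bool(flags & 0x0020)  # AD is bit 5 of the flags word (RFC 4035)
```

Note this is only safe when the path to the validating resolver is trusted (e.g., a validator on localhost); otherwise the bit can be forged in transit.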

Note that IPsec could have had trivial APIs -- trivial by comparison to
the IPsec configuration interfaces that operating systems typically
have.  For example, there's a proposal in the IETF apps area for an API
that creates connections to named servers, hiding all the details of
name resolution, IPv4/v6/v4-mapped-v6 addressing.  Such an API could
trivially have a bit by which the app can request cryptographic
protection (via IPsec, TLS, whatever can be negotiated).  Optional
complexity could be added to deal with subtleties of the secure
transport (e.g., what cipher suites do you want, if not the default).
But back in the day APIs were seen as not really in scope, so IPsec
never got them, so IPsec has been underused (and rightly so).
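Such a connect-by-name API might look like the sketch below: one call hides resolution and address-family details, and a single flag requests cryptographic protection. The negotiation shown (TLS via the platform stack when the flag is set) is an assumption for illustration, not the actual IETF proposal:

```python
# Sketch of a connect-by-name API with a one-bit request for crypto.
import socket
import ssl

def connect_by_name(host: str, port: int, want_crypto: bool = False):
    """Connect to `host`:`port`, hiding v4/v6 details; optionally secure it."""
    sock = socket.create_connection((host, port))  # resolves name, tries v4/v6
    if want_crypto:
        ctx = ssl.create_default_context()         # default trust anchors, ciphers
        sock = ctx.wrap_socket(sock, server_hostname=host)
    return sock
```

The point is that the app states *intent* ("I want protection") in one bit, while cipher-suite and transport subtleties stay behind optional knobs — the kind of trivial API IPsec never got.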

> >...but we grow technologies organically, therefore we'll never have a
> >situation where the necessary infrastructure gets deployed in a secure
> >mode from the get-go.  This necessarily means that applications need
> >APIs by which to cause and/or determine whether secure modes are in
> >effect.
> 
> But by now, upgrades happen more automatically and more quickly. Adding
> something new to DNS won't take 10 years to get deployed. We've come a long
> way. It's time to reap the benefits from our new infrastructure.

No objection there.

Nico
-- 



Re: GSM eavesdropping

2010-08-03 Thread Paul Wouters

On Mon, 2 Aug 2010, Nicolas Williams wrote:

>> If that was a major issue, then SSL would have been much more successful
>> than it has been.
>
> How should we measure success?

"The default mode for any internet communication is encrypted"

> By that measure TLS has been so much more successful than IPsec as to
> prove the point.

I never claimed IPsec was more successful. It was not.

> Of course, TLS hasn't been successful in the sense that we care about
> most.  TLS has had no impact on how users authenticate (we still send
> usernames and passwords) to servers, and the way TLS authenticates
> servers to users turns out to be very weak (because of the plethora of
> CAs, and because transitive trust isn't all that strong).

Let's first focus on foiling the grand scale of things by protecting
against passive, large-scale monitoring. Then let's worry about
protecting against active targeted attacks.

> But note that the one bit you're talking about is necessarily a part of
> a resolver API, thus proving my point :)

Yes, but in some cases the API is pretty much done. If you trust your
(local) resolver, the one bit is the only thing you need to check. You
let the resolver do most of the bootstrap crypto. Once you have that,
your app can rip out most of the X.509 nonsense and use the public key
obtained from DNS for its further crypto needs.

> ...but we grow technologies organically, therefore we'll never have a
> situation where the necessary infrastructure gets deployed in a secure
> mode from the get-go.  This necessarily means that applications need
> APIs by which to cause and/or determine whether secure modes are in
> effect.

But by now, upgrades happen more automatically and more quickly. Adding
something new to DNS won't take 10 years to get deployed. We've come a
long way. It's time to reap the benefits from our new infrastructure.

Paul
