Re: Photos of an FBI tracking device found by a suspect

2010-10-08 Thread Nicolas Williams
On Fri, Oct 08, 2010 at 11:21:16AM -0400, Perry E. Metzger wrote:
 My question: if someone plants something in your car, isn't it your
 property afterwards?

If you left a wallet in someone's car, isn't it still yours?  And isn't
that so even if you left it there on purpose (e.g., to test a person's
character)?  But this is not the same situation, of course, since the
item left behind is an active device.

If your planting of the device violates the target's rights you might
(or might not) lose ownership of the device, along with other penalties.
The FBI is a state actor though, so the rules that apply in this case
are different than in the case of a tracking device planted by a private
investigator, and those might be different than the rules that would
apply if the device's owner is a private actor not even licensed as a
PI.

IOW: ask a lawyer.  But I strongly suspect that the answer in this case
is that the FBI still owns the device, and the question is not moot (as
it might be if the device had stopped working and then fallen off the
car, e.g., after hitting a number of nasty potholes).  I mean, I
seriously doubt that the relevant laws would be written so as to grant
the subject ownership of devices planted as part of a legal surveillance
of them, and though it's possible that judge-made law would conclude
differently, I doubt that judges would make such law.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Photos of an FBI tracking device found by a suspect

2010-10-08 Thread Nicolas Williams
On Fri, Oct 08, 2010 at 05:45:16PM -0400, Perry E. Metzger wrote:
 On Fri, 8 Oct 2010 16:13:13 -0500 Nicolas Williams
 nicolas.willi...@oracle.com wrote:
  On Fri, Oct 08, 2010 at 11:21:16AM -0400, Perry E. Metzger wrote:
   My question: if someone plants something in your car, isn't it
   your property afterwards?
  
  If you left a wallet in someone's car, isn't it still yours?
 
 Yes. However, that's an accident. If you deliberately leave a package
 on someone's doorstep, they then own the contents. (In fact, if
 someone mails you something, US law is very clear that it is yours.)

I covered that, didn't I?

 I'd be interested in hearing what a lawyer thinks.

Indeed, but I'm pretty sure the FBI wouldn't lose that question.  If the
surveillance subject said "it's mine now" they could probably arrest
him, and the legal question could get settled later, possibly in a
protracted appeals battle that would likely ultimately favor the FBI
anyway.

Nico
-- 



Re: English 19-year-old jailed for refusal to disclose decryption key

2010-10-07 Thread Nicolas Williams
On Thu, Oct 07, 2010 at 01:10:12PM -0400, Bernie Cosell wrote:
 I think you're not getting the trick here: with truecrypt's plausible 
 deniability hack you *CAN* give them the password and they *CAN* decrypt 
 the file [or filesystem].  BUT: it is a double encryption setup.  If you 
 use one password only some of it gets decrypted, if you use the other 
 password all of it is decrypted.  There's no way to tell if you used the 
 first password that you didn't decrypt everything.  So in theory you 
 could hide the nasty stuff behind the second password, a ton of innocent 
 stuff behind the first password and just give them the first password 
 when asked.  In practice, I dunno if it really works or will really let 
 you slide by.

There is no trick, not really.  If decryption results in plaintext much
shorter than the ciphertext (much shorter than can be explained by the
presence of a MAC), then it'd be fair to assume that you're pulling this
trick.  The law could easily deal with this.
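A minimal sketch of the length heuristic described above; the `MAC_LEN` and `BLOCK_LEN` values are illustrative assumptions, not TrueCrypt's actual on-disk layout:

```python
MAC_LEN = 32      # e.g., an HMAC-SHA256 tag (assumed)
BLOCK_LEN = 16    # e.g., AES block size, worst-case padding (assumed)

def looks_like_decoy(ciphertext_len: int, plaintext_len: int) -> bool:
    """True if the plaintext is too short to account for the ciphertext,
    beyond what a MAC and padding could explain."""
    overhead = MAC_LEN + BLOCK_LEN
    return plaintext_len < ciphertext_len - overhead

# A 1 MiB container that decrypts to only 100 KiB is suspicious:
print(looks_like_decoy(1 << 20, 100 * 1024))  # → True
```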

Plausible deniability with respect to crypto technology used is not
really any different than plausible deniability with respect to
knowledge of actual keys.  Moreover, possession of software that can do
double encryption could be considered probable cause that your files
are likely to be encrypted with it.

Repeat after me: cryptography cannot protect citizens from their states.

Nico
-- 



Re: Hashing algorithm needed

2010-09-14 Thread Nicolas Williams
On Tue, Sep 14, 2010 at 03:16:18PM -0500, Marsh Ray wrote:
 On 09/14/2010 09:13 AM, Ben Laurie wrote:
 Of some interest to me is the approach I saw recently (confusingly named
 WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
 allowing UI to be completely controlled by the issuer.
 
 First, let's hear it for out of the box thinking. *yay*
 
 Now, a few questions about this approach:
 
 How do you deliver Javascript to the browser securely in the first
 place? HTTP?

I'll note that Ben's proposal is in the same category as mine (which
was, to remind you, implement SCRAM in JavaScript and use that, with
channel binding using tls-server-end-point CB type).

It's in the same category because it has the same flaw, which I'd
pointed out earlier: if the JS is delivered by normal means (i.e., by
the server), then the script can't be used to authenticate the server.

And if you've authenticated the server via HTTPS (TLS) then you might as
well just POST the username and password to the server, since the server
could just as well send you a script that does just that.

This approach works only if you deliver the script in some out-of-band
manner, such as via a browser plug-in/add-on (hopefully signed [by a
trustworthy trusted third party]).
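For concreteness, the tls-server-end-point channel binding mentioned above is, per RFC 5929, a hash of the server's DER-encoded certificate (with MD5/SHA-1 signature hashes upgraded to SHA-256).  A sketch, with the function name my own:

```python
import hashlib

def tls_server_end_point_cb(server_cert_der: bytes,
                            sig_hash: str = "sha256") -> bytes:
    """Compute tls-server-end-point channel-binding data (RFC 5929):
    a hash of the server's DER-encoded end-entity certificate.  The hash
    comes from the cert's signatureAlgorithm, except MD5 and SHA-1 are
    replaced by SHA-256."""
    if sig_hash in ("md5", "sha1"):
        sig_hash = "sha256"
    return hashlib.new(sig_hash, server_cert_der).digest()

# A SCRAM client would mix this value into the authentication exchange,
# so the server can detect a MITM terminating TLS with a different cert.
cb = tls_server_end_point_cb(b"\x30\x82...placeholder DER...")
print(len(cb))  # 32 bytes for SHA-256
```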

Nico
-- 



Re: towards https everywhere and strict transport security

2010-08-26 Thread Nicolas Williams
On Thu, Aug 26, 2010 at 12:40:04PM +1000, James A. Donald wrote:
 On 2010-08-25 11:04 PM, Richard Salz wrote:
 Also, note that HSTS is presently specific to HTTP. One could imagine
 expressing a more generic STS policy for an entire site
 
 A really knowledgeable net-head told me the other day that the problem
 with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
 are now more prohibitive than the crypto costs.  I was quite surprised to
 hear this; he was stunned to find it out.

It'd help amortize the cost of round-trips if we used HTTP/1.1
pipelining more.  Just as we could amortize the cost of public key
crypto by making more use of TLS session resumption, including session
resumption without server-side state [RFC4507].

And if only end-to-end IPsec with connection latching [RFC5660] had been
deployed years ago we could further amortize crypto context setup.

We need solutions, but abandoning security isn't really a good solution.

 This is inherent in the layering approach - inherent in our current
 crypto architecture.

The second part is a correct description of the current state of
affairs.  I don't buy the first part (see below).

 To avoid inordinate round trips, crypto has to be compiled into the
 application, has to be a source code library and application level
 protocol, rather than layers.

Authentication and key exchange are generally going to require 1.5 round
trips at least, which is to say, really, 2.

Yes, Kerberos AP exchanges happen in 1 round trip, but at the cost of
requiring a persistent replay cache (and there are also the non-trivial
TGS exchanges).  Replay caches historically have killed performance,
though they don't have to[0]; still, there's the need for either a
persistent replay-cache backing store or a trade-off w.r.t. startup time
and clients with slow clocks[0], and even then you need to worry about
large (>1s) clock adjustments.

So, really, as a rule of thumb, budget 2 round trips for all crypto
setup.  That leaves us with amortization and piggy-backing as ways to
make up for that hefty up-front cost.
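Back-of-the-envelope arithmetic for that rule of thumb; the 100 ms RTT is an illustrative WAN figure, not from the original post:

```python
RTT_MS = 100  # assumed WAN round-trip time

def setup_latency(tcp_rtts=1, tls_rtts=2, app_rtts=1, rtt_ms=RTT_MS):
    """Total connection-setup latency when each layer pays its own
    round trips before any application data flows."""
    return (tcp_rtts + tls_rtts + app_rtts) * rtt_ms

print(setup_latency())            # fully layered: 400 ms before any data
print(setup_latency(tls_rtts=1))  # with TLS session resumption: 300 ms
print(setup_latency(app_rtts=0))  # app auth piggy-backed (false start): 300 ms
```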

 Every time you layer one communication protocol on top of another,
 you get another round trip.
 
 When you layer application protocol on ssl on tcp on ip, you get
 round trips to set up tcp, and *then* round trips to set up ssl,
 *then* round trips to set up the application protocol.

See draft-williams-tls-app-sasl-opt-04.txt [1], a variant of false
start, which alleviates the latter.  See also draft-bmoeller-tls-
falsestart-00.txt [2].

Back to layering...

If abstractions are leaky, maybe we should consider purposeful
abstraction leaking/piercing.

There's no reason that we couldn't piggy-back one layer's initial message
(and in some cases more) on a lower layer's connection setup message
exchange -- provided much care is taken in doing so.

That's what PROT_READY in the GSS-API is for, and it's one use for GSS-API
channel binding (see SASL/GS2 [RFC5801] for one example).  It's what the
TLS false start proposals are about...  draft-williams-tls-app-sasl-opt-04
gets up to a 1.5-round-trip optimization for applications over TLS.

We could apply the same principle to TCP... (Shades of the old, failed?
transaction TCP [RFC1644] proposal from the mid-'90s, I know.  Shades
also of TCP-AO and other more recent proposals perhaps as well.)

But there is a gotcha: the upper layer must be aware of the early
message send/delivery semantics.  For example, early messages may not
have been protected by the lower layer, with protection not confirmed
till the lower layer succeeds, which means... for example, that the
upper layer must not commit much in the way of resources until the lower
layer completes (e.g., so as to avoid DoS attacks).

I'm not saying that piercing layers is to be done cavalierly.  Rather,
that we should consider this approach, carefully.  I don't really see
better solutions (amortization won't always help).

Nico

[0] Turns out that there is a way to optimize replay caches greatly, so
that an fsync(2) is not needed on every transaction, or even most.

This is an optimization that turned out to be quite simple to
implement (with much commentary), but took a long time to think
through.  Writing a test program and then using it to test the
implementation's correctness was the lion's share of the
implementation work.

You can see it here:


http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/gss_mechs/mech_krb5/krb5/rcache/rc_file.c

Diffs:


http://src.opensolaris.org/source/diff/onnv/onnv-gate/usr/src/lib/gss_mechs/mech_krb5/krb5/rcache/rc_file.c?r2=%252Fonnv%252Fonnv-gate%252Fusr%252Fsrc%252Flib%252Fgss_mechs%252Fmech_krb5%252Fkrb5%252Frcache%252Frc_file.c%4012192%3Ab9153e7686cf&r1=%252Fonnv%252Fonnv-gate%252Fusr%252Fsrc%252Flib%252Fgss_mechs%252Fmech_krb5%252Fkrb5%252Frcache%252Frc_file.c%407934%3A6aeeafc994de

RFE (though IIRC the description is wrong/out of date):


Re: Has there been a change in US banking regulations recently?

2010-08-16 Thread Nicolas Williams
On Fri, Aug 13, 2010 at 02:55:32PM -0500, eric.lengve...@wellsfargo.com wrote:
 There are some possibilities, my co-workers and I have discussed. For
 purely internal systems TLS-PSK (RFC 4279) provides symmetric
 encryption through pre-shared keys which provides us with whitelisting
 as well as removing asymmetric crypto.  [...]

For purely internal systems Kerberos is really the way to go, mostly
because it's so easy to deploy nowadays.

TLS-PSK is not a useful way of building any but the smallest networks,
for two reasons: a) there's no agreed PBKDF and password-salting
mechanism, so passwords are out; b) there's no enrolment mechanism, so
PSK setup is completely ad hoc.
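To illustrate point (a): deriving a PSK from a password is easy enough in isolation, but nothing in TLS-PSK standardizes the KDF, salt, or iteration count, so both peers must agree on all of them out of band.  A hedged sketch using PBKDF2 (my choice, not anything TLS-PSK specifies):

```python
import hashlib, os

def derive_psk(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Ad-hoc password-to-PSK derivation via PBKDF2-HMAC-SHA256.
    The interop problem is exactly that this is ad hoc: if the peers
    disagree on KDF, salt, or iterations, the derived keys won't match."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)   # must itself be shared/stored out of band
psk = derive_psk("correct horse battery staple", salt)
print(len(psk))  # 32
```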

Nico
-- 



Re: GSM eavesdropping

2010-08-02 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 12:32:23PM -0400, Perry E. Metzger wrote:
 Looking forward, the "there should be one mode, and it should be
 secure" philosophy would claim that there should be no insecure
 mode for a protocol. Of course, virtually all protocols we use right
 now had their origins in the days of the Crypto Wars (in which case,
 we often added too many knobs) or before (in the days when people
 assumed no crypto at all) and thus come in encrypted and unencrypted
 varieties of all sorts.
 
 For example, in the internet space, we have http, smtp, imap and other
 protocols in both plain and ssl flavors. [...]

Well, to be fair, there is much content to be accessed insecurely for
the simple reason that there may be no way to authenticate a peer.  For
much of the web this is the case.

For example, if I'm listening to music on an Internet radio station, I
couldn't care less about authenticating the server (unless it needs to
authenticate me, in which case I'll want mutual authentication).  Same
thing if I'm reading a random blog entry or a random news story.

By analogy to the off-line world, we authenticate business partners, but
in asymmetric broadcast-type media, authentication is very weak and only
of the broadcaster to the receiver.  If we authenticate broadcasters at
all, we do it by such weak methods as recognizing logos, broadcast
frequencies, etcetera.

In other words, context matters.  And the user has to understand the
context.  This also means that the UI matters.  I hate to demand any
expertise of the user, but it seems unavoidable.  By analogy to the
off-line world, con-jobs happen, and they happen because victims are
naive, inexperienced, ill, senile, etcetera.  We can no more protect the
innocent at all times online than off, not without their help.

"There should be one mode, and it should be secure" is a good idea, but
it's not as universally applicable as one might like.  *sadness*

SMTP and IMAP, then, definitely require secure modes.  So does LDAP,
even though it's used to access (mostly) public data, and so is more
like broadcast media.  NNTP must not even bother with a secure mode ;)

Another problem you might add to the list is tunneling.  Firewalls have
led us to build every app as a web or HTTP application, and to tunnel
all the others over port 80.  This makes the relevant context harder, if
not impossible, to resolve without the user's help.

HTTP, sadly, needs an insecure mode.

Nico
-- 



Re: GSM eavesdropping

2010-08-02 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 01:05:53PM -0400, Paul Wouters wrote:
 On Mon, 2 Aug 2010, Perry E. Metzger wrote:
 
 For example, in the internet space, we have http, smtp, imap and other
 protocols in both plain and ssl flavors. (IPSec was originally
 intended to mitigate this by providing a common security layer for
 everything, but it failed, for many reasons. Nico mentioned one that
 isn't sufficiently appreciated, which was the lack of APIs to permit
 binding of IPSec connections to users.)
 
 If that was a major issue, then SSL would have been much more successful
 than it has been.

How should we measure success?  Every user on the Internet uses TLS
(SSL) on a daily basis.  None uses IPsec for anything other than VPN
(the three people who use IPsec for end-to-end protection on the
Internet are too few to count).

By that measure TLS has been so much more successful than IPsec as to
prove the point.

Of course, TLS hasn't been successful in the sense that we care about
most.  TLS has had no impact on how users authenticate (we still send
usernames and passwords) to servers, and the way TLS authenticates
servers to users turns out to be very weak (because of the plethora of
CAs, and because transitive trust isn't all that strong).

 I have good hopes that soon we'll see use of our new biggest
 cryptographically signed distributed database. And part of the
 signalling can come in via the AD bit in DNSSEC (eg by adding an EDNS
 option to ask for special additional records signifying "SHOULD do
 crypto with this pubkey")
 
 The AD bit might be a crude signal, but it's fairly easy to implement
 at the application level. Requesting specific additional records will
 remove the need for another latency driven DNS lookup to get more
 crypto information.
 
 And obsolete the broken CA model while gaining improved support for
 SSL certs by removing all those enduser warnings.

DNSSEC will help immensely, no doubt, and mostly by giving us a single
root CA.

But note that the one bit you're talking about is necessarily a part of
a resolver API, thus proving my point :)

The only way we can avoid having such an API requirement is by ensuring
that all zones are signed and all resolvers always validate RRs.  An API
is required in part because we won't get there from day one (that day
was decades ago).

The same logic applies to IPsec.  Suppose we'd deployed IPsec and DNSSEC
back in 1983... then we might have many, many apps that rely on those
protocols unknowingly, and that might be just fine...

...but we grow technologies organically, therefore we'll never have a
situation where the necessary infrastructure gets deployed in a secure
mode from the get-go.  This necessarily means that applications need
APIs by which to cause and/or determine whether secure modes are in
effect.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 10:50:10AM +0200, Alexandre Dulaunoy wrote:
 On Thu, Jul 29, 2010 at 3:09 AM, Nicolas Williams
 nicolas.willi...@oracle.com wrote:
  This is a rather astounding misunderstanding of the protocol.  [...]
 
 I agree on this, but the implementation of OCSP has to deal with
 all non-definitive (to take the wording of the RFC) answers. That's
 where the issue is. All the exception cases, mentioned in 2.3, are
 unauthenticated, and it seems rather difficult to provide an authenticated
 scheme for that part, as you already mentioned in [*].
 
 That's why malware authors are already adding fake OCSP-server
 entries in the hosts file... simple and efficient.

A DoS attack on OCSP clients (which is all this really is) should either
cause the clients to fall back on CRLs or to fail the larger operation
(TLS handshake, whatever) altogether.  The latter makes this just a DoS.
The former makes this less than a DoS.

The real risk would be OCSP clients that don't bother with CRLs if the
OCSP Responder can't respond successfully, but which proceed anyway as
if peers' certs are valid.  If there exist such clients, don't blame OCSP.

Nico
-- 



Re: Persisting /dev/random state across reboots

2010-07-29 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 03:47:01PM -0400, Richard Salz wrote:
 At shutdown, a process copies /dev/random to /var/random-seed which is
 used on reboots.
 Is this a good, bad, or "shrug, whatever" idea?

If the entropy pool has other, reasonable/fast sources of entropy at
boot time, then seeding the entropy pool at boot time with a seed
generated at shutdown time is harmless (assuming a good enough entropy
pool design).  Otherwise, this approach can be a good idea (see below).

 I suppose the idea is that all startup procs look the same?

The idea is to get enough entropy into the entropy pool as fast as
possible at boot time, faster than the system's entropy sources might
otherwise allow.

The security of a system that works this way depends critically on
several things: a) no one reads the seed between the time it's generated
and the time it's used to seed the entropy pool, b) the seed cannot be
used twice accidentally, c) the system can cope with crashes (i.e., no
seed at boot) such as by blocking reads of /dev/random and even
/dev/urandom until enough entropy is acquired, d) the entropy pool
treats the seed as entropy from any other source and applies the normal
mixing procedure to it, e) there is a way to turn off this chaining of
entropy across boots.  (Have I missed anything?)

(a) can't really be ensured.  But one could be sufficiently confident
that (a) is true that one would want to enable this.  (d) means that
every additional bit of entropy obtained from other sources at boot time
will make it harder for an attacker that managed to read this seed to
successfully mount any attacks on you.  (e) would be for the paranoid;
for most users, most of the time, chaining entropy across reboots is
probably a very good idea.  But most importantly, on-CPU RNGs should
make this totally pointless (see previous RNG-on-CPU threads).
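A minimal sketch of the shutdown/boot flow that satisfies points (a), (b), and (d) above; the function names and path are mine, and the SHA-256 mixing stands in for whatever the kernel pool actually does:

```python
import hashlib, os

SEED_LEN = 64  # bytes of seed saved at shutdown; illustrative size

def save_seed(path: str) -> None:
    """Shutdown-time: write a fresh seed, unreadable by others (point (a))."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(SEED_LEN))

def load_and_mix_seed(path: str) -> bytes:
    """Boot-time: read the seed, immediately overwrite it so it can't be
    used twice even across a crash (point (b)), and mix it with other
    entropy rather than trusting it raw (point (d))."""
    with open(path, "rb") as f:
        seed = f.read()
    save_seed(path)                       # burn the old seed right away
    return hashlib.sha256(seed + os.urandom(32)).digest()

# E.g., an init script might do:
#   pool_input = load_and_mix_seed("/var/random-seed")
```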

Nico
-- 



Re: A mighty fortress is our PKI

2010-07-28 Thread Nicolas Williams
On Tue, Jul 27, 2010 at 10:10:54PM -0600, Paul Tiemann wrote:
 I like the idea of SSL pinning, but could it be improved if statistics
 were kept long-term (how many times I've visited this site and how
 many times it's had certificate X, but today it has certificate Y from
 a different issuer and certificate X wasn't even near its expiration
 date...)

My preference would be for doing something like SCRAM (and other
SASL/GSS mechanisms) with channel binding (using tls-server-end-point CB
type).  It has the effect that the server can confirm that the
certificate seen by the client is the correct one -- whereas the server
cannot do that in the SSL pinning approach.  It'd have other major
benefits as well.

The problem is: there's no standard way to do this in web browser
applications.  Worse, there aren't even any prototypes.

I also like the Moonshot approach.

 Another thought: Maybe this has been thought of before, but what about
 emulating the Sender Policy Framework (SPF) for domains and PKI?
 Allow each domain to set a DNS TXT record that lists the allowed CA
 issuers for SSL certificates used on that domain.  (Crypto Policy
 Framework=CPF?)

Better yet: use DNSSEC and publish TLS EE certs in the DNS.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 01:21:33PM +0100, Ben Laurie wrote:
 On 28/07/2010 13:18, Peter Gutmann wrote:
  Ben Laurie b...@links.org writes:
  
  I find your response strange. You ask how we might fix the problems,
  then you respond that since the world doesn't work that way right now,
  the fixes won't work. Is this just an exercise in one-upmanship? You
  know more ways the world is broken than I do?
  
 [...].  I'm after effective practical solutions, not just "a solution
 exists, QED" solutions.
 
 The core problem appears to be a lack of will to fix the problems, not a
 lack of feasible technical solutions.
 
 I don't know why it should help that we find different solutions for the
 world to ignore?

Solutions at higher layers might have a better chance of getting
deployed.  No, I'm not suggesting that we replace TLS and HTTPS with
application-layer crypto over HTTP, not entirely anyways.  I am
suggesting that we use what little TLS does give us in ways that don't
require changing TLS much or at all.

Application-layer authentication with tls-server-end-point channel
bindings seems like a feasible candidate.  This too would require
changes on clients and servers, which makes it not-that-likely to get
implemented and deployed, but not changes at the TLS layer (other than
an API by which to extract a TLS connection's server cert).  It could be
deployed incrementally such that users who can use it get better
security.  Then if the market gives a damn about security, it might get
closer to fully deployed in our lifetimes.

The assumption here is that improvements at the TLS and PKI layers occur
with enormous latency.  If this were true at all layers then we could
just give up, or aim to fix not just today's problems, but tomorrow's, a
decade or three from now (ha).  It'd be nice if that assumption were not
true at all.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 10:05:22AM -0400, Perry E. Metzger wrote:
 PKI was invented by Loren Kohnfelder for his bachelor's degree thesis
 at MIT. It was certainly a fine undergraduate paper, but I think we
 should forget about it, the way we forget about most undergraduate
 papers.

PKI alone is certainly not the answer to all our problems.

Infrastructure (whether of a pk variety or otherwise) and transitive
trust probably have to be part of the answer for scalability reasons,
even if transitive trust is a distasteful concept.  However, we need to
be able to build direct trust relationships, otherwise we'll just have a
house of transitive trust cards.  Again, think of the SSH leap-of-faith
and SSL pinning concepts, but don't constrain yourselves purely to pk
technology.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 03:16:32PM +0100, Ben Laurie wrote:
 Maybe it doesn't, but no revocation mechanism at all makes me nervous.
 
 I don't know Kerberos well enough to comment.
 
 DNSSEC doesn't have revocation but replaces it with very short
 signature lifetimes (i.e. you don't revoke, you time out).

Kerberos too lacks revocation, and it also makes up for it with short
ticket lifetimes.

OCSP Responses are much like a PKI equivalent of Kerberos tickets.  All
you need to do to revoke a principal with OCSP is to remove it from the
Responder's database or mark it revoked.  To revoke an individual
certificate you need only mark a date for the given subject such that no
cert issued prior to it will be considered valid.

An OCSP Responder implementation could be based on checking a real CRL
or on checking a database of known subjects (principals).  Whichever is
likely to be smaller over time is best, though the latter is just
simpler to administer (since you don't need to know the subject public
key, nor the issuer & serial, nor the actual TBSCertificate in order to
revoke, just the subject name and current date and time).
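A sketch of that subject-database scheme: to revoke everything issued to a subject before a given date, the Responder records just a cutoff per subject, with no per-certificate CRL entry.  Names and dates here are hypothetical:

```python
from datetime import datetime

# Hypothetical responder backend: {subject: cutoff}; any cert issued to
# that subject before the cutoff is treated as revoked.
revoked_before = {
    "CN=alice,O=Example": datetime(2010, 7, 1),
}

def cert_status(subject: str, not_before: datetime) -> str:
    """OCSP-style status for a cert issued to `subject` at `not_before`."""
    cutoff = revoked_before.get(subject)
    if cutoff is not None and not_before < cutoff:
        return "revoked"
    return "good"

print(cert_status("CN=alice,O=Example", datetime(2010, 6, 1)))  # revoked
print(cert_status("CN=alice,O=Example", datetime(2010, 8, 1)))  # good
```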

 SSH does appear to have got away without revocation, though the nature
 of the system is s.t. if I really wanted to revoke I could almost
 always contact the users and tell them in person. This doesn't scale
 very well to SSL-style systems.

The SSH ad-hoc pubkey model is a public key pre-sharing (for user keys)
and pre-sharing and/or leap-of-faith (for host keys) model.  It doesn't
scale without infrastructure.  Add infrastructure and you're back to a
PKI-like model (maybe with no hierarchy, but still).

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 10:42:43AM -0400, Anne & Lynn Wheeler wrote:
 On 07/28/2010 10:05 AM, Perry E. Metzger wrote:
 I will point out that many security systems, like Kerberos, DNSSEC and
 SSH, appear to get along with no conventional notion of revocation at all.
 
 long ago and far away ... one of the tasks we had was to periodically
 go by project athena to audit various activities ... including
 Kerberos. The original PK-INIT for kerberos was effectively
 certificateless public key ... 

And PKINIT today also allows for rp-only user certs if you want them.
They must be certificates, but they needn't carry any useful data beyond
the subject public key, and the KDC must know the {principal,
cert|pubkey} associations.

 An issue with Kerberos (as well as RADIUS ... another major
 authentication mechanism) ... is that account-based operation is
 integral to its operation ... unless one is willing to go to a
 strictly certificate-only mode ... where all information about an
 individuals authority and access privileges are also carried in the
 certificate (and eliminate the account records totally).

This is true any time you have rp-only certs or certs that carry less
information than the rp will require.  The latter is almost always true.
The account can be local to each rp, however, or centralized -- that's
up to the relying parties.

 As long as the account record has to be accessed as part of the
 process ... the certificate remains purely redundant and superfluous
 (in fact, some number of operations running large Kerberos based
 infrastructure have come to realize that they have large redundant
 administrative activity maintaining both the account-based information
 as well as the duplicate PKI certificate-based information).

Agreed.  Certificates should, as much as possible, be rp-only.

 The account-based operations have sense of revocation by updating the
 account-based records. [...]

Exactly.  OCSP can work in that manner.  CRLs cannot.  In terms of
administration updating an account record is much simpler than updating
a CRL (because much less information needs to be available for the
former than for the latter).

 The higher-value operations tend to be able to justify the real-time,
 higher quality, and finer grain information provided by an
 account-based infrastructure ... and as internet and technology has
 reduced the costs and pervasiveness of such operations ... it further
 pushes PKI, certificate-based mode of operation further and further
 into no-value market niches.

Are you arguing for Kerberos for Internet-scale deployment?  Or simply
for PKI with rp-only certs and OCSP?  Or other federated
authentication mechanism?  Or all of the above?  :)

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 11:13:36AM -0400, Perry E. Metzger wrote:
 On Wed, 28 Jul 2010 09:30:22 -0500 Nicolas Williams
 nicolas.willi...@oracle.com wrote:
 
 I have no objections to infrastructure -- bridges, the Internet,
 and electrical transmission lines all seem like good ideas. However,
 lets avoid using the term Public Key Infrastructure for things that
 depart radically from the Kohnfelder and subsequent X.509 models.

Well, OK.  But PKI no longer means that, not with bridges and what not
in the picture.

  Infrastructure (whether of a pk variety or otherwise) and transitive
  trust probably have to be part of the answer for scalability
  reasons, even if transitive trust is a distasteful concept.
 
 Well, it depends a lot on what kind of trust.
 
 Let me remind everyone of one of my long-standing arguments.
 
 Say that Goldman Sachs wants to send Morgan Stanley an order for a
 billion dollars worth of bonds. Morgan Stanley wants to know that
 Goldman sent the order, because the consequences of a mistake on a
 transaction this large would be disastrous.

Indeed.  They must first establish a direct trust relationship.  They
might leverage transitive trust to bootstrap direct trust if doing so
makes the process easier (which it almost certainly does, and which we
use in the off-line world all the time using pieces of paper or plastic
issued by various authorities, such as drivers' licenses, passports,
...).

  However, we need to be able to build direct trust relationships,
  otherwise we'll just have a house of transitive trust cards.
   Again, think of the SSH leap-of-faith and SSL pinning
   concepts, but don't constrain yourselves purely to pk technology.
 
 I believe we may, in fact, be in violent agreement here.

We are.  Perhaps I hadn't made my point obvious enough: transitive trust
is necessary, but primarily as a method of bootstrapping direct trust
relationships.  I really should have used that specific formulation.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 04:23:52AM +1200, Peter Gutmann wrote:
 Nicolas Williams nicolas.willi...@oracle.com writes:
 Sorry, but this is wrong.  The OCSP protocol itself really is an online
 certificate status protocol.  
 
 It's not an online certificate status protocol because it can provide neither
 a yes nor a no response to a query about the validity of a certificate.

You should be more specific.  I'm looking at RFC2560 and I don't see
this.

OCSP Responses allow the Responder to assert:

 - A time at which the given cert was known to be valid (thisUpdate;
   REQUIRED).

   Relying parties are free to impose a freshness requirement (e.g.,
   thisUpdate must be no more than 5 minutes in the past).

   Perhaps you're concerned that protocols that allow for carrying OCSP
   Responses don't provide a way for peers to indicate what their
   freshness requirements are?

 - A time after which the given OCSP Response is not to be considered
   valid (nextUpdate, which is OPTIONAL).

 - The certificate's status (certStatus, one of good, revoked, unknown;
   REQUIRED).

How is responding certStatus=good, thisUpdate=now - a few minutes
not a yes response to a query about the validity of a certificate?

What am I missing?
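As a concrete sketch of the relying-party freshness check described
above: the function name and the 5-minute window are my own
illustrations, not anything RFC 2560 mandates.

```python
from datetime import datetime, timedelta, timezone

# Illustrative relying-party policy: accept only certStatus=good with a
# sufficiently recent thisUpdate.  (Names and the window are mine.)
def accept_ocsp_response(cert_status, this_update, now,
                         max_age=timedelta(minutes=5)):
    """Return True iff the response says 'good' and is fresh enough."""
    if cert_status != "good":
        return False
    age = now - this_update
    return timedelta(0) <= age <= max_age

now = datetime(2010, 7, 28, 12, 0, tzinfo=timezone.utc)
fresh = now - timedelta(minutes=2)
stale = now - timedelta(hours=3)

print(accept_ocsp_response("good", fresh, now))     # True: a fresh "yes"
print(accept_ocsp_response("good", stale, now))     # False: too old
print(accept_ocsp_response("revoked", fresh, now))  # False: explicit "no"
```

This is the whole of the "relying parties impose a freshness
requirement" idea: the protocol supplies {certStatus, thisUpdate}, and
the policy is local.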

 (For an online status protocol I want to be able to submit a cert and get back
 a straight valid/not valid response, exactly as I can for credit cards with
 their authorised/declined response.  Banks were doing this twenty years ago
 with creaky mainframes over X.25 and (quite probably) wet bits of string, but
 we still can't do this today with multicore CPUs and gigabit links if we're
 using OCSP).

OCSP gives you that.  Seriously.  In fact, an OCSP Responder either must
not respond or it must give you at least {certStatus, thisUpdate}
information about a cert.  Yes, certStatus can be unknown, but a
Responder that regularly asserts certStatus=unknown would be a rather
useless responder.

 Responder implementations may well be based on checking CRLs, but they aren't
 required to be.
 
 They may be, or they may not be, but you as a relying party have no way of 
 telling.

And why would a relying party need to know internal details of the OCSP
Responder?

 In any event though since OCSP can't say yes or no, it doesn't matter whether 
 the response is coming from a live database or a month-old CRL, since it's 
 still a fully CRL-bug-compatible blacklist I can trivially avoid it with a 
 manufactured-cert attack.

Manufactured cert attack?  If you can mint certs without having the CA's
private key then who cares about OCSP.  If you can do it only as a
result of hash collisions, well, switch hashes.  Let's not confuse hash
collision issues with whether OCSP does what it's advertised to do.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 12:18:56PM -0400, Perry E. Metzger wrote:
 Again, I understand that in a technological sense, in an ideal world,
 they would be equivalent. However, the big difference, again, is that
 you can't run Kerberos with no KDC, but you can run a PKI without an
 OCSP server. The KDC is impossible to leave out of the system. That is
 a really nice technological feature.

Whether PKI can run w/o OCSP is up to the relying parties.  Today,
because OCSP is an afterthought, they have little choice.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 01:25:21PM -0400, Perry E. Metzger wrote:
 My mother relies on many certificates. Can she make a decision on
 whether or not her browser uses OCSP for all its transactions?
 
 I mention this only because your language here is quite sticky.
 Saying it is up to the relying parties is incorrect. It is really
 up to a host of people who are nowhere near the relying parties. In
 most cases, the relying parties aren't even capable of understanding
 the issue.

Precise and concise language in a fast moving thread with participants
with diverse backgrounds is going to be hard to come by.  Better to quit
than hold out for that (unless you enjoy being disappointed).  I'm
hardly the only sinner here on that score.

"Up to the relying parties" means "up to the browsers", where users-as-
relying-parties are concerned.  That also means getting software
updated, which to some degree means getting my mom to do stuff she
doesn't and shouldn't have to know how to do.  It shouldn't mean getting
my mom to enable OCSP -- that would be hopeless.

"Up to the relying parties" means "up to the servers" as well, since
servers too are relying parties.

Again, if everything is too hard, why do we bother even talking about
any of this?  ETOOHARD cannot usefully be a retort to every suggestion.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 02:41:35PM -0400, Perry E. Metzger wrote:
 On the other edge of the spectrum, many people now use quite secure
 protocols (though I won't claim the full systems are secure --
 implementation bugs are ubiquitous) for handling things like remote
 login and file transfer, accessing shared file systems on networks,
 etc., with little to no knowledge on their part about how their
 systems work or are configured. This seems like a very good thing. One
 may complain about many issues in Microsoft's systems, for example,
 but adopting Kerberos largely fixed the distributed authentication
 problem for them, and without requiring that users know what they're
 doing.

Hear, hear!  But... great for corporate networks, not quite for
Internet-scale, but a great example of how we can make progress when we
want to.

 (I am reminded of the similar death-by-complexity of the IPSec
 protocol's key management layers, where I am sad to report that even I
 can't easily configure the thing. Some have proposed standardizing on
 radically simplified profiles of the protocol that provide almost no
 options -- I believe to be the last hope for the current IPSec suite.)

IPsec is a great example of another kind of failure: lack of APIs.
Applying protection to individual packets without regard to larger
context is not terribly useful.  Apps have no idea what's going on, if
anything, in terms of IPsec protection.  Worse, the way in which IPsec
access control is handled means that typically many nodes can claim any
given IP address, which dilutes the protection provided by IPsec as the
number of such nodes goes up.  Just having a way to ask that a TCP
connection's packets all be protected by IPsec, end-to-end, with similar
SA pairs (i.e., with same peers, same transforms) would have been a
great API to have years ago.
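For illustration, here is the kind of per-connection request the
paragraph above says was missing.  Nothing like this exists; every name
and semantic below is hypothetical, a sketch of the wished-for API
rather than any real one:

```python
# Purely hypothetical sketch of a per-connection IPsec API -- no such
# call exists in any stdlib or kernel; names and semantics are mine.
from dataclasses import dataclass

@dataclass(frozen=True)
class IpsecPolicy:
    peer: str            # the single peer allowed to claim the address
    transforms: tuple    # e.g. ("esp-aes128", "hmac-sha256")
    end_to_end: bool = True

_policies = {}  # would live in the kernel, keyed by connection

def require_ipsec(conn_id, policy):
    """Ask that all of a connection's packets use one SA pair,
    end-to-end, with the named peer and transforms."""
    _policies[conn_id] = policy
    return policy

def protection_of(conn_id):
    """What apps cannot ask today: is this connection protected, how,
    and by whom?"""
    return _policies.get(conn_id)

p = require_ipsec("tcp:10.0.0.1:443",
                  IpsecPolicy("server.example",
                              ("esp-aes128", "hmac-sha256")))
print(protection_of("tcp:10.0.0.1:443") == p)  # True
```

The point of the sketch is only that apps could then reason about, and
insist on, the protection of their own traffic.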

The lack of APIs has effectively relegated IPsec to the world of VPN.

Nico
-- 



Re: A mighty fortress is our PKI

2010-07-27 Thread Nicolas Williams
On Tue, Jul 27, 2010 at 09:54:51PM +0100, Ben Laurie wrote:
 On 27/07/2010 15:11, Peter Gutmann wrote:
  The intent with posting it to the list was to get input from a collection of
  crypto-savvy people on what could be done.  The issue had previously been
  discussed on a (very small) private list, and one of the members suggested I
  post it to the cryptography list to get more input from people.  The 
  follow-up
  message (the Part II one) is in a similar vein, a summary of a problem and
  then some starters for a discussion on what the issues might be.
 
 Haven't we already decided what to do: SNI?

But isn't that the problem: SNI had to be added later, therefore it
isn't everywhere, therefore site operators don't trust its presence,
therefore SNI is irrelevant?

Do we have any information as to which browsers in significant current
use don't support SNI?  Hopefully at some point site operators could
declare that browsers that don't support SNI will not be supported.

Nico
-- 



Re: A mighty fortress is our PKI

2010-07-27 Thread Nicolas Williams
On Tue, Jul 27, 2010 at 06:30:51PM -0600, Paul Tiemann wrote:
  **  But talking about TLS/SNI to SSL suppliers is like talking about the
  lifeboats on the Titanic ... we don't need it because SSL is unsinkable.
 
 Apache support for this came out 12 months ago.  Does any one know of
 statistics that show what percentage of installed Apache servers out
 there are running 2.2.12 or greater?  How many of the top 10 Linux
 distributions are past 2.2.12?  

Yet browser SNI support is what matters regarding adoption.  No hosting
service will provision services such that SNI is required if too much of
the browser installed base does not support it.

Of course server support is a requirement in order to get SNI deployed,
but that's much less of an issue than client support.

Thanks for pointing out IE6 though.

Nico
-- 



Re: What if you had a very good patent lawyer...

2010-07-24 Thread Nicolas Williams
On Thu, Jul 22, 2010 at 05:59:50PM -0700, John Gilmore wrote:
 It's pretty outrageous that anyone would try to patent rolling barcoded
 dice to generate random numbers.

If you have children at home you could just point a webcam at their
gameroom, or, depending on how obsessive compulsive their guardians are
regarding cleanliness, anywhere in their homes.  Of course, they won't
be there all the time, and their guardians will sometimes clean up, which
means that such a generator will tend to be biased, which means you need
an entropy extractor and entropy pool (but you knew you needed those
anyways).  Even so, I believe that such an entropy generator will
generally produce better entropy than a geiger counter, at least when
it's operational.

I wouldn't put it past any PTO, especially the USPTO, to issue a patent
on gathering entropy from a webcam pointed at tiny, human entropy
generators.  But IANAL.

Nico
-- 



Re: Intel to also add RNG

2010-07-12 Thread Nicolas Williams
On Mon, Jul 12, 2010 at 01:13:10PM -0400, Jack Lloyd wrote:
 I think it's important to make the distinction between trusting Intel
 not to have made it actively malicious, and trusting them to have
 gotten it perfectly correct in such a way that it cannot fail.
 Fortunately, the second problem, that it is a well-intentioned but
 perhaps slightly flawed RNG [*], could be easily alleviated by feeding
 the output into a software CSPRNG (X9.31, a FIPS 186-3 design, take
 your pick I guess). And the first could be solved by also feeding your
 CSPRNG with anything that you would have fed it with in the absence of
 the hardware RNG - in that case, you're at least no worse off than you
 were before. (Unless your PRNG's security can be negatively affected
 by non-random or maliciously chosen inputs, in which case you've got
 larger problems).

You need an entropy pool anyways.  Adding entropy (from the CPU's RNG,
from hopefully-random event timings, ...) and non-entropy (from a flawed
HW RNG, from sadly-not-random event timings, ...) to the pool results in
having enough entropy (once enough entropy has been added to begin
with).  You'll want multiple entropy sources no matter what, to deal
with HW RNG failures for example.
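A toy illustration of the point: mixing weak or even fully broken
sources into a hash-based pool does not hurt, so long as at least one
source was good.  This is a sketch only, not a vetted design (real
systems use constructions like Fortuna or the OS kernel's pool):

```python
import hashlib
import os
import time

# Toy entropy pool: mix several sources -- some strong, some weak or
# failed -- through a hash.  Low-entropy input never reduces the
# entropy already in the pool; it can only add, or add nothing.
class EntropyPool:
    def __init__(self):
        self._pool = b"\x00" * 32

    def mix(self, data):
        self._pool = hashlib.sha256(self._pool + data).digest()

    def read(self, n):
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(
                self._pool + counter.to_bytes(4, "big")).digest()
            counter += 1
        self.mix(out)  # step the pool forward after producing output
        return out[:n]

pool = EntropyPool()
pool.mix(os.urandom(32))                     # OS entropy
pool.mix(time.time_ns().to_bytes(8, "big"))  # event timing (maybe weak)
pool.mix(b"\x00" * 32)                       # a failed HW RNG: all zeros
key = pool.read(16)
print(len(key))  # 16
```

The all-zeros "HW RNG" contributes nothing, but the output is still as
unpredictable as the os.urandom contribution made it.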

BTW, SPARC CPUs have shipped with on-board HW RNGs; Intel is hardly
first.

Nico
-- 



Re: Against Rekeying

2010-03-26 Thread Nicolas Williams
On Fri, Mar 26, 2010 at 10:22:06AM -0400, Peter Gutmann wrote:
 I missed that in his blog post as well.  An equally big one is the SSHv2
 rekeying fiasco, where for a long time an attempt to rekey across two
 different implementations typically meant drop the connection, and it still
 does for the dozens(?) of SSH implementations outside the mainstream of
 OpenSSH, Putty, ssh.com and a few others, because the procedure is so complex
 and ambiguous that only a few implementations get it right (at one point the
 ssh.com and OpenSSH implementations would detect each other and turn off
 rekeying because of this, for example).  Unfortunately in SSH you're not even
 allowed to ignore rekey requests like you can in TLS, so you're damned if you
 do and damned if you don't [0].

I made much the same point, but just so we're clear, SSHv2 re-keying has
been interoperating widely since 2005.  (I was at Connectathon, and
while the details of Cthon testing are proprietary, I can generalize and
tell you that interop in this area was very good.)

Nico
-- 



Re: Against Rekeying

2010-03-26 Thread Nicolas Williams
On Sat, Mar 27, 2010 at 12:31:45PM +1300, Peter Gutmann (alt) wrote:
 Nicolas Williams nicolas.willi...@sun.com writes:
 
 I made much the same point, but just so we're clear, SSHv2 re-keying has been
 interoperating widely since 2005.  (I was at Connectathon, and while the
 details of Cthon testing are proprietary, I can generalize and tell you that
 interop in this area was very good.)
 
 Whose SSH rekeying though?  I follow the support forums for a range of non-
 mainstream (i.e. not the usual suspects of OpenSSH, ssh.com, or Putty) SSH
 implementations, and "why does my connection die after an hour with
 [decryption error/invalid packet/unrecognised message type/whatever]"
 (all signs of rekeying issues) is still pretty much an FAQ across them
 at the current time.

Several key ones, including SunSSH.  I'd have to go ask permission in
order to disclose, since Connectathon results are private, IIRC.  Also,
it's been five years, so some of the information has fallen off my
cache.

Nico
-- 



Re: Against Rekeying

2010-03-25 Thread Nicolas Williams
On Thu, Mar 25, 2010 at 01:24:16PM +, Ben Laurie wrote:
 Note, however, that one of the reasons the TLS renegotiation attack was
 so bad in combination with HTTP was that reauthentication did not result
 in use of the new channel to re-send the command that had resulted in a
 need for reauthentication. This command could have come from the
 attacker, but the reauthentication would still be used to authenticate it.

It would have sufficed to bind the new and old channels.  In fact, that
is pretty much the actual solution.

 In other words, designing composable secure protocols is hard. And TLS
 isn't one. Or maybe it is, now that the channels before and after
 rekeying are bound together (which would seem to invalidate your
 argument above).

Channel binding is one tool that simplifies the design and analysis of
composable secure protocols.  Had channel binding been used to analyze
TLS re-negotiation earlier the bug would have been obvious earlier as
well.  Proof of that last statement is in the pudding: Martin Rex
independently found the bug when reasoning about channel binding to TLS
channels in the face of re-negotiation; once he started down that path
he found the vulnerability promptly.

(There are several champions of the channel binding technique who could
and should have noticed the TLS bug earlier.  I myself simply took the
security of TLS for granted; I should have been more skeptical.  I
suspect that what happened, ultimately, is that TLS re-negotiation was
an afterthought, barely mentioned in the TLS 1.2 RFC and barely used,
therefore many experts were simply not conscious enough of its existence
to care.  Martin was quite conscious of it while also analyzing a
tangential channel binding proposal.)

Nico
-- 



Re: Against Rekeying

2010-03-23 Thread Nicolas Williams
On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

I fully agree with EKR on this: if you're using block ciphers with
128-bit block sizes in suitable modes and with suitably strong key
exchange, then there's really no need to ever (for a definition of
ever relative to common connection lifetimes for whatever protocols
you have in mind, such as months) re-key for cryptographic reasons.

There may be reasons for re-keying, but the commonly given one -- that a
given key gets weak over time from use (meaning the attacker can gather
ciphertexts) and from the mere passage of time (during which an attacker
might brute force it) -- does not apply to modern crypto.
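Some back-of-the-envelope arithmetic behind that claim (my numbers,
chosen only for illustration):

```python
# With a 128-bit block cipher in a mode like CBC, ciphertext-block
# collisions (the birthday bound) become a concern around 2**64 blocks.
# Even stopping very conservatively at 2**48 blocks, a fully loaded
# 10 Gb/s link needs over a month of continuous transfer to get there,
# which is why re-keying within common session lifetimes buys nothing
# cryptographically.
block_bytes = 16                   # 128-bit blocks
birthday_blocks = 2 ** 64          # sqrt(2**128)
conservative_blocks = 2 ** 48      # stay far below the bound

bytes_enciphered = conservative_blocks * block_bytes
link_bytes_per_sec = 10e9 / 8      # 10 Gb/s, fully loaded
days = bytes_enciphered / link_bytes_per_sec / 86400
print(round(days, 1))              # ~41.7 days
```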

Ensuring that a protocol that uses modern crypto also supports re-keying
only complicates the protocol, which adds to the potential for bugs.

Consider SSHv2: popular implementations of the server do privilege
separation, but after successful login there's the potential for having
to do re-keys that require privilege (e.g., if you're using SSHv2 w/
GSS-API key exchange), which complicates privilege separation.  But for
that wrinkle the only post-login privsep complications are: logout
processing (auditing, ...), and utmpx processing (if you want tty
channels to appear in w(1) output; this could always be handled in ways
that are not specific to sshd).  What a pain!  (OTOH, the ability to
delegate fresh GSS credentials via re-keying is useful.)

Nico
-- 



Re: Against Rekeying

2010-03-23 Thread Nicolas Williams
On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
  
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
  
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.
 
 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 ever relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

I forgot to mention that I was referring to session keys for on-the-wire
protocols.  For data storage I think re-keying is easier to justify.

Also, there is a strong argument for changing ephemeral session keys for
long sessions, made by Charlie Kaufman on EKR's blog post: to limit
disclosure of earlier ciphertexts resulting from future compromises.

However, I think that argument can be answered by changing session keys
without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
session keys in such a way would not be trivial, but it may well be
simpler than the alternative.  I've only got, in my mind, a sketch of
how it'd work.)

Nico
-- 



Re: 1024 bit RSA cracked?

2010-03-16 Thread Nicolas Williams
On Wed, Mar 10, 2010 at 09:27:06PM +0530, Udhay Shankar N wrote:
 Anyone know more?
 
 http://news.techworld.com/security/3214360/rsa-1024-bit-private-key-encryption-cracked/

My initial reaction from reading only the abstract and parts of the
introduction is that the authors are talking about attacking hardware
that implements RSA (say, a cell phone) by injecting faults into the
system via the power supply of the device.

This isn't really applicable to server hardware in a data center (where
the power, presumably, will be conditioned and physical security will be
provided, also presumably) but this attack is definitely applicable to
portable devices -- laptops, mobiles, smartcards.

 The RSA algorithm gives security under the assumption that as long as
 the private key is private, you can't break in unless you guess it.
 We've shown that that's not true, said Valeria Bertacco, an associate
 professor in the Department of Electrical Engineering and Computer
 Science, in a statement.

They're not the first ones to show that!  Side-channel attacks have been
around for a while now.  It's not just the algorithms, but the machine
executing them and its physical characteristics that matter.

Nico
-- 



Re: TLS break

2009-11-25 Thread Nicolas Williams
On Wed, Nov 11, 2009 at 10:57:04AM -0500, Jonathan Katz wrote:
 Anyone care to give a layman's explanation of the attack? The 
 explanations I have seen assume a detailed knowledge of the way TLS/SSL 
 handle re-negotiation, which is not something that is easy to come by 
 without reading the RFC. (As opposed to the main protocol, where one can 
 find textbook descriptions.)

Not to sound like a broken record, and not to plug work I've done[*],
but IMO the best tool to apply to this situation -- to understand the
problem, to produce solutions, and to analyze proposed solutions -- is
channel binding [0].

Channel binding should be considered whenever one combines two (or more)
two-peer end-to-end security protocols.

In this case two instances of the same protocol are combined, with an
outer/old TLS connection and an inner/new connection negotiated with the
protection of the outer one.  That last part, negotiated with the
protection of the outer one may have led people to believe that the
combination technique was safe.  However, applying channel binding as an
analysis technique would have made it clear that that technique was
vulnerable to MITM attacks.

What channel binding does not give you as an analysis technique is
exploit ideas beyond try being an MITM.

The nice thing about channel binding is that it allows you to avoid
having to analyze the combined protocols in order to understand whether
the combination is safe.  As a design technique all you need to do is
this: a) design a cryptographically secure name for an established
channel of the outer protocol, b) design a cryptographically secure
facility in the inner protocol for verifying that the applications at
both ends observe the same outer channel, c) feed (a) to (b), and if the
two protocols are secure and (a) and (b) are secure then you'll have a
secure combination.
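The a/b/c recipe can be sketched in a few lines.  This is illustrative
only: the channel name here is just a hash of a stand-in transcript,
where a real deployment would use a binding type defined for the
channel (e.g. tls-unique), and the inner key would come from the inner
protocol's own key exchange:

```python
import hashlib
import hmac

# (a) a cryptographically secure name for the outer channel: here, a
#     hash of the outer channel's handshake transcript (stand-in);
# (b) the inner protocol MACs that name under its own session key;
# (c) both ends compare.  A MITM terminates two *different* outer
#     channels, so the names differ and the check fails.

def channel_name(outer_transcript):
    return hashlib.sha256(outer_transcript).digest()           # (a)

def bind(inner_key, name):
    return hmac.new(inner_key, name, hashlib.sha256).digest()  # (b)

inner_key = b"K" * 32  # the inner protocol's session key (stand-in)

# Honest case: both ends observe the same outer channel.
client_view = channel_name(b"outer-handshake-1")
server_view = channel_name(b"outer-handshake-1")
assert hmac.compare_digest(bind(inner_key, client_view),
                           bind(inner_key, server_view))       # (c) ok

# MITM case: each end sees a different outer channel.
client_view = channel_name(b"outer-handshake-to-mitm")
server_view = channel_name(b"outer-handshake-from-mitm")
assert not hmac.compare_digest(bind(inner_key, client_view),
                               bind(inner_key, server_view))
print("channel binding check behaves as described")
```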

[*] I've written an RFC on the topic, but the idea isn't mine -- it
goes as far back as 1992 in IETF RFCs.  I'm not promoting channel
binding because I had anything to do with it, but because it's a
useful technique in combining certain cryptographic protocols that I
think should be more widely understood and applied.

[0] On the Use of Channel Bindings to Secure Channels, RFC5056.

Nico
-- 



Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-02 Thread Nicolas Williams
On Sun, Nov 01, 2009 at 10:33:34PM -0700, Zooko Wilcox-O'Hearn wrote:
 I don't understand why you need a MAC when you already have the hash  
 of the ciphertext.  Does it have something to do with the fact that  
 the checksum is non-cryptographic by default
 (http://docs.sun.com/app/docs/doc/819-5461/ftyue?a=view), and is that
 still true?  Your
 original design document [1] said you needed a way to force the  
 checksum to be SHA-256 if encryption was turned on.  But back then  
 you were planning to support non-authenticating modes like CBC.  I  
 guess once you dropped non-authenticating modes then you could relax  
 that requirement to force the checksum to be secure.

[Not speaking for Darren...]  No, the requirement to use a strong hash
remains, but since the hash would be there primarily for protection
against errors, I don't think the requirement for a strong hash is
really needed.

 Too bad, though!  Not only are you now tight on space in part because  
 you have two integrity values where one ought to do, but also a  
 secure hash of the ciphertext is actually stronger than a MAC!  A  
 secure hash of the ciphertext tells whether the ciphertext is right  
 (assuming the hash function is secure and implemented correctly).   
 Given that the ciphertext is right, then the plaintext is right  
 (given that the encryption is implemented correctly and you use the  
 right decryption key).  A MAC on the plaintext tells you only that  
 the plaintext was chosen by someone who knew the key.  See what I  
 mean?  A MAC can't be used to give someone the ability to read some  
 data while withholding from them the ability to alter that data.  A  
 secure hash can.

Users won't actually get the data keys, only the data key wrapping keys.
Users who can read the disk and find the wrapped keys and know the
wrapping keys can find the actual data keys, of course, but add in a
host key that the user can't read and now the user cannot recover their
data keys.  One goal is to protect a system against its users, but
another is to protect user data against malicious modification by anyone
else.  A MAC provides the first kind of protection if the user can't
access the data keys, and a MAC provides the second kind of protection
if the data keys can be kept secret.
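A small demonstration of that asymmetry (the file contents and key
below are made up): anyone holding the MAC key can re-MAC tampered
data undetectably, while a hash pinned somewhere the attacker cannot
write still catches the change.

```python
import hashlib
import hmac

data = b"user file contents"
key = b"shared-read-key-0000000000000000"  # held by anyone allowed to read

mac = hmac.new(key, data, hashlib.sha256).digest()
pinned_hash = hashlib.sha256(data).digest()  # stored out of attacker reach

# A reader (who necessarily has the key) tampers and re-MACs:
tampered = b"user file contents, altered"
forged_mac = hmac.new(key, tampered, hashlib.sha256).digest()

mac_detects = not hmac.compare_digest(
    forged_mac, hmac.new(key, tampered, hashlib.sha256).digest())
hash_detects = hashlib.sha256(tampered).digest() != pinned_hash

print(mac_detects)   # False: the forged MAC verifies fine
print(hash_detects)  # True: the pinned hash catches the change
```

This is exactly why a MAC alone cannot grant read access while
withholding write access, whereas a secure hash (protected up the
Merkle tree) can.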

 One of the founding ideas of the whole design of ZFS was end-to-end  
 integrity checking.  It does that successfully now, for the case of  
 accidents, using large checksums.  If the checksum is secure then it  
 also does it for the case of malice.  In contrast a MAC doesn't do  
 end-to-end integrity checking.  For example, if you've previously  
 allowed someone to read a filesystem (i.e., you've given them access  
 to the key), but you never gave them permission to write to it, but  
 they are able to exploit the isses that you mention at the beginning  
 of [1] such as Untrusted path to SAN, then the MAC can't stop them  
 from altering the file, nor can the non-secure checksum, but a secure  
 hash can (provided that they can't overwrite all the way up the  
 Merkle Tree of the whole pool and any copies of the Merkle Tree root  
 hash).

I think we have to assume that an attacker can write to any part of the
pool, including the Merkle tree roots.  It'd be odd to assume that the
attacker can write anywhere but there -- there's nothing to make it so!

I.e., we have to at least authenticate the Merkle tree roots.  That
still means depending on collision resistance of the hash function for
security.  If we authenticate every block we don't have that dependence
(I'll come back to this).

The interesting thing here is that we want the hash _and_ the MAC, not
just the MAC.  The reason is that we want block pointers (which include
the {IV, MAC, hash} for the block being pointed to) to be visible to the
layer below the filesystem, so that we can scrub/resilver and evacuate
devices from a pool (meaning: re-write all the block pointers that point to
blocks on the evacuated devices so that they point elsewhere) even
without having the data keys at hand (more on this below).

We could MAC the Merkle tree roots alone, thus alleviating the space
situation in the block pointer structure (and also saving precious CPU
cycles).  But interestingly we wouldn't alleviate it that much!  We need
to store a 96-bit IV, and if we don't MAC every block then we'll want
the strongest hash we can use, so we'll need at least another 256 bits,
for a total of 352 bits of the 384 that we have to play with.  Whereas
if we MAC every block we might store a 96-bit IV, a 128-bit
authentication tag and 160-bit hash, using all 384 bits.
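The two layouts, as plain arithmetic (the bit counts are the ones given
above; the layout names are mine):

```python
# The block-pointer bit budget described above: 384 bits available.
AVAILABLE = 384

# Layout 1: MAC only the Merkle roots, so every block needs the
# strongest hash available alongside its IV.
iv, strong_hash = 96, 256
assert iv + strong_hash == 352             # 352 of 384 used -- barely saves

# Layout 2: MAC every block, so a shorter hash suffices next to the tag.
iv, tag, short_hash = 96, 128, 160
assert iv + tag + short_hash == AVAILABLE  # exactly 384
print("both layouts fit within", AVAILABLE, "bits")
```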

You get more collision resistance from an N-bit MAC than from a hash of
the same length.  That's because in the MAC case the forger can't check
the forgery without knowing the key, while in the hash case the attacker
can verify that some content collides with another's hash.  In the MAC
case an attacker that hasn't broken the MAC/key must wait until the
system 

Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-30 Thread Nicolas Williams
On Sun, Sep 27, 2009 at 02:23:16PM -0700, Fuzzy Hoodie-Monster wrote:
 As usual, I tend to agree with Peter. Consider the time scale and
 severity of problems with cryptographic algorithms vs. the time scale
 of protocol development vs. the time scale of bug creation
 attributable to complex designs. Let's make up some fake numbers,
 shall we? (After all, we're software engineers. Real numbers are for
 real engineers! Bah!)
 
 [snip]
 
 Although the numbers are fake, perhaps the orders of magnitude are
 close enough to make the point. Which is: your software will fail for
 reasons unrelated to cryptographic algorithm problems long before
 SHA-256 is broken enough to matter. Perhaps pluggability is a source
 of frequent failures, designed to solve for infrequent and
 low-severity algorithm failures. I would worry about an overfull \hbox
 (badness 1!) long before I worried about AES-128 in CBC mode with
 a unique IV made from /dev/urandom. Between now and the time our

"AES-128 in CBC mode with a unique IV made from /dev/urandom" is
manifestly not the issue of the day.  The issue is hash function
strength.  So when would you worry about MD5?  SHA-1?  By your own
admission MD5 has already been fatally wounded and SHA-1 is headed
that way.

 ciphers and hashes and signatures are broken, we'll have a decade to
 design and implement the next simple system to replace our current
 system. Most software developers would be overjoyed to have a full
 decade. Why are we whining?

We don't have a decade to replace MD5.  We've had a long time to replace
MD5, and even SHA-1 already, but we haven't done it yet.  The reason is
simple: there's more to it than you've stated.  Specifically, for
example, you ignored protocol update development (you assumed 1 new
protocol per year, but this says nothing about how long it takes to,
say, update TLS) and deployment issues completely, and you supposed that
software development happens at a consistent, fast clip throughout.
Software development and deployment are usually constrained by legacy
and customer behavior, as well as resource availability, all of which
varies enormously.  Protocol upgrade development, for example, is harder
than you might think (I'm guessing though, since you didn't address that
issue).  Complexity exists outside the protocol.  This is why we must plan
ahead and make reasonable trade-offs.  Devising protocols that make
upgrade easier is important, supposing that they actually help with the
deployment issues (cue your argument that they do not).

I'm OK with making up numbers for the sake of argument.  But you have to
make up all the relevant numbers.  Then we can plug in real data where
we have it, argue about the other numbers, ...

 What if TLS v1.1 (2006) specified that the only ciphersuite was RSA
 with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode. How
 likely is it that attackers will be able to reliably and economically
 attack those algorithms in 2016? Meanwhile, the comically complex
 X.509 is already a punching bag
 (http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
 and 
 http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
 including the remote exploit in the certificate handling code itself).

We don't have crystal balls.  We don't really know what's in store for
AES, for example.  Conservative design says we should have a way to
deploy alternatives in a reasonably short period of time.

You and Peter are clearly biased against TLS 1.2 specifically, and
against algorithm negotiation generally.  It's also clear that you're
outside the IETF consensus on both matters _for now_.  IMO you'll need
to make better arguments, or wait long enough to be proven right by
events, in order to change that consensus.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-09-08 Thread Nicolas Williams
On Thu, Sep 03, 2009 at 04:26:30PM +1200, Peter Gutmann wrote:
 Steven Bellovin s...@cs.columbia.edu writes:
 This returns us to the previously-unsolved UI problem: how -- with today's
 users, and with something more or less like today's browsers since that's
 what today's users know -- can a spoof-proof password prompt be presented?
 
 Good enough to satisfy security geeks, no, because no measure you take will
 ever be good enough.  [...]

Well, if you're willing to reserve screen real estate, keyboard key
combinations, and so on, with said reserved screen space used to
indicate unambiguously the nature of other things displayed, and
reserved input combinations used to trigger trusted software paths, then
yes, you can solve that problem.  That's the premise of trusted
desktops, at any rate.  There are caveats, like just how large the TCB
becomes (including parts of the browser), the complexity of the trusted
information to be presented to users versus the limited amount of screen
real estate available to convey it, the need to train users to
understand the concept of trusted desktops, no fullscreen apps can be
allowed, accessibility issues, it all falls apart if the TCB is
compromised, ...

Nico
-- 



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Nicolas Williams
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?
 
 It seems to me protocol designers get all excited about this because
 they want to design the protocol once and be done with it. But software
 authors are generally content to worry about the new algorithm when they
 need to switch to it - and since they're going to have to update their
 software anyway and get everyone to install the new version, why should
 they worry any sooner?

Many good replies have been given already.  Here's a few more reasons to
want pluggability in the protocol:

 - Yes, we want to design the protocol once and be done with the hard
   parts of the design problem that we can reasonably expect to have to
   do only once.  Having to do things only once is not just cool.

 - Pluggability at the protocol layer enables pluggability in the
   implementations.  A pluggable design does not imply open plug-in
   interfaces, but it does imply highly localized development of new
   plug-ins.

 - It's a good idea to promote careful thought about the future,
   precisely what designing a pluggable protocol does and requires.

   We may get it wrong (e.g., the SSHv2 alg nego protocol has quirks,
   some of which were discovered when we worked on RFC4462), but the
   result is likely to be much better than not putting much or any such
   thought into it.

If the protocol designers and the implementors get their respective
designs right, the best case scenario is that switching from one
cryptographic algorithm to another requires less effort in the pluggable
case than in the non-pluggable case.  Specifically, specification and
implementation of new crypto algs can be localized -- no existing
specification or code need change!  Yes, new SW must still get
deployed, and that's pretty hard, but making that SW easier to develop
still helps.

Nico
-- 



Re: Fast MAC algorithms?

2009-07-23 Thread Nicolas Williams
On Thu, Jul 23, 2009 at 05:34:13PM +1200, Peter Gutmann wrote:
 mhey...@gmail.com mhey...@gmail.com writes:
 2) If you throw TCP processing in there, unless you are consistantly going to
 have packets on the order of at least 1000 bytes, your crypto algorithm is
 almost _irrelevant_.
 [...]
 for a Linux 2.2.14 kernel, remember, this was 10 years ago.
 
 Could the lack of support for TCP offload in Linux have skewed these figures
 somewhat?  It could be that the caveat for the results isn't so much this was
 done ten years ago as this was done with a TCP stack that ignores the
 hardware's advanced capabilities.

How much NIC hardware does both ESP/AH and TCP offload?  My guess: not
much.  A shame, that.

Once you've gotten a packet off the NIC to do ESP/AH processing, you've
lost the opportunity to use TOE.

Nico
-- 



Re: Fast MAC algorithms?

2009-07-22 Thread Nicolas Williams
On Wed, Jul 22, 2009 at 06:49:34AM +0200, Dan Kaminsky wrote:
 Operationally, HMAC-SHA-256 is the gold standard.  There's wonky stuff all
 over the place -- Bernstein's polyaes work appeals to me -- but I wouldn't
 really ship anything but HMAC-SHA-256 at present time.

Oh, I agree in general.  As far as new apps and standards work I'd make
HMAC-SHA-256 or AES, in an AEAD cipher mode, REQUIRED to implement and
the default.

But that's not what I'm looking for here.  I'm looking for the fastest
MACs, with extreme security considerations (e.g., warning, warning!
must rekey every 10 minutes) being possibly OK, depending on just how
extreme -- the sort of algorithm that one would not make REQUIRED to
implement, but which nonetheless one might use in some environments
simply because it's fast.

For example, many people use arcfour in SSHv2 in preference to AES
because arcfour is faster.  The SSHv2 AES-based ciphers ought to be
required to implement and the default choice, IMO, but that doesn't
mean arcfour should not be available.

In the crypto world one never designs weak-but-fast algorithms on
purpose, only strong-and-preferably-fast ones.  And when an algorithm is
successfully attacked it's usually deprecated, put in the ash heap of
history.  But there is a place for weak-but-fast algos, as long as
they're not too weak.  Any weak-but-fast algos we might have now tend to
be old algos that turned out to be weaker than designed to be, and new
ones tend to be slower because resistance against new attacks tends to
require more computation.  I realize this makes my question seem a bit
pointless, but I hoped I might get a surprising answer :(

Nico
-- 



Fast MAC algorithms?

2009-07-21 Thread Nicolas Williams
I've an application that is performance sensitive, which can re-key very
often (say, every 15 minutes, or more often still), and where no MAC is
accepted after 2 key changes.  In one case the entity generating a MAC
is also the only entity validating the MAC (but the MAC does go on the
wire).  I'm interested in any MAC algorithms which are fast, and it
doesn't matter how strong they are, as long as they meet some reasonable
lower bound on work factor to forge a MAC or recover the key, say 2^64,
given current cryptanalysis, plus a comfort factor.

On the other hand, practical MAC forgery / key recovery attacks would
completely break the security of this application.  So stronger MACs
would have to be available as well, as a performance vs. security
trade-off.

Key length is not an issue.  Having to confound the MAC by adding a
nonce is an acceptable and desirable requirement.  MAC and nonce length
are also not an issue (128-bits is acceptable).  Implementation of any
MAC algorithms for this application must be in software; parallelization
is not really an option.  Algorithm agility is also not a problem, and
certainly desirable.
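The nonce-confounded construction described above can be sketched as
follows; HMAC-SHA-256 here is just a stand-in for whatever fast MAC is
ultimately chosen, and the 128-bit nonce size matches the requirement
stated above:

```python
import hashlib
import hmac
import os

def mac_with_nonce(key, message):
    # Prepend a fresh 128-bit nonce (the "confounder") so that
    # identical messages under the same key yield unrelated tags.
    # The nonce travels on the wire alongside the tag.
    nonce = os.urandom(16)
    tag = hmac.new(key, nonce + message, hashlib.sha256).digest()
    return nonce, tag

def verify(key, message, nonce, tag):
    expected = hmac.new(key, nonce + message, hashlib.sha256).digest()
    # Constant-time comparison to avoid a timing side channel.
    return hmac.compare_digest(expected, tag)

key = os.urandom(16)
nonce, tag = mac_with_nonce(key, b"wire payload")
assert verify(key, b"wire payload", nonce, tag)
assert not verify(key, b"forged payload", nonce, tag)
```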

I see many MAC algorithms out there: HMAC, NMAC, OMAC, PMAC, CBC-MAC,
UMAC, ...  And, of course, AEAD ciphers that can be used for
authentication only, (AES-GCM, AES-CCM, Helix/Phelix, ...).  What I'm
interested in is a comprehensive table showing relative strength under
current cryptanalysis and relative performance.  I suspect there's no
such thing, sadly.  UMAC and HMAC-SHA-1 seem like obvious default
candidates.

I also see papers like "Differential-Linear Attacks against the Stream
Cipher Phelix", by Wu and Preneel.  Wu and Preneel declare Phelix to be
insecure because if you violate the requirement that nonces not be
reused then the key can be recovered rather easily.  Helix seems to be
stronger than Phelix in this regard, even though the opposite was
intended.  That makes Phelix and Helix seem likely to be in for further
weakening.  For uses such as mine (see above), such weaknesses are fine
-- nonces will not be reused, and keys will be changed very often.  So
I'm willing to consider algorithms, for this particular use, that I'd
not consider for general use cases, though these sorts of weaknesses
make me feel uneasy.

Which MAC algorithms would you recommend?  (Off-list replies will be
summarized to the list with attribution if you'll allow me to.)

Nico
-- 



Re: HSM outage causes root CA key loss

2009-07-14 Thread Nicolas Williams
On Tue, Jul 14, 2009 at 11:09:41PM +0200, Weger, B.M.M. de wrote:
 Suppose this happens in a production environment of some CA
 (root or not), how big a problem is this? I can see two issues:
 - they have to build a new CA and distribute its certificate
   to all users, which is annoying and maybe costly but not a 
   security problem,

Not a security problem?  Well, if you have a way to do authenticated
trust anchor distribution that doesn't depend on the lost CA, then sure,
it's not a security problem.  But that's just not likely, or at least
there's no standard for authenticated TA distribution, yet.  If you can
do unauthenticated TA distribution without much trouble (as opposed to
by, say, having to physically visit every host), then chances are you
have no security to begin with.

If there were such a standard you'd want to make real sure that your
TA-distribution keys are separate from your CA keys, with similar
physical and other security safeguards.

This goes to show that we do need a TA distribution protocol (not for
the web, mind you), and it needs to use PKI -- a distinct, but related
PKI.  As long as both sets of hardware tokens don't die simultaneously,
then you'll be OK.  Add multiple CAs for TA distro and you get more
redundancy.

 - if they rely on the CA for signing CRLs (or whatever 
   revocation mechanism they're using) then they have to find 
   some other way to revoke existing certificates.

The only other ways are: distribute the new CA certs, and/or use OCSP
(which must use a different cert than the CA).  OCSP is the better
answer, if you can get all apps to use it.

Nico
-- 



Re: password safes for mac

2009-07-01 Thread Nicolas Williams
On Wed, Jul 01, 2009 at 12:32:40PM -0400, Perry E. Metzger wrote:
 I think he's pointing out a more general problem.

Indeed.  IIRC, the Mac keychain uses your login password as its passphrase
by default, which means that to keep your keychain unlocked requires
either keeping the password around (bad), keeping the keys in cleartext
around (worse?), or prompting for the password/passphrase every time
they are needed (unusable).

This applies to ssh-agent, the GNOME keychain, etcetera.  It also
applies to distributed authentication systems with password-based
options, like Kerberos.

ISTM that keeping the password around (preferably in mlocked memory,
and, to be sure, with swap encrypted with ephemeral keys) is probably
the better solution.  Of course, the keys themselves have to be handled
with care too.

Nico
-- 



Re: password safes for mac

2009-07-01 Thread Nicolas Williams
I should add that a hardware token/smartcard, would be even better, but
the same issue arises: keep it logged in, or prompt for the PIN every
time it's needed?  If you keep it logged in then an attacker who
compromises the system will get to use the token, which I bet in
practice is only moderately less bad than compromising the keys
outright.

Nico
-- 



Re: password safes for mac

2009-06-30 Thread Nicolas Williams
On Mon, Jun 29, 2009 at 11:29:48PM -0700, Jacob Appelbaum wrote:
 This would be great if LoginWindow.app didn't store your unencrypted
 login and password in memory for your entire session (including screen
 lock, suspend to ram and hibernate).
 
 I keep hearing that Apple will close my bug about this and they keep
 delaying. I guess they use the credentials in memory for some things
 where they don't want to bother the user (!) but they still want to be
 able to elevate privileges.

Suppose a user's Kerberos credentials are about to expire.  What to do?

If Kerberos TGT renewable lifetime is set long enough then chances are
very good that the user will have to unlock their screen sometime within
a few hours of TGT expiration.  But what if TGT renewable lifetime is
set very short?  Or if the user doesn't lock then unlock their screen in
time?  You have to prompt the user.  But this could be an asynchronous
prompt coming from deep within the kernel (think secure NFS) -- not
impossible, but certainly tricky to implement.  And what if the user
were not using a graphical login (stop thinking Mac all the time :)?
You can't do async prompts on text-based consoles (though you can do
async warnings).

You can see where the temptation to cache the user's password comes
from.

The password can't be cached in encrypted form either (it could be
cached in string2key() form, but that's password-equivalent).  It could
be cached in scrambled form, or encrypted with a key that's stored in
cleartext or in a hardware token (think TPM), but ultimately it'd be
extractable by any sufficiently privileged process.  In any case, the
password must not end up in cleartext on unencrypted swap, and
preferably not on swap at all.
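For illustration, Kerberos's string2key for the AES enctypes is
essentially PBKDF2 over the password (RFC 3962; the final DK
key-derivation step is omitted here, and the salt and iteration count
below are illustrative).  Because the derivation is deterministic, a
cached derived key is password-equivalent for authentication purposes:

```python
import hashlib

def string2key(password, salt):
    # Roughly what RFC 3962 specifies for AES enctypes:
    # PBKDF2-HMAC-SHA1, default 4096 iterations, salt = realm || principal.
    # (The real thing applies a further DK step, omitted for brevity.)
    return hashlib.pbkdf2_hmac("sha1", password, salt, 4096, dklen=16)

# Caching the output is password-equivalent: anyone holding the derived
# key can authenticate as the user without ever learning the password.
k1 = string2key(b"hunter2", b"EXAMPLE.COMuser")
k2 = string2key(b"hunter2", b"EXAMPLE.COMuser")
assert k1 == k2   # deterministic: the cached key stays valid until the
                  # password changes
```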

FWIW, Solaris doesn't cache the user's password.

Nico
-- 



Re: Property RIghts in Keys

2009-02-12 Thread Nicolas Williams
On Tue, Feb 03, 2009 at 04:54:48PM -0500, Steven M. Bellovin wrote:
 Under what legal theory might a certificate -- or a key! -- be
 considered property?  There wouldn't seem to be enough creativity in
 a certificate, let alone a key, to qualify for copyright protection.

Private and secret keys had better be property.  Public keys are...
well, *public*, and CA public keys really, really had better be public,
so I'm as perplexed as you.

Most likely this is just a case of lawyers gone wild.  Too bad a TV show
or DVD product based on that idea wouldn't be successful.

 I won't even comment on the rest of the CPS, not even such gems as
 Subscribers warrant that ... their private key is protected and that
 no unauthorized person has ever had access to the Subscriber's private
 key.  And just how can I tell that?

Really, really wild lawyers.  (Or maybe not so wild, in the U.S.,
depending on what happens in the Lori Drew case.)

Nico
-- 



Re: full-disk subversion standards released

2009-01-31 Thread Nicolas Williams
On Fri, Jan 30, 2009 at 03:37:22PM -0800, Taral wrote:
 On Fri, Jan 30, 2009 at 1:41 PM, Jonathan Thornburg
 jth...@astro.indiana.edu wrote:
  For open-source software encryption (be it swap-space, file-system,
  and/or full-disk), the answer is yes:  I can assess the developers'
  reputations, I can read the source code, and/or I can take note of
  what other people say who've read the source code.
 
 Really? What about hardware backdoors? I'm thinking something like the
 old /bin/login backdoor that had compiler support, but in hardware.

Plus: that's a lot of code to read!  A single person can't hope to
understand the tens of millions of lines of code that make up the
software (and firmware, and hardware!) that they use every day on a
single system.  Note: that's not to say that open source doesn't have
advantages over proprietary source.



Re: Proof of Work - atmospheric carbon

2009-01-29 Thread Nicolas Williams
On Wed, Jan 28, 2009 at 04:35:50PM -0500, Jerry Leichter wrote:
 [Proposals to use reversible computation, which in principle consume  
 no energy, elided.]
 
 There's a contradiction here between the computer science and economic
 parts of the problem being discussed.  What gives a digital coin value
 is exactly that there is some real-world expense in creating it.

For some definition of digital coin.

An alternative design where all coins are double-spend-checked against
on-line infrastructure belonging to the issuer doesn't have this
constraint, though it has different properties.  For example, anonymity
might then depend on trusting mixmaster-type networks to exchange coins
the issuer knows you have for coins that the issuer doesn't know you
have, but that might make anonymity entirely impractical.  But then,
how practical are POW coins anyway?

I suspect most people in the formal sectors of most economies would
gladly live with digital credit/bank cards most of the time and to heck
with digital coins.

 So, how do you tie the cost of a token to power?  Curiously, something  
 of the sort has already been proposed.  It's been pointed out - I'm  
 afraid I don't have the reference - that CPU's keep getting faster and  
 more parallel and a high rate, but memories, while they are getting  
 enormously bigger, aren't getting much faster.  So what the paper I  
 read proposed is hash functions that are expensive, not in CPU  
 seconds, but in memory reads and writes.  Memory writes are inherently  
 non-reversible so inherently cost power; a high-memory-write algorithm  
 is also one that uses power.

Clever!
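A toy sketch of such a memory-bound hash, in the spirit of scrypt's
ROMix construction (parameters purely illustrative, not a proposal):

```python
import hashlib

def memory_hard_hash(data, n=1 << 12):
    # Phase 1: fill a table with a hash chain.
    # Phase 2: do n data-dependent reads into the table, so the whole
    # table must stay resident; the cost is dominated by memory
    # traffic, not CPU cycles -- the property the quoted paper wants.
    h = hashlib.sha256(data).digest()
    table = []
    for _ in range(n):
        table.append(h)
        h = hashlib.sha256(h).digest()
    for _ in range(n):
        j = int.from_bytes(h[:4], "big") % n  # unpredictable index
        h = hashlib.sha256(h + table[j]).digest()
    return h

digest = memory_hard_hash(b"coin-candidate")
assert len(digest) == 32
```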

Nico
-- 



Re: Obama's secure PDA

2009-01-27 Thread Nicolas Williams
On Mon, Jan 26, 2009 at 04:18:39PM -0500, Jerry Leichter wrote:
 An email system for the White  
 House has the additional complication of the Presidential Records  
 Act:  Phone conversations don't have to be recorded, but mail messages  
 do (and have to remain accessible).

[OT for this list, I know.]

It seems that the President's lawyers believe that IM is covered by the
Presidential Records Act and shouldn't be used in the White House:

http://www.newser.com/tag/31542/1/presidential-records-act.html
http://www.newser.com/story/48239/team-obama-told-to-ditch-instant-messaging.html

One possible workaround might be to allow WH staff to _receive_ IMs,
and to follow tweets from outside the WH, but not respond to any of it
except by phone.  (Even phone calls, though not recorded, are dangerous
to the WH since there is a record of calls made and taken.)

Of course, if there's nothing to hide, then why not just use IM and be
done?  The legal advice seems sound, but it's just advice.  Obama and
his staff could easily use and archive IMs and avoid embarrassment by,
well, keeping discussions above board.

Nico
-- 



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Nicolas Williams
On Mon, Jan 19, 2009 at 01:38:02PM +, Darren J Moffat wrote:
 I don't think it depends at all on who you trust but on what algorithms 
 are available in the protocols you need to use to run your business or 
 use the apps important to you for some other reason.   It also very much 
 depends on why the app uses the crypto algorithm in question, and in the 
 case of digest/hash algorithms wither they are key'd (HMAC) or not.

As Jeff Hutzelman suggested recently, inspired by the SSHv2 CBC mode
vulnerability, hash algorithm agility for PKI really means having more
than one signature, each using a different hash, in each certificate;
this enlarges certificates.  Alternatively, it needs to be possible to
select what certificate to present to a peer based on an algorithm
negotiation; this tends to mean adding round-trips to our protocols.
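The certificate-selection alternative can be sketched as follows; the
certificate store and names below are hypothetical stand-ins for
per-hash-signed certificates for the same subject key:

```python
# Hypothetical store: the same subject key, signed by the CA under
# different hash algorithms.  Selection happens after a (round-trip-
# costing) negotiation tells us which hashes the peer accepts.
cert_store = {
    "sha1":   "cert-signed-with-sha1WithRSAEncryption",
    "sha256": "cert-signed-with-sha256WithRSAEncryption",
}

def select_cert(peer_supported_hashes):
    for h in ("sha256", "sha1"):   # our preference order, strongest first
        if h in peer_supported_hashes:
            return cert_store[h]
    raise ValueError("no certificate matches the peer's hash algorithms")

print(select_cert(["sha1", "sha256"]))
# cert-signed-with-sha256WithRSAEncryption
```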

Nico
-- 



Re: Why the poor uptake of encrypted email?

2008-12-19 Thread Nicolas Williams
On Thu, Dec 18, 2008 at 01:06:37PM +1000, James A. Donald wrote:
 Peter Gutmann wrote:
  ... to a statistically irrelevant bunch of geeks.
  Watch Skype deploy a not- terribly-anonymous (to the
  people running the Skype servers) communications
  system.
 
 Actually that is pretty anonymous.  Although I am sure
 that Skype would play ball with any bunch of goons that
 put forward a plausible justification, or threated to
 rip their fingernails off, most government agencies find
 it difficult to deal with anyone that they cannot
 casually have thrown in jail - dealing with equals is
 not part of their mindset.  So if your threat model does
 not include the FBI and the CIA, chances are that  the
 people who are threatening you will lack the
 organization and mindset to get Skype's cooperation.

That's also true for e-mail where the only encryption is in the
transport.  Except that you tend to store your e-mails and not your
phone calls, of course.  But you could always encrypt your filesystem
and not your e-mail itself, and that way avoid all the portability
issues that Alec brought up.



Re: CPRNGs are still an issue.

2008-12-18 Thread Nicolas Williams
On Wed, Dec 17, 2008 at 03:02:54PM -0500, Perry E. Metzger wrote:
 The longer I'm in this field, the more the phrase use with extreme
 caution seems to mean don't use to me. More and more, I think that
 if you don't have a really good way to test and get assurance about a
 component of your security architecture, you should leave that
 component out.

But do beware of becoming something of a luddite w.r.t. entropy sources.

If you can mix seeds into your entropy pool without destroying the
entropy of your pool (and we agree that you can) while adding some of
any entropy in your seeds (and we agree that you can), then why not?
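A minimal sketch of such mixing, with SHA-256 standing in for the
mixing function (real pools use more elaborate constructions).  Since
the hash is one-way and depends on every bit of the old pool state, a
known or low-quality seed cannot reduce the entropy already present:

```python
import hashlib

def mix(pool, seed):
    # One-way compression of (old pool || seed): an attacker who knows
    # the seed still cannot walk the pool state backward, and whatever
    # entropy the seed carries is folded in.
    return hashlib.sha256(pool + seed).digest()

pool = b"\x00" * 32                          # empty pool at first boot
pool = mix(pool, b"manufacture-time seed")
pool = mix(pool, b"boot counter: 12345")     # time/counter-based salting
pool = mix(pool, b"interrupt timing samples ...")
assert len(pool) == 32
```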

Yes, I saw your other message.  Testing entropy pools and sources is
hard if you want real entropy.  One way to test the pool and its mixing
function is to add and use a hook for supplying test vectors instead of
real entropy for each source.  But to test the operational system, if it
has real entropy sources, is harder.  So you might as well add in a
fixed, manufacture-time seed + time/counter-based salting, as you
suggested.  And you'll still want to test the result, but you can only
apply statistical analysis to the outputs to decide if they're
random-*looking*.

Having no entropy sources is not a good option for systems where the
threat model requires good entropy sources (e.g., if you want PFS to
prevent compromise of an end-point from compromising pre-compromise
communications).  IMO it's not wise to trivially reject an all of the
above approach to entropy gathering.

Nico
-- 



Re: Why the poor uptake of encrypted email?

2008-12-17 Thread Nicolas Williams
On Tue, Dec 16, 2008 at 03:06:04AM +, StealthMonger wrote:
 Alec Muffett alec.muff...@sun.com writes:
  In the world of e-mail the problem is that the end-user inherits a
  blob of data which was encrypted in order to defend the message as it
  passes hop by hop over the store-and-forward SMTP-relay (or UUCP?) e-
  mail network...  but the user is left to deal with the effects of
  solving the *transport* security problem.
 
  The model is old.  It is busted.  It is (today) wrong.
 
 But the capabilities of encrypted email go beyond mere confidentiality
 and authentication.  They include also strongly untraceable anonymity
 and pseudonymity.  This is accomplished by using chains of anonymizing
 remailers, each having a large random latency for mixing with other
 traffic.

The subject is "[w]hy the poor uptake of encrypted email?".

Alec's answer shows that encrypted email when at rest is not easy to
use.

Providing a suitable e-mail security solution for the masses strikes me
as more important than providing anonymity to the few people who want or
need it.  Not that you can't have both, unless you want everyone to use
PGP or S/MIME as a way to hide anonymized traffic from non-anonymized
traffic.

Nico
-- 



Re: Quantum direct communication: secrecy without key distribution

2008-12-05 Thread Nicolas Williams
[I'm guessing that nobody here wants yet another "quantum crypto is
snake oil", "no it's not", "yes it is, though it has a bright future",
"no it's not", ... thread.]

On Fri, Dec 05, 2008 at 02:16:09PM +0100, Eugen Leitl wrote:
In the last couple of years, we've seen a number of quantum key
distribution systems being set up that boast close-to-perfect security
([4]although they're not as secure as the marketing might imply).
 
These systems rely on two-part security. The first is the quantum part
which reveals whether a message has been intercepted or not. Obviously
this is no use when it comes to sending secret message because it can
only uncover eavesdroppers after the fact.

That's not the most serious, obvious flaw in quantum cryptography.

The most obvious flaw is that when we're talking fiber optics the
eavesdropper might as well be a man in the middle, and so...  well, see
the list archive.

Nico
-- 



Re: Bitcoin P2P e-cash paper

2008-11-18 Thread Nicolas Williams
On Fri, Nov 14, 2008 at 11:04:21PM -0800, Ray Dillinger wrote:
 On Sat, 2008-11-15 at 12:43 +0800, Satoshi Nakamoto wrote:
   If someone double spends, then the transaction record 
   can be unblinded revealing the identity of the cheater. 
  
  Identities are not used, and there's no reliance on recourse.  It's all 
  prevention.
 
 Okay, that's surprising.  If you're not using buyer/seller 
 identities, then you are not checking that a spend is being made 
 by someone who actually is the owner of (on record as having 
 recieved) the coin being spent.  

How do identities help?  It's supposed to be anonymous cash, right?  And
say you identify a double spender after the fact, then what?  Perhaps
you're looking at a disposable ID.  Or perhaps you can't chase them
down.

Double spend detection needs to be real-time or near real-time.
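A minimal sketch of issuer-side, (near) real-time double-spend
detection; the Issuer class and coin serials are hypothetical:

```python
# The issuer keeps a record of spent coin serials and atomically
# checks-and-records each deposit, so the second spend of a coin is
# rejected at deposit time -- no after-the-fact identity chase needed.
class Issuer:
    def __init__(self):
        self.spent = set()

    def deposit(self, serial):
        if serial in self.spent:
            return False          # double spend: reject immediately
        self.spent.add(serial)
        return True

issuer = Issuer()
assert issuer.deposit("coin-123") is True
assert issuer.deposit("coin-123") is False  # caught in real time
```

The trade-off is exactly the one named above: the check requires
on-line infrastructure, and the issuer sees every deposit.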

Nico
-- 



Re: once more, with feeling.

2008-09-23 Thread Nicolas Williams
On Mon, Sep 22, 2008 at 08:59:25PM -1000, James A. Donald wrote:
 The major obstacle is that the government would want a strong binding 
 between sim cards and true names, which is no more practical than a 
 strong binding between physical keys and true names.

I've a hard time believing that this is the major obstacle.  We all use
credit cards all the time -- apparently that's as strong a binding
between [credit] cards and true names as the government needs.  (If
not, then throw in cameras at many intersections and along freeways,
add in license plate OCR, and you can tie things together easily
enough.  Wasn't that a worry in another recent thread here?)

More likely there are other problems.

First, there's a business model problem.  Everyone wants in: the cell
phone manufacturer, the software developer, the network operators, and
the banks.  With everyone wanting a cut of every transaction done
through cell phones the result would likely be too expensive to compete
with credit cards, even after accounting for the cost of credit card
fraud.  Credit card fraud and online security, in any case, are pretty
low on the list of banking troubles these past few weeks, and not
without reason!

Second, there are going to be standards issues.

Third, the NFC technology has to be commoditized.

Fourth, there's the cost of doing an initial rollout of the POS NFC
terminals and of building momentum for the product.  Once momentum is
there you're done.  And there's risk too -- if you fail you lose your
investment.

...

 Trouble is, what happens if the user's email account is stolen?

Trouble is: what happens if the user's cell phone is stolen?

Nico
-- 



Re: once more, with feeling.

2008-09-10 Thread Nicolas Williams
On Wed, Sep 10, 2008 at 01:29:32PM -0400, William Allen Simpson wrote:
 I agree.   I'm sure this is a world-wide problem, and head-in-the-sand
 cyber-libertarianism has long prevented better solutions.  The market
 doesn't work for this, as there is a competitive *disadvantage* to
 providing improved security, and it's hard to quantify safety.

Or maybe there's a civil liability law issue that causes the market to
fail in this instance.

Nico
-- 



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:
 The kerberos style of having credentials expire very quickly is one
 (somewhat less imperfect) way to deal with such things, but it is far
 from perfect and it could not be done for the ad-hoc certificate
 system https: depends on -- the infrastructure for refreshing all the
 world's certs every eight hours doesn't exist, and if it did imagine
 the chaos if it failed for a major CA one fine morning.

The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.

I understand most current browsers support OCSP.

 One also worries about what will happen in the UI when a certificate
 has been revoked. If it just says this cert has been revoked,
 continue anyway? the wrong thing will almost always happen.

No doubt.



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 11:20:15AM -0700, Eric Rescorla wrote:
 At Fri, 08 Aug 2008 10:43:53 -0700,
 Dan Kaminsky wrote:
  Funnily enough I was just working on this -- and found that we'd end up 
  adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
  curious about the feasibility of a large bloom filter that fails back to 
  online checking though.  This has side effects but perhaps they can be 
  made statistically very unlikely, without blowing out the size of a browser.
 
 Why do you say a couple of megabytes? 99% of the value would be
 1024-bit RSA keys. There are ~32,000 such keys. If you devote an
 80-bit hash to each one (which is easily large enough to give you a
 vanishingly small false positive probability; you could probably get
 away with 64 bits), that's 320KB.  Given that the smallest Firefox
 [...]

You could store {hash, seed} and check matches for false positives
by generating a key with the corresponding seed and then checking for an
exact match -- slow, but rare.  This way you could choose your false
positive rate / table size comfort zone and vary the size of the hash
accordingly.
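
The {hash, seed} scheme can be sketched as follows (a toy model: the
table contents, the truncated-hash width, and the key-regeneration
function are hypothetical stand-ins, not the real Debian weak-key data):

```python
import hashlib

HASH_BITS = 64  # truncated-hash width; tune for your false-positive comfort zone

def truncated_hash(pubkey: bytes) -> int:
    digest = hashlib.sha256(pubkey).digest()
    return int.from_bytes(digest[:HASH_BITS // 8], "big")

def generate_weak_key(seed: int) -> bytes:
    # Stand-in for regenerating a weak key from its PRNG seed; the real
    # version would rerun the broken OpenSSL PRNG with that seed.
    return b"weak-key-%d" % seed

def build_table(seeds):
    # {truncated hash -> seed}: one small entry per weak key
    return {truncated_hash(generate_weak_key(s)): s for s in seeds}

def is_weak(pubkey: bytes, table: dict) -> bool:
    seed = table.get(truncated_hash(pubkey))
    if seed is None:
        return False  # definitely not in the weak set
    # Possible false positive: confirm by regenerating the candidate key
    # and comparing exactly -- slow, but rare.
    return generate_weak_key(seed) == pubkey
```

Shrinking HASH_BITS shrinks the table at the cost of more false
positives, each of which is resolved by the (rare) exact check.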

Nico
-- 



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 12:35:43PM -0700, Paul Hoffman wrote:
 At 1:47 PM -0500 8/8/08, Nicolas Williams wrote:
 On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:
  The kerberos style of having credentials expire very quickly is one
  (somewhat less imperfect) way to deal with such things, but it is far
  from perfect and it could not be done for the ad-hoc certificate
  system https: depends on -- the infrastructure for refreshing all the
  world's certs every eight hours doesn't exist, and if it did imagine
  the chaos if it failed for a major CA one fine morning.
 
 The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.
 
 I understand most current browsers support OCSP.
 
 ...and only a tiny number of CAs do so.

Not that long ago nothing supported OCSP.  If all that's left (ha) is
the CAs then we're in good shape.  (OCSP services can be added without
modifying a CA -- just issue the OCSP Responders their certs and let
them use CRLs as their source of revocation information.)



Re: The PKC-only application security model ...

2008-07-24 Thread Nicolas Williams
On Wed, Jul 23, 2008 at 05:32:02PM -0500, Thierry Moreau wrote:
 The document I published on my web site today is focused on fielding 
 certificateless public operations with the TLS protocol which does not 
 support client public keys without certificates - hence the meaningless 
 security certificate. Nothing fancy in this technique, just a small 
 contribution with the hope to facilitate the use of client-side PKC.

Advice on how to generate self-signed certs for this purpose would be
good for an FYI, or even a BCP.  I don't think we need extensions to any
protocols that support PKI to support bare PK (though some protocols
have both, e.g., IKE).

Nico
-- 



Re: how bad is IPETEE?

2008-07-11 Thread Nicolas Williams
On Fri, Jul 11, 2008 at 05:08:39PM +0100, Dave Korn wrote:
   It does sound a lot like SSL/TLS without certs, ie. SSL/TLS weakened to
 make it vulnerable to MitM.  Then again, if no Joe Punter ever knows the
 difference between a real and spoofed cert, we're pretty much in the same
 situation anyway.

Note that this is not all that bad because many apps can do
authentication at the application layer, and if you add channel binding
then you can leave session crypto to IPsec while avoiding MITMs (they
get flushed by channel binding).

This is the premise of BTNS + connection latching.  W/o channel binding
it's better than nothing, though not much.  W/ channel binding it should
be much easier to deploy (beyond software updates) than plain IPsec with
similar security guarantees.
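
The MITM-flushing property can be illustrated with a toy model (the
functions are hypothetical; real channel bindings would be derived from
the IPsec SAs or TLS session, per RFC 5056):

```python
import hashlib
import hmac

def channel_bindings(transport_session: bytes) -> bytes:
    # Stand-in for extracting a unique value from the transport-layer
    # session (e.g., derived from the IPsec SA or TLS finished messages).
    return hashlib.sha256(b"cb:" + transport_session).digest()

def auth_token(app_key: bytes, cb: bytes) -> bytes:
    # Application-layer authentication covers the channel bindings, so it
    # only verifies when both ends see the *same* lower-layer channel.
    return hmac.new(app_key, b"authenticate:" + cb, hashlib.sha256).digest()

app_key = b"shared application-layer credential"

# Direct connection: one transport session, bindings match, auth succeeds.
cb = channel_bindings(b"end-to-end session")
assert hmac.compare_digest(auth_token(app_key, cb), auth_token(app_key, cb))

# MITM: the attacker terminates two *different* transport sessions, so the
# endpoints compute different bindings and authentication fails.
cb_client = channel_bindings(b"client-to-mitm session")
cb_server = channel_bindings(b"mitm-to-server session")
assert not hmac.compare_digest(auth_token(app_key, cb_client),
                               auth_token(app_key, cb_server))
```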

Nico
-- 



Re: how bad is IPETEE?

2008-07-10 Thread Nicolas Williams
On Thu, Jul 10, 2008 at 06:10:27PM +0200, Eugen Leitl wrote:
 In case somebody missed it, 
 
 http://www.tfr.org/wiki/index.php?title=Technical_Proposal_(IPETEE)

I did miss it.  Thanks for the link.  I don't think in-band key exchange
is desirable here, but, you never know what will triumph in the
marketplace.

 I'm not sure what the status of http://postel.org/anonsec/
 is, the mailing list traffic dried up a while back.

Connection latching, which is the BTNS WG equivalent of 'IPETEE', but
much simpler, is in the IESG's hands now.

Nico
-- 



Re: The wisdom of the ill informed

2008-06-30 Thread Nicolas Williams
On Mon, Jun 30, 2008 at 07:16:17AM -0700, Allen wrote:
 Given this, the real question is, /Quis custodiet ipsos custodes?/ 

Putting aside the fact that cryptographers aren't custodians of
anything, it's all about social institutions.

There are well-attended conferences, papers published online and in many
journals, etcetera.  So it's not so difficult for people who don't know
anything about security and crypto to eventually figure out who does, in
the process also learning who else knows who the experts are.

For example, in the IETF there's an institutional structure that makes
finding out who to ask relatively simple.  Large corporations tend to
have some experts in house, even if they are only expert in finding the
real experts.

We (society) have new experts joining the field, with very low barriers
to entry (financial and political barriers to entry are minimal -- it's
all about brain power), and diversity amongst the existing experts.

There's no major personal gain to be had, besides fame, and too much
diversity and openness for anyone to have a prayer of manipulating the
field undetected for too long.

When it comes to expertise in crypto, Quis custodiet ipsos custodes
seems like a relatively simple problem.  I'm sure it's much, much more
difficult a problem for, say, police departments, financial
organizations, intelligence organizations, etc...

Nico
-- 



Re: The wisdom of the ill informed

2008-06-30 Thread Nicolas Williams
On Mon, Jun 30, 2008 at 11:47:54AM -0700, Allen wrote:
 Nicolas Williams wrote:
 On Mon, Jun 30, 2008 at 07:16:17AM -0700, Allen wrote:
 Given this, the real question is, /Quis custodiet ipsos custodes?/ 
 
 Putting aside the fact that cryptographers aren't custodians of
 anything, it's all about social institutions.
 
 Well, I wouldn't say they aren't custodians. Perhaps not in the 
 sense that the word is commonly used, but most certainly in the 
 sense custodians of the wisdom used to make the choices. This is 
 exemplified by Bruce Schneier, an acknowledged expert,  changing 
 his mind about the way to do security from encrypt everything to 
 monitor everything. Yes, I have simplified his stance, but just to 
 make the point that even experts learn and change over time.

What does that have to do with anything?  Expert != knowledge cast in
stone.

 There are well-attended conferences, papers published online and in many
 journals, etcetera.  So it's not so difficult for people who don't know
 anything about security and crypto to eventually figure out who does, in
 the process also learning who else knows who the experts are.
 
 Actually I think it is just about as difficult to tell who is a 
 trustworthy expert in the field of cryptography as it is in any 
 field of science or medicine. Just look at the junk science and 
 medical studies. One retrospective study of 90+ clinical trials 
 found that over 600 potentially important reaction to the drugs 
 occurred but only 39 were reported in the papers. I suspect if we 
 did the same sort of retrospective study for cryptography we would 
 find some similar issues, just, perhaps, not as large because there 
 is not as much money to be made with junk cryptography as junk 
 pharmaceuticals.

The above does not really refute what I wrote.  It takes effort to
figure out who's an expert.  But I believe that the situation w.r.t.
crypto is similar to that in science (cold fusion frauds were identified
rather quickly, were they not?) and better than in medicine (precisely
because there is not much commercial incentive to fraud here; there is
incentive for intelligence organizations to interfere, I suppose, but
here the risk of getting caught is high and the potential cost of
getting caught high as well).

 I'm curious, how does software get sold for so long that is clearly 
 weak or broken? Detected, yes, but still sold like Windows LANMAN 
 backward compatibility.

I thought we were talking about cryptographers, not marketing
departments, market dynamics, ...  If you want to include the latter in
custodes then there is a clear custody hierarchy: the community of
experts in the field is above individual implementors.  Thus we have
reports of snake oil on this list, on various blogs, etc...

So we're back to quis custodiet ipsos custodes?  Excluding marketing
here is the right thing to do (see above).  Which brings us back to my
answer.

 When it comes to expertise in crypto, Quis custodiet ipsos custodes
 seems like a relatively simple problem.  I'm sure it's much, much more
 difficult a problem for, say, police departments, financial
 organizations, intelligence organizations, etc...
 
 Well, Nico, this is where I diverge from your view. It is the 
 police departments, financial organizations, intelligence 
 organizations, etc... who deploy the cryptography. Why should they 

In my experience market realities have much more to do with what gets
deployed than the current state of the art does; never mind who the
experts are.  We'd love to deploy technology X, but in our
heterogeneous network only one quarter of the vendors support X, and
only if we upgrade a large number of systems, which requires QA testing,
which... -- surely you've run into that sort of situation, amongst
others.  Legacy, broken code dwarfs snake oil in terms of deployment;
legacy != snake oil -- we're allowed to learn, as you yourself point
out.

Nico
-- 



Re: User interface, security, and simplicity

2008-05-06 Thread Nicolas Williams
On Tue, May 06, 2008 at 03:40:46PM +, Steven M. Bellovin wrote:
 Experiment part two: implement remote login (or remote IMAP, or remote
 Web with per-user privileges, etc.) under similar conditions.  Recall
 that being able to do this was a goal of the IPsec working group.
 
 I think that part one is doable, though possibly the existing APIs are
 incomplete.  I don't think that part two is doable, and certainly not
 with high assurance.  In particular, with TLS the session key can be
 negotiated between two user contexts; with IPsec/IKE, it's negotiated
 between a user and a system.  (Yes, I'm oversimplifying here.)

Connection latching and connection-oriented IPsec APIs can address
this problem.

Solaris, and at least one other IPsec implementation (OpenSwan?  I
forget) makes sure that all packets for any one TCP connection (or UDP
connection) are protected (or bypassed) the same way during their
lifetime.  The same way - by similar SAs, that is, SAs with the same
algorithms, same peers, and various other parameters.

A WGLC is about to start in the IETF BTNS WG on an I-D that describes
this.
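
The latching rule can be pictured with a toy packet check (names are
invented for illustration; real implementations enforce this on SAs
inside the kernel):

```python
# Latch the protection parameters seen for a flow's first packet; packets
# later protected by a dissimilar SA are dropped.
latches = {}  # flow 5-tuple -> (cipher, peer) latched at connection start

def accept_packet(flow, sa_cipher: str, sa_peer: str) -> bool:
    latch = latches.setdefault(flow, (sa_cipher, sa_peer))
    return latch == (sa_cipher, sa_peer)  # mismatched SA parameters -> drop
```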

Nico
-- 



Re: how to read information from RFID equipped credit cards

2008-04-18 Thread Nicolas Williams
On Tue, Apr 01, 2008 at 12:47:45AM +1300, Peter Gutmann wrote:
 Ben Laurie [EMAIL PROTECTED] writes:
 
 And so we end up at the position that we have ended up at so many times
 before: the GTCYM has to have a decent processor, a keyboard and a screen,
 and must be portable and secure.
 
 One day we'll stop concluding this and actually do something about it.
 
 Actually there are already companies doing something like this, but they've
 run into a problem that no-one has ever considered so far: The GTCYM needs a
 (relatively) high-bandwidth connection to a remote server, and there's no easy
 way to do this.

Cell phones have that.

The bigger problem is pairing with the local POS (or whatever), which is
where NFC comes in -- the obvious thing to do here is to make this
pairing not-really-wireless (e.g., the cell phone could scan a barcode
from the POS, or the POS could scan a barcode displayed by the cell
phone, or both, or any number of variants of this).
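
One way to picture the not-really-wireless pairing (purely illustrative;
real NFC/POS pairing protocols differ):

```python
import hashlib
import secrets

def proof(pairing_secret: bytes, role: bytes) -> bytes:
    # Both sides prove, over the radio channel, knowledge of a secret
    # that was exchanged out of band (the scanned barcode).
    return hashlib.sha256(role + pairing_secret).digest()

pairing_secret = secrets.token_bytes(16)  # what the barcode would encode

phone_proof = proof(pairing_secret, b"phone")
pos_expected = proof(pairing_secret, b"phone")
assert phone_proof == pos_expected  # pairing succeeds

# A radio-only attacker never saw the barcode and can only guess:
mitm_guess = proof(secrets.token_bytes(16), b"phone")
```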

 (Hint: You can't use anything involving USB because many corporates lock down
 USB ports to prevent data leaking onto other corporates' networks, or
 conversely to prevent other corporates' data leaking onto their networks. Same
 for Ethernet, Firewire, ...).

Right, it's got to be wireless :)

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-06 Thread Nicolas Williams
On Sun, Feb 03, 2008 at 09:24:48PM +1000, James A. Donald wrote:
 Nicolas Williams wrote:
 What, specifically, are you proposing?
 
 I am still writing it up.
 
  Running the web over UDP?
 
 In a sense.
 
 That should have been done from the beginning, even before security 
 became a problem.  TCP is a poor fit to a transactional protocol, as the 
 gyrations with Keep-alive and its successors illustrate.

In the beginning most pages were simple enough that to speak of a
transactional protocol is almost an exaggeration.  Web technologies
grew organically.  Solutions to the various resulting problems will, I
bet, also grow organically.

A complete revamping is probably not in the cards.  But if one should be
then it should not surprise you that I'm all in favor of piercing
abstraction layers.  User authentication should happen at the
application layer, and session crypto should happen at the transport
layer, with everything cryptographically bound up.  In any case we
should re-use what we know works (e.g., ESP/AH for transport session
crypto, IKEv2/TLS/DTLS for key exchange, ...).

 In rough summary outline, what I propose is to introduce a distinction 
 between connections and streams, that a single long lasting connection 
 contains many transient streams.  This is equivalent to TCP in the case 
 that a single connection always contains exactly two streams, one in 
 each direction, and the two streams are created when the connection is 
 created and shut down when the connection is shut down, but the main 
 objective is to support usages that are not equivalent to TCP. This is 
 pretty much the same thing as T/TCP, except that a connection can have 
 a large shared secret associated with it to encrypt the streams.  For an 
 unencrypted connection, it can be spoof flooded the same way as T/TCP 
 can be spoof flooded, 

Sounds a bit like SCTP, with crypto thrown in.

   but the main design objective is to make 
 encryption efficient enough that one always encrypts everything.

I thought it was the latency caused by unnecessary round-trips and
expensive key exchange crypto that motivated your proposal.  The cost of
session crypto is probably not as noticeable as that of the latency of
key exchange and authentication.

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-06 Thread Nicolas Williams
On Tue, Feb 05, 2008 at 08:17:32AM +1000, James A. Donald wrote:
 Nicolas Williams wrote:
  Sounds a bit like SCTP, with crypto thrown in.
 
 SCTP is what we should have done http over, though of
 course SCTP did not exist back then.  Perhaps, like
 quite a few other standards, it still does not quite
 exist.

Proposing something new won't help make that available sooner than SCTP
if that something new, like SCTP, must be implemented in kernel-land.

  I thought it was the latency cause by unnecessary
  round-trips and expensive key exchange crypto that
  motivated your proposal.  The cost of session crypto
  is probably not as noticeable as that of the latency
  of key exchange and authentication.
 
 The big problem is that between the time one logs on to
 one's bank, and the time one logs off, one is apt to
 have done lots and lots of cryptographic key exchanges.
 One key exchange per customer session is a really small
 cost, but we have a storm of them.

This is what session resumption is all about, and now that we have a way
to do it without server-side state (RFC4507) there should be no more
complaints.

If the latency of multiple key exchanges is the issue then we should
push for deployment of RFC4507 before we go push for a brand new
transport protocol.
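
The stateless-resumption idea can be sketched like this (a toy blob, not
the actual RFC 4507 ticket format; real tickets are encrypted as well as
integrity-protected):

```python
import base64
import hashlib
import hmac
import json

TICKET_KEY = b"ticket-protection key known only to the server"

def issue_ticket(session_state: dict) -> bytes:
    # Offload the session state to the client inside a MACed blob.
    blob = json.dumps(session_state, sort_keys=True).encode()
    mac = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    return base64.b64encode(blob + mac)

def resume(ticket: bytes):
    raw = base64.b64decode(ticket)
    blob, mac = raw[:-32], raw[-32:]
    expected = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return None  # bad ticket: fall back to a full handshake
    return json.loads(blob)  # resumed without any server-side database
```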

 Whenever the web page shows what is particular to the
 individual rather than universal, it uses a session
 cookie, visible to server side web page code.
 Encryption, the bundle of shared secrets that enable
 encrypted communications, should be visible at that
 level, should be a session cookie characteristic rather
 than a low level transport characteristic, should have
 the durability and scope of a session cookie, instead of
 the durability and scope of a transaction.

If I understand what you mean then the ticket in RFC4507 is just that.

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-03 Thread Nicolas Williams
On Thu, Jan 31, 2008 at 11:12:45PM -0500, Victor Duchovni wrote:
 On Fri, Feb 01, 2008 at 01:15:09PM +1300, Peter Gutmann wrote:
  If anyone's interested, I did an analysis of this sort of thing in an
  unpublished draft Performance Characteristics of Application-level Security
  Protocols, http://www.cs.auckland.ac.nz/~pgut001/pubs/app_sec.pdf.  It
  compares (among other things) the cost in RTT of several variations of SSL 
  and
  SSH.  It's not the TCP RTTs that hurt, it's all the handshaking that takes
  place during the crypto connect.  SSH is particularly bad in this regard.
 
 Thanks, an excellent reference! Section 6.2 is most enlightening, we were
 already considering adopting HPN fixes in the internal OpenSSH deployment,
 this provides solid material to motivate the work...

To be fair, the handbrake in SFTP isn't inherent to the protocol -- the
clients and servers
should be using async I/O and support interleaving the transfers of many
files concurrently, which should allow the peers to exchange data as
fast as it can be read from disk.

The same is true of NFS, and keep in mind that SFTP is more of a remote
filesystem protocol than a file transfer protocol.

But nobody writes archivers that work asynchronously (or which are
threaded, since, e.g., close(2) has no async equivalent, and is required
to be synchronous in the NFS case).  And nobody writes SFTP clients and
server that work asynchronously.  But, we could, and we should.
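
The asynchronous-interleaving point can be illustrated with a toy
transfer loop (transfer_one is a stand-in for an SFTP read/write
exchange, not any real client API):

```python
import asyncio

async def transfer_one(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for network and disk latency
    return name

async def transfer_all(files):
    # All transfers are in flight at once, so total time is roughly
    # max(delay) rather than sum(delay) as with one-at-a-time transfers.
    return await asyncio.gather(*(transfer_one(n, d) for n, d in files))

results = asyncio.run(transfer_all([("a", 0.03), ("b", 0.02), ("c", 0.01)]))
```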

And the handbrake in the SSHv2 connection protocol has its rationale as
well (namely to allow interactive sessions to be responsive).  As
described in Peter's paper, it can be turned off, effectively.  It's
most useful when mixing interactive sessions and X11 display forwarding
(and port forwarding which don't involve bulk data transfers).  It's
most useless when doing bulk transfers.  So use separate connections for
bulk transfers.

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-01 Thread Nicolas Williams
On Wed, Jan 30, 2008 at 02:47:46PM -0500, Victor Duchovni wrote:
 If someone has a faster than 3-way handshake connection establishment
 protocol that SSL could leverage instead of TCP, please explain the
 design.

I don't have one that exists today and is practical.  But we can
certainly imagine possible ways to improve this situation: move parts of
TLS into TCP and/or IPsec.  There are proposals that come close enough
to this (see the last IETF SAAG meeting's proceedings, see the IETF BTNS
WG) that it's not too farfetched, but for web stuff I just don't think
they're remotely likely.

Prior to the advent of AJAX-like web design patterns the most noticeable
latency in web apps was in the server (for dynamic content) and the
client (re-rendering the whole page on every click).  Applying GUI
lessons to the web (asynchrony!  callbacks/closures!) fixed that.

TLS was not to blame.

TLS probably still isn't to blame for whatever latency users might be
annoyed by in web apps.

It's *much* easier to look for improvements in the app layer first given
that web app updates are much easier to deploy than TLS (which in turn
is much easier to deploy than changes to TCP or IPsec).

Nico
-- 



Re: Gutmann Soundwave Therapy

2008-02-01 Thread Nicolas Williams
On Fri, Feb 01, 2008 at 09:24:10AM -0500, Perry E. Metzger wrote:
  Does tinc do something that IPsec cannot?
 
 I use a VPN system other than IPSec on a regular basis. The reason is
 simple: it is easy to configure for my application and my OS native
 IPsec tools are very difficult to configure.
 
 There is a lesson in this, I think.

I agree wholeheartedly.  I'm trying to fix this too.  But for web stuff,
IPsec won't have a chance for a long time, maybe never.



Re: Dutch Transport Card Broken

2008-02-01 Thread Nicolas Williams
On Fri, Feb 01, 2008 at 07:58:16PM +, Steven M. Bellovin wrote:
 On Fri, 01 Feb 2008 13:29:52 +1300
 [EMAIL PROTECTED] (Peter Gutmann) wrote:
  (Anyone have any clout with Firefox or MS?  Without significant
  browser support it's hard to get any traction, but the browser
  vendors are too busy chasing phantoms like EV certs).
  
 The big issue is prompting the user for a password in a way that no one
 will confuse with a web site doing so.  Given all the effort that's
 been put into making Javascript more and more powerful, and given
 things like picture-in-picture attacks, I'm not optimistic.   It might
 have been the right thing, once upon a time, but the horse may be too
 far out of the barn by now to make it worthwhile closing the barn door.

And on top of that web site designers don't want browser dialogs for
HTTP/TLS authentication.



Re: two-person login?

2008-01-29 Thread Nicolas Williams
On Tue, Jan 29, 2008 at 06:34:29PM +, The Fungi wrote:
 On Mon, Jan 28, 2008 at 03:56:11PM -0700, John Denker wrote:
  So now I throw it open for discussion.  Is there any significant
  value in two-person login?  That is, can you identify any threat 
  that is alleviated by two-person login, that is not more wisely 
  alleviated in some other way?
 [...]
 
 I don't think it's security theater at all, as long as established
 procedure backs up this implementation in a sane way. For example,
 in my professional life, we use this technique for commiting changes
...

I think you missed John's point, which is that two-person *login* says
*nothing* about what happens once logged in -- logging in enables
arbitrary subsequent transactions that may not require two people to
acquiesce.

What if one of the persons leaves the other alone to do whatever they
wish with the system?  Or are the two persons chained to each other?
(And even then, there's no guarantee that they are both conscious at the
same time, that no third person shows up and knocks them out *after*
they've logged in, ...)

 Technology can't effectively *force* procedure (ingenious people
 will always find a way around the better mousetrap), but it can help
 remind them how they are expected to interact with systems.

When you force two people to participate on a *per-transaction* basis
then you probably get both of them to pay attention, though such schemes
might not scale to thousands, or even hundreds of transactions per-team,
per-day.
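
Per-transaction dual control, as opposed to two-person login, might be
sketched like this (hypothetical keys and API):

```python
import hashlib
import hmac

KEYS = {"alice": b"alice-secret", "bob": b"bob-secret"}

def approve(user: str, transaction: bytes):
    # Each approver independently MACs the specific transaction.
    return (user, hmac.new(KEYS[user], transaction, hashlib.sha256).digest())

def authorize(transaction: bytes, approvals) -> bool:
    valid = {user for (user, mac) in approvals
             if user in KEYS and hmac.compare_digest(
                 mac,
                 hmac.new(KEYS[user], transaction, hashlib.sha256).digest())}
    return len(valid) >= 2  # two *distinct* valid approvers required
```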

Nico
-- 



Re: refactoring crypto handshakes (SSL in 3 easy steps)

2007-11-13 Thread Nicolas Williams
On Thu, Nov 08, 2007 at 01:49:30PM -0600, [EMAIL PROTECTED] wrote:
 PREVIOUS WORK:
 
 Three messages is the proven minimum for mutual authentication.  Last
 two messages all depend on the previous message, so minimum handshake
 time is 1.5 RTTs.

Kerberos V manages in one round-trip.  And it could do one round-trip
without a replay cache if it used ephemeral-ephemeral DH to exchange
sub-session keys.  (OTOH, high performance, secure replay caches are
difficult to implement, ultimately being limited by the number of writes
to persistent storage that the system can manage.)

I think you might want to say that three messages is the minimum for
mutual authentication with neither a replay cache nor a trusted third
party negotiating a key for use during the authentication exchanges.
Or something along those lines.

Of course, you might claim that the TGS exchanges should be added to the
number of messages needed for AP exchanges, but if you re-authenticate
often then you amortize the cost of the TGS exchanges over many AP
exchanges.

I think first and foremost we need authentication protocols to be
secure, while at the same time being algorithm agile.  I think you can
generally manage that in 1.5 round-trips optimistically, more when
optimistic negotiation fails.  And you can do better if you have
something like a KDC that can do negotiation out of band.

Nico
-- 



Re: improving ssh

2007-07-19 Thread Nicolas Williams
Doesn't this belong on the old SSHv2 WG's mailing list?

On Sat, Jul 14, 2007 at 11:43:53AM -0700, Ed Gerck wrote:
 SSH (OpenSSH) is routinely used in secure access for remote server
 maintenance. However, as I see it, SSH has a number of security issues
 that have not been addressed (as far I know), which create unnecessary
 vulnerabilities.

The SSHv2 protocol or OpenSSH (an implementation of SSHv1 and SSHv2)?

 Some issues could be minimized by turning off password authentication,
 which is not practical in many cases. Other issues can be addressed by
 additional means, for example:
 
 1. firewall port-knocking to block scanning and attacks

Do you think that implementations of the protocol should implement this?
(From what you say below I think your answer is yes.  Which brings up
the questions why? and how?)

 2. firewall logging and IP disabling for repeated attacks (prevent DoS,
 block dictionary attacks)

SSH servers could integrate features like this without needing firewall
APIs.
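
For example, an in-server throttle might look roughly like this (purely
hypothetical; not how any particular SSH server implements it):

```python
MAX_FAILURES = 5
COOLDOWN = 600.0  # seconds to ignore a source after too many failures

class AuthThrottle:
    def __init__(self):
        self.failures = {}  # source IP -> (failure count, time of last failure)

    def allowed(self, ip: str, now: float) -> bool:
        count, last = self.failures.get(ip, (0, 0.0))
        return not (count >= MAX_FAILURES and now - last < COOLDOWN)

    def record_failure(self, ip: str, now: float) -> None:
        count, last = self.failures.get(ip, (0, 0.0))
        if now - last >= COOLDOWN:
            count = 0  # the window expired; start counting afresh
        self.failures[ip] = (count + 1, now)
```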

 3. pre- and post-filtering to prevent SSH from advertising itself and
 server OS

Unfortunately SSH implementations tend to depend on accurate client and
server software version strings in order to work around bugs.

Anyways, security by obscurity doesn't help.

 4. block empty authentication requests

What are those?

Are they requests with an empty username?  The only SSHv2 userauth
methods that support that are the GSS ones, and that's a good feature
(it allows the server to derive the username from the client's principal
name).

 5. block sending host key fingerprint for invalid or no username

Currently the only way to do this is to configure SSH servers to support
only SSHv2 and only the gss-* key exchange algorithms (see RFC4462,
section 2).  There exist implementations that support this.

To get rid of the "host authenticates itself first" attitude for all
non-GSS-based SSHv2 userauth methods will require radical changes to the
protocol and deployment transitions.

 6. drop SSH reply (send no response) for invalid or no username

The server has to answer with something.  Silence is still an answer.
So is closing the TCP connection.

 I believe it would be better to solve them in SSH itself, as one would
 not have to change the environment in order to further secure SSH.
 Changing firewall rules, for example, is not always portable and may
 have unintended consequences.

Coding to firewall APIs is even less portable (heck, not all OSes have
firewall APIs).

Nico
-- 



Re: Quantum Cryptography

2007-06-27 Thread Nicolas Williams
On Tue, Jun 26, 2007 at 02:03:29PM -0700, Jon Callas wrote:
 On Jun 26, 2007, at 10:10 AM, Nicolas Williams wrote:
 This too is a *fundamental* difference between QKD and classical
 cryptography.
 
 What does this classical word mean? Is it the Quantum way to say  
 real? I know we're in violent agreement, but why are we letting  
 them play language games?

I don't mind using classical here.  I don't think Newtonian physics
(classical) is bad -- it works great at every day human scales.

 IMO, QKD's ability to discover passive eavesdroppers is not even
 interesting (except from an intellectual p.o.v.) given: its
 inability to detect MITMs, its inability to operate end-to-end across
 across middle boxes, while classical crypto provides protection
 against  eavesdroppers *and* MITMs both *and* supports end-to-end
 operation across middle boxes.
 
 Moreover, the quantum way of discovering passive eavesdroppers is  
 really just a really delicious sugar coating on the classical term  
 denial of service. I'm not being DoSed, I'm detecting a passive  
 eavesdropper!

Heh!  Indeed: with classical (or non-quantum, or standard, or...) crypto
eavesdroppers are passive attackers and passive attackers cannot mount
DoS attacks (oh, I suppose that wiretapping can cause some slightly
noticeable interference in some cases, but usually that's no DoS), but
in QKD passive attackers become active attackers.

But it gets worse!  To eavesdrop on a QKD link requires much the same
effort (splice the fiber) as to be an MITM on a QKD link, so why would
any attacker choose to eavesdrop and be detected instead of being an
MITM, go undetected and get the cleartext they're after?  Right, they
wouldn't.  Attackers aren't stupid, and an attacker that can splice your
fibers can probably afford the QKD HW they need to mount an MITM attack.

So, really, you need authentication.  And, really, you need end-to-end,
not hop-by-hop authentication and data confidentiality + integrity
protection.

This reminds me of Feynman's presentation of Quantum Electro Dynamics,
which finished with QED.  Has it now been sufficiently established
that QKD is not useful that whenever it rears its head we can point
folks at archives of these threads and not spill anymore ink?

Nico
-- 



Re: Quantum Cryptography

2007-06-26 Thread Nicolas Williams
On Fri, Jun 22, 2007 at 08:21:25PM -0400, Leichter, Jerry wrote:
 BTW, on the quantum subway tokens business:  In more modern terms,
 what this was providing was unlinkable, untraceable e-coins which
 could be spent exactly once, with *no* central database to check
 against and none of this well, we can't stop you from spending it
 more than once, but if we ever notice, we'll learn all kinds of
 nasty things about you.  (The coins were unlinkable and untraceable
 because, in fact, they were *identical*.)  Now, of course, they
 were also physical objects, not just collections of bits.  The same
 is true of the photons used in quantum key exchange.  Otherwise,
 it wouldn't work.  We're inherently dealing with a different model
 here.  Where it ends up is anyone's guess at this point.

This relates back to the inutility of QKD as follows: when physical
exchanges are required you cannot run such exchanges end-to-end over an
Internet -- the middle boxes (routers, etc...) get in the way of the
physical exchange.

This too is a *fundamental* difference between QKD and classical
cryptography.

That difference makes QKD useless in *today's* Internet.

IF we had a quantum authentication facility then we could build
hop-by-hop authentication to build an Internet out of QKD and QA
(quantum authentication).  That's a *big* condition, and the change in
security models is tremendous, and for the worse: since the trust chains
get enormously enlarged.

IMO, QKD's ability to discover passive eavesdroppers is not even
interesting (except from an intellectual p.o.v.) given: its inability to
detect MITMs, its inability to operate end-to-end across middle
boxes, while classical crypto provides protection against eavesdroppers
*and* MITMs both *and* supports end-to-end operation across middle
boxes.

Nico
-- 



Re: Quantum Cryptography

2007-06-26 Thread Nicolas Williams
On Mon, Jun 25, 2007 at 08:23:14PM -0400, Greg Troxel wrote:
 Victor Duchovni [EMAIL PROTECTED] writes:
  Secure in what sense? Did I miss reading about the part of QKD that
  addresses MITM (just as plausible IMHO with fixed circuits as passive
  eavesdropping)?
 
 It would be good to read the QKD literature before claiming that QKD is
 always unauthenticated.

No one claimed that it isn't -- the claim is that there is no quantum
authentication, so QKD has to be paired with classical crypto in order
to defeat MITMs, which renders it worthless (if you must rely on
classical crypto anyway, you might as well use only classical crypto,
since QKD adds no security that the classical crypto you still have to
use doesn't already provide).

The real killer for QKD is that it doesn't work end-to-end across middle
boxes like routers.  And as if that weren't enough there's the
exhorbitant cost of QKD kit.

 The generally accepted approach among the physics crowd is to use
 authentication with a secret key and a universal family of hash
 functions.

Everyone who's commented has agreed that authentication is to be done
classically as there is no quantum authentication yet.

But I can imagine how quantum authentication might be done: generate an
entangled pair at one end of the connection, physically carry half of it
to the other end, and then run a QKD exchange that depends on the two
ends having half of the same entangled particle or photon pair.  I'm no
quantum physicist, so I can't tell how workable that would be
physics-wise, but such a scheme would be analogous to pre-sharing
symmetric keys in classical crypto.  Of course, you'd have to do this
physical pre-sharing step every time you restart the connection after
having run out of pre-shared entangled pair halves; ouch.

  Once QKD is augmented with authentication to address MITM, the Q
  seems entirely irrelevant.
 
 It's not if you care about perfect forward secrecy and believe that DH
 might be broken, and can't cope with or don't trust a Kerberos-like
 scheme.  You can authenticate QKD with a symmetric mechanism, and get
 PFS against an attacker who records all the traffic and breaks DH later.

The end-to-end across middle boxes issue kills this argument about
protection against speculative brokenness of public key cryptography.

All but the smallest networks depend on middle boxes.

Quantum cryptography will be useful when:

 - it can be deployed in an end-to-end fashion across middle boxes

 OR

 - we adopt hop-by-hop methods of building end-to-end authentication

And, of course, quantum kit has got to be affordable, but let's assume
that economies of scale will be achieved once quantum crypto becomes
useful.

Critical breaks of public key crypto will NOT be sufficient to drive
adoption of quantum crypto: we can still build networks out of symmetric
key crypto (and hash/MAC functions) only if need be (with pre-shared
keying, Kerberos, and generally Needham-Schroeder).

 There are two very hard questions for QKD systems:
 
  1) Do you believe the physics?  (Most people who know physics seem to.)
 
  2) Does the equipment in your lab correspond to the idealized models
 with which the proofs for (1) were done.  (Not even close.)

But the only real practical issue, for Internet-scale deployment, is the
end-to-end issue.  Even for intranet-scale deployments, actually.

 I am most curious as to the legal issue that came up regarding QKD.

Which legal issue?

Nico
-- 



Re: ad hoc IPsec or similiar

2007-06-26 Thread Nicolas Williams
On Fri, Jun 22, 2007 at 10:43:16AM -0700, Paul Hoffman wrote:
 Note that that RFC is Informational only. There were a bunch of 
 perceived issues with it, although I think they were more purity 
 disagreements than anything.
 
 FWIW, if you do *not* care about man-in-the-middle attacks (called 
 active attacks in RFC 4322), the solution is much, much simpler than 
 what is given in RFC 4322: everyone on the Internet agrees on a 
 single pre-shared secret and uses it. You lose any authentication 
 from IPsec, but if all you want is an encrypted tunnel that you will 
 authenticate all or parts of later, you don't need RFC 4322.
 
 This was discussed many times, and always rejected as not good 
 enough by the purists. Then the IETF created the BTNS Working Group 
 which is spending huge amounts of time getting close to purity again.

That's pretty funny, actually, although I don't quite agree with the
substance (surprise!)  :)

Seriously, for those who merely want unauthenticated IPsec, MITMs and
all, then yes, agreeing on a globally shared secret would suffice.

For all the other aspects of BTNS (IPsec connection latching [and
channel binding], IPsec APIs, leap-of-faith IPsec) agreeing on a
globally shared secret does not come close to being sufficient.

Nico
-- 



Re: ad hoc IPsec or similiar

2007-06-26 Thread Nicolas Williams
On Tue, Jun 26, 2007 at 01:20:41PM -0700, Paul Hoffman wrote:
 For all the other aspects of BTNS (IPsec connection latching [and
 channel binding], IPsec APIs, leap-of-faith IPsec) agreeing on a
 globally shared secret does not come close to being sufficient.
 
 Fully agree. BTNS will definitely give you more than just one-off 
 encrypted tunnels, once the work is finished. But then, it should 
 probably be called MMTBTNS (Much More Than...).

I strongly dislike the WG's name.  Suffice it to say that it was not my
idea :); it created a lot of controversy at the time, though perhaps
that controversy helped sell the idea (why would you want this silly,
insecure stuff? because it enables this other actually secure stuff).

Nico
-- 



Re: Why self describing data formats:

2007-06-23 Thread Nicolas Williams
On Mon, Jun 11, 2007 at 11:28:37AM -0400, Richard Salz wrote:
 Many protocols use some form of self describing data format, for example
  ASN.1, XML, S expressions, and bencoding.
 
 I'm not sure what you're getting at.  All XML and S expressions really get 
 you is that you know how to skip past something you don't understand. This 
 is also true for many (XER, DER, BER) but not all (PER) encodings for 
 ASN.1.

If only it were so easy.  As we discovered in the IETF KRB WG you can't
expect that just because the protocol uses a TLV encoding (DER) you can
just add items to sequences (structures) or choices (discriminated
unions) willy nilly: code generated by a compiler might choke because
formally the protocol didn't allow extensibility and the compiler did
the Right Thing.  Extensibility of this sort requires that one be
explicit about it in the original spec.

 Are you saying why publish a schema?

I doubt it: you can have schemas without self-describing encodings
(again, PER and XDR are examples of non-self-describing encodings, for
ASN.1 and XDR respectively).  Schemas can be good while self-describing
encodings can be bad...
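A toy comparison (not tied to any real spec) of the same pair of
integers in a DER-like self-describing TLV versus an XDR-like positional
encoding may make the distinction concrete; the byte values here are
illustrative:

```python
import struct

# DER-like TLV for SEQUENCE { INTEGER 1, INTEGER 2 }: the tag and
# length octets are the self-describing redundancy -- the bytes can be
# walked without the schema.
tlv = bytes([0x30, 0x06, 0x02, 0x01, 0x01, 0x02, 0x01, 0x02])

# XDR-like positional encoding of the same pair: two big-endian 32-bit
# ints and nothing else; undecodable without the schema in hand.
xdr = struct.pack(">ii", 1, 2)

# A generic reader can parse the TLV knowing only the encoding rules:
tag, length = tlv[0], tlv[1]
assert tag == 0x30 and length == len(tlv) - 2

# The XDR bytes are only recoverable by already knowing the types:
assert struct.unpack(">ii", xdr) == (1, 2)
```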

Nico
-- 



Re: Why self describing data formats:

2007-06-21 Thread Nicolas Williams
On Fri, Jun 01, 2007 at 08:59:55PM +1000, James A. Donald wrote:
 Many protocols use some form of self describing data format, for example 
 ASN.1, XML, S expressions, and bencoding.

ASN.1 is not an encoding, and not all its encodings are self-describing.

Specifically, PER is a compact encoding such that a PER encoding of some
data cannot be decoded without access to the ASN.1 module(s) that
describes the data types in question.

Yes, it's a nit.

Then there's XDR -- which can be thought of as a subset of ASN.1 and a
four-octet aligned version of PER (XDR being both a syntax and an
encoding).

 Why?

Supposedly it is (or was thought to be) easier to write encoders/
decoders for TLV encodings (BER, DER, CER) and S-expressions, but I
don't believe it (though I certainly believe that it was thought to be
easier): rpcgen is a simple enough program, for example.

TLV encodings tend to be quite redundant, in a way that seems
dangerous: a lazy programmer can write code that fails to validate
parts of an encoding (and many have) and mostly get away with it --
until the inevitable buffer overflow, of course.

Of course, code generators and libraries for self-describing and non-
self-describing encodings alike are not necessarily bug free (have any
been?) but at least they have the virtue that they are automatic tools
that consume a formal language, thus limiting the number of lazy
programmers involved and the number of different ways in which they can
screw up (and they leave their consumers off the hook, to a point).
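As a sketch of how much validation even a trivial hand-written TLV
reader must carry (the function and error strings are made up, and
long-form lengths are deliberately out of scope), each check below is
one a lazy decoder can omit and mostly get away with:

```python
def read_tlv(buf, off=0):
    # Minimal DER-like TLV reader (short-form lengths only).
    if off + 2 > len(buf):
        raise ValueError("truncated header")
    tag, length = buf[off], buf[off + 1]
    if length & 0x80:
        raise ValueError("long-form length not supported in this sketch")
    end = off + 2 + length
    if end > len(buf):
        # The classic omission: trusting the length octet blindly
        # leads straight to an overread on malicious input.
        raise ValueError("length runs past end of buffer")
    return tag, buf[off + 2:end], end

tag, value, nxt = read_tlv(b"\x02\x01\x2a")  # INTEGER 42
assert (tag, value, nxt) == (0x02, b"\x2a", 3)

caught = False
try:
    read_tlv(b"\x02\x05\x2a")  # claims 5 value bytes, only 1 present
except ValueError:
    caught = True
assert caught
```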

 Presumably both ends of the conversation have negotiated what protocol 
 version they are using (and if they have not, you have big problems) and 
 when they receive data, they need to get the data they expect.  If they 
 are looking for list of integer pairs, and they get a integer string 
 pairs, then having them correctly identified as strings is not going to 
 help much.

I agree.  The redundancy of TLV encodings, XML, etcetera, is
unnecessary.  Note though that I'm only talking about serialization
formats for data in protocols; XML, I understand, was intended for
_documents_, and it does seem quite appropriate for that, and so it can
be expected that there should be a place for it in Internet protocols in
transferring pieces of documents.

Nico
-- 



Re: Why self describing data formats:

2007-06-21 Thread Nicolas Williams
 But the main motivation (imho) is that it's trendy. And once anyone
 proposes a heavyweight standard encoding, anyone who opposes it is
 labeled a Luddite.

Maybe.  But there's quite a lot to be said for standards which lead to
widespread availability of tools implementing them, both, open source
and otherwise.

One of the arguments we've heard for why ASN.1 sucks is the lack of
tools, particularly open source ones, for ASN.1 and its encodings.

Nowadays there is one GPL ASN.1 compiler and library: SNACC.  (I'm not
sure whether its output is unencumbered, like bison's, but that's
important to the large number of developers who don't want to be forced
to license under the GPL, and there are no full-featured ASN.1 compilers
and libraries licensed under BSD or BSD-like licenses.)

The situation is markedly different with XML.  Even if you don't like
XML, or its redundancy (as an encoding, but then, see FastInfoSet, a
PER-based encoding of XML), it has that going for it: tool availability.

Nico
-- 



Re: Why self describing data formats:

2007-06-21 Thread Nicolas Williams
On Mon, Jun 11, 2007 at 09:28:02AM -0400, Bowness, Piers wrote:
 But what is does help is allowing a protocol to be expanded and enhanced
 while maintaining backward compatibility for both client and server.

Nonsense.  ASN.1's PER encoding does not prevent extensibility.



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-15 Thread Nicolas Williams
On Mon, May 14, 2007 at 11:06:47AM -0600, [EMAIL PROTECTED] wrote:
  Ian G wrote:
  * Being dependent on PKI style certificates for signing, 
 ...
 
 The most important motivation at the time was to avoid the risk of Java being
 export-controlled as crypto.  The theory within Sun was that crypto with a
 hole would be free from export controls but also be useful for programmers.

crypto with a hole (i.e., a framework where anyone can plug anyone
else's crypto) is what was seen as bad.

The requirement for having providers signed by a vendor's key certified
by Sun was to make sure that only providers from suppliers not from,
say, North Korea etc., can be loaded by the pluggable frameworks.  As
far as I know the process for getting a certificate for this is no more
burdensome to any third parties, whether open source communities or
otherwise, than is needed to meet the legal requirements then, and
since, in force.

Of course, IANAL and I don't represent Sun, and you are free not to
believe me and try getting a certificate as described in Chapter 8 of
the Solaris Security Developers Guide for Solaris 10, which you can find
at:

http://docs.sun.com

Comments should probably be sent to [EMAIL PROTECTED]

Cheers,

Nico
-- 



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-15 Thread Nicolas Williams
On Tue, May 15, 2007 at 11:37:56AM +0200, Ian G wrote:
 Nicolas Williams wrote:
 The requirement for having providers signed by a vendor's key certified
 by Sun was to make sure that only providers from suppliers not from,
 say, North Korea etc., can be loaded by the pluggable frameworks.
 
 OK, but can we agree that this is a motive outside normal 
 engineering practices?  And it is definately nothing to do 
 with security as understood at the language and application 
 levels?

If we ignore politics, and if we ignore TPMs, yes.  Those are big
caveats.

 As
 far as I know the process for getting a certificate for this is no more
 burdensome to any third parties, whether open source communities or
 otherwise, than is needed to meet the legal requirements then, and
 since, in force.
 
 From what the guys in Cryptix have told me, this is true. 
 Getting the certificate is simply a bureaucratic hurdle, at 
 the current time.  This part is good.  But, in the big picture:

Good.

 J1.0:  no crypto
 J1.1:  crypto with no barriers
 J1.2:  JCA with no encryption, but replaceable
 J1.4:  JCA with low encryption, stuck, but providers are easy
 J1.5:  JCA, low encryption, signed providers, easy to get a 
 key for your provider
 J1.6:  ??
 
 (The java version numbers are descriptive, not accurate.)

I'm not sure I understand the significance of the above.  I'm sure that
there are better lists to ask about the prospects for evolution here.

 The really lucky part here is that (due to circumstances 
 outside control) the entire language or implementation has 
 gone open source.

That's not due to luck.

 No more games are possible ==  outside requirements are 
 neutered.  This may save crypto security in Java.

Save it from what exactly?

 Of course, IANAL and I don't represent Sun, and you are free not to
 believe me and try getting a certificate as described in Chapter 8 of
 the Solaris Security Developers Guide for Solaris 10, which you can find
 at:
 
 
 Sure.  There are two issues here, one backwards-looking and 
 one forwards-looking.
 
 1.  What is the way this should be done?  the Java story is 

By whom?  The code is GPLed -- you're free to hack on it.  OpenSolaris
is CDDLed and you're free to hack on that too.

Sun may or may not be subject to more relaxed export rules as a result
of open sourcing these things.  I don't know, IANAL.  The point is that
Sun may not be able to do in the products it ships what the community
can do with the source code.

 2.  What is needed now?  Florian says the provider is 
 missing and the root list is empty.  What to do?  Is it 
 time to reinvigorate the open source Java crypto scene?

Ah, but you're free to: the code is GPLed and you can figure out what to
do to make the crypto framework not require provider signing.

Also, the provider surely can't be missing due to export rules -- the
C/assembler equivalents in Solaris are open source.

Nico
-- 



Re: More info in my AES128-CBC question

2007-05-12 Thread Nicolas Williams
On Wed, May 09, 2007 at 06:04:20PM -0400, Leichter, Jerry wrote:
 |   Frankly, for SSH this isn't a very plausible attack, since it's not
 |   clear how you could force chosen plaintext into an SSH session between
 |   messages.  A later paper suggested that SSL is more vulnerable:
 |   A browser plugin can insert data into an SSL protected session, so
 |   might be able to cause information to leak.
 |  
 |  Hmm, what about IPSec?  Aren't most of the cipher suites used there
 |  CBC mode?
 | 
 | ESP does not chain blocks across packets.  One could produce an ESP
 | implementation that did so, but there is really no good reason for
 | that, and as has been widely discussed, an implementation SHOULD use
 | a PRNG to generate the IV for each packet.
 I hope it's a cryptographically secure PRNG.  The attack doesn't require
 any particular IV, just one known to an attacker ahead of time.
 
 However, cryptographically secure RNG's are typically just as expensive
 as doing a block encryption.  So why not just encrypt the IV once with
 the session key before using it?  (This is the equivalent of pre-pending
 a block of all 0's to each packet.)

But if the key doesn't change between messages then the encrypted IV
(i.e., the effective IV seen by the first real plaintext block) is
constant, and if any plaintext repeats in that first block then you
have a problem.
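A toy demonstration of that problem, using a keyed PRF as a stand-in
block cipher (not a real cipher, and not the ESP or SSH wire format):
with a constant effective IV, messages sharing a first plaintext block
share their first ciphertext block.

```python
import hmac, hashlib

BLOCK = 16

def toy_block_encrypt(key, block):
    # Toy stand-in for a block cipher: a keyed PRF truncated to one
    # block.  Deterministic like AES, but NOT invertible -- demo only.
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def cbc_encrypt(key, iv, plaintext):
    # Textbook CBC chaining over whole blocks (no padding).
    assert len(plaintext) % BLOCK == 0
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        xored = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, xored)
        out += prev
    return out

key = b"k" * 16
fixed_iv = b"\x00" * BLOCK  # E_K(constant) is just another constant

ct1 = cbc_encrypt(key, fixed_iv, b"ATTACK AT DAWN!!" + b"x" * BLOCK)
ct2 = cbc_encrypt(key, fixed_iv, b"ATTACK AT DAWN!!" + b"y" * BLOCK)
# Equal first plaintext blocks + equal IV => equal first ciphertext
# blocks: an eavesdropper learns that the messages share a prefix.
assert ct1[:BLOCK] == ct2[:BLOCK] and ct1[BLOCK:] != ct2[BLOCK:]
```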



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-12 Thread Nicolas Williams
 Subject: Re: no surprise - Sun fails to open source the crypto part of Java

Were you not surprised because you knew that said source is encumbered,
or because you think Sun has some nefarious motive to not open source
that code?

If the latter then keep in mind that you can find plenty of crypto code
in OpenSolaris, which, unless you think the CDDL does not qualify as
open source, is open source.  I've no first hand knowledge, but I
suspect that the news story you quoted from is correct: the code is
encumbered and Sun couldn't get the copyright holders to permit release
under the GPL in time for the release of Java source under the GPL.

Nico
-- 



Re: Was a mistake made in the design of AACS?

2007-05-04 Thread Nicolas Williams
On Thu, May 03, 2007 at 10:25:34AM -0700, Steve Schear wrote:
 At 03:52 PM 5/2/2007, Ian G wrote:
 This seems to assume that when a crack is announced, all revenue 
 stops.  This would appear to be false.  When cracks are announced in such 
 systems, normally revenues aren't strongly effected.  C.f. DVDs.
 
 Agreed.  But there is an incremental effect.  In the same way many people 
 now copy DVDs they have rented many will gain access to HD content made 

Wait, are you saying that people copy rented DVDs onto DVD media?  Or
that they _extract_ the content?

There's a big difference: there's no need to crack the DVD DRM system to
do the former, but there is for the latter.

I expect the same to be true for HD-DVDs, unless the readers themselves
perform one-way transformations on the content and the readers are
tamper-resistant enough that DMCA protection for them as access control
devices can be claimed.

 available by those more technically sophisticated.  There a number of Bit 
 Torrent trackers which focus on HD content.  All current released 
 HD-DVD/BluRay movies are available for download. For those with 
 higher-performance PCs for playback, broadband connections and who know how 
 to burn a single- or dual layer DVD, the content is there for the taking.
 
 A new generation of HD media players (initially from offshore consumer 
 electronics and networking companies, for example, Cisco/LinkSys) are 
 poised to enter the market.  These appliances will allow playback of all 
 the common HD encoded media, including those ripped from the commercial HD 
 discs.  This will place the content from pirates and P2P community in the 
 hands of the less sophisticated Home Theater consumer.

So?  If breaking AACS has nothing to do with disk-to-disk copies then I
don't see how the coming market for HD players/writers is going to
affect that kind of piracy.  Or analog hole piracy.  Let's face it: DRM
only stops anyone from trying to make fair use of content (e.g.,
sampling) -- pirates might as well not even know that DRM is there,
unless you can create scarcity of media for the pirates (blank media
taxes), but that's harder than you think when in a couple of years
someone can be manufacturing blank media in some far off place that's
politically hard to reach.

Well, there's an idea: use different physical media formats for
entertainment and non-entertainment content (meaning, content created by
MPAA members vs. not) and don't sell writable media nor devices capable
of writing it for the former, not to the public, keeping very tight
controls on the specs and supplies.  Then finding, say, a Disney movie
on an HD-DVD of the data format would instantly imply that it's pirated.

Nico
-- 



Re: More info in my AES128-CBC question

2007-04-27 Thread Nicolas Williams
On Wed, Apr 25, 2007 at 10:58:01PM -0500, Travis H. wrote:
 On Wed, Apr 25, 2007 at 05:42:44PM -0500, Nicolas Williams wrote:
  A confounder is an extra block of random plaintext that is prepended to
  a message prior to encryption with a block cipher in CBC (or CTS) mode;
  the resulting extra block of ciphertext must also be sent to the peer.
 
 Not true.  Since we are comparing confounders to IVs, let's make identical
 assumptions; that the value is somehow agreed upon in advance.

The term confounder as used in Kerberos V is as I described.

  If the
  IV is chained across contiguous messages, as in SSHv2, then you have a
  problem (see above).
 
 I don't fully understand what it means to have IVs chained across
 contiguous (?) messages, as in CBC mode each ciphertext block forms
 the IV of the block after it, effectively; basically an IV is just
 C_0 for some stream.

The last ciphertext block of one message is the IV for the next.

Nico
-- 



Re: More info in my AES128-CBC question

2007-04-27 Thread Nicolas Williams
On Fri, Apr 27, 2007 at 05:13:44PM -0400, Leichter, Jerry wrote:
 What the RFC seems to be suggesting is that the first block of every
 message be SSH_MSG_IGNORE.  Since the first block in any message is now
 fixed, there's no way for the attacker to choose it.  Since the attacker

SSH_MSG_IGNORE messages carry [random] data.

Effectively what the RFC is calling for is a confounder.

Nico
-- 



Re: Public key encrypt-then-sign or sign-then-encrypt?

2007-04-25 Thread Nicolas Williams
On Wed, Apr 25, 2007 at 03:24:06PM -0300, Mads Rasmussen wrote:
 Jee Hea An, Yevgeniy Dodis and Tal Rabin claims that the order doesn't 
 matter [2]. Encrypt-then-sign or sign-then-encrypt is equally secure.
 Is this really true? My feeling was that the principle from Krawczyk's 
 paper should apply to the public key setting as well.

Instinctively sign-then-encrypt offers privacy protection: only the
intended receipient can verify the signature.



Re: More info in my AES128-CBC question

2007-04-25 Thread Nicolas Williams
On Wed, Apr 25, 2007 at 05:20:30PM +0300, Hagai Bar-El wrote:
 On 25/04/07 02:18, Nicolas Williams wrote:
  But be careful.  Simply chaining the IV from message to message will
  create problems (see SSH).
 
 What problem does this (chaining IV from message to message) introduce
 in our case?

See RFC4251:


   Additionally, another CBC mode attack may be mitigated through the
   insertion of packets containing SSH_MSG_IGNORE.  Without this
   technique, a specific attack may be successful.  For this attack
   (commonly known as the Rogaway attack [ROGAWAY], [DAI], [BELLARE]) to
   work, the attacker would need to know the Initialization Vector (IV)
   of the next block that is going to be encrypted.  In CBC mode that is
   the output of the encryption of the previous block.  If the attacker
   does not have any way to see the packet yet (i.e., it is in the
   internal buffers of the SSH implementation or even in the kernel),
   then this attack will not work.  If the last packet has been sent out
   to the network (i.e., the attacker has access to it), then he can use
   the attack.


  As long as it doesn't repeat.  Also, if it's not random then make that
  IV the first block of plaintext (with a fixed IV) -- that is, use a
  confounder, and make sure it doesn't repeat.
 
 It seems as Aram uses a different IV for each message encrypted with
 CBC. I am not sure I see a requirement for randomness here. As far as I
 can tell, this IV can be a simple index number or something as
 predictable, as long as it does not repeat within the same key scope.

I think you should really consider the SSHv2 experience and add a
confounder.  The confounder plaintext block need not be random or
pseudo-random, just non-repeating.

  A legitimate response w.r.t. confounders might be but that wastes a
  cipher block's worth of bits on the wire, which it certainly does, and
  if you're really hard pressed for bandwidth and use mostly small
  messages then you'd mind the confounder.  But I see no reason not to use
  a random or pseudo-random IV -- a device that can do crypto can and
  should have a decent PRNG (and a true, if low-bandwidth RNG to seed it).
 
 I don't understand the difference between a confounder and an IV in
 terms of bits on the wire. After all, in both cases the confounder or IV
 need to be passed to the other side, unless they are implicitly known.

A confounder is an extra block of random plaintext that is prepended to
a message prior to encryption with a block cipher in CBC (or CTS) mode;
the resulting extra block of ciphertext must also be sent to the peer.
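A sketch of a confounder in use, again with a toy keyed PRF standing in
for the block cipher (illustration only, not Kerberos V's actual
encryption profile): the IV is fixed, yet identical plaintexts encrypt
to unrelated ciphertexts at the cost of one extra block on the wire.

```python
import os, hmac, hashlib

BLOCK = 16

def toy_block_encrypt(key, block):
    # Toy keyed PRF standing in for a block cipher (demo only).
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def cbc_with_confounder(key, plaintext):
    # Fixed (zero) IV, but a fresh random confounder block is prepended
    # before CBC encryption, so equal plaintexts still yield unrelated
    # ciphertexts; the cost is one extra ciphertext block.
    data = os.urandom(BLOCK) + plaintext
    out, prev = b"", b"\x00" * BLOCK
    for i in range(0, len(data), BLOCK):
        xored = bytes(a ^ b for a, b in zip(data[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, xored)
        out += prev
    return out

ct1 = cbc_with_confounder(b"k" * 16, b"SAME PLAINTEXT!!")
ct2 = cbc_with_confounder(b"k" * 16, b"SAME PLAINTEXT!!")
assert ct1 != ct2             # the confounder randomizes everything
assert len(ct1) == 2 * BLOCK  # one block of overhead on the wire
```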

An IV is a cipher block size's worth of bits that is XORed into the
first plaintext block when encrypting/decrypting; if you somehow agree
upon an IV out of band then it does not take up any bits on the wire.
If the IV is chained across contiguous messages, as in SSHv2, then you
have a problem (see above).  If it's constant then you have a problem
too, one that you can solve by deriving per-message keys or by
generating a pseudo-random IV from a sequence number.
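One way to sketch that last option, deriving the per-message IV from a
sequence number with a keyed PRF (the `iv_key` name is made up for the
illustration):

```python
import hmac, hashlib

def iv_for_message(iv_key, seq):
    # Derive an unpredictable per-message IV from the sequence number
    # with a keyed PRF; no IV bits travel on the wire, and IVs cannot
    # repeat within the key's lifetime as long as seq never repeats.
    return hmac.new(iv_key, seq.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:16]

k = b"\x07" * 16
assert iv_for_message(k, 1) != iv_for_message(k, 2)  # distinct per message
assert len(iv_for_message(k, 1)) == 16               # one cipher block
```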

Since the protocol described generates per-message integrity keys I
imagine that it might generate per-message confidentiality keys as well,
in which case the IV issue goes away.

Nico
-- 



Re: More info in my AES128-CBC question

2007-04-24 Thread Nicolas Williams
On Sun, Apr 22, 2007 at 05:59:54PM -0700, Aram Perez wrote:
 No, there will be message integrity. For those of you asking, here's  
 a high level overview of the protocol is as follows:

 [...]

 3) Data needing confidentiality is encrypted with the SK in the mode  
 selected in step 1. The message is integrity protected with MK. A new  
 MK is generated after a message is sent using MK(i+1) = H[MK(i)]

You don't necessarily have to change the integrity protection key for
every message.  One thing this says is that the protocol involves an
ordered stream of messages.
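The ratchet quoted above, MK(i+1) = H[MK(i)], is a one-line hash chain;
a sketch with SHA-256 standing in for H:

```python
import hashlib

def next_mk(mk):
    # Step 3's ratchet: MK(i+1) = H[MK(i)].  Compromising MK(i+1) does
    # not reveal MK(i), since H is one-way, but both peers must advance
    # the chain in lockstep -- hence the ordered stream of messages.
    return hashlib.sha256(mk).digest()

mk0 = b"\x01" * 32
mk1 = next_mk(mk0)
mk2 = next_mk(mk1)
assert mk2 == hashlib.sha256(hashlib.sha256(mk0).digest()).digest()
```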

 Hope this clarifies things somewhat.

It does.  You can get by without a random IV by using CBC analogously to
how you use counter modes and cipher streams in general.  The key thing
is to avoid key and IV/counter re-use.  For a protocol where ordered
delivery of messages is expected/required this is easy to achieve.

Derive the key and/or counter/IV from a message sequence number and do
it in such a way that you either cannot repeat them or are very, very
unlikely to repeat them and you're fine.

But be careful.  Simply chaining the IV from message to message will
create problems (see SSH).

What is the concern with using random IVs/confounders anyways?  The need
for an entropy source?  If so keep in mind that a PRNG will be
sufficient for generating the IVs/confounders and that you'll generally
need some source of entropy for at least some protocol elements (e.g.,
nonces).

Nico
-- 



Re: More info in my AES128-CBC question

2007-04-24 Thread Nicolas Williams
On Mon, Apr 23, 2007 at 11:23:54AM -0700, Aram Perez wrote:
 On Apr 23, 2007, at 8:11 AM, Nicolas Williams wrote:
 On Sun, Apr 22, 2007 at 05:59:54PM -0700, Aram Perez wrote:
 No, there will be message integrity. For those of you asking, here's
 a high level overview of the protocol is as follows:
 
 [...]
 
 3) Data needing confidentiality is encrypted with the SK in the mode
 selected in step 1. The message is integrity protected with MK. A new
 MK is generated after a message is sent using MK(i+1) = H[MK(i)]
 
 You don't necessarily have to change the integrity protection key for
 every message.  One thing this says is that the protocol involves an
 ordered stream of messages.
 
 You need to change the integrity key if you want to prevent replay  
 attacks.

Or construct your MAC so that there is a sequence number in it.

E.g., SSHv2 uses HMAC without changing the integrity key.

If deriving a new key is slower than adding a sequence number into the
input of HMAC (which it most likely is) then you're likely to prefer the
latter.
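A sketch of that SSHv2-style alternative: keep one long-lived integrity
key and bind the sequence number into the MAC input (key bytes and the
32-bit sequence width here are illustrative):

```python
import hmac, hashlib

def mac(key, seq, payload):
    # One long-lived integrity key; replay protection comes from
    # binding the sequence number into the MAC input, which is far
    # cheaper than deriving a fresh key per message.
    return hmac.new(key, seq.to_bytes(4, "big") + payload,
                    hashlib.sha256).digest()

k = b"integrity-key"
t = mac(k, 7, b"hello")
assert hmac.compare_digest(t, mac(k, 7, b"hello"))  # verifies in order
assert t != mac(k, 8, b"hello")  # the same bytes replayed later fail
```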

If there isn't a good reason for rejecting what I suggest then one might
worry that changing the integrity key on every message (but not the
confidentiality key?) is something that a non-expert might do and that
there may be other problems with this protocol.  Much experience has
been gained with other protocols in these areas; do leverage it.

 But be careful.  Simply chaining the IV from message to message will
 create problems (see SSH).
 
 The intention would be a new IV with each message being sent.

As long as it doesn't repeat.  Also, if it's not random then make that
IV the first block of plaintext (with a fixed IV) -- that is, use a
confounder, and make sure it doesn't repeat.

 What is the concern with using random IVs/confounders anyways?  The
 need for an entropy source?  If so keep in mind that a PRNG will be
 sufficient for generating the IVs/confounders and that you'll
 generally need some source of entropy for at least some protocol
 elements (e.g., nonces).
 
 The concern was that that's the way SD cards do it today. Another  
 response was you haven't heard of anyone breaking SD cards, have you?

Fallacious responses, those.

A legitimate response w.r.t. confounders might be but that wastes a
cipher block's worth of bits on the wire, which it certainly does, and
if you're really hard pressed for bandwidth and use mostly small
messages then you'd mind the confounder.  But I see no reason not to use
a random or pseudo-random IV -- a device that can do crypto can and
should have a decent PRNG (and a true, if low-bandwidth, RNG to seed it).
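
To make the confounder concrete, here is a stdlib-only sketch. The 4-round SHA-256 Feistel is a toy stand-in for AES-128 (so the example runs without a crypto library) and is NOT a real cipher; the point is the structure: a fixed all-zero IV, one random leading block, and CBC chaining then randomizing every subsequent ciphertext block.

```python
import hashlib
import os

BLOCK = 16
ZERO_IV = b"\x00" * BLOCK

def _toy_encrypt_block(key: bytes, b: bytes) -> bytes:
    # Toy 4-round Feistel standing in for AES-128 -- NOT secure.
    L, R = b[:8], b[8:]
    for r in range(4):
        F = hashlib.sha256(key + bytes([r]) + R).digest()[:8]
        L, R = R, bytes(x ^ y for x, y in zip(L, F))
    return L + R

def _toy_decrypt_block(key: bytes, b: bytes) -> bytes:
    # Run the Feistel rounds in reverse to invert _toy_encrypt_block.
    L, R = b[:8], b[8:]
    for r in reversed(range(4)):
        F = hashlib.sha256(key + bytes([r]) + L).digest()[:8]
        L, R = bytes(x ^ y for x, y in zip(R, F)), L
    return L + R

def encrypt_with_confounder(key: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0  # padding is out of scope here
    data = os.urandom(BLOCK) + plaintext  # random confounder block
    prev, out = ZERO_IV, b""
    for i in range(0, len(data), BLOCK):
        xored = bytes(x ^ y for x, y in zip(data[i:i + BLOCK], prev))
        prev = _toy_encrypt_block(key, xored)
        out += prev
    return out

def decrypt_with_confounder(key: bytes, ciphertext: bytes) -> bytes:
    prev, out = ZERO_IV, b""
    for i in range(0, len(ciphertext), BLOCK):
        blk = ciphertext[i:i + BLOCK]
        out += bytes(x ^ y for x, y in zip(_toy_decrypt_block(key, blk), prev))
        prev = blk
    return out[BLOCK:]  # strip the confounder
```

The bandwidth cost conceded above is visible here: the ciphertext is exactly one block longer than the plaintext.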

Nico
-- 



Re: AES128-CBC Question

2007-04-19 Thread Nicolas Williams
On Fri, Apr 20, 2007 at 08:56:32AM +1200, Sidney Markowitz wrote:
 Aram Perez wrote, On 19/4/07 6:29 PM:
  Is there any danger in using AES128-CBC with a fixed IV of all zeros?
 
 Here is some discussion about doing this, in the context of PGP doing
 just that and why PGP inserts random characters at the begining of the
 plaintext.

Kerberos V calls this a confounder (a block of randomly selected bits
that is prepended to plaintext prior to encryption).

Nico
-- 



Re: DNSSEC to be strangled at birth.

2007-04-06 Thread Nicolas Williams
On Thu, Apr 05, 2007 at 04:49:33PM -0700, Paul Hoffman wrote:
 At 7:26 PM -0400 4/5/07, Thor Lancelot Simon wrote:
 On Thu, Apr 05, 2007 at 07:32:09AM -0700, Paul Hoffman wrote:
  Control: The root signing key only controls the contents of the root,
  not any level below the root.
 
 That is, of course, false,
 
 This is, of course, false. In order to control the contents of the 
 second level of the DNS, they have to either change the control of 
 the first level (it's kinda obvious when they take .net away from 
 VeriSign) or they have to sign across the hierarchy (it's kinda 
 obvious when furble.net is signed by someone other than .net).

Think of the DNSSEC root as the root CA of a universal PKI (finally).

The root CA of any PKI can act as an MITM between any pair of peers in
that PKI, no matter how many intervening CAs there may be between the
root and each peer.

The problem with wanting the DNSSEC root keys for facilitating MITM
attacks is that people are likely to notice, and secrecy is typically
something that an MITM attacker wants.  To avoid detection the MITM
would have to get between the target client and all of DNS; and that's
difficult because typically clients get DNS cache service from their
immediate network service provider -- which cache the MITM does not want
to pollute, so as to avoid discovery...

Which means that the MITM would need the cooperation of the client's
provider in many/most cases (a political problem) in order to be able to
quickly get in the middle so close to a leaf node (a technical problem).

Then there's the need to scale this -- if you can only use this MITM
capability occasionally, what's the point?  And what targets would DHS
have that it could subvert in this way but not in other, simpler ways?
Criminals?  Not likely (besides, isn't that DoJ's job?).  Spies?  Less
likely.  Clients abroad?  Less likely still.  Dumb spies/criminals?
Well, there'd be other ways to attack those.

IMO, DHS gets too little real value from having the DNSSEC root keys in
terms of MITM attack capability.

And it will not get much value in terms of DoS attacks on, say, ccTLDs
-- alternate roots would spring up and if the DoS were widely seen as
unjustified most of the world outside the U.S. would end up using the
alternate root.  A DoS on a ccTLD would be a one-time deal, politically.

The DHS would get real value in terms of veto power over new TLDs, IFF
it is the only one to possess the root private key.  But that's not what
the story said, IIRC.

The real problem with DHS having these keys in _addition_ to ICANN is
that the more fingers in the pie the more likely it is that the key will
be breached, leading to key rollover.

I must admit that I am mystified as to why DHS would want these keys.
Count me as among those who think the story is in error, or that DHS has
received bad advice.  I am NOT among those who are prepared to believe
the worst of DHS; I expect that those of you more paranoid than I will
discount my analysis of the MITM attack potential.  Or perhaps I
discount the difficulty of pulling off these MITM attacks too much
(perhaps no one would notice cache pollution?).  Tell me.

Nico
-- 



Re: Failure of PKI in messaging

2007-02-15 Thread Nicolas Williams
On Thu, Feb 15, 2007 at 11:36:35AM -0500, Victor Duchovni wrote:
 On Thu, Feb 15, 2007 at 10:10:21AM -0500, Leichter, Jerry wrote:
  Meanwhile, the next generation of users is growing up on the immediacy
  of IM and text messaging.  Mail is ... so 20th century.
 
 Well, you certainly don't want to use email when coordinating a place to
 meet in the next 10-15 minutes, while on the move with a cell phone, or
 other near-real-time social activity so important to the next generation
 while they are still the next generation.

As mobile devices improve in compute/memory/display/input capabilities
the distinction between texting/IM/e-mail will get blurred, and at the
same time mobiles will become a more and more tempting vehicle for
securing transactions.

E.g., I use the GMail J2ME app on my cell phone and it's almost as good
as SMS in some ways and better in others (plus I forward some e-mails to
SMS so that this app need not be running all the time).  I can even pay
via paypal using my phone, supposedly -- I've not tried it.

Just as we laugh when we recall 1980s cell phones (ha!) the next
generation will laugh at the best of our current crop of mobile devices,
never mind the more basic ones.

Nico
-- 



Re: One Laptop per Child security

2007-02-09 Thread Nicolas Williams
On Fri, Feb 09, 2007 at 01:22:06PM +1000, James A. Donald wrote:
 Nicolas Williams wrote:
  The text you quote doesn't answer the question; the
  rest of the wiki frontpage says little more.  It tends
  to make me think that if an application wants to do
  something that I've not enabled it to do ahead of time
   then it fails.  Failure is inconvenient.  So as near as
  I can tell from the text you quote BitFrost sets its
  convenience/security parameters differently than other
  OSes, but there's nothing truly Earth shatteringly new
  there.
 
 There is a great deal that is earth shatteringly new,
 and it is documented - albeit in rather unclear and non
 standard format.
 
 The fundamental difference is that each application is
 run in its own VM, and so *cannot* exercise full user
 powers, whereas with *all* other OSs, if your solitaire

This is a good summary -- the analogy that I asked for.

It doesn't sound so new either though.  Labelled OSes and trusted
desktops allow as much.  My employer makes this stuff (much, if not all
of it FOSS), and there have been some very impressive blog posts showing
how you can have applications, including browsers, running in different
VMs, with some VMs VPNed into a private network, and some not.

Nico
-- 



Re: One Laptop per Child security

2007-02-08 Thread Nicolas Williams
On Thu, Feb 08, 2007 at 06:32:44PM +1000, James A. Donald wrote:
 For many tasks, they have to call upon a small amount of
 trusted code.  For example the normal way an editor
 opens a file is that one gives the editor a file name,
 and the editor, having full user authority to read or
 change any file in the system, plays nice and opens and
 changes *only* that file.   In this OS, instead the
 editor asks trusted code for a file handle, and gets the
 handle to a file chosen by the user, and can modify that
 file and no other.

If this means pop-up dialogs for every little thing an application wants
to do then the result may well be further training users to click 'OK'.

The more complex the application, the harder it is for the user to
evaluate all its access requests (if nothing else due to lack of
time/patience).

As for browsers, you'd have to make sure that every window/tab/frame is
treated as a separate application, and even then that probably wouldn't
be enough.  Remember, the browser is a sort of operating system itself
-- applying policy to it is akin to applying policy to the open-ended
set of applications that it runs.

Nico
-- 



Re: One Laptop per Child security

2007-02-08 Thread Nicolas Williams
On Thu, Feb 08, 2007 at 12:23:40PM -0800, Ivan Krstić wrote:
 Hi Nico,
 
 Nicolas Williams wrote:
  If this means pop-up dialogs for every little thing an application wants
  to do then the result may well be further training users to click 'OK'.
 
 It really does help to read at least the introduction to the document in
 question before hitting 'reply' to an e-mail :)

The text you quote doesn't answer the question; the rest of the wiki
frontpage says little more.  It tends to make me think that if an
application wants to do something that I've not enabled it to do ahead
of time then it fails.  Failure is inconvenient.  So as near as I can
tell from the text you quote BitFrost sets its convenience/security
parameters differently than other OSes, but there's nothing truly Earth
shatteringly new there.  Now, if it's a new OS presumably you start from
scratch in terms of applications, so you get to have usable profiles for
all of them initially, and maybe _that_ is what is truly new.

I'm imagining BitFrost as something like OpenBSD's systrace facility + a
small number of well-profiled apps.  If this is a good analogy, please
confirm it.  If it isn't and there is another similarly simple analogy,
then tell me what it is -- simple analogies, imprecise though they might
be, can help provide a good starting point to understand something new.

  As for browsers, you'd have to make sure that every window/tab/frame is
  treated as a separate application, and even then that probably wouldn't
  be enough.  Remember, the browser is a sort of operating system itself
  -- applying policy to it is akin to applying policy to the open-ended
  set of applications that it runs.
 
 The browser is an environment, which makes it an edge case. Even so,
 Bitfrost provides guarantees on what happens if you take over the
 browser: it's very hard to violate the user's privacy, you can't harm
 the machine in any way, you can't get unauthorized access to the user's
 documents. From a systems security point of view, that's all I could
 hope for. Security within the browser cannot lie in the scope of the
 spec. (Not to say that I don't care about it, though -- I'm meeting with
 Mozilla's CSO later today to talk about what we can do to make the
 browsing experience more secure.)

In a world where web-based applications are all the applications you
need, this attitude towards the browser leaves BitFrost with a big hole
in it.

I think you have to think of each site as a separate application, and
profile that, if I understood BitFrost correctly.  And that seems
unrealistic.

Nico
-- 



Re: Entropy of other languages

2007-02-07 Thread Nicolas Williams
On Mon, Feb 05, 2007 at 09:08:07PM -0600, Travis H. wrote:
 IIRC, it turned out that Egyptian heiroglyphs were actually syllabic,
 like Mesopotamian, so no fun there.  Mayan, on the other hand, remains
 an enigma.  I read not long ago that they also had a way of recording
 stories on bundles of knotted string, like the end of a mop.

Er, no, the Mayan script has been deciphered:

http://www.omniglot.com/writing/mayan.htm

The knotted string system (the quipu) was an Inca recording system, IIRC.

Nico
-- 


