Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-15 Thread Tobias Gondrom
On 06/09/13 14:45, Scott Brim wrote:
 I wouldn't focus on government surveillance per se.  The IETF should
 consider that breaking privacy is much easier than it used to be,
 particularly given consolidation of services at all layers, and take
 that into account in our engineering best practices.  Our mission is
 to make the Internet better, and right now the Internet's weakness in
 privacy is far from better.  The mandatory security considerations
 section should become "security and privacy considerations".  The
 privacy RFC should be expanded and worded more strongly than just nice
 suggestions.  Perhaps the Nomcom should ask candidates about their
 understanding of privacy considerations.

 Scott

I am not sure that a mandatory security and privacy considerations
section in every draft would be sufficient. IMHO a number of issues
arise from the combination of various standards/technologies, and from
the fact that we are sometimes missing a few small but important pieces
(e.g. things WGs said we would "do later", or which were seen as "nice
to have" or optional, and then never happened...).

So although I think privacy concerns should be addressed in every draft,
I also think this goes into the architecture domain across a number of
technologies.

Best regards, Tobias




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-08 Thread John C Klensin


--On Friday, September 06, 2013 17:11 +0100 Tony Finch
d...@dotat.at wrote:

 John C Klensin j...@jck.com wrote:
 
 Please correct me if I'm wrong, but it seems to me that
 DANE-like approaches are significantly better than traditional
 PKI ones only to the extent to which:
...
 Yes, but there are some compensating pluses:

Please note that I didn't say "worse", only "not significantly
better".

 You can get a meaningful improvement to your security by good
 choice of registrar (and registry if you have flexibility in
 your choice of name). Other weak registries and registrars
 don't reduce your DNSSEC security, whereas PKIX is only as
 secure as the weakest CA.

Yes and no.  Certainly I can improve my security as you note.  I
can also improve the security of a traditional certificate by
selecting from only those CAs who require a high degree of
assurance that I am who I say I am.  But, from the standpoint of
a casual user using readily-available and understandable tools
(see my recent note) and encountering a key or signature from
someone she doesn't know already, there is little or no way to
tell whether the owner of that key used a reliable registrar or
a sleazy one or, for the PKI case, a high-assurance and reliable
CA or one whose certification criterion is the applicant's
ability to pay.  There are still differences and I don't mean to
dismiss them.  I just don't think we should exaggerate their
significance.

And, yes, part of what I'm concerned about is the very ugly
problem of whether, if I encounter an email address and key for
tonyfi...@email-expert.pro or, (slightly) worse, in one of the
thousand new TLDs that ICANN assures us will improve the quality
of their lives, how I determine whether that is you, some other
Tony Finch who claims expertise in email, or Betty Attacker
Bloggs pretending to be one of you.  As Pete has suggested, one
way to do that is to set up an encrypted connection without
worrying much about authentication and then quiz each other
about things that Tony(2), Betty, or John(2) are unlikely to
know until we are confident enough for the purposes.  But,
otherwise...

By contrast, if I know a priori that the Tony Finch I'm
concerned about is the person who controls dotat.at and you know
that the John Klensin you are concerned about is the person who
controls jck.com, and both of us are using addresses in those
domains with which we have been familiar for years, then the
task is much easier with either a PKI or DANE -- and certainly
more convenient and reliable with the latter because we know
each other well enough, even if mostly virtually, to be
confident that the other is unlikely to be dealing with
registrars or registries who would deliberately enable domain or
key impersonation.  Nor would either of us be likely to be quiet
about such practices if they were discovered.

 An attacker can use a compromise of your DNS infrastructure to
 get a certificate from a conventional CA, just as much as they
 could compromise DNSSEC-based service authentication.

Exactly.  Again, my point in this note and the one I sent to the
list earlier today about the PGP-PKI relationship is that we
should understand and take advantage of the differences among
systems if and when we can, but that it is a bad idea to
exaggerate those advantages or differences.

john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread t . p .
- Original Message -
From: Phillip Hallam-Baker hal...@gmail.com
To: Andrew Sullivan a...@anvilwalrusden.com
Cc: IETF Discussion Mailing List ietf@ietf.org
Sent: Friday, September 06, 2013 4:56 AM
 On Thu, Sep 5, 2013 at 11:32 PM, Andrew Sullivan
a...@anvilwalrusden.com wrote:

  On Fri, Sep 06, 2013 at 03:28:28PM +1200, Brian E Carpenter wrote:
  
   OK, that's actionable in the IETF, so can we see the I-D before
   the cutoff?
 
  Why is that discussion of this nailed to the cycle of IETF meetings?


 It is not. I raised the challenge over a week ago in another forum.
 Last thing I would do is to give any institution veto power.


 The design I think is practical is to eliminate all UI issues by
 insisting that encryption and decryption are transparent. Any email
 that can be sent encrypted is sent encrypted.

That sounds like the 'End User Fallacy number one' that I encounter all
the time in my work.  'If only everything were encrypted, then we would
be completely safe.'  Well, no (as you, Phillip, know well).  It depends on
the strength of the ciphers (you can get a little padlock on your screen
with SSL 2 which was the default in my local public access system until
recently).  It depends on the keys being secret (one enterprise system I
was enrolled on in 2003 will not let me change my password, ever - only
the system administrator has that power).  It depends on authentication
(I have a totally secure channel, unbreakable in the next 50 years, but
it is not to my bank but to a Far Eastern Power).  And so on.  Yet every
few weeks I hear the media saying, 'look for the padlock'.

I think that the obvious step to improving security is to get the world
at large possessing and using certificates, in the same way as the
governments of the world, not very long ago, persuaded us to use
passports.

Tom Petch


 So that means that we have to have a key distribution infrastructure
 such that when you register a key it becomes available to anyone who
 might need to send you a message. We would also wish to apply the
 Certificate Transparency approach to protect the Trusted Third Parties
 from being coerced, infiltrated or compromised.

 Packaging the implementation is not difficult, a set of proxies for
 IMAP and SUBMIT enhance and decrypt the messages.

 The client side complexity is separated from the proxy using
 Omnibroker.
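[For reference: the Certificate Transparency approach mentioned above rests on an append-only Merkle tree log over issued certificates, so that a coerced or compromised third party cannot silently rewrite history. A minimal, illustrative sketch of the RFC 6962 tree-head computation in Python; the entries here are placeholder byte strings, not real certificates:]

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 domain-separates leaf hashes with a 0x00 prefix...
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # ...and interior-node hashes with a 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(hashes: list) -> bytes:
    """Merkle Tree Head over already-hashed leaves (RFC 6962, section 2.1)."""
    if len(hashes) == 1:
        return hashes[0]
    k = 1
    while k * 2 < len(hashes):  # k = largest power of two strictly less than n
        k *= 2
    return node_hash(merkle_root(hashes[:k]), merkle_root(hashes[k:]))

# The root commits to every logged entry: altering or removing any
# certificate in the log changes the root, which auditors would detect.
root = merkle_root([leaf_hash(c) for c in [b"cert-a", b"cert-b", b"cert-c"]])
```

[The point of the prefixes is that a leaf can never be confused with an interior node, which blocks second-preimage tricks against the log.]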





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Pete Resnick

On 9/6/13 12:54 AM, t.p. wrote:

- Original Message -
From: Phillip Hallam-Baker hal...@gmail.com
Cc: IETF Discussion Mailing List ietf@ietf.org
Sent: Friday, September 06, 2013 4:56 AM

The design I think is practical is to eliminate all UI issues by 
insisting that encryption and decryption are transparent. Any email 
that can be sent encrypted is sent encrypted.


That sounds like the 'End User Fallacy number one' that I encounter 
all the time in my work. 'If only everything were encrypted, then we 
would be completely safe.'


Actually, I disagree that this fallacy is at play here. I think we need 
to separate the concept of end-to-end encryption from authentication 
when it comes to UI transparency. We design UIs now where we get in the 
user's face about doing encryption if we cannot authenticate the other 
side and we need to get over that. In email, we insist that you 
authenticate the recipient's certificate before we allow you to install 
it and to start encrypting, and prefer to send things in the clear until 
that is done. That's silly and is based on the assumption that 
encryption isn't worth doing *until* we know it's going to be done 
completely safely. We need to separate the trust and guarantees of 
safeness (which require *later* out-of-band verification) from the whole 
endeavor of getting encryption used in the first place.
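[An illustrative aside, not from the thread: the posture Pete describes, encrypt now, verify trust later, is expressible today with Python's ssl module by disabling peer verification. This is a sketch of the idea only; it resists passive eavesdropping but, as John notes below, not an active MITM:]

```python
import ssl

def opportunistic_context() -> ssl.SSLContext:
    """A TLS context that encrypts without authenticating the peer.

    Protects against passive eavesdropping only; an active
    man-in-the-middle would not be detected.  Authentication can be
    layered on later, out of band."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # must be cleared before verify_mode
    ctx.verify_mode = ssl.CERT_NONE  # accept any certificate, even self-signed
    return ctx
```

[Note the ordering: `check_hostname` has to be cleared before `verify_mode` is set to `CERT_NONE`, or the ssl module raises a ValueError.]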


pr

--
Pete Resnick  http://www.qualcomm.com/~presnick/
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
I wouldn't focus on government surveillance per se.  The IETF should
consider that breaking privacy is much easier than it used to be,
particularly given consolidation of services at all layers, and take
that into account in our engineering best practices.  Our mission is
to make the Internet better, and right now the Internet's weakness in
privacy is far from better.  The mandatory security considerations
section should become "security and privacy considerations".  The
privacy RFC should be expanded and worded more strongly than just nice
suggestions.  Perhaps the Nomcom should ask candidates about their
understanding of privacy considerations.

Scott


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 06:20 -0700 Pete Resnick
presn...@qti.qualcomm.com wrote:

 Actually, I disagree that this fallacy is at play here. I
 think we need to separate the concept of end-to-end encryption
 from authentication when it comes to UI transparency. We
 design UIs now where we get in the user's face about doing
 encryption if we cannot authenticate the other side and we
 need to get over that. In email, we insist that you
 authenticate the recipient's certificate before we allow you
 to install it and to start encrypting, and prefer to send
 things in the clear until that is done. That's silly and is
 based on the assumption that encryption isn't worth doing
 *until* we know it's going to be done completely safely. We
 need to separate the trust and guarantees of safeness (which
 require *later* out-of-band verification) from the whole
 endeavor of getting encryption used in the first place.

Pete,

At one level, I completely agree.  At another, it depends on the
threat model.  If the presumed attacker is skilled and has
access to packets in transit, then it is necessary to assume that
defeating safeguards against MITM attacks is well within that
attacker's resource set.  If those conditions are met, then encrypting
on the basis of a key or certificate that can't be authenticated
is delusional protection against that threat.  It may still be
good protection against more casual attacks, but we do the users
the same disservice by telling them that their transmissions are
secure under those circumstances that we do by telling them that
their data are secure when they see a little lock in their web
browsers.

Certainly "encrypt first, authenticate later" is reasonable if
one doesn't send anything sensitive until authentication has
been established, but it seems to me that would require a rather
significant redesign of how people do things, not just how
protocols work.

best,
   john



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Tony Finch
Theodore Ts'o ty...@mit.edu wrote:

 Speaking of which, Jim Gettys was trying to tell me yesterday that
 BIND refuses to do DNSSEC lookups until the endpoint client has
 generated a certificate.

That is wrong. DNSSEC validation affects a whole view - i.e. it is
effectively global.

Clients can request DNSSEC records or not, regardless of whether they do
any transaction security. Clients can do DNSSEC validation without any
private keys.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Theodore Ts'o
On Fri, Sep 06, 2013 at 03:26:42PM +0100, Tony Finch wrote:
 Theodore Ts'o ty...@mit.edu wrote:
 
  Speaking of which, Jim Gettys was trying to tell me yesterday that
  BIND refuses to do DNSSEC lookups until the endpoint client has
  generated a certificate.
 
 That is wrong. DNSSEC validation affects a whole view - i.e. it is
 effectively global.
 
 Clients can request DNSSEC records or not, regardless of whether they do
 any transaction security. Clients can do DNSSEC validation without any
 private keys.

That's what I hoped, thanks.

- Ted


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Pete Resnick

On 9/6/13 7:02 AM, John C Klensin wrote:

...It may still be
good protection against more casual attacks, but we do the users
the same disservice by telling them that their transmissions are
secure under those circumstances that we do by telling them that
their data are secure when they see a little lock in their web
browsers.

Certainly "encrypt first, authenticate later" is reasonable if
one doesn't send anything sensitive until authentication has
been established, but it seems to me that would require a rather
significant redesign of how people do things, not just how
protocols work.
   


Actually, I think the latter is really what I'm suggesting. We've got 
to do the encryption (for both the minimal protection from passive 
attacks as well as setting things up for doing good security later), 
but we've also got to design UIs that not only make it easier for 
users to deal with encryption, but change the way people think about it.


(Back when we were working on Eudora, we got user support complaints 
that "people can read my email without typing my password." What they 
in fact meant was that if you started the application, it would 
normally ask for your POP password in order to check mail, but you 
could always click Cancel and read the mail that had been previously 
downloaded.
Users presumed that since they were being prompted for the password when 
the program launched -- just like what used to happen when they logged 
in to read mail on their Unix/etc. accounts -- the password was 
protecting the local data, not that it was only being used to 
authenticate to the server to download mail. You'd ask them why they 
weren't so worried about people reading their Microsoft Word files and 
they'd give you dumb looks. Sometimes you do have to redesign how 
people do things.)


pr

--
Pete Resnick  http://www.qualcomm.com/~presnick/
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Theodore Ts'o
On Fri, Sep 06, 2013 at 06:20:48AM -0700, Pete Resnick wrote:
 
 In email,
 we insist that you authenticate the recipient's certificate before
 we allow you to install it and to start encrypting, and prefer to
 send things in the clear until that is done. That's silly and is
 based on the assumption that encryption isn't worth doing *until* we
 know it's going to be done completely safely.

Speaking of which, Jim Gettys was trying to tell me yesterday that
BIND refuses to do DNSSEC lookups until the endpoint client has
generated a certificate.  Which is bad since, out of the box, a home
router doesn't have much in the way of entropy at that point, so you
shouldn't be trying to generate certificates at the time of the first
boot-up, but should rather delay until you've had a chance to
gather some entropy.  (Or put in a real hardware RNG, but a
race-to-the-bottom in terms of BOM costs makes that not realistic.)  I
told him that sounds insane, since you shouldn't need a
certificate/private key in order to do digital signature verification.

Can someone please tell me that BIND isn't being this stupid?

- Ted


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Joe Abley

On 2013-09-06, at 10:16, Theodore Ts'o ty...@mit.edu wrote:

 On Fri, Sep 06, 2013 at 06:20:48AM -0700, Pete Resnick wrote:
 
 In email,
 we insist that you authenticate the recipient's certificate before
 we allow you to install it and to start encrypting, and prefer to
 send things in the clear until that is done. That's silly and is
 based on the assumption that encryption isn't worth doing *until* we
 know it's going to be done completely safely.
 
 Speaking of which, Jim Gettys was trying to tell me yesterday that
 BIND refuses to do DNSSEC lookups until the endpoint client has
 generated a certificate.

All modern DNSSEC-capable resolvers (regardless of whether validation has been 
turned on) will set DO=1 in the EDNS0 header and will retrieve signatures in 
responses if they are available. BIND9 is not a counter-example. Regardless, an 
end host downstream of a resolver that behaves differently (but that is capable 
of and desires to perform its own validation) can detect an inability to 
receive signatures, and can act accordingly.
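[For concreteness, an editorial sketch of the DO bit Joe describes: it is the top bit of the 16-bit extended flags carried in the "TTL" field of the EDNS0 OPT pseudo-record (RFC 6891). This pure-Python fragment builds a minimal query at the wire-format level with DO=1; the query ID and payload size are arbitrary illustrative values:]

```python
import struct

def make_query_with_do(qname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query whose EDNS0 OPT record has DO=1."""
    # Header: ID, flags (RD), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1 (the OPT)
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # QNAME as length-prefixed labels, terminated by the root label
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    # OPT pseudo-RR: root name, TYPE=41, CLASS = UDP payload size (4096),
    # 32-bit "TTL" = ext-rcode | version | flags; DO is flag bit 0x8000.
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x8000, 0)
    return header + question + opt

def do_bit_set(query: bytes) -> bool:
    """Read the DO bit back out of the trailing OPT record."""
    _name, rtype, _payload, ttl, _rdlen = struct.unpack(">BHHIH", query[-11:])
    return rtype == 41 and bool(ttl & 0x8000)
```

[A resolver that sets this bit is asking for RRSIG records in responses; as Joe says, no key material on the client is involved.]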

There is no client certificate component of DNSSEC. The trust anchor for the 
system is published as part of root zone processes at IANA, and a variety of 
mechanisms are available to infer trust in a retrieved trust anchor. (These 
could use more work, but they exist.)

There is a (somewhat poorly-characterised and insufficiently-measured) 
interaction with a variety of middleware in firewalls, captive hotel 
hotspots, etc. that will prevent an end host from being able to validate 
responses from the DNS, but in those cases the inability to validate is 
known by the end host; you still have the option of closing your laptop 
and reattaching it to the network somewhere else.

  Which is bad, since out-of-box, a home
 router doesn't have much in the way of entropy at that point, so you
 shouldn't be trying to generate certificates at the time of the first
 boot-up, but rather to delay until you've had enough of a chance to
 gather some entropy.

In DNSSEC, signatures are generated before publication of zone data, and are 
verified by validators. You don't need a high-quality entropy source to 
validate a signature. There is no DNSSEC requirement for entropy in a home 
router or an end host.

  (Or put in a real hardware RNG, but a
 race-to-the-bottom in terms of BOM costs makes that not realistic.)  I
 told him that sounds insane, since you shouldn't need a
 certificate/private key in order to do digital signature verification.

I think you were on the right track, there.

 Can someone please tell me that BIND isn't being this stupid?

This thread has mainly been about privacy and confidentiality. There is nothing 
in DNSSEC that offers either of those, directly (although it's an enabler 
through approaches like DANE to provide a framework for secure distribution of 
certificates). If every zone were signed and every response were validated, it 
would still be possible to tap queries and tell who was asking for what name, 
and what response was returned.
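[An editorial sketch of the DANE mechanism mentioned above: under RFC 6698, a TLSA record published in a DNSSEC-signed zone carries "certificate association data" derived from the server's certificate or its SubjectPublicKeyInfo. The key bytes below are placeholders, not a real SPKI:]

```python
import hashlib

def tlsa_assoc_data(der: bytes, matching_type: int = 1) -> str:
    """Certificate association data for a TLSA record (RFC 6698).

    matching_type: 0 = exact data, 1 = SHA-256, 2 = SHA-512."""
    if matching_type == 0:
        return der.hex()
    if matching_type == 1:
        return hashlib.sha256(der).hexdigest()
    if matching_type == 2:
        return hashlib.sha512(der).hexdigest()
    raise ValueError("unknown matching type")

# DANE-EE (usage 3), selector 1 (SubjectPublicKeyInfo), matching type 1:
spki = b"placeholder-der-encoded-public-key"  # illustrative bytes only
record = "3 1 1 " + tlsa_assoc_data(spki)
```

[A validating client would fetch this record over DNSSEC and compare the digest against what the server presents in the TLS handshake, which is the sense in which DNSSEC "enables" certificate distribution without offering confidentiality itself.]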


Joe

Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
On Fri, Sep 6, 2013 at 11:41 AM, Pete Resnick presn...@qti.qualcomm.com wrote:
 OK, one last nostalgic anecdote about Eudora before I go back to finishing
 my spfbis Last Call writeup:

 MacTCP (the TCP/IP stack for the original MacOS) required a handler routine
 for ICMP messages for some dumb reason; you couldn't just set it to null in
 your code. So Steve implemented one. Whenever an ICMP message came in for a
 current connection (e.g., Destination Unreachable), Eudora would put up a
 dialog box. It read "Eudora has received an ICMP Destination Unreachable
 message." The box had a single button. It read, "So What?"

 Working for Steve was a hoot.

(like)


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 08:41 -0700 Pete Resnick
presn...@qti.qualcomm.com wrote:

...
 Absolutely. There is clearly a good motivation: A particular
 UI choice should not *constrain* a protocol, so it is
 essential that we make sure that the protocol is not
 *dependent* on the UI. But that doesn't mean that UI issues
 should not *inform* protocol design. If we design a protocol
 such that it makes assumptions about what the UI will be able
 to provide without verifying those assumptions are realistic,
 we're in serious trouble. I think we've done that quite a bit
 in the security/application protocol space.

Yes.  It also has another implication that goes to Dave's point
about how the IETF should interact with UI designers.   In my
youth I worked with some very good early generation HCI/ UI
design folks.  Their main and most consistent message was that,
from a UI functionality standpoint, the single most important
consideration for a protocol, API, or similar interface was to
be sure that one had done a thorough analysis of the possible
error and failure conditions and that sufficient information
about those conditions could get to the outside to permit the UI
to report things and take action in an appropriate way.  From
that point of view, any flavor of a "you lose" - "ok" message,
including blue screens and "I got irritated and disconnected
you", is a symptom of bad design and much more commonly bad
design in the protocols and interfaces than in the UI.

Leaving the UI designs to the UI designers is fine but, if we
don't give them the tools and information they need, most of the
inevitable problems are ours.


 OK, one last nostalgic anecdote about Eudora before I go back
 to finishing my spfbis Last Call writeup:
...
 Working for Steve was a hoot.

I can only imagine, but the story is not a great surprise.

   john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 07:38 -0700 Pete Resnick
presn...@qti.qualcomm.com wrote:

 Actually, I think the latter is really what I'm suggesting.
 We've got to do the encryption (for both the minimal protection
 from passive attacks as well as setting things up for doing
 good security later), but we've also got to design UIs that
 not only make it easier for users to deal with encryption, but
 change the way people think about it.
 
 (Back when we were working on Eudora, we got user support
 complaints that "people can read my email without typing my
 password." What they in fact meant was that if you started the
 application, it would normally ask for your POP password in
...

Indeed.  And I think that one of the more important things we
can do is to rethink UIs to give casual users more information
about what is going on and to enable them to take intelligent
action on decisions that should be under their control.  There
are good reasons why the IETF has generally stayed out of the UI
area but, for the security and privacy areas discussed in this
thread, there may be no practical way to design protocols that
solve real problems without starting from what information a UI
needs to inform the user and what actions the user should be
able to take, and then working backwards.  As I think you know,
one of my personal peeves is the range of unsatisfactory
conditions -- from an older version of certificate format or
minor error to a verified revoked certificate -- that can
produce a message that essentially says "continuing may cause
unspeakable evil to happen to you" with an "ok" button (and only
an "ok" button).

Similarly, even if users can figure out which CAs to trust and
which ones not (another issue, and one where protocol work to
standardize distribution of CA reputation information might be
appropriate), editing CA lists whose main admission qualification
today seems to be cosy relationships with vendors (and maybe the
US Govt) to remove untrusted ones and add trusted ones requires
rocket-scientist-level skills.  If we were serious, it wouldn't
be that way.

And the fact that those are 75% or more UI issues is probably no
longer an excuse.

john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Brian Trammell
hi Scott, all,

On Sep 6, 2013, at 3:45 PM, Scott Brim scott.b...@gmail.com wrote:

 I wouldn't focus on government surveillance per se.  The IETF should
 consider that breaking privacy is much easier than it used to be,
 particularly given consolidation of services at all layers, and take
 that into account in our engineering best practices.  Our mission is
 to make the Internet better, and right now the Internet's weakness in
 privacy is far from better.

Indeed, pervasive surveillance is merely a special case of eavesdropping as a 
privacy threat, with the important difference that eavesdropping (as discussed 
in RFC 6973) explicitly has a target in mind, while pervasive surveillance 
explicitly doesn't. So what we do to improve privacy will naturally make 
surveillance harder, in most cases; I hope that draft-trammell-perpass-ppa will 
evolve to fill in the gaps.

 The mandatory security considerations
 section should become "security and privacy considerations".  The
 privacy RFC should be expanded and worded more strongly than just nice
 suggestions.  Perhaps the Nomcom should ask candidates about their
 understanding of privacy considerations.

Having read RFC 6973 in detail while working on that draft, I'd say it's a very 
good starting point; indeed, I would consider it required reading. We can 
certainly take its guidance to heart as if it were more strongly worded than it 
is. :)

Cheers,

Brian

Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 10:43 -0400 Joe Abley
jab...@hopcount.ca wrote:

 Can someone please tell me that BIND isn't being this stupid?
 
 This thread has mainly been about privacy and confidentiality.
 There is nothing in DNSSEC that offers either of those,
 directly (although it's an enabler through approaches like
 DANE to provide a framework for secure distribution of
 certificates). If every zone was signed and if every response
 was validated, it would still be possible to tap queries and
 tell who was asking for what name, and what response was
 returned.

Please correct me if I'm wrong, but it seems to me that
DANE-like approaches are significantly better than traditional
PKI ones only to the extent to which:

- The entities needing or generating the certificates
are significantly more in control of the associated DNS
infrastructure than entities using conventional CAs are
in control of those CAs.

- For domains that are managed by registrars or other
third parties (I gather a very large fraction of them at
the second level), whether one believes those registrars
or other operators have significantly more integrity and
are harder to compromise than traditional third party CA
operators.

best,
   john




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Pete Resnick

On 9/6/13 8:23 AM, John C Klensin wrote:


I think that one of the more important things we
can do is to rethink UIs to give casual users more information
about what is going on and to enable them to take intelligent
action on decisions that should be under their control.  There
are good reasons why the IETF has generally stayed out of the UI
area but, for the security and privacy areas discussed in this
thread, there may be no practical way to design protocols that
solve real problems without starting from what information a UI
needs to inform the user and what actions the user should be
able to take and then working backwards.
[...]
And the fact that those are 75% or more UI issues is probably no
longer an excuse.
   


Absolutely. There is clearly a good motivation: A particular UI choice 
should not *constrain* a protocol, so it is essential that we make sure 
that the protocol is not *dependent* on the UI. But that doesn't mean 
that UI issues should not *inform* protocol design. If we design a 
protocol such that it makes assumptions about what the UI will be able 
to provide without verifying those assumptions are realistic, we're in 
serious trouble. I think we've done that quite a bit in the 
security/application protocol space.



one of my personal peeves is the range of unsatisfactory
conditions -- from an older version of certificate format or
minor error to a verified revoked certificate -- that can
produce a message that essentially says "continuing may cause
unspeakable evil to happen to you" with an "ok" button (and only
an "ok" button).
   


OK, one last nostalgic anecdote about Eudora before I go back to 
finishing my spfbis Last Call writeup:


MacTCP (the TCP/IP stack for the original MacOS) required a handler 
routine for ICMP messages for some dumb reason; you couldn't just set it 
to null in your code. So Steve implemented one. Whenever an ICMP message 
came in for a current connection (e.g., Destination Unreachable), Eudora 
would put up a dialog box. It read "Eudora has received an ICMP 
Destination Unreachable message." The box had a single button. It read, 
"So What?"


Working for Steve was a hoot.

pr

--
Pete Resnick  http://www.qualcomm.com/~presnick/
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Tony Finch
John C Klensin j...@jck.com wrote:

 Please correct me if I'm wrong, but it seems to me that
 DANE-like approaches are significantly better than traditional
 PKI ones only to the extent to which:

   - The entities needing or generating the certificates
   are significantly more in control of the associated DNS
   infrastructure than entities using conventional CAs are
   in control of those CAs.

   - For domains that are managed by registrars or other
   third parties (I gather a very large fraction of them at
   the second level), whether one believes those registrars
   or other operators have significantly more integrity and
   are harder to compromise than traditional third party CA
   operators.

Yes, but there are some compensating pluses:

You can get a meaningful improvement to your security by good choice of
registrar (and registry if you have flexibility in your choice of name).
Other weak registries and registrars don't reduce your DNSSEC security,
whereas PKIX is only as secure as the weakest CA.

DNSSEC has tricky timing requirements for key rollovers. This makes it
hard to steal a domain without causing validation failures.

An attacker can use a compromise of your DNS infrastructure to get a
certificate from a conventional CA, just as much as they could compromise
DNSSEC-based service authentication.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Phillip Hallam-Baker
On Fri, Sep 6, 2013 at 9:20 AM, Pete Resnick presn...@qti.qualcomm.com wrote:

 On 9/6/13 12:54 AM, t.p. wrote:

 - Original Message -
 From: Phillip Hallam-Baker hal...@gmail.com
 Cc: IETF Discussion Mailing List ietf@ietf.org
 Sent: Friday, September 06, 2013 4:56 AM

  The design I think is practical is to eliminate all UI issues by
 insisting that encryption and decryption are transparent. Any email that
 can be sent encrypted is sent encrypted.


 That sounds like the 'End User Fallacy number one' that I encounter all
 the time in my work. If only everything were encrypted, then we would be
 completely safe.


 Actually, I disagree that this fallacy is at play here. I think we need to
 separate the concept of end-to-end encryption from authentication when it
 comes to UI transparency. We design UIs now where we get in the user's face
 about doing encryption if we cannot authenticate the other side and we need
 to get over that. In email, we insist that you authenticate the recipient's
 certificate before we allow you to install it and to start encrypting, and
 prefer to send things in the clear until that is done. That's silly and is
 based on the assumption that encryption isn't worth doing *until* we know
 it's going to be done completely safely. We need to separate the trust and
 guarantees of safeness (which require *later* out-of-band verification)
 from the whole endeavor of getting encryption used in the first place.


Actually, let me correct my earlier statement.

I believe that UIs fail because they require too much effort from the
user, and because they present too little information. Many times they
do both.

What I have been looking at in the short term is how to make sending and
receiving secure email require ZERO effort, and how to make initialization no
more difficult than installing and configuring a regular email app. And I
think I can show how that can be done. And I think that is a part of the
puzzle we can start working on in weeks without having to do
usability studies.


The other part, too little (or inconsistent) information is also a big
problem. Take the email I got from gmail this morning telling me that
someone tried to access my email from Sao Paulo. The message told me to
change my password but did not tell me that the attacker had known my
password. That is a problem of too little information.

The problem security usability often faces is that the usability mafia are
trained in how to make things easy to learn in ten minutes, because that is
how to sell a product. They are frequently completely clueless when it comes
to making software actually easy to use long term. Apple, Google and Microsoft
are all terrible at this. They all hide information the user needs to know.

I have some ideas on how to fix that problem as well, in fact I wrote a
whole chapter in my book suggesting how to make email security usable by
putting an analog of the corporate letterhead onto emails. But that part is
a longer discussion and focuses on authentication rather than
confidentiality.


The perfect is the enemy of the good. I think that the NSA/GCHQ has often
managed to discourage the use of crypto by pushing the standards community
to make the pudding so rich nobody can eat it.



-- 
Website: http://hallambaker.com/