Re: [cryptography] preventing protocol failings

2011-07-22 Thread Zooko O'Whielacronx
On Tue, Jul 12, 2011 at 5:25 PM, Marsh Ray ma...@extendedsubset.com wrote:

 Everyone here knows about the inherent security-functionality tradeoff. I 
 think it's such a law of nature that any control must present at least some 
 cost to the legitimate user in order to provide any effective security. 
 However, we can sometimes greatly optimize this tradeoff and provide the best 
 tools for admins to manage the system's point on it.

From http://www.hpl.hp.com/techreports/2009/HPL-2009-53.pdf :

“1. INTRODUCTION
Most people agree with the statement, ‘There is an inevitable tension
between usability and security.’ We don’t, so we set out to build a
useful tool to prove our point.”

 Hoping to find security for free somewhere is akin to looking for free 
 energy. The search may be greatly educational or produce very useful
 related discoveries, but at the end of the day the laws of
 thermodynamics are likely to remain satisfied.

If they've done what they claim (which I find plausible), then how
could it be possible? Where does this free energy come from?

I think it comes from taking advantage of information which is already
present but which is just lying about unused by the security
mechanism: expressions of intent that the user makes but that some
security mechanisms ignore.

For example, if you send a file to someone, then there is no need for
your tools to interrupt your workflow with security-specific
questions, like prompting for a password or access code, popping up a
dialog that says "This might be insecure! Are you sure?", or asking
you to specify a public key of your recipient. You've already
specified (as part of your *normal* workflow) what file and who to
send it to, and that information is sufficient for the security system
to figure out what to do. Likewise there is no need for the recipient
of the file to have her workflow interrupted by security issues.
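A minimal sketch of this idea (all names and the toy "encryption" here are invented for illustration): the recipient's name, already supplied as part of the normal send-a-file workflow, is enough for the tool to select the right key without any security-specific prompt.

```python
import hashlib

# The contact list the user already maintains for ordinary, non-security
# reasons.  The values stand in for real public keys.
CONTACTS = {
    "bob": b"bob-public-key-material",
    "alice": b"alice-public-key-material",
}

def send_file(contents: bytes, recipient: str) -> dict:
    """Package a file for a recipient with no security-specific prompts.

    The recipient name -- typed as part of the normal workflow -- selects
    the key.  (The 'encryption' here is only a tag binding the payload to
    a key fingerprint; a real tool would encrypt to the public key.)
    """
    key = CONTACTS[recipient]                     # intent already expressed
    fingerprint = hashlib.sha256(key).hexdigest()[:16]
    return {"to": recipient, "key_fpr": fingerprint, "payload": contents}

msg = send_file(b"quarterly-report", "bob")
```

The point of the sketch is only that no argument to `send_file` is there for security's sake; both were part of the human's existing expression of intent.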

Again, the point is that *you've already specified*. The human has
already communicated all of the necessary information to the computer.
Security tools that request extra steps are usually being deaf to what
the human has already told the computer. (Or else they are just doing
"CYA Security", a.k.a. "Blame The Victim Security", where if anything
goes wrong later they can say "Well, I popped up an 'Are You Sure?'
dialog box, so what happened wasn't my fault!")

Okay, now I admit that once we have security tools that integrate into
user workflow and take advantage of the information that is already
present, *then* we'll still have some remaining hard problems about
fitting usability and security together.

Regards,

Zooko
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-22 Thread James A. Donald

On 2011-07-23 7:29 AM, Marsh Ray wrote:
 What does the user see when they *are* under attack and the
 server authentication step fails?

Then his task fails.

 How do the security properties change when the user clicks
 on a link in a phishing email?

A phishing email is normally phishing for shared secrets.
Don't use shared secrets - recall our previous discussion
about EKE.

More generally, when someone contacts me on Skype, they can never 
successfully pretend to be one of my existing contacts.  Why should 
someone who contacts me by email be able to pretend to be one of my 
existing contacts?


 The design says
 "A webkey is the moral equivalent of a password, but one
 the user treats as a bookmark and that controls access to
 a specific object"

 So what do you do when one of these webkey passwords
 eventually does get disclosed? Can you revoke it or is it
 equivalent to the name of the document?

Let us suppose that everything in the network - users, mutable
files, immutable files, and anything else of interest - is
identified by Zooko's triangle.

Then you cannot revoke the globally unique name of the
document, which is both its decryption key, and the means by
which it is to be found.  You could however delete the
document, and issue a new trivially different document -
differing perhaps only in being version 1.1 in place of 1.0.
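A sketch of how such a self-certifying name could work (the XOR keystream is a stand-in for a real cipher; only the naming structure matters): the globally unique name embeds both the locator and the decryption key, so the name *is* the access right, and it cannot be revoked - only superseded by deleting and re-publishing.

```python
import hashlib
import os

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: repeat SHA-256(key).  A real design would use a real
    # stream cipher; the naming structure, not the cipher, is the point.
    block = hashlib.sha256(key).digest()
    return (block * (length // 32 + 1))[:length]

def publish(document: bytes):
    """Give a document a self-certifying, globally unique name.

    The name embeds the locator (hash of the ciphertext) and the
    decryption key, so whoever holds the name can both find and read
    the document.
    """
    key = os.urandom(32)
    ciphertext = bytes(a ^ b for a, b in
                       zip(document, _keystream(key, len(document))))
    locator = hashlib.sha256(ciphertext).hexdigest()
    return f"doc:{locator}:{key.hex()}", ciphertext

def fetch(name: str, ciphertext: bytes) -> bytes:
    _, locator, keyhex = name.split(":")
    # Self-verifying: the retrieved bytes must hash to the locator.
    assert hashlib.sha256(ciphertext).hexdigest() == locator
    key = bytes.fromhex(keyhex)
    return bytes(a ^ b for a, b in
                 zip(ciphertext, _keystream(key, len(ciphertext))))
```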

 How do you specify what file without an existing server
 authentication infrastructure?

 How do you specify who without presuming an existing user
 identity and authentication infrastructure?

We already identify users, documents, and web addresses by
lengthy and not very intelligible globally unique names.  For
example, the contacts list of your email program has short,
non-unique names for these contacts, and longer, less
memorable, globally unique names.  Similarly for the
bookmarks in your browser.

Why should these not contain keys, or hashes of keys?


Re: [cryptography] preventing protocol failings

2011-07-22 Thread James A. Donald

On 2011-07-23 7:29 AM, Marsh Ray wrote:

  How do the security properties change when the user clicks
  on a link in a phishing email?



On 2011-07-23 2:06 PM, James A. Donald wrote:

when someone contacts me on skype, they can never
successfully pretend to be one of my existing contacts. Why should
someone who contacts me by email be able to pretend to be one of my
existing contacts?


On Skype, when someone who is not one of your existing contacts attempts 
to instant message you, a quite different user interface pops up, one 
that prominently displays a "block" button.


After a while, end users become well trained to hit the "block" button on 
sight whenever it appears unexpectedly.



Re: [cryptography] preventing protocol failings

2011-07-13 Thread Ian G

On 13/07/11 9:25 AM, Marsh Ray wrote:

On 07/12/2011 04:24 PM, Zooko O'Whielacronx wrote:

On Tue, Jul 12, 2011 at 11:10 AM, Hill, Bradbh...@paypal-inc.com
wrote:


I have found that when H3 meets deployment and use, the reality
too often becomes: Something's gotta give. We haven't yet found
a way to hide enough of the complexity of security to make it
free, and this inevitably causes conflicts with goals like
adoption.


This is an excellent objection. I think this shows that most crypto
systems have bad usability in their key management (SSL, PGP). People
don't use such systems if they can help it, and when they do they
often use them wrong.


But the entire purpose of securing a system is to deny access to the
protected resource.


And that's why it doesn't work;  we end up denying access to the 
protected resource.


Security is just another function of business, it's not special.  The 
purpose of security is to improve the profitability of the resource. 
Protecting it is one tool to serve security & profits, and 
re-engineering it so it doesn't need any protection is another tool... 
There are many such tools :)




In the case of systems susceptible to potential
phishing attacks, we even require that the user themselves be the one to
decline access to the system!

Everyone here knows about the inherent security-functionality tradeoff.
I think it's such a law of nature that any control must present at least
some cost to the legitimate user in order to provide any effective
security. However, we can sometimes greatly optimize this tradeoff and
provide the best tools for admins to manage the system's point on it.



Not at all.  I view this as hubris from those struggling to make 
security work from a technical pov, from within the box.  Once you start 
to learn the business and the human interactions, you are looking 
outside your techie box.  From the business, you discover many 
interesting things that allow you to transfer the info needed to make 
the security look free.


A couple of examples:  Skype works because people transfer their 
introductions first over other channels ("hey, my handle is bobbob"), and 
then secondly over the packet network.  It works because it uses the 
humans to do what they do naturally.


2nd.  When I built a secure payment system, I was able to construct a 
complete end-to-end public infrastructure without central points of 
trust (like with CAs).  And I was able to do it completely.  The reason 
is that the start of the conversation was always a. from person to 
person, and b. concerning a financial instrument.  So the financial 
instrument was turned into a contract with embedded crypto keys.  Alice 
hands Bob the contract, and his software then bootstraps to fully 
secured comms.




Hoping to find security for free somewhere is akin to looking for free
energy. The search may be greatly educational or produce very useful
related discoveries, but at the end of the day the laws of
thermodynamics are likely to remain satisfied.



:)


Those looking for no-cost or extremely low-cost security either don't
place a high value on the protected resource or, given the options they
have imagined, believe they may profit more by the system being in the
less secure state. Sometimes they haven't factored all the options into
their cost-benefit analysis. Sometimes it never occurs to them that the
cost of a security failure can be much, much greater than the nominal
value of the thing being protected (ask Sony).


No, it's much simpler than that:  denying someone security because they 
don't push the right buttons is still denying them security.  The 
summed benefit of internet security protocols typically goes up with the 
number of users, not with the reduction of flaws.  The techie view has 
it backwards.


...

So even if you're a web site just selling advertising and your users'
personal information, security is a feature that attracts and retains
users, specifically those who value their _own_ stuff. (Hint hint: this
is the kind with money to spend with your advertisers.) Smart people
value their own time most of all and would find it a major pain to have
to put everything back in order after some kind of compromise.


This is a curiosity to me;  has anyone actually figured out how to find 
a marketplace full of security-conscious users?  Was there ever such a 
product where vendors successfully relied upon the users' good security 
sense?



...

I hope there was a coherent point in all of that somewhere :-) I know
I'm preaching to the choir but Brad seemed to be asking for arguments of
this sort.




:)


iang


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Ian G

On 13/07/11 3:10 AM, Hill, Brad wrote:

Re: H3, "There is one mode and it is secure"

I have found that when H3 meets deployment and use, the reality too often becomes: 
Something's gotta give.  We haven't yet found a way to hide enough of the 
complexity of security to make it free, and this inevitably causes conflicts with goals 
like adoption.

An alternate or possibly just auxiliary hypothesis I've been promoting on how 
to respond to these pressures is:

Build two protocols and incentivize.

That is:

Recognize in advance that users will demand an insecure mode and give it to 
them.


I've heard of users demanding easy modes, but never demanding insecure 
modes :)



Make it a totally different protocol, not an option, mode or negotiation of 
the secure protocol.
Encourage appropriate self-sorting between the secure and insecure 
protocols.

Making two completely different protocols means that neither has to pay the 
complexity cost of the other mode, (avoiding e.g. the state explosion Zooko 
described with ZRTP) eliminates or greatly reduces introduced attack classes 
around negotiation and downgrade, and makes the story around managing and 
eventually deprecating legacy clients simpler.

The self-sorting is the tricky bit.  Google Checkout and SXIP are good examples 
of this.   Google Checkout allowed both signed and unsigned shopping carts.  
Unsigned shopping carts were dead-easy to implement, but had a higher fee 
structure than the signed carts.  This meant that it was easy to join the 
ecosystem as a prototyper, hobbyist or small and unsophisticated business.  But 
it also meant that as soon as your transaction volume got large enough, it was 
worthwhile to move to the secure version.   SXIP built the incentive between 
protocols by having additional features / attributes that were only available 
to users of the secure protocol.


I would never have done that.  I would have had signed shopping carts, 
period.  I would have just set the fee structure on whether I recognise 
the signer of the shopping cart, or not.


(I'm not saying it is wrong, just that there is an easy way to get the 
same benefit without having two modes...)



The other advantage of building two protocols is that if/when the insecure 
protocol actually becomes a target of attack, the secure version is ready to 
go, deployed, proven, ready for load, with libraries, sample code, the works 
needed for a smooth transition.

This is a bit like Ian's "Build one to throw away", except that I'd say, build 
them both at the same time, and maybe you won't need to throw away the insecure one.


I know it sounds good, but has it ever worked?  Has any vendor ever been 
successfully attacked through a weak demo system, and then rolled out a 
new one *which happened to be prepared in time for this eventuality* ?


iang


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Hill, Brad
 I know it sounds good, but has it ever worked?  Has any vendor ever been 
 successfully attacked through a weak demo system, and then rolled out a 
 new one *which happened to be prepared in time for this eventuality* ?

Not a shining example of secure protocol design, but here's one example:

http://developers.facebook.com/blog/post/497

Although I'm not aware of FB applying proactive incentive models for users of 
the legacy auth prior to simply announcing an EOL. 

And while there is no history of exploitation against unsigned carts I know of, 
Google Checkout did deploy both protocols simultaneously and to this day 
maintains both APIs; one requires only a working knowledge of HTML while the 
other requires API programming experience.  
http://checkout.google.com/seller/integrate_custom.html 

And I'm not saying put two modes in the protocol.  I'm saying put two modes in 
your business model, and use a distinct protocol for each.  The business model 
is where the pressure for multiple modes comes from, so expect it and manage it 
at that layer, instead of letting it pollute your protocol.  H3 is great advice 
from a very narrow perspective of crypto protocol design, but for a great many 
systems it either pretends that business pressures relating to complexity don't 
exist, or that the business people answer to the crypto people.  [I know it 
sounds good, but has it ever worked? ;) ]

-Brad


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Marsh Ray

On 07/13/2011 01:01 AM, Ian G wrote:

On 13/07/11 9:25 AM, Marsh Ray wrote:


But the entire purpose of securing a system is to deny access to
the protected resource.


And that's why it doesn't work; we end up denying access to the
protected resource.


Denying to the attacker - good.

Denying to the legitimate user - unfortunately unavoidable some of the
time. The main purpose of authentication is to decide if the party is,
in fact, the legitimate user. So that process can't presume the outcome
in the interest of user experience.

I mis-type my password a significant percentage of the time. Of course I
know it's me, but it would be absurd for the system to still log me in.
Me being denied access is a "bad user experience"(TM) (especially compared
to a system with no login authentication at all), but it's also necessary
for security.

However, a scheme which allowed me to log in with N correct password
characters out of M could still be quite strong (with good choices for N
and M) but it would allow for tuning out the bad user experiences to the
degree allowed by the situation-specific security requirements.
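A toy illustration of the N-correct-out-of-M idea (assuming per-character comparison is even possible, which real salted-hash password storage would forbid - that trade-off is exactly what such a scheme must confront):

```python
def close_enough(attempt: str, actual: str, max_errors: int = 1) -> bool:
    """Accept a login when the attempt has at most `max_errors` wrong
    characters, position for position (same length only).

    With M = len(actual) and N = M - max_errors, this is "N correct
    characters out of M".  Tuning max_errors trades a little keyspace
    for fewer bad-user-experience rejections.
    """
    if len(attempt) != len(actual):
        return False
    errors = sum(1 for a, b in zip(attempt, actual) if a != b)
    return errors <= max_errors
```

For an 8-character password over, say, 64 symbols, allowing one error multiplies the acceptable space by only about 8*63, a modest loss against the gain in usability - the kind of situation-specific tuning described above.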


Security is just another function of business, it's not special.


I disagree, I think it depends entirely on the business. Quite often
there are multiple parties involved with very divergent interests.


The purpose of security is to improve the profitability of the
resource.


Often the purpose is to reduce existential risks.

I think it's such a law of nature that any control must present at
least some cost to the legitimate user in order to provide any
effective security. However, we can sometimes greatly optimize this
tradeoff and provide the best tools for admins to manage the
system's point on it.


Not at all. I view this as hubris from those struggling to make
security work from a technical pov, from within the box. Once you
start to learn the business and the human interactions, you are
looking outside your techie box. From the business, you discover
many interesting things that allow you to transfer the info needed to
make the security look free.


Well, you're right, except that it's not so much hubris as it is being
aware of one's limitations. The more general-purpose the protocol or
library is that you're working on, the less you can know about the
scenarios in which it will eventually be deployed.

You can't even take for granted that there even is a business or
primarily financial interest on either endpoint. The endpoints needing
to securely communicate may be a citizen and their government, an
activist and a human rights organization, a soldier and his weapons
system, or a patient and their embedded drug pump.


A couple of examples: Skype works because people transfer their
introductions first over other channels, hey, my handle is bobbob,
and then secondly over the packet network. It works because it uses
the humans to do what they do naturally.


Yeah, it's a big win when the users can bring their pre-established
relationships to bootstrap the secure authentication. This is the way
the Main St. district worked in small towns - you knew the hardware
store guy, you knew the barber, etc. Even if not, an unfamiliar business
wouldn't be around long without the blessing of the mayor and town cop.

But this is the exact opposite model that Netscape (and friends) used
for ecommerce back in the early 90s. They recognized that the key
property necessary to enable the ecommerce explosion was for users to
feel comfortable doing business with merchants with which they had no
prior relationship at all. In order for this to happen there needed to
be a trusted introducer system and the CA system was born. This system
sucks eggs for many things for which it is used, but it is an undeniable
success at its core business goal: the lock icon has convinced users
that it's safe enough to enter their credit card info online.


2nd. When I built a secure payment system, I was able to construct a
complete end-to-end public infrastructure without central points of
trust (like with CAs). And I was able to do it completely. The
reasons is that the start of the conversation was always a. from
person to person, and b. concerning a financial instrument. So the
financial instrument was turned into a contract with embedded crypto
keys. Alice hands Bob the contract, and his softwate then bootstraps
to fully secured comms.


Ask yourself if just maybe you picked one of the easier problems to
solve? One where the rules and the parties' motivations were all
well-understood in advance?


No, it's much simpler than that: denying someone security because
they don't push the right buttons is still denying them security.


I don't understand. Are you speaking of denying them access to the
protected resource, or are you saying they are denied some nebulous form
of security in general?


The summed benefit of internet security protocols typically goes up
with the number of users, not with the reduction of flaws. The
techie view has it backwards.

Re: [cryptography] preventing protocol failings

2011-07-13 Thread Peter Gutmann
Ralph Holz h...@net.in.tum.de writes:

The question, after all, is how often do you really read the SSH warnings?
How often do you just type "on" or "retry" or press "accept"? What if you're the
admin who encounters this maybe 2-3 times a day?

The August (I think) issue of ;login, the Usenix magazine (
http://www.usenix.org/publications/login/, it's not out yet), has a brief
article on this.  The answer is effectively zero.

Peter.



Re: [cryptography] preventing protocol failings

2011-07-13 Thread Peter Gutmann
Andy Steingruebl a...@steingruebl.com writes:

The way it went for everyone I knew that went through it was:

1. Sniffing was sort of a problem, but most people didn't care
2. Telnet was quite a bit of a pain, especially when using NAT, and wanting
to do X11 forwarding
3. Typing in your password again and again over telnet (which did have
advantages over rlogin/rsh) was a pain.

Enter SSH.  It solved #1, but the big boon that got sysadmins to figure it out
and install it was that it *really* solved #2 and #3, hence major adoption.

Uhh, this seems like a somewhat unusual reinterpretation of history.  SSH was
primarily an encrypted telnet, and everything else was an optional add-on
(when it was first published it was almost rejected with the comment "this is
just another encrypted telnet").  The big boon to sysadmins was that (a) you
could now safely type in your root password without having to walk to the room
the box was in to sit at the console, and (b) you could build and run it on
pretty much everything without any hassle or cost.  That combination was what
made it universal.

Peter.



Re: [cryptography] preventing protocol failings

2011-07-13 Thread James A. Donald

On 2011-07-13 8:43 PM, d...@geer.org wrote:

I'll certainly agree that security cannot be made free,
on the obvious grounds that security's costs are decision
making under uncertainty plus enforcement of those decisions.


Skype is an excellent example of free security.

Skype has not one click for security


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Peter Gutmann
Andy Steingruebl a...@steingruebl.com writes:

Hmm, do you know that many sysadmins outside high-security-conscious areas
really cared about typing the root password over telnet, especially back
in 1997?  I don't.  Academia and banks cared, and often deployed things like
SecurID or OPIE/S/KEY to get away from this problem, but your average IT shop
didn't care at all.

From a discussion on an international sysadmin list (most of whom were
non-academic) in about 1995 (not 1997), pretty much everyone went to ssh by
osmosis, no matter who you worked for.  The nice thing was that you could
retrofit it to almost any existing system (there's a patch in the ssh1 code
for 386BSD 0.1 that I contributed, for example, and that was a 1991 or 1992
software release), shut off telnet, and have one less thing to worry about.

Maybe this calls for a survey/retrospective on reasons for adoption of SSH?
:)

Maybe we travel in different circles, but both in sysadmin circles and in
instances where it's come up in the past on security lists as an example of a
successful security protocol, its reason for success has always been presented
as a telnet replacement (and other usage followed from that).

Peter.


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Andy Steingruebl
On Wed, Jul 13, 2011 at 8:40 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:

 Maybe we travel in different circles, but both in sysadmin circles and in
 instances where it's come up in the past on security lists as an example of a
 successful security protocol, its reason for success has always been presented
 as a telnet replacement (and other usage followed from that).

Right, I agree it was a telnet replacement, but my argument is that
the value proposition wasn't just, or even mostly, security.  It plain
old just worked better; the security benefits were just a nice
addition.

- Andy


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Kevin W. Wall
On Wed, Jul 13, 2011 at 11:39 AM, Andy Steingruebl a...@steingruebl.com wrote:
 On Wed, Jul 13, 2011 at 7:11 AM, Peter Gutmann
 pgut...@cs.auckland.ac.nz wrote:
 Andy Steingruebl a...@steingruebl.com writes:

The way it went for everyone I knew that went through it was:

1. Sniffing was sort of a problem, but most people didn't care
2. Telnet was quite a bit of a pain, especially when using NAT, and wanting
to do X11 forwarding
3. Typing in your password again and again over telnet (which did have
advantages over rlogin/rsh) was a pain.

Enter SSH.  It solved #1, but the big boon that got sysadmins to figure it out
and install it was that it *really* solved #2 and #3, hence major adoption.

 Uhh, this seems like a somewhat unusual reinterpretation of history.  SSH was
 primarily an encrypted telnet, and everything else was an optional add-on
 (when it was first published it was almost rejected with the comment "this is
 just another encrypted telnet").  The big boon to sysadmins was that (a) you
 could now safely type in your root password without having to walk to the
 room the box was in to sit at the console, and (b) you could build and run it
 on pretty much everything without any hassle or cost.  That combination was
 what made it universal.

 Hmm, do you know that many sysadmins outside high-security-conscious
 areas really cared about typing the root password over telnet,
 especially back in 1997?  I don't.  Academia and banks cared, and
 often deployed things like SecurID or OPIE/S/KEY to get away from this
 problem, but your average IT shop didn't care at all.

 Or are you really suggesting we got massive SSH adoption because of
 the security properties?   Certainly not in my experience...

 Maybe this calls for a survey/retrospective on reasons for adoption of SSH? :)

I can't speak to the experience of other companies, but I had a bunch of
sysadmins reporting to me at the time, and my recollection is that the main
reason SSH caught on over other secure versions of telnet or rsh is that it
could be used in scripts without having to place the user's password in
plaintext anywhere. That was a major improvement, because SSH allowed one
to authenticate to a remote system and execute a command without hard-coding
passwords or requiring manual input of said password. As such, it was ideal
for running automated scripts from crontab, at bootup, etc.

The fact that it did all this over a secure channel was really not that
important to the sysadmins who worked with me. In fact, I can't recall a
single one of them who was concerned about that. Then again, network
sniffing was pretty rare back then, but they were definitely concerned
about leaving passwords in scripts where some unauthorized person could
see them. (And yes, this meant that they didn't protect the SSH private
key with a passphrase...a practice that is still common today when SSH
is used for scripting.)

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Hill, Brad
Re: H3, "There is one mode and it is secure"

I have found that when H3 meets deployment and use, the reality too often 
becomes: Something's gotta give.  We haven't yet found a way to hide enough 
of the complexity of security to make it free, and this inevitably causes 
conflicts with goals like adoption.

An alternate or possibly just auxiliary hypothesis I've been promoting on how 
to respond to these pressures is: 

Build two protocols and incentivize.

That is:

   1. Recognize in advance that users will demand an insecure mode and give it 
to them.
   2. Make it a totally different protocol, not an option, mode or negotiation 
of the secure protocol.
   3. Encourage appropriate self-sorting between the secure and insecure 
protocols.

Making two completely different protocols means that neither has to pay the 
complexity cost of the other mode, (avoiding e.g. the state explosion Zooko 
described with ZRTP) eliminates or greatly reduces introduced attack classes 
around negotiation and downgrade, and makes the story around managing and 
eventually deprecating legacy clients simpler.  

The self-sorting is the tricky bit.  Google Checkout and SXIP are good examples 
of this.   Google Checkout allowed both signed and unsigned shopping carts.  
Unsigned shopping carts were dead-easy to implement, but had a higher fee 
structure than the signed carts.  This meant that it was easy to join the 
ecosystem as a prototyper, hobbyist or small and unsophisticated business.  But 
it also meant that as soon as your transaction volume got large enough, it was 
worthwhile to move to the secure version.   SXIP built the incentive between 
protocols by having additional features / attributes that were only available 
to users of the secure protocol.
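The fee-based self-sorting could be sketched like this (the MAC scheme, merchant key, and fee rates are invented for illustration, not Google Checkout's actual API): an unsigned cart is trivially easy to submit, while a correctly signed cart earns the cheaper tier.

```python
import hashlib
import hmac
import json

MERCHANT_KEY = b"per-merchant-shared-secret"   # hypothetical credential

def sign_cart(cart: dict) -> bytes:
    # Canonicalize the cart, then MAC it with the merchant's key.
    blob = json.dumps(cart, sort_keys=True).encode()
    return hmac.new(MERCHANT_KEY, blob, hashlib.sha256).digest()

def submit_cart(cart: dict, signature=None) -> dict:
    """Route a cart to the cheaper signed tier when its MAC verifies,
    otherwise fall back to the easy-but-costlier unsigned tier."""
    if signature is not None and hmac.compare_digest(signature, sign_cart(cart)):
        return {"tier": "signed", "fee_rate": 0.02}    # invented rates
    return {"tier": "unsigned", "fee_rate": 0.03}

cart = {"items": [{"sku": "widget", "qty": 2}], "total_cents": 1998}
```

The incentive lives entirely in the business layer (the fee rate); neither tier's protocol has to carry the other's complexity.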

The other advantage of building two protocols is that if/when the insecure 
protocol actually becomes a target of attack, the secure version is ready to 
go, deployed, proven, ready for load, with libraries, sample code, the works 
needed for a smooth transition.

This is a bit like Ian's "Build one to throw away", except that I'd say, build 
them both at the same time, and maybe you won't need to throw away the insecure 
one.

Brad Hill



Re: [cryptography] preventing protocol failings

2011-07-12 Thread Nico Williams
On Tue, Jul 12, 2011 at 12:10 PM, Hill, Brad bh...@paypal-inc.com wrote:
 Re: H3, "There is one mode and it is secure"

 I have found that when H3 meets deployment and use, the reality too often 
 becomes: Something's gotta give.  We haven't yet found a way to hide enough 
 of the complexity of security to make it free, and this inevitably causes 
 conflicts with goals like adoption.

 An alternate or possibly just auxiliary hypothesis I've been promoting on how 
 to respond to these pressures is:

 Build two protocols and incentivize.

 [...]

 Making two completely different protocols means that neither has to pay the 
 complexity cost of the other mode, (avoiding e.g. the state explosion Zooko 
 described with ZRTP) eliminates or greatly reduces introduced attack classes 
 around negotiation and downgrade, and makes the story around managing and 
 eventually deprecating legacy clients simpler.

Two protocols... still has a downgrade attack: either the user or user
agent will have to choose one of the protocols.  If the user, then we've
failed to make the protocols user-friendly (unless only they could
make the decision).  If the user agent, then we still have a downgrade.
You might intend for the user to choose, but what if the user-agent
developers decide to help the user choose?  - downgrade.

This is an excellent demonstration of Jon Callas' point regarding
complexity: we can only move it about.  Moving it to the user is not
really friendly to them, and it will result in someone later
innovating by moving that complexity back into the user-agent.

For the developer the simplest protection against downgrade attacks is
to default to a secure setting and fob off any fallback-to-insecure
decision on the user.  This works whether there's one protocol with
two options or two with none.
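The default-secure, explicit-fallback rule is easy to state in code. A
minimal sketch (mine, not from the thread); the protocol names and the
use of ConnectionError are illustrative:

```python
# "Default to a secure setting and fob off any fallback-to-insecure
# decision on the user" -- for a hypothetical two-protocol client.

SECURE, INSECURE = "v2-secure", "v1-insecure"

def choose_protocol(peer_supports, confirm_fallback):
    """Prefer the secure protocol; use the insecure one only when the
    peer lacks the secure protocol AND the user explicitly opts in."""
    if SECURE in peer_supports:
        return SECURE
    if INSECURE in peer_supports and confirm_fallback():
        return INSECURE
    raise ConnectionError("no mutually acceptable protocol")

# A user agent that silently answers "yes" here has reintroduced the
# downgrade attack -- the decision, not the mechanism, is the weak point.
print(choose_protocol({SECURE, INSECURE}, lambda: False))  # v2-secure
print(choose_protocol({INSECURE}, lambda: True))           # v1-insecure
```

The same sketch covers both shapes Nico mentions: "one protocol with two
options" just changes what the strings name, not where the decision sits.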

 The self-sorting is the tricky bit.  Google Checkout and SXIP are good 
 examples of this.   Google Checkout allowed both signed and unsigned shopping 
 carts.  Unsigned shopping carts were dead-easy to implement, but had a higher 
 fee structure than the signed carts.  This meant that it was easy to join the 
 ecosystem as a prototyper, hobbyist or small and unsophisticated business.  
 But it also meant that as soon as your transaction volume got large enough, 
 it was worthwhile to move to the secure version.   SXIP built the incentive 
 between protocols by having additional features / attributes that were only 
 available to users of the secure protocol.

Here the user is in a position to decide, and there are few such users
anyway; they can be expected to understand or educate themselves
about these issues.

If we were talking about all the users of the Internet (who do have to
grok the difference between http and https, and understand what to
do about invalid/expired/self-signed certs, what trust anchors are,
and so on)...

The main difference between "two protocols, no options" and "one
protocol, two options" may well be that the former is packet filter
friendly, while the latter would require deep packet inspection for
effective filtering.  I wish this reason didn't have to matter much.
There is no difference as to UIs though: in both cases the two choices
can be forced on the user or hidden from them (like when you select
whether to use TLS in IM applications, or choose between or act upon
the service's choice of http or https in the browser).  Neither
approach fundamentally prevents downgrade attacks.

Nico
--


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Andy Steingruebl
On Tue, Jul 12, 2011 at 2:24 PM, Zooko O'Whielacronx zo...@zooko.com wrote:

 When systems come with good usability properties in the key management
 (SSH, and I modestly suggest ZRTP and Tahoe-LAFS) then we don't see
 this pattern. People are willing to use secure tools that have a good
 usable interface. Compare HTTPS-vs-HTTP to SSH-vs-telnet (this
 observation is also due to Ian Grigg).

I reject the SSH key management example though.  Especially if you've
ever maintained a large number/variety of unix servers running SSH,
where hardware failures, machine upgrades, etc. lead to frequent SSH
key churn.  In those cases the only solutions are:

1. Automate key distribution to things like the /etc/known_hosts file
via means that aren't really built into or supported by SSH itself;
they are an ad hoc add-on.
2. Go to insane pains to ensure that keys don't ever change. Quite
tricky when you're trying to automate OS installs, etc.
3. Use keys-in-DNS for this, which defaults back to something quite
easy to attack.
4. Give up. Accept all keys without fail and just assume you're not
getting owned.

In practice unskilled sysadmins in large environments go with #4 most
of the time, and you're right back to where you started... you can't
defend against active attackers.
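The bookkeeping behind option 1 can be sketched in a few lines; the
hostnames and key strings below are made up. The point is that key churn
becomes an explicit list to verify out of band, instead of being
silently accepted as in option 4:

```python
# Sketch (mine, not Andy's) of central known-hosts distribution with
# churn surfaced rather than swallowed.

def merge_known_hosts(current, scanned):
    """Merge freshly scanned host keys into the current known-hosts map.
    Returns the new map plus the hosts whose keys changed -- the churn
    that option 4 would accept without question."""
    changed = [h for h, k in scanned.items()
               if h in current and current[h] != k]
    merged = {**current, **scanned}
    return merged, changed

current = {"web1": "ssh-ed25519 AAAA...old"}
scanned = {"web1": "ssh-ed25519 AAAA...new",   # reinstalled box
           "web2": "ssh-ed25519 AAAA...w2"}    # new box
merged, changed = merge_known_hosts(current, scanned)
print(changed)  # ['web1'] -- flag for out-of-band verification
```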

- Andy


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Nico Williams
On Tue, Jul 12, 2011 at 5:36 PM, Andy Steingruebl a...@steingruebl.com wrote:
 I reject the SSH key management example though.  Especially if you've
 ever maintained a large number/variety of unix servers running SSH,
 where hardware failures, machine upgrades, etc. lead to frequent SSH
 key churn.  In those cases the only solutions are:

 1. Automate key distribution to things like the /etc/known_hosts file
 via means that aren't built into or supported by SSH itself really,
 they are an adhoc add-on.
 2. Go to insane pains to ensure that keys don't ever change. Quite
 tricky when you're trying to automate OS installs, etc.
 3. Use keys-in-DNS for this, which defaults back to something quite
 easy to attack.
 4. Give up. Accept all keys without fail and just assume you're not
 getting owned.

 In practice unskilled sysadmins in large environments go with #4 most
 of the time, and you're right back to where you started... you can't
 defend against active attackers.

I've seen several cases (two of them high profile Wall St. banks)
where people fallback on SSHv2 with GSS keyex.  This way you get to
avoid SSHv2 public host keys altogether.
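For illustration: in GSS-keyex-patched OpenSSH builds (the patch ships
with several distributions; it is not in upstream OpenSSH), this is a
client configuration change rather than a key-distribution exercise. The
host pattern below is hypothetical:

```
# ~/.ssh/config -- illustrative sketch; requires a GSS-keyex-patched
# OpenSSH and a working Kerberos realm.
Host *.example.com
    GSSAPIAuthentication yes
    GSSAPIKeyExchange yes
    # The host authenticates via its Kerberos service key, so there is
    # no SSH host public key to distribute, pin, or churn.
```

The trade, of course, is that the key-management problem moves into the
Kerberos KDC rather than disappearing.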

Nico
--


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Ian G

On 13/07/11 8:36 AM, Andy Steingruebl wrote:

On Tue, Jul 12, 2011 at 2:24 PM, Zooko O'Whielacronxzo...@zooko.com  wrote:


When systems come with good usability properties in the key management
(SSH, and I modestly suggest ZRTP and Tahoe-LAFS) then we don't see
this pattern. People are willing to use secure tools that have a good
usable interface. Compare HTTPS-vs-HTTP to SSH-vs-telnet (this
observation is also due to Ian Grigg).


I reject the SSH key management example though.


The SSH-vs-telnet example was back in the mid-90s where there were two 
alternatives:  secure telnet and this new-fangled thing called SSH.


What's instructive is this:  secure telnet told the user to do 
everything correctly, and was too much trouble.  SSH on the other hand 
got up and going with as little trouble as it could think of at the 
time.  Basically it used the TOFU model, and that worked.
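The TOFU policy Ian credits for SSH's win fits in a few lines. A sketch
under the assumption of an in-memory store (OpenSSH persists this in
known_hosts), not OpenSSH's actual implementation:

```python
# Minimal trust-on-first-use host key check.

known = {}  # host -> fingerprint; persisted to disk in real life

def tofu_check(host, fingerprint):
    """First contact: remember the key (the one-time leap of faith).
    Later contacts: the key must match what we remembered."""
    if host not in known:
        known[host] = fingerprint
        return "accepted-first-use"
    if known[host] == fingerprint:
        return "ok"
    return "MISMATCH"   # possible MITM -- or a reinstalled host

print(tofu_check("hostA", "ab:cd"))  # accepted-first-use
print(tofu_check("hostA", "ab:cd"))  # ok
print(tofu_check("hostA", "ff:00"))  # MISMATCH
```

The last line is exactly the large-scale admin problem Andy raises:
TOFU cannot tell a reinstalled machine from an attacker.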


The outstanding factoid is that SSH so whipped the secure telnet product 
that these days it's written out of history.


(Granted, SSH wasn't really thinking about the large scale admin issues 
that came later.)


iang


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Andy Steingruebl
On Tue, Jul 12, 2011 at 3:56 PM, Ian G i...@iang.org wrote:

 The SSH-vs-telnet example was back in the mid-90s where there were two
 alternatives:  secure telnet and this new-fangled thing called SSH.

The way it went for everyone I knew who went through it was:

1. Sniffing was sort of a problem, but most people didn't care
2. Telnet was quite a bit of a pain, especially when using NAT, and
wanting to do X11 forwarding
3. Typing in your password again and again over telnet (which did have
advantages over rlogin/rsh) was a pain.

Enter SSH.  It solved #1, but the big boon for the sysadmins who figured
it out and installed it was that it *really* solved #2 and #3, hence the
major adoption.  I know this wasn't everyone's reason to adopt it;
some people did so purely for security.  That said, the major
threat was the passive attacker, the person running a sniffer on some
network.  Against them SSH was incredibly effective.

- Andy


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Marsh Ray

On 07/12/2011 04:24 PM, Zooko O'Whielacronx wrote:

On Tue, Jul 12, 2011 at 11:10 AM, Hill, Bradbh...@paypal-inc.com
wrote:


I have found that when H3 meets deployment and use, the reality
too often becomes: Something's gotta give.  We haven't yet found
a way to hide enough of the complexity of security to make it
free, and this inevitably causes conflicts with goals like
adoption.


This is an excellent objection. I think this shows that most crypto
systems have bad usability in their key management (SSL, PGP). People
don't use such systems if they can help it, and when they do they
often use them wrong.


But the entire purpose of securing a system is to deny access to the
protected resource. In the case of systems susceptible to potential
phishing attacks, we even require that the user themselves be the one to
decline access to the system!

Everyone here knows about the inherent security-functionality tradeoff.
I think it's such a law of nature that any control must present at least
some cost to the legitimate user in order to provide any effective
security. However, we can sometimes greatly optimize this tradeoff and
provide the best tools for admins to manage the system's point on it.

Hoping to find security for free somewhere is akin to looking for free
energy. The search may be greatly educational or produce very useful
related discoveries, but at the end of the day the laws of
thermodynamics are likely to remain satisfied.

Those looking for no-cost or extremely low-cost security either don't
place a high value on the protected resource or, given the options they
have imagined, believe they may profit more by the system being in the
less secure state. Sometimes they haven't factored all the options into 
their cost-benefit analysis. Sometimes it never occurs to them that the 
cost of a security failure can be much much greater than the nominal 
value of the thing being protected (ask Sony).


It was once said that nuclear physics would provide electric power that
was too cheap to meter, i.e., they might not even bother sending you a
utility bill. Obviously that didn't happen. If your device's power
requirements don't justify power from the nuke plant the better question
might be how to make the battery-based options as painless as possible.
Toys used to always come batteries not included. Now toys often
include a battery, but the batteries don't seem to have gotten much
better. Toy companies probably found that a potential customer being
able to press the button in the store display was worth the cost of a
bulk-rate battery.

So even if you're a web site just selling advertising and your users'
personal information, security is a feature that attracts and retains
users, specifically those who value their _own_ stuff. (Hint hint: this
is the kind with money to spend with your advertisers.) Smart people
value their own time most of all and would find it a major pain to have
to put everything back in order after some kind of compromise. Google
knows exactly what they're doing when they do serious security audits
and deploy multiple factors of authentication even for their free Gmail
users. This difference in mindset is why Hotmail and Yahoo! are now
also-rans.

I hope there was a coherent point in all of that somewhere :-) I know
I'm preaching to the choir but Brad seemed to be asking for arguments of
this sort.

- Marsh


Re: [cryptography] preventing protocol failings

2011-07-12 Thread James A. Donald

On 2011-07-13 7:24 AM, Zooko O'Whielacronx wrote:

On Tue, Jul 12, 2011 at 11:10 AM, Hill, Bradbh...@paypal-inc.com  wrote:


I have found that when H3 meets deployment and use, the reality too often becomes: 
Something's gotta give.  We haven't yet found a way to hide enough of the 
complexity of security to make it free, and this inevitably causes conflicts with goals 
like adoption.


This is an excellent objection. I think this shows that most crypto
systems have bad usability in their key management (SSL, PGP). People
don't use such systems if they can help it, and when they do they
often use them wrong.


Considering how often engineers have screwed up key management, asking 
end users to manage keys is guaranteed to fail.


All new systems combine key management with address management, so that 
the user faces no extra clicks to keep his keys in sync with his 
addresses.  For example a bitcoin address looks like 
1Kaa6Y7F61aQER8jZBoBtfEVscAQ1KjAGk  (a petname is associated with each 
address)


and a tor hidden service looks like
http://ianxz6zefk72ulzz.onion/index.php  (Tor relies on the Mozilla 
bookmarking system for petnames, while bitcoin has its own address 
management UI to enter petnames)
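The pattern James describes can be sketched as a self-authenticating
address plus a local petname table. The hash truncation and encoding
below are illustrative only, not Bitcoin's or Tor's actual formats:

```python
# Self-authenticating address + petname table sketch.
import hashlib

def address_for(pubkey_bytes):
    """Derive the address from the key itself, so keeping addresses in
    sync keeps keys in sync for free -- no extra clicks for the user."""
    return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

petnames = {}  # the user's own names -> addresses

addr = address_for(b"alice-public-key")
petnames["Alice"] = addr

# Looking up "Alice" later yields an address that commits to her key;
# a substituted key would produce a different address.
assert address_for(b"alice-public-key") == petnames["Alice"]
assert address_for(b"mallory-public-key") != petnames["Alice"]
```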





Re: [cryptography] preventing protocol failings

2011-07-12 Thread James A. Donald

On 2011-07-13 3:10 AM, Hill, Brad wrote:

Recognize in advance that users will demand an insecure mode and give it to 
them.



I don't see any demand for an insecure mode for tor hidden services, and 
though SSH provides an insecure mode, no one uses it.


If users demand an insecure mode, it is because your secure mode has bad 
user interface.




Re: [cryptography] preventing protocol failings

2011-07-12 Thread James A. Donald

On 2011-07-13 8:36 AM, Andy Steingruebl wrote:

I reject the SSH key management example though.  Especially if you've
ever maintained a large number/variety of unix servers running SSH,
where hardware failures, machine upgrades, etc. lead to frequent SSH
key churn.  In those cases the only solutions are:

1. Automate key distribution to things like the /etc/known_hosts file
via means that aren't really built into or supported by SSH itself;
they are an ad hoc add-on.
2. Go to insane pains to ensure that keys don't ever change. Quite
tricky when you're trying to automate OS installs, etc.
3. Use keys-in-DNS for this, which defaults back to something quite
easy to attack.
4. Give up. Accept all keys without fail and just assume you're not
getting owned.


Option 2 does not seem to require insane pains.  It is less horrid 
than installing an SSL certificate.



Re: [cryptography] preventing protocol failings

2011-07-12 Thread Hill, Brad
 If users demand an insecure mode, it is because your secure mode has bad user 
 interface.

I'm actually thinking about things like web services where the user isn't 
someone sitting in front of a UI, but a programmer, or a team of programmers, 
testers, and operational personnel.  It's easy to hand-wave about how security 
*should* be easy, but in practice it's not, when you need to provision mutual 
authentication without any previously shared trust infrastructure or secrets, 
do it twice for testing and production, manage separation of duties between 
dev, test and ops, when TOFU isn't good enough for your auditors and you're 
trying to provide libraries and APIs for your protocol in various flavors and 
for multiple frameworks on C, Ruby, Python, PHP, .NET, VisualBasic, Java, 
Scala, ECMAScript or whatever the hot new thing is today.  (and even if you do 
give them libraries, half of the devs will try to implement it themselves 
because crypto is cool)

Managing all this friction isn't strictly the job of the crypto protocol, but 
this meta-layer can exert considerable force on protocol designs.  Responding 
to it by saying, H3, doesn't always cut it with the people paying your 
salary, even with a great argument about how bad an insecure mode will be in 
the future - because there is no future if you go out of business because you 
can't onboard customers.  I'm saying there are better ways to manage this 
common design pressure and accommodate the real needs of your customers than by 
adding multiple modes or negotiation to a protocol.

-Brad


Re: [cryptography] preventing protocol failings

2011-07-09 Thread Peter Gutmann
Zooko O'Whielacronx zo...@zooko.com writes:

Hm, digging around in my keepsakes cabinet, I unfortunately do not find the
original state transition diagram that I mentioned above, but I do find an
artifact that I wrote a few months later -- a sketch of a protocol that
I called ZRTP lite which was ZRTP as it existed at that time minus insecure
mode, algorithm negotiation, the confirm packet, and the max-retries
timeout.

Back in the 1970s and 80s, anyone who was seriously into role-playing games
inevitably ended up designing their own system at some point, with the goal of
fixing all the flaws in whatever existing systems they used.  It always ended
up being, oh, about a thousand times more complex than any other system
around, and never got used much (or, usually, even finished).

I think there's a dual of this for people who've worked with security
protocols.  For example I've got a draft for a cut-down SSH that's probably
about one tenth the complexity of the existing protocol while satisfying the
majority of users (secure telnet/secure file transfer) that, like your ZRTP
lite, I've never got around to posting.  And a profile for CMP (a remarkably
unworkable mess that pretty much faded into oblivion after only a couple of
years) that drops most of the original protocol and actually works quite well,
and so on.

Has anyone else come up with an XYZ Lite that offers 90% of the functionality
of the original at 10% of the complexity, and 5% of the attack surface?

Peter.


Re: [cryptography] preventing protocol failings

2011-07-07 Thread Peter Gutmann
Sampo Syreeni de...@iki.fi writes:

To my mind the difference seemed to be about shallow versus deep parsing. You 
can't really deep parse anything in BER with implicit tagging, 

You can deep-parse, you just need to apply some basic heuristics (e.g. if 
it's an octet string and the first byte is a standard tag that's used with 
octet-string holes and the following bytes are a length that's the same as the 
octet-string content then it's an octet string hole, continue drilling down).

In this sense I would agree: to me parsing an input means parsing it right 
down to the last bit. If there's anything you have to skip, or munge, or 
skirt/skip over, that's not parsing proper, but shallow parsing.

Right, and that's quite possible with ASN.1.  As I've already mentioned, run 
dumpasn1 on certs or S/MIME data or whatever and see for yourself.

The problem with the non-ASN.1 approaches is that they're all PER, unless you 
know every detail of what to expect at every point of the encoded data you 
can't even get past the first byte.  In addition you're forced to use 
handcoded parsers for everything, there isn't even scope for something like an 
ASN.1 compiler.  SSH, which freely mixes binary data and comma-delimited ASCII 
text is the worst of the lot, that's just a nightmare to parse safely.

Peter.


Re: [cryptography] preventing protocol failings

2011-07-06 Thread Peter Gutmann
Nico Williams n...@cryptonector.com writes:
On Wed, Jul 6, 2011 at 12:06 AM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 (The ASN.1 filter I mentioned earlier is a stripped-down version of dumpasn1.
 Remember that dataset of 400K broken certs that NISCC generated a few years
 ago and that broke quite a number of ASN.1-using apps (and filesystems when
 you untarred it :-)?  It processed all of those without any problems).

Do you have a link for that dataset?  

You have to write to them and they'll send you a CD.  I'm not sure if it's 
available online anywhere.

I want to check if the data is for explicitly or implicitly tagged modules.

It's randomly-modified cert data, there's every kind of tagging in there, 
including ones you've never heard of before (due to the random permutations 
used).

See ASN.1 Communication Between Heterogeneous Systems, page 213, which says 
that [a] type tagged in implicit mode can be decoded only if the receiving 
application 'knows' the abstract syntax, that is, the decoder has been 
generated from the same ASN.1 module as the encoder was.  

I know what implicit and explicit tagging is.  You don't need to know the 
syntax at all, a few simple heuristics will get BIT STRING and OCTET STRING 
holes and the like.  Throw stuff at dumpasn1 and see what it gives you.

Peter.


Re: [cryptography] preventing protocol failings

2011-07-06 Thread Peter Gutmann
I wrote:

BER and DER are actually the safest encodings of the major security protocols
I work with.

Based on the following, which just appeared on another list: 

  In contrast to RFC 5280,  X.509 does not require DER encoding. It only
  requires that the signature is generated across a DER encoded certificate,
  but the certificate itself may be encoded using BER.

  Should we add a sentence somewhere in X.509 and possibly in RFC 5280
  specifying that when verifying a signature a relying party shall decode and
  then encode the certificate in DER to verify the signature?

may I amend my previous statement to insert if used under correct adult
supervision after the words safest encodings.

Thank you.

Peter.



Re: [cryptography] preventing protocol failings

2011-07-06 Thread Jeffrey Walton
On Wed, Jul 6, 2011 at 7:07 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 I wrote:

BER and DER are actually the safest encodings of the major security protocols
I work with.

 Based on the following, which just appeared on another list:

  In contrast to RFC 5280,  X.509 does not require DER encoding. It only
  requires that the signature is generated across a DER encoded certificate,
  but the certificate itself may be encoded using BER.

  Should we add a sentence somewhere in X.509 and possibly in RFC 5280
  specifying that when verifying a signature a relying party shall decode and
  then encode the certificate in DER to verify the signature?

 may I amend my previous statement to insert if used under correct adult
 supervision after the words safest encodings.
Promoting interoperability (write strict/read loose) is a feature!


Re: [cryptography] preventing protocol failings

2011-07-06 Thread Sampo Syreeni

On 2011-07-04, Jon Callas wrote:

Let me be blunt here. The state of software security is so immature 
that worrying about crypto security or protocol security is like 
debating the options between hardened steel and titanium, when the 
thing holding the chain of links to the actual user interaction is a 
twisted wire bread tie.


Agreed: human factors are the number one problem, with script-kiddie 
level bugs coming a close second. However, the few real-life problems 
caused by defective protocols or algorithms have the potential to have 
very wide impact, including high target value institutions which have 
already nailed the usual problems, and to be very costly to repair once 
the algos have been set in silicon.



Yeah, it's hard to get the crypto right, but that's why they pay us.


That's one thing No Options also helps with: it should be the paid 
cryptographers who make the hard choices, not the end user.


That's just puritanism, the belief that if you just make a few 
absolute rules, everything will be alright forever.


I rather like to think of myself as an empiricist: at least now that we 
have a long track record of things going more wrong with complex 
crypto protocols than simple ones or the primitives they employ, we 
should perhaps fix at least this corner of the overall problem.



I'm smiling as I say this -- puritanism: just say no.


OTOH, Puritanism: The haunting fear that someone, somewhere, may be 
happy. -- H. L. Mencken


Meh. My answer to your first question is that you can't. If you want 
an interesting protocol, it can't resist protocol attacks.


So the corollary of what I'm talking about is that protocols should not 
be interesting. May you live in a time with interesting protocols 
should perhaps be a cryptographers' curse?


As for X.509, want to hear something *really* depressing? It isn't a 
total mess. It actually works very well, even though all the mess 
about it is quite well documented. Moreover, the more that X.509 gets 
used, the more elegant its uses are. There are some damned fine 
protocols using it and just drop it in.


Well, why don't they then just pick each of those elegant uses, codify 
it into a maximally restricted, formal grammar, and supercede X.509 with 
the combined results? That could potentially make everybody happy at the 
same time. And I mean, as you say, that's been the general direction, 
e.g. within the RFC series. Not to mention many other polymorphic 
formats -- nowadays it's pretty rare that even ISO pushes out anything 
too complicated without also defining much-simplified profiles of it.


Yeah, yeah, having more than one encoding rule is madness, but to make 
that make you run screaming is to be squeamish.


I'm perfectly happy dealing with complications like these. But the 
trouble is, not everybody can hack it, and since even a perfectly good 
implementation can be weakened simply by interoperating with a bad one, 
this sort of stuff can impact the entire ecosystem.



However, the problems with PKI have nothing to do


I'm not so sure PKI is completely innocent. I mean, it aims at being a 
silver bullet which solves any and every authentication related problem 
within a single framework and, usually, by reusing the same protocols or 
formats. To me that seems like a prime reason for high polymorphism and 
open-ended design.


OpenPGP is a trivially simple protocol at its purest structure. It's 
just tag, length, binary blob.


TLV encodings are conceptually rather simple, yes. But in practice once 
you allow nesting, mix in length fields outside of the block structure, 
allow indefinite length blocks and reuse of globally defined tag values 
in different contexts, allow mixing of free form binary and block 
content, and so on, there's suddenly ample room for error.


I mean, I can understand why we want extensible protocols, that is, 
protocols which let the receiver be lax in what it is willing to accept. 
It's just that crypto doesn't seem to be one of the applications where 
this sort of polymorphism is too desirable or even useful.


You know where the convolutedness comes from? A lack of options. That 
and over-optimization, which is actually a form of unneeded 
complexity.


Do you happen to have a particular example in mind?

If you create a system with truly no options, you create brittleness 
and inflexibility. It will fail the first time an underlying component 
fails and you can't revise it.


That's why you probably need some minimum form of versioning and/or 
tagging. But, say, embedding the choice of crypto primitives to be used 
together in the protocol, letting key lengths vary willy-nilly and that 
sort of general compositionality, it just doesn't seem too useful to me. 
Neither does a tagging structure which lets you embed whatever kinds of 
generic packet types into whatever context -- which is why I've actually 
become a big fan of ASN.1's implicit tags as opposed to the universal 
ones.


I think that crypto 

Re: [cryptography] preventing protocol failings

2011-07-05 Thread Jon Callas
On Jul 4, 2011, at 10:10 PM, coderman wrote:

 H3 should be Gospel: There is Only One Mode and it is Secure
 
 anything else is a failure waiting to happen…

Yeah, sure. I agree completely. How could any sane person not agree? We could 
rephrase this as, The Nineties Called, and They Want Their Exportable Crypto 
Back. Exportable crypto was risible at the time and we all knew it.

But how is this actionable? How can I use this principle as a touchstone to let 
me know the right thing to do. I suppose we could consider it a rule of thumb 
instead, but that flies in the face of making it Gospel.

Rather than rant, I'll propose a practical problem and pose a question.

You're writing an S/MIME system. Do you include RC2/40 or not? Why?

Hint: Gur pbeerpg nafjre vf gung lbh vaqrrq fubhyq vapyhqr vg. Ohg V yrnir gur 
jurersberf nf na rkrepvfr. Ubjrire, guvf uvag vf nyfb n zrgn-uvag nf gb gur 
ernfbaf jul lbh fubhyq vapyhqr vg.

Jon



Re: [cryptography] preventing protocol failings

2011-07-05 Thread Peter Gutmann
coderman coder...@gmail.com writes:

H3 should be Gospel: There is Only One Mode and it is Secure

Also known as Grigg's Law.  The corollary, for protocols where there *are*
options, is There is only one cipher suite and that is Suite #1.

Peter.


Re: [cryptography] preventing protocol failings

2011-07-05 Thread coderman
On Mon, Jul 4, 2011 at 11:31 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 ... The corollary, for protocols where there *are*
 options, is There is only one cipher suite and that is Suite #1.

hey, removing all other options can be an option.

uh oh, i just contradicted myself...


Re: [cryptography] preventing protocol failings

2011-07-05 Thread coderman
On Mon, Jul 4, 2011 at 11:11 PM, Jon Callas j...@callas.org wrote:
 ...
 Yeah, sure. I agree completely.

no you don't ;)


 How can I use this principle as a touchstone to let me know the right thing 
 to do. I suppose we could consider it a rule of thumb instead, but that flies 
 in the face of making it Gospel.

what are the good reasons for options that don't include:
- backwards compatibility
- intentional crippling (export restrictions)
- patents or other license restrictions
- interoperability with others
?

there may be a pragmatic need for options dealing with existing
systems or business requirements, however i have yet to hear a
convincing argument for why options are necessary in any new system
where you're able to apply lessons learned from past mistakes.


 You're writing an S/MIME system...

well there's your problem right there!


as for formal verification, i agree completely.


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Jon Callas

On Jul 4, 2011, at 11:35 PM, coderman wrote:

 On Mon, Jul 4, 2011 at 11:11 PM, Jon Callas j...@callas.org wrote:
 ...
 Yeah, sure. I agree completely.
 
 no you don't ;)
 

Actually I do. I also believe in truth and justice and beauty, too. And 
simplicity. I just value actionable, as well.


 
 How can I use this principle as a touchstone to let me know the right thing 
 to do. I suppose we could consider it a rule of thumb instead, but that 
 flies in the face of making it Gospel.
 
 what are the good reasons for options that don't include:
 - backwards compatibility
 - intentional crippling (export restrictions)
 - patents or other license restrictions
 - interoperability with others
 ?
 
 there may be a pragmatic need for options dealing with existing
 systems or business requirements, however i have yet to hear a
 convincing argument for why options are necessary in any new system
 where you're able to apply lessons learned from past mistakes.
 
 

Pragmatic. That's what I'm talking about: pragmatism. It's not pragmatic to go 
write a new protocol all the time. Especially if the time to create one with no 
known flaws is longer than the time to find a flaw.


 You're writing an S/MIME system...
 
 well there's your problem right there!
 

Hey, you mentioned backwards compatibility, yourself.

Jon


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Peter Gutmann
Nico Williams n...@cryptonector.com writes:

Why even have a tag??  The ASN.1 Packed Encoding Rules (think ONC XDR with 1-
byte alignment instead of 4-byte alignment) don't use tags at all.

Which makes them impossible to statically check, and leads to hellishly
complex decoders.

In BER/DER/CER/XML you get a lot of redundancy: tag-length-value, sometimes
tag-length-tag-length-value (e.g., when explicit tagging is used). 

This is a feature, not a flaw, because it means you can statically type-check
it.  With BER/DER I can implement a filter that takes as input any encoded
blob and reports true or false for the question "is this well-formed data?".
With CER (and XML, and PGP, and SSH, and SSL/TLS, and IPsec) I can't.
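Peter's claim -- that a definite-length TLV encoding can be checked for well-formedness with no schema at all -- can be sketched in a few lines. This is an illustrative filter only, not production DER (high-tag-number forms and other corner cases are omitted):

```python
def der_well_formed(buf, start=0, end=None):
    """Check that buf[start:end] is a sequence of well-formed
    definite-length DER TLV records, recursing into constructed
    types.  No schema or message-type knowledge is needed."""
    if end is None:
        end = len(buf)
    pos = start
    while pos < end:
        if end - pos < 2:                 # need at least tag + length
            return False
        tag = buf[pos]                    # single-byte tags only (sketch)
        constructed = bool(tag & 0x20)
        pos += 1
        length = buf[pos]
        pos += 1
        if length & 0x80:                 # long-form length
            n = length & 0x7F
            if n == 0 or end - pos < n:   # n == 0: indefinite, not DER
                return False
            length = int.from_bytes(buf[pos:pos + n], "big")
            pos += n
        if end - pos < length:            # truncated value
            return False
        if constructed and not der_well_formed(buf, pos, pos + length):
            return False
        pos += length
    return pos == end
```

Running it on `bytes([0x30, 0x03, 0x02, 0x01, 0x05])` (a SEQUENCE holding INTEGER 5) succeeds, while a truncated or indefinite-length blob is rejected without the checker knowing what the data means.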

If you want to prevent new bugs in these areas, let's start with putting the
venerable BER/DER/CER to rest in the trash bin.  Legacy will make that a
difficult proposition.

BER and DER are actually the safest encodings of the major security protocols
I work with.  I'd rank them, in terms of danger, as:

SSH

[Long gap]

PGP, SSL/TLS

[Smaller gap]

BER/DER

Peter.


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Steven Bellovin
 there may be a pragmatic need for options dealing with existing
 systems or business requirements, however i have yet to hear a
 convincing argument for why options are necessary in any new system
 where you're able to apply lessons learned from past mistakes.

You said it yourself: different businesses have different requirements.
The requirements may be operational environment or they may be
marketing-related.  I'll give just one example: web authentication.
Say I'm building a web-based interface to a system for tasking
an orbital COMSAT.  That system should likely require
strong role-based authentication, possibly coupled with authentication
of the machine it's coming from, plus personal authentication for
later auditing.
used for selecting seats (and printing boarding passes) will frequently
be used at hotel and airport kiosks, may be delegated to administrative
assistants, etc.  At some level, it's the same problem -- reserving
a resource (surveillance slot or an airplane seat), but the underlying
needs are very different.

More importantly (and to pick a less extreme scenario), security isn't
an absolute, it's a matter of economics.  If the resource you're
protecting isn't worth much, why should you spend a lot?  There are
certainly kinds of security that cost very little (RC4-128 has exactly
the same run-time overhead as RC4-40, though the cost of the public
key operations commensurate with those key lengths will differ);
other times, though, requirements are just plain different.
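The RC4 aside is easy to check: the key length only touches the one-time key schedule, while the per-byte keystream loop is byte-for-byte identical whether the key is 40 or 128 bits. A textbook sketch:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm: the only step that ever sees the
    # key length, and it runs exactly 256 iterations regardless.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: the per-byte work is the same for RC4-40 and RC4-128.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Classic published test vector:
# rc4(b"Key", b"Plaintext").hex() == "bbf316e8d940af0ad3"
```

(RC4 is of course long since broken and shown here only to illustrate the cost argument, not as a cipher recommendation.)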

To quote the old Einstein line, a system should be as simple as possible
but no simpler.

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] preventing protocol failings

2011-07-05 Thread Arshad Noor

On 07/05/2011 09:09 AM, Steven Bellovin wrote:


More importantly (and to pick a less extreme scenario), security isn't
an absolute, it's a matter of economics.  If the resource you're
protecting isn't worth much, why should you spend a lot?


And one does not need to guess at how much "a lot" is; the legal
community uses a 1947 ruling by Judge Learned Hand in the case of
United States v. Carroll Towing Co. to determine how much someone
should have spent:

http://en.wikipedia.org/wiki/United_States_v._Carroll_Towing_Co.
or
http://en.wikipedia.org/wiki/Calculus_of_negligence
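The Carroll Towing rule (the "Hand formula") is compact enough to state in code: a party is negligent when the burden B of adequate precautions is less than the probability P of loss times the gravity L of the resulting injury. The numbers below are purely hypothetical, for illustration:

```python
def negligent(burden, p_loss, loss):
    """Learned Hand formula from Carroll Towing: negligence when the
    burden B of adequate precautions is less than the probability P
    of loss times the gravity L of the resulting injury (B < P*L)."""
    return burden < p_loss * loss

# Hypothetical: is skipping $50k/yr of controls negligent when facing
# a 2% annual chance of a $10M breach?  B < P*L: 50k < 200k.
print(negligent(burden=50_000, p_loss=0.02, loss=10_000_000))  # True
```

Arshad's point below is exactly the weakness of this formula in practice: without good attack data, P and L are guesses.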

The only issue with our rather immature security industry is that,
without a central repository of information about attacks (one that
might have provided quantitative data to researchers), it's very hard
to calculate estimated damages.

Arshad Noor
StrongAuth, Inc.


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Ian G

On 5/07/11 3:59 PM, Jon Callas wrote:


There are plenty of people who agree with you that options are bad. I'm not one 
of them. Yeah, yeah, sure, it's always easy to make too many options. But just 
because you can have too many options that doesn't mean that zero is the right 
answer. That's just puritanism, the belief that if you just make a few absolute 
rules, everything will be alright forever. I'm smiling as I say this -- 
puritanism: just say no.


I find it ironic to be on the side of the puritans, but I think it's not 
inappropriate.


The 90s were the times of an excess of another religious crowd -- the 
hedonists.  In those times, more modes was more better.  The noble drive 
to secure the Internet intersected with the jihadic expression of code 
as freedom, the net as the new world, crypto as numbers, government as 
the enemy, and as much as possible of all of them.  Right now!  Today!


Hell, I was even part of it.  I thought it was so cool I coded up extra 
algorithms for Cryptix, just for fun, and lobbied to get extra 
identifiers stuffed into OpenPGP.




But what was the benefit?  Let's just take one example, the 
oft-forgotten client certificate.


Does anyone make much use of client certificate mode in SSL?  No, 
probably not.  They work [0], but nobody uses them, much.  And, it turns 
out that there is a good reason why nobody uses this fairly workable 
product:  because you don't have to.  Because it is optional.  As client 
certificates are optional, sites can't rely on the client certs being 
available.  So they fall back to that which they can insist on, which is 
passwords.  Which humans can be told to invent, and they will, without 
any audible grumbling.


So, options mean unavailability.  Which means it can't be relied on.

Yet, there's no *security* reason for them being optional.  Client certs 
could be mandatory, just like server certs.  There is no *business 
benefit* for users in client certs being optional (and by this I mean 
client-side and server-side).




That's just one mode.  It turns out there is another mode -- HTTP.  This 
mode is turned on far more than it should be, resulting in a failure of 
user discrimination.  Hence, phishing.


Now, we may poo-poo the whole phishing thing, but consider that phishing 
is a bypass on SSL's authentication properties for online banking, etc. 
 At whatever layer we found it.  Phishing is the breach that exploits 
HTTP mode in browsing.


And consider that phishing, alongside server-breaching, financed the 
current wave of crime, step by step, to our current government 
cybercrime social disaster.


It's a lot to lay at the feet of a little mode like optional HTTP in 
secure browsing, but the bone points squarely at it.


If you've followed the history of real use and real breach, modes can be 
shown to cause failure.  OTOH, if we look at famous systems with few 
modes, we see less failure.  Skype has only one mode.  And it is secure. 
 SSH has very few modes.  And what modes it has -- password login, for
example -- caused a wave of SSH password snaffling until sysadmins learned
to turn off password mode!


In contrast:  SSL again.  Some packet bugs fixed in SSL v3.  MD5 
deprecation, much anticipated by a squillion cipher suites, but the target 
missed completely!  Re-negotiation -- a mode to re-negotiate modes!  And 
finally the TLS/SNI bug.  Ugh.




I claim that we've got causality and we've got correlation.  Which gives 
us the general hypothesis:


   there is only one mode, and it is secure.


 I think that crypto people are scared of options because options are hard to 
 get right, but one doesn't get away from options by not having them. The only 
 thing that happens is that when one's system fails, someone builds a completely 
 new one and writes papers about how stupid we were at thinking our system would 
 not need an upgrade. Options are hard, but you only get paid to solve hard 
 problems.



What's left is arguing about the exceptions.  In H6.6 [6], I argued that:

   Knowing the Hypotheses is a given, that's the job of a
   protocol engineer. That which separates out engineering
   from art is knowing when to breach a hypothesis.

Another way of putting it is, do you think you know as much as Jon or 
Peter or the designers at Skype or Tatu Ylönen?  Probably not, but I for 
one am not going to criticise you if you've got the balls for trying, 
and you *know the risks*.




iang



[0] An alternate view on why  how client certs work:
http://wiki.cacert.org/Technology/KnowledgeBase/ClientCerts/theOldNewThing

[6]http://iang.org/ssl/h6_its_your_job_do_it.html#6.6
Hmm, perhaps that should be numbered H6.6.6 ?


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Peter Gutmann
Nico Williams n...@cryptonector.com writes:

In other words, in ASN.1 as it's used you have to know the schema and message 
type in order to do a good job of parsing the message, 

No you don't.  I give as a counterexample dumpasn1, which knows nothing about 
message types or schemas, but parses any (valid) ASN.1 you throw at it.

(The ASN.1 filter I mentioned earlier is a stripped-down version of dumpasn1. 
Remember that dataset of 400K broken certs that NISCC generated a few years 
ago and that broke quite a number of ASN.1-using apps (and filesystems when 
you untarred it :-)?  It processed all of those without any problems).

Peter.


Re: [cryptography] preventing protocol failings

2011-07-04 Thread Steven Bellovin

On Jul 4, 2011, at 7:28 PM, Sampo Syreeni wrote:

 (I'm not sure whether I should write anything anytime soon, because of Len 
 Sassaman's untimely demise. He was an idol of sorts to me, as a guy who Got 
 Things Done, while being of comparable age to me. But perhaps it's equally 
 valid to carry on the ideas, as a sort of a nerd eulogy?)
 
 Personally I've slowly come to believe that options within crypto protocols 
 are a *very* bad idea. Overall. I mean, it seems that pretty much all of the 
 effective, real-life security breaches over the past decade have come from 
 protocol failings, if not trivial password ones. Not from anything that has 
 to do with hard crypto per se.
 
 So why don't we make our crypto protocols and encodings *very* simple, so as 
 to resist protocol attacks? X.509 is a total mess already, as Peter Gutmann 
 has already elaborated in the far past. Yet OpenPGP's packet format fares not 
 much better; it might not have many cracks as of yet, but it still has a very 
 convoluted packet structure, which makes it amenable to protocol attacks. Why 
 not fix it into the simplest, upgradeable structure: a tag and a binary blob 
 following it?
 
 Not to mention those interactive protocols, which are even more difficult to 
 model, analyze, attack, and then formally verify. In Len's and his spouse's 
 formalistic vein, I'd very much like to simplify them into a level which is 
 amenable to formal verification. Could we perhaps do it? I mean, that would 
 not only lead to more easily attacked protocols, it would also lead to more 
 security...and a eulogy to one of the new cypherpunks I most revered.
 -- 

Simplicity helps with code attacks as well as with protocol attacks, and the 
former are a lot more common than the latter.  That was one of our goals in JFK:

@inproceedings{aiello.bellovin.ea:efficient,
  author = {William Aiello and Steven M. Bellovin and Matt Blaze and
  Ran Canetti and John Ioannidis and Angelos D. Keromytis and
  Omer Reingold},
  title = {Efficient, {DoS}-Resistant, Secure Key Exchange for
  Internet Protocols},
  booktitle = {Proceedings of the ACM Computer and Communications
  Security (CCS) Conference},
  year = 2002,
  month = {November},
  url = {https://www.cs.columbia.edu/~smb/papers/jfk-ccs.pdf},
  psurl = {https://www.cs.columbia.edu/~smb/papers/jfk-ccs.ps}
}



--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] preventing protocol failings

2011-07-04 Thread Nico Williams
On Mon, Jul 4, 2011 at 6:28 PM, Sampo Syreeni de...@iki.fi wrote:
 Personally I've slowly come to believe that options within crypto protocols
 are a *very* bad idea. Overall. I mean, it seems that pretty much all of the
 effective, real-life security breaches over the past decade have come from
 protocol failings, if not trivial password ones. Not from anything that has
 to do with hard crypto per se.

 So why don't we make our crypto protocols and encodings *very* simple, so as
 to resist protocol attacks? [...]

What did you mean by options?  Did you mean optional elements and
negotiations?  Or were you referring specifically to encoding issues?

Regarding the latter: BER and friends are dangerous because they
result in much redundancy in encodings.

Regarding the former: it's generally best to push any negotiations to
one level (more on that below) and to avoid any two-level negotiation
like the plague.

Consider an IMAP application that allows the negotiation and use of
one of various SASL mechanisms, including GSS-SPNEGO, and through it
NTLM, Kerberos, and maybe others.  Such an application has up to three
levels of negotiation!  One for selecting a SASL mechanism, and if
GSS-SPNEGO is selected, then a negotiation in SPNEGO, and if Kerberos
is selected, then a negotiation for a Kerberos enctype (i.e., cipher
suite).  Three levels of negotiation make negotiation unpredictable
and make it difficult to put enough control over negotiation choices
in the hands of the application.  Thus one of the worst API elements
is the Cyrus SASL "security strength factor" (SSF), which is intended
to allow sorting of mechanisms by cryptographic strength, and which
always assigns a value of 56 to Kerberos, as if 1-DES were the only
cipher Kerberos supports!   But SSF is bad primarily because it's
too rough and subjective a measure of cryptographic strength.  Instead
we could have named profiles that applications require the
cryptosystem to meet, leaving it to the security mechanisms to
enforce the profiles.

(The SASL community has decided to avoid the two-level negotiation
problem in the future by insisting that new mechanisms have all cipher
suite choices baked in.  If you want to add a new cipher suite to a
mechanism you just add a new *name* for that mechanism and use the
negotiation of mechanisms to embody the negotiation of cipher suites.)
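That SASL rule -- each mechanism name bakes in its cipher suite choices -- collapses the whole nested negotiation to a single ordered intersection of name lists. A sketch (the server's list here is illustrative, though SCRAM-SHA-1 and SCRAM-SHA-256 are real examples of the one-name-per-suite approach):

```python
# Each mechanism name fixes its cipher suite, so adding a suite means
# adding a *name*; negotiation is one ordered intersection, not
# two or three levels of nested haggling.
SERVER_SUPPORTED = ["SCRAM-SHA-256", "SCRAM-SHA-1", "PLAIN"]  # illustrative

def negotiate(client_preference, server_supported):
    """Pick the first client-preferred mechanism the server also offers."""
    for mech in client_preference:
        if mech in server_supported:
            return mech
    raise ValueError("no common mechanism")

print(negotiate(["SCRAM-SHA-512", "SCRAM-SHA-256"], SERVER_SUPPORTED))
# SCRAM-SHA-256
```

Because the whole choice is visible in one flat list, the application (rather than a buried SSF heuristic) can express its policy simply by ordering or pruning that list.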

Can we get rid of all options?  Hardly.  First, we need at least one
level of negotiation so we can have some degree of cipher suite
agility.  Second, I've a hard time imagining how we might avoid all
other optionality in crypto protocol designs.  Extensibility has been
a good thing.  We could extend things by abandoning old protocols and
replacing them with extended versions, but even if we accepted all the
resulting garbage we'd still have a need for optionality in many
cases.

I can imagine a world that relies on relying-party only certificates,
which require nothing more than a public key and public key algorithm
identifier, and DNSSEC as the only PKI.  That would mean we could say
goodbye to all of the complexity of PKIX, but we'd still have
extensibility via algorithm IDs and DNS, such as via new RR types.
But I can't imagine a world in which relying parties don't have to
obtain authorization data regarding their peers in order to perform
authorization, and there's lots of options that people want regarding
authorization...

Nico
--


Re: [cryptography] preventing protocol failings

2011-07-04 Thread Jon Callas

On Jul 4, 2011, at 4:28 PM, Sampo Syreeni wrote:

 (I'm not sure whether I should write anything anytime soon, because of Len 
 Sassaman's untimely demise. He was an idol of sorts to me, as a guy who Got 
 Things Done, while being of comparable age to me. But perhaps it's equally 
 valid to carry on the ideas, as a sort of a nerd eulogy?)
 
 Personally I've slowly come to believe that options within crypto protocols 
 are a *very* bad idea. Overall. I mean, it seems that pretty much all of the 
 effective, real-life security breaches over the past decade have come from 
 protocol failings, if not trivial password ones. Not from anything that has 
 to do with hard crypto per se.

Let me be blunt here. The state of software security is so immature that 
worrying about crypto security or protocol security is like debating the 
options between hardened steel and titanium, when the thing holding the chain 
of links to the actual user interaction is a twisted wire bread tie. 

Lots of other discussion is people noting that if you coated that bread tie 
with plastic rather than paper, it would be a lot more resistant to rust. And 
you know what, they're right!

In general, the crypto protocols are not the issue. I can enumerate the obvious 
exceptions where they were a problem as well as you can, and I think that they 
prove the rule. Yeah, it's hard to get the crypto right, but that's why they 
pay us. It's hard to get bridges and buildings and pavement right, too.

There are plenty of people who agree with you that options are bad. I'm not one 
of them. Yeah, yeah, sure, it's always easy to make too many options. But just 
because you can have too many options that doesn't mean that zero is the right 
answer. That's just puritanism, the belief that if you just make a few absolute 
rules, everything will be alright forever. I'm smiling as I say this -- 
puritanism: just say no.

 
 So why don't we make our crypto protocols and encodings *very* simple, so as 
 to resist protocol attacks? X.509 is a total mess already, as Peter Gutmann 
 has already elaborated in the far past. Yet OpenPGP's packet format fares not 
 much better; it might not have many cracks as of yet, but it still has a very 
 convoluted packet structure, which makes it amenable to protocol attacks. Why 
 not fix it into the simplest, upgradeable structure: a tag and a binary blob 
 following it?

Meh. My answer to your first question is that you can't. If you want an 
interesting protocol, it can't resist protocol attacks. More on that later.

As for X.509, want to hear something *really* depressing? It isn't a total 
mess. It actually works very well, even though all the mess about it is quite 
well documented. Moreover, the more that X.509 gets used, the more elegant its 
uses are. There are some damned fine protocols that use it and just drop it in. 
Yeah, yeah, having more than one encoding rule is madness, but to let that 
make you run screaming is to be squeamish. However, the problems with PKI have 
nothing to do with X.509 itself.

OpenPGP is a trivially simple protocol at its purest structure. It's just tag, 
length, binary blob. (Oh, so is ASN.1, but let's not clutter the issue.) You 
know where the convolutedness comes from? A lack of options. That and 
over-optimization, which is actually a form of unneeded complexity. One of the 
ironies about protocol design is that you can make something complex by making 
it too simple.
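The "tag, length, binary blob" structure Jon describes is simple enough to walk with a short generator. This uses a deliberately simplified layout (one-byte tag, two-byte big-endian length) purely for illustration -- it is not OpenPGP's actual header encoding, which has several length forms:

```python
def iter_packets(buf):
    """Walk a simplified tag-length-blob stream: 1-byte tag,
    2-byte big-endian length, then the opaque body."""
    pos = 0
    while pos < len(buf):
        if len(buf) - pos < 3:
            raise ValueError("truncated header")
        tag = buf[pos]
        length = int.from_bytes(buf[pos + 1:pos + 3], "big")
        body = buf[pos + 3:pos + 3 + length]
        if len(body) != length:
            raise ValueError("truncated body")
        yield tag, body
        pos += 3 + length

# Two packets: tag 1 with a 2-byte body, tag 2 with a 1-byte body.
stream = bytes([0x01, 0x00, 0x02, 0xAB, 0xCD, 0x02, 0x00, 0x01, 0xFF])
print(list(iter_packets(stream)))  # [(1, b'\xab\xcd'), (2, b'\xff')]
```

The parser itself is trivial; as Jon argues, the convolutedness lives in what the blobs mean, and over-optimizing the framing just moves that complexity around.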

I recommend Don Norman's new book, Living With Complexity. He quotes what he 
calls Tesler's law of complexity, which is that the complexity of a system 
remains constant. You can hide the complexity, or expose it. If you give the 
user of your system no options, it means you end up with a lot of complexity 
underneath. If you expose complexity, you can simplify things underneath. The 
art is knowing when to do each.

If you create a system with truly no options, you create brittleness and 
inflexibility. It will fail the first time an underlying component fails and 
you can't revise it. 

If you want a system to be resilient, it has to have options. It has to have 
failover. Moreover, it has to fail over into the unknown. Is it hard? You bet. Is 
it impossible? No. It's done all the time.

I started off being a mathematician and systems engineer before I got into 
crypto. I learned about building complex systems before I learned crypto, and 
complexity doesn't scare me. I look askance at it, but I don't fear it.

Yes, yes, simpler systems are more secure. They're also more efficient, easier 
to build, support, maintain, and everything else. Simplicity is a virtue. But 
it is not the *only* virtue, and I hope you'll forgive me for invoking 
Einstein's old saw that a system should be as simple as possible and no 
simpler. 

I think that crypto people are scared of options because options are hard to 
get right, but one doesn't get away from options by not having them. The only 
thing that happens is that when one's system fails,