Re: User interface, security, and simplicity

2008-05-06 Thread Victor Duchovni
On Sun, May 04, 2008 at 10:24:13PM -0400, Thor Lancelot Simon wrote:

 I believe that those who supply security products have a responsibility
 to consider the knowledge, experience, and tendencies of their likely
 users to the greatest extent to which they're able, and supply products
 which will function properly _as users are likely to apply them_.

The TLS support in Postfix tries to behave sensibly with easy settings.

- Cipher list selection is indirect, via grades: "export", "low",
"medium" and "high". The actual ciphers for each grade are buried
in parameters users are advised not to mess with.

- The cipher grade for opportunistic TLS is "export", but if you single
out a destination for mandatory TLS, the grade rises to "medium".

- The "secure" peer cert validation level compares the peer's cert to
the nexthop domain (allowing a sub-domain match by default). Hostnames
derived from MX lookups are of course subject to DNS MITM and are not
trusted.  If you want to trust your DNS you can use "verify" instead.

http://www.postfix.org/TLS_README.html#client_tls_limits
http://www.postfix.org/TLS_README.html#client_tls_may
http://www.postfix.org/TLS_README.html#client_tls_encrypt
http://www.postfix.org/TLS_README.html#client_tls_verify
http://www.postfix.org/TLS_README.html#client_tls_secure

- With the upcoming EECDH support, users don't choose curves
directly, they again choose a security grade, and the corresponding
curves are configurable via parameters they are not expected to
ever look at or modify.

If you don't botch your CAfile, it is rather easy to provision
secure-channel connections to a select set of high-value peers.

If you don't trust any CAs:

http://www.postfix.org/TLS_README.html#client_tls_fingerprint
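
Concretely, a minimal main.cf sketch of the above (parameter names as
documented in the TLS_README pages; the EECDH grade knob is an assumed
name, since that support is still upcoming, and the fingerprint digest
is a placeholder):

    # Opportunistic TLS for all destinations; falls back to cleartext
    # when the peer offers no TLS:
    smtp_tls_security_level = may

    # Trusted CAs for destinations promoted to "secure" below:
    smtp_tls_CAfile = /etc/postfix/cacert.pem

    # Per-destination overrides for high-value peers:
    smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

    # Assumed name for the upcoming EECDH grade knob (grade, not curves):
    # smtp_tls_eecdh_grade = strong

and in /etc/postfix/tls_policy:

    example.com      secure
    partner.example  fingerprint match=ab:cd:ef:...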

Once you have a system designed in all its features to behave sensibly
by default (e.g. with an empty main.cf file), making security behave
sensibly by default is not that unnatural.

So I think there should be a broad design bias towards *implicit* correct
behaviour in all system features, with rope available for advanced users
to *explicitly* craft more complex use-cases. Once you have that, practical
security is not too difficult.

The same is true in the source code: unsafe practices are avoided
globally (e.g. both strcpy() and strncpy() are absent, together with
fixed-size automatic buffers) rather than used with care locally. I won't
bore you with all the implementation safety habits, but there are many.
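
To illustrate the habit (a sketch only, not Postfix's actual string
code): keep every string in a buffer that owns and tracks its own
length, so no call site can get the arithmetic wrong.

    #include <string>
    #include <cstdio>

    // Sketch: helpers return dynamically sized strings, so callers never
    // juggle fixed char arrays or reach for strcpy()/strncpy().
    static std::string make_addr(const std::string &local,
                                 const std::string &domain)
    {
        return local + "@" + domain;  // the buffer grows as needed
    }

    int main()
    {
        std::string addr = make_addr("postmaster", "example.com");
        std::printf("%s\n", addr.c_str());  // postmaster@example.com
        return 0;
    }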

-- 
Viktor.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: New result in predicate encryption: disjunction support

2008-05-06 Thread Jonathan Katz

On Mon, 5 May 2008, Ariel Waissbein wrote:


[Moderator's note: Again, top posting is discouraged, and not editing
quoted material is also discouraged. --Perry]

Hi list,

Interesting. Great work! I had been looking for *generic* predicate
encryption for some time. Encryption over specific predicates is much
older. Malware (e.g., viruses) and software protection schemes have been
using some sort of predicate encryption or trigger for over two
decades in order to obfuscate code. For example, an old virus used to
scan hard drives looking for BBS configuration files in a similar
manner, and some software protection schemes have encrypted pieces of
code that are decrypted only if some integrity checks (predicates) over
other pieces of the program are passed.

Triggers/predicates are very promising. Yet, they are only useful in
certain applications, since eavesdropping one decryption is enough to
recover the keys and plaintext.

I co-authored a paper where we used this same concept in a software
protection application ([1]) and later we formalized this concept, which
we called secure triggers, in a paper eventually published at TISSEC
([2]). We were only able to construct triggers for very specific
predicate families, e.g.,
 - p(x)=1 iff x=I for some I in {0,1}^k
 - q(x,y,z,...)=1 iff x=I_1, y=I_2, z=I_3,...; and finally
 - r(x)=1 iff x_{j_1}=b_1,...,x_{j_k}=b_k for some b_1,...,b_k in {0,1}
   and indexes j_1,...,j_k (|x|=k).
While these predicates do not cover arbitrarily large families, they
are implemented by efficient algorithms and require assuming only the
existence of IND-CPA secure symmetric ciphers. In [2] we came up with
more applications besides software protection ;)
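
For the simplest family above, p(x)=1 iff x=I, the structure can be
sketched as hash-and-compare -- an illustration only, not the IND-CPA
construction from [2]; SHA-256 (via OpenSSL) stands in for the
primitives:

    #include <openssl/sha.h>
    #include <cstring>
    #include <string>

    // Trigger for p(x) = 1 iff x = I: store tag = SHA256('T' || I) and
    // ct = secret XOR SHA256('K' || I). Only an input equal to I
    // recognizes the tag and recovers the secret.
    struct Trigger {
        unsigned char tag[SHA256_DIGEST_LENGTH];
        unsigned char ct[SHA256_DIGEST_LENGTH];  // secret capped at 32 bytes
    };

    static void hash_with_prefix(char prefix, const std::string &x,
                                 unsigned char out[SHA256_DIGEST_LENGTH])
    {
        std::string buf(1, prefix);
        buf += x;
        SHA256(reinterpret_cast<const unsigned char *>(buf.data()),
               buf.size(), out);
    }

    Trigger make_trigger(const std::string &I,
                         const unsigned char secret[SHA256_DIGEST_LENGTH])
    {
        Trigger t;
        hash_with_prefix('T', I, t.tag);
        unsigned char pad[SHA256_DIGEST_LENGTH];
        hash_with_prefix('K', I, pad);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            t.ct[i] = secret[i] ^ pad[i];
        return t;
    }

    // Returns true and fills 'secret' only when p(x) = 1, i.e. x = I.
    bool fire(const Trigger &t, const std::string &x,
              unsigned char secret[SHA256_DIGEST_LENGTH])
    {
        unsigned char tag[SHA256_DIGEST_LENGTH], pad[SHA256_DIGEST_LENGTH];
        hash_with_prefix('T', x, tag);
        if (std::memcmp(tag, t.tag, sizeof tag) != 0)
            return false;              // predicate false: return nothing
        hash_with_prefix('K', x, pad);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            secret[i] = t.ct[i] ^ pad[i];
        return true;
    }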

[1] Diego Bendersky, Ariel Futoransky, Luciano Notarfrancesco, Carlos
Sarraute and Ariel Waissbein. Advanced Software Protection Now. Core
Security Technologies Tech report.
http://www.coresecurity.com/index.php5?module=ContentMod&action=item&id=491

[2] Ariel Futoransky, Emiliano Kargieman, Carlos Sarraute, Ariel
Waissbein. Foundations and applications for secure triggers. ACM TISSEC,
Vol 9(1) (February 2006).

Cheers,
Ariel


Predicate encryption sounds very different from the work you are 
referencing above. (In particular, as we discuss in the paper, predicate 
encryption for equality tests is essentially identity-based encryption.) 
I refer you to the Introduction and Definition 2.1 of our paper, which 
should give a pretty good high-level overview.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Comments on SP800-108

2008-05-06 Thread Peter Gutmann
Jack Lloyd [EMAIL PROTECTED] writes:

As a standard, this specification is a disaster.

Somewhat more strongly worded than my comments :-), but I had the same
feeling: Why yet another bunch of arbitrary PRF/KDFs to implement?  We now
have ones for SSL, for TLS, for SSH, for IKE, for PGP, for S/MIME, for... well
I don't know every crypto protocol in existence but I'm sure there's plenty
more.  What's wrong with PBKDF2, which seems to do the job quite nicely?
Whoever dies with the most KDFs wins?

There just doesn't seem to be any reason for this document to exist except
NIH.  PBKDF2 is a well-specified KDF, is relatively easy to implement (and
implement in an interoperable manner), has been around for years, and has
numerous interoperable implementations, including OSS ones if you don't want
to implement it yourself.  What's the point of SP800-108?  What
requirement/demand is this meeting?
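
For comparison, here is the whole client side of PBKDF2 via OpenSSL
(a sketch; the password, salt, and iteration count are placeholders):

    #include <openssl/evp.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        // Placeholder inputs; real applications use a random, stored
        // salt and an iteration count tuned to their hardware.
        const char *pass = "correct horse";
        const unsigned char salt[] = "NaCl";
        unsigned char key[16];  // derive a 128-bit key

        // PBKDF2-HMAC-SHA1 as specified in PKCS #5 v2.0 / RFC 2898.
        if (!PKCS5_PBKDF2_HMAC_SHA1(pass, (int)std::strlen(pass),
                                    salt, (int)(sizeof salt - 1),
                                    2048, (int)sizeof key, key))
            return 1;

        for (size_t i = 0; i < sizeof key; i++)
            std::printf("%02x", key[i]);
        std::printf("\n");
        return 0;
    }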

Peter.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-06 Thread James A. Donald

Perry E. Metzger wrote:

 What you can't do, full stop, is
 know that there are no unexpected security related behaviors in the
 hardware or software. That's just not possible.


Ben Laurie wrote:
Rice's theorem says you can't _always_ solve this problem. It says 
nothing about figuring out special cases.


True, but the propensity of large teams of experts to issue horribly 
flawed protocols, and for the flaws in those protocols to go 
undiscovered for many years, despite the fact that once discovered they 
look glaringly obvious in retrospect, indicates that this problem, 
though not provably always hard, is in practice quite hard.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL and Malicious Hardware/Software

2008-05-06 Thread Arcane Jill

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
On Behalf Of Steven M. Bellovin

Sent: 03 May 2008 00:51
To: Arcane Jill
Cc: cryptography@metzdowd.com
Subject: Re: SSL and Malicious Hardware/Software


  I can't think of a great way of alerting the user,

 I would be alerted immediately, because I'm using the Petname Tool
 Firefox plugin.

 For an unproxied site, I get a small green window with my own choice
 of text in it (e.g. Gmail if I'm visiting https://mail.google.com).
 If a proxy were to insert itself in the middle, that window would turn
 yellow, and the message would change to (untrusted).

Assorted user studies suggest that most users do not notice the color
of random little windows in their browsers...




The point is that the plugin does not trust the browser's list of installed 
CAs. The only thing it trusts is the fingerprint of the certificate. If the 
fingerprint is one that you personally (not your browser) have approved in 
the past, then the plugin is green. If not, the plugin is yellow.


Without this plugin, identifying proxies is hard, because the proxy certificate 
will likely be installed in your browser, so it will just automatically pass 
the usual SSL checks and appear to you as an authenticated site. If you 
have an expectation that your web traffic will not be eavesdropped en route, 
then the sudden appearance of a proxy can violate that expectation.


On the other hand, a system which checks /only/ that the certificate 
fingerprint is what you expect it to be does not suffer from the same 
disadvantage. This is a technical difference. There's more to it than just the 
color of the warning sign! (...though I do concede, a Red Alert siren would 
probably get more attention :-) ).
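
The plugin's check can be sketched in a few lines of OpenSSL: digest the
presented certificate and compare it against the fingerprint the user
approved earlier, ignoring the CA list entirely. (A sketch; where the
pinned digest is stored is up to the tool.)

    #include <openssl/ssl.h>
    #include <openssl/x509.h>
    #include <openssl/evp.h>
    #include <cstring>

    // Returns true iff the peer certificate of 'ssl' matches the SHA-1
    // fingerprint the user personally approved. No CA list is involved.
    bool peer_matches_pin(SSL *ssl, const unsigned char *pinned,
                          unsigned int pinned_len)
    {
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert == nullptr)
            return false;                 // no certificate presented

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;
        int ok = X509_digest(cert, EVP_sha1(), md, &md_len);
        X509_free(cert);

        return ok && md_len == pinned_len
                  && std::memcmp(md, pinned, md_len) == 0;
    }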


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


reminder of upcoming deadline

2008-05-06 Thread dan

Call for Participation

MetriCon 3.0
Third Workshop on Security Metrics
Tuesday, 29 July 2008, San Jose, California

Overview

Security metrics -- an idea whose time has come. No matter whether you 
read the technical or the business press, there is a desire for 
converting security from a world of adjectives to a world of numbers. 
The question is, of course, how exactly to do that. The advantage of 
starting early is, as ever, harder problems but a clearer field, though 
it is very nearly too late to start early. MetriCon is where hard 
progress is made and harder problems brought forward.

The MetriCon Workshops offer lively, practical discussion in the area of 
security metrics. It is a, if not the, forum for quantifiable approaches 
and results to problems afflicting information security today, with a 
bias towards practical, specific implementations. Topics and 
presentations will be selected for their potential to stimulate 
discussion in the Workshop. Past events are detailed here [1] and here 
[2]; see, especially, the meeting Digests on those pages.

MetriCon 3.0 will be a one-day event, Tuesday, July 29, 2008, in San 
Jose, California, USA. The Workshop begins first thing in the morning, 
meals are taken in the meeting room, and work/discussion extends into 
the evening. As this is a workshop, attendance is by invitation (and 
limited to 60 participants). Participants are expected to come with 
findings, to come with problems, or, better still, both. Participants 
should be willing to discuss what they have and need, i.e., to address 
the group in some fashion, formally or not. Preference will naturally be 
given to the authors of position papers/presentations who have actual 
work in progress.

Presenters will each have a short 10-15 minutes to present his or her 
idea, followed by another 10-15 minutes of discussion. If you would 
like to propose a panel or a group of related presentations on different 
approaches to the same problem, then please do so. Also consistent with 
a Workshop format, the Program Committee will be steered by what sorts 
of proposals come in response to this Call.

Goals and Topics

Our goal is to stimulate discussion of, and thinking about, security 
metrics and to do so in ways that lead to realistic, early results of 
lasting value. Potential attendees are invited to submit position papers 
to be shared with all, with or without discussion on the day of the 
Workshop. Such position papers are expected to address security metrics 
in one of the following categories:

Benchmarking of security technologies
Empirical studies in specific subject matter areas
Financial planning
Long-term trend analysis and forecasts
Metrics definitions that can be operationalized
Security and risk modeling including calibrations
Tools, technologies, tips, and tricks
Visualization methods both for insight and lay audiences
Data and analyses emerging from ongoing metrics efforts
Other novel areas where security metrics may apply

Practical implementations, real world case studies, and detailed models 
will be preferred over broader models or general ideas.

How to Participate

Submit a short position paper or description of work done or ongoing. 
Your submission must be brief -- no longer than five (5) paragraphs or 
presentation slides. Author names and affiliations should appear first 
in or on the submission. Submissions may be in PDF, PowerPoint, HTML, or 
plaintext email and must be submitted to metricon3 AT 
securitymetrics.org. These requests to participate are due no later than 
noon GMT, Monday, May 12, 2008 (a hard deadline).

The Program Committee will invite both attendees and presenters. 
Participants of either sort will be notified of acceptance quickly -- by 
June 2, 2008. Presenters who want hardcopy materials to be distributed 
at the Workshop must provide originals of those materials to the Program 
Committee by July 21, 2008. All slides, position papers, and what-not 
will be made available to all participants at the Workshop. No formal 
academic proceedings are intended, but a digest of the meeting will be 
prepared and distributed to participants and the general public. 
(Digests for previous MetriCon meetings are on the past event pages 
mentioned above.) Plagiarism is dishonest, and the organizers of this 
Workshop will take appropriate action if dishonesty of this sort is 
found. Submission of recent, previously published work as well as 
simultaneous submissions to multiple venues is entirely acceptable, but 
only if you disclose this in your proposal.

Location

MetriCon 3.0 will be co-located with the 17th USENIX Security Symposium 
at the Fairmont Hotel in San Jose, California.

Cost

$225 all-inclusive of meeting space, materials preparation, and meals 
for the day.

Important Dates

Requests to participate: by May 12, 2008
Notification of acceptance: by June 2, 2008
Materials for distribution: by July 21, 2008

Workshop Organizers

Dan Geer, Geer Risk Services, 

Re: New result in predicate encryption: disjunction support

2008-05-06 Thread Ariel Waissbein
Jonathan Katz wrote:
 On Mon, 5 May 2008, Ariel Waissbein wrote:
 
 [Moderator's note: Again, top posting is discouraged, and not editing
 quoted material is also discouraged. --Perry]

 Hi list,

 Interesting. Great work! I had been looking for *generic* predicate
 encryption for some time. Encryption over specific predicates is much
 older. Malware (e.g., viruses) and software protection schemes have been
 using some sort of predicate encryption or trigger for over two
 decades in order to obfuscate code. For example, an old virus used to
 scan hard drives looking for BBS configuration files in a similar
 manner, and some software protection schemes have encrypted pieces of
 code that are decrypted only if some integrity checks (predicates) over
 other pieces of the program are passed.

 Triggers/predicates are very promising. Yet, they are only useful in
 certain applications, since eavesdropping one decryption is enough to
 recover the keys and plaintext.

 I co-authored a paper where we used this same concept in a software
 protection application ([1]) and later we formalized this concept, which
 we called secure triggers, in a paper eventually published at TISSEC
 ([2]). We were only able to construct triggers for very specific
 predicate families, e.g.,
  - p(x)=1 iff x=I for some I in {0,1}^k
  - q(x,y,z,...)=1 iff x=I_1, y=I_2, z=I_3,...; and finally
  - r(x)=1 iff x_{j_1}=b_1,...,x_{j_k}=b_k for some b_1,...,b_k in {0,1}
    and indexes j_1,...,j_k (|x|=k).
 While these predicates do not cover arbitrarily large families, they
 are implemented by efficient algorithms and require assuming only the
 existence of IND-CPA secure symmetric ciphers. In [2] we came up with
 more applications besides software protection ;)

 [1] Diego Bendersky, Ariel Futoransky, Luciano Notarfrancesco, Carlos
 Sarraute and Ariel Waissbein. Advanced Software Protection Now. Core
 Security Technologies Tech report.
 http://www.coresecurity.com/index.php5?module=ContentMod&action=item&id=491


 [2] Ariel Futoransky, Emiliano Kargieman, Carlos Sarraute, Ariel
 Waissbein. Foundations and applications for secure triggers. ACM TISSEC,
 Vol 9(1) (February 2006).

 Cheers,
 Ariel
 
 Predicate encryption sounds very different from the work you are
 referencing above. (In particular, as we discuss in the paper, predicate
 encryption for equality tests is essentially identity-based encryption.)
 I refer you to the Introduction and Definition 2.1 of our paper, which
 should give a pretty good high-level overview.
 

Hi Jonathan,

and thanks for taking the time to answer. I had already read the
Introduction and had a quick -- I admit -- read over the paper before
posting to the list. I think that the main difference is the
applications we are looking at (and I know Sahai's earlier work in
obfuscation). Take a look at the first three sentences of our article:

 Fix a bitstring that we regard as a secret. Let a family of predicates
 be given, and secretly draw a predicate from this family according to a
 known distribution. Think of predicates as functions with range in
 {true, false}. We consider algorithms that return the secret if their
 input evaluates to true on the chosen predicate, else they return
 nothing.

Of course, the main difference is that one must hold SK (and f) in order
to decrypt messages according to the predicate encryption scheme. Note
that if the adversary is given the algorithm i \mapsto SK_{f_i}, then
predicate encryption turns out to be similar to generic secure triggers.
However, we didn't cover predicates evaluating inner products, so that's
what caught my interest, and why I want to analyze how your work applies
to other problems (and why I think that the schemes are similar).

Cheers,
Ariel

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread Steven M. Bellovin
On Sun, 04 May 2008 11:22:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

 Steven M. Bellovin wrote:
  On Sat, 03 May 2008 17:00:48 -0400
  Perry E. Metzger [EMAIL PROTECTED] wrote:
  
  [EMAIL PROTECTED] (Peter Gutmann) writes:
  I am left with the strong suspicion that SSL VPNs are easier to
  configure and use because a large percentage of their user
  population simply is not very sensitive to how much security is
  actually provided.
  They're easier to configure and use because most users don't
  want to have to rebuild their entire world around PKI just to set
  up a tunnel from A to B.
  I'm one of those people who uses OpenVPN instead of IPSEC, and I'm
  one of the people who helped create IPSEC.
 
  Right now, to use SSH to remotely connect to a machine using public
  keys, all I have to do is type ssh-keygen and copy the locally
  generated public key to a remote machine's authorized keys file.
  When there is an IPSEC system that is equally easy to use I'll
  switch to it.
 
  Until then, OpenVPN let me get started in about five minutes, and
  the fact that it is less than completely secure doesn't matter
  much to me as I'm running SSH under it anyway.
 
  There's a technical/philosophical issue lurking here.  We tried to
  solve it in IPsec; not only do I think we didn't succeed, I'm not at
  all clear we could or should have succeeded.
  
  IPsec operates at layer 3, where there are (generally) no user
  contexts.  This makes it difficult to bind IPsec credentials to a
  user, which means that it inherently can't be as simple to
  configure as ssh.
  
  Put another way, when you tell an sshd whom you wish to log in as,
  it consults that user's home directory and finds an authorized_keys
  file. How can IPsec -- or rather, any key management daemon for
  IPsec -- do that?  Per-user SPDs?  Is this packet for port 80 for
  user pat or user chris?
  
  I can envision ways around this (especially if we have an IP address
  per user of a system -- I've been writing about fine-grained IP
  address assignment for years), but they're inherently a lot more
  complex than ssh.
 
 I don't see why.
 
 The ssh server determines who the packets are for from information
 sent to it by the ssh client.
 
 The ssh client knows on whose behalf it is acting by virtue of being 
 invoked by that user (I'll admit this is a simplification of the most 
 general case, but I assert my argument is unaffected), and thus is
 able to include the information when it talks to the server.
 
 Similarly, the client end of an IPSEC connection knows who opened the 
 connection and could, similarly, convey that information. That data
 may not be available in some OSes by the time it gets to the IPSEC
 stack, but that's a deficiency of the OS, not a fundamental problem.
 
The problem is more on the server end.




--Steve Bellovin, http://www.cs.columbia.edu/~smb

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread Steven M. Bellovin
On Sat, 03 May 2008 19:50:01 -0400
Perry E. Metzger [EMAIL PROTECTED] wrote:
 
 Almost exclusively the use for such things is nailing up a tunnel to
 bring someone inside a private network. For that, there is no need for
 per user auth -- the general assumption is that the remote box is a
 single user laptop or something similar anyway. You really just want
 to verify that the remote host has a particular private key, and if it
 does, you nail up a tunnel to it (possibly allocating it a local IP
 address in the process). That solves about 95% of the usage scenarios
 and it requires very little configuration. It also covers virtually
 all use of IPSec I see in the field.
 
 Again, there are more complex usage scenarios, and it may be more
 complicated to set one of *those* up, but it is a shame that it is
 difficult to do the simple stuff.
 
So here's an interesting experiment.  Part one: Take a common IPsec
implementation -- Linux, *BSD, Windows, what have you.  Assume this
common scenario: laptop connecting to a corporate server.  Assume a
user authentication credential.  (I'd prefer that that be a public/
private key pair, for many reasons, not the least of which is the bug
in IKEv1 with main mode and shared secrets.)  Do not assume a 1:1 ratio
between laptops and internal IP addresses, because such servers are
frequently underprovisioned.  Challenge: design -- and implement -- a
*simple* mechanism by which the client user can set up the VPN
connection, both on the client and on the server.  This part can
happen while the client is physically on the corporate net.  Variant A:
the VPN server is a similar box to which the client has login-grade
access. Variant B: the VPN server is something like a restricted-access
Cisco box, in which case a trusted proxy is probably needed.  User
setup should be something like 'configvpn cs.columbia.edu', where I
supply my username and authenticator.  User connection should be
'startvpn cs.columbia.edu' (or, of course, the GUI equivalent); all I
supply is some sort of authenticator.  Administrator setup should be a
list of authorized users, and probably an IP address range to use
(though having the VPN server look like a DHCP relay would be cool).

Experiment part two: implement remote login (or remote IMAP, or remote
Web with per-user privileges, etc.) under similar conditions.  Recall
that being able to do this was a goal of the IPsec working group.

I think that part one is doable, though possibly the existing APIs are
incomplete.  I don't think that part two is doable, and certainly not
with high assurance.  In particular, with TLS the session key can be
negotiated between two user contexts; with IPsec/IKE, it's negotiated
between a user and a system.  (Yes, I'm oversimplifying here.)

--Steve Bellovin, http://www.cs.columbia.edu/~smb

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


ADMIN: posting standards

2008-05-06 Thread Perry E. Metzger

Just a few reminders from your moderator about posting etiquette:

0) Text only, please. HTML and text encoded in weird ways like base-64,
   as well as MIME multiparts, are a big pain in the neck. I generally
   just reject them rather than repairing them.
1) Please do not top post when replying to other people.
2) If you're replying to someone else's email, edit down the quoted
   text to the minimum needed for comprehension.
3) Try to be concise.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread Jack Lloyd
On Tue, May 06, 2008 at 03:40:46PM, Steven M. Bellovin wrote:

 In particular, with TLS the session key can be negotiated between
 two user contexts; with IPsec/IKE, it's negotiated between a user
 and a system.  (Yes, I'm oversimplifying here.)

Is there any reason (in principle) that IPsec/IKE could not be done
entirely in userspace / application space, though?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


User interface, security, and simplicity

2008-05-06 Thread David Wagner

In article [EMAIL PROTECTED] you write:
On Sun, May 04, 2008 at 10:24:13PM -0400, Thor Lancelot Simon wrote:
 I believe that those who supply security products have a responsibility
 to consider the knowledge, experience, and tendencies of their likely
 users to the greatest extent to which they're able, and supply products
 which will function properly _as users are likely to apply them_.

The TLS support in Postfix tries to behave sensibly with easy settings.

- Cipher list selection is indirect, via grades: "export", "low",
"medium" and "high". The actual ciphers for each grade are buried
in parameters users are advised not to mess with.

- The cipher grade for opportunistic TLS is "export", but if you single
out a destination for mandatory TLS, the grade rises to "medium".

[..]

- With the upcoming EECDH support, users don't choose curves
directly, they again choose a security grade, and the corresponding
curves are configurable via parameters they are not expected to
ever look at or modify.

This struck me as poor design, not good design.  Asking the user to
make these kinds of choices seems like the kind of thing that only a
cryptographer could consider sensible.  In this day and age, software
should not be asking users to choose ciphers.  Rather, the software
should just pick a sensible high-grade security level (e.g., AES-128,
RSA-1024 or RSA-2048) and go with that, and avoid bothering the user.
Why even offer "low" as an option?  (And this "export" business sounds
like a throwback to a decade ago; why is that still there?)

Good crypto is cheap.  Asking a user is expensive and risky.

So I think there should be a broad design bias towards *implicit* correct
behaviour in all system features, with rope available for advanced users
to *explicitly* craft more complex use-cases. Once you have that, practical
security is not too difficult.

Amen.  I know of quite a few software packages that could use more of
that philosophy.

The same is true in the source code: unsafe practices are avoided
globally (e.g. both strcpy() and strncpy() are absent, together with
fixed-size automatic buffers) rather than used with care locally. I won't
bore you with all the implementation safety habits, but there are many.

It's too bad that today such elementary practices are something to brag
about.  Perhaps one day we'll be lucky enough that the answer to these
questions becomes more like "of course we use safe programming practices;
what kind of incompetent amateurs do you take us for?".

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread Nicolas Williams
On Tue, May 06, 2008 at 03:40:46PM, Steven M. Bellovin wrote:
 Experiment part two: implement remote login (or remote IMAP, or remote
 Web with per-user privileges, etc.) under similar conditions.  Recall
 that being able to do this was a goal of the IPsec working group.
 
 I think that part one is doable, though possibly the existing APIs are
 incomplete.  I don't think that part two is doable, and certainly not
 with high assurance.  In particular, with TLS the session key can be
 negotiated between two user contexts; with IPsec/IKE, it's negotiated
 between a user and a system.  (Yes, I'm oversimplifying here.)

Connection latching and connection-oriented IPsec APIs can address
this problem.

Solaris, and at least one other IPsec implementation (OpenSwan?  I
forget), make sure that all packets for any one TCP connection (or UDP
connection) are protected (or bypassed) the same way during their
lifetime.  The same way -- by similar SAs, that is, SAs with the same
algorithms, same peers, and various other parameters.

A WGLC is about to start in the IETF BTNS WG on an I-D that describes
this.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread Victor Duchovni
On Tue, May 06, 2008 at 11:40:53AM -0700, David Wagner wrote:

 - With the upcoming EECDH support, users don't choose curves
 directly, they again choose a security grade, and the corresponding
 curves are configurable via parameters they are not expected to
 ever look at or modify.
 
 This struck me as poor design, not good design.  Asking the user to
 make these kinds of choices seems like the kind of thing that only a
 cryptographer could consider sensible.

They are not *asked* to make any cipher choices. They are able to make:

- no explicit choice, and get sensible default behaviour

- a high-level choice (secure verification + high-grade ciphers)
without having to spell out the gory details of what these mean

- an extremely detailed specification of all the details.

 In this day and age, software
 should not be asking users to choose ciphers.

The users in question are email administrators, not end users, and you
missed my point. They are not asked to choose ciphers; these are chosen
for them, and the default choice is even context dependent, so you get
sensible combinations of security properties:

- Opportunistic TLS allows SSLv2 and export ciphers.

- Mandatory TLS enforces SSLv3/TLSv1 and medium or high
ciphers.


 Rather, the software
 should just pick a sensible high-grade security level (e.g., AES-128,
 RSA-1024 or RSA-2048) and go with that

This is what is done (using OpenSSL's HIGH, MEDIUM, ... selectors).

 and avoid bothering the user.
 Why even offer "low" as an option?  (And this "export" business sounds
 like a throwback to a decade ago; why is that still there?)

You don't know how TLS is used with SMTP. Most TLS is opportunistic, and
plain text is used if TLS is absent. In such an environment insisting
on 128 bits is silly; even 40 bits is better than plain text.

 Good crypto is cheap.  Asking a user is expensive and risky.

Breaking interoperability by limiting cipher selection and causing mail
to queue is not cheap.

 So I think there should be a broad design bias towards *implicit* correct
 behaviour in all system features, with rope available for advanced users
 to *explicitly* craft more complex use-cases. Once you have that, practical
 security is not too difficult.
 
 Amen.  I know of quite a few software packages that could use more of
 that philosophy.
 
 The same is true in the source code: unsafe practices are avoided
 globally (e.g. both strcpy() and strncpy() are absent, together with
 fixed-size automatic buffers) rather than used with care locally. I won't
 bore you with all the implementation safety habits, but there are many.
 
 It's too bad that today such elementary practices are something to brag
 about.  Perhaps one day we'll be lucky enough that the answer to these
 questions becomes more like "of course we use safe programming practices;
 what kind of incompetent amateurs do you take us for?".

Practices are culture not technology, and it is difficult to displace
existing cultures with new ones :-(

-- 
Viktor.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-06 Thread Jon Callas


On May 6, 2008, at 1:14 AM, James A. Donald wrote:


Perry E. Metzger wrote:

 What you can't do, full stop, is
 know that there are no unexpected security related behaviors in the
 hardware or software. That's just not possible.


Ben Laurie wrote:
Rice's theorem says you can't _always_ solve this problem. It says  
nothing about figuring out special cases.


True, but the propensity of large teams of experts to issue horribly  
flawed protocols, and for the flaws in those protocols to go  
undiscovered for many years, despite the fact that once discovered  
they look glaringly obvious in retrospect, indicates that this  
problem, though not provably always hard, is in practice quite hard.


Yes, but.

I tend to agree with Marcos, Ben, and others.

It is certainly true that detecting an evil actor is ultimately  
impossible because it's equivalent to a non-computable function. It  
doesn't matter whether that actor is a virus, an evil vm, evil  
hardware, or whatever.


That doesn't mean that you can't be successful at virus scanning or  
other forms of evil detection. People do that all the time.


Ben perhaps over-simplified by noting that a single gate isn't  
applicable to Rice's Theorem, but he pointed the way out. The way out  
is that you simply declare that if a problem doesn't halt before time  
T, or can't find a decision before T, you make an arbitrary decision.  
If you're optimistic, you just decide it's good. If you're  
pessimistic, you decide it's bad. You can even flip a coin.


These correspond to the adage I last heard from Dan Geer that you can  
make a secure system either by making it so simple you know it's  
secure, or so complex that no one can find an exploit.


So it is perfectly reasonable to turn a smart analyzer like Marcos on  
a system, and check in with him a week later. If he says, "Man, this 
thing is so hairy that I can't figure out which end is up," then 
perhaps it is a reasonable decision to just assume it's flawed.  
Perhaps you give him more time, but by observing the lack of a halt or  
the lack of a decision, you know something, and that feeds into your  
pessimism or optimism. Those are policies driven by the data. You just  
have to decide that no data is data.


The history of secure systems has plenty of examples of things that  
were so secure they were not useful, or so useful they were not  
secure. You can, for example, create a policy system that is not 
Turing-complete, and go on to make it decidably secure. The problem 
is that people will want to do more cool things with your system than it 
supports, so they will extend it. It's possible they'll extend it so 
it is more-or-less secure, but usable. It's likely they'll make it 
insecure, and decidably so.


Jon



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread James A. Donald

  The same is true in the source code: unsafe
  practices are avoided globally (e.g. both strcpy()
  and strncpy() are absent, together with fixed-size
  automatic buffers) rather than used with care
  locally. I won't bore you with all the
  implementation safety habits, but there are many.

David Wagner wrote:
 It's too bad that today such elementary practices are
 something to brag about.  Perhaps one day we'll be
 lucky enough that the answer to these questions
 becomes more like "of course we use safe programming
 practices; what kind of incompetent amateurs do you
 take us for?".

Dynamic strings tempt people to forget about enforcing
length limits and forget about correctly handling the
case when the length limits are exceeded.

There is no such thing as a string with no maximum
length, merely strings of UNKNOWN maximum length.  If
one has dynamic buffers and fully dynamic strings, it is
always possible for an attacker to discover the
previously UNKNOWN maximum length, and exceed it,
causing the program to fail in a manner likely to be
useful to the attacker.

In any program subject to attack, all strings should
have known, documented, and enforced maximum length, a
length large enough for all likely legitimate uses, and
no larger.

If enforcing length limits, it is frequently advisable,
and often necessary, to use not strcpy or strncpy but
routines such as _mbscpy_s: string-manipulation routines
which can, and frequently do, employ buffers of fixed
and known length, sometimes pre-allocated.

In C++, incomprehensibly obscure functions such as
_mbscpy_s should never be called directly, but rather
called through a template library that automatically
does the sensible thing when the destination parameter
is a fixed length buffer, and can be relied upon to
object when commanded to do the stupid thing.
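
Such a wrapper can be sketched as follows (a hypothetical
C++ illustration, not any particular library): the
template deduces the destination's length at compile time
and enforces it, while a bare pointer, which carries no
length, simply does not compile.

    #include <cstring>
    #include <cstddef>

    // Sketch: copy into a fixed-length buffer whose size the compiler
    // can see. Truncates rather than overflows; always NUL-terminates.
    template <std::size_t N>
    void copy_str(char (&dst)[N], const char *src)
    {
        std::size_t n = std::strlen(src);
        if (n >= N)
            n = N - 1;            // enforce the known, documented maximum
        std::memcpy(dst, src, n);
        dst[n] = '\0';
    }

    int main()
    {
        char name[16];
        copy_str(name, "a string that is much too long");  // truncated

        // char *p = name;
        // copy_str(p, "oops");  // will not compile: a bare pointer
        //                       // carries no length to deduce
        return 0;
    }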

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]