Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Victor Duchovni
On Sun, Feb 12, 2006 at 04:45:33PM +, Ben Laurie wrote:

 Werner Koch wrote:
  On Sat, 11 Feb 2006 12:36:52 +0100, Simon Josefsson said:
  
1) It invokes exit, as you have noticed.  While this only happens
   in extreme and fatal situations, and not during runtime,
   it is not that serious.  Yet, I agree it is poor design to
   do this in a library.
  
  I disagree strongly here.  Any code which detects an impossible state
  or an error clearly due to a programming error by the caller should
  die as soon as possible.
 
 Quite so.

No, libraries don't know enough to decide what's fatal. The calling
process (trying to do an LDAP lookup via nsswitch.conf, say...) may have
other reasonable sources of data, and having the library kill it is
unacceptable.

   If you try to resolve the problem by working
  around it, you will increase code complexity and thus errors won't be
  detected.  (Some systems might provide a failsafe mechanism at a top
  layer; e.g. by voting between independently developed code).
 
 But this is not why: if you attempt to fix impossible states, the
 problem is that you cannot know why (by definition) the code is in the
 state you are trying to fix, or what else might be broken. Continuing to
 run is giving the attacker the option to make good on his exploit.

Not being able to access a resource is not an impossible
state. Impossible states are corruption of internal data structures,
invalid function arguments, ... Failure to obtain seed data is an
error and needs to be reported as such.




Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Victor Duchovni
On Mon, Feb 13, 2006 at 11:29:00AM +0100, Simon Josefsson wrote:

 However, looking at the code, it is possible for Postfix to handle
 this.  They could have installed a log handler with libgcrypt, and
 made sure to shut down gracefully if the log level is FATAL.  The
 recommendation to avoid GnuTLS because libgcrypt calls exit suggests
 that the Postfix developers didn't care to investigate how to use
 GnuTLS and libgcrypt properly.  So I don't think there is any real
 reason to change code in libgcrypt here.  Postfix could be changed, if
 they care about GnuTLS/libgcrypt.
 

Yeah, right, really easy when GnuTLS is called from the system LDAP
libraries... In any case the only way for the handler to avoid
process death is to longjmp() to a context created before calling into
GnuTLS/libgcrypt... not a particularly robust solution.

void
_gcry_log_fatal( const char *fmt, ... )
{
    va_list arg_ptr;

    va_start( arg_ptr, fmt );
    _gcry_logv( GCRY_LOG_FATAL, fmt, arg_ptr );
    va_end( arg_ptr );
    abort(); /* never called, but it makes the compiler happy */
}

The handler is invoked in _gcry_logv()... The Postfix TLS functionality
is built over OpenSSL (not GnuTLS), and OpenSSL has an error stack, which
the application can process as it sees fit. The libgcrypt approach to
error reporting is not acceptable.
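
For illustration, here is roughly what that longjmp()-based escape hatch
could look like (a minimal sketch; the handler and jump buffer are
hypothetical, and, as noted above, jumping out of a library callback is
not robust):

#include <setjmp.h>
#include <stdarg.h>
#include <stdio.h>
#include <gcrypt.h>

static jmp_buf fatal_env;            /* hypothetical recovery point */

/* Log handler: report the message, then jump back to the caller on
 * FATAL/BUG levels instead of letting libgcrypt proceed to exit(). */
static void
log_handler (void *opaque, int level, const char *fmt, va_list args)
{
    (void) opaque;
    vfprintf (stderr, fmt, args);
    if (level == GCRY_LOG_FATAL || level == GCRY_LOG_BUG)
        longjmp (fatal_env, 1);
}

int
do_tls_work (void)
{
    gcry_set_log_handler (log_handler, NULL);
    if (setjmp (fatal_env) != 0)
        return -1;                   /* report failure to our own caller */
    /* ... call into GnuTLS/libgcrypt here ... */
    return 0;
}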




Re: general defensive crypto coding principles

2006-02-14 Thread Jack Lloyd
On Tue, Feb 14, 2006 at 03:24:09AM +1300, Peter Gutmann wrote:

 1. There are a great many special-case situations where no published protocol
fits.  As the author of a crypto toolkit, I could give you a list as long
as your arm of user situations where no existing protocol can be applied
(I'd prefer not to, because it's a lot of typing).
[...]

I'm also the author of a crypto toolkit, and I'll admit I've been involved in
creating custom security protocols more than once myself. I'm well aware that
this is a legitimate need.

 It's better to design a system that can be used by the average user than one
 that's brittle enough that only geniuses can safely employ it.

I think the source of our different views on this is a result of expectations
with regards to what your average programmer is capable of in terms of secure
protocol design. I have done reviews on probably a dozen or so products that
had a custom crypto component of one sort or another, and there were often
really trivial problems (typically the well-known and well-documented ones that
people have been getting wrong for decades).

At this point I'm generally of the opinion that there are maybe 5% of
programmers who will be careful enough to get it right, and the rest will get
it spectacularly wrong because they won't bother to do anything more than
perhaps skim Applied Cryptography. So, if you're going to mandate just one
technique for everyone, you're better off (IMO) using something that is a bit
trickier but has better optimal bounds, because the 5% will still probably get
it right (and their protocols will be better off for it) and the rest are too
busy getting it wrong in other ways to bother implementing the authenticated
encryption mode incorrectly.

In short, I find it extremely optimistic to think that there is any substantial
population of programmers who could correctly design and implement a
non-trivial and secure crypto protocol without taking a reasonable amount of
time with the existing body of knowledge.

-J



choosing building blocks, was Re: general defensive crypto coding principles

2006-02-14 Thread Travis H.
On 2/13/06, Peter Gutmann [EMAIL PROTECTED] wrote:
 I would expect that typically implementors would be following a published
 standard, which would (well, one would hope) have had expert cryptographers
 check it over sometime prior to publication

Published implementations aren't immune to errors, and quite
frequently they don't even explicitly specify what/whom they're
protecting against.  Using one incorrectly or inappropriately presents
the same pitfalls as using one de novo.  There's also an absence of
easily accessible information which describes the various protocols,
their design parameters, their constraints, their threat models, their
dependencies, their availability, their patent status, their licensing
status, status with regard to prior flaws, etc.  I know this has been
asked for before, when someone got the standard "reuse a published
protocol" answer.

For example, there's a popular program called stunnel which uses
openssl to secure connections.  This ostensibly is a shim for
protecting cleartext protocols such as POP.
However, unless it does significant length padding or in some other
way (maybe compression?) decouples the length of the messages from the
length of the ciphertext transmissions, you can still probably derive
a lot of information from a version identification and the length of
the human-readable strings which follow the ^[0-9][0-9][0-9]
machine-readable responses, and knowing the standard order of
interaction.  From browsing the online documentation, I find stunnel
basically saying it's really just openssl, and from browsing the
openssl documentation, I can't figure out whether it does any length
padding at all.  I think it's unrealistic to have to read source to
find out this kind of information.
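
To make that concrete, the decoupling could be as simple as a shim padding
every record up to a fixed quantum before it reaches the TLS layer.  A
sketch (purely illustrative -- this is not something stunnel or OpenSSL does
for you, and the receiver would need the true length carried in a small
header, not shown):

#include <stdlib.h>
#include <string.h>

#define PAD_QUANTUM 256              /* illustrative bucket size */

/* Return a NUL-padded copy of msg whose length is the next multiple of
 * PAD_QUANTUM, so the ciphertext length reveals only a coarse bucket
 * rather than the exact string length. */
unsigned char *
pad_record (const unsigned char *msg, size_t len, size_t *padded_len)
{
    size_t n = ((len / PAD_QUANTUM) + 1) * PAD_QUANTUM;
    unsigned char *buf = calloc (1, n);

    if (buf == NULL)
        return NULL;
    memcpy (buf, msg, len);
    *padded_len = n;
    return buf;
}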

I think that there's a noteworthy level of skill between being able to
design a secure block cipher (what I call a cryptologist) and being a
newbie.  I think that someone with those intermediate skills can
probably cobble together existing building blocks into a decent
protocol.  They should, however, do the homework on the various
protocols and attacks, publish their protocol (to this list?) before
implementing it, run a finite-state analysis against it with the
standard assumptions as a sanity check, keep up to date on the
weaknesses of any building blocks they use, and maybe hire an expert
cryptanalyst to try and break it (of course he will probably prefer to
design it, but the premise is that won't happen).

Doing this is not exactly easy -- I had a hard time finding any
descriptions of protocols for 2-party  mutual authentication in my
limited literature several years ago when I did the crypto and
networking for a distributed HIDS.  I ended up factoring one of the
parties out (i.e. merging two parties) of a 3-party authentication
algorithm published in AC.  Speaking of which, there's an error in the
2nd edition, 5th printing, on p. 61, the Neumann-Stubblebine protocol, step (3)
--- the text is correct but the symbolic notation should read:
E_A(B,R_A,K,T_B), E_B(A,K,T_B), R_B
I have verified this against the original paper, and the error is
obvious if you think about what's going on.  I sent a correction to
the email black hole that is Schneier.  I only know of a few attacks
strictly on protocols (replay, version rollback, reflection,
MITM/chess grandmaster), and I think all are easily derived from some
simple rules in a finite state analysis (attacker can replay, attacker
can observe, attacker can modify, attacker can impersonate, etc.).  If I
am mistaken, please illuminate me.

Speaking of this makes me want to write such a set of wiki pages
somewhere.  So if anyone would kindly send me a list of protocols and
algorithms they'd like to see covered, I'll compile it and maybe fill
in some stuff on a wiki with it.  I'm sure it would be a useful
learning exercise, as well as a public service.  I'm mostly interested
in illustrating modern protocol details (e.g. SSL v3, SSHv2, ISAKMP,
WEP, WPA, WPA2, IEEE 802.1x?, Photuris?), reviewing libraries (e.g.
openssl, cryptlib), and describing the use of APIs (GSSAPI, SASL) but
I'm also interested in the strength of primitives (AES, SHA1, etc.) as
defined by recent attacks.

 It also defends against the MD5 crack, and is one of the recommended
 IETF solutions to hash problems.

Cool :)  Another idea I had was to uniquify the hashes using some sort
of machine-specific key to prevent them from being broken on a
different machine, but it'd still need to be stored on disk across
reboots.  With PK you could create a keypair, one for encrypting
(making a password hash) and one for decrypting (validating a password
hash), and you'd only have to protect one or the other.  Perhaps
making password hashes could be done offline.  Another way to slow
down brute force is to require a search for a correct input to the
password-verifying algorithm (for example, don't prepend the whole IV
to the hash).  A cracker would have to exhaustively test the input
space for every 

Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Florian Weimer
* Werner Koch:

 On Sat, 11 Feb 2006 12:36:52 +0100, Simon Josefsson said:

   1) It invokes exit, as you have noticed.  While this only happens
  in extreme and fatal situations, and not during runtime,
  it is not that serious.  Yet, I agree it is poor design to
  do this in a library.

 I disagree strongly here.  Any code which detects an impossible state
 or an error clearly due to a programming error by the caller should
 die as soon as possible.  If you try to resolve the problem by working
 around it, you will increase code complexity and thus errors won't be
 detected.  (Some systems might provide a failsafe mechanism at a top
 layer; e.g. by voting between independently developed code).

_exit in libraries is fine if you don't service multiple clients from
a single process.  However, with the advent of heavy VMs and stuff
like that, there is a trend towards serving multiple clients from a
single process (which is quite a bad idea in almost all cases, but
this view is rather unpopular).  There are also libraries which
require proper cleanup procedures, otherwise the next program start
can be quite costly (think of databases, where you want to avoid log
replay).  Some services have even been implemented following a
single-process model for more than a decade (IRC servers, for
example).

A user-defined fatal error function (which must not return) would be
a compromise, I think.  Of course, such a function should never be
called if you just see wrong or unusual input.  But with a bit of
optimism, the process could recover from an error which is not locally
recoverable (throw an exception, terminate the offending thread, and
leak the allocated resources).

Now if the library maintains global, per-process state, this is a real
problem.  You can't know for sure if this state is consistent after a
fatal error, unless you program carefully to avoid this situation.
Yet another reason to move this functionality to a separate process. 8-)



Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Werner Koch
On Sun, 12 Feb 2006 19:33:07 -0800 (PST), David Wagner said:

 Of course, it would be better for a crypto library to document this
 assumption explicitly than to leave it up to users to discover it the
 hard way, but I would not agree with the suggestion that this exit before

Actually libgcrypt does exactly this.  I have not looked at the
Postfix code in question but it sounds like stderr has been duped
to /dev/null and no log handler has been registered (e.g. to divert
logging to syslog).
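
For a daemon that diversion could look roughly like this (a sketch assuming
the usual syslog(3) interface; the priority mapping is illustrative, and it
only makes the message visible -- it does not prevent the exit on a fatal
error):

#include <stdarg.h>
#include <syslog.h>
#include <gcrypt.h>

/* Forward libgcrypt log messages to syslog instead of stderr. */
static void
gcrypt_to_syslog (void *opaque, int level, const char *fmt, va_list args)
{
    int prio = (level == GCRY_LOG_FATAL || level == GCRY_LOG_BUG)
               ? LOG_CRIT : LOG_INFO;

    (void) opaque;
    vsyslog (prio, fmt, args);
}

/* During daemon start-up:  gcry_set_log_handler (gcrypt_to_syslog, NULL); */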


Shalom-Salam,

   Werner





Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Werner Koch
On Sun, 12 Feb 2006 23:57:42 -, Dave Korn said:

   :-) Then what was EINVAL invented for?

[ Then what was assert invented for? ]

   Really it's never ok for anything, not even games, and any program that 
 fails to check error return values is simply not properly coded, full stop.

I agree. But the reality is not that of the textbooks.

   But abort()-ing in a library is also a big problem, because it
 takes control away from the main executable.  That can be a massive
 security vulnerability on Windows.  If you can get a SYSTEM-level
 service that

Huh? According to ISO C and POSIX, abort raises SIGABRT and the default
action is abnormal *process termination* - if your view is that
process termination takes away control from the main executable, I
wonder how a file can control a process (unless the kernel plays
nasty games with on-demand paging).

In my limited Windows experience abort() does terminate the process. I
have ported quite a few Unix applications natively to Windows and never
ran into the semantic problems you describe.  Anyway, Windows is strange
(atexit lists per DLL and such) but Libgcrypt is not really supported
there.

 ... receive request from client
 ... fail to service it because libgcrypt returns errors..
  return error to caller

 ... rather than for it to abort.

Being in an insane state, libgcrypt can't assure that this main loop
will continue to run - the stack might already be corrupted.  We don't
know and thus assert(!fubar).

   I'm afraid I consider it instead a weakness in your API design that you 
 have no way to indicate an error return from a function that may fail.

By design there can't be any error.  If there is an error, something
really strange has occurred, like improper chrooting.

   Perhaps libgcrypt could call abort in debug builds and return error codes 
 in production builds?

You're joking, right? I am usually quite sure that no attacker has made
it to one of the machines used for debugging. Outside in the Internet
wilderness I should then switch off all protection?  That is like
wearing a hard hat in bed and taking it off at the construction site.


Salam-Shalom,

   Werner




Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Werner Koch
On Mon, 13 Feb 2006 11:29:00 +0100, Simon Josefsson said:

 That /dev/random doesn't exist seems like a quite possible state to me.

Running Linux, this is not possible because /dev/random is guaranteed
to be available.

 Further, a library is not in a good position to report errors.  A
 user will sit there wondering why Postfix, or some other complex

I don't know where Postfix dumps the error messages from Libgcrypt:

  fd = open( name, O_RDONLY );
  if( fd == -1 )
    log_fatal ("can't open %s: %s\n", name, strerror(errno) );

I guess you need to blame postfix for this.

 recommendation to avoid GnuTLS because libgcrypt calls exit suggests
 that the Postfix developers didn't care to investigate how to use
 GnuTLS and libgcrypt properly.  So I don't think there is any real

So may I conclude that it is actually a Good Thing that in this case
libgcrypt refrained from continuing, to preserve the caller from false
security?

 I'd say that the most flexible approach for a library is to write
 thread-safe code that doesn't need access to mutexes to work properly.

Yes.  We discussed this already at length in more appropriate places.

 That seem like a poor argument to me.  It may be valid for embedded
 devices, but for most desktop PCs, Linux should provide a useful
 /dev/urandom.

I can only tell you what Ted told me years ago.


Shalom-Salam,

   Werner




Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Werner Koch
On Mon, 13 Feb 2006 03:07:26 -0500, John Denker said:

 That might lead to an argument in favor of exceptions instead of error
 codes, along the following lines:
  -- Naive code doesn't catch the exception.  However (unlike returned
   error codes) this does not cause the exception to be lost.
  -- The exception percolates up the call-tree until it is caught by
   some non-naive code (if any).
  -- If all the code is naive, then the uncaught exception terminates
   the process ... to the delight of the exit on error faction.
   However (!!!) unlike a plain old exit, throwing an exception leaves
   the door open for non-naive code to implement a nuanced response to
   the exceptional condition.

Actually, in plain C something similar is done for an internal error:
SIGABRT is raised and the top level code (or in theory any layer in
between) may catch it and try to continue.  Okay, this won't work in
practice because signal handling between independently developed code
(libraries) is guaranteed not to work correctly.

And yes, we need to discuss whether a failed open should abort
or exit.  As of now it does an exit and not an abort() but I won't
insist on this.

 Again, enough false dichotomies already!  Just because error codes are
 open to abuse doesn't mean exiting is the correct thing to do.

For Libgcrypt's usage patterns I am still convinced that it is the
right decision.


Salam-Shalom,

   Werner







Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread leichter_jerrold
|  I disagree strongly here.  Any code which detects an impossible state
|  or an error clearly due to a programming error by the caller should
|  die as soon as possible.  
| 
| That is a remarkably unprofessional suggestion.  I hope the people
| who write software for autopilots, pacemakers, antilock brakes,
| etc. do not follow this suggestion.
| 
| This just shows the dangers of over-generalization.
And *this* shows the danger of false dichotomies.

| Of course, we have to decide which is more important: integrity,
| or availability.  I suspect that in the overwhelming majority (perhaps
| all) of the cases where libgcrypt is used, integrity is more important
| than availability.  If that is true, well, if in doubt, it's better to
| fail closed than to fail open.
| 
| You rightly point out that there are important applications where
| availability is more important than integrity.  However, I suspect
| those cases are not too common when building Internet-connected desktop
| applications.
A library can't possibly know what kind of applications it will be part of!

| I think the attitude that it's better to die than to risk letting an
| attacker take control of the crypto library is defensible, in many cases.
| Of course, it would be better for a crypto library to document this
| assumption explicitly than to leave it up to users to discover it the
| hard way, but I would not agree with the suggestion that this exit before
| failing open stance is always inappropriate.
No, "the library thinks it can call exit()" is *always* inappropriate.

There are reasonable ways to deal with this kind of thing that are just as
safe, but allow general-purpose use.  For example:

1.  On an error like this, put the encrypted connection (or whatever
    it is) into a permanent error state.  Any further calls act as
    if the connection had been closed.  Any incoming or outgoing
    data is erased and discarded.  Any keying material is
    immediately erased and discarded.  Of course, return error
    statuses to the caller appropriately.  (You don't return
    error statuses?  Then you're already talking about a poor
    design.  Note that there's a world of difference between
    returning an error status *locally* and sending it over the
    wire.  The latter can turn your code into an oracle.  The
    former ... well, unless you're writing a closed-source
    library for a secret protocol and you assume your code and
    protocol can't be reverse-engineered, the local user can
    *always* get this information somehow.)

2.  When such an error occurs, throw an exception.  In a language
    that supports exceptions as such (C++, Java), use the native
    mechanism.  For languages that don't support exceptions, you
    can call a function through a pointer.  By default, the
    function can call, or simply be, exit(); but the user can
    specify his own function.  The function *must* be allowed
    to do something other than call exit()!

    In general, this technique has to be combined with technique 1.
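
In plain C, technique 2 might be sketched like this (names invented for
illustration, not any existing library's API):

#include <stdio.h>
#include <stdlib.h>

/* Fatal-error hook: defaults to exit(), but the application may install
 * its own handler, which is allowed to recover (for example by unwinding
 * with longjmp) rather than terminate. */
typedef void (*fatal_fn) (const char *msg);

static void
default_fatal (const char *msg)
{
    fprintf (stderr, "fatal: %s\n", msg);
    exit (1);
}

static fatal_fn fatal_hook = default_fatal;

void
lib_set_fatal_handler (fatal_fn fn)
{
    fatal_hook = fn ? fn : default_fatal;
}

/* Called internally when the library detects an unusable state; combined
 * with technique 1, the object involved is also marked permanently dead
 * before the hook is invoked. */
void
lib_fatal_error (const char *msg)
{
    fatal_hook (msg);
}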

Granted, a user *could* write code that leaked important information upon
being informed of an error.  But he would have to try.  And, frankly, there's
not a damn thing you can do to *prevent* that.  Most Unix systems these days
allow you to interpose functions over standard library functions.  You
think you're calling exit(), or invoking kill()?  Hah, I've replaced them with
my own functions.  So there.  (No interposition?  Patching compiled code to
change where a function call goes is pretty easy.)

Of course, all this is nonsensical for an open-source library anyway!

You're kidding yourself if you think *any* programming practice will protect
you against a programmer who needs his program to do something that you
consider a bad idea.  But the whole approach is fundamentally wrong-headed.
The user of your library is *not* your enemy.  You should be cooperating with
him, not trying to box him in.  If you treat him as your enemy, he'll either
choose another library - or find a way to work around your obstinacy.

-- Jerry




HDCP support in PCs is nonexistent now?

2006-02-14 Thread John Gilmore
http://www.firingsquad.com/hardware/ati_nvidia_hdcp_support/

HDCP is Intel-designed copy prevention that uses strong crypto to
encrypt the digital video signal on the cable between your video card
(or TV or DVD player) and your monitor.  There is no need for it --
you are seeing the signal that it is encrypting -- except for DRM.

Despite a bunch of PC graphics chips and boards having announced HDCP
support, according to the above article, it turns out that none of
them will actually work.  It looks like something slipped somewhere,
and an extra crypto-key chip needed to be added to every existing
board -- at manufacturing time.  My wild ass guess is that the
original design would have had software communicate the keys to the
board, but Hollywood has recently decided not to trust that design.

This is going to make life very interesting for the HD-DVD crowd.
Intel's grand scheme was to corrupt the PC to an extent that Hollywood
would trust movies, music, etc, to PCs.  Intel decided to learn from
an oligopoly what they know about extending a monopoly into the
indefinite future, by combining legislative bribery with technological
tricks.  Now it appears that even though they have largely succeeded
in pushing all kinds of crap into PC designs, Hollywood doesn't trust
the results enough anyway.  The result may well be that HD-DVDs that
contain movies can only be played on dedicated equipment (standalone
HD-DVD players), at least for the first few years.  Or, you'll need a
new video board, which nobody sells yet, when you buy your first
HD-DVD drive.  Or the DRM standards involved will have to be somehow
weakened.

Anybody know anything more about this imbroglio?

John

PS:  Of course, the whole thing is foolish.  DVD encryption has been
cracked for years, and circumvention tools widely distributed
worldwide, despite being too illegal to appear in out-of-the-box
products.  DVD encryption has provided exactly zero protection for DVD
revenues -- yet DVD revenues are high and rising.  In short, unless
Hollywood was lying about its motivations, DRM has so far been useless
to Hollywood.  Yet it has done great violence to consumers, to
computer architecture, to open competition, and to science.




Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread James A. Donald

 --
Werner Koch retorted:
  I disagree strongly here.  Any code which detects an impossible
  state or an error clearly due to a programming error by the caller
  should die as soon as possible.

 John Denker wrote:
 That is a remarkably unprofessional suggestion.  I hope the people
 who write software for autopilots, pacemakers, antilock brakes, etc.
 do not follow this suggestion.

If bad code halts, it will not get incorporated into production code.
If bad code produces ignored error messages, it will get incorporated
into production code, including pacemakers etc.  Therefore libraries
intended for use with pacemakers, antilock brakes, and the like,
should die on error (which in the case of antilock brakes forces a
hard reboot).

Code intended for pacemakers and the like should be error free.  If
you write libraries intended to continue after error, you are writing
on the assumption that the pacemaker code will be buggy, and we don't
really care, we are going to ship it anyway, bugs and all into other
people's chests.  People who write code for pacemakers that continues
on error should be shot.

Halt on error is a tool for achieving error free code.  Error free
code is in fact achievable for really crucial applications.  The more
crucial the application, the more reason to write code that halts on
error.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 Cau3evB8n2DnP2D8ej3FHKKnKnMeseK65pUDF346
 4FbXJRaadlYWOfMnkhNKfdLxDaKNb58AoLBUm8ox9



Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread James A. Donald

--

Libgcrypt tries to minimize these coding errors; for example there
are no error returns for the RNG - if one calls for 16 bytes of
random one can be sure that the buffer is filled with 16 bytes of
random.  Now, if the environment is not okay and Libgcrypt can't
produce that random - what else shall we do than abort the process?
This way the errors will be detected before major harm might occur.

   I'm afraid I consider it instead a weakness in your API design that you
 have no way to indicate an error return from a function that may
 fail.

The correct mechanism is exception handling.

If the caller has provided a mechanism to handle the failure, that
mechanism should catch the library generated exception.  If the caller
has provided no such mechanism, his program should terminate
ungracefully.

Unfortunately, there is no very portable support for exception
handling in C.   There is however support in C++, Corn, D, Delphi,
Objective-C, Java, Eiffel, Ocaml, Python, Common Lisp, SML, PHP and
all .NET CLS-compliant languages.

Absent exception handling, mission critical tasks should have no
exceptions, which is best accomplished by the die-on-error standard.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 Ywzx2XsxbvPNX+eeGZVUpnq16108eQo1eBvq8K1I
 46HVM7avhGKHTF4Y1SqhFSUdIsTlbJvpXX43jkvQP



RE: general defensive crypto coding principles

2006-02-14 Thread Anton Stiglic
I don't believe MtE is good advice, and I have yet to see a decent reason
why one would want to use that instead of EtM. 
Of course when we talk about EtM, the MAC should be applied over all
plaintext headers and trailers (including IV used for encryption, algorithm
identifier, protocol version, whatever).
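
For concreteness, a minimal encrypt-then-MAC sketch, written against
libgcrypt to stay with the thread's running example; the wire layout, key
sizes and omitted error checking are all illustrative, and the library is
assumed to have been initialised with gcry_check_version() already:

#include <string.h>
#include <gcrypt.h>

/* Illustrative layout: hdr || ctr || ciphertext || tag, with the HMAC
 * computed over everything that precedes it. */
size_t
seal (const unsigned char enc_key[32], const unsigned char mac_key[32],
      const unsigned char *hdr, size_t hdr_len,
      const unsigned char *msg, size_t msg_len,
      unsigned char *out)       /* hdr_len + 16 + msg_len + 32 bytes */
{
    unsigned char ctr[16];
    gcry_cipher_hd_t hd;
    gcry_md_hd_t md;
    size_t n = 0;

    gcry_create_nonce (ctr, sizeof ctr);            /* fresh counter block */

    memcpy (out + n, hdr, hdr_len);  n += hdr_len;  /* plaintext header */
    memcpy (out + n, ctr, sizeof ctr);  n += sizeof ctr;

    gcry_cipher_open (&hd, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_CTR, 0);
    gcry_cipher_setkey (hd, enc_key, 32);
    gcry_cipher_setctr (hd, ctr, sizeof ctr);
    gcry_cipher_encrypt (hd, out + n, msg_len, msg, msg_len);
    n += msg_len;
    gcry_cipher_close (hd);

    /* MAC over header, counter block and ciphertext -- never over the
     * plaintext alone, and never with the headers left out. */
    gcry_md_open (&md, GCRY_MD_SHA256, GCRY_MD_FLAG_HMAC);
    gcry_md_setkey (md, mac_key, 32);
    gcry_md_write (md, out, n);
    memcpy (out + n, gcry_md_read (md, GCRY_MD_SHA256), 32);  n += 32;
    gcry_md_close (md);

    return n;
}

The verifier recomputes the HMAC over hdr || ctr || ciphertext and compares
it (in constant time) against the received tag before it even looks at the
ciphertext.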

A lot of attacks could have been prevented with EtM, including the Vaudenay
padding attack, the chosen-ciphertext attacks against PGP and other email
encryption protocols described by Schneier, Katz and Jallad
http://www.schneier.com/paper-pgp.pdf
as well as the attacks on Host Security Module key blocks (well, in this
case the problem was simply that there were no integrity checks: two-key
Triple-DES keys were protected by a master Triple-DES key by encrypting the
left part and right part independently) and other such types as described by
Clulow and others
http://www.cl.cam.ac.uk/~jc407/Chap3.pdf

Ferguson gave an explanation of why, in his book with Schneier, they
recommend MtE:
http://groups.google.ca/group/sci.crypt/msg/1a0e0165c48e4fe4?q=g:thl19936885
73ddq=hl=enlr=ie=UTF-8oe=UTF-8
But the arguments he gives pertain to other problems; see for example the
comments given by Wagner, which I agree with:
http://groups.google.ca/group/sci.crypt/msg/532fdfb5edca19a8?q=g:thl24955674
08ddq=hl=enlr=ie=UTF-8oe=UTF-8

I had come up with a list of advice for crypto implementations some time ago
myself.  It included (from memory):

- Use good RNGs, even for things other than the generation of keys (such as
for generating IVs, challenges, etc.)
- Use standard algorithms, and use them in secure ways (choose a good mode
of encryption, adequate key sizes, and pick IVs the way you are supposed to,
usually either randomly or, for counters, making sure there are no repeats)
- Use standard protocols (don't try to re-invent TLS or IPSec)
- Encrypt then authenticate over ciphertext and all plaintext headers and
trailers.
- Use independent keys for different functionalities.  If needed, derive
independent keys based on a single secret using a good key derivation
function (see the sketch after this list).
- Limit the amount of time you handle secrets (zeroize after use...)
- Don't let yourself be used as a random oracle (I think Ross Anderson said
it this way first); this includes limiting information that is leaked about
errors, avoiding timing attacks and such (this is hard to do in practice).
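
The key-derivation point deserves its own sketch: per-purpose subkeys can be
derived from one master secret by HMAC-ing distinct labels (a simplified,
HKDF-like expand step with made-up labels, not full HKDF):

#include <string.h>
#include <gcrypt.h>

/* Derive a 32-byte per-purpose subkey from one master secret. */
static void
derive_subkey (const unsigned char *master, size_t master_len,
               const char *label, unsigned char subkey[32])
{
    gcry_md_hd_t md;

    gcry_md_open (&md, GCRY_MD_SHA256, GCRY_MD_FLAG_HMAC);
    gcry_md_setkey (md, master, master_len);
    gcry_md_write (md, label, strlen (label));
    memcpy (subkey, gcry_md_read (md, GCRY_MD_SHA256), 32);
    gcry_md_close (md);
}

/* e.g.  derive_subkey (master, 32, "encryption", enc_key);
 *       derive_subkey (master, 32, "authentication", mac_key);  */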

--Anton








Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Victor Duchovni
On Tue, Feb 14, 2006 at 12:44:39PM +1000, James A. Donald wrote:

 Absent exception handling, mission critical tasks should have no
 exceptions, which is best accomplished by the die-on-error standard.
 

Absent good library design, the developer's goals are best accomplished
with the roll-your-own standard.

If the authors of libgcrypt, instead of saying "sorry, we know, it is a
difficult problem, we are working on it", become defensive and erect false
dichotomies to defend the developer from his own folly, I can add libgcrypt
to my list of tools to avoid when building large systems.

As I said before, Postfix does not use GnuTLS directly, rather it is
sometimes a victim of libgcrypt design via GnuTLS embedded in the system
LDAP library.

The current libgcrypt is IMHO not suitable for linking into LDAP libraries,
database client-server communication libraries, SMTP servers...

As for Postfix, it does entropy gathering out-of-process (in the tlsmgr(8)
daemon). The SMTP server and client daemons get entropy indirectly from
tlsmgr(8) to seed their internal PRNG. Postfix uses OpenSSL, and error
conditions in OpenSSL are recoverable (Postfix can and will return 454 in
response to STARTTLS, fatal errors are not appropriate in this context).
Postfix makes use of error reporting hooks in MySQL, PgSQL, SASL, OpenSSL,
(non-GnuTLS) OpenLDAP... none of these have been reported to abruptly
terminate the calling process instead of reporting errors to the caller.




Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], James A. Donald writes:
 --

 Libgcrypt tries to minimize these coding errors; for example there
 are no error returns for the RNG - if one calls for 16 bytes of
 random one can be sure that the buffer is filled with 16 bytes of
 random.  Now, if the environment is not okay and Libgcrypt can't
 produce that random - what else shall we do than abort the process?
 This way the errors will be detected before major harm might occur.

I'm afraid I consider it instead a weakness in your API design that you
  have no way to indicate an error return from a function that may
  fail.

The correct mechanism is exception handling.

If the caller has provided a mechanism to handle the failure, that
mechanism should catch the library generated exception.  If the caller
has provided no such mechanism, his program should terminate
ungracefully.

Unfortunately, there is no very portable support for exception
handling in C.   There is however support in C++, Corn, D, Delphi,
Objective-C, Java, Eiffel, Ocaml, Python, Common Lisp, SML, PHP and
all .NET CLS-compliant languages.

Absent exception handling, mission critical tasks should have no
exceptions, which is best accomplished by the die-on-error standard.


Precisely.  I was preparing a post of my own, saying the same thing; 
you beat me to it.

We all agree that critical errors like this should be caught; the only 
question is at what layer the action should take place.  I'm an 
adherent to the Unix philosophy -- when a decision is made at a lower 
level, it takes away the ability of the higher level to do something 
different if appropriate, and this loss of flexibility is a bad thing.

As noted, the best answer is a modern language that supports 
exceptions.  (Sorry, SIGABRT and setjmp/longjmp just don't cut it.)  
Let me suggest a C-compatible possibility: pass an extra parameter to 
the library routines, specifying a procedure to call if serious errors 
occur.  If that pointer is null, the library can abort.
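
A sketch of that convention, with the function and type names invented for
illustration: the caller passes a handler along with the call, and a null
pointer keeps the old fail-stop behaviour.

#include <stdio.h>
#include <stdlib.h>

typedef void (*error_fn) (int code, const char *msg, void *ctx);

/* Helper a library could route all serious errors through: report via the
 * caller-supplied procedure if one was given, otherwise fall back to the
 * traditional abort(). */
static int
report_fatal (error_fn on_error, void *ctx, int code, const char *msg)
{
    if (on_error != NULL) {
        on_error (code, msg, ctx);
        return -1;               /* the caller decides what happens next */
    }
    fprintf (stderr, "%s\n", msg);
    abort ();
}

/* A hypothetical entry point carrying the extra parameter: */
int lib_get_random (void *buf, size_t len, error_fn on_error, void *ctx);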

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb


