[openssl-dev] what's possible and what's not ... including RNGs

2017-06-29 Thread John Denker via openssl-dev
On 06/29/2017 08:01 AM, I wrote:

> Some platforms are not secure and cannot be made secure.

This is relevant to the RNG discussion, and lots of other
stuff besides.

*) For example, you can use openssl rsa to generate a so-called
 private key, but it will not actually be private (in any
 practical sense) if the operating system's RNG has been
 compromised.  There is little that openssl can do to detect
 this problem.  RNGs are infamously hard to test.

*) As another example, a VM guest is at the mercy of the host.

*) Don't enable remote administration using "admin" as the
 root password.

*) Etc. etc. etc.

All this "should" be obvious.  However, there are lots of people
who use openssl but don't know much about security in general
or crypto in particular.  Therefore the limitations need to be
prominently mentioned in the openssl documentation.  That's not
sufficient by itself, but it counts as a step in the right
direction.

Series metaphor:  A chain is only as strong as its weakest link.
Parallel metaphor:  A fence is only as strong as its weakest picket.

One of the defining properties of modern civilization is division
of labor.  You simply cannot have everybody be responsible for
everything.  We make jokes when the division is done improperly:
   https://www.av8n.com/physics/not-my-job.htm

but even so, a great many divisions are necessary.

 -- If you hire a locksmith to install a deadbolt on your front
  door, that does not make him responsible for the fact that the
  back door is made of balsa wood, and the side door is standing
  wide open.

 ++ If you hire a security consultant to look at the big picture,
  that's different.

 +- The locksmith MAY remark on security problems that he notices,
  but he is not obliged to fix them, or even to search for them.

In this parable, openssl is the locksmith.  The project does not
have the resources to do much beyond basic locksmithing.



There are deep issues here that I don't know how to solve.  For
example, what should be done when the situation is discovered
to be insecure?
 a) Ignore the discovery and soldier on?
 b) Print a warning?
 c) Block?

Blocking generally infuriates the users.  They replace the
offending software with something that doesn't block, even if
it's less secure.

Warnings aren't much better.  It does little good to print a
warning about a problem that the user does not understand and
could not solve.

Ignorance is not bliss.
Into the valley of death rode the six hundred.

Asking the user a question that has no good answers, e.g. to
choose between /dev/random and /dev/urandom, does not solve
the problem at all.

=

On 06/29/2017 10:07 AM, Theo de Raadt wrote:

>> As has been said many times before, what we need (but do not have)
>> is /one/ source of randomness that never blocks and never returns
>> bits that are guessable by the adversary.

> I've been preaching this for more than a decade, and that is exactly
> what I built in OpenBSD.

> It isn't very hard to do this properly in the kernel.

Sometimes it's not very hard ... but sometimes it's provably impossible,
depending on what sort of support is available from the hardware.  I
stand by the assertion that some platforms are not secure and cannot
be made secure.  RNGs are one manifestation of this, among others.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-29 Thread John Denker via openssl-dev
Executive summary:

As has been said many times before, what we need (but do not have)
is /one/ source of randomness that never blocks and never returns
bits that are guessable by the adversary.

In favorable cases, using getrandom(,,0) [*] is appropriate
for openssl.  There are problems with that, but switching to
getrandom(,,GRND_RANDOM) [**] would not solve the problems.

[*]  Reading /dev/urandom is almost the same.
[**] Reading /dev/random is essentially the same.
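
To make the favorable case concrete, here is a minimal sketch of
seeding from getrandom(,,0).  It assumes Linux >= 3.17 with glibc >=
2.25, which expose the call via <sys/random.h>:

/* Hedged sketch: seed material from getrandom() with flags = 0. */
#include <sys/random.h>
#include <stdio.h>

int main(void)
{
    unsigned char seed[32];

    /* flags = 0 reads the urandom pool; it blocks only until the
     * pool has been initialized once after boot. */
    if (getrandom(seed, sizeof seed, 0) != (ssize_t)sizeof seed) {
        perror("getrandom");
        return 1;
    }
    /* ... feed seed[] into the application's PRNG ... */
    return 0;
}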

In cases where getrandom() is not good enough, the problems tend
to be highly platform-dependent.  Many of these problems would be
quite difficult for openssl to detect (much less solve).  Some
platforms are not secure and cannot be made secure.

On 06/27/2017 01:40 PM, Theodore Ts'o wrote:

>> My recommendation for Linux is to use getrandom(2) with the flags field
>> set to zero.
>> [...] /dev/urandom (which has the same performance characteristics as the
>> getrandom system call)


Similarly, on 06/29/2017 04:03 AM, Dimitry Andric gave what might
be considered the usually-correct answer for the wrong reasons:

> In short, almost everybody should use /dev/urandom

OK.  There's also getrandom().

> and /dev/random is kept alive for old programs.

[...]

> The Linux random(4) manpage says:
> 
>The /dev/random device is a legacy interface which dates back
>to a time where the cryptographic primitives used in the
>implementation of /dev/urandom were not widely trusted.  It will
>return random bytes only within the estimated number of bits of
>fresh noise in the entropy pool, blocking if necessary.
>/dev/random is suitable for applications that need high quality
>randomness, and can afford indeterminate delays.

That's what the manpage says ... but does anybody believe it?

On 06/27/2017 06:22 PM, Ted told us not to trust what it says in the man
pages.

Oddly enough, all the advice given above (including the list traffic
and the man pages) is flatly contradicted by what it says in the most
up-to-date kernel source, namely:

>>>  /dev/random is suitable for use when very high
>>>  quality randomness is desired (for example, for key generation

Reference:
  https://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git/tree/drivers/char/random.c?id=e2682130931f#n111

All in all, it's hardly surprising that users are confused.

==

When it was introduced, the random / urandom split was advertised as
a way of solving certain problems with the old approach.  To block or
not to block, that is the question.  The problems didn't actually
get solved, just shifted.  The split requires users (rather than the
RNG designers) to deal with the problems.  The fact that the recently-
introduced getrandom(2) call has flags such as GRND_RANDOM and
GRND_NONBLOCK means that users are still on the hook for problems
they almost certainly cannot understand, much less solve.

The conclusion remains the same:  What we need (but do not have) is
/one/ source of randomness that never blocks and never returns bits
that are guessable by the adversary.

==

In fact there are profound distinctions between an ideal HRNG and
an ideal PRNG.  AFAICT neither one exists in the real world, in the
same sense that ideal spheres and planes do not exist, but still
the idealizations are meaningful and helpful.

It seems likely that /dev/random was intended, at the time of the
split, to serve as an approximate HRNG, while /dev/urandom was
intended to be a PRNG of some kind.  Using terms like "legacy
interface" is an astonishing mischaracterization of the distinction.



Similarly, it is strange to talk about

> a time where the cryptographic primitives used in the
> implementation of /dev/urandom were not widely trusted

In fact, 
 ++ Improper seeding is, and has always been, the #1 threat to
  both /dev/random and /dev/urandom.
 ++ Compromise of the internal state is a threat to /dev/urandom.
  It is better to prevent this than to try to cure it.
  If the PRNG is compromised, probably a lot of other things are too.
 ++ Lousy architectural design is always a threat.
 ++ Coding errors are always a threat.
 ++ etc.
 ++ etc.
 -- Cryptanalytic attack against the outputs is way, Way, WAY
  down on the list, and always has been, assuming the crypto
  primitives are halfway decent, assuming the architecture and
  implementation are sound.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread John Denker via openssl-dev
On 06/27/2017 06:41 PM, Peter Waltenberg wrote:

> Consider that most of the world's compute is now done on VMs where
> images are cloned, duplicated and restarted as a matter of course.
> Not vastly different from an embedded system where the clock powers
> up as 00:00 1-Jan-1970 on each image.  If you can trust the OS to
> come up with unique state each time you can rely solely on the OS
> RNG -- well, provided you reseed often enough anyway, i.e. before
> key generation.  That's also why seeding a chain of PRNGs once at
> startup is probably not sufficient here.

That is approximately the last thing openssl should be
fussing over.  There is a set of problems there, with a
set of solutions, none of which openssl has any say over.

===>  The VM setup should provide a virtual /dev/hwrng  <===

Trying to secure a virtual machine without a virtual hwrng
(or the equivalent) is next to impossible.  There may be
workarounds, but they tend to be exceedingly locale-specific,
and teaching openssl to try to discover them would be a
tremendous waste of resources.
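
For example (one concrete way, not the only one): with QEMU/KVM the
host can attach a virtio-rng device, which the guest kernel exposes
as /dev/hwrng:

  :; qemu-system-x86_64 ... \
       -object rng-random,filename=/dev/urandom,id=rng0 \
       -device virtio-rng-pci,rng=rng0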

So stop trying to operate without /dev/hwrng already.

It reminds me of the old Smith & Dale shtick:
  -- Doctor, doctor, it hurts when I do *this*.
  -- So don't do that.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread John Denker via openssl-dev
On 06/27/2017 02:28 AM, Matt Caswell wrote:
>>
>> I also agree that, by default, using the OS provided source makes a lot
>> of sense.

Reality is more complicated than that;  see below.

On 06/27/2017 11:50 AM, Benjamin Kaduk via openssl-dev wrote:

> Do you mean having openssl just pass through to
> getrandom()/read()-from-'/dev/random'/etc. or just using those to seed
> our own thing?
> 
> The former seems simpler and preferable to me (perhaps modulo linux's
> broken idea about "running out of entropy")

That's a pretty big modulus.  As I wrote over on the crypto list:

The xenial 16.04 LTS manpage for getrandom(2) says quite explicitly:

>> Unnecessarily reading large quantities  of data will have a
>> negative impact on other users of the /dev/random and /dev/urandom
>> devices.

And that's an understatement.  Whether unnecessary or not, reading
not-particularly-large quantities of data is tantamount to a
denial of service attack against /dev/random and against its
upstream sources of randomness.

No later LTS is available.  Reference:
  http://manpages.ubuntu.com/manpages/xenial/man2/getrandom.2.html

Recently there has been some progress on this, as reflected in
the zesty 17.04 manpage:
  http://manpages.ubuntu.com/manpages/zesty/man2/getrandom.2.html

However, in the meantime openssl needs to run on the platforms that
are out there, which includes a very wide range of platforms.

It could be argued that the best *long-term* strategy is to file
a flurry of bug reports against the various kernel RNGs, and then
at some *later* date rely on whatever the kernel provides ... but
still, in the meantime openssl needs to run on the platforms that
are out there.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
In the context of

>> What's your threat model?

>> Are you designing to resist an output-text-only attack?  Or do you also want
>> "some" ability to recover from a compromise of the PRNG internal state?

On 06/26/2017 11:51 AM, Salz, Rich wrote:

> Our TCB border is the process.

That doesn't answer the question.

> then something like chacha.

That doesn't answer the question either.



I'm not mentioning any names, but some people are *unduly*
worried about recovery following compromise of the PRNG
internal state, so they constantly re-seed the PRNG, to
the point where it becomes a denial-of-service attack
against the upstream source of randomness.

This is also mostly pointless, because any attack that
compromises the PRNG state will likely compromise so many
other things that recovery will be very difficult.  All
future outputs will be suspect.

So please let's not go overboard in that direction.

On the other hand, it seems reasonable to insist on /forward/
secrecy.  That is, we should insist that /previous/ outputs
should not be compromised.  This is achievable at small but
not-quite-zero cost.

Specifically, ChaCha (or any other cipher) in counter mode
does *not* provide forward secrecy when the PRNG state is
compromised.
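
By way of contrast, here is a minimal sketch (assuming OpenSSL's
one-shot SHA256(); names invented for illustration) of an output
step that does provide forward secrecy.  The state is overwritten
by a one-way function of itself on every step, so a captured state
cannot be run backwards, whereas a counter-mode state (key, counter)
can be rewound trivially:

#include <openssl/sha.h>
#include <string.h>

static unsigned char state[SHA256_DIGEST_LENGTH];   /* seeded elsewhere */

void prng_output(unsigned char out[SHA256_DIGEST_LENGTH])
{
    unsigned char t[SHA256_DIGEST_LENGTH + 1];

    t[0] = 0x00;                        /* domain tag: output */
    memcpy(t + 1, state, sizeof state);
    SHA256(t, sizeof t, out);

    t[0] = 0x01;                        /* domain tag: ratchet */
    memcpy(t + 1, state, sizeof state);
    SHA256(t, sizeof t, state);         /* old state is now unrecoverable */
}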
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
On 06/26/2017 12:41 PM, Salz, Rich wrote:

> We run in many environments, and I don't think it's reasonable to say
> that the RNG on someone's personal web server, perhaps on the
> Internet, is at the same level of criticality, say, as the same RNG
> running on something like a global CDN.  I am not trying to back out
> of our responsibilities here, but rather saying that I think a
> justifiable case can be made for accepting vague words like mediocre
> at times.

That argument cuts the other way, much more acutely.

When writing a low- to mid-level library such as openssl,
the problem is you *don't know* how it will be used.  If
you design a RNG that is good enough for a game of Go Fish,
it is entirely possible that some user will turn around and
use the same RNG to sign a multi-million dollar contract,
or encrypt some life-and-death critical messages.

The days when we could get away with mediocre security are
gone, and have been for quite a while now.

The idea of "provably correct" code has been around for decades
now.  I don't always succeed, but I try to write provably
correct code, even for things that are vastly less critical
than a cryptographic RNG.

In particular, the idea of combining several lousy upstream
sources and hoping for the best is 100% virgin serpentoleum.
It violates every engineering principle known to man, except
for Murphy's law.

The fact that RNGs are hard to test makes it easy to fool your
friends.  Your enemies will not be so easily fooled.  This
just makes it extra-super-important to insist on sound
engineering practices, top to bottom.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
On 06/26/2017 12:41 PM, Salz, Rich wrote:

> Suppose the chip supports RDRAND but the runtime doesn't have
> getrandom or /dev/random?

That's an easy one!

Check the feature-test bit and then call RDRAND yourself.
Code to do this exists, e.g.
  https://en.wikipedia.org/wiki/RdRand#Sample_x86_asm_code_to_check_upon_RDRAND_instruction

A version of that for 64-bit architecture exists somewhere, too.
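
For instance, a hedged sketch for GCC or Clang on x86-64 (compile
with -mrdrnd; the retry loop follows Intel's guidance that RDRAND
can transiently fail):

#include <cpuid.h>
#include <immintrin.h>
#include <stdio.h>

static int have_rdrand(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ecx & bit_RDRND) != 0;      /* CPUID.1:ECX bit 30 */
}

int main(void)
{
    unsigned long long r;

    if (!have_rdrand())
        return 1;
    for (int i = 0; i < 10; i++) {      /* retry transient failures */
        if (_rdrand64_step(&r)) {
            printf("%016llx\n", r);
            return 0;
        }
    }
    return 1;
}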
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
On 06/26/2017 11:51 AM, Salz, Rich wrote:

>> Combining many lousy sources and hoping that one of them will do
>> the job is not good engineering practice.

> But if there are a couple and they're both mediocre?

There are multiple dimensions to be considered, including reliability
and rate.

As for reliability, I don't know what "mediocre" means.  Usually
security-critical code is correct or it's not.  For a seed-source,
either a lower bound on the amount of good "hard" randomness is
available and reliable, or it's not.

As for rate, seeding a PRNG places rather mild demands on the source.
I'm having trouble imagining a reasonable source that has a guaranteed
lower bound on the rate that is nonzero yet too small for the purpose.

By "reasonable" I mean to exclude things that were designed to be
facetious or perverse counterexamples.

Can somebody point to a specific example of a "mediocre" source?

> the ambient OS isn't one, but is one of many possibilities.

That's moving the outer loop to the inside, for no good reason.
I suggest asking the hard questions on a per-OS basis:

 --  If you trust this particular OS to provide a seed, why not
  trust it for everything, and not bother to implement an
  openssl-specific RNG at all?

 -- Conversely, if you don't trust this particular OS, what makes
  you think you can solve a problem the OS failed to solve,
  especially without knowing why it failed?

You can then write an outer loop over all OS colors and flavors.

If the questions are unanswerable for each individual OS, it seems
both impossible and pointless to try to answer them for all OSs at
once.

> To summarize, perhaps, let's just say that it is really really
> outdated.  The state of the art has advanced, and we have some
> catching-up to do.

The standard advice that you see on e.g. the crypto list is to
use whatever the OS provides.  It's unlikely you can do better
... certainly not without making a tremendous multi-year
R&D project out of it.

In particular, if the ambient environment is not secure, it is
very unlikely that anything openssl can do will make it secure.

If what the OS provides isn't good enough, you should file bug
reports against it.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
On 06/26/2017 09:17 AM, Salz, Rich wrote:

[snip]

> Does this make sense?  Are there holes?

Even without the snipping, the proposal is very incomplete.
Since any hole that is not explicitly closed must be presumed
open, yes, there are many holes.

What's your threat model?  I know that sounds like a cliché,
but it's actually important.

In particular, in my world the #1 threat against any PRNG
is improper seeding. 
 --  If you trust the ambient OS to provide a seed, why not
  trust it for everything, and not bother to implement an
  openssl-specific RNG at all?
 -- Conversely, if you don't trust the OS, what makes you
  think you can solve a problem the OS failed to solve,
  especially without knowing why it failed?

And (!) what do you propose to do when a suitable seed is not
available at the moment but might be available later?

Are you designing to resist an output-text-only attack?  Or do
you also want "some" ability to recover from a compromise of
the PRNG internal state?

Is there state in a persistent file, or only in memory?


> Randomness should be whitened.  Anything fed into a randomness
> pool should be mixed in and run through SHA256:
> pool = SHA256(pool || new-randomness)

Just having a pool and a hash function is not enough.  Not
even close.
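
For reference, the quoted construction amounts to something like the
following (a literal sketch using OpenSSL's one-shot SHA256(); the
point stands that this alone is nowhere near a complete RNG design):

#include <openssl/sha.h>
#include <string.h>

static unsigned char pool[SHA256_DIGEST_LENGTH];

void pool_mix(const unsigned char *input, size_t len)
{
    unsigned char buf[SHA256_DIGEST_LENGTH + 256];
    size_t n = len < 256 ? len : 256;   /* cap one contribution */

    /* pool = SHA256(pool || new-randomness) */
    memcpy(buf, pool, sizeof pool);
    memcpy(buf + sizeof pool, input, n);
    SHA256(buf, sizeof pool + n, pool);
}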

Constructive suggestion:  If you want to see what a RNG looks
like when designed by cryptographers, take a look at:
  Elaine Barker and John Kelsey,
  “Recommendation for Random Number Generation Using Deterministic
  Random Bit Generators”
  http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf

That design may look complicated, but if you think you can
leave out some of the blocks in their diagram, proceed with
caution.  Every one of those blocks is there for a reason.

> Randomness should be whitened.

Whitening at the input is neither difficult nor necessary nor sufficient.
The hard part is obtaining a reliable lower bound on the amount of
useful randomness in the bit-blob when it appears at the input.  Where
did the bits come from?  Where did the bound come from?  Do you trust
the generic openssl user, who knows nothing about cryptology, to provide
either one?

> The idea of cascading pools is neat.

Cascading is absolutely necessary, and must be done "just so", to
prevent track-and-hold attacks.  One of the weaknesses in the
Enigma, exploited to the hilt by Bletchley Park, was that each
change in the internal state was too small.  A large state space
is not sufficient if the state /changes/ are small.

On 06/26/2017 10:12 AM, Kurt Roeckx wrote:

>> Do you think we need to use multiple sources of randomness?

Quality is more important than quantity.

N reliable sources is marginally better than N-1 reliable sources.
One reliable source is immeasurably better than any number of
unreliable sources.

Combining many lousy sources and hoping that one of them will do
the job is not good engineering practice.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
In the context of:

>> If you mean for something to be hard for the attacker to guess,
>> the word "adamance" can be used.

On 06/26/2017 08:32 AM, Salz, Rich wrote:

> All my attempts to look up a definition of this word came up with a
> noun form of adamant.

The word "adamance", meaning hardness (as in hard to guess),
was coined for this purpose.

The allusion to "adamance", meaning hardness (as in rheologically
hard), is not a coincidence.

Can anybody suggest a better term?

For more on this, and a host of RNG-related issues see:
  https://www.av8n.com/turbid/paper/rng-intro.htm

> Is it worth reposting my thoughts with your suggested wording changes?

OK.  Off-list or on.  This stuff is important.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread John Denker via openssl-dev
On 06/26/2017 05:49 AM, Salz, Rich via openssl-dev wrote:

> We welcome your input.

Here is an observation plus some suggestions:

Using the word "entropy" in this context is unhelpful.

Normally, entropy means something very specific, in which
case using entropy to design and explain your RNG is a bad
idea.  I can exhibit a distribution that has provably infinite
entropy, even though you can guess the exact output more than
25% of the time.
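
For instance (a standard construction):  let P(X=0) = 1/2, and for
k >= 2 let P(X=k) = c/(k log² k), with c chosen so the probabilities
sum to 1.  The entropy Σ p log(1/p) then contains terms on the order
of c/(k log k), a divergent series, so the entropy is infinite; yet
the guess "X=0" is correct fully half the time.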

If perhaps you mean something else, calling it "entropy" is an
even worse idea.  It is likely that readers will misunderstand
what is written.

I am quite aware that the word appears in kernel source, but that
doesn't make it right.  It is used inconsistently, and AFAICT none
of the possible interpretations is really correct.

Note:  The real issue here is not the terminology.  Ideas are primary
and fundamental;  terminology is tertiary.  Terminology is only
important insofar as it helps us formulate and communicate the ideas.

There are at least five different ideas that need to be understood:
  1) The randomness of an ideal PRNG.
  2) The randomness of an ideal TRNG aka HRNG.
  3) The opposite, i.e. pure determinism.
  4) Squish, which is neither reliably predictable nor reliably unpredictable.
  5) Combinations of the above.


Suggestion:  Get rid of every mention of "entropy" from openssl
code, documentation, design discussions, and everywhere else.

Suggestion:  In the common case where exact meaning is not important,
"entropy" can be replaced by a noncommittal nontechnical word such
as "randomness".  Even so, it should be clearly documented that this
term is not meant to be quantitative.

Suggestion:  If you mean for something to be hard for the attacker
to guess, the word "adamance" can be used.  This can be quantified
in terms of the Rényi H_∞ functional, plus some additional attention
to detail (including specifying that it is a functional of the
attacker's macrostate, not anybody else's).

Suggestion:  In the remaining cases, which are not rare, it is
important to take a step back and figure out what is the actual
idea that is being (or should be) discussed.  This will not be
easy, but it must be done, line by line.  Otherwise the whole
enterprise is likely to be a waste of time.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] discussion venue policy

2017-06-26 Thread John Denker via openssl-dev
On 06/26/2017 05:49 AM, Salz, Rich wrote:

> Please take a look at GitHub pull request

Is the RNG topic going to be discussed on github, or on openssl-dev?
What about other topics?

Having some topics in one place and some in another seems like a Bad
Idea™.  Having a single topic split across multiple venues seems even
worse.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Revert commit 10621ef white space nightmare

2017-01-09 Thread John Denker
On 01/09/2017 10:46 AM, Leonard den Ottolander wrote:

> I don't remember ever seeing directives being indented by adding
> white space between the hash sign and the directive.

In my world, that is quite common.

> If one wants to indent directives space is normally inserted before
> the hash sign.

No, that is not normal.  It is not even permitted by traditional
versions of the preprocessor.  I quote from
  https://gcc.gnu.org/onlinedocs/gcc-3.1/cpp/Traditional-Mode.html

>> Preprocessing directives are recognized in traditional C only when
>> their leading # appears in the first column. There can be no
>> whitespace between the beginning of the line and the #.
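
For illustration, both styles side by side:

/* Indenting after the hash: accepted everywhere, and keeps the '#'
 * in column 1 as traditional (pre-ANSI) cpp requires. */
#if defined(FEATURE_A)
#  if defined(FEATURE_B)
#    define MODE_AB 1
#  endif
#endif

/* Indenting before the hash: valid ISO C, but rejected by
 * traditional-mode preprocessors, per the gcc documentation above. */
#if defined(FEATURE_A)
  #if defined(FEATURE_B)
    #define MODE_AB2 1
  #endif
#endif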

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] github raw .patch ... also: still seeing heartbeat prolem in openssl 1.0.2 STABLE Daily releases

2016-11-02 Thread John Denker
On 11/02/2016 09:50 AM, The Doctor wrote:

> I usually use lynx and cannot see where to pull this.

AFAICT here is the relevant patch, in a raw form suitable for
direct download:

  https://github.com/openssl/openssl/commit/554ae58d09a9b09fa430553c3e6ba5bb5433150c.patch

In general, the procedure goes like this:
 Given the URL of the PR, go there.
   For example, https://github.com/openssl/openssl/pull/1826
 Click on the "Commits" tag
 In the list of commits, click on the SHA1 of the commit.
 (That is *not* the same as clicking on the name of the commit.)
   Example result: 
https://github.com/openssl/openssl/commits/554ae58d09a9b09fa430553c3e6ba5bb5433150c
 Munge the URL by changing /commits/ to /commit/ singular,
 and by adding .patch to the end.

Truly an astonishing user interface.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] status of libefence (electric fence)

2016-10-21 Thread John Denker
On 10/21/2016 04:14 PM, Salz, Rich asked:

> Is electric fence even available any more?

It's bundled with current Debian and Ubuntu.

From the README:
  "This version should run on all systems that support POSIX mmap() and
  mprotect(). This includes Linux, Unix, and I think even BeOS."

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] DRBG entropy

2016-08-01 Thread John Denker
On 08/01/2016 02:17 AM, Leon Brits wrote:

> Am I correct to state that for a tested entropy source of 2b/B and
> the same assumptions as in the paragraph, I need to return 8 blocks
> of 16B each in my get_entropy() callback?

No, that is not correct, for the reasons previously explained.

> Again assume it is uniform (e.g. we don't get 8 bits of entropy in byte 1 and 
> nothing in the next 7).

That assumption is invalid, if we believe the LRS test.
Quoting from LRS.py:

>> # Length of the Longest Repeated Substring Test - Section 5.2.5
>> # This test checks the IID assumption using the length of the longest
>> # repeated substring. If this length is significantly longer than the
>> # expected value, then the test invalidates the IID assumption.

Accumulating 8 or more blocks might make sense if the data were IID,
but it isn't.  Either that or the LRS test itself is broken, which
is a possibility that cannot be ruled out.  By way of analogy, note
that the p(max) reported by the Markov test is clearly impossible
and inconsistent with the reported min-entropy.

Suggestion:  Modify LRS.py to print (in hex) the longest repeated
substring.  Then verify by hand that the string really does recur
in the data.
 -- If it doesn't, then the test is broken.
 -- If it does, then either the chip is broken or you're using it wrong.

Remind your boss that the whole point of the certification process is to
make sure that broken hardware doesn't get certified.

Also:
 *) Please stop using "entropy" as a synonym for randomness.  Some things
  have very little entropy but are still random enough for a wide range
  of purposes.  Meanwhile other things have large entropy but are not
  random enough.
 *) Please stop using "entropy" as a synonym for "min-entropy".  The
  latter is a two-word idiomatic expression.  A titmouse is not a mouse.
  Buckwheat is not a form of wheat.  The Holy Roman Empire was neither
  holy, nor Roman, nor an empire.

Just because openssl is sloppy about this doesn't make it OK.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] DRBG entropy

2016-07-29 Thread John Denker
In the context of:

>> I have a chip (FDK RPG100) that generates randomness, but the
>> SP800-90B python test suite indicated that the chip only provides
>> 2.35 bits/byte of entropy

On 07/28/2016 09:08 AM, I wrote:

> That means the chip design is broken in ways that the manufacturer
> does not understand.  The mfgr data indicates it "should" be much
> better than that:
>   http://www.fdk.com/cyber-e/pdf/HM-RAE103.pdf

To be scientific, we must consider all the relevant hypotheses.

1) For starters, let's consider the possibility that the python
 test suite is broken.  Apply the test suite to a sufficiently
 random stream. 
  -- An encrypted counter should be good enough (see the sketch below).
  -- /dev/urandom is not a paragon of virtue, but it should be
good enough for this limited purpose.

1a) If the test suite reports a low randomness for the truly random
 stream, then the test is broken.  Find a better test suite and
 start over from Square One.

1b) If the test suite reports a high randomness for the random stream
 but a low randomness for the chip, the chip is broken and cannot be
 trusted for any serious purpose.
  -- You could derate it by another factor of 10 (down to 0.2
   bits per byte) and I still wouldn't trust it.  A stopped
   clock tells the correct time twice a day, but even so, you
   should not use it for seeding your PRNG.
  -- It must be emphasized yet again that for security you
   need a lower bound on the randomness of the source.
   Testing cannot provide this.  A good test provides an upper
   bound.  A bad test tells you nothing.  In any case, testing
   does not provide what you need.  Insofar as the chip passes
   some tests but not others, that should be sufficient to prove
   and illustrate the point.

 Seriously, if the FIPS lab accepts the broken chip for any
 purpose, with or without software postprocesing, then you
 have *two* problems:  A chip that cannot be trusted and a
 lab that cannot be trusted.
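
For concreteness, here is one way to generate the known-good stream
mentioned in item 1: an encrypted counter, i.e. AES-128-CTR over zero
bytes, via OpenSSL's EVP API.  Sketch only; the fixed key is
deliberate, since this stream is for exercising the test suite, not
for security:

#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
    static const unsigned char key[16] = "0123456789abcdef";
    static const unsigned char iv[16];          /* zero counter */
    unsigned char in[4096] = {0}, out[4096];
    int outlen;
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    if (ctx == NULL
        || !EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv))
        return 1;
    for (int i = 0; i < 256; i++) {             /* 1 MiB of stream */
        if (!EVP_EncryptUpdate(ctx, out, &outlen, in, sizeof in))
            return 1;
        fwrite(out, 1, outlen, stdout);         /* pipe into the suite */
    }
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}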


2a) We must consider the possibility that the bare chip 
 hardware is OK, but there is a board-level fault, e.g.
 wrong voltage, wrong readout timing, or whatever.

2b) Similarly there is the possibility that the bare chip
 hardware is OK but the data is being mishandled by the
 system-level driver software.  This should be relatively
 easy to fix.

===

It must be emphasized yet again that the entropy (p log 1/p) is
probably not the thing you care about anyway.  If the entropy
density is high (nearly 8 bits per byte) *and* you understand
why it is not higher, you may be able to calculate something
you can trust ... but let's not get ahead of ourselves.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] DRBG entropy

2016-07-28 Thread John Denker
Let's play a guessing game.  I provide a hardware-based random number
generator of my choosing.  It produces a stream of bytes.  It has
an entropy density greater than 2.35 bits per byte.  This claim is
consistent with all the usual tests, but it is also more than that;
it is not just "apparent" entropy or an upper bound based on testing,
but real honest-to-goodness Boltzmann entropy.  The bytes are IID
(independent and identically distributed).  The design and 
implementation are open to inspection.

On each move in this game, I try to predict the exact value of the
next byte.  Every time I succeed, you pay me a dollar; every time
I fail, I pay you a dollar.  We play at least 100 moves, to minimize
stray fluctuations.

The point is, if you think entropy is a good measure of resistance
to guessing, then you should be eager to play this game, expecting
a huge profit.

Would anybody like to play?
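
To make the game concrete, here is one distribution I might choose
(numbers invented for illustration; many variants work):  each byte
is 0x00 with probability 0.6, and otherwise uniform over the other
255 values.  The entropy density is
  -0.6 log₂ 0.6 - 255 · (0.4/255) log₂(0.4/255) ≈ 0.44 + 3.73
  ≈ 4.2 bits/byte,
comfortably above 2.35 -- yet I guess 0x00 every time, win 60% of
the moves, and pocket about $20 per 100 moves.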


On 07/28/2016 12:40 AM, Leon Brits wrote:
> Thanks for helping me understand the whole entropy thing better.
> I still get the feeling that this is a "best effort" thing and
> that nobody can actually prove what is correct.  I am probably just
> bringing the math down to my level - sorry.
> 
> With that said for validation I still need to be sure that I give the
> required entropy back from the OpenSSL callback. Now since I am not
> allowed to use a hash with the DRBGs (FIPS lab and SP800-90B section
> 8.4), can you please confirm that, with a source of raw 2b/B entropy
> data, I need to return 4 times the data from the callback function?

That depends on what the objective is.  The objective is not
obvious, as discussed below.

> According to the FIPS test lab, the lowest value from all the tests
> is used as the entropy, and 2 is too low.

  1a) I assume the idea that "2 is too low" comes from the FIPS lab.

  1b) I assume the designer's boss doesn't directly care about this,
   so long as the FIPS lab is happy.

  1c) This requirement has little if any connection to actual security.

> I must however make use of this chip.

  2a) I assume the FIPS lab doesn't care exactly which chip is used.

  2b) I assume this requirement comes from the boss.

  2c) This requirement has little if any connection to actual security.

> I am not allowed to use a hash with the DRBGs (FIPS lab and
> SP800-90B section 8.4),

Where's That From?  Section 8.4 says nothing about hashes.  It's about
health testing.  The hash doesn't interfere with health testing, unless
the implementation is badly screwed up.

Furthermore, in sections 8.2 and 8.3, and elsewhere, there is explicit
consideration of "conditioning", which is what we're talking about.

  3a) Does this requirement really come from the FIPS lab?  It 
   certainly doesn't come from SP800-90B as claimed.

  3c) This requirement has nothing to do with actual security.

> I still get the feeling that this is a "best effort" thing and
> that nobody can actually prove what is correct.

Where's That From?

Proofs are available, based on fundamental physics and math, delineating
what's possible and what's not.

> can you please confirm that, with a source of raw 2b/B entropy data, 
> I need to return 4 times the data from the callback function?

Two answers:
 -- My friend Dilbert says you should do that, in order to make the
  pointy-haired boss happy.
 -- You should not, however, imagine that it provides actual security.

> I have a chip (FDK RPG100) that generates randomness, but the
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy

That means the chip design is broken in ways that the manufacturer
does not understand.  The mfgr data indicates it "should" be much
better than that:
  http://www.fdk.com/cyber-e/pdf/HM-RAE103.pdf

The mfgr has not analyzed the thing properly, and nobody else will
be able to analyze it at all.  The block diagram in the datasheet
is a joke:
  http://www.fdk.com/cyber-e/pdf/HM-RAE106.pdf#Page=9

> I must however make use of this chip.

My friend suggests you XOR the chip output with a decent, well-
understood HRNG.  That way you can tell the pointy-haired boss
that you "make use of this chip".





Bottom line: consider the contrast:
-- I'm seeing a bunch of feelings and made-up requirements.
-- I have not yet seen any sign of concern for actual security.

Under such conditions it is not possible to give meaningful advice
on how to proceed.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] DRBG entropy

2016-07-27 Thread John Denker
On 07/27/2016 05:13 AM, Leon Brits wrote:
> 
> I have a chip (FDK RPG100) that generates randomness, but the
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy. According to the FIPS test lab, the lowest
> value from all the tests is used as the entropy, and 2 is too low. I
> must however make use of this chip.

That's a problem on several levels.

For starters, keep in mind the following maxim:
 Testing can certainly show the absence of entropy.
 Testing can never show the presence of entropy.

That is to say, you have ascertained that 2.35 bits/byte is an
/upper bound/ on the entropy density coming from the chip.  If
you care about security, you need a lower bound.  Despite what
FIPS might lead you to believe, you cannot obtain this from testing.
The only way to obtain it is by understanding how the chip works.
This might require a tremendous amount of effort and expertise.



Secondly, entropy is probably not even the correct concept.  For any
given probability distribution P, i.e. for any given ensemble, there
are many measurable properties (i.e. functionals) you might look at.
Entropy is just one of them.  It measures a certain /average/ property.
For cryptologic security, depending on your threat model, it is quite
possible that you ought to be looking at something else.  It may help
to look at this in terms of the Rényi functionals:
  H_0[P] = multiplicity  = Hartley functional
  H_1[P] = plain old entropy = Boltzmann functional
  H_∞[P] = adamance
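
For reference, these are the α = 0, 1, ∞ special cases of
  H_α[P] = log(Σ_i p_i^α) / (1−α)
namely
  H_0[P] = log(number of outcomes with p_i > 0)
  H_1[P] = Σ_i p_i log(1/p_i)
  H_∞[P] = log(1/max_i p_i)
with H_0 ≥ H_1 ≥ H_∞ always, and equality only for a uniform
distribution.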

The entropy H_1 may be appropriate if the attacker needs to break
all messages, or a "typical" subset of messages.  The adamance H_∞
may be more appropriate if there are many messages and the attacker
can win by breaking any one of them.

To say the same thing in other words:
 -- A small multiplicity (H_0) guarantees the problem is easy for the attacker.
 -- A large adamance (H_∞) guarantees the problem is hard for the attacker.



Now let us fast-forward and suppose, hypothetically, that you
have obtained a lower bound on what the chip produces.

One way to proceed is to use a hash function.  For clarity, let's
pick SHA-256.  Obtain from the chip not just 256 bits of adamance,
but 24 bits more than that, namely 280 bits.  This arrives in the
form of a string of bytes, possibly hundreds of bytes.  Run this
through the hash function.  The output word is 32 bytes i.e. 256
bits of high-quality randomness.  The key properties are:
 a) There will be 255.99 bits of randomness per word, guaranteed
  with high probability, more than high enough for all practical
  purposes.
 b) It will be computationally infeasible to locate or exploit
  the missing 0.01 bit.

Note that it is not possible to obtain the full 256 bits of
randomness in a 256-bit word.  Downstream applications must be
designed so that 255.99 is good enough.
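
A minimal sketch of that conditioning step, assuming OpenSSL's EVP
API.  The crucial input -- a /lower bound/ of at least 280 bits of
adamance in raw[] -- must come from analysis of the source, not from
testing:

#include <openssl/evp.h>
#include <stddef.h>

int condition(const unsigned char *raw, size_t rawlen,
              unsigned char out[32])
{
    /* out receives 32 bytes, i.e. ~255.99 bits of high-quality
     * randomness, per the accounting above. */
    unsigned int outlen = 32;

    return EVP_Digest(raw, rawlen, out, &outlen, EVP_sha256(), NULL) == 1;
}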



As with all of crypto, this requires attention to detail.  You
need to protect the hash inputs, outputs, and all intermediate
calculations.  For example, you don't want such things to get
swapped out.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #4607] improve quietness for s_client ... also documentation for s_client + s_server

2016-07-05 Thread John Denker via RT
On 07/05/2016 02:42 PM, Rich Salz via RT wrote:
> this is for 1.0.2, right?

:; openssl version
OpenSSL 1.1.0-pre6-dev

:; git log
commit c2d551c01930df54bce6517cfecd214db6e98e80
Date:   Wed Apr 27 14:47:45 2016 +0100


-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4607
Please log in as guest with password guest if prompted

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] [openssl.org #4607] improve quietness for s_client ... also documentation for s_client + s_server

2016-07-05 Thread John Denker via RT
Hi --

Attached are four simple patches.
They make the apps more usable.
They should be pretty much self-explanatory.
Let me know if you have questions.

-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4607
Please log in as guest with password guest if prompted

From 07ff5a786d6d06774688404c2dedf86097d449d4 Mon Sep 17 00:00:00 2001
From: John Denker 
Date: Tue, 5 Jul 2016 08:49:10 -0700
Subject: [PATCH 1/4] make s_client more quiet when -quiet is specified

---
 apps/s_client.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/apps/s_client.c b/apps/s_client.c
index e79cf7e..0391581 100644
--- a/apps/s_client.c
+++ b/apps/s_client.c
@@ -2331,7 +2331,7 @@ int s_client_main(int argc, char **argv)
 if (c_brief)
 BIO_puts(bio_err, "CONNECTION CLOSED BY SERVER\n");
 else
-BIO_printf(bio_err, "read:errno=%d\n", ret);
+if (ret || !c_quiet) BIO_printf(bio_err, "read:errno=%d\n", ret);
 goto shut;
 case SSL_ERROR_ZERO_RETURN:
 BIO_printf(bio_c_out, "closed\n");
@@ -2377,7 +2377,7 @@ int s_client_main(int argc, char **argv)
 at_eof = 1;
 
 if ((!c_ign_eof) && ((i <= 0) || (cbuf[0] == 'Q' && cmdletters))) {
-BIO_printf(bio_err, "DONE\n");
+if (!c_quiet) BIO_printf(bio_err, "DONE.\n");
 ret = 0;
 goto shut;
 }
-- 
2.7.4

From e6d642aba8281fb57afd637a87b8dd982f27e988 Mon Sep 17 00:00:00 2001
From: John Denker 
Date: Tue, 5 Jul 2016 08:50:58 -0700
Subject: [PATCH 2/4] when a write to stdout has failed, sending a message to
 stdout is pointless, so let's send it to stderr instead; also let's send a
 more informative message

---
 apps/s_client.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/apps/s_client.c b/apps/s_client.c
index 0391581..504e729 100644
--- a/apps/s_client.c
+++ b/apps/s_client.c
@@ -2269,7 +2269,9 @@ int s_client_main(int argc, char **argv)
 i = raw_write_stdout(&(sbuf[sbuf_off]), sbuf_len);
 
 if (i <= 0) {
-BIO_printf(bio_c_out, "DONE\n");
+/* typical failure is broken pipe */
+BIO_printf(bio_err, "s_client.c: write to stdout failed (%d): %s\n",
+i, strerror(errno));
 ret = 0;
 goto shut;
 /* goto end; */
-- 
2.7.4

From 59272ed9b51263a165866637ff993382ad8d2bfc Mon Sep 17 00:00:00 2001
From: John Denker 
Date: Tue, 5 Jul 2016 09:09:37 -0700
Subject: [PATCH 3/4] document the -verify_quiet option to s_client

---
 apps/s_client.c   |  6 --
 doc/apps/s_client.pod | 10 --
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/apps/s_client.c b/apps/s_client.c
index 504e729..b0ad2a0 100644
--- a/apps/s_client.c
+++ b/apps/s_client.c
@@ -611,7 +611,8 @@ OPTIONS s_client_options[] = {
 {"nbio_test", OPT_NBIO_TEST, '-', "More ssl protocol testing"},
 {"state", OPT_STATE, '-', "Print the ssl states"},
 {"crlf", OPT_CRLF, '-', "Convert LF from terminal into CRLF"},
-{"quiet", OPT_QUIET, '-', "No s_client output"},
+{"quiet", OPT_QUIET, '-',
+  "Do not print session and certificate info.  See also -verify_quiet"},
 {"ign_eof", OPT_IGN_EOF, '-', "Ignore input eof (default when -quiet)"},
 {"no_ign_eof", OPT_NO_IGN_EOF, '-', "Don't ignore input eof"},
 {"starttls", OPT_STARTTLS, 's',
@@ -635,7 +636,8 @@ OPTIONS s_client_options[] = {
 {"CRLform", OPT_CRLFORM, 'F', "CRL format (PEM or DER) PEM is default"},
 {"verify_return_error", OPT_VERIFY_RET_ERROR, '-',
  "Close connection on verification error"},
-{"verify_quiet", OPT_VERIFY_QUIET, '-', "Restrict verify output to errors"},
+{"verify_quiet", OPT_VERIFY_QUIET, '-',
+  "Restrict verify output to errors.  See also -quiet"},
 {"brief", OPT_BRIEF, '-',
  "Restrict output to brief summary of connection parameters"},
 {"prexit", OPT_PREXIT, '-',
diff --git a/doc/apps/s_client.pod b/doc/apps/s_client.pod
index 77668ea..205d057 100644
--- a/doc/apps/s_client.pod
+++ b/doc/apps/s_client.pod
@@ -63,6 +63,7 @@ B B
 [B<-ign_eof>]
 [B<-no_ign_eof>]
 [B<-quiet>]
+[B<-verify_quiet>]
 [B<-ssl3>]
 [B<-tls1>]
 [B<-tls1_1>]
@@ -298,8 +299,13 @@ input.
 
 =item B<-

Re: [openssl-dev] [openssl.org #3502] nameConstraints bypass bug

2016-05-31 Thread John Denker via RT
Here's a set of obvious questions:
  -- What is the current design?
   Is there a concise-and-complete statement somewhere?
  -- What are the design constraints?
   What is it that openssl MUST do?
   What is it that openssl MUST NOT do?
  -- What information is available?
  -- What critical information is not available,
   and why not?

I mention this because I read things like "deprecated"
and "working as designed" ... referring to the same
features AFAICT.  Also I read "the best one can do,
absent additional information" and it makes me wonder.

In particular, at some point one could consider
changing the design to obtain additional info.  For
starters, one could imagine an interface that says
in effect:
  int isOK = this_site_sent_me_this_webPKI_cert(sitename, cert, ...);

The point is that when this interface is used, we don't
need to worry about CN="Joe Bloggs" s/mime issues.  We
know it's supposed to be a webPKI cert and anything else
MUST be rejected.
  -- For that matter, in this context, nowadays the CN 
   SHOULD be ignored completely in favor of SANs.
  -- Especially when there are nameConstraints or other
   v3 features in the cert, I would suggest the CN MUST
   be ignored in favor of SANs.
  ++ More generally, the interface should demand whatever
   information is needed in order to make an intelligent
   decision.
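
A hypothetical prototype, to make the shape of such an interface
concrete (the parameters here are invented for illustration; this is
not an existing OpenSSL API):

typedef struct x509_st X509;    /* OpenSSL's certificate type */

/* Returns nonzero iff `cert`, as presented by `sitename`, is
 * acceptable *as a webPKI server certificate*; anything else is
 * rejected, and the CN is ignored in favor of SANs. */
int this_site_sent_me_this_webPKI_cert(const char *sitename,
                                       const X509 *cert,
                                       unsigned int flags);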

==

As a sidelight, not important but amusing, one might wonder:
What is it supposed to mean when a name-constrained CA issues
a CN="Joe Bloggs" certificate?  Why would anyone want to rely
on such a cert?  Before we decide this is working as designed,
it might be nice to take a close look at the design.

In any case, I would suggest that the s/mime tail should not
be allowed to wag the webPKI dog.


-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=3502
Please log in as guest with password guest if prompted

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #3502] nameConstraints bypass bug

2016-05-30 Thread John Denker via RT
On 05/30/2016 08:58 PM, Viktor Dukhovni wrote:

> Name constraints in the X.509v3 PKI have not worked well, and are
> rarely used.  The attack requires an issuing CA to be willing to
> issue certificates beyond its constraints, that would be quite
> noticeable and rather unwise.  So I think this is not a major
> problem.  We should probably make a reasonable effort to address
> this, but the urgency is I think low.

The priority may be higher than that, because of something
that has not yet been mentioned in this discussion:

  The nameConstraints protect the issuing CA, not just
  the relying parties.

Here's the scenario:  I persuade 1000 of my closest friends
to accept my mumble.com CA as a trusted root.  I offer them
the assurance that:
  The root cert is name-constrained, and therefore affects
  only their interactions with *.mumble.com, so it's
  not very dangerous. [1]

The first problem is that if openssl does not implement
nameConstraints properly, my assertion [1] is false.

This leads to a second problem:  My cert-issuing machine
becomes a much juicier target.  If anybody pwns my machine,
then /every/ cert-based activity of /every one/ of my friends
is compromised, via the nameConstraints bypass bug.

The problem does not revolve around me intentionally doing
something unwise;  it involves a bad guy stealing from me
and then doing something nasty.

So it seems the priority / prevalence argument is at best
circular:  People would use the feature a lot more if they
could trust it to do the right thing.

As Fred Smith once said, you don't judge the importance or
the optimal size of the proposed bridge according to the
number of people seen swimming across the river before the
bridge is built.


-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=3502
Please log in as guest with password guest if prompted

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #3502] nameConstraints bypass bug

2016-05-30 Thread John Denker via RT
I'm glad to see some discussion of this issue.
Note that other implementations treated this as
a bug and fixed it a long time ago.

On 05/30/2016 11:56 AM, Rich Salz wrote:

> WebPKI has deprecated
> cn-as-hostname for more than a decade and mandated SAN names.

Well, maybe that is the Right Answer™

I'm not sure what "deprecated" and "mandated" mean in
the openssl context.  If openssl actually de-implemented
CN-as-hostname and actually mandated SAN, that would
solve the nameConstraints bypass bug in grand style.

> Leaving this open because we might be able to do some heuristics/hacks to
> determine when the CN "should be" a DNS name.

How about this for a heuristic:  If nameConstraints are
in effect, then the validator MUST NOT accept the CN as
a DNS name.  This seems like the least the validator
could do, in light of the aforementioned deprecation.

This seems unlikely to generate false alarms, since any
issuer who uses nameConstraints should be savvy enough 
to not rely on CN-as-hostname to the exclusion of SANs.

  Optionally a CN that satisfies the nameConstraints could
  be tolerated, insofar as it is deprecated but harmless.

>  But the workaround is to use SAN.

That workaround is not specific enough to solve the 
security problem.  Note the contrast:
 -- As it stands, good guys are /allowed/ to use SAN.
 -- The problem is not solved until bad guys are
  /required/ to use SAN;
 ... or more to the point, required to not use anything
  but SAN;
 ... or even more to the point, required to not use
  anything that bypasses the nameConstraints.

The crucial word there is "required".


-- 
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=3502
Please log in as guest with password guest if prompted

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #4169] openssl-1.0.2e build still recommends deprecated (unnecessary?) `make depend`, returns numerous warnings abt not finding stddef.h

2016-03-14 Thread John Denker
On 03/14/2016 12:53 PM, Salz, Rich via RT wrote:

>> In order to build openssl 1.0.2g
>> 
>>  use `make depend` when prompted -- i.e., do NOT ignore the advice
>>  but DO ignore the 1000's of lines of output, and just proceed to
>> subsequent `make`
>> 
>> And that resultant build is considered a reliable build.
>> 
>> Is that correct?

> Yes.

How do you know it's reliable?

In particular, how do you know there is not one important 
warning hiding among the thousands of others?

To assume that "any warning must be a false warning" seems
tantamount to assuming there cannot possibly be any bugs 
in openssl.

When I'm writing code, for many many years I have treated all
warnings as fatal errors.  That applies to all my code, not
just mission-critical and security-critical code.

It's very trendy these days to use "formal methods" to increase
reliability and security.  Getting the code to compile without
warnings seems like 0.01% of a baby step in the right direction.
Conversely, training users to ignore warnings seems antisocial.
It is the opposite of good security practice. 

> In this particular case it's more trouble than it's worth.
> 
> A future update to 1.0.2 might just remove that.

If it's not supported it should be stricken from the list
of supported features.   Conversely, if it's a supported
feature it should do the right thing.  Code that generates
thousands of warnings is not doing the right thing.

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-users] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback

2015-11-20 Thread John Denker
On 11/19/2015 12:28 PM, Viktor Dukhovni wrote:

> What algorithms people use on
> their own data is their choice and risk decision not ours.

I heartily agree with the sentiment.  A low- or mid-level library
is not the right place to be making and enshrining policy decisions.

We can take yet another step in the same direction.  There are
several job descriptions:
 1) Writing the library code.
 2) Compiling the library code.  This might be done by some
  distributor.  This is where final decisions about #define
  options are made.
 3) Linking against the compiled library and running the application.
 4) Running the counterpart application on the other end of the
  communication line.

You would think that the guy in job #3 would be in the best position
to make policy decisions ... but sometimes even he doesn't get to
make that decision, because of #4.  It takes two to tango.  It seems
very likely that some people are using openssl to communicate with
legacy devices that use "outdated" crypto primitives.

-- There are /some/ cases where it is better to communicate in the
 clear than to encrypt badly.
-- There are /some/ cases where it is better to not communicate at
 all than to encrypt badly.
++ Sometimes not!  There is no such thing as absolute security, and
 sometimes an algorithm that would not withstand an "advanced persistent"
 attack might be good enough for some quick-and-dirty tactical purpose.

To say the same thing another way:  I am quite sure that many
/persons/ on this list, if assigned to job #3 and/or job #4, could
make wise decisions at those levels, based on information available
at those levels.  Indeed there are some persons on this list who
wear all four hats simultaneously.

So the question is, are there any representatives of category #3
who are willing to speak on behalf of /everybody/ in that category?
If not, it seems this thread is asking a question that cannot be
answered.

To say the same thing yet another way, fundamentally we have a
communication problem, or rather two separate communication
problems:
 A) The experts on this list know that certain crypto primitives
  are "broken or outdated".  This needs to be communicated to the
  people who are actually in a position to make and implement
  policy.
 B) There is some question as to whether users in the field have
  received message (A) and successfully ended all use of the
  deprecated primitives.  It would be nice if the people who 
  know the status could communicate this back to the developers.

The problem is:  It's not obvious that discussions on this list 
will solve either of these communication problems.  It's very 
asymmetrical:  If somebody squawks, you know you have a problem
... but the converse does not hold.  Furthermore, it seems likely
that the people who subscribe to this list have long since gotten
message (A) ... but what about non-subscribers?  There's a 
correlation there, the sort of correlation that makes it very
perilous to extrapolate.

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] [openssl.org #3960] make install fails with --prefix=./relative-path

2015-07-28 Thread John Denker via RT
Scenario:
  :; git clone https://github.com/openssl/openssl openssl-temp
  :; cd openssl-temp
  :; ./config --prefix=./relpath
  :; make
  :; make install
  [spewage snipped]

  created directory `./relpath'
  Cannot create directory ./relpath/.: File exists
  Makefile:669: recipe for target 'install_docs' failed
  make: *** [install_docs] Error 17

Discussion:

It could be argued that an implicit relative path of the
form --prefix=usr is probably a user error, i.e. a typo
in lieu of --prefix=/usr.  However, if you think it 
should be treated as an error, it should be caught at 
./config time ... rather than waiting until the middle 
of the install process.  Also, there should be some
meaningful, helpful error message, rather than "file 
exists".

Furthermore, an explicit relative path (i.e. one with 
a leading "./" or "../" in it) is probably not a user
error.  The expected and desired behavior is that it
should just work.

  If for some reason this cannot work, it should be
  caught at ./config time.  A meaningful, helpful
  error message should be given.

___
openssl-bugs-mod mailing list
openssl-bugs-...@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-bugs-mod

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Kerberos

2015-05-09 Thread John Denker
On 05/09/2015 05:21 AM, Douglas E Engert wrote:
> 
> Removing the code might be the best thing that could happen.

It "might" be.  That's hardly a ringing endorsement.

> Misuse of the older Kerberos code in OpenSSL with SSL is not as
> secure as one might think.

That's not proof -- that's not even evidence -- that it
is necessary to remove the code.  Moreover, removing the
code outright is an awfully high-handed way to inform the
users of what we think is "best" for them.

As previously mentioned in a different context, it 
is a bedrock principle of sound reasoning and sound 
planning that one should 
   /Consider all the plausible scenarios./

So let's consider the following scenario:  Rather 
than extirpating the code, we could simply add in 
a few instances of something like this:

  #error This feature is insecure, obsolete, unsupported, and vehemently deprecated.
  #warning This code will be removed in a future release.

and leave it that way for a couple of Debian release
cycles.  That serves the purpose of communicating
with the users, without being quite so high-handed.
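For instance (a sketch only; the opt-in macro name is made up,
and nothing like it exists in the tree today):

#ifndef OPENSSL_KRB5_DEPRECATED_OK      /* hypothetical opt-in macro */
# error "Kerberos+SSL support is insecure, obsolete, and scheduled for \
removal; define OPENSSL_KRB5_DEPRECATED_OK to keep building it for now."
#endif

That way, anybody who consciously decides to keep using the feature
can say so explicitly, and everybody else finds out at build time.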

Also it would be good to communicate exactly what is
being deprecated.  All of Kerberos?  Some particular
combination of Kerberos+SSL?

In this scenario, users who wish to communicate a 
reply to us can do so, on a non-emergency basis.
They can search for other ways of doing what needs 
to be done, on a non-emergency basis.


Re: [openssl-dev] Kerberos

2015-05-08 Thread John Denker
On 05/05/2015 01:21 AM, Matt Caswell wrote:

> I am considering removing Kerberos support from OpenSSL 1.1.0. There are
> a number of problems with the functionality as it stands, and it seems
> to me to be a very rarely used feature.

I don't understand what it means to say the
feature "seems" rarely used.  Is there any 
actual evidence about the number and/or
importance of uses?

>  I'm interested in hearing any
> opinions on this (either for or against).

Opinions are not a good substitute for actual
evidence.

This thread has revealed that some people on
this list would prefer something else, but
that leaves unanswered (and almost unasked)
the question of whether Kerberos is actually 
being used.

Personally I don't use it, but that does not
come close to answering the question.  A few
moments of googling suggest that some people
are using Kerberos in conjunction with openssl.
For example:
  
http://linuxsoft.cern.ch/cern/slc61/i386/yum/updates/repoview/krb5-pkinit-openssl.html

> I plan to start preparing the patches to remove it next week.

Why do we think that's worth the trouble?

What evidence is there that removal won't 
cause problems?  It's hard to prove a negative,
and the recent discussions on this list don't
even come close.

I don't care about Kerberos directly, but it
seems like a poor use of resources to worry
about Kerberos while more pressing issues are
left unaddressed.



Re: [openssl-dev] [openssl.org #3771] patch: bug: s_client loop at 100% cpu

2015-04-05 Thread John Denker via RT
Comment reformatted to comply with new OpenSSL coding style chapter 8
  https://www.openssl.org/about/codingstyle.txt

Functionality unchanged from previous patch.



From 9e896a7a0f1ae28ab32c025ae2a5730aa7343c6a Mon Sep 17 00:00:00 2001
From: John Denker 
Date: Sat, 4 Apr 2015 16:36:51 -0700
Subject: [PATCH] fix CPU-hogging loop; don't try to read when EoF already seen

---
 apps/s_client.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/apps/s_client.c b/apps/s_client.c
index ec11617..0d41318 100644
--- a/apps/s_client.c
+++ b/apps/s_client.c
@@ -729,6 +729,7 @@ int MAIN(int argc, char **argv)
 int crl_download = 0;
 STACK_OF(X509_CRL) *crls = NULL;
 int sdebug = 0;
+int tty_at_EoF = 0;
 
 meth = SSLv23_client_method();
 
@@ -1719,7 +1720,14 @@ int MAIN(int argc, char **argv)
 if (!ssl_pending) {
 #if !defined(OPENSSL_SYS_WINDOWS) && !defined(OPENSSL_SYS_MSDOS) && !defined(OPENSSL_SYS_NETWARE)
 if (tty_on) {
-if (read_tty)
+/*-
+ * Note that select() returns whenever a read _would not block_
+ * and being at EoF satisfies this criterion ...
+ * even though a read after EoF is not interesting to us
+ * and would cause a CPU-hogging loop.
+ * Hence the factor of tty_at_EoF here.
+ */
+if (read_tty && !tty_at_EoF)
 openssl_fdset(fileno(stdin), &readfds);
 if (write_tty)
 openssl_fdset(fileno(stdout), &writefds);
@@ -1977,6 +1985,8 @@ int MAIN(int argc, char **argv)
 } else
 i = raw_read_stdin(cbuf, BUFSIZZ);
 
+if (i == 0) tty_at_EoF = 1;
+
 if ((!c_ign_eof) && ((i <= 0) || (cbuf[0] == 'Q'))) {
 BIO_printf(bio_err, "DONE\n");
 ret = 0;
-- 
2.1.0



Re: [openssl-dev] [openssl.org #3771] patch: bug: s_client loop at 100% cpu

2015-04-04 Thread John Denker via RT
The attached patch makes the problem go away.

The method of solution is simple and obvious.


From 92e824e2cfa02ecfc41b78e91acdd5ac0a845c17 Mon Sep 17 00:00:00 2001
From: John Denker 
Date: Sat, 4 Apr 2015 16:36:51 -0700
Subject: [PATCH] fix CPU-hogging loop; don't try to read when EoF already seen

---
 apps/s_client.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/apps/s_client.c b/apps/s_client.c
index ec11617..775420a 100644
--- a/apps/s_client.c
+++ b/apps/s_client.c
@@ -729,6 +729,7 @@ int MAIN(int argc, char **argv)
 int crl_download = 0;
 STACK_OF(X509_CRL) *crls = NULL;
 int sdebug = 0;
+int tty_at_EoF = 0;
 
 meth = SSLv23_client_method();
 
@@ -1719,7 +1720,12 @@ int MAIN(int argc, char **argv)
 if (!ssl_pending) {
 #if !defined(OPENSSL_SYS_WINDOWS) && !defined(OPENSSL_SYS_MSDOS) && !defined(OPENSSL_SYS_NETWARE)
 if (tty_on) {
-if (read_tty)
+/* Note that select() returns whenever a read _would not block_ */
+/* and being at EoF satisfies this criterion ...*/
+/* even though a read after EoF is not interesting to us*/
+/* and would cause a CPU-hogging loop.  */
+/* Hence the factor of tty_at_EoF here. */
+if (read_tty && !tty_at_EoF)
 openssl_fdset(fileno(stdin), &readfds);
 if (write_tty)
 openssl_fdset(fileno(stdout), &writefds);
@@ -1977,6 +1983,8 @@ int MAIN(int argc, char **argv)
 } else
 i = raw_read_stdin(cbuf, BUFSIZZ);
 
+if (i == 0) tty_at_EoF = 1;
+
 if ((!c_ign_eof) && ((i <= 0) || (cbuf[0] == 'Q'))) {
 BIO_printf(bio_err, "DONE\n");
 ret = 0;
-- 
2.1.0



[openssl-dev] [openssl.org #3771] bug: s_client loop at 100% cpu

2015-03-30 Thread John Denker via RT

Contrast the following two examples:

#1:
time : | openssl s_client -connect www.openssl.org:443  >& /dev/null

real    0m0.545s
user    0m0.000s
sys     0m0.000s

#2:
time : | openssl s_client -quiet -connect www.openssl.org:443  >& /dev/null

real    0m21.255s
user    0m9.500s
sys     0m11.180s

---

Note the numerology:   21.255 - 9.500 - 11.180 = 0.575
That means that if you discount the roughly half second it takes
to actually fetch the certificate (compare example #1), s_client
was using 100% of the cpu the whole time ... for more than 20 seconds.

I cannot imagine why it loops when "-quiet" is specified and not
otherwise.  I cannot imagine why it loops for 20.5 seconds instead
of 20.5 minutes or 20.5 hours.

This is 100% reproducible chez moi, although the timings naturally
vary by a little bit.


(gdb) where
#0  0x77903653 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:81
#1  0x00434d73 in s_client_main (argc=0, argv=0x7fffe680) at s_client.c:1794
#2  0x004039a8 in do_cmd (prog=0x990540, argc=4, argv=0x7fffe660) at openssl.c:470
#3  0x004035b8 in main (Argc=4, Argv=0x7fffe660) at openssl.c:366


openssl version
OpenSSL 1.1.0-dev xx XXX   (latest github version)

Same symptoms observed in older versions:
openssl version
OpenSSL 1.0.1f 6 Jan 2014

uname -a
Linux asclepias 3.18.0+ #2 SMP Sun Dec 21 18:25:03 MST 2014 x86_64 x86_64 x86_64 GNU/Linux

=

Obvious workaround:  Don't specify the "-quiet" option.  There are
other ways of dealing with the unwanted prolixity.

Priority: low.  Compared to actual security problems such as the
nameConstraints bypass bug [openssl.org #3502], this is nothing.
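For the morbidly curious, here is a minimal standalone program (mine,
not extracted from s_client.c) that reproduces the underlying spin:
select() counts an EoF'd descriptor as readable, read() then returns
0, and a loop that keeps re-arming the descriptor never blocks again.
Run it with stdin redirected from /dev/null and watch it eat a cpu.

#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    for (;;) {
        fd_set readfds;

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);  /* bug: re-armed even after EoF */
        if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, NULL) < 0)
            return 1;
        if (FD_ISSET(STDIN_FILENO, &readfds)) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));

            if (n < 0)
                return 1;
            /* n == 0 means EoF; absent a tty_at_EoF-style latch we
             * just loop around and select() fires again immediately */
        }
    }
}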





Re: [openssl-dev] [openssl.org #3562] leading dots in nameConstraints ... bug report and patch

2015-01-01 Thread John Denker via RT

On 12/31/2014 10:31 AM, Rich Salz via RT wrote:

> This patch from Steve Henson seems better

I am happy with the proposed patch.  I have looked at
the code, and also tested it operationally.

The semantics is reasonable.
 ++ This is what I was arguing for initially.
 ++ It is AFAICT consistent with Mozilla behavior.
 ++ It is more strict than pk11-kit (lynx) behavior;
  see below for details.

>  and a good candidate for 1.0.2 and master...

OK, good, the sooner the better.

This is a "security issue" in the sense that is a Type-II
error (disallowing good guys).  It affects thousands of 
sites and who-knows-how-many users.

  It is not a Type-I error (letting bad guys in) and it
  affects "only" thousands of sites, not millions, so it
  is not in the same category as heartbleed ... but still
  it is well worth fixing, the sooner the better.

*** It would make sense to fix the nameConstraints bypass bug
*** [openssl.org #3502] at the same time.
*** Otherwise the whole nameConstraints concept is pretty much
*** pointless.   http://rt.openssl.org/Ticket/Display.html?id=3502

--

Test results:

The Henson patch does what it is supposed to:
   :| $newopenssl s_client -verify 10 \
  -CAfile /usr/share/ca-certificates/mozilla/Hellenic_Academic_and_Research_Institutions_RootCA_2011.crt \
  -connect auth.edu.gr:443  |& egrep 'Verify'
Verify return code: 0 (ok)

For more detailed checks, we can use the following signing CA:

  wget http://www.av8n.com/pdq-root-ca-cert.pem
  echo 5d9f030c791bcace3580f38286a49c4f  pdq-root-ca-cert.pem | md5sum -c -

Here is another example of the patch allowing stuff, as it should:
  :| $newopenssl s_client -verify 10 -CAfile pdq-root-ca-cert.pem \
  -connect www.pdq.av8n.com:1446 |& egrep 'Verify'
Verify return code: 0 (ok)

Here it is disallowing some things, as it should:
A rule with a leading dot does /not/ match a domain name without
a corresponding dot:
   :| $newopenssl s_client -verify 10 -CAfile pdq-root-ca-cert.pem \
  -connect pdq.av8n.com:1445 |& egrep 'Verify'
Verify return code: 47 (permitted subtree violation)

In contrast, note that pk11-kit is more tolerant, and totally ignores 
the leading dot:
  SSL_CERT_FILE=pdq-root-ca-cert.pem  lynx -source \
  https://pdq.av8n.com:1445/cgi-bin/ipaddr





[openssl.org #3562] leading dots in nameConstraints ... bug report and patch

2014-10-12 Thread John Denker via RT
This bug report and patch concern the way openssl responds to CA 
certificates that have leading dots in the nameConstraints lists.
Thousands of certificates of this type are in use, issued by the 
HARICA_2011 CA and perhaps others.  Given that both Mozilla and 
Ubuntu ship HARICA_2011 as a trusted root CA, this is not a trivial 
matter.

This issue applies to root CAs and intermediate CAs alike.

Point of comparison:  pk11-kit tolerates leading dots:
lynx -source -head https://www.auth.gr/ |& head -1
HTTP/1.1 200 OK

Similarly:  Mozilla NSS tolerates leading dots:
firefox https://www.auth.gr/

Furthermore, virtually all of the user-oriented instructions 
found on the web today call for the use of leading dots.  The 
instructions suggest the dots are mandatory, although in fact 
they are not;  both pk11-kit and Mozilla NSS tolerate the dotted 
and dotless forms equally.

Observed openssl behavior, alas:
wget  https://www.auth.gr/ -O /dev/null |& tail -2
  permitted subtree violation
To connect to www.auth.gr insecurely, use `--no-check-certificate'.

Similarly, alas:
curl https://www.auth.gr/ |& grep curl:
curl: (60) SSL certificate problem: permitted subtree violation

Similarly, alas:
:| openssl s_client -verify 10 -CApath /etc/ssl/certs \
  -connect www.auth.gr:443 |&  grep Verify.return.code
Verify return code: 47 (permitted subtree violation)

Desired behavior:  Openssl should tolerate constraint items with
and without leading dots, just like NSS and pk11-kit do.
 :| fixed/openssl s_client -verify 10 -CApath /etc/ssl/certs \
  -connect www.auth.gr:443 |&  grep Verify.return.code
Verify return code: 0 (ok)


Short of patching openssl, no reasonable workaround for this problem
has been found.  Less-than-reasonable options include:
 ☠ Re-issuing thousands of certificates.  This would be prohibitively
  disruptive and expensive.
 ☠ Rewriting all applications to use pk11-kit instead of openssl.

A patch is available:  https://www.av8n.com/openssl/leading-dot.diff
The patch applies to openssl-1.0.1i.

Priority:  The bug as it stands is a denial-of-service for anybody 
who wants to use openssl in connection with thousands of already-issued 
certs.

   There is probably not much point in fixing this unless we also
   fix the nameConstraints bypass bug, i.e. [openssl.org #3502]

The security risk associated with applying the patch is exceedingly
small.  It's a very minor change to the code.  It makes the code in
some ways better and in no ways worse.  The semantics of the leading-
dot form is not seriously open to question.

The counterargument that the RFC does not require us to tolerate
leading dots is bogus.  The way I read it, the RFC does require it.
  http://www.rfc-editor.org/rfc/rfc5280.txt   (page 40)
Furthermore, common sense requires it and certainly the RFC does
not forbid it.  Besides, RFCs are not gospel anyway.  This RFC
has already been patched to fix horrific errors.

At one point there was talk of treating the leading-dot forms
differently from the dotless forms, but this suggestion has
been withdrawn.

Additional info:  To test the dotless form, to make sure it 
continues to work:
wget http://www.av8n.com/av8n.com_Dotless_Root_CA.pem
echo 00453924426f1cbb69c0ea9b62d8f2fd  av8n.com_Dotless_Root_CA.pem | md5sum -c -
 :| openssl s_client -verify 10 -CAfile av8n.com_Dotless_Root_CA.pem \
  -connect dotless.av8n.com:1443 |& grep Verify.return.code
Verify return code: 0 (ok)

Also: To see the structure of the HARICA-2011 CA certificate, 
including the leading dots:
openssl x509 -text -noout \
  -in /usr/share/ca-certificates/mozilla/Hellenic_Academic_and_Research_Institutions_RootCA_2011.crt



Re: [openssl.org #3502] effective_names function --> smarter way to fix bypass bug

2014-09-13 Thread John Denker
Hi Folks --

I figured out a smarter way to fix the bypass bug ...
and potentially make some other things better at the 
same time.

The idea is to create a structured reference that returns
a stack containing the relevant effective name(s) of a
given x509 certificate.  This means there's a lot of
code -- in various places -- that no longer needs
to know or care whether the name(s) come from the 
subjectAltName list or from the common name.

The new function is called from the code that checks
nameConstraints, but it could usefully be called from
elsewhere.  In particular, the 'curl' application has
about 100 lines of code that could almost all be
replaced by a call to the effective_names function.
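To make the intent concrete, here is a rough sketch of the shape of
such a function (my own illustration; the helper name and the choice
to return a stack of strings are assumptions -- the actual draft,
which differs in detail, is in the diff linked below):

#include <openssl/x509.h>
#include <openssl/x509v3.h>

/* Collect the name(s) a verifier should treat as the effective
 * name(s) of a cert: the DNS entries of subjectAltName if that
 * extension is present, otherwise the common name. */
static STACK_OF(OPENSSL_STRING) *effective_names_sketch(X509 *cert)
{
    STACK_OF(OPENSSL_STRING) *out = sk_OPENSSL_STRING_new_null();
    GENERAL_NAMES *sans = X509_get_ext_d2i(cert, NID_subject_alt_name,
                                           NULL, NULL);
    int i;

    if (sans != NULL) {
        for (i = 0; i < sk_GENERAL_NAME_num(sans); i++) {
            GENERAL_NAME *gen = sk_GENERAL_NAME_value(sans, i);

            if (gen->type == GEN_DNS)
                sk_OPENSSL_STRING_push(out,
                        (char *)ASN1_STRING_data(gen->d.dNSName));
        }
        /* NB: 'out' borrows storage from 'sans'; a real version
         * must manage that lifetime instead of leaking it here. */
    } else {
        static char cn[256];

        if (X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                                      NID_commonName, cn, sizeof cn) > 0)
            sk_OPENSSL_STRING_push(out, cn);
    }
    return out;
}

With something like this in hand, the nameConstraints checker (and
curl, and others) can simply iterate over the returned stack.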

A first draft of some code to do this can be found at
  https://www.av8n.com/openssl/effective-names.diff

Beware that I don't have much experience programming
in the openssl environment, so somebody should check
this code pretty carefully.  I'm calling functions
that aren't terribly well documented, so I had to do
a lot of reasoning by analogy.



There is an associated patch
  https://www.av8n.com/openssl/const-get-subject-name.diff
that adds a few 'const' declarations.

I reckon 'const' declarations can't hurt and might help.
 



Re: [openssl.org #3502] nameConstraints bypass bug: a fix, or some approximation thereto

2014-09-09 Thread John Denker
On 08/22/2014 12:26 PM, Salz, Rich wrote:
> It'd be good to fix this.

Behold a patch that seems to fix it:
  https://www.av8n.com/openssl/bypass-bugfix.diff

The code seems pretty straightforward to me, but on the
other hand, I have very little experience coding in the
openssl environment, so I might be overlooking something.
Somebody should check this pretty closely.

A simple way to exhibit the bug (and the fix) is as follows:

Desired behavior:
  openssl verify -CAfile av8n-root-ca-cert.pem bypass.jdenker.com-cert.pem
  bypass.jdenker.com-cert.pem: C = US, CN = bypass.jdenker.com
  error 47 at 0 depth lookup:permitted subtree violation

Observed (unfixed) behavior:
  openssl verify -CAfile av8n-root-ca-cert.pem bypass.jdenker.com-cert.pem
  bypass.jdenker.com-cert.pem: OK
which is a security lapse.

The demonstration certs can be found at:
  https://www.av8n.com/openssl/av8n-root-ca-cert.pem
  https://www.av8n.com/openssl/bypass.jdenker.com-cert.pem



[openssl.org #3502] nameConstraints bypass bug

2014-08-24 Thread John Denker via RT
At present, it is pathetically easy to trick openssl into
bypassing nameConstraints.  All you need to do is put 
some evil DNS name in the common name and not provide 
any subjectAltName list.
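Schematically, the shape of the bug is as follows.  This is a toy
model of the control flow, not the actual code from the openssl tree:

#include <stdio.h>
#include <string.h>

/* A cert reduced to its common name plus an optional subjectAltName
 * (NULL when the extension is absent).  Pretend the CA's permitted
 * subtree is .av8n.com. */
struct toy_cert { const char *cn; const char *san; };

static int violates(const char *name)
{
    size_t n = strlen(name);
    return !(n > 9 && strcmp(name + n - 9, ".av8n.com") == 0);
}

static int verify_buggy(const struct toy_cert *c)
{
    if (c->san != NULL && violates(c->san))
        return 47;              /* permitted subtree violation */
    return 0;                   /* no SAN => CN never examined: the bug */
}

static int verify_fixed(const struct toy_cert *c)
{
    const char *name = (c->san != NULL) ? c->san : c->cn;
    return violates(name) ? 47 : 0;
}

int main(void)
{
    struct toy_cert evil = { "bypass.jdenker.com", NULL };

    /* prints "buggy: 0   fixed: 47" */
    printf("buggy: %d   fixed: %d\n",
           verify_buggy(&evil), verify_fixed(&evil));
    return 0;
}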

  I checked;  this bug is present in openssl-1.0.1i;
  openssl s_client happily connects to bypass.jdenker.com
  in defiance of my CA's nameConstraints.  This affects
  widely-used apps including curl, lynx, and wget, although
  I only checked them as of 1.0.1f.

  If you want a live demonstration, try the following:
   wget http://www.av8n.com/av8n.com_Root_CA.pem
   echo bde600da763f4105ceb64913d0ed5838  av8n.com_Root_CA.pem | md5sum -c -
   SSL_CERT_FILE=av8n.com_Root_CA.pem curl https://bypass.jdenker.com:444/hello.txt
  Observed behavior:  Command succeeds, prints "Hello, world!"
  Desired behavior:  Should fail, due to violation of nameConstraints.
  Similarly, the following succeeds, but should not:
:| openssl s_client -CAfile av8n-root-ca-cert.pem -connect bypass.jdenker.com:444
  Compare and contrast:  firefox properly complains; see below.
  Also compare:  The following succeeds, as it should:
SSL_CERT_FILE=av8n.com_Root_CA.pem curl https://cloud.av8n.com/hello.txt

  If anybody is interested, I can provide the config files
  that generate the certificates in question.

You can easily find additional discussion of this bug:
  https://www.google.com/search?q=x509+%22name+constraints%22+bypass
leads to
  http://www.openwall.com/lists/oss-security/2013/08/12/4



Note that in contrast, the bypass bug has been fixed in 
Mozilla NSS.
 https://bugzilla.mozilla.org/show_bug.cgi?id=394919

When I try the bypass trick on firefox 31.0, it throws 
an appropriate error.

>>> firefox https://bypass.jdenker.com:444/hello.txt

> Secure Connection Failed
>
> An error occurred during a connection to bypass.jdenker.com:444
> The Certifying Authority for this certificate is not permitted 
> to issue a certificate with this name.
>
> (Error code: sec_error_cert_not_in_name_space) 

The user is not even given the option of continuing past 
the error.  So evidently this is considered more serious 
than a run-of-the-mill problem with an unrecognized issuer.

===

Additional discussion of why this is important has already
been posted to the openssl-dev list; see
  http://marc.info/?l=openssl-dev&m=140873436313689&w=2



nameConstraints bypass bug

2014-08-22 Thread John Denker
Executive summary:
Forgive me if I have overlooked something, but I could not
find any discussion of the nameConstraints bypass bug on
openssl.org.  Is everybody aware of this?  Do we need to
add it to the request tracker?



On the cryptography list, on 07/19/2014 02:37 PM, Tony 
Arcieri wrote:

> If only X.509 name constraints actually worked.

Indeed.  If name constraints actually worked, they would 
be important.

Right now, almost nobody is using nameConstraints in
conjunction with openssl.  That is because they don't 
work reliably ... not because they are intrinsically 
unimportant.  In particular, it would be a huge mistake 
to let this become a chicken-and-egg problem.  That is, 
it would be a huge mistake to think nameConstraints are
not worth fixing, because nobody is using them (yet).

As Fred Smith once said, when you are designing a new
bridge, you don't size it according to the number of
people who travel across the river today, in the
absence of any bridge.  You size it according to the 
number of people who will travel across the bridge 
/after it is fully functional/.

Right now I have a bunch of web servers.  Rather than
giving each one its own self-signed snake-oil certificate,
I have my own CA.  Each web server has its own CA-signed
certificate.  This gives me more flexibility and more
convenience.  It is a trust model I can believe in:  I
trust the certificate because I issued it.

My CA is subject to name constraints.  If name constraints
actually worked, this would confer an advantage on me,
insofar as it would make me less of a juicy target.  That 
is, it would make my root CA private key much less worth 
stealing.  By the same token, if name constraints actually 
worked, they would make it easier for me to convince /other/
people to trust my CA, because it is far less open to abuse.

However, at present, it is pathetically easy to trick
openssl into bypassing nameConstraints.  All you need to 
do is put some evil DNS name in the common name and not 
provide any subjectAltName list.

  I checked;  this bug is present in openssl-1.0.1i;
  openssl s_client happily connects to bypass.jdenker.com
  in defiance of my CA's nameConstraints.  This affects
  widely-used apps including curl, lynx, and wget, although
  I only checked them as of 1.0.1f.

  If you want a live demonstration, try the following:
   wget http://www.av8n.com/av8n.com_Root_CA.pem
   echo bde600da763f4105ceb64913d0ed5838  av8n.com_Root_CA.pem | md5sum -c -
   SSL_CERT_FILE=av8n.com_Root_CA.pem curl https://bypass.jdenker.com:444/hello.txt
  Observed behavior:  Command succeeds, prints "Hello, world!"
  Desired behavior:  Should fail, due to violation of nameConstraints.
  Compare and contrast:  firefox behavior below.
  Also compare:  The following succeeds, as it should:
SSL_CERT_FILE=av8n.com_Root_CA.pem curl https://cloud.av8n.com/hello.txt
  Also:  The following succeeds, but should not:
:| openssl s_client -CAfile av8n-root-ca-cert.pem -connect bypass.jdenker.com:444

  If anybody is interested, I can provide the config files
  that generate the certificates in question.

You can easily find additional discussion of this bug:
  https://www.google.com/search?q=x509+%22name+constraints%22+bypass
leads to
  http://www.openwall.com/lists/oss-security/2013/08/12/4

As long as openssl remains vulnerable, nobody should rely 
on v3 nameConstraints to be effective.  In particular, in 
accordance with the Golden Rule, I cannot in good conscience 
ask anybody to trust my root CA unless they trust me so 
completely that they accept it to sign anything in the 
world, without regard to the constraints ... or unless
they trust it only on platforms that do not use openssl 
or other vulnerable packages.



Note that in contrast, the bypass bug has been fixed in 
Mozilla NSS.
 https://bugzilla.mozilla.org/show_bug.cgi?id=394919

When I try the bypass trick on firefox 31.0, it throws 
an appropriate error.

>>> firefox https://bypass.jdenker.com:444/hello.txt

> Secure Connection Failed
> 
> An error occurred during a connection to bypass.jdenker.com:444
> The Certifying Authority for this certificate is not permitted 
> to issue a certificate with this name.
> 
> (Error code: sec_error_cert_not_in_name_space) 

The user is not even given the option of continuing past 
the error.  So evidently this is considered more serious 
than a run-of-the-mill problem with an unrecognized issuer.

===

It is my understanding that the RFC has been repaired to
deal with this issue:
  http://www.rfc-editor.org/rfc/rfc5280.txt

This is yet more proof, if any were needed, that RFCs should
never be treated as Holy Scripture.

Re: [openssl-dev] nameConstraints : leading "." in permission list items

2014-08-14 Thread John Denker
On 08/13/2014 08:27 AM, Erwann Abalea wrote in part:

> the question isn't "should we tolerate it?", but "what do the sacred 
> scriptures ask compliant implementation to do?"

What sacred scriptures are we talking about here?  I'm not an 
expert, so correct me if I'm wrong, but I thought RFC stood 
for "Request for Comments", not «Demand for Kadavergehorsamkeit»
(blind, corpse-like obedience).

In the world I live in, yes, there are some people who care 
only about scriptural exegesis.  Meanwhile, there are some 
other people who care about doing what makes sense, doing 
what best serves the interests of the user community.

In any case, it is hard to find any reading of rfc5280 that
disallows /.foo.com/ as a pattern.  Adding stuff to the left
of /.foo.bar/ should count as adding stuff to the left.  So
AFAICT we are not discussing the spiritual purity of the
existing openssl-1.0.1i code;  as the famous anecdote says,
we are just haggling over the price.
  http://quoteinvestigator.com/2012/03/07/haggling/

If anybody wants my comments on this Request-for-Comments:
  a) We need both wildcard /and/ non-wildcard forms, so that
   users can express what they want.
  b) If taken too literally, the text of rfc5280 does not
   allow sufficient expressive power.
  c) AFAICT /.foo.com/ works OK as a wildcard.
   Similarly, /foo.com/ works fine as a non-wildcard.

Significant parts of the user community assume this is how
things already work.  My spiritual advisor says that sometimes
it is OK to amend the RFC.

> This one is a root, so this extension shouldn't be taken into
> account. This clearly written in RFC5280 section 4.2.1.10,

Maybe I'm missing something, but that's not entirely clear.

As I read it, name constraints should not be applied 
  /when checking the validity of the self-signature/
but that does not mean that root CAs are completely forbidden
from having name constraints.  The constraints are applied
to /everything else/ signed by that CA.


Re: nameConstraints : leading "." in permission list items

2014-08-14 Thread John Denker
On 08/13/2014 09:59 AM, at the end of a long message I wrote:

[...]
>  I will rewrite my patch code accordingly.  It will take me a
>  little while to do this and test it.

This is now done.  The improved patch can be found at
  http://www.av8n.com/openssl/leading-dot-better.diff

The patch applies against the  openssl-1.0.1i  tarball.

Good discussion so far.  Any other helpful ideas out there?


Re: nameConstraints : leading "." in permission list items

2014-08-13 Thread John Denker
On 08/13/2014 03:46 AM, Vyronas Tsingaras wrote:
> 
> If you could also take a look at https://github.com/openssl/openssl/pull/111
> we have listed a number of reasons. What are your thoughts on this?

I agree with the reasoning given there.  In particular, one 
point that I left as an open question in my original post is 
now persuasively answered.

  I apologize for not finding that item earlier.
  I did look;  I just missed it somehow.

To summarize my current understanding:

  1) The pattern /foo.bar/ should match "foo.bar" and nothing
   else.  It is not a wildcard.

  2) The pattern /.foo.bar/ is a wildcard that should match
   any left-extension, including "a.foo.bar", "a.b.foo.bar",
   et cetera ... but not "foo.bar" itself.

  3) If somebody wants to match both, they can include both
   on the list.

  4) AFAICT this is nice and logical and consistent with what 
   users expect and what other SSL implementations are doing.
   The argument is strong for the permission list, and even 
   stronger for the exclusion list.  (A sketch of this matching
   rule appears at the end of this message.)

  5) Here is the only counterargument I can see:  enforcing 
   the non-wildcard requirement (item 1 above) will break 
   applications that are relying on the current undocumented 
   behavior as implemented in v3_ncons.c in openssl-1.0.1i.

   Therefore I suggest a transition strategy, as follows:

  6) We would rather not have a situation where a given cert 
   does one thing on some versions of openssl and different 
   things on other versions (and on competing products).  Here
   is a possible way to survive the transition:  We could 
   carefully and conspicuously document the following:
  Anybody who can tolerate matching foo.com and all of
  its subdomains should include both /foo.com/ and 
  /.foo.com/ on the list.  This covers the most common 
  use-case.  Anybody who wants this behavior should issue
  the appropriate cert ASAP, before the openssl update 
  goes out.
   Note that anybody who wants to permit the subdomains but
   not foo.com itself has a problem until openssl gets fixed.
   The current code provides no way to exclude foo.com without
   excluding all the subdomains.  I see no workaround for this.
   AFAICT the only fix is to patch the openssl code.

 I will rewrite my patch code accordingly.  It will take me a
 little while to do this and test it.
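
As promised above, here is a tiny self-contained illustration of the
rule in items 1 and 2 (my sketch, not the code from v3_ncons.c; DNS
case-folding and other subtleties are glossed over):

#include <stdio.h>
#include <string.h>

/* Sketch of the proposed rule:
 *   "foo.bar"  matches only "foo.bar" (no wildcard);
 *   ".foo.bar" matches any left-extension such as "a.foo.bar",
 *              but not "foo.bar" itself. */
static int constraint_match(const char *pattern, const char *name)
{
    size_t plen = strlen(pattern);
    size_t nlen = strlen(name);

    if (pattern[0] == '.') {
        /* wildcard form: name must be strictly longer than the
         * pattern and end with it; the leading dot guarantees
         * the match falls on a label boundary */
        return nlen > plen && strcmp(name + nlen - plen, pattern) == 0;
    }
    /* non-wildcard form: exact match only */
    return strcmp(pattern, name) == 0;
}

int main(void)
{
    printf("%d\n", constraint_match(".foo.bar", "a.foo.bar"));   /* 1 */
    printf("%d\n", constraint_match(".foo.bar", "foo.bar"));     /* 0 */
    printf("%d\n", constraint_match("foo.bar",  "foo.bar"));     /* 1 */
    printf("%d\n", constraint_match("foo.bar",  "a.foo.bar"));   /* 0 */
    return 0;
}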


nameConstraints : leading "." in permission list items

2014-08-13 Thread John Denker
Hi Folks --

0) Beware that I am not an expert in this area.  What follows is
 probably mostly true, but I'm still feeling my way to some extent.

1) There are actually some people who are using v3 nameConstraints.
 Not a lot, but some.

 An example can be found in one of the fully-trusted root certificates
 that is distributed in the current Ubuntu release, and several previous
 releases:
   /etc/ssl/certs/Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem
 which is a symlink to
   /usr/share/ca-certificates/mozilla/Hellenic_Academic_and_Research_Institutions_RootCA_2011.crt

 Let's take a look at it:
 openssl x509 -text -noout \
   < Hellenic_Academic_and_Research_Institutions_RootCA_2011.crt
 [snip]
X509v3 Name Constraints: 
Permitted:
  DNS:.gr
  DNS:.eu
  DNS:.edu
  DNS:.org
  email:.gr
  email:.eu
  email:.edu
  email:.org

 2) Note the leading "." in each item in the permission list.
a) This seems entirely logical and reasonable to me.
b) All the documentation and examples I've seen on the web assume
 the "." should be there.  It's not even a topic of discussion.

 3) Desired behavior:  openssl should tolerate the leading "."

  Question:  Does anybody think the leading "." should be mandatory?
 Or should we tolerate it either way?

 4) Observed behavior:  As of openssl-1.0.1i the leading "." is
  not tolerated.   In particular:

   openssl verify -verbose -check_ss_sig -CAfile $CA_NAME-cert.pem \
  $TARGET-cert.pem
   server.example.net-cert.pem: C = US, CN = server.example.net
   error 47 at 0 depth lookup:permitted subtree violation

   In more detail: I added some debugging printf statements:

    checking DNS 'www.example.net' against '.example.net' ... result: 47
    checking DNS 'www.example.net' against 'example.net' ... result: 0

   The certs I used to test this can be found at
 http://www.av8n.com/openssl/namecon-ca-cert.pem
 http://www.av8n.com/openssl/server.example.net-cert.pem

   If somebody wants the ugly little config files I used to create those 
   certs, they can be provided.

 5) Here is a patch that seems to make the problem go away.
  http://www.av8n.com/openssl/leading-dot.patch
  I do not guarantee that this is high-security industrial-strength code, 
  but it should suffice to let people know where I think the issue lies.

  If somebody wants to take a closer look at what the code is doing,
  here is a bundle of debugging printf statements:
  http://www.av8n.com/openssl/namecon-printf.patch
  This is not meant to be elegant.
  It's quick-and-dirty experimentation.
  I found it useful.  YMMV.

---

Let's discuss this on the -dev list for a little while to see if anybody 
has any better insight as to what's going on.  Then maybe we can send it 
over to the request tracker.

There's more I could say about this, but I'll stop here for now.