Additional things for deprecation

2020-10-13 Thread Tim Hudson
In a 3.0 context, EVP_PKEY_ASN1_METHOD and all the associated functions
should be marked deprecated in my view.

Tim.


Re: OTC VOTE: The PR #11359 (Allow to continue with further checks on UNABLE_TO_VERIFY_LEAF_SIGNATURE) is acceptable for 1.1.1 branch

2020-10-12 Thread Tim Hudson
-1

I don't see this as a bug fix.

Tim


On Fri, Oct 9, 2020 at 10:02 PM Tomas Mraz  wrote:

> topic: The PR #11359 (Allow to continue with further checks on
>  UNABLE_TO_VERIFY_LEAF_SIGNATURE) is acceptable for 1.1.1 branch
> As the change is borderline on bug fix/behaviour change OTC needs
> to decide whether it is acceptable for 1.1.1 branch.
> Proposed by Tomas Mraz
> Public: yes
> opened: 2020-10-09
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [+1]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>
> --
> Tomáš Mráz
> No matter how far down the wrong road you've gone, turn back.
>   Turkish proverb
> [You'll know whether the road is wrong if you carefully listen to your
> conscience.]
>
>
>


Re: VOTE: Weekly OTC meetings until 3.0 beta1 is released

2020-10-09 Thread Tim Hudson
+1

Tim

On Fri, 9 Oct 2020, 10:01 pm Nicola Tuveri,  wrote:

> topic: Hold online weekly OTC meetings starting on Tuesday 2020-10-13
>and until 3.0 beta1 is released, in lieu of the weekly "developer
>meetings".
> Proposed by Nicola Tuveri
> Public: yes
> opened: 2020-10-09
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [+1]
>


Re: VOTE: Technical Items still to be done

2020-10-08 Thread Tim Hudson
+1

Tim.


On Fri, Oct 9, 2020 at 12:47 AM Matt Caswell  wrote:

> topic: The following items are required prerequisites for the first beta
> release:
>  1) EVP is the recommended API, it must be feature-complete compared with
> the functionality available using lower-level APIs.
>- Anything that isn’t available must be put to an OTC vote to exclude.
>- The apps are the minimum bar for this, subject to exceptions noted
> below.
>  2) Deprecation List Proposal: DH_, DSA_, ECDH_, ECDSA_, EC_KEY_, RSA_,
> RAND_METHOD_.
>- Does not include macros defining useful constants (e.g.
>  SHA512_DIGEST_LENGTH).
>- Excluded from Deprecation: `EC_`, `DSA_SIG_`, `ECDSA_SIG_`.
>- There might be some others.
>- Review for exceptions.
>- The apps are the minimum bar to measure feature completeness for
> the EVP
>  interface: rewrite them so they do not use internal nor deprecated
>  functions (except speed, engine, list, passwd -crypt and the code
> to handle
>  the -engine CLI option).  That is, remove the suppression of
> deprecated
>  define.
>  - Proposal: drop passwd -crypt (OMC vote required)
>- Compile and link 1.1.1 command line app against the master headers and
>  library.  Run 1.1.1 app test cases against the chimera.  Treat this
> as an
>  external test using a special 1.1.1 branch. Deprecated functions
> used by
>  libssl should be moved to independent file(s), to limit the
> suppression of
>  deprecated defines to the absolute minimum scope.
>  3) Draft documentation (contents but not pretty)
>- Need a list of things we know are not present - including things we
> have
>  removed.
>- We need to have mapping tables for various d2i/i2d functions.
>- We need to have a mapping table from “old names” for things into the
>  OSSL_PARAMS names.
>  - Documentation addition to old APIs to refer to new ones (man7).
>  - Documentation needs to reference name mapping.
>  - All the legacy interfaces need to have their documentation
> pointing to
>the replacement interfaces.
>  4) Review (and maybe clean up) legacy bridge code.
>  5) Review TODO(3.0) items #12224.
>  6) Source checksum script.
>  7) Review of functions previously named _with_libctx.
>  8) Encoder fixes (PKCS#8, PKCS#1, etc).
>  9) Encoder DER to PEM refactor.
> 10) Builds and passes tests on all primary, secondary and FIPS platforms.
> 11) Query provider parameters (name, version, ...) from the command line.
> 12) Setup buildbot infrastructure and associated instructions.
> 13) Complete make fipsinstall.
> 14) More specific decoding selection (e.g. params or keys).
> 15) Example code covering replacements for deprecated APIs.
> 16) Drop C code output options from the apps (OMC approval required).
> 17) Address issues and PRs in the 3.0beta1 milestone.
> Proposed by .
> Public: yes
> opened: 2020-10-08
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [+1]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>


Re: VOTE: Accept the Fully Pluggable TLSv1.3 KEM functionality

2020-10-08 Thread Tim Hudson
+1

Tim


On Fri, Oct 9, 2020 at 12:27 AM Matt Caswell  wrote:

> topic: We should accept the Fully Pluggable TLSv1.3 KEM functionality as
> shown in PR #13018 into the 3.0 release
> Proposed by Matt Caswell
> Public: yes
> opened: 2020-10-08
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [+1]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>


Re: stable branch release cadence

2020-09-15 Thread Tim Hudson
Thanks Matt - cut-n-paste fumble on my part from the previous vote summary.
The tally should always equal the number of OMC members.

For: 6, against: 0, abstained 0, not voted: 1

Tim.


On Wed, Sep 16, 2020 at 8:11 AM Matt Caswell  wrote:

>
>
> On 15/09/2020 23:10, Tim Hudson wrote:
> > The OMC voted to:
> >
> > /Release stable branch on the second last Tuesday of the last month in
> > each quarter as a regular cadence./
> >
> > The vote passed.
> > For: 6, against: 9, abstained 0, not voted: 1
>
> That should say against: 0
>
> ;-)
>
> Matt
>
> >
> > Thanks,
> > Tim.
> >
>


stable branch release cadence

2020-09-15 Thread Tim Hudson
The OMC voted to:

*Release stable branch on the second last Tuesday of the last month in each
quarter as a regular cadence.*

The vote passed.
For: 6, against: 9, abstained 0, not voted: 1

Thanks,
Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-14 Thread Tim Hudson
On Mon, Sep 14, 2020 at 9:52 PM Matt Caswell  wrote:

> > And that is the point - this is not how the existing CTX functions work
> > (ignoring the OPENSSL_CTX stuff).
>
> Do you have some concrete examples of existing functions that don't work
> this way?
>

SSL_new()
BIO_new_ssl()
BIO_new_ssl_connect()
BIO_new_bio_pair()
etc

And all the existing method using functions which are also factories.

But I get the point - if there is only one argument is it logically coming
first or last - obviously it can be seen both ways.

> IMO, we absolutely MUST have the ability to delete parameters (for
> example ENGINEs). If we can do that, then I don't see why we can't add
> parameters.
>

No - that doesn't follow. It is perfectly reasonable to have an ENGINE
typedef that remains and is passed as NULL as usual - and in fact most of
the top-level ENGINE stuff handles NULL as meaning no engine usage so that
would remain consistent. There is no absolute requirement to delete a
parameter for this or other purposes. If you want to reorder parameters I
would argue it should be a new function name and not an _ex version.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-14 Thread Tim Hudson
On Mon, Sep 14, 2020 at 9:19 PM Matt Caswell  wrote:

> I must be misunderstanding your point because I don't follow your logic
> at all.
>
> So this is the correct form according to my proposed policy:
>
> TYPE *TYPE_make_from_ctx(char *name,TYPE_CTX *ctx)
>

And that is the point - this is not how the existing CTX functions work
(ignoring the OPENSSL_CTX stuff).


> > The PR should cover that context of how we name the variations of
> > functions which add the OPENSSL_CTX reference.
>
> IMO, it does. They should be called "*_ex" as stated in chapter 6.2. I
> don't see why we need to special case OPENSSL_CTX in the policy. Its
> written in a generic way an can be applied to the OPENSSL_CTX case.
>

And that means renaming all the _with_libctx functions to _ex, which frankly
I don't think makes a lot of sense to do.
Having a name that indicates an OPENSSL_CTX was added isn't a bad thing in my
view.
Collecting a whole pile of _ex functions and having to look at each one to
figure out what the "extension" is does add additional burden for the user.
There is a reason that _with_libctx was added rather than picking _ex as
the function names - it clearly communicates the intent rather than being a
generic extension-of-API naming.


> IMO *the* most confusing thing would be to *change* an existing ordering
> (for example swap two existing parameters around). I see no problem with
> adding new ones anywhere that's appropriate.
>

Clearly we disagree on that. If you are making an extended version of a
function and you add or remove parameters, then you aren't keeping the same
existing parameter order - which was the point I was making. All you have is
that the order of whatever parameters remain in the extended function is the
same, and that's a pretty pointless distinction to make. If you aren't simply
adding additional items on the end, you make for a change-over mess, and that
is something I think we should be trying to avoid. My context there is for
the users of the existing API.
It becomes especially problematic if you have type equivalence when the
order is changed around so there are no compiler warnings if the user
hasn't picked up on reordering of parameters.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-14 Thread Tim Hudson
Any proposal needs to deal with the constructors consistently - whether
they come from an OPENSSL_CTX or they come from an existing TYPE_CTX.
That is absent in your PR.

Basically this leads to the ability to provide inconsistent argument order
in functions.

TYPE *TYPE_make_from_ctx(TYPE_CTX *ctx, char *name)
or
TYPE *TYPE_make_from_ctx(char *name,TYPE_CTX *ctx)

It simply isn't consistent to basically allow both forms of this approach.

Seeing the OPENSSL_CTX as something different to the other APIs in terms of
its usage when it is providing the context from which something is
constructed is the underlying issue here.
Your PR basically makes rules for "context" arguments which lead to
allowing both the above forms - and making the current usage of CTX values
a different logical order than the OPENSSL_CTX.

Separately, we should have consistency in the naming of the functions which
take an OPENSSL_CTX - the _with_libctx makes no sense now that we settled
on OPENSSL_CTX rather than OPENSSL_LIBCTX or OPENSSL_LIB_CTX as the naming.
We also have a pile of _ex functions that were introduced just to add an
OPENSSL_CTX - those should be consistently named.

The PR should cover that context of how we name the variations of functions
which add the OPENSSL_CTX reference.

The suggested rules for extended functions are inconsistent in stating that
the order "should be retained" and then allowing for inserting "at any point".
IMHO either the new function must preserve the existing order and just add to
the end (to ease migration), or it should conform to the naming scheme and
parameter argument order for new functions. One of those should be the driver,
rather than creating something that is neither - not easy to migrate to and
also not following the documented order. We should be trying to achieve one of
those two objectives.

Tim.


On Mon, Sep 14, 2020 at 8:30 PM Matt Caswell  wrote:

> In order to try and move this discussion forward I have made a concrete
> proposal for how we should formulate the various ideas in this thread
> into an actual style. Please see here:
>
> https://github.com/openssl/web/pull/194
>
> Since we're not yet fully in agreement some compromises will have to be
> made. I hope I've come up with something which isn't too abhorrent to
> anyone.
>
> Please take a look.
>
> Matt
>
>
> On 05/09/2020 04:48, SHANE LONTIS wrote:
> >
> >   PR #12778 reorders all the API’s of the form:
> >
> >
> >   EVP_XX_fetch(libctx, algname, propq)
> >
> >
> >   So that the algorithm name appears first..
> >
> >
> >   e.g: EVP_MD_fetch(digestname, libctx, propq);
> >
> > This now logically reads as 'search for this algorithm using these
> > parameters’.
> >
> > The libctx, propq should always appear together as a pair of parameters.
> > There are only a few places where only the libctx is needed, which means
> > that if there is no propq it is likely to cause a bug in a fetch at some
> > point.
> >
> > This keeps the API’s more consistent with other existing XXX_with_libctx
> > API’s that put the libctx, propq at the end of the parameter list..
> > The exception to this rule is that callback(s) and their arguments occur
> > after the libctx, propq..
> >
> > e.g:
> > typedef OSSL_STORE_LOADER_CTX *(*OSSL_STORE_open_with_libctx_fn)
> > (const OSSL_STORE_LOADER *loader,
> >  const char *uri, OPENSSL_CTX *libctx, const char *propq,
> >  const UI_METHOD *ui_method, void *ui_data);
> >
> > An otc_hold was put on this.. Do we need to have a formal vote?
> > This really needs to be sorted out soon so the API’s can be locked down
> > for beta.
> >
> > 
> > Also discussed in a weekly meeting and numerous PR discussions was the
> > removal of the long winded API’s ending with ‘with_libctx’
> > e.g CMS_data_create_with_libctx
> > The proposed renaming for this was to continue with the _ex notation..
> > If there is an existing _ex name then it becomes _ex2, etc.
> > There will most likely be additional parameters in the future for some
> > API’s, so this notation would be more consistent with current API’s.
> > Does this also need a vote?
> >
> > Regards,
> > Shane
> >
> >
>


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
On Sun, Sep 6, 2020 at 6:55 AM Richard Levitte  wrote:

> I'd rank the algorithm name as the most important, it really can't do
> anything of value without it.
>

It also cannot do anything without knowing which libctx to use. Look at the
implementation.
Without the libctx there is no "from-where" to specify.

This is again hitting the concept of where do things come from and what is
a default.
Once "global" disappears as such, logically everything comes from a libctx.

Your argument is basically "what" is more important than "from" or "where".
And the specific context here is where you see "from" or "where" can be
defaulted to a value so it can be deduced so it isn't (as) important in the
API.

That would basically indicate you would (applying the same pattern/rule in
a different context) change:


    int EVP_PKEY_get_int_param(EVP_PKEY *pkey, const char *key_name, int *out);

To the following (putting what you want as the most important rather than
from):

    int EVP_PKEY_get_int_param(char *key_name, EVP_PKEY *pkey, int *out);

Or pushing it right to the end after the output parameter:

    int EVP_PKEY_get_int_param(char *key_name, int *out, EVP_PKEY *pkey);

The context of where things come from is actually the most critical item in
any of the APIs we have.
Even though what you want to get from where you want to get it is in the
point of the API call you need to specify where from first as that context
sets the frame of the call.

Think of it around this way - we could have an implementation where we
remember the last key that we have used and allow you to simply indicate
you use the last key or if we didn't want the last key used to be able to
specify it via a non-NULL pointer. This isn't that unusual an API but not
something I'm suggesting we add - just trying to get the point across that
you are still thinking of global and libctx as something super special with
an exception in its handling rather than applying a general rule which is
pretty much what we use everywhere else.

And in which case where you generally don't provide a reference as there is
some default meaning for it in general and can provide a reference for that
sort of API would this make sense to you:


    int EVP_PKEY_get_int_param(char *key_name, int *out, EVP_PKEY *pkey);

If pkey is NULL then you use the last key that you referenced, if it is not
then you use the specified pkey. For the application the specific key_name
is the most important thing (using your argument that basically states the
"what" is what counts).

I would suggest that you really would still want to place the EVP_PKEY
first - even if you had a defaulting mechanism of referring to the last key
used. Conceptually you always have to have knowledge of from-where when you
are performing a function. And that *context* is what is the most important.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
On Sat, Sep 5, 2020 at 8:45 PM Nicola Tuveri  wrote:

> Or is your point that we are writing in C, all the arguments are
> positional, none is ever really optional, there is no difference between
> passing a `(void*) NULL` or a valid `(TYPE*) ptr` as the value of a `TYPE
> *` argument, so "importance" is the only remaining sorting criteria, hence
> (libctx, propq) are always the most important and should go to the
> beginning of the args list (with the exception of the `self/this` kind of
> argument that always goes first)?
>

That's a reasonable way to express things.

The actual underlying point I am making is that we should have a rule in
place that is documented and that works within the programming language we
are using and that over time doesn't turn into a mess.
We do add parameters (in new function names) and we do like to keep the
order of the old function - and ending up with a pile of things in the
"middle" is, in my view, one of the messes that we should be avoiding.

I think the importance argument is the one that helps for setting order, and
on your "required args come first" approach the importance argument also
applies, because the library context is actually a required argument - simply
one with an implied default - which again puts it not at the end of the
argument list. The library context is required - always - there is just a
default - and that parameter must be present because we are working in C.

Whatever it is that we end up deciding to do, the rule should be captured,
and it should also allow for the fact that we will evolve APIs and create _ex
versions, which will themselves evolve. A general rule should be one that
doesn't result in inconsistent treatment of argument order as we add _ex
versions.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
On Sat, Sep 5, 2020 at 6:38 PM Nicola Tuveri  wrote:

> In most (if not all) cases in our functions, both libctx and propquery are
> optional arguments, as we have global defaults for them based on the loaded
> config file.
> As such, explicitly passing non-NULL libctx and propquery, is likely to be
> an exceptional occurrence rather than the norm.
>

And that is where we have a conceptual difference, the libctx is *always* used.
If it is provided as a NULL parameter then it is a shortcut to make the
call to get the default or to get the already set one.
Conceptually it is always required for the function to operate.

And the conceptual issue is actually important here - all of these
functions require the libctx to do their work - if it is not available then
they are unable to do their work.
We just happen to have a default-if-NULL.

If C offered the ability to default a parameter if not provided (and many
languages offer that) I would expect we would be using it.
But it doesn't - we are coding in C.

So it is really where-is-what-this-function-needs-coming-from that actually
is the important thing - the source of the information the function needs
to make its choice.
It isn't which algorithm is being selected - the critical thing is from
which pool of algorithm implementations are we operating. The pool must be
specified (this is C code), but we have a default value.

And that is why I think the conceptual approach here is getting confused by
the arguments appearing to be optional - conceptually they are not - we
just have a defaulting mechanism and that isn't the same conceptually as
the arguments actually being optional.

Clearer?

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
I place the OTC hold because I don't believe we should be making parameter
reordering decisions without actually having documented what approach we
want to take so there is clear guidance.
This was the position I expressed in the last face to face OTC meeting -
that we need to write such things down - so that we avoid this precise
situation - where we have new functions that are added that introduce the
inconsistency that has been noted here that PR#12778 is attempting to
address.

However, this is a general issue and not a specific one to OPENSSL_CTX and
it should be discussed in the broader context and not just be a last minute
(before beta1) API argument reordering.
That does not provide anyone with sufficient time to consider whether or
not the renaming makes sense in the broader context.
I also think that things like API argument reordering should have been
discussed on openssl-project so that the broader OpenSSL community has an
opportunity to express their views.

Below is a quick write up on APIs in an attempt to make it easier to hold
an email discussion about the alternatives and implications of the various
alternatives over time.
I've tried to outline the different options.

In general, the OpenSSL API approach is of the following form:



    rettype  TYPE_get_something(TYPE *, [args])
    rettype  TYPE_do_something(TYPE *, [args])
    TYPE    *TYPE_new([args])

This isn't universally true, but it is the case for the majority of OpenSSL
functions.

In general, the majority of the APIs place the "important" parameters
first, and the ancillary information afterwards.

In general, for output parameters we tend to have those as the return value
of the function or an output parameter that
tends to be at the end of the argument list. This is less consistent in the
OpenSSL APIs.

We also have functions which operate on "global" information where the
information used or updated or changed
is not explicitly provided as an API parameter - e.g. all the byname
functions.

Adding the OPENSSL_CTX is basically providing where to get items from that
used to be "global".
When performing a lookup, the query string is a parameter to modify the
lookup being performed.

OPENSSL_CTX is a little different, as we are adding in effectively an
explicit parameter where there was an implicit (global)
usage in place. But the concept of adding parameters to functions over time
is one that we should have a policy for IMHO.

For many APIs we basically need the ability to add the OPENSSL_CTX that is
used to the constructor so that
it can be used for handling what used to be "global" information where such
things need the ability to
work with other-than-the-default OPENSSL_CTX (i.e. not the previous single
"global" usage).

That usage works without a query string - as it isn't a lookup as such - so
there is no modifier present.
For that form of API usage we have three choices as to where we place
things:

1) place the context first

    TYPE *TYPE_new(OPENSSL_CTX *, [args])

2) place the context last

    TYPE *TYPE_new([args], OPENSSL_CTX *)

3) place the context neither first nor last

    TYPE *TYPE_new([some-args], OPENSSL_CTX *, [more-args])

Option 3 really isn't a sensible choice to make IMHO.

When we are performing effectively a lookup that needs a query string, we
have a range of options.
If we basically state that for a given type, you must use the OPENSSL_CTX
you have been provided with on construction,
(not an unreasonable position to take), then you don't need to modify the
existing APIs.

If we want to allow for a different OPENSSL_CTX for a specific existing
function, then we have to add items.
Again we have a range of choices:

A) place the new arguments first


    rettype  TYPE_get_something(OPENSSL_CTX *, TYPE *, [args])
    rettype  TYPE_do_something(OPENSSL_CTX *, TYPE *, [args])

B) place the new arguments first after the TYPE

    rettype  TYPE_get_something(TYPE *, OPENSSL_CTX *, [args])
    rettype  TYPE_do_something(TYPE *, OPENSSL_CTX *, [args])

C) place the new arguments last


    rettype  TYPE_get_something(TYPE *, [args], OPENSSL_CTX *)
    rettype  TYPE_do_something(TYPE *, [args], OPENSSL_CTX *)

D) place the new arguments neither first nor last



    rettype  TYPE_get_something(TYPE *, [some-args], OPENSSL_CTX *, [more-args])
    rettype  TYPE_do_something(TYPE *, [some-args], OPENSSL_CTX *, [more-args])
Option D really isn't a sensible choice to make IMHO.

My view is that the importance of arguments is what sets their order - and
that is why for the TYPE functions the TYPE pointer
comes first. We could have just as easily specified it as last or some
other order, but we didn't.

Now when we need to add a different location from which to retrieve other
information we need to determine where this gets added.
I'd argue that it is at constructor time that we should be adding any
OPENSSL_CTX or query parameter for any existing TYPE usage
in OpenSSL. If we feel the need to cross OPENSSL_CTX's (logically that is
what is being done) 

Re: API renaming

2020-07-23 Thread Tim Hudson
Placing everything under EVP is reasonable in my view. It is just a prefix
and it really has no meaning these days as it became nothing more than a
common prefix to use.

I don't see any significant benefit in renaming at this point - even for
RAND.

Tim.

On Fri, 24 Jul 2020, 1:56 am Matt Caswell,  wrote:

>
>
> On 23/07/2020 16:52, Richard Levitte wrote:
> > On Thu, 23 Jul 2020 12:18:10 +0200,
> > Dr Paul Dale wrote:
> >> There has been a suggestion to rename EVP_RAND to OSSL_RAND.  This
> seems reasonable.  Would it
> >> also make sense to rename the other new APIs similarly.
> >> More specifically, EVP_MAC and EVP_KDF to OSSL_MAC and OSSL_KDF
> respectively?
> >
> > This is a good question...
> >
> > Historically speaking, even though EVP_MAC and EVP_KDF are indeed new
> > APIs, they have a previous history of EVP APIs, through EVP_PKEY.  The
> > impact of relocating them outside of the EVP "family" may be small,
> > but still, history gives me pause.
> >
> > RAND doesn't carry the same sort of history, which makes it much
> > easier for me to think "just do it and get it over with"...
>
> I have the same pause - so  I'm thinking just RAND for now.
>
> Matt
>
>


Vote results for PR#12089

2020-07-03 Thread Tim Hudson
topic: Change some words by accepting PR#12089

4 against, 3 for, no abstentions

The vote failed, the PR will now be closed.

Thanks,
Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
I suggest everyone takes a read through
https://en.wikipedia.org/wiki/Long-term_support as to what LTS is actually
meant to be focused on.

What you (Ben and Matt) are both describing is not LTS but STS ... these
are different concepts.

For LTS the focus is *stability *and *reduced risk of disruption *which
alters the judgement on what is appropriate to put into a release.
It then becomes a test of "is this bug fix worth the risk" with the general
focus on lowest possible risk which stops this sort of thing happening
unless a critical feature is broken.

All of the "edge case" examples presented all involve substantial changes
to implementations and carry an inherent risk of breaking something else -
and that is the issue.
It would be different if we had a full regression test suite across all the
platforms and variations on platforms that our users are operating on - but
we don't.

We don't compare performance between releases or for any updates in our
test suite so it isn't part of our scope for release readiness - if this is
such a critical thing that must be fixed then it is something that we
should be measuring and checking as a release condition - but we don't -
because it actually isn't that critical.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
On Sat, 20 Jun 2020, 8:14 am Benjamin Kaduk,  wrote:

> On Sat, Jun 20, 2020 at 08:11:16AM +1000, Tim Hudson wrote:
> > The general concept is to only fix serious bugs in stable releases.
> > Increasing performance is not fixing a bug - it is a feature.
>
> Is remediating a significant performance regression fixing a bug?
>

It would be a bug - but not a serious bug. So no.
It works. It was released.
Wholesale replacement of implementations of algorithms should not be
happening in LTS releases.

We make no performance guarantees or statements in our releases (in
general).

And performance isn't an issue for the vast majority of our users.

Those for whom performance is critical also tend to be building their own
releases in my experience.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
The general concept is to only fix serious bugs in stable releases.
Increasing performance is not fixing a bug - it is a feature.

Swapping out one implementation of algorithm for another is a significant
change and isn't something that should go into an LTS in my view.

It would be less of an issue for users if our release cadence was more
frequent - but the principle applies - stability is what an LTS is aimed
at. We should be fixing significant bugs only.

Arguing that slower performance is a bug is specious - it works - and
performance is not something that we document and guarantee. You don't find
our release notes stating performance=X for a release - because such a
statement makes little sense for the vast majority of users.

Tim.


Re: Naming conventions

2020-06-18 Thread Tim Hudson
We have a convention that we mostly follow. Adding new stuff with
variations in the convention offers no benefit without also adjusting the
rest of the API. Inconsistencies really do not assist any developer.

Where APIs have been added that don't follow the conventions they should be
changed.

It really is that simple - each developer may have a different set of
personal preferences and if we simply allow any two people to pick their
own API pattern effectively at whim we end up with a real mess over time.

This example is a clear cut case where we should fix the unnecessary
variation in the pattern. It serves no benefit whatsoever to have such a
mix of API patterns.

We do have some variations that we should adjust - and for APIs that have
been in official releases dropping in backwards compatibility macros is
appropriate.

The argument that we aren't completely consistent is specious - it is
saying because we have a few mistakes that have slipped through the cracks
we have open season on API patterns.

It also would not hurt to have an automated check of API deviations on
commits to catch such things in future.

Tim.


Re: Reducing the security bits for MD5 and SHA1 in TLS

2020-06-17 Thread Tim Hudson
Given that this change impacts interoperability in a major way it should be
a policy vote of the OMC IMHO.

Tim.


On Thu, 18 Jun 2020, 5:57 am Kurt Roeckx,  wrote:

> On Wed, May 27, 2020 at 12:14:13PM +0100, Matt Caswell wrote:
> > PR 10787 proposed to reduce the number of security bits for MD5 and SHA1
> > in TLS (master branch only, i.e. OpenSSL 3.0):
> >
> > https://github.com/openssl/openssl/pull/10787
> >
> > This would have the impact of meaning that TLS < 1.2 would not be
> > available in the default security level of 1. You would have to set the
> > security level to 0.
> >
> > In my mind this feels like the right thing to do. The security bit
> > calculations should reflect reality, and if that means that TLS < 1.2 no
> > longer meets the policy for security level 1, then that is just the
> > security level doing its job. However this *is* a significant breaking
> > change and worthy of discussion. Since OpenSSL 3.0 is a major release it
> > seems that now is the right time to make such changes.
> >
> > IMO it seems appropriate to have an OMC vote on this topic (or should it
> > be OTC?). Possible wording:
>
> So should that be an OMC or OTC vote, or does it not need a vote?
>
>
> Kurt
>
>


Re: Cherry-pick proposal

2020-04-29 Thread Tim Hudson
Any change to the review gate check we have in place now that lowers it
will certainly not get my support.

If anything, that check before code gets approved should be raised, not
lowered.

Tim.

On Thu, 30 Apr 2020, 1:24 am Salz, Rich,  wrote:

> I suspect that the primary motivation for this proposal is that PR’s
> against master often become stale because nobody on the project looks at
> them. And then submitters are told to rebase, fix conflicts, etc. It gets
> disheartening. If that is the motivation, then perhaps the project should
> address that root cause.  Here are two ideas:
>
>
>
>1. Mark’s scanning tool complains to the OTC if it has been “X” weeks
>without OTC action.  I would pick X as 2.
>2. Change the submission rules to be one non-author OTC member review
>and allow OTC/OMC to put a hold for discussion during the 24-hour freeze
>period. That discussion must be concluded, perhaps by a vote, within “Y”
>weeks (four?).
>
>
>
>
>
>
>


Re: 1.1.1f

2020-03-26 Thread Tim Hudson
We don't guarantee constant time.

Tim.

On Fri, 27 Mar 2020, 5:41 am Bernd Edlinger, 
wrote:

> So I disagree, it is a bug when it is not constant time.
>
>
> On 3/26/20 8:26 PM, Tim Hudson wrote:
> > +1 for a release - and soon - and without bundling any more changes. The
> > circumstances justify getting this fix out. But I also think we need to
> > keep improvements that aren't bug fixes out of stable branches.
> >
> > Tim.
> >
> > On Fri, 27 Mar 2020, 3:12 am Matt Caswell,  wrote:
> >
> >> On 26/03/2020 15:14, Short, Todd wrote:
> >>> This type of API-breaking change should be reserved for something like
> >>> 3.0, not a patch release.
> >>>
> >>> Despite it being "incorrect", it is expected behavior.
> >>>
> >>
> >> Right - but the question now is not whether we should revert it (it has
> >> been reverted) - but whether this should trigger a 1.1.1f release soon?
> >>
> >> Matt
> >>
> >>> --
> >>> -Todd Short
> >>> // tsh...@akamai.com <mailto:tsh...@akamai.com>
> >>> // “One if by land, two if by sea, three if by the Internet."
> >>>
> >>>> On Mar 26, 2020, at 11:03 AM, Dr. Matthias St. Pierre
> >>>> mailto:matthias.st.pie...@ncp-e.com>>
> >>>> wrote:
> >>>>
> >>>> I agree, go ahead.
> >>>>
> >>>> Please also consider reverting the change for the 3.0 alpha release as
> >>>> well, see Daniel Stenbergs comment
> >>>>
> https://github.com/openssl/openssl/issues/11378#issuecomment-603730581
> >>>>
> >>>> Matthias
> >>>>
> >>>>
> >>>> *From**:* openssl-project  >>>> <mailto:openssl-project-boun...@openssl.org>> *On Behalf Of *Dmitry
> >>>> Belyavsky
> >>>> *Sent:* Thursday, March 26, 2020 3:48 PM
> >>>> *To:* Matt Caswell mailto:m...@openssl.org>>
> >>>> *Cc:* openssl-project@openssl.org <mailto:openssl-project@openssl.org
> >
> >>>> *Subject:* Re: 1.1.1f
> >>>>
> >>>>
> >>>> On Thu, Mar 26, 2020 at 5:14 PM Matt Caswell  >>>> <mailto:m...@openssl.org>> wrote:
> >>>>
> >>>> The EOF issue (https://github.com/openssl/openssl/issues/11378
> >>>> )
> >>>> has
> >>>> resulted in us reverting the original EOF change in the 1.1.1
> branch
> >>>> (https://github.com/openssl/openssl/pull/11400
> >>>> ).
> >>>>
> >>>> Given that this seems to have broken quite a bit of stuff, I
> propose
> >>>> that we do a 1.1.1f soon (possibly next Tuesday - 31st March).
> >>>>
> >>>> Thoughts?
> >>>>
> >>>>
> >>>> I strongly support this idea.
> >>>>
> >>>> --
> >>>> SY, Dmitry Belyavsky
> >>>
> >>
> >
>


Re: 1.1.1f

2020-03-26 Thread Tim Hudson
+1 for a release - and soon - and without bundling any more changes. The
circumstances justify getting this fix out. But I also think we need to
keep improvements that aren't bug fixes out of stable branches.

Tim.

On Fri, 27 Mar 2020, 3:12 am Matt Caswell,  wrote:

> On 26/03/2020 15:14, Short, Todd wrote:
> > This type of API-braking change should be reserved for something like
> > 3.0, not a patch release.
> >
> > Despite it being "incorrect", it is expected behavior.
> >
>
> Right - but the question now is not whether we should revert it (it has
> been reverted) - but whether this should trigger a 1.1.1f release soon?
>
> Matt
>
> > --
> > -Todd Short
> > // tsh...@akamai.com 
> > // “One if by land, two if by sea, three if by the Internet."
> >
> >> On Mar 26, 2020, at 11:03 AM, Dr. Matthias St. Pierre
> >> mailto:matthias.st.pie...@ncp-e.com>>
> >> wrote:
> >>
> >> I agree, go ahead.
> >>
> >> Please also consider reverting the change for the 3.0 alpha release as
> >> well, see Daniel Stenbergs comment
> >> https://github.com/openssl/openssl/issues/11378#issuecomment-603730581
> >>
> >> Matthias
> >>
> >>
> >> *From**:* openssl-project  >> > *On Behalf Of *Dmitry
> >> Belyavsky
> >> *Sent:* Thursday, March 26, 2020 3:48 PM
> >> *To:* Matt Caswell mailto:m...@openssl.org>>
> >> *Cc:* openssl-project@openssl.org 
> >> *Subject:* Re: 1.1.1f
> >>
> >>
> >> On Thu, Mar 26, 2020 at 5:14 PM Matt Caswell  >> > wrote:
> >>
> >> The EOF issue (https://github.com/openssl/openssl/issues/11378
> >> )
> >> has
> >> resulted in us reverting the original EOF change in the 1.1.1 branch
> >> (https://github.com/openssl/openssl/pull/11400
> >> ).
> >>
> >> Given that this seems to have broken quite a bit of stuff, I propose
> >> that we do a 1.1.1f soon (possibly next Tuesday - 31st March).
> >>
> >> Thoughts?
> >>
> >>
> >> I strongly support this idea.
> >>
> >> --
> >> SY, Dmitry Belyavsky
> >
>


Re: Check NULL pointers or not...

2019-11-29 Thread Tim Hudson
On Fri, Nov 29, 2019 at 7:08 PM Tomas Mraz  wrote:

> The "always check for NULL pointers" approach does not avoid
> catastrophic errors in applications.


I didn't say it avoided all errors (nor did anyone else on the thread that
I've read) - but it does avoid a whole class of errors.

And for that particular context there are many things you can do to
mitigate it - and incorrect handling of EVP_CipherUpdate itself is very
common - where error returns are completely ignored.
We could reasonably define that it should wipe out the output buffer on any
error condition - that would make the function safer in a whole pile of
contexts.

However that is talking about a different issue IMHO.

Tim.


Re: Check NULL pointers or not...

2019-11-29 Thread Tim Hudson
The way I view the issue is to look at what happens when things go wrong -
what is the impact - and evaluate the difference in behaviour between the
approaches.
You have to start from the premise that (in general) software is not tested
for all possible usage models - i.e. test coverage isn't at 100% - so there
are always code paths and circumstances that can occur that simply haven't
been hit during testing.

That means that having a special assert enabled build doesn't solve the
problem - in that a gap in testing always remains - so the "middle ground"
doesn't alter the fundamentals as such and does also lead to basically
testing something other than what is running in a default (non-debug)
build. Testing precisely what you will actually run is a fairly normal
principle to work from - but many overlook those sorts of differences.

In a production environment, it is almost never appropriate to simply crash
in an uncontrolled manner (i.e. dereferencing a NULL pointer).
There are many contexts that generate real issues where a more graceful
failure mode (i.e. anything other than crashing) results in a much better
user outcome.
Denial-of-service attacks often rely on this sort of issue - basically
recognising that testing doesn't cover everything and finding unusual
circumstances that create some form of load on a system that impacts other
things.
You are effectively adding in a whole pile of abort() references in the
code by not checking - that is the actual effect - and abort isn't
something that should ever be called in production code.

The other rather practical thing is that when you do check for incoming
NULL pointers, you end up with a lot less reports of issues in a library -
as you aren't sitting in the call chain of a crash - and that in itself
saves a lot of time when dealing with users. Many people will report issues
like this - but if they get an error return rather than a crash they do
(generally) keep looking. And developer time for OpenSSL (like many
projects) is the most scarce resource that we have. Anything that reduces
the impact on looking at crashes enables people to perform more useful work
rather than debugging a coding error of an OpenSSL user.

Another simple argument that helps to make a choice is that whatever we do
we need to be consistent in our handling of things - and at the moment many
functions check and many functions do not.
And if we make things consistent we certainly don't really have the option
of undoing any of the null checks because that will break code - it is
effectively part of the API contract - that it is safe to call certain
functions with NULL arguments. Adding extra checks from an API contract
point of view is harmless, removing them isn't an option. And if we want
consistency then we pretty much have only one choice - to check everywhere.

There are *very few places* where such checks will have a substantial
performance impact in real-world contexts.

Arguments about the C runtime library not checking simply aren't relevant.
The C runtime library doesn't set the API contracts and usage model that we
use. And there are C runtime functions that do check.

What we should be looking at in my view is the impact when things go wrong
- a position that basically says the caller is never allowed to make
mistakes, even for simple things like this which are easy to check, isn't
appropriate. Checking also reduces the burden on the project (and it
increases the users' perception of the quality, because they are always
looking at their own bugs rather than thinking that OpenSSL has a major
issue with an uncontrolled crash - even if they don't report it publicly).

Tim.


Re: Commit access to openssl/tools and openssl/web

2019-10-04 Thread Tim Hudson
FYI - I have reviewed and added my approval. No need to back out anything.

Tim.

On Fri, Oct 4, 2019 at 5:50 PM Dr Paul Dale  wrote:

> I believed that it required two OMC approvals but was pointed to an
> earlier instance where only one was present and I flew with it without
> checking further.
> My apologies for merging prematurely and I’ll back out the changes if any
> OMC member wants.
>
> As for discussing this at the upcoming face to face, I agree
> wholeheartedly.
>
>
> Pauli
> --
> Dr Paul Dale | Distinguished Architect | Cryptographic Foundations
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
>
> On 4 Oct 2019, at 5:39 pm, Matt Caswell  wrote:
>
>
>
> On 04/10/2019 08:15, Dr. Matthias St. Pierre wrote:
>
> Dear OMC,
>
> while the process of merging and committing to openssl/openssl has been
> formalized,
> no similar (official) rules for pull requests by non-OMC-member seem to
> apply to the
> other two repositories openssl/tools and openssl/web. Probably it's
> because hardly
> anybody outside the OMC else ever raises them? Or is it the other way
> around?
>
>
> There are clear official rules. This vote was passed by the OMC over a
> year ago:
>
> topic: Openssl-web and tools repositories shall be under the same review
>   policy as per the openssl repository where the reviewers are OMC
> members
>
> So it needs two approvals from an OMC member. It looks like recent commits
> haven't obeyed those rules.
>
>
> I would like to raise the question whether it wouldn't be beneficial for
> all of us,
> if we would apply the same rules (commit access for all committers, plus
> the well
> known approval rules) to all of our repos. After all, the openssl/openssl
> repository
> is the most valuable of the three and I see no reason why the others would
> need
> more protection. In the case of the openssl/web repository which targets
> the
> official website, you might want to consider a 2OMC approval rule, but
> even there
> I don't see why the usual OMC veto rule wouldn't be sufficient.
>
>
> There is a lot of merit in that. Certainly for tools. I've added it to the
> OMC
> agenda for Nuremberg.
>
> Matt
>
>
>


Re: Reorganization of the header files (GitHub #9333)

2019-09-27 Thread Tim Hudson
Merge early is pretty much my default position ... and that applies to this
context in my view.

Tim.

On Sat, 28 Sep. 2019, 7:44 am Dr. Matthias St. Pierre, <
matthias.st.pie...@ncp-e.com> wrote:

> Hi,
>
> some of you might have loosely followed pull request #9333 (see [1]),
> where I am reorganizing the header files in a more consistent manner.
>
> Since I intend to do the reorganization both to master and the 1.1.1
> stable branch (in order to facilitate conflict-free cherry-picking to
> the 1.1.1 LTS branch), I decided to automate the changes. This decision
> turned out to be very convenient, because the heavy rearchitecturing on
> master frequently conflicted with my changes. Instead of resolving
> conflicts, I could just rerun the script after rebasing.
>
> When this pull request finally gets merged, it might happen that you
> in turn encounter more merge conflicts as usual, in particular if you
> are one of the 'replumbers' involved in the FIPS rearchitecturing.
> It might also happen if you are working on some long running pull
> request like the CMP integration.
>
> To check the impact of my changes on your work, I did some rebasing
> experiments, and as a result I wrote down some guidance about possible
> rebasing strategies in [2].
>
> The reason I write this mail is because I'd like to hear your opinion
> about how to proceed with this pull request. There are two possibilities:
>
> Variant I):
> Merge early in to get the reorganization integrated as soon
> as possible and not wait until shortly before the next release.
>
> Variant II):
> Wait a little bit more until the heavy rearchitecturing
> has settled down a little bit.
>
> What is your opinion? What are your preferences?
>
>
> Regards,
>
> Matthias
>
>
> [1] https://github.com/openssl/openssl/pull/9333
> [2] https://github.com/openssl/openssl/pull/9333#issuecomment-536105158
>


Re: Do we really want to have the legacy provider as opt-in only?

2019-07-17 Thread Tim Hudson
My viewpoint (which has been stated elsewhere) is that OpenSSL-3.0 is
about internal restructuring to allow for the various things noted in the
design documents.
It is not about changing the feature set (in a feature reduction sense).

In future releases we will make the mixture of providers available more
capable and may adjust what algorithms are available and may even do things
like place national ciphers in separate providers.
But OpenSSL-3.0 is *not* the time to do any of those things.

We should be focused on the restructuring and getting the FIPS140 handling
in place and not making policy decisions about changing algorithm
availability or other such things.
The objective is that the vast majority of applications that use
OpenSSL-1.1 can use OpenSSL-3.0 with a simple recompilation without any
other code changes.

That I believe has been our consistent out-bound message in discussions as
a group and our overall driver.

In the future, things may become more dynamic and we may change the
algorithm sets and may use more configuration based approaches and may even
place each algorithm in a separate provider and allow for a whole range of
dynamic handling.
But those are for the future. OpenSSL-3.0 is basically an internally
restructured version of OpenSSL-1.1 with a FIPS140 solution.

Tim.


Re: punycode licensing

2019-07-10 Thread Tim Hudson
On Thu, Jul 11, 2019 at 12:37 AM Dmitry Belyavsky  wrote:

> Dear Tim,
>
> Formally I am a contributor with a signed CLA.
> I took a code definitely permitting any usage without any feedback,
> slightly modified it (at least by openssl-format-source and splitting
> between header and source), and submitted it as my feedback to OpenSSL.
>
> I still think that it will be a good idea if Adam signs the CLA, but if he
> declines, we still have a correct interpretation.
>

Your ICLA contains the instructions you have to follow (reproduced
here to save you time):

7. Should You wish to submit work that is not Your original creation, You
may submit it to the Foundation separately from any Contribution,
identifying the complete details of its source and of any license or other
restriction (including, but not limited to, related patents, trademarks,
and license agreements) of which you are personally aware, and
conspicuously marking the work as "Submitted on behalf of a third-party:
[named here]".


Your current PR at https://github.com/openssl/openssl/pull/9199  does not
actually do this - basically you have to have punycode.c, punycode.h in a
separate submission not intermixed with anything else.

The reason for not intermixing the code should be pretty clear - as we need
to know which parts belong to someone else and aren't covered by your ICLA
and which parts are - with no possibility of confusion.

You would also need to include the *license* that those files are under
which you have not done so - which according to the RFC is:

Regarding this entire document or any portion of it (including
the pseudocode and C code), the author makes no guarantees and
is not responsible for any damage resulting from its use.  The
author grants irrevocable permission to anyone to use, modify,
and distribute it in any way that does not diminish the rights
of anyone else to use, modify, and distribute it, provided that
redistributed derivative works do not contain misleading author or
version information.  Derivative works need not be licensed under
similar terms.

Separately, Rich Salz indicated he had email from the author with
respect to being willing to license under the Apache License 2.0 which
you would need to get directly from the author (or Rich would need to
be the submitter). Only the author (actually copyright owner) can
change the license terms of code they create. This isn't about the
license.

You really should reach out to the author to ask if he is willing to
sign an ICLA - that is the normal steps involved.
There is nothing particularly onerous in the ICLAs - they are
basically there to provide certainty and a legal background for the
project to be able to provide the code that it does.

You should also note that the license noted in the RFC misses many of
the provisions within the ICLA and within the Apache License 2.0
itself and is incompatible with the Apache License 2.0 because it
contains restrictions and conditions beyond those stated in this
license.

After all the work that the project did to be able to move to its
current license (and a lot of that work was Rich Salz's efforts) it is
important that we maintain the foundation of the clear license terms
for the entire code base.

Tim.


Re: punycode licensing

2019-07-10 Thread Tim Hudson
Previous assertions that if the license was compatible we don't need a
CLA in order to accept a contribution were incorrect.
You are now questioning the entire purpose of contributor agreements and
effectively arguing they are superfluous and that our policy should be
different.

You are (of course) entitled to your opinion on the topic - however the
project view and policy on this is both clear and consistent even if it is
different from what you would like to see.

If someone else wants to create a derivative of the software and combine in
packages under other licenses (Apache License or otherwise) without having
CLAs in place then that is their choice to do so as long as they adhere to
the license agreement.
Again, all of this is use under the license. What our policies cover is for
contributions that the project itself will distribute - and entirely
separate context for what others can do with the resulting package.

The CLAs are not the same as code being contributed under an Apache License
2.0.
There are many sound reasons for CLAs existing, and discussion of those
reasons isn't an appropriate topic IMHO for openssl-project.

Tim.



On Wed, Jul 10, 2019 at 8:08 PM Salz, Rich  wrote:

> Thank you for the reply.
>
>
>
> > The license under which the OpenSSL software is provided does not
> require "permission" to be sought for use of the software.
>
> See https://www.openssl.org/source/apache-license-2.0.txt
> 
>
>
>
> Use, as defined by the license, doesn’t just mean end-users, and it is not
> limited to compiling, linking, and running executables.  A recipient can
> make derivative items, redistribute, and so on. All of those things are
> what OpenSSL would do if it “took in” code into the source base.
>
>
>
> So why does the project require permission from other Apache-licensed
> licensed software? In other words, why will the project not accept and use
> the rights, covered by copyright and license, that it grants to others?
>
>
>


Re: punycode licensing

2019-07-09 Thread Tim Hudson
On Wed, Jul 10, 2019 at 1:58 AM Salz, Rich  wrote:
> Thank you for the update. This brings to mind a few additional questions:
>
> 1. Does other code which is copyright/licensed under the Apache 2 license
also require CLAs?
See points 1-3 of previous email. CLAs are required for anything
non-trivial.

> 2. Does other code which is in the public domain also require CLAs?
See points 1-3 of previous email. CLAs are required for anything
non-trivial.

> 3. Does OpenSSL expect that anyone using OpenSSL and bundling it with
Apache 2 software must first ask the project for permission?

That is an entirely separate question and all the project states is the
license under which we offer the software.
That question can be more broadly worded as "Does OpenSSL expect that
anyone using OpenSSL must first ask the project for permission?"

The license under which the OpenSSL software is provided does not require
"permission" to be sought for use of the software.
See https://www.openssl.org/source/apache-license-2.0.txt

So in short the answer is "no" because the software is under a license that
doesn't require permission to be sought for its use.

> 4. Assuming #1 is yes and #3 is no, can you explain why there is a
difference?

Because 1 and 2 are about *contributing* code that the project then offers
under a license, whereas 3 is about *using* the produced code under its
license.
They are completely different contexts (one in-bound, one out-bound).
And they are completely different entities (1&2 are about requirements
the *project* places on contributions, and 3 is about requirements the license places on
*users* of the software).

Tim.


Re: punycode licensing

2019-07-09 Thread Tim Hudson
From OMC internal discussions:

For all contributions that are made to OpenSSL there are three
circumstances that can exist:
1) the contribution is considered trivial - no CLA required
2) the contribution is non-trivial and the copyright is owned by the
submitter (or by the company they work for) - ICLA (and CCLA) required
3) the contribution is non-trivial and the copyright is owned by someone
other than the submitter and the copyright owner acknowledges that the
submission is on their behalf - ICLA (and CCLA) from the copyright owner
required.

Our CLA policy and the CLA documents themselves operate to cover
contributions as described above and the CLA policy itself notes no
exceptions for contributions outside of these circumstances.
The only mechanism for a contribution to be accepted that does not meet the
CLA policy is if the OMC votes to explicitly accept a contribution without
a CLA as a special exception to the CLA policy.

Notes:
a) the OMC has not to this date voted to approve inclusion of any
contribution without a CLA in place since the CLA policy was established in
June 2016;
b) the OMC does not currently have a policy to allow exceptions to the CLA
policy based on the license terms of a contribution

Thanks,
Tim.


On Fri, Jun 21, 2019 at 4:24 PM Tim Hudson  wrote:

> Unfortunately, the issue isn't the compatibility of the license - they do
> indeed look relatively compatible to me - and the discussion on this thread
> has so far been about that.
> However the contributor license agreement requires that the copyright
> owner grants such permission - it is the fundamental basis of contributor
> agreements.
>
> Both the CCLA and ICLA make that exceedingly clear the contributor
> (individual or company) is "*the copyright owner or legal entity
> authorized by the copyright owner*" and the grants in the CLA are not
> grants that the notice in the RFC provide.
>
> In this case, the person who raised the PR is unable to meet those
> requirements (please do correct me if I am wrong on that) and as such their
> contribution is unable to be accepted.
>
> Tim.
>
>
> On Fri, Jun 21, 2019 at 12:12 PM Dr Paul Dale 
> wrote:
>
>> It seems okay from here too.
>>
>> Pauli
>> --
>> Dr Paul Dale | Cryptographer | Network Security & Encryption
>> Phone +61 7 3031 7217
>> Oracle Australia
>>
>>
>>
>> > On 21 Jun 2019, at 11:59 am, Benjamin Kaduk  wrote:
>> >
>> > On Thu, Jun 20, 2019 at 12:27:38PM -0400, Viktor Dukhovni wrote:
>> >> On Thu, Jun 20, 2019 at 03:39:10PM +0100, Matt Caswell wrote:
>> >>
>> >>> PR 9199 incorporates the C punycode implementation from RFC3492:
>> >>>
>> >>> https://github.com/openssl/openssl/pull/9199
>> >>>
>> >>
>> >> I'd be comfortable with relicensing under Apache, while clearly
>> >> indicating the provenance of the code, and indicating that the
>> >> file is also available under the original terms.
>> >
>> > Me, too.
>> >
>> > -Ben
>>
>>


Re: Removing function names from errors (PR 9058)

2019-06-13 Thread Tim Hudson
On Thu, Jun 13, 2019 at 6:40 PM Salz, Rich  wrote:

> The proper way to handle this, in my experience, is *DO NOT REUSE ERROR
> CODES.*


No. This is a path to a rather unacceptable outcome.
Taking your example and running forward with it, having a separate
out-of-memory error code for every possible context would lead to *589
error codes* just for handling out-of-memory in master at a single level.
And then on top of that you would need to cascade those errors up the call
chain potentially.

Each error condition that might require different treatment should have a
common code.
The concept of out-of-memory (as a simple example) is not context specific
(in terms of handling).
The concept of unable-to-open-file is also not context specific.

What the current error system does is to provide a context for the user
when error conditions occur to be able to have some idea as to what
happened.
It is very much like a Java stack trace - and useful for a variety of
reasons.

The error system had two purposes:
1) allow for handling of an error by the calling function (or any function
up the call stack) in a different manner
2) provide the end user with context when things fail (as applications are
generally not very good at doing this)

Both of these are equally important - but aimed at different contexts
(developers, end-users).
Neither context should be ignored when considering changes IMHO.

Tim.


Re: proposed change to committers policy

2019-05-24 Thread Tim Hudson
On Fri, May 24, 2019 at 7:34 PM Matt Caswell  wrote:

> On 24/05/2019 10:28, SHANE LONTIS wrote:
> > It doesn’t stop us both reviewing a PR. That doesn’t mean we both need
> to approve.
>
> Right...but in Matthias's version if you raise a PR, and then Pauli
> approves it,
> then you only then need to get a second committer approval. Otherwise you
> would
> need to get an OMC approval.
>

It works that way in the original wording too - which is more simply stated
IMHO.
You choose which approvals you combine. If there are three - select which
ever two make the set you want to "combine" as such.

I also didn't see a need for a separate OMC approval if a committer
submitted something and a same-organisation OMC member approved it - it
just needs one more -  so the combination of approvals can be made from
not-the-same-organisation.

Tim.


Re: No two reviewers from same company

2019-05-23 Thread Tim Hudson
We have discussed this at numerous OMC meetings in terms of how to manage
potential *perceived* conflicts of interest that might arise if people
outside of the fellows come from the same company and hence can effectively
turn the OMC review control mechanism into a single control rather than a
dual control.

We discussed tooling changes to make checking this possible given that in
each instance we have had the individuals involved make a commitment to
avoid that situation (through their own actions).
Occasionally that didn't happen and the person "corrected" it when pointed
out.

We haven't formally voted to make such a change - however it is something
that I think we should have in place and I do support.
Making a formal policy change of course will go through our usual decision
making process.

What I was expecting tooling-wise is that the scripts would detect this
situation and advise - at the very least warn - and potentially block
things.

The OpenSSL fellows are in a completely different context - the company
they work for is directed by the OMC - so there isn't a separate external
third party source of influence so there is no reasonable mechanism to
*perceive* a potential conflict of interest.

Note - this is all about *perceptions* of a *potential* situation - not
about something we are actually concerned about for the individuals
involved.
However it is prudent to address even the perception of a path for
potential conflicts of interest in my view.

Tim.




On Fri, May 24, 2019 at 8:16 AM Paul Dale  wrote:

> There hasn't been a vote about this, however both Shane and I have
> committed to not approve each other's PRs.
>
> I also asked Richard if this could be mechanically enforced, which I
> expect will happen eventually.
>
>
> Pauli
> --
> Oracle
> Dr Paul Dale | Cryptographer | Network Security & Encryption
> Phone +61 7 3031 7217
> Oracle Australia
>
>
> -Original Message-
> From: Salz, Rich [mailto:rs...@akamai.com]
> Sent: Friday, 24 May 2019 1:01 AM
> To: openssl-project@openssl.org
> Subject: Re: No two reviewers from same company
>
> > I understand that OpenSSL is changing things so that, by mechanism
> (and maybe by
> > policy although it’s not published yet), two members of the same
> company cannot
> > approve the same PR.  That’s great.  (I never approved Akamai
> requests unless it
> > was trivial back when I was on the OMC.)
>
> No such decision has been made as far as I know although it has been
> discussed
> at various times.
>
> In private email, and
> https://github.com/openssl/openssl/pull/8886#issuecomment-494624313 the
> implication is that this was a policy.
>
> > Should this policy be extended to OpenSSL’s fellows?
>
> IMO, no.
>
> Why not?  I understand the build process is always handled by Matt and Richard
> (despite many attempts in the past to expand this), but I think if Oracle
> or Akamai can't "force a change" then it seems to me that the OMC shouldn't
> either.
>
>
>


Re: Thoughts on OSSL_ALGORITHM

2019-03-22 Thread Tim Hudson
"handle" is the wrong name for this - if you want to have private const
data then do that rather than something which might be abused for instance
specific information. It could just be an int even or a short. It doesn't
have to be a pointer.

That would reduce the likelihood of it being used to hold non-const data.

Tim.

On Sat, 23 Mar. 2019, 9:11 am Dr Paul Dale wrote:

> I've no issue having a provider data field there.  It will be useful for
> more than just this (S390 AES e.g. holds data differently to other
> implementations).
>
> I also don’t think forcing separate functions is a big problem — most
> providers will only implement one or two algorithm families which will help
> control the redundancy.
>
> I don’t think we should be doing a string lookup every time one of these
> is called.
>
>
> Of the three, the provider data feels clean and unique functions fast.
>
> I’d like to avoid mandating another level of indirection (it’s slow),
> which is a risk with provider data.
>
>
> My thought: add the provider data field.  Use that when it can be done
> directly, use unique functions otherwise.
> The example with key and iv lengths would be a direct use.  Code that
> dives through a function pointer or a switch statement would be an example
> of not.
>
>
>
> Pauli
> --
> Dr Paul Dale | Cryptographer | Network Security & Encryption
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
> On 23 Mar 2019, at 1:45 am, Matt Caswell  wrote:
>
> Currently we have the OSSL_ALGORITHM type defined as follows:
>
> struct ossl_algorithm_st {
>const char *algorithm_name;  /* key */
>const char *property_definition; /* key */
>const OSSL_DISPATCH *implementation;
> };
>
> I'm wondering whether we should add an additional member to this
> structure: a
> provider specific handle. i.e.
>
> struct ossl_algorithm_st {
>const char *algorithm_name;  /* key */
>const char *property_definition; /* key */
>const OSSL_DISPATCH *implementation;
>void *handle;
> };
>
> The reason to do this is because providers are likely to want to share the
> same
> implementation across multiple algorithms, e.g. the init/update/final
> functions
> for aes-128-cbc are likely to look identical to aes-256-cbc with the only
> difference being the key length. A provider could use the handle to point
> to
> some provider side structure which describes the details of the algorithm
> (key
> length, IV size etc). For example in the default provider we might have:
>
> typedef struct default_alg_handle_st {
>int nid;
>size_t keylen;
>size_t ivlen;
> } DEFAULT_ALG_HANDLE;
>
> DEFAULT_ALG_HANDLE aes256cbchandle = { NID_aes_256_cbc, 32, 16 };
> DEFAULT_ALG_HANDLE aes128cbchandle = { NID_aes_128_cbc, 16, 16 };
>
> static const OSSL_ALGORITHM deflt_ciphers[] = {
>{ "AES-256-CBC", "default=yes", aes_cbc_functions,  },
>{ "AES-128-CBC", "default=yes", aes_cbc_functions,  },
>{ NULL, NULL, NULL }
> };
>
> Then when the "init" function is called (or possibly at newctx), the core
> passes
> as an argument the provider specific handle associated with that algorithm.
>
> An alternative is for the provider to pass the algorithm name instead, but
> this
> potentially requires lots of strcmps to identify which algorithm we're
> dealing
> with which doesn't sound particularly attractive.
>
> A second alternative is for each algorithm to always have unique functions
> (which perhaps call some common functions underneath). This sounds like
> lots of
> unnecessary redundancy.
>
> Thoughts?
>
> Matt
>
>
>


function and typedef naming thoughts

2019-03-05 Thread Tim Hudson
Looking at PR#8287 I think we need to get some naming schemes written down
and documented and followed consistently. The naming used in this PR seems
to be somewhat inconsistent.

For me, I think the naming convention most often used is

return_type SOMETHING_whatever(SOMETHING *,...)

as a general rule for how we are naming things. There are lots of
exceptions to this in the code base - but this is also pretty much
consistent.

And we use typedef names in all capitals for what we expect users to work
with.
We avoid the use of pure lowercase in naming functions or typedefs that are
in the public API that we expect users to work with - all lowercase means
this is "internal" only usage.

And we reserve OSSL and OPENSSL as prefixes that we feel are safe to place
all new names under.

Tim.


Fwd: openssl-announce post from hong...@gmail.com requires approval

2019-02-26 Thread Tim Hudson
Add a -r to your diff command so you recursively compare ... then you will
see the actual code changes.
Without the -r you are only comparing files in the top-level directory of
each tree.

diff *-r* -dup openssl-1.0.2q openssl-1.0.2r

Tim.

-- Forwarded message --
From: Hong Cho 
To: open...@openssl.org, openssl-project@openssl.org
Cc: OpenSSL User Support ML , OpenSSL Announce
ML 
Bcc:
Date: Wed, 27 Feb 2019 08:28:17 +0900
Subject: Re: [openssl-project] OpenSSL version 1.0.2q published

I see no code change between 1.0.2q and 1.0.2r.

--
# diff -dup openssl-1.0.2q openssl-1.0.2r |& grep '^diff' | awk '{print $4}'
openssl-1.0.2r/CHANGES
openssl-1.0.2r/Makefile
openssl-1.0.2r/Makefile.org
openssl-1.0.2r/NEWS
openssl-1.0.2r/README
openssl-1.0.2r/openssl.spec
hongch@hongch_bldx:~/downloads> diff -dup openssl-1.0.2q openssl-1.0.2r |&
grep '^Only'
Only in openssl-1.0.2q: Makefile.bak
--

It's supposed to have a fix for CVE-2019-1559? Am I missing something?

Hong.


Re: Thoughts about library contexts

2019-02-18 Thread Tim Hudson
On Mon, Feb 18, 2019 at 8:36 PM Matt Caswell  wrote:

>
>
> On 18/02/2019 10:28, Tim Hudson wrote:
> > It should remain completely opaque.
> > As a general rule, I've never seen a context where someone regretted
> making a
> > structure opaque over time, but the converse is not true.
> > This is opaque and should remain opaque.
> > We need the flexibility to adjust the implementation at will over time.
>
> I think we're debating whether it is internally opaque or not. Externally
> opaque
> is a given IMO.
>


And my comments apply to internally opaque too - I was aware of that
context when I wrote them  - this is something that we will want to change
as it evolves over time.
And we shouldn't have a pile of knowledge of the internals of one part of
the library spreading over the other parts.

Tim.


Re: Thoughts about library contexts

2019-02-18 Thread Tim Hudson
It should remain completely opaque.
As a general rule, I've never seen a context where someone regretted making
a structure opaque over time, but the converse is not true.
This is opaque and should remain opaque.
We need the flexibility to adjust the implementation at will over time.

For anything where partial visibility is seen as desirable, it should be
raised specifically as to what issue would drive that - as frankly I don't
see a context where that would make sense.

Tim.


[openssl-project] inline functions

2019-01-27 Thread Tim Hudson
From https://github.com/openssl/openssl/pull/7721

Tim - I think inline functions in public header files simply shouldn't be
present.
Matt - I agree
Richard - I'm ambivalent... in the case of stack and lhash, the generated
functions we made static inline expressly to get better C type safety, and
to get away from the mkstack.pl horror.

It would be good to get a sense of the collective thoughts on the topic.

Tim.
___
openssl-project mailing list
openssl-project@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-project

Re: [openssl-project] [TLS] Yet more TLS 1.3 deployment updates

2019-01-24 Thread Tim Hudson
On Thu, Jan 24, 2019 at 9:45 PM Matt Caswell  wrote:

> > This notion of "handshake" is not supported by RFC 8446, which uses the
> > terms "the
> > handshake", "a handshake", and "post-handshake". "Post-handshake", in
> > particular, implies KeyUpdate are after the handshake, not part of it.
>
> I just don't agree with you here. About the best that can be said about
> RFC8446
> in this regards is that the term handshake is overloaded. It certainly
> does mean
> "the initial handshake" in the way that you describe (and I myself use the
> term
> to mean that). But it is *also* used in other contexts, such as "handshake
> messages" or "handshake protocol" where it is referring to things not
> necessarily constrained to the initial handshake.
>

I agree with Matt here - there is no such clear distinction made in RFC8446
- with "handshake" being used in *all* contexts.
If such a distinction was intended by the IETF WG then they failed to
achieve it in RFC8446 in numerous places.

Quoting RFC8446 ...

4.6.3.  Key and Initialization Vector Update

   The KeyUpdate *handshake message ...*


It doesn't help that its section 4.6 (Post-Handshake Messages) states
"after the main handshake", also indicating that the handshake messages are
handshakes too - just not the "main handshake".

Tim.

Re: [openssl-project] To deprecate OpenSSL_version() or not

2018-12-05 Thread Tim Hudson
The function has been there for a long time (since the beginning) and it
is all about version-related information - so both names aren't exactly
clearly descriptive.

OpenSSL_version_information() would be a better name.

One could also argue that the "version" program should be renamed "info",
as the same argument would equally apply.

However I do not think we should rename a function and deprecate a function
that is very widely used.

And the function should also cover everything that the current "version"
application covers (like seeding source etc). The ifdefs there are not
something we should expect applications to repeat.

Tim.

On Wed, 5 Dec. 2018, 5:50 pm Matt Caswell wrote:

> Richard and I are discussing whether OpenSSL_version() should be
> deprecated or
> not in favour of a new function OPENSSL_info() which does more or less the
> same
> thing. See:
>
> https://github.com/openssl/openssl/pull/7724#discussion_r239067887
>
> Richard's motivation for doing so is that he finds the old name "strongly
> misleading". I disagree and prefer not to deprecate functions like this
> just
> because we don't like the name (which eventually will lead to breakage
> when we
> remove the deprecated functions in some future major version (not 3.0.0))
>
> I'd appreciate more input on the discussion.
>
> Matt
>

Re: [openssl-project] Minimum C version

2018-10-07 Thread Tim Hudson
I don't see a *substantial benefit* from going to C99 and I've worked on
numerous embedded platforms where it is highly unlikely that C99 support
will ever be available.

Kurt - do you have a specific list of features you think would be
beneficial - or is it just a general sense to move forward?

We should ensure that C++ builds work - but that is mostly simply keyword
avoidance - and sticking with the base C89/C90 in my experience is still a
reasonable position.

For Microsoft reading
https://social.msdn.microsoft.com/Forums/en-US/fa17bcdd-7165-4645-a676-ef3797b95918/details-on-c99-support-in-msvc?forum=vcgeneral
and
https://blogs.msdn.microsoft.com/vcblog/2013/07/19/c99-library-support-in-visual-studio-2013/
may
assist.

I know there are Microsoft platforms that require use of earlier compilers
than VS2013 to support (unfortunately).

Tim.


On Sun, Oct 7, 2018 at 11:33 PM Kurt Roeckx  wrote:

> On Sun, Oct 07, 2018 at 02:01:36PM +0100, David Woodhouse wrote:
> > Unfortunately Microsoft still does not support C99, I believe. Or did
> that get fixed eventually, in a version that can reasonably be required?
>
> That is a very good point, and they never intend to fix that.
>
> So would that mean we say that VC will be unsupported? Or that we
> should make it so that it can be build using C++?
>
>
> Kurt
>
>

Re: [openssl-project] Release strategy updates & other policies

2018-09-28 Thread Tim Hudson
On Fri, Sep 28, 2018 at 4:55 PM Matt Caswell  wrote:

> Either we go with semver and totally commit to it - or we stick with what
> we've already got. No
> half-way, "well we're kind of doing semver, but not really".
>

+1

I see no point in changing what we are doing *without* getting the benefit
of following the semantic versioning approach.
Right now things like 1.1.0 with a major API change and 1.1.1 with a major
feature update of TLSv1.3 are confusing to those who haven't been along for
the journey.

Our current handling of version numbering surprises developers and requires
careful explanation.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 11:02 PM Matt Caswell  wrote:

> You're right on this one. I misread the diff.
>

Not a problem - you are doing the look-at-what-we-did and how it would be
impacted - and that is certainly what we should be doing - working through
what impact this would have had.
Semantic versioning is a major change in behaviour with a focus on the API
being directly reflected in the versioning scheme.

It does mean that a lot of how we have handled things in the past in terms
of what was okay to go into a patch release changes.
Patch releases become *pure bug fix* releases with no API changes (of any
form), and that is very different.
We have taken a relatively flexible interpretation - and put in a lot more
than bug fixes into the letter (patch) releases - we have added upwards
compatible API additions.

It would also mean our LTS releases are MAJOR.MINOR - as the PATCH is the
fixes we will apply - so it isn't part of the LTS designation as such.
e.g. 5.0.x would be the marker - not 5.0.0 - so 5.0 in shorthand form.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 10:37 PM Matt Caswell  wrote:

> - Added some new macros:
> https://github.com/openssl/openssl/pull/6037


No we didn't change our public API for this one - we changed *internals*.
The change to include/openssl/crypto.h was entirely changing *comments* to
document that the internals used more of a reserved space - but the API
wasn't changed.

And it isn't a matter of what we did for each item - it is a matter of
could we have fixed certainly things that we would classify as *bugs*
without impacting the API - we didn't have that as a rule - but if we did
then what would we have done.
And where we missed an API then it is not a PATCH. Where we got something
that wasn't spelled correctly - it has to wait for a MINOR to be fixed. A
patch doesn't change APIs.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 9:22 PM Matt Caswell  wrote:

> Lets imagine we release version 5.0.0. We create a branch for it and
> declare a support period. Its an LTS release. This is a *stable*
> release, so we shouldn't de-stabilise it by adding new features.
>
> Later we do some work on some new features in master. They are backwards
> compatible in terms of API so we release it as version 5.1.0. Its got a
> separate support period to the LTS release.
>
> We fix some bugs in 5.0.0, and release the bug fixes as 5.0.1. All fine
> so far. But then we realise there is a missing accessor in it. Its an
> LTS release so its going to be around for a long time - we really need
> to add the accessor. But we *can't* do it!! Technically speaking,
> according to the rules of semantic versioning, that is a change to our
> API - so it needs to be released as a MINOR version change. That would
> make the new version 5.1.0but we already used that number for our
> backwards compatible feature release!
>

And that is what semantic versioning is about - it is about the API.
So if you add to the API you change either MINOR or MAJOR.
In your scenario the moment you need to add an API you are doing a MINOR
release and if you already have a MINOR release under the MAJOR then you
need to add a MINOR on top of the latest MINOR that you have.
You don't get to make API changing releases that expand the API behind the
existing releases that are out there.

That is not how a semantically versioned project behaves.

The rules are strict for a reason to stop some of the practices that we
have - where PATCH releases add APIs.

Part of the precondition for a semantically versioned project is that the
API (and in this sense this is the public API) is under "control" as such.

I think there are very few circumstances under which we have needed to add
APIs - and I think outside of accessor functions during the opacity changes
- I don't know that there were any API additions that weren't actually
avoidable by solving the problem without adding an API. I don't know - I
haven't checked - but none leap to the front on mind. We have done that
simply because we didn't have a strict rule in place about API additions -
we do about changes or deletions - but we tend to view additions as
relatively safe (and they are from a backwards compatible perspective - but
they are not from a semantic versioning point of view).

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 8:07 PM Matt Caswell  wrote:

> On 25/09/18 10:58, Tim Hudson wrote:
> > On Tue, Sep 25, 2018 at 7:23 PM Richard Levitte wrote:
> >
> > So what you suggest (and what I'm leaning toward) means that we will
> > change our habits.
> >
> >
> > Adoption of semantic versioning will indeed require us to change our
> > habits in a number of areas - that is the point of having a single clear
> > definition of what our version numbers mean.
> >
> > I also do think we need to document a few things like what we mean by
> > bug fix compared to a feature update as there are items which we could
> > argue could either be a PATCH or a MINOR update depending on how we
> > describe such things.
>
> Sometimes we need to add a function in order to fix a bug. How should
> this be handled? e.g. there are 60 functions defined in
> util/libcrypto.num in the current 1.1.0i release that weren't there in
> the original 1.1.0 release.
>


In semantic versioning those are MINOR releases. The API has changed in a
backward compatible manner.
They cannot be called PATCH releases - those are only for internal changes
that fix incorrect behaviour.
If you change the API you are either doing a MINOR or a MAJOR release.

Now I think the flexibility we added during 1.1.0 when the MAJOR change was
to make APIs opaque was a different context where our API remained unstable
(in semantic terms) yet we did a release (for other reasons).
Under semantic versioning, 1.1.0 would have been called 2.0.0 and we would
have had to batch up the accessor API additions into a 2.1.0 release and
not have those included in any 2.0.X PATCH release.

It is quite a change under semantic versioning in some areas as it
basically requires the version to reflect API guarantees.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 7:23 PM Richard Levitte  wrote:

> So what you suggest (and what I'm leaning toward) means that we will
> change our habits.
>

Adoption of semantic versioning will indeed require us to change our habits
in a number of areas - that is the point of having a single clear
definition of what our version numbers mean.

I also do think we need to document a few things like what we mean by bug
fix compared to a feature update as there are items which we could argue
could either be a PATCH or a MINOR update depending on how we describe such
things.
Getting those things documented so we can be consistent is a good thing
IMHO. The specifics of which we place in PATCH and which we place in MINOR
are less important than being consistent in handling the same item.

For example:
- adding an ASM implementation for performance reasons - is that PATCH or
MINOR
- changing an ASM implementation for performance release - is that PATCH or
MINOR
- adding an ASM implementation to get constant time behaviour - is that
PATCH or MINOR
- changing an ASM implementation for constant time behaviour - is that
PATCH or MINOR

For all four of the above examples the API is the same (assuming that the
low-level APIs are not actually exposed in the public interface for any of
these).
And deciding on those depends how you view performance - is it a bug that
something runs slower than it could - or is it a feature.

Good arguments can be made for always MINOR or for PATCH - but I think we
should have a clear view on how we will handle such things going forward
given the OMC members have differing views on the topic and we shouldn't
end up with different handling depending on which members in which timezone
are awake for a given pull request :-)

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
A fairly common approach that is used is that you can only remove something
that has been marked for deprecation at a MAJOR release version boundary.

That is entirely independent of the semantic versioning view of things -
which also happens to say the same thing (that adding a deprecation
indication is at least a MINOR release and the removal is at a MAJOR
release).
PATCH versions should never change an API.
So we start warning whenever it is that we decide to deprecate in a MINOR
release, but any actual removal (actual non-backwards compatible API
change) doesn't come in place until a MAJOR release.

I see marking things for deprecation is a warning of a pending
non-backwards-compatible API change - i.e. that there is another way to
handle the functionality which is preferred which you should have switched
across to - but for now we are warning you as the final step before
removing an API at the next MAJOR release. We haven't committed we will
actually remove it - but we have warned you that we *expect* to remove it.

Tim

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 3:12 PM Viktor Dukhovni 
wrote:

> The proposal to move the minor version into nibbles 2 and 3 breaks this
> OpenSSH function.
>

No it doesn't - because I'm not talking about moving *anything* in the
current encoding for major and minor - see earlier post.
The positions don't change for minor version. We just stop using the
current PATCH and use the current FIX as PATCH.
And the logical test there remains valid in that it detects all
incompatible versions correctly - what changes is that some versions that
are compatible are seen as incompatible - but that is an incorrect
interpretation that is *safe.*

And note that the openssh code there is actually more conservative than it
needs to be already.

And under semantic versioning, it is only the MAJOR that changes when
breaking changes happen.

But what I've been referring to is that there is other code that uses our
documented encoding and parses it ... this isn't just an openssh issue.

Tim

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
If you accept that we have a MAJOR.MINOR.FIX.PATCH encoding currently
documented and in place and that the encoding in OPENSSL_VERSION_NUMBER is
documented then the semantic versioning is an easy change.
We do not need to change our encoding of MAJOR.MINOR - that is documented
and fixed. We just need to start using it the way semantic versioning says
we should.
We need to eliminate FIX - there is no such concept in semantic versioning.
And PATCH exists as a concept but in semantic versioning terms it is a
number.

That leads to two main choices in our current encoding as to how we achieve
that:
1) leave old FIX blank
2) move old PATCH to old FIX and leave old PATCH blank (i.e. old FIX is
renamed to new PATCH and old PATCH is eliminated)

Option 2 offers the *least surprise* to users - in that it is how most
users already think of OpenSSL in terms of a stable API - i.e. our current
MAJOR.MINOR.FIX is actually read as MAJOR.MINOR.PATCH or to be more precise
our users see FIX and PATCH as combined things - they don't see our MAJOR
and MINOR as combined things.

Under option 2 it does mean anyone that thinks a change of our current
MINOR means an ABI break would be interpreting things incorrectly - but
that interpretation is *entirely safe* - in that they would be assuming
something more conservative rather than less conservative. And leaving the
old PATCH blank doesn't hurt anyone.

I don't think that moving to semantic versioning requires us to change our
encoding of OPENSSL_VERSION_NUMBER except to move PATCH to FIX - as one of
those concepts disappears.
And then when we do a major release update (like 1.1.1) we end up with a
MAJOR version change.

Alternatively, we can elect to effectively say the OPENSSL_VERSION_NUMBER
encoding we have documented and supported is subject to yet another change
where we reinterpret things.
That is doable - but I also think entirely unnecessary and confusing for
our users.

Tim

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 11:55 AM Viktor Dukhovni 
wrote:

> this is an ad-hoc encoding with monotonicity as
> the only constraint.
>

If you start from the position that the encoding of OPENSSL_VERSION_NUMBER
is free to change so long as the resulting value is larger than what we
have been using for all previous versions then a whole range of options
come into play.
But we shouldn't assert that we aren't changing the meaning of
OPENSSL_VERSION_NUMBER - and that is what others have been doing on this
thread - and what your previous email also asserted.

It is a *breaking* change in our comments and our documentation and in what
our users are expecting. Basically it is an API and ABI change - we said
what the number means and we are changing that.
The impact of the breaking change for those using it for pure feature
testing for API difference handling (where it isn't actually parsed) can be
minimised just by always having a larger number that all previous uses.
The impact of the breaking change on anyone actually following our
documented encoding cannot.
i.e. openssh, as one example Richard pointed out.
That isn't the only code that actually believes our header files and
documentation :-)

Semantic versioning is about MAJOR.MINOR.PATCH with specific meanings.
There is no FIX concept as such. There is PRERELEASE and METADATA.
One of our existing concepts disappears - we have MAJOR.MINOR.FIX.PATCH
currently and we also encode STATUS as a concept (which we can map mostly
to PRERELEASE).

And if we are changing to fit into semantic versioning we need to use its
concepts and naming and not our present ones.
A merge of concepts pretty much goes against the point of semantic
versioning.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, 22 Sep. 2018, 3:24 am Viktor Dukhovni, 
wrote:

> > On Sep 21, 2018, at 12:50 PM, Tim Hudson  wrote:
> > If that is the case then our current practice of allowing ABI breakage
> with
> > minor release changes (the middle number we document as the minor
> release number)
> > has to change.
> CORRECTION:  As advertised when 1.0.0 first came out, and repeated in our
> release
>  policy:
>   As of release 1.0.0 the OpenSSL versioning scheme was improved
>   to better meet developers' and vendors' expectations. Letter
>   releases, such as 1.0.2a, exclusively contain bug and security
>   fixes and no new features. Minor releases that change the
>   last digit, e.g. 1.1.0 vs. 1.1.1, can and are likely to
>   contain new features, but in a way that does not break binary
>   compatibility. This means that an application compiled and
>   dynamically linked with 1.1.0 does not need to be recompiled
>   when the shared library is updated to 1.1.1. It should be
>   noted that some features are transparent to the application
>   such as the maximum negotiated TLS version and cipher suites,
>   performance improvements and so on. There is no need to
>   recompile applications to benefit from these features.
>


I have to disagree here. We said things about *minor releases* and *major
releases* but we didn't say things about the version numbers or change the
documentation or code comments to say the first digit was meaningless and
that we have shifted the definition of what X.Y.Z means.

That parsing of history I think is at best a stretch and not supportable
and also not what our *users* think.

Our users don't call 1.0.2 the 0.2 release of OpenSSL. Our users don't call
1.0.0 the 0.0 release. There isn't a short hand or acceptance or a decision
communicated to conceptually drop the 1. off the front of our versioning.
Your logic and practice is saying you see the first two digits as the major
version number - that's fine - you are welcome to take an ABI compatible
short hand to refer to version - but that doesn't mean we changed the
definition of our versioning system.
What you are tracking there is effectively the SHLIB version.

So if our users don't think that, and our code comments don't say that,
and our pod documentation doesn't say that, and users have the first part in
their masks and don't just ignore it, then I don't think it is supportable
to claim we told everyone it was a meaningless first digit and changed our
definition of our versioning scheme back at the 1.0.0 release.

Currently when we make minor API changes that aren't breaking we update the
fix version number. When we make major API changes which we expect to be
breaking we update the minor version number.
Now think about the transition from 1.1.0 to 1.1.1 - that is by any
reasonable definition *a major release* - but we don't update the major
version number by either definition of the major version number.
We call it in our website a "feature release" - yet more terminology to
dance around our version numbering scheme.
Read the actual blog post
<https://www.openssl.org/blog/blog/2018/09/11/release111/> and try to parse
that as *a minor release* - it isn't - it is *a major release* - but we did
not change the *major release number *even if we accepted the theory and
your definition that the first number is meaningless and it is only the
second number that is the real major version and the third number isn't a
fix version but is actually the minor version. The implications of that
viewpoint aren't supported by our actions.

We did not say we are redefining the concept of our release numbering
scheme - we just defined what API impacts there were and we used "major"
and "minor" about *release efforts and impacts* - not about changing the
definition of the parts of the OPENSSL_VERSION_NUMBER macro and the
corresponding meaning of what SSLeay_version_num() and now
OpenSSL_version_num() returns. This is a core part of our API.

Remember that SSLeay_version_num() was only renamed OpenSSL_version_num()
in 2015 in commit b0700d2c8de79252ba605748a075cf2e5d670da1
<https://github.com/openssl/openssl/commit/b0700d2c8de79252ba605748a075cf2e5d670da1>
as part of 1.1.0
The first update for the top nibble indicating the major version number had
changed was back in 2009 in commit 093f5d2c152489dd7733dcbb68cbf654988a496c
<https://github.com/openssl/openssl/commit/093f5d2c152489dd7733dcbb68cbf654988a496c>
which
is when 1.0.0 started.

Saying that our documentation in doc/crypto/OPENSSL_VERSION_NUMBER.pod
<https://github.com/openssl/openssl/blob/master/doc/man3/OPENSSL_VERSION_NUMBER.pod>
was just forgotten to be updated and happens to be wrong (in order to
support that viewpoint) isn't matched by the actual history.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 1:39 AM Viktor Dukhovni 
wrote:
>  The only change needed is a minor one in applications that actually
> parse the nibbles

What I was suggesting is that we don't need to break the current encoding
at all.
We have a major.minor.fix version encoded and documented in the header file
that we can continue to represent in semantic terms.
So anyone decoding those items will actually get the correct result (from a
user perspective).
Those using it for conditional compilation checks continue to work.

BTW thanks for the correction on "a" to "1" for the patch release (wasn't
being careful there).

Tim.
___
openssl-project mailing list
openssl-project@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-project

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 1:34 AM Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

>
> On 21.09.2018 17:27, Tim Hudson wrote:
> >
> > We cannot remove the current major version number - as that concept
> exists and we have used it all along.
> > We don't just get to tell our users for the last 20+ years what we
> called the major version (which was 0 for the first half and 1 for the
> second half) doesn't exist.
> >
>
> There has been a famous precedent in history though:  Java dropped the
> "1." prefix when going from "1.4" to "5.0"
> (https://en.wikipedia.org/wiki/Java_version_history#Versioning_change)
>

Unfortunately that isn't a good example - as the version number didn't
actually change - just the marketing and download packaging.
The APIs continue to return 1.5.0 etc.
And many Java developers continue to use the real version numbers.

That is something I wouldn't suggest makes sense as an approach - to change
the tarfile name and leave all the internals the same achieves nothing.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 1:16 AM Viktor Dukhovni 
wrote:

>
>
> > On Sep 21, 2018, at 11:00 AM, Tim Hudson  wrote:
> >
> > If you repeat that in semantic versioning concepts just using the labels
> for mapping you get:
> > - what is the major version number - the answer is clearly "1".
> > - what is the minor version number - the answer is clearly "0"
> > - what is the fix version number - there is no such thing
> > - what is the patch version number - the answer is clearly "2" (reusing
> the fix version)
>
> I'm afraid that's where you're simply wrong.  Ever since 1.0.0, OpenSSL
> has promised (and I think delivered) ABI stability for the *minor* version
> and feature stability (bug fixes only) for the patch letters.  Therefore,
> the semantic version number of "1.0.2a" is "1.0", its minor number is 2
> and its fix number is 1 ("a").
>

No it isn't - as you note that isn't a valid mapping - 1.0 isn't a semantic
version and there is no such thing as a fix number.
You get three concepts and then on top of that the pre-release and the
build-metadata.

Semantic versioning is about the API not the ABI.

So we could redefine what we have been telling our user all along and
combine our current major+minor in a new major version.
Making 1.0.2a be 10.2.0 in semantic version terms.

We cannot remove the current major version number - as that concept exists
and we have used it all along.
We don't just get to tell our users for the last 20+ years what we called
the major version (which was 0 for the first half and 1 for the second
half) doesn't exist.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 12:32 AM Viktor Dukhovni 
wrote:

> > On Sep 21, 2018, at 10:07 AM, Tim Hudson  wrote:
> >
> > And the output you get:
> >
> > 0x10102000
>
> The trouble is that existing software expects to potential ABI changes
> resulting from changes in the 2nd and 3rd nibbles, and if the major
> version is just in the first nibble, our minor version changes will
> look like major number changes to such software.
>

The problem is that we documented a major.minor.fix.patch as our versioning
scheme and semantic versioning is different.
Semantic versioning does not allow for breaking changes on the minor
version - and that is what we do in OpenSSL currently.
Part of the requests that have come in for semantic versioning is to
actually do what is "expected".

Basically our significant version changes alter the minor version field in
the OPENSSL_VERSION_NUMBER structure.
And we have loosely referred to those as major releases but not changed the
actual major version field - but the minor.

The simple way to look at this: for OpenSSL-1.0.2a:
- what is the major version number - the answer is clearly "1".
- what is the minor version number - the answer is clearly "0"
- what is the fix version number - the answer is clear "2"
- what is the patch version number - the answer is clearly "a"

If you repeat that in semantic versioning concepts just using the labels
for mapping you get:
- what is the major version number - the answer is clearly "1".
- what is the minor version number - the answer is clearly "0"
- what is the fix version number - there is no such thing
- what is the patch version number - the answer is clearly "2" (reusing the
fix version)

If you repeat that in semantic versioning concepts just using the actual
definitions:
- what is the major version number - the answer is clearly "1".
- what is the minor version number - the answer is probably "2"
- what is the fix version number - there is no such thing
- what is the patch version number - the answer is probably "0"

Semantic versioning does not have the concept of a fix version separate
from a patch version - those are just a fix version.
If you add or deprecate APIs you have a new minor version. If you fix bugs
without touching the API it is a patch version.
If you break anything in the API it is a major version.
It is completely totally about the API itself.

And for OpenSSL we are API compatible for most releases - except when we
did the major change for 1.1.0 which in reality semantic versioning would
have called 2.0.0

Semantic versioning does not have the concept of an API and a ABI
distinction.
In OpenSSL we have if the major.minor is the same then we are ABI
compatible. That is what doesn't exist in the semantic versioning approach
- it is about the API.

Effectively the current "minor" version disappears.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
I think we have to keep OPENSSL_VERSON_NUMBER and it has to have
MAJOR.MINOR.FIX in it encoded as we currently have it (where FIX is PATCH
in semantic terms and our current alpha PATCH is left blank).
That is what I've been saying in the various emails - because we precisely
need to not change the definition of what that macro is - as people
interpret it.
I suggest we zero out all the other information in the
OPENSSL_VERSION_NUMBER macro.
And I did also suggest we make the OPENSSL_VERSION_TEXT field precisely
what semantic versioning would have us do - and either drop the things we
have that don't fit or encode them following the rules.

I would also suggest we make that macro up using macros that use the
semantic version terminology directly.
i.e. something like the following.

And the version number is encoded that way to not break the existing usage
(except that what we currently call a fix is actually semantically named a
patch).
One of the critically important parts of semantic versioning is that the
API is precisely only about the major.minor.patch.

The examples for pre-release and build-metadata are just showing that one
goes first with a hyphen and can have dot separated things, the other goes
second with a plus and also can have dot separated things.
If we wanted to keep the date concept in the version text macro then we
encode it correctly - or we can stop doing that sort of thing and leave it
out.
The pre-release can be blank. The build metadata can be blank.

In semantic versioning terms this is what it would mean.
And if you want to check release/alpha/beta status you look at the
OPENSSL_VERSION_PRE_RELEASE macro and we stop the release+alpha+beta
indicator usage in the OPENSSL_VERSION_NUMBER macro.
It was rather limiting in its encoding format. That more rightly belongs in
the semantic version string format.

#include <stdio.h>

#define OPENSSL_MSTR_HELPER(x) #x
#define OPENSSL_MSTR(x) OPENSSL_MSTR_HELPER(x)

#define OPENSSL_VERSION_MAJOR 1
#define OPENSSL_VERSION_MINOR 1
#define OPENSSL_VERSION_PATCH 2
#define OPENSSL_VERSION_PRE_RELEASE "-beta1.special"
#define OPENSSL_VERSION_BUILD_METADATA "+21Sep2018.optbuild.arm"

#define OPENSSL_VERSION_NUMBER \
    (long)((OPENSSL_VERSION_MAJOR<<28) \
           |(OPENSSL_VERSION_MINOR<<20) \
           |(OPENSSL_VERSION_PATCH<<12))
#define OPENSSL_VERSION_TEXT \
    OPENSSL_MSTR(OPENSSL_VERSION_MAJOR) "." \
    OPENSSL_MSTR(OPENSSL_VERSION_MINOR) "." \
    OPENSSL_MSTR(OPENSSL_VERSION_PATCH) \
    OPENSSL_VERSION_PRE_RELEASE OPENSSL_VERSION_BUILD_METADATA

int main(void)
{
    printf("0x%8lx\n", OPENSSL_VERSION_NUMBER);
    printf("%d.%d.%d\n", OPENSSL_VERSION_MAJOR, OPENSSL_VERSION_MINOR, OPENSSL_VERSION_PATCH);
    printf("%s\n", OPENSSL_VERSION_TEXT);
}

And the output you get:

0x10102000
1.1.2
1.1.2-beta1.special+21Sep2018.optbuild.arm

Tim.



On Fri, Sep 21, 2018 at 11:36 PM Richard Levitte 
wrote:

> In message  w2o_njr8bfoor...@mail.gmail.com> on Fri, 21 Sep 2018 23:01:03 +1000, Tim
> Hudson  said:
>
> > Semantic versioning is about a consistent concept of version handling.
> >
> > And that concept of consistency should be in a forms of the version
> > - be it text string or numberic.
> >
> > That you see them as two somewhat independent concepts isn't
> > something I support or thing makes sense at all.
>
> In that case, we should probably just thrown away
> OPENSSL_VERSION_NUMBER and come up with a different name.  If we keep
> that macro around, it needs to be consistent with its semantics as
> we've done it since that FAQ update.  Otherwise, I fear we're making
> life much harder on those who want to use it for pre-processing, and
> those who want to check the encoded version number.
>
> I do get what you're after...  a clean 1:1 mapping between the version
> number in text form and in numeric encoding.  I get that.  The trouble
> is the incompatibilities that introduces, and I'm trying to take the
> middle ground.
>
> > Our users code checks version information using the integer
> representation and it should be in
> > semantic form as such - i.e. the pure numeric parts of the semantic
> version.
> >
> > This is the major point I've been trying to get across. Semantic
> versioning isn't about just one
> > identifier in text format - it is about how you handle versioning in
> general. And consistency is its
> > purpose.
>
> Sure.
>
> Would you mind writing up a quick proposal on a new encoding of the
> version?  (and just so you don't limit yourself too much, it's fine by
> me if that includes abandoning the macro OPENSSL_VERSION_NUMBER and
> inventing a new one, a better one, with a definition that we can keep
> more consistent than our current mess)
>
> Cheers,
> Richard
>
> --
> Richard Levitte levi...@openssl.org
> OpenSSL Project   

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
Now I get the conceptual issue that Richard and Matt are differing on - and
it is about actually replacing OpenSSL's versioning concept with semantic
versioning compared to adopting semantic versioning principles without
actually being precisely a semantic version approach.

The whole concept of semantic versioning is that you define *precisely *what
you mean by a version.
Everywhere you have the concept of a version you must use the semantic form
of a *version encoding*.

That is the X.Y.Z[-prerelease][+buildmeta] format that is documented along
with the rules for X.Y.Z in terms of your public API.
And all other information about a version goes into prerelease and into
buildmeta.
Both prerelease and buildmeta are allowed to be a sequence of dot-separated
alphanumeric/hyphen combinations.

This is the point of semantic versioning. All versions for all products are
all represented with the same sort of concepts and you know what the rules
are for the numeric X.Y.Z handling and the parsing rules for prerelease and
buildmeta.

Our concepts of versioning within OpenSSL if expressed in semantic form
MUST fit into this approach.
No prefixes. No suffixes. No special additional encoding. The idea is
consistency.

When dealing with API issues you only ever need to see X.Y.Z for any code
related testing - it precisely identifies a point in time API release.
There should never be any code that requires looking at prerelease or
buildmeta in order to perform any action in terms of the code.

That maps to our concept of OPENSSL_VERSION_NUMBER

For the human reporting we should have the full concept, which is a text
string - and that should be OPENSSL_VERSION_TEXT, and that means not having
anything else within the text other than the actual semantic version.
The syntax of the semantic version is fixed.

If you want to keep the concept of a build date in the macro you are
calling the version then it must be encoded in that format - or you move
that concept out of the macro that is the version.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
Semantic versioning is about a consistent concept of version handling.

And that concept of consistency should be in all forms of the version - be it
text string or numeric.

That you see them as two somewhat independent concepts isn't something I
support or think makes sense at all.

Our users' code checks version information using the integer representation
and it should be in semantic form as such - i.e. the pure numeric parts of
the semantic version.

This is the major point I've been trying to get across. Semantic versioning
isn't about just one identifier in text format - it is about how you handle
versioning in general. And consistency is its purpose.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
So as a concrete example - taking master and the current
OPENSSL_VERSION_TEXT value.

"OpenSSL 1.1.2-dev  xx XXX "

would become

"1.1.2-dev+xx.XXX."

That is what I understand is the point of semantic versioning. You know how
to pull apart the version string.
-dev indicates a pre-release version known as "dev"
+xx.XXX. indicates build metadata.
The underlying release is major 1, minor 1, patch 2.

But for semantic versioning where we allow API breakage in our current
minor version we would have to shift everything left.
And we would then have "1.2.0-dev+xx.XXX." if the planned new release
wasn't guaranteed API non-breaking with the previous release (1.1.1).

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Fri, Sep 21, 2018 at 9:02 PM Matt Caswell  wrote:

> I think this is an incorrect interpretation of Richard's proposal. The
> OPENSSL_VERSION_NUMBER value is an *integer* value. It does not and
> cannot ever conform to semantic versioning because, because version
> numbers in that scheme are *strings* in a specific format, where
> characters such as "." and "-" have special meanings.
>

It is the version number. We have it in two forms within OpenSSL from a
code perspective  - we have an integer encoding and a text string.
They are precisely what semantic versioning is about - making sure the
versioning concept is represented in what you are using versioning for.

For OpenSSL, codewise we have two macros and two functions that let us
access the build-time version of the macros:
  OPENSSL_VERSION_NUMBER
  OPENSSL_VERSION_TEXT
  OpenSSL_version_num()
  OpenSSL_version()

We also separately have another form of version number - for shared
libraries:
The macro:
  SHLIB_VERSION_NUMBER

We also encode the version number in release packages too.

What semantic versioning is about is sorting out how we represent the
version.
It should impact both OPENSSL_VERSION_NUMBER and OPENSSL_VERSION_TEXT and
it should be consistent.

For the semantic versioning document the status indicator is handled in the
pre-release indicator.
We could limit that to a numeric value and have it in the
OPENSSL_VERSION_NUMBER but I don't think that has helped and semantic
versioning strictly defines precedence handling.

So I would see the simple mapping from semantic versioning very differently
to Richard's write up - and in fact encoding something rather differently
into the OPENSSL_VERSION_NUMBER to my reading and thinking actually goes
against the principles of semantic versioning.

i.e. OPENSSL_VERSION_NUMBER should be X.Y.Z and OPENSSL_VERSION_TEXT should
be "X.Y.Z[-patch][+buildmeta]" and that would be a simple, direct, and
expected mapping to OpenSSL for semantic versioning.

A merged approach or keeping parts of our (non-semantic) approach while not
fully adopting semantic versioning to me at least would be missing the
point.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Fri, Sep 21, 2018 at 7:58 PM Richard Levitte  wrote:

> Our FAQ says that such changes *may* be part of a major
> release (we don't guarantee that breaking changes won't happen), while
> semantic versioning says that major releases *do* incur backward
> incompatible API changes.
>

I think you are misreading the semantic versioning usage - it states when
things MUST happen.
It does not state that you MUST NOT change a version if the trigger event
has not occurred.

Semantic versioning also requires you to explicitly declare what your
public API is in a "precise and comprehensive" manner.
What do you consider the public API of OpenSSL?

That is pretty much a prerequisite for actually adopting semantic
versioning.

I also think the concept of reinterpreting the current major version number
into an epoch as you propose is not something that we should be doing.
We have defined the first digit as our major version number - and changing
that in my view at least would be going completely against the principles
of semantic versioning.
The version itself is meant to be purely X.Y.Z[-PRERELEASE] or
X.Y.Z[+BUILDMETA] and your suggested encoding is not that at all.

What you have is EPOCH.X.Y.Z.FIX.STATUS - where EPOCH and STATUS are not
concepts contained within semantic versioning.

Basically adopting semantic versioning actually requires something
different to what has been proposed in my view.

I would suggest it means our current version encoding in an integer
of MNNFFPPS becomes simply MNNFF000 and the information for PP and S is
moved elsewhere as semantic versioning defines those concepts differently
(as noted above).

Part of our challenge is ensuring we don't cause unnecessary breakage for
users:

Vendors change the text string to add additional indicators for their
variations.
Otherwise developers use the current integer version for feature testing -
and it needs to remain compatible enough.

I haven't seen any code actually testing the S field within the version or
doing anything specific with the PP version - other than reporting it to
the user.

Tim.

Re: [openssl-project] coverity defect release criteria (Fwd: New Defects reported by Coverity Scan for openssl/openssl)

2018-09-10 Thread Tim Hudson
On Mon, Sep 10, 2018 at 8:44 AM, Matt Caswell  wrote:

> As far as the release criteria go we only count the ones shown in the
> Coverity tool. That's not to say we shouldn't fix issues in the tests as
> well (and actually I'd suggest we stop filtering out problems in the
> tests if anyone knows how to do that...perhaps Tim?).
>

I have changed things to no longer exclude the tests area from the online
reports (that was a historical setting on the original project from
pre-2014).
The second project which I got added to track master has always emailed all
issues.
I have also now changed the OpenSSL-1.0.2 project to report all errors as
well via email - no longer excluding various components.

So now we should have both the online tool and the emails for both projects
configured the same and including the test programs.

Tim.

Re: [openssl-project] Release Criteria Update

2018-09-06 Thread Tim Hudson
We need to get this release out and available - there are a lot of people
waiting on the "production" release - and who won't go forward on a beta
(simple fact of life there).

I don't see the outstanding items as release blockers - and they will be
wrapped up in time.

Having the release date as a driver I think helps with a lot of focus - and
more stuff has gone into 1.1.1 than we originally anticipated because we
held it open waiting on TLSv1.3 finalisation.

So a +1 for keeping to the release date.

Tim.


On Fri, Sep 7, 2018 at 8:25 AM, Matt Caswell  wrote:

>
>
> On 06/09/18 17:32, Kurt Roeckx wrote:
> > On Tue, Sep 04, 2018 at 05:11:41PM +0100, Matt Caswell wrote:
> >> Current status of the 1.1.1 PRs/issues:
> >
> > Since we did make a lot of changes, including things that
> > applications can run into, would it make sense to have an other
> > beta release?
>
> I'm not keen on that. What do others think?
>
> Matt
>

Re: [openssl-project] Release Criteria Update

2018-09-05 Thread Tim Hudson
 On Thu, Sep 6, 2018 at 8:59 AM, Matt Caswell  wrote:
> #7113 An alternative to address the SM2 ID issues
> (an alternative to the older PR, #6757)
>
> Updates made following earlier review. Awaiting another round of reviews.
> Owner: Paul Yang

All the previous comments have been addressed. I noted two missing SM2err
calls on malloc failure and a typo in SM2.pod.
I've approved it conditional on those being fixed.

Tim.

[openssl-project] New OMC Member and New Committers

2018-08-22 Thread Tim Hudson
Welcome to Paul Dale (OMC), Paul Yang and Nicola Tuveri (Committers).

See the blog post at https://www.openssl.org/blog/blog/2018/08/22/updates/

Thanks,
Tim.

Re: [openssl-project] Removal of NULL checks

2018-08-08 Thread Tim Hudson
We don't have a formal policy of no NULL checks - we just have a few
members that think we should have such a policy but it has never been voted
on as we had sufficiently varying views for a consensus approach to not be
possible.

Personally I'm in favour of high-level APIs having NULL checks as it (in my
experience) leads to fewer bogus crash reports from users and simpler
handling in error cleanup logic.
Otherwise you end up writing a whole pile of if(x!=NULL) FOO_free(x); etc

But it is a style issue.

However in the context of removing such checks - that we should *not* be
doing - the behaviour of the APIs in this area should not be changed
outside of a major version increment - and even then I would not make the
change either because it introduces changes which in themselves serve no
real purpose - i.e. a style change can result in making a program that used
to work fine crash in bad ways - and that we shouldn't be doing even across
major version increments in my view.

Tim.


On Wed, Aug 8, 2018 at 8:08 PM, Matt Caswell  wrote:

> We've had a policy for a while of not requiring NULL checks in
> functions. However there is a difference between not adding them for new
> functions and actively removing them for old ones.
>
> See https://github.com/openssl/openssl/pull/6893
>
> In this case the removal of a NULL check in the stack code had the
> unintended consequence of a crash in a no-comp build. Is it wise to be
> actively removing existing NULL checks like this? It does have an impact
> on the behaviour of a function (even if that behaviour is undocumented
> and not encouraged). The concern I have is for our API/ABI compatibility
> guarantee. If we make changes like this then 1.1.1 may no longer be a
> drop in replacement for 1.1.0 for some apps.
>
> Matt
>

Re: [openssl-project] Current votes FYI

2018-05-23 Thread Tim Hudson
No that vote does not pass. All votes require participation by a majority
of active members. Failure to have a majority participation causes a vote
to fail.

With only three out of eight members voting this vote simply did not pass.

Tim.


On Thu, 24 May 2018, 12:59 am Salz, Rich,  wrote:

> Another update
>
> VOTE: Remove the second paragraph ("Binary compatibility...improve
> security")
> from the release strategy.
>
>  +1: 2
>  0: 1
> -1: 0
> No vote: 5
>
> The vote passed.
>
>

Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-15 Thread Tim Hudson
Where we are stating that ABI compatibility is in place we should be
testing it.
i.e. the older release binaries should be run against the current release
libraries - and that should be put into CI in my view.

Going the other direction isn't something I have thought we have ever
guaranteed (i.e. downgrading) - but programs that don't use new APIs (i.e.
for the ABI) should also work - that is pretty much the definition of an
ABI.
If we are unable to provide the forward ABI then we need to change the
version number of the release going forward. If we are unable to provide
backwards ABI then we need to think about how we are defining things and
make that clear.

And we need to be clear about what we mean by ABI - I don't think we have
written down a clear definition - and then have in CI the tests to match
what we are saying we do.

When it comes to TLSv1.3 it is highly desirable that an existing 1.1.0
application gets TLSv1.3 without API changes - i.e. ABI provides this.
There will be some things where there are problems where assumptions are
invalidated - but we should work to minimise those (and I think we have
actually done so).
But again we should be testing this in CI - if we want old apps to all get
TLSv1.3 for zero effort (other than a shared library upgrade) then we
should test that in CI. Hoping it works or assuming it works and "manually"
watching for changes that might invalidate that is rather error prone.

What Richard is doing really helps add to the information for informed
decisions ... the more information we have the better in my view.

Tim.

Re: [openssl-project] FW: April Crypto Bulletin from Cryptosense

2018-04-03 Thread Tim Hudson
I'm less concerned about that access in this specific instance - as if we
had a test in place for that function then make test on the platform would
have picked up the issue trivially.
I don't know that we asked the reporter of the issue as to *how* it was
found - that would be interesting information.

Noting which platforms are supported to which level and what level of test
coverage we have are the more important items in my view.

Tim.


On Wed, Apr 4, 2018 at 1:39 AM, Richard Levitte <levi...@openssl.org> wrote:

> While I totally agree with the direction Tim is taking on this, we
> need to remember that there's another condition as well: access to the
> platform in question, either directly by one of us, or through someone
> in the community.  Otherwise, we can have as many tests as we want, it
> still won't test *that* code (be it assembler or something else)
>
> In message <CAHEJ-S7o+ztC8gF3ZN_J7qoFPiCbxTOBYfrXr8AVK6s15Hd8C
> w...@mail.gmail.com> on Tue, 03 Apr 2018 15:36:15 +, Tim Hudson <
> t...@cryptsoft.com> said:
>
> tjh> And it should have a test - which has nothing to do with ASM and
> everything to do with improving
> tjh> test coverage.
> tjh>
> tjh> Bugs are bugs - and any form of meaningful test would have caught
> this.
> tjh>
> tjh> For the majority of the ASM code - the algorithm implementations we
> have tests that cover things
> tjh> in a decent manner.
> tjh>
> tjh> Improving tests is the solution - not whacking ASM code. Tests will
> catch issues across *all*
> tjh> implementations.
> tjh>
> tjh> Tim.
> tjh>
> tjh> On Tue, 3 Apr. 2018, 8:29 am Salz, Rich, <rs...@akamai.com> wrote:
> tjh>
> tjh>  On 03/04/18 15:55, Salz, Rich wrote:
> tjh>  > This is one reason why keeping around old assembly code can have a
> cost. :(
> tjh>
> tjh>  Although in this case the code is <2 years old:
> tjh>
> tjh>  So? It's code that we do not test, and have not tested in years. And
> guess what? Critical CVE.
> tjh>
> tjh>  ___
> tjh>  openssl-project mailing list
> tjh>  openssl-project@openssl.org
> tjh>  https://mta.openssl.org/mailman/listinfo/openssl-project
> tjh>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] Is making tests faster a bugfix?

2018-03-29 Thread Tim Hudson
Improved testing to me is something that is a good thing - and a value
judgement.
It doesn't change libcrypto or libssl - and that to me is the way I think
of it.
Fixing tests and apps and Makefiles is, to me, different from adding
features to libcrypto or libssl.

On this one - the fuzz testing has been sufficiently slow to reduce its
usefulness - and this is a step in the right direction.

It is however also a bit outside of our current policy on such things - so
perhaps we need to update that.

Tim.


On Thu, Mar 29, 2018 at 11:45 PM, Richard Levitte 
wrote:

> In message  on Thu, 29
> Mar 2018 14:03:06 +0100, Matt Caswell  said:
>
> matt>
> matt>
> matt> On 29/03/18 14:00, Salz, Rich wrote:
> matt> > Please see https://github.com/openssl/openssl/pull/5788
> matt> >
> matt> > I don’t think it is, but I’d like to know what others think.
> matt>
> matt> I do think this should be applied. The tests in question are not just
> matt> slow but *really* slow to the point that I often exit them before
> they
> matt> have completed. This removes the benefits of having the tests in the
> matt> first place. From that perspective I view this as a bug fix.
>
> Something to remember is that no user will ever complain about this,
> because we don't deliver the contents of fuzz/corpora in our tarballs.
>
> In other words, this is a developer only change of our current tests,
> and you will only hear from developers who do engage in fuzz testing,
> i.e. those who do these tests as part of a release, just to pick a
> very recent example.
>
> Also, you may note that this test re-engages fuzz testing as part of
> our normal tests that are run for every PR, which means that we will
> catch errors that the fuzzers can detect much earlier.  Because the
> fuzz testing took so long time, we had them only engaged with
> [extended tests], something that's almost never used.
>
> So I would argue that faster fuzz testing means more fuzz testing, and
> hopefully better testing of stuff that's harder to catch otherwise.
>
> Cheers,
> Richard ( plus, from a very personal point of view, it's *my* time,
>   and Matt's, and whoever else's who tests for releases, that
>   gets substantially less wasted! )
>
> --
> Richard Levitte levi...@openssl.org
> OpenSSL Project http://www.openssl.org/~levitte/
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

[openssl-project] constification on already released branches

2018-03-25 Thread Tim Hudson
https://github.com/openssl/openssl/pull/2181
and
https://github.com/openssl/openssl/pull/1603#issuecomment-248649700

One thing that should be noted is that if you are building with -Wall
-Werror (which many projects do) and you are using OpenSSL, then when
things change from a const perspective builds break - and people end up
having to change code against released versions.

Adding or removing const changes the API. It doesn't technically change the
ABI - although it can impact things as compilers can and do warn for simple
const violations in some circumstances. The straightforward "fix" of a
cast on call actually doesn't achieve anything useful - in that it simply
masks the underlying issue where const was added for a reason.

We should have a clear policy on doing this sort of thing in released code
- either it is okay to do and we should accept patches - or it should never
be done at all. I don't see this as something to determine on a
case-by-case basis - it is a single type of API change that we should
either be allowing or disallowing.

There is a similar type of API change which is adding typedefs in for
callbacks - which technically also don't change the ABI - and if we allow
any form of API change that also could be considered.

We should discuss this separate from any specific PR and vote to set a
policy one way or the other in my view.

Tim.

Re: [openssl-project] Code Repo

2018-03-20 Thread Tim Hudson
We have been holding off on post-1.1.1 feature development for a long time
now - on the grounds that TLSv1.3 was just around the corner etc and the
release was close - and then we formed a release plan which we pushed back
a week.

It is long overdue that we get to start moving those other things forward
in my view.
We had planned to start moving around a pile of stuff for FIPS related
items - and keeping master locked for API changes really works against that.

There is a large range of PRs which we pushed off as
must-wait-for-post-1.1.1 and those are things that remain stalled as long
as we keep master locked down.

The release for 1.1.1 should be pretty close to "complete" as such -
looking at the plans - as with no new features going in the work remaining
should be relatively straightforward.
Rich's suggestions I think tend to indicate more work going into the
release than planned - and we had said we were creating this branch - and
deviating from that at the last minute isn't really how we should be making
decisions as a project.
Some stuff that would normally be in a branch now isn't ... as Richard noted
in the PR.

Tim.


On Wed, Mar 21, 2018 at 12:17 AM, Matt Caswell  wrote:

> The beta release is now complete.
>
> Important:
>
> We did *not* create the OpenSSL_1_1_1-stable branch as planned (see
> https://github.com/openssl/openssl/pull/5690 for the discussion that led
> to that decision). For now the release was done from the master branch
> in the same way as we did for the previous alpha releases. However the
> feature freeze *is* in force. Therefore no features can be pushed into
> the repo until such time as the branch is created. All commits to master
> must be suitable for inclusion in the 1.1.1 release.
>
> Matt
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] GitHub milestone for 1.1.1

2018-03-19 Thread Tim Hudson
I too see this in the "bug fix" area - although you can make a reasonable
counter argument (but I don't see a lot of point in doing so).
Improving the build environment is a good thing IMHO ...

Tim.


On Mon, Mar 19, 2018 at 10:27 PM, Salz, Rich  wrote:

> I would consider it a bug-fix FWIW.
>
> I thought we extended the deadline so that we could review more
> third-party PR's.I'm still waiting on an e_os/Microsoft cleanup, but
> getting community stuff in is more important.
>
>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

[openssl-project] OpenSSL FIPS Wiki Update

2018-03-14 Thread Tim Hudson
https://wiki.openssl.org/index.php/FIPS_module_3.0

I've edited that to be closer to the list of items we are discussing and to
remove things which looked like commitments that are out of scope of our
current plans.

As always, feedback is welcome - but we have had a few people referencing
that out of date information so I fixed at least that issue (hopefully).

Tim.

Re: [openssl-project] DRBGs, threads and locking

2018-03-13 Thread Tim Hudson
We have to keep in mind what threats we care about and their practicality.
The security of a DRBG depends on both the secrecy of the seed material
provided and the strength of the algorithm - its output must not leak
information that materially exposes the internal state, in a manner that
would allow the state to be discovered or reversed and previous or future
outputs to be determined.

For some of the arguments used to date there appears to be an assumption
that, with a broken DRBG algorithm, it is less of a security issue if we
separate out the DRBG instances on a per-SSL-connection basis.
In real terms if a DRBG is broken and its state is able to be determined
remotely there is no practical difference in separating DRBG instances -
they are all equally vulnerable in the same manner.
In the case of the DualEC-DRBG this was clear - and no one I've seen has
ever suggested that you were safer if you had separate instances of a
broken algorithm for a DRBG - it makes no practical difference to the
security at all.
Sure there is a slight technical difference - but from a security
perspective there is no difference - you are susceptible to the same attack
- so the minor technical difference offers no actual meaningful security
value - and everyone that has referenced this to date has also indicated
that they don't think that there is actually any real practical value to
the difference - it has been more of a "it cannot harm" sort of comment.

In more general terms we need to have a clear view on our threat model -
what is considered inside the scope of what we care to address - and what
is frankly outside the scope (for our view).


   - We don't consider attacks from the same process against itself within
   our threat model.
   - Correspondingly we don't consider attacks from one thread against
   another thread within our threat model.
   - We don't consider privileged user attacks against the user in our
   threat model (i.e. root can read the memory of the process on most
   Unix-like systems).
   - We also don't actually consider a need to protect all secret
   information from every possible other bug that might leak arbitrary parts
   of memory. We could. But we don't. And if we did we would need to protect
   both the seeding material for the DRBG and its internal state and
   potentially its output. We don't do that - because that isn't within our
   threat model.


Typical applications share an SSL_CTX between multiple SSL instances and we
maintain the session cache against the SSL_CTX. This may be in a single
process (thread) or shared across multiple threads - or even shared across
multiple processes (which is simply the same as being in a single process
from our perspective where the "magic" to coordinate the session id cache
between processes is left to the developer/user).

In a FIPS context, every DRBG has requirements on its inputs (seeding) and
on maintaining a continuous RNG test (block-based compare for non-repeating
outputs at a block level).
All of these would be a per-instance requirement on the DRBG. They have to
be factored in.

There is also the argument that locking is bad and fewer locks are better -
and that argument needs to be backed up by looking at the overall status -
which particular application model are we concerned about? Have we measured
it? Have we figured out where the bottlenecks are? Have we worked through
optimising the actual areas of performance impact? Or are we just
prematurely optimising? Excessive locking will have an impact for certain
application models - but I don't think anyone is suggesting that what we
had previously was excessive - and given the significant performance impact
of the recent changes which went unmeasured and unaddressed I think it is
clear we haven't been measuring performance related items for the DRBG at
all to date - so there wasn't any "science" behind the choices made.

Simple, clear, well documented code with good tests and known architectural
assumptions is what we are trying to achieve - and my sense from the
conversations on this topic to date was that we don't have a consensus as
to what problem we are actually trying to solve - so the design approach
shifts, and shifts again - all of which amounts to the authors of the PRs
responding to what are (in my view at least) conflicting suggestions based
on different assumptions.

That is why I put the -1 on the PR - to have this discussion - and
agree on what we are trying to solve - and also agree on what we are not
trying to solve. And perhaps we can actually document some of our "threat
model" - as I'm sure we have different views on that as well.

I don't think we should have per-SSL DRBGs - it offers no meaningful
security value. We could have a per-SSL_CTX - but I'm not sure that is
needed. We could have a per-thread DRBG - but again it is unclear whether
we actually need that either.
My thoughts are per-SSL_CTX 

Re: [openssl-project] Copyrights, AUTHORS file, etc.

2018-03-13 Thread Tim Hudson
This discussion has been taken to the OMC mailing list (where it continues)
rather than the openssl-project list as it goes across previous team
decisions.
An update once that discussion completes will be sent to the
openssl-project list.

Thanks,
Tim.


On Tue, Mar 13, 2018 at 11:22 AM, Salz, Rich  wrote:

> For some background you can take a look at https://github.com/openssl/
> openssl/pull/5499 and the blog posts here: https://www.openssl.org/blog/
> blog/categories/license/
>
>
>
> The OMC voted in 2014 to work to an eventual change of license to Apache
> 2. That led to the CLA’s. We had calls with various open source folks who
> we thought could help us. This include Max Sills from the Google open
> source office, Gervais Markham who led the Mozilla relicensing effort,
> someone from GitHub (to talk about CLA automation), and probably another
> person who I am forgetting.  We also talked with our legal counsel, Mishi
> Choudhary of the Software Freedom Law Center.
>
>
>
> Around 2Q 3Q 2015 the discussions were completed, and we had coalesced
> around a plan. There was no formal OMC vote (it was called the openssl-team
> at that point). But there were no objections.  OMC members can skim the
> archives, starting around July 2015 if they need to refresh their memories.
>
>
>
> The key points of the plan are
>
>- Move to Apache license
>- Require CLA’s from all contributors
>- Reach out to everyone we can identify and get them to approve of the
>change, or not
>- Have a uniform copyright in all source files that points to the
>license and authors separately, for easier maintenance
>
>
>
> The “removing some code” blog post gives more details about the scripts we
> developed and what code we removed. Since then, nobody else has asked for
> their code to be removed.
>
>
>
> The file/copyright changes happened during the 1.1.0 release.
>
>
>
> We’re on the verge of being able to change the license, and as we said in
> our last press release, we are hoping and planning to do that for 1.1.1
>
>
>
> The PR that marks part of this has a -1 from Tim, which is a hold.  That
> means we have to discuss and the OMC vote on this.  This email is intended
> to give the background and start the discussion.
>
>
>
> So what are your objections Tim, and what do you want to see done
> differently? And also, please explain why it is better than the current
> plan.
>
>
>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] External contributors and the next release

2018-03-06 Thread Tim Hudson
If you are blocked on review please drop a note (like the one you just did)
to the group.
Some of us review the specifically blocked things when such notes are sent.

#3082 is already closed and merged - did you mean another PR?
#3958 approved (in case Richard doesn't get back to it)
#1130 approved
#3958 approved

Tim.



On Wed, Mar 7, 2018 at 2:40 PM, Benjamin Kaduk  wrote:

> On Wed, Mar 07, 2018 at 01:20:41AM +, Salz, Rich wrote:
> > I think we should make sure to set aside time to review as many of the
> non-project pull requests as possible.  I think it is important to show a
> commitment to the larger community.
>
> I agree.  I started looking at this last week, when we had something
> like 170 open issues+pull requests for the 1.1.1 milestone.  We
> still have 165, and that excludes a couple things that I would like
> to see get in but are not part of the milestone (like #3082 and
> #1130).  I also have several changes open myself that are small and
> could easily go in, though none of them are especially critical.
> (Richard, could you reconfirm on #3958, though, please?)
>
> > One way to do this is to say that we will extend the BETA date by a
> week, and in that week no new code from the team, only third-party existing
> pull requests will be considered
>
> I'm open to extending the date, though am interested to see what we
> can do if everybody chips in this week.  (Though, I myself will only
> be able to start on Friday, as my next couple days are fully
> booked.)
>
> -Ben
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] VOTE on travel reimbursement policy

2018-02-14 Thread Tim Hudson
[kurt]
> So I think we should either all vote in public, or nobody should vote in
public.

You make a good point there - I agree.

Tim.

Re: [openssl-project] VOTE on travel reimbursement policy

2018-02-14 Thread Tim Hudson
> Now, the initial posting went to both the OMC and the project list,
> and some chose to vote with a simple "Reply All" without editing the
> recipients.  If that was on purpose or because attention wasn't payed
> to that detail, I cannot say.

For my part, it was just a reply-all - but if I had stopped to look at the
details I still would have done a reply-all - as there is nothing in this
vote that needs to be kept private in my view.
I guess it is a choice for each OMC member as to whether or not they make
their actual votes public - the summary results will be - but the actual
votes were indeed meant to be on openssl-omc only.

It would also have been good for the text to be attached - especially when
we are sending vote details to openssl-project.

Tim.

Re: [openssl-project] Removing assembler for outdated algorithms

2018-02-10 Thread Tim Hudson
Before we look at removing things like this, I think we should look at
whether or not they actually have a significant maintenance cost.

Tim.


On 11 Feb. 2018 7:08 am, "Salz, Rich"  wrote:

This is derived from bureau/libcrypto-proposal that Emilia made in
November 2015.



We should remove the assembler versions of the following

Blowfish, cast, des, rc4, rc5, ripemd, whirlpool, md5



The reason is that they are outdated, not in use very much, and
optimization is not important, compared to having a single reference source
that we can maintain and debug.


