Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
I suggest everyone take a read through
https://en.wikipedia.org/wiki/Long-term_support as to what LTS is actually
meant to focus on.

What you (Ben and Matt) are both describing is not LTS but STS ... these
are different concepts.

For LTS the focus is *stability* and *reduced risk of disruption*, which
alters the judgement on what is appropriate to put into a release.
It then becomes a test of "is this bug fix worth the risk", with the general
focus on the lowest possible risk - which stops this sort of thing from
happening unless a critical feature is broken.

All of the "edge case" examples presented all involve substantial changes
to implementations and carry an inherent risk of breaking something else -
and that is the issue.
It would be different if we had a full regression test suite across all the
platforms and variations on platforms that our users are operating on - but
we don't.

We don't compare performance between releases or for any updates in our
test suite, so it isn't part of our scope for release readiness. If this is
such a critical thing that must be fixed, then it is something that we
should be measuring and checking as a release condition - but we don't,
because it actually isn't that critical.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Benjamin Kaduk
On Fri, Jun 19, 2020 at 11:46:15PM +0100, Matt Caswell wrote:
> 
> 
> On 19/06/2020 23:34, Tim Hudson wrote:
> > 
> > 
> > On Sat, 20 Jun 2020, 8:14 am Benjamin Kaduk wrote:
> > 
> > On Sat, Jun 20, 2020 at 08:11:16AM +1000, Tim Hudson wrote:
> > > The general concept is to only fix serious bugs in stable releases.
> > > Increasing performance is not fixing a bug - it is a feature.
> > 
> > Is remediating a significant performance regression fixing a bug?
> > 
> > 
> > It would be a bug - but not a serious bug. So no.
> > It works. It was released. 
> 
> I don't recall us ever saying that we would only fix serious bugs.
> AFAIK, if it's a bug then we will fix it. IMO a serious performance
> regression would qualify, within reason (e.g. major rewrites of the
> assembler would not be ok).
> 
> > Wholesale replacement of implementations of algorithms should not be
> > happening in LTS releases.
> 
> This I agree with.

Both of Matt's paragraphs match up with my understanding/sentiment.

-Ben


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Matt Caswell



On 19/06/2020 22:58, Kurt Roeckx wrote:
> On Fri, Jun 19, 2020 at 10:29:24PM +0100, Matt Caswell wrote:
>>
>> My immediate reaction to that is no - it shouldn't go to 1.1.1. That
>> would impact a very high proportion of our user base.
> 
> So is risk an important factor? How do you judge the risk? By the
> size of the change?

Yes, I do believe risk is an important factor although putting hard
rules around it is very difficult. There are bound to be some fuzzy
lines here between what is and isn't acceptable.

Matt


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Matt Caswell



On 19/06/2020 23:34, Tim Hudson wrote:
> 
> 
> On Sat, 20 Jun 2020, 8:14 am Benjamin Kaduk wrote:
> 
> On Sat, Jun 20, 2020 at 08:11:16AM +1000, Tim Hudson wrote:
> > The general concept is to only fix serious bugs in stable releases.
> > Increasing performance is not fixing a bug - it is a feature.
> 
> Is remediating a significant performance regression fixing a bug?
> 
> 
> It would be a bug - but not a serious bug. So no.
> It works. It was released. 

I don't recall us ever saying that we would only fix serious bugs.
AFAIK, if it's a bug then we will fix it. IMO a serious performance
regression would qualify, within reason (e.g. major rewrites of the
assembler would not be ok).

> Wholesale replacement of implementations of algorithms should not be
> happening in LTS releases.

This I agree with.

Matt


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
On Sat, 20 Jun 2020, 8:14 am Benjamin Kaduk wrote:

> On Sat, Jun 20, 2020 at 08:11:16AM +1000, Tim Hudson wrote:
> > The general concept is to only fix serious bugs in stable releases.
> > Increasing performance is not fixing a bug - it is a feature.
>
> Is remediating a significant performance regression fixing a bug?
>

It would be a bug - but not a serious bug. So no.
It works. It was released.
Wholesale replacement of implementations of algorithms should not be
happening in LTS releases.

We make no performance guarantees or statements in our releases (in
general).

And performance isn't an issue for the vast majority of our users.

Those for whom performance is critical also tend to be building their own
releases in my experience.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Benjamin Kaduk
On Sat, Jun 20, 2020 at 08:11:16AM +1000, Tim Hudson wrote:
> The general concept is to only fix serious bugs in stable releases.
> Increasing performance is not fixing a bug - it is a feature.

Is remediating a significant performance regression fixing a bug?

-Ben


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
The general concept is to only fix serious bugs in stable releases.
Increasing performance is not fixing a bug - it is a feature.

Swapping out one implementation of an algorithm for another is a significant
change and isn't something that should go into an LTS in my view.

It would be less of an issue for users if our release cadence was more
frequent - but the principle applies - stability is what an LTS is aimed
at. We should be fixing significant bugs only.

Arguing that slower performance is a bug is specious - it works - and
performance is not something that we document and guarantee. You don't find
our release notes stating performance=X for a release - because such a
statement makes little sense for the vast majority of users.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Kurt Roeckx
On Fri, Jun 19, 2020 at 10:29:24PM +0100, Matt Caswell wrote:
> 
> My immediate reaction to that is no - it shouldn't go to 1.1.1. That
> would impact a very high proportion of our user base.

So is risk an important factor? How do you judge the risk? By the
size of the change?


Kurt



Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Matt Caswell



On 19/06/2020 21:42, Kurt Roeckx wrote:
> I think one other thing that has come up is adding support for a
> new target, which can just be some small change to configuration
> files. Is that something we want to accept?

I think previously we have said that a new target is a new feature and
therefore haven't allowed it. But even this I think is a grey area. If a
target could be added simply by adding some lines to Configure, probably
the risk to our existing users is very low. OTOH, if adding a platform
means adding lots of ifdef hackery throughout the codebase then the risk
of something going wrong is significantly higher.

> So we currently also have PR #12201 that adds a new constant time
> P-384 implementation. Do you think we should backport that to the
> 1.1.1 branch? If the answer is different than for the assembler,
> why?

My immediate reaction to that is no - it shouldn't go to 1.1.1. That
would impact a very high proportion of our user base.

Matt


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Kurt Roeckx
I think one other thing that has come up is adding support for a
new target, which can just be some small change to configuration
files. Is that something we want to accept?


Kurt


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Kurt Roeckx
So we currently also have PR #12201 that adds a new constant time
P-384 implementation. Do you think we should backport that to the
1.1.1 branch? If the answer is different than for the assembler,
why?

Does 1.1.1 being an LTS release have any effect on the answer?
For instance, if we add some assembler during the 3.1 development,
and 3.0 is not an LTS release, but 1.1.1 is, and is still supported,
do we backport it to 1.1.1 and not 3.0?

We've at least talked about not adding a feature to the stable
branch. At least one way of looking at that is that we don't add new
functionality. You could argue that new assembler is not a new
feature, just a new implementation of an existing feature.


Kurt



Re: Naming conventions

2020-06-19 Thread Richard Levitte
On Fri, 19 Jun 2020 09:15:22 +0200,
Tomas Mraz wrote:
> The problem is that there is not really a fully consistent convention
> followed currently (even in 1.1.1, not counting the API additions in
> 3.0).
> 
> For example if we would like to follow the convention _fully_ we would
> have to rename these new EVP_MAC functions:
> 
> int EVP_MAC_init(EVP_MAC_CTX *ctx);
> int EVP_MAC_update(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
> int EVP_MAC_final(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);
> 
> to something like:
> 
> int EVP_MacInit(EVP_MAC_CTX *ctx);
> int EVP_MacUpdate(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
> int EVP_MacFinal(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);
> 
> or at least
> 
> int EVP_MACInit(EVP_MAC_CTX *ctx);
> int EVP_MACUpdate(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
> int EVP_MACFinal(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);

Someone's walking memory lane ;-)
I had to look back at OpenSSL_0_9_1c to remind myself, 'cause today,
we're quite mixed up, what with the slightly lower level functions
EVP_PKEY_sign_init, EVP_PKEY_sign, etc etc etc...

I would say, though, that if you want to do the higher level function
thing (which are all CamelCase as far as I can see), there's another
naming convention going on: at least with EVP_PKEY, it seems to be
done according to the higher level operation you want to perform, not
the type of data used to do it...  what do you normally do with a MAC?
So:

int EVP_AuthenticateInit(EVP_MAC_CTX *ctx);
int EVP_AuthenticateUpdate(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
int EVP_AuthenticateFinal(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);

... or something like that.  Quite frankly, a naming convention that
is about the operation rather than the type of the data is something I
could play along with.

> And I actually hope we could add at least non-CamelCase aliases to the
> existing (non-deprecated) CamelCase functions because they were always
> the worst offenders against API consistency.

I agree with you...  but we have to recognise that would be a bigger
remake, and if we do look at just the CamelCase API, it's actually
fairly consistent (turning a blind eye to the DigestSign quirkiness when
I say that).

(I suppose that CamelCase was inspired by the time, it was quite
popular in certain circles in the 90's, and got especially popularised
by Microsoft...  and well, there are quite a few Microsoft ideas that
have infiltrated OpenSSL...  need I mention '_ex'?)

Cheers,
Richard

-- 
Richard Levitte levi...@openssl.org
OpenSSL Project http://www.openssl.org/~levitte/


Re: Naming conventions

2020-06-19 Thread Tomas Mraz
On Fri, 2020-06-19 at 01:48 +1000, Tim Hudson wrote:
> We have a convention that we mostly follow. Adding new stuff with
> variations in the convention offers no benefit without also adjusting
> the rest of the API. Inconsistencies really do not assist any
> developer.
> 
> Where APIs have been added that don't follow the conventions they
> should be changed. 
> 
> It really is that simple - each developer may have a different set of
> personal preferences and if we simply allow any two people to pick
> their own API pattern effectively at whim we end up with a real mess
> over time.
> 
> This example is a clear cut case where we should fix the unnecessary
> variation in the pattern. It serves no benefit whatsoever to have
> such a mix of API patterns. 
> 
> We do have some variations that we should adjust - and for APIs that
> have been in official releases dropping in backwards compatibility
> macros is appropriate.
> 
> The argument that we aren't completely consistent is specious - it is
> saying because we have a few mistakes that have slipped through the
> cracks we have open season on API patterns. 
> 
> It also would not hurt to have an automated check of API deviations
> on commits to catch such things in future. 

The problem is that there is not really a fully consistent convention
followed currently (even in 1.1.1, not counting the API additions in
3.0).

For example if we would like to follow the convention _fully_ we would
have to rename these new EVP_MAC functions:

int EVP_MAC_init(EVP_MAC_CTX *ctx);
int EVP_MAC_update(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
int EVP_MAC_final(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);

to something like:

int EVP_MacInit(EVP_MAC_CTX *ctx);
int EVP_MacUpdate(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
int EVP_MacFinal(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);

or at least

int EVP_MACInit(EVP_MAC_CTX *ctx);
int EVP_MACUpdate(EVP_MAC_CTX *ctx, const unsigned char *data, size_t datalen);
int EVP_MACFinal(EVP_MAC_CTX *ctx, unsigned char *out, size_t *outl, size_t outsize);

Should we do that? Given the sheer ugliness of the supposedly
consistent names, I hope we do not.

Fortunately or unfortunately, depending on personal opinions, we have
already diverged from that pattern with the EVP_PKEY API.

And I actually hope we could add at least non-CamelCase aliases to the
existing (non-deprecated) CamelCase functions because they were always
the worst offenders against API consistency.

-- 
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your
conscience.]