[openssl/general-policies] 8ff792: add vote

2022-09-14 Thread Tim Hudson
  Branch: refs/heads/master
  Home:   https://github.com/openssl/general-policies
  Commit: 8ff792500fea00294cf54f2adc203bc8c0cea3b1
  
https://github.com/openssl/general-policies/commit/8ff792500fea00294cf54f2adc203bc8c0cea3b1
  Author: Tim Hudson 
  Date:   2022-09-14 (Wed, 14 Sep 2022)

  Changed paths:
A votes/vote-20220914-provider-abi-testing.txt

  Log Message:
  ---
  add vote




Re: Re-arrange the library structure - Re: CMP is a subproject?

2022-07-08 Thread Tim Hudson
So one of the ways to handle a transition that I've thought about a number
of times would be the following.

1) We split things out into multiple separate libraries.
2) We keep a combined library around using the current names which has all
the split libraries joined together.

So people who want to move to using just the parts they need can
transition, and people who have fixed usage based on the practice of the
last 20 years keep working.

Then we could split out the different TLS protocols, CMP, S/MIME, OCSP,
etc. into separate libraries - but also not break the packaging usage of
all existing applications.

In terms of git sub-projects - we could also look at that side of things
too - and if we combine the source release from multiple sub-projects into
one for packaging purposes we would again have a path to allow us to change
things around without having a significant impact on the packages that use
openssl already.

These are just some thoughts about ways to move from where we are at the
moment without causing unnecessary breakage elsewhere.
I still think this isn't something to do until the next major version -
i.e. openssl 4.0

Tim.


On Fri, Jul 8, 2022 at 5:23 PM Tomas Mraz  wrote:

> Just moving things around in the source tree does not achieve much so
> without the actual splitting the functionality out of the libcrypto
> does not make sense to me. Maybe it could be seen as a preparation step
> for the split out.
>
> However yes, it was a misunderstanding IMO that we would want to split
> it completely out of the main OpenSSL project tree. I do not think this
> is necessary or even desirable.
>
> However as many applications do not need this functionality it would be
> useful to have it as a separate shared library so loading libcrypto is
> less expensive. Still I think this is a 4.0 project.
>
> Tomas
>
> On Fri, 2022-07-08 at 09:00 +0200, David von Oheimb wrote:
> > On 07.07.22 23:02, Tim Hudson wrote:
> >
> > > I do not think this makes sense at this stage at all.
> > >  One of the key elements people are looking for when contributing
> > > code is the distribution vector of getting included in default OS
> > > distributions and standard builds.
> > I fully agree.
> > To avoid any misunderstandings on what I wrote before:
> >  My proposal (possibly in difference to Dmitry's) was and still is
> > not to move any functionality out of the OpenSSL main repository,
> >  but to re-arrange the library structure (likely by splitting
> > libcrypto into two or more libraries) to better reflect the code
> > layering.
> > Expected benefits:
> >  * improve clarity of the software component structure
> >  * slightly alleviate development and maintenance
> >  * reduce binary code footprint in case just the crypto core or just
> > TLS (including crypto) is needed
> > Expected drawbacks:
> >  * any re-structuring requires more or less work
> >  * some so far internal crypto interfaces that are used by the more
> > application-level code need to be exported
> >  * applications that also require the more high-level capabilities
> > will need to link with more libraries
> > We may also consider splitting the existing libcrypto just virtually,
> > e.g., into two code directories (say, crypto/ and crypto/apps/, which
> > includes CMS, CMP, OCSP, HTTP, TS, etc.)
> >  plus an actual library (say, libapps) that is more application-level
> > and includes everything that requires both TLS and crypto features,
> > such as HTTPS and part of (or even all of) apps/lib/.
> >  This likely would provide a better pros/cons ratio than actually
> > splitting up libcrypto.
> >
> >
> > > This is something we could look at tackling in a future major
> > > release - but even then it faces challenges to be workable for the
> > > desired outcome (broad distribution of capability).
> >  With just re-arranging the code into three (or more, rather than so
> > far two) OpenSSL libraries, there will be no issue with capability
> > because nothing is lost for OpenSSL users.
> >  In particular, as Tomas wrote, the openssl app will continue to
> > provide everything that it did before.
> >
> > David
> >
> >
> > > On Thu, 7 July 2022, 18:48 Tomas Mraz,  wrote:
> > >
> > > > OpenSSL Project list should be used instead of the committers
> > > > list for
> > > >  such discussions.
> > > >
> > > >  I do not think it would be a good idea to do any such splitting
> > > > before a
> > > >  major release development is being started (i.e., 4.0).

Re: CMP is a subproject?

2022-07-07 Thread Tim Hudson
I do not think this makes sense at this stage at all. One of the key
elements people are looking for when contributing code is the distribution
vector of getting included in default OS distributions and standard builds.

This is something we could look at tackling in a future major release - but
even then it faces challenges to be workable for the desired outcome (broad
distribution of capability).

Tim.


On Thu, 7 July 2022, 18:48 Tomas Mraz,  wrote:

> OpenSSL Project list should be used instead of the committers list for
> such discussions.
>
> I do not think it would be a good idea to do any such splitting before a
> major release development is being started (i.e., 4.0).
>
> The openssl application could depend on that application library(ies).
>
> Tomas
>
> On Wed, 2022-07-06 at 09:32 +0200, David von Oheimb wrote:
> > Yes, there are a number of components that should better be moved out
> > of the core crypto library into a more application-level one.
> >  As I wrote three days ago, though my email got stuck in mailing list
> > moderation:
> >
> >   Forwarded Message  Subject: Re: CMP is a
> > subproject? Date: Sun, 3 Jul 2022 22:50:06 +0200 From: David von
> > Oheimb  To: Dmitry Belyavsky
> > , List of openssl committers
> > 
> >  Dear all, thanks Dmitry for sharing this thought.
> >  In a sense it is an instance of a more general suggestion I gave
> >  * back in 2017:  Introducing an application-level library for the
> > CLI and OpenSSL-based applications #4992
> >  * and in 2020:  Improve overall OpenSSL library structure #13440
> > which pertains also to CMS, HTTP, OCSP, TS, and maybe further more
> > application-level component(s) of libcrypto like CT.
> > The CMP implementation does not rely on libssl, but it does heavily
> > rely on libcrypto and relies on some of its internals.
> >  The same holds for HTTP, and likely this also holds for CMS, OCSP,
> > TS, and CT.
> >  David
> >
> >
> > On 06.07.22 07:25, Dr Paul Dale wrote:
> >
> > > I'd support such a change.  Our stability policy won't allow it
> > > without an exception.
> > >
> > > There are a lot more things that could be moved out IMO.
> > >
> > >
> > > Pauli
> > >
> > >
> > > On 6/7/22 15:22, Benjamin Kaduk wrote:
> > >
> > > > On Sun, Jul 03, 2022 at 09:51:23PM +0200, Dmitry Belyavsky wrote:
> > > >
> > > > > Dear colleagues,
> > > > >
> > > > > With all respect to David's efforts - isn't it worth turning
> > > > > CMP into a
> > > > > separate library in OpenSSL (and probably into a separate
> > > > > repo)? I remember
> > > > > there was a separate PR in this direction.
> > > > I think I found https://github.com/openssl/openssl/issues/16358
> > > > just now,
> > > > but maybe there are others.
> > > >
> > > >
> > > > > It looks like CMP heavily relies on libcrypto/libssl, but I'm
> > > > > not sure it
> > > > > requires an integration - and, last but not least, has its own
> > > > > life cycle.
> > > > > Several years ago this seemed a good rationale both to me and
> > > > > to the
> > > > > OpenSSL team to separate a GOST engine.
> > > > It looks like there was some discussion in
> > > > https://github.com/openssl/openssl/pull/6811 that suggests that
> > > > having
> > > > apps/cmp.c functionality was a key motivation for pulling in
> > > > everything to
> > > > libcrypto itself, but I'm not sure how far the conversation of
> > > > in-OpenSSL
> > > > vs standalone project really went at that time. I don't think I
> > > > have
> > > > anything to add to that discussion other than what you say above.
> > > >
> > > > -Ben
> > > >
>
> --
> Tomáš Mráz, OpenSSL
>
>


Re: OTC VOTE: Accept Policy change process proposal

2021-11-01 Thread Tim Hudson
+1

Tim.


On Mon, Nov 1, 2021 at 8:23 PM Tomas Mraz  wrote:

> topic: Accept openssl/technical-policies PR#1 - the policy change
> process proposal as of commit 3bccdf6. This will become an official OTC
> policy.
>
> comment: This will implement the formal policy change process so we can
> introduce and amend further policies as set by OTC via a public
> process.
>
> Proposed by Tomáš Mráz
> Public: yes
> opened: 2021-11-01
> closed: 2021-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>Dmitry [  ]
>Matt   [  ]
>Pauli  [  ]
>Tim[  ]
>Richard[  ]
>Shane  [  ]
>Tomas  [+1]
>Kurt   [  ]
>Matthias   [  ]
>Nicola [  ]
>
>
>
>


Re: OTC VOTE: Accept PR#16725

2021-10-20 Thread Tim Hudson
On the basis of further discussion I'm changing my vote to 0 which enables
this to pass now.

Tim.


On Wed, Oct 20, 2021 at 5:18 PM Matt Caswell  wrote:

>
>
> On 19/10/2021 19:31, Nicola Tuveri wrote:
> > I believe Matt will find the time at some point to post the minutes
> > from today's meeting, but until then here is my recap.
>
> We decided in the meeting that posting the minutes to the list wasn't
> necessary and we would just push them to the repo:
>
>
> https://git.openssl.org/gitweb/?p=otc.git;a=blob;f=meeting-minutes/minutes-2021-10-19.txt;h=8bae2b86ecd7c4f967ba2aa822535dc0facbbfa9;hb=HEAD
>
> Matt
>
> >
> > The discussion mostly focused on why the changes in #16725 are a
> > bugfix and not a new feature, which is a prerequisite for being
> > admissible for merging into the 3.0 branch.
> > As I recall it, there were no objections to the final outcome of the
> > PR being desirable; the vote is entirely about whether this is a
> > bugfix or not.
> >
> > It would be on those who voted +1 to properly argue why this is a
> > bugfix and not a new feature, but the short version of that argument
> > is that the outcome of #16725 was the "intended behavior" for 3.0.0.
> > The counterargument is that we could not find written evidence (i.e.,
> > GH issues/PRs, documentation, and/or tests) that indeed the project
> > ever committed to have this behavior in 3.0.0.
> >
> >
> > The Strategic Architecture document has some text that could be
> > somewhat related and used to support the "intended behavior" view, but
> > the document clearly states
> >
> >> This document outlines the OpenSSL strategic architecture. It will take
> multiple releases, starting from 3.0.0, to move the architecture from the
> current "as-is" (1.1.1), to the future "to-be" architecture.
> >
> > Hence, it does not really prove that this functionality was always
> > planned for the 3.0.0 release.
> >
> > Accepting this PR for the next minor release would not require a vote.
> >
> >
> >
> > I hope this recap is helpful to inform your decision.
> >
> >
> >
> > Cheers,
> >
> > Nicola
> >
> > On Tue, Oct 19, 2021 at 9:10 PM Kurt Roeckx  wrote:
> >>
> >> On Tue, Oct 19, 2021 at 11:07:26AM +0100, Matt Caswell wrote:
> >>> topic: Accept PR#16725 as a bug fix for backport into 3.0 subject to
> the
> >>> normal review process
> >>
> >> So we have various people voting -1. Does someone want to explain
> >> why they vote -1?
> >>
> >>
> >> Kurt
> >>
> >
>


Re: OTC quick votes [WAS: RE: OTC vote PR #16171: config_diagnostic]

2021-08-29 Thread Tim Hudson
You cannot meaningfully vote on the PR without reviewing it. It is that
simple. There is zero point in providing details beyond that as the PR is
the details. The subject of the PR doesn't remove the need to read the
details to form a view.

The commentary on the PR and the code itself is what needs to be reviewed
in order to form a viewpoint - and that journey needs to be gone through
if you want to offer a meaningful opinion on a PR.

Any attempt to make a decision based on any form of summary of a PR is
completely missing the point of participation in the review process - you
might as well toss a coin to form your vote if you aren't going to look at
the actual details.

Reviewing the code changes and the comments is what is necessary for that
purpose and the PR itself is where we want any additional comments to be
made.

Remember that this context is where an OTC member has placed a hold on a
PR for a discussion and decision to be made, and either there is no clear
consensus in the OTC meeting on the path forward or an OTC member wants a
formal decision made for whatever reason. There is always a discussion
prior to the vote being called.

A substantial portion of the PRs result in one or more OTC members
acknowledging that they didn't realise what the PR meant from having seen
its title or summary description - and that actually having to read the
full commentary *and* the code to understand the context was absolutely
necessary.

Remember that someone has generally gone to the effort of raising an issue
and then creating a proposed solution in a PR, and multiple people have
looked at it and provided feedback, and one or more of them decided that we
need a decision made for some reason (and those reasons vary widely). To
make an informed decision reasonably requires reviewing all that
information to form a view.

There is no short cut to reading the PR itself and the vote text reflects
that reality.

In our OTC meetings we review the issues and review the PRs and we do that
by going through the details - not by working off the title of the issue or
PR.

If you want to only comment on PRs off a summary list then go to GitHub and
use its summary views to get that.

We could perhaps even add another label with OTC vote pending which might
get close to what you want in a single GitHub view.

It won't avoid the need to actually read the details to be able to form a
view prior to voting.

Tim.


On Sun, 29 Aug 2021, 22:35 Kurt Roeckx,  wrote:

> I currently fail to see why you can't describe in words what you
> intend to fix. The PR itself has a subject, and so do the commits.
>
> One of the reasons we have this vote is public is so that people
> reading this list can comment on it. Just some number doesn't tell
> them anything without having to go and look it up and spend time
> trying to understand. A simple summary of what we intend to vote
> on will help them understand if they want to look at this or not.
>
> If you feel that it should not be part of the vote text, can we at
> least have some introduction text?
>
> For instance PR 16203 was about adding the TLS 1.3 KDF. I can see
> several vote text for that:
> - Add the TLS 1.3 KDF to the provider in 3.0.
> - Add the TLS 1.3 KDF to the provider, and use it in the ssl library
>   in 3.0 so that TLS 1.3 can be used in FIPS mode.
> - Add support for using TLS 1.3 in FIPS mode in 3.0
>
> Most of our votes are not about how it's implemented, but really
> a general direction. As you say yourself, it's about proceeding in
> that direction. I don't see why you can't describe that direction
> with words.
>
> Some of the PR and issues we vote on have different view
> points based on the comments. Am I voting on the last comment
> made in the PR or issue? With a PR you can argue it's not the
> comments but the code, but what happens in case the PR is changed
> before I can vote? We have also voted on issues, where several
> things were mentioned in the issue; some things I agree we should
> do, others I don't.
>
>
> Kurt
>
> On Wed, Aug 04, 2021 at 08:25:59AM +1000, Tim Hudson wrote:
> > The votes on the PR are precisely that - to vote to proceed with the PR
> via
> > the normal review process - and that means looking at the varying
> > viewpoints.
> > If we reached a consensus that overall we didn't think the PR made sense
> > then we wouldn't form a vote of that form.
> >
> > What you are voting for is that what the PR holds is what makes sense to
> > proceed with - subject to the normal review process - i.e. this isn't a
> > push-the-PR-in-right-now vote - it is a "proceed" vote.
> >
> > A PR has a discussion in the PR comments, and generally an associated
> > issue, and always (as it is a PR) the suggested code change.

Re: OTC quick votes [WAS: RE: OTC vote PR #16171: config_diagnostic]

2021-08-03 Thread Tim Hudson
The votes on the PR are precisely that - to vote to proceed with the PR via
the normal review process - and that means looking at the varying
viewpoints.
If we reached a consensus that overall we didn't think the PR made sense
then we wouldn't form a vote of that form.

What you are voting for is that what the PR holds is what makes sense to
proceed with - subject to the normal review process - i.e. this isn't a
push-the-PR-in-right-now vote - it is a "proceed" vote.

A PR has a discussion in the PR comments, and generally an associated
issue, and always (as it is a PR) the suggested code change.
That is what is being voted on - to proceed with the PR - via the normal
review process.

As otherwise the PR remains blocked on an OTC decision - and the OTC
decision is to not continue to block the PR (blocked on an OTC decision).

Repeating again - the PR itself is what is being voted about - not a set of
different unstated viewpoints - it is what is in the PR - fix this problem
- and generally there is nothing critical seen in the code approach - but
our normal review processes still apply to handle it. Which means the PR
simply needs the two approvals.

The majority of the time in the OTC discussions we are actually looking at
the PR details in terms of the code changes - we aren't reviewing it as
such (although that does sometimes happen on the call) - we are looking at
the PR code changes providing the additional detail to help in the decision
making.

There are very few PRs which describe what is going to change in a useful
form independent of the code changes - they are focused on what is wrong
and on the PR fixing it, and usually say nothing meaningful about what is
going to be changed in order to fix it.

A PR doesn't hold a range of code fixes to choose between - it holds a
discussion (comments) and a specific code fix - and perhaps that specific
code fix is the result of a sequence of code fixes that have evolved
through the discussion.

So the precise viewpoint you are voting for in those PRs is to proceed to
include that PR in the work for the current release while continuing to use
our normal review and approval processes - that is the vote text - and it
makes the intent of the vote.

Where there is a view that we should not take a particular approach
represented in a PR and should take an alternate approach then we don't
form a vote that way - we actually allocate someone to produce an alternate
PR. Often we leave the initial PR and alternate PR open until such time as
we can compare the approaches in concrete form and then we can make a
decision - but that would be accepting one PR over another PR. We have had
"competing" PRs regularly - and we then vote on the alternatives - where it
is clear what the alternatives are. A single PR vote is about that PR.

Tim.


On Wed, Aug 4, 2021 at 8:07 AM Kurt Roeckx  wrote:

> On Wed, Aug 04, 2021 at 07:21:57AM +1000, Tim Hudson wrote:
> >
> > This isn't about the OTC meeting itself - this is about the details of
> the
> > topic actually being captured within the PR.
> > You need to actually look at the PR to form a view. And we do add to the
> > PRs during the discussion if things come up and we review the PR details.
> > So the vote isn't about an OTC discussion - the vote is precisely about
> the
> > PR itself.
> >
> > One of the things we have explicitly discussed on multiple calls is that
> in
> > order to be informed of the details, you need to consult the PR which
> has a
> > record of the discussion and often viewpoints that offer rather different
> > positions on a topic - and those viewpoints are what should be being
> > considered.
>
> I am happy to read the issue/PR to see the different view points.
> But different viewpoints is exactly the problem, which of those am
> I voting for?
>
> During a meeting, there probably was a consensus about what you're
> actually voting for, but that information is lost.
>
> In general, I want a vote to be about what is going to change, the
> how isn't important most of the time.
>
>
> Kurt
>
>


Re: OTC quick votes [WAS: RE: OTC vote PR #16171: config_diagnostic]

2021-08-03 Thread Tim Hudson
On Wed, Aug 4, 2021 at 5:26 AM Dr. Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

> > I'm starting to vote -1 on anything that has a vote text that looks like
> that, so -1.
>
> I perfectly understand Kurt's dislike of this kind of votes. The text is
> not very informative for OTC members who
> weren't able to participate in the weekly video meetings.
>

This isn't about the OTC meeting itself - this is about the details of the
topic actually being captured within the PR.
You need to actually look at the PR to form a view. And we do add to the
PRs during the discussion if things come up and we review the PR details.
So the vote isn't about an OTC discussion - the vote is precisely about the
PR itself.

One of the things we have explicitly discussed on multiple calls is that in
order to be informed of the details, you need to consult the PR which has a
record of the discussion and often viewpoints that offer rather different
positions on a topic - and those viewpoints are what should be being
considered.

If you wanted to form a view on the basis of vote text without a reference
to the PR, then you would require replication of the issue, the PR code
changes, and the PR comment discussion.
Such an approach would be a complete waste of time and resources. We want
the discussion in terms of an issue to be in the PR itself.

Often during OTC meetings additional comments (separate from the consensus
viewpoint) are added capturing a relevant point from the OTC discussions
(which can be quite free flowing).

The email based voting mechanism is a separate topic and one I've expressed
views on multiple times - as I do think we should have an online mechanism
for voting and tallying and communicating about votes - but that isn't this
specific issue Kurt is raising here.

Tim.


Re: OTC VOTE: Accept PR 16128

2021-07-22 Thread Tim Hudson
+1 as this is consistent with previous OTC post-beta decisions to accept
such changes (subject to OTC vote).

Tim.


On Thu, Jul 22, 2021 at 10:51 PM Matt Caswell  wrote:

> topic: Accept PR 16128 in 3.0 subject to our normal review process
> Proposed by Matt Caswell
> Public: yes
> opened: 2021-07-22
> closed: 2021-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>Matt   [+1]
>Pauli  [  ]
>Tim[  ]
>Richard[  ]
>Shane  [  ]
>Tomas  [  ]
>Kurt   [  ]
>Matthias   [  ]
>Nicola [  ]
>


Re: OTC VOTE: Reject PR#14759

2021-04-20 Thread Tim Hudson
On the call the votes were very clear to accept the PR (not reject it).
So I'm again rejecting the request to reject the PR -  it would be better
to express such votes in positive terms.

-1

Tim.



On Wed, Apr 21, 2021 at 9:06 AM Dr Paul Dale  wrote:

> -0
>
> Pauli
>
> On 20/4/21 8:23 pm, Nicola Tuveri wrote:
> > Following up on
> > https://www.mail-archive.com/openssl-project@openssl.org/msg02407.html
> > 
>
> > we had a discussion on this during last week OTC meeting, and opened a
> > vote limited exclusively to the matter of rejecting PR#14759.
> >
> > We lost the record of the votes collected during the call, so opening
> > it officially today with a clean slate.
> >
> >
> > 
> > topic: Reject PR#14759
> > Proposed by Nicola Tuveri
> > Public: yes
> > opened: 2021-04-20
> > closed: 2021-mm-dd
> > accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
> >   Matt   [  ]
> >   Mark   [  ]
> >   Pauli  [  ]
> >   Viktor [  ]
> >   Tim[  ]
> >   Richard[  ]
> >   Shane  [  ]
> >   Tomas  [  ]
> >   Kurt   [  ]
> >   Matthias   [  ]
> >   Nicola [+1]
> >
> >
>
>


Re: [OTC VOTE PROPOSAL] Don't merge PR#14759 (blinding=yes and similar properties)

2021-04-08 Thread Tim Hudson
Nicola, you are (in my view) conflating multiple items.

For the provider and property design approach it is rather simple.
- Algorithm implementations can vary.
- Selection between algorithm implementations when multiple providers are
available is performed by properties.

Algorithm implementations should declare whatever set of properties they
feel is appropriate for their implementation.
Applications (and in this context most likely directly by end-user
configuration) should be able to select which properties are considered
most important for their context.
That decision capability must be left to the end user as only the end user
knows the security context in which they are operating - we don't know that
ourselves.

The vast majority of your lengthy email below is actually focused on one
issue - what should the default behaviour be for selection of
implementations - and your viewpoint that we should not mark different
properties at all that might impact security. I don't think that position
is supportable - in that you are basically arguing that we should never
declare anything about properties of implementations and should never
select between different implementations except at a "provider level"
approach. Your approach is that all implementations should be considered
equal and that pretty much defies logic in my view.

Different implementations have different characteristics. Even looking at
something like constant time - not all of our implementations are constant
time.

Your statement that the approach of declaring properties "promotes insecure
by default" is simply flawed logic - and following the same logic I could
state that your approach "promotes ignorance by default" as it effectively
states that users shouldn't know the properties of the implementation in a
manner that allows selection to be performed.

Not all implementations are the same and different implementations can
implement different mitigations. Properties allow us to declare those
mitigations and allow users to make different decisions on the basis of the
properties that providers declare for algorithms. Having that information
available has to be a better thing than having nothing available - as with
nothing available then no selection between alternatives is possible.

Separately arguing about what the default set of properties should be (i.e.
what mitigations we should configure as required by default) would make
sense - but arguing that the information for making such decisions
shouldn't be present simply makes no sense.

Tim.


On Fri, Apr 9, 2021 at 3:02 AM Nicola Tuveri  wrote:

> Background
> ==
>
> [PR#14759](https://github.com/openssl/openssl/pull/14759) (Set
> blinding=yes property on some algorithm implementations) is a fix for
> [Issue#14654](https://github.com/openssl/openssl/issues/14654) which
> itself is a spin-off of
> [Issue#14616](https://github.com/openssl/openssl/issues/14616).
>
> The original issue is about _Deprecated low level key API's that have no
> replacements_, among which the following received special attention and
> were issued a dedicated issue during an OTC meeting:
>
> ~~~c
> // RSA functions on RSA_FLAG_*
> void RSA_clear_flags(RSA *r, int flags);
> int RSA_test_flags(const RSA *r, int flags);
> void RSA_set_flags(RSA *r, int flags);
>
> // RSA_FLAG_* about blinding
> #define RSA_FLAG_BLINDING
> #define RSA_FLAG_NO_BLINDING
>
> // RSA functions directly on blinding
> int RSA_blinding_on(RSA *rsa, BN_CTX *ctx);
> void RSA_blinding_off(RSA *rsa);
> BN_BLINDING *RSA_setup_blinding(RSA *rsa, BN_CTX *ctx);
> ~~~
>
> The decision that sprung Issue#14616 and PR#14759 was to use the
> propquery mechanism to let providers advertise algorithms as
> `blinding=yes` to select secure implementations if there are insecure
> ones present as well.
>
> Similarly, a potential `consttime=yes` property was discussed that
> would work in a similar way: if applied properly to our current
> implementations that are not fully constant time, it would allow a
> user/sysadmin/developer to prefer a third-party implementation of the
> same algorithm with better security guarantees.
> In some contexts the consttime implementation might seriously
> penalize performance, and in contexts where constant time
> is not required this would allow selecting accordingly.
>
> Definition for the blinding property
> 
>
> The current definition of the `blinding` property applies to
> provider-native algorithm implementations for the `asym_cipher` and
> `signature` operations:
>
> ```pod
> =head2 Properties
>
> The following property is set by some of the OpenSSL signature
> algorithms.
>
> =over 4
>
> =item "blinding"
>
> This boolean property is set to "yes" if the implementation performs
> blinding to prevent some side-channel attacks.
> ```
>
> Rationale
> =
>
> Property queries are our decision making process for implementation
> selection, and has been

Re: OTC Vote: Remove the RSA_SSLV23_PADDING and related functions completely

2021-02-23 Thread Tim Hudson
0

Tim.

On Tue, Feb 23, 2021 at 8:21 PM Tomas Mraz  wrote:

> topic: The RSA_SSLV23_PADDING and related functions should be
> completely removed from OpenSSL 3.0 code.
>
> comment: The padding mode and the related functions (which are already
> deprecated in the current master branch) is useless outside of SSLv2
> support. We do not support SSLv2 and we do not expect anybody using
> OpenSSL 3.0 to try to support SSLv2 by calling those functions.
>
> Proposed by Tomas Mraz.
>
> public: yes
> opened: 2021-02-23
>
>
>
>


Re: OSSL_PARAM behaviour for unknown keys

2020-12-15 Thread Tim Hudson
On Tue, Dec 15, 2020 at 10:24 PM Kurt Roeckx  wrote:

> If an application wants to switch from one to the other
> algorithm, it should be as easy as possible. But the application
> might need to change, and might need to be aware which parameters
> are needed.


A provider may not need any of those parameters - it might just need (for
example) a label or key name.
That could be entirely sufficient and valid for an HSM usage scenario and
setting up a key in that manner should be permitted.
Then you don't have any of the sort of parameters you are talking about and
it remains perfectly valid - for that provider.
For other providers the list may be different.

This is one of the areas where there is a conceptual difference - it is a
collection of things a provider needs to do its work - it isn't necessarily
a complete standalone portable definition of a cryptographic object with
all elements available and provided by the application.

Part of the point of this is that you should be able to use different
algorithms without the application having to change - that is part of the
point of the sort of APIs we have - so that applications can work with
whatever the user wants to work with, and you don't always have to go and
add extra code to every application when something new comes along that we
want to support.

Tim.


Re: OSSL_PARAM behaviour for unknown keys

2020-12-14 Thread Tim Hudson
On Tue, Dec 15, 2020 at 5:46 PM Richard Levitte  wrote:

> Of course, checking the gettable and settable tables beforehand works
> as well.  They were originally never meant to be mandatory, but I
> guess we're moving that way...
>

The only party that knows whether or not a given parameter is critically
important is the application.
The gettable and settable interfaces provide the ability to check that.

For forward and backward compatibility it makes no sense to wire in a
requirement for complete knowledge of everything that is provided.

To have extensibility wired into the APIs, you need to be able to provide
extra optional parameters that some implementations want to consume and
that are entirely irrelevant to other implementations - that was one of the
purposes of the new plumbing, to be able to handle things going forward. If
you change things at this late stage to basically say that everything has
to know everything, then we lose that ability.

In practical terms too, later releases of applications need to be able to
work with earlier releases of providers (specifically, but not limited to,
the FIPS provider). Wiring in a requirement to be aware of all parameters
would mean you could never reach the interchangeable-provider goal that the
stable ABI exists for; it would simply lead to provider implementations
having to ignore things to achieve the necessary outcome.

If you want to know if a specific implementation is aware of something, the
interface is already there.

In short - I don't see an issue as there is a way to check, and the
interface is designed for forward and backward compatibility and that is
more important than the various items raised here so far IMHO.

Tim


Re: OTC VOTE: Keeping API compatibility with missing public key

2020-12-04 Thread Tim Hudson
+1

Note I support also changing all key types to be able to operate without
the public component (where that is possible) which goes beyond what this
vote covers (as previously noted).
Having a documented conceptual model that is at odds with the code isn't a
good thing and in particular this choice of conceptual model isn't one that
is appropriate in my view.

Tim.


On Fri, Dec 4, 2020 at 10:45 PM Tomas Mraz  wrote:

> Vote background
> ---
>
> The vote on relaxing the conceptual model in regards to required public
> component for EVP_PKEY has passed with the following text:
>
> For 3.0 EVP_PKEY keys, the OTC accepts the following resolution:
> * relax the conceptual model to allow private keys to exist without
> public components;
> * all implementations apart from EC require the public component to be
> present;
> * relax implementation for EC key management to allow private keys that
> do not contain public keys and
> * our decoders unconditionally generate the public key (where
> possible).
>
> However since then the issue 13506 [1] was reported.
>
> During OTC meeting we concluded that we might need to relax also other
> public key algorithm implementations to allow private keys without
> public component.
>
> Vote
> 
>
> topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable
>with 1.1.1 EVP_PKEY API or low level APIs without public component
> must
>stay usable.
>
>This overrules the
>  * all implementations apart from EC require the public component
> to be present;
>part of the vote closed on 2020-11-17.
>
> Proposed by Tomas Mraz
> Public: yes
> opened: 2020-12-04
>
> Tomas Mraz
>
>
>


Re: OTC VOTE: Fixing missing failure exit status is a bug fix

2020-11-30 Thread Tim Hudson
+1

On Mon, 30 Nov 2020, 10:03 pm Nicola Tuveri,  wrote:

> Vote background
> ---
>
> This follows up on a [previous proposal] that was abandoned in favor of
> an OMC vote on the behavior change introduced in [PR#13359].
> Within today's OTC meeting this was further discussed with the attending
> members that also sit in the OMC.
>
> The suggestion was to improve the separation of the OTC and OMC domains
> here, by having a more generic OTC vote to qualify as bug fixes the
> changes to let any OpenSSL app return an (early) failure exit status
> when a called function fails.
>
> The idea is that, if we agree on this technical definition, then no OMC
> vote to allow a behavior change in the apps would be required in
> general, unless, on a case-by-case basis, the "OMC hold" process is
> invoked for whatever reason on the specific bug fix, triggering the
> usual OMC decision process.
>
> [previous proposal]:
> 
> [PR#13359]: 
>
>
>
> Vote text
> -
>
> topic: In the context of the OpenSSL apps, the OTC qualifies as bug
>fixes the changes to return a failure exit status when a called
>function fails with an unhandled return value.
>Even when these bug fixes change the apps behavior triggering
>early exits (compared to previous versions of the apps), as bug
>fixes, they do not qualify as behavior changes that require an
>explicit OMC approval.
> Proposed by Nicola Tuveri
> Public: yes
> opened: 2020-11-30
>


Re: OTC VOTE: EVP_PKEY private/public key components

2020-11-09 Thread Tim Hudson
-1

As I said in the OTC meeting, I agree we should change the conceptual model
to not require a private key to be a super-set of a public key; however I
do not think we should introduce key-type specific behaviour in this area -
i.e. if it makes sense to change the model then it should apply equally to
(for example) RSA keys as to EC keys.

Tim.

On Tue, Nov 10, 2020 at 2:20 AM Dr. Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

> 0
>
> > -Original Message-
> > From: openssl-project  On Behalf
> Of Matt Caswell
> > Sent: Tuesday, November 3, 2020 1:11 PM
> > To: openssl-project@openssl.org
> > Subject: OTC VOTE: EVP_PKEY private/public key components
> >
> > Background to the vote:
> >
> > The OTC meeting today discussed the problems raised by issue #12612. In
> > summary the problem is that there has been a long standing, widespread
> > and documented assumption that an EVP_PKEY with a private key will
> > always also have the public key component.
> >
> > In spite of this it turns out that in the EC implementation in 1.1.1 it
> > was perfectly possible to create an EVP_PKEY with only a private key and
> > no public key - and it was also possible to use such an EVP_PKEY in a
> > signing operation.
> >
> > The current 3.0 code in master introduced an explicit check (in line
> > with the long held assumption) that the public key was present and
> > rejected keys where this was not the case. This caused a backwards
> > compatibility break for some users (as discussed at length in #12612).
> >
> > The OTC discussed a proposal that we should relax our conceptual model
> > in this regards and conceptually allow EVP_PKEYs to exist that only have
> > the private component without the public component - although individual
> > algorithm implementations may still require both.
> >
> > It is possible to automatically generate the public component from the
> > private for many algorithms, although this may come at a performance and
> > (potentially) a security cost.
> >
> > The proposal discussed was that while relaxing the conceptual model,
> > most of the existing implementations would still require both. The EC
> > implementation would be relaxed however. This essentially gives largely
> > compatible behaviour between 1.1.1 and 3.0.
> >
> > The vote text is as follows:
> >
> > topic: For 3.0 EVP_PKEY keys, the OTC accepts the following resolution:
> > * relax the conceptual model to allow private keys to exist without
> public
> >   components;
> > * all implementations apart from EC require the public component to be
> > present;
> > * relax implementation for EC key management to allow private keys that
> > do not
> >   contain public keys and
> > * our decoders unconditionally generate the public key (where possible).
> >
> > Proposed by Matt Caswell
> > Public: yes
> > opened: 2020-11-03
> > closed: 2020-mm-dd
> > accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
> >
> >   Matt   [+1]
> >   Mark   [  ]
> >   Pauli  [+1]
> >   Viktor [  ]
> >   Tim[  ]
> >   Richard[+1]
> >   Shane  [+1]
> >   Tomas  [+1]
> >   Kurt   [  ]
> >   Matthias   [  ]
> >   Nicola [-1]
>
>


Additional things for deprecation

2020-10-13 Thread Tim Hudson
In a 3.0 context, EVP_PKEY_ASN1_METHOD and all the associated functions
should be marked deprecated in my view.

Tim.


Re: OTC VOTE: The PR #11359 (Allow to continue with further checks on UNABLE_TO_VERIFY_LEAF_SIGNATURE) is acceptable for 1.1.1 branch

2020-10-12 Thread Tim Hudson
-1

I don't see this as a bug fix.

Tim


On Fri, Oct 9, 2020 at 10:02 PM Tomas Mraz  wrote:

> topic: The PR #11359 (Allow to continue with further checks on
>  UNABLE_TO_VERIFY_LEAF_SIGNATURE) is acceptable for 1.1.1 branch
> As the change is borderline on bug fix/behaviour change OTC needs
> to decide whether it is acceptable for 1.1.1 branch.
> Proposed by Tomas Mraz
> Public: yes
> opened: 2020-10-09
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim[  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [+1]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>
> --
> Tomáš Mráz
> No matter how far down the wrong road you've gone, turn back.
>   Turkish proverb
> [You'll know whether the road is wrong if you carefully listen to your
> conscience.]
>
>
>


Re: VOTE: Weekly OTC meetings until 3.0 beta1 is released

2020-10-09 Thread Tim Hudson
+1

Tim

On Fri, 9 Oct 2020, 10:01 pm Nicola Tuveri,  wrote:

> topic: Hold online weekly OTC meetings starting on Tuesday 2020-10-13
>and until 3.0 beta1 is released, in lieu of the weekly "developer
>meetings".
> Proposed by Nicola Tuveri
> Public: yes
> opened: 2020-10-09
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim[  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [+1]
>


Re: VOTE: Technical Items still to be done

2020-10-08 Thread Tim Hudson
+1

Tim.


On Fri, Oct 9, 2020 at 12:47 AM Matt Caswell  wrote:

> topic: The following items are required prerequisites for the first beta
> release:
>  1) EVP is the recommended API, it must be feature-complete compared with
> the functionality available using lower-level APIs.
>- Anything that isn’t available must be put to an OTC vote to exclude.
>- The apps are the minimum bar for this, subject to exceptions noted
> below.
>  2) Deprecation List Proposal: DH_, DSA_, ECDH_, ECDSA_, EC_KEY_, RSA_,
> RAND_METHOD_.
>- Does not include macros defining useful constants (e.g.
>  SHA512_DIGEST_LENGTH).
>- Excluded from Deprecation: `EC_`, `DSA_SIG_`, `ECDSA_SIG_`.
>- There might be some others.
>- Review for exceptions.
>- The apps are the minimum bar to measure feature completeness for
> the EVP
>  interface: rewrite them so they do not use internal nor deprecated
>  functions (except speed, engine, list, passwd -crypt and the code
> to handle
>  the -engine CLI option).  That is, remove the suppression of
> deprecated
>  define.
>  - Proposal: drop passwd -crypt (OMC vote required)
>- Compile and link 1.1.1 command line app against the master headers and
>  library.  Run 1.1.1 app test cases against the chimera.  Treat this
> as an
>  external test using a special 1.1.1 branch. Deprecated functions
> used by
>  libssl should be moved to independent file(s), to limit the
> suppression of
>  deprecated defines to the absolute minimum scope.
>  3) Draft documentation (contents but not pretty)
>- Need a list of things we know are not present - including things we
> have
>  removed.
>- We need to have mapping tables for various d2i/i2d functions.
>- We need to have a mapping table from “old names” for things into the
>  OSSL_PARAMS names.
>  - Documentation addition to old APIs to refer to new ones (man7).
>  - Documentation needs to reference name mapping.
>  - All the legacy interfaces need to have their documentation
> pointing to
>the replacement interfaces.
>  4) Review (and maybe clean up) legacy bridge code.
>  5) Review TODO(3.0) items #12224.
>  6) Source checksum script.
>  7) Review of functions previously named _with_libctx.
>  8) Encoder fixes (PKCS#8, PKCS#1, etc).
>  9) Encoder DER to PEM refactor.
> 10) Builds and passes tests on all primary, secondary and FIPS platforms.
> 11) Query provider parameters (name, version, ...) from the command line.
> 12) Setup buildbot infrastructure and associated instructions.
> 13) Complete make fipsinstall.
> 14) More specific decoding selection (e.g. params or keys).
> 15) Example code covering replacements for deprecated APIs.
> 16) Drop C code output options from the apps (OMC approval required).
> 17) Address issues and PRs in the 3.0beta1 milestone.
> Proposed by .
> Public: yes
> opened: 2020-10-08
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [+1]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim[  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>


Re: VOTE: Accept the Fully Pluggable TLSv1.3 KEM functionality

2020-10-08 Thread Tim Hudson
+1

Tim


On Fri, Oct 9, 2020 at 12:27 AM Matt Caswell  wrote:

> topic: We should accept the Fully Pluggable TLSv1.3 KEM functionality as
> shown in PR #13018 into the 3.0 release
> Proposed by Matt Caswell
> Public: yes
> opened: 2020-10-08
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [+1]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim[  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>


Re: stable branch release cadence

2020-09-15 Thread Tim Hudson
Thanks Matt - cut-n-paste fumble on my part from the previous vote summary.
The tally should always equal the number of OMC members.

For: 6, against: 0, abstained 0, not voted: 1

Tim.


On Wed, Sep 16, 2020 at 8:11 AM Matt Caswell  wrote:

>
>
> On 15/09/2020 23:10, Tim Hudson wrote:
> > The OMC voted to:
> >
> > /Release stable branch on the second last Tuesday of the last month in
> > each quarter as a regular cadence./
> >
> > The vote passed.
> > For: 6, against: 9, abstained 0, not voted: 1
>
> That should say against: 0
>
> ;-)
>
> Matt
>
> >
> > Thanks,
> > Tim.
> >
>


stable branch release cadence

2020-09-15 Thread Tim Hudson
The OMC voted to:

*Release stable branch on the second last Tuesday of the last month in each
quarter as a regular cadence.*

The vote passed.
For: 6, against: 9, abstained 0, not voted: 1

Thanks,
Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-14 Thread Tim Hudson
On Mon, Sep 14, 2020 at 9:52 PM Matt Caswell  wrote:

> > And that is the point - this is not how the existing CTX functions work
> > (ignoring the OPENSSL_CTX stuff).
>
> Do you have some concrete examples of existing functions that don't work
> this way?
>

SSL_new()
BIO_new_ssl()
BIO_new_ssl_connect()
BIO_new_bio_pair()
etc

And all the existing METHOD-using functions, which are also factories.

But I get the point - if there is only one argument, is it logically coming
first or last? Obviously it can be seen both ways.

IMO, we absolutely MUST have the ability to delete parameters (for
> example ENGINEs). If we can do that, then I don't see why we can't add
> parameters.
>

No - that doesn't follow. It is perfectly reasonable to have an ENGINE
typedef that remains and is passed as NULL as usual - and in fact most of
the top-level ENGINE stuff handles NULL as meaning no engine usage so that
would remain consistent. There is no absolute requirement to delete a
parameter for this or other purposes. If you want to reorder parameters I
would argue it should be a new function name and not an _ex version.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-14 Thread Tim Hudson
On Mon, Sep 14, 2020 at 9:19 PM Matt Caswell  wrote:

> I must be misunderstanding your point because I don't follow your logic
> at all.
>
> So this is the correct form according to my proposed policy:
>
> TYPE *TYPE_make_from_ctx(char *name,TYPE_CTX *ctx)
>

And that is the point - this is not how the existing CTX functions work
(ignoring the OPENSSL_CTX stuff).


> > The PR should cover that context of how we name the variations of
> > functions which add the OPENSSL_CTX reference.
>
> IMO, it does. They should be called "*_ex" as stated in chapter 6.2. I
> don't see why we need to special case OPENSSL_CTX in the policy. Its
> written in a generic way an can be applied to the OPENSSL_CTX case.
>

And that means renaming all the with_libctx to _ex which frankly I don't
think makes a lot of sense to do.
Having a naming indicating OPENSSL_CTX added isn't a bad thing to do in my
view.
Collecting a whole pile of _ex functions and having to look at each one to
figure out what the "extension" is does add additional burden for the user.
There is a reason that _with_libctx was added rather than picking _ex as
the function names - it clearly communicates the intent rather than being a
generic extension-of-API naming.


> IMO *the* most confusing thing would be to *change* an existing ordering
> (for example swap two existing parameters around). I see no problem with
> adding new ones anywhere that's appropriate.
>

Clearly we disagree on that. If you are making an extended version of a
function and you add or remove parameters, then you are not keeping the
existing parameter order - all you preserve is the relative order of
whatever parameters remain in the extended function, and that is a pretty
pointless distinction to make. Unless you simply add the additional items
on the end, you create a change-over mess, and I think that is something we
should be trying to avoid. My context there is the users of the existing
API.
It becomes especially problematic when reordered parameters have equivalent
types, because then there are no compiler warnings if the user hasn't
picked up on the reordering of parameters.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-14 Thread Tim Hudson
Any proposal needs to deal with the constructors consistently - whether
they come from an OPENSSL_CTX or they come from an existing TYPE_CTX.
That is absent in your PR.

Basically this leads to the ability to provide inconsistent argument order
in functions.

TYPE *TYPE_make_from_ctx(TYPE_CTX *ctx, char *name)
or
TYPE *TYPE_make_from_ctx(char *name,TYPE_CTX *ctx)

It simply isn't consistent to basically allow both forms of this approach.

Seeing the OPENSSL_CTX as something different to the other APIs in terms of
its usage when it is providing the context from which something is
constructed is the underlying issue here.
Your PR basically makes rules for "context" arguments which lead to
allowing both the above forms - and making the current usage of CTX values
a different logical order than the OPENSSL_CTX.

Separately, we should have consistency in the naming of the functions which
take an OPENSSL_CTX - the _with_libctx makes no sense now that we settled
on OPENSSL_CTX rather than OPENSSL_LIBCTX or OPENSSL_LIB_CTX as the naming.
We also have a pile of _ex functions that were introduced just to add an
OPENSSL_CTX - those should be consistently named.

The PR should cover that context of how we name the variations of functions
which add the OPENSSL_CTX reference.

The suggested rules for extended functions are inconsistent in stating that
the order "should be retained" and then allowing insertion "at any point".
IMHO either the new function must preserve the existing order and just add
to the end (to ease migration), or it should conform to the naming scheme
and parameter order for new functions - one of those should be the driver,
rather than creating something that is neither: not easy to migrate to, and
also not following the documented order. We should be trying to achieve one
of those two objectives.

Tim.


On Mon, Sep 14, 2020 at 8:30 PM Matt Caswell  wrote:

> In order to try and move this discussion forward I have made a concrete
> proposal for how we should formulate the various ideas in this thread
> into an actual style. Please see here:
>
> https://github.com/openssl/web/pull/194
>
> Since we're not yet fully in agreement some compromises will have to be
> made. I hope I've come up with something which isn't too abhorrent to
> anyone.
>
> Please take a look.
>
> Matt
>
>
> On 05/09/2020 04:48, SHANE LONTIS wrote:
> >
> >   PR #12778 reorders all the API’s of the form:
> >
> >
> >   EVP_XX_fetch(libctx, algname, propq)
> >
> >
> >   So that the algorithm name appears first..
> >
> >
> >   e.g: EVP_MD_fetch(digestname, libctx, propq);
> >
> > This now logically reads as 'search for this algorithm using these
> > parameters’.
> >
> > The libctx, propq should always appear together as a pair of parameters.
> > There are only a few places where only the libctx is needed, which means
> > that if there is no propq it is likely to cause a bug in a fetch at some
> > point.
> >
> > This keeps the API’s more consistent with other existing XXX_with_libctx
> > API’s that put the libctx, propq at the end of the parameter list..
> > The exception to this rule is that callback(s) and their arguments occur
> > after the libctx, propq..
> >
> > e.g:
> > typedef OSSL_STORE_LOADER_CTX *(*OSSL_STORE_open_with_libctx_fn)
> > (const OSSL_STORE_LOADER *loader,
> >  const char *uri, OPENSSL_CTX *libctx, const char *propq,
> >  const UI_METHOD *ui_method, void *ui_data);
> >
> > An otc_hold was put on this.. Do we need to have a formal vote?
> > This really needs to be sorted out soon so the API’s can be locked down
> > for beta.
> >
> > 
> > Also discussed in a weekly meeting and numerous PR discussions was the
> > removal of the long winded API’s ending with ‘with_libctx’
> > e.g CMS_data_create_with_libctx
> > The proposed renaming for this was to continue with the _ex notation..
> > If there is an existing _ex name then it becomes _ex2, etc.
> > There will most likely be additional parameters in the future for some
> > API’s, so this notation would be more consistent with current API’s.
> > Does this also need a vote?
> >
> > Regards,
> > Shane
> >
> >
>


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
On Sun, Sep 6, 2020 at 6:55 AM Richard Levitte  wrote:

> I'd rank the algorithm name as the most important, it really can't do
> anything of value without it.
>

It also cannot do anything without knowing which libctx to use. Look at the
implementation.
Without the libctx there is no "from-where" to specify.

This is again hitting the concept of where do things come from and what is
a default.
Once "global" disappears as such, logically everything comes from a libctx.

Your argument is basically "what" is more important than "from" or "where".
And the specific context here is where you see "from" or "where" can be
defaulted to a value so it can be deduced so it isn't (as) important in the
API.

That would basically indicate you would (applying the same pattern/rule in
a different context) change:


int EVP_PKEY_get_int_param(EVP_PKEY *pkey, const char *key_name, int *out);

To the following (putting what you want as the most important rather than
from):


int EVP_PKEY_get_int_param(char *key_name, EVP_PKEY *pkey, int *out);

Or pushing it right to the end after the output parameter:


int EVP_PKEY_get_int_param(char *key_name, int *out, EVP_PKEY *pkey);

The context of where things come from is actually the most critical item in
any of the APIs we have.
Even though what-you-want-to-get is the point of the API call, you need to
specify where-from first, because that context sets the frame of the call.

Think of it this way - we could have an implementation that remembers the
last key used and lets you refer to it implicitly, or specify a different
key via a non-NULL pointer. This isn't that unusual an API style - and not
something I'm suggesting we add - but it makes the point that you are still
treating global and libctx as something super special with an exception in
its handling, rather than applying the general rule we use pretty much
everywhere else.

And in the case where you generally don't provide a reference - because
there is some default meaning for it - but can provide one, would this sort
of API make sense to you:

int EVP_PKEY_get_int_param(char *key_name, int *out, EVP_PKEY *pkey);

If pkey is NULL then you use the last key that you referenced; if it is
not, then you use the specified pkey. For the application the specific
key_name is the most important thing (using your argument, which basically
states that the "what" is what counts).

I would suggest that you really would still want to place the EVP_PKEY
first - even if you had a defaulting mechanism of referring to the last key
used. Conceptually you always have to have knowledge of from-where when you
are performing a function. And that *context* is what is most important.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
On Sat, Sep 5, 2020 at 8:45 PM Nicola Tuveri  wrote:

> Or is your point that we are writing in C, all the arguments are
> positional, none is ever really optional, there is no difference between
> passing a `(void*) NULL` or a valid `(TYPE*) ptr` as the value of a `TYPE
> *` argument, so "importance" is the only remaining sorting criteria, hence
> (libctx, propq) are always the most important and should go to the
> beginning of the args list (with the exception of the `self/this` kind of
> argument that always goes first)?
>

That's a reasonable way to express things.

The actual underlying point I am making is that we should have a rule in
place that is documented and that works within the programming language we
are using and that over time doesn't turn into a mess.
We do add parameters (in new function names) and we do like to keep the
order of the old function - and ending up with a pile of things in the
"middle" is, in my view, one of the messes that we should be avoiding.

I think the importance argument is the one that helps set the order; and
under your "required args come first" approach the same argument applies,
because the library context actually is a required argument, simply with an
implied default - which again puts it not at the end of the argument list.
The library context is always required - there is just a default - and the
parameter must be present because we are working in C.

Whatever we end up deciding to do, the rule should be captured, and it
should allow for the fact that we will evolve APIs and create _ex versions
(and those too will evolve); a general rule should be one that doesn't
result in inconsistent treatment of argument order as we add _ex versions.

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
On Sat, Sep 5, 2020 at 6:38 PM Nicola Tuveri  wrote:

> In most (if not all) cases in our functions, both libctx and propquery are
> optional arguments, as we have global defaults for them based on the loaded
> config file.
> As such, explicitly passing non-NULL libctx and propquery, is likely to be
> an exceptional occurrence rather than the norm.
>

And that is where we have a conceptual difference: the libctx is *always* used.
If it is provided as a NULL parameter, that is just a shortcut for having
the call look up the default (or the already-set) one.
Conceptually it is always required for the function to operate.

And the conceptual issue is actually important here - all of these
functions require the libctx to do their work - if it is not available then
they are unable to do their work.
We just happen to have a default-if-NULL.

If C offered the ability to default a parameter if not provided (and many
languages offer that) I would expect we would be using it.
But it doesn't - we are coding in C.

So it is really where-is-what-this-function-needs-coming-from that actually
is the important thing - the source of the information the function needs
to make its choice.
It isn't which algorithm is being selected - the critical thing is from
which pool of algorithm implementations are we operating. The pool must be
specified (this is C code), but we have a default value.

And that is why I think the conceptual approach here is getting confused by
the arguments appearing to be optional - conceptually they are not - we
just have a defaulting mechanism and that isn't the same conceptually as
the arguments actually being optional.

Clearer?

Tim.


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Tim Hudson
I placed the OTC hold because I don't believe we should be making
parameter-reordering decisions without first having documented the approach
we want to take, so that there is clear guidance.
This was the position I expressed at the last face-to-face OTC meeting -
that we need to write such things down precisely so that we avoid this
situation, where newly added functions introduce the inconsistency noted
here that PR #12778 is attempting to address.

However, this is a general issue and not a specific one to OPENSSL_CTX and
it should be discussed in the broader context and not just be a last minute
(before beta1) API argument reordering.
That does not provide anyone with sufficient time to consider whether or
not the renaming makes sense in the broader context.
I also think that things like API argument reordering should have been
discussed on openssl-project so that the broader OpenSSL community has an
opportunity to express their views.

Below is a quick write-up on APIs, in an attempt to make it easier to hold
an email discussion about the alternatives and their implications over
time.
I've tried to outline the different options.

In general, the OpenSSL API approach is of the following form:



rettype  TYPE_get_something(TYPE *, [args])
rettype  TYPE_do_something(TYPE *, [args])
TYPE    *TYPE_new([args])

This isn't universally true, but it is the case for the majority of OpenSSL
functions.

In general, the majority of the APIs place the "important" parameters
first, and the ancillary information afterwards.

In general, output parameters tend to be either the return value of the
function or an output argument placed at the end of the argument list,
although this is less consistent across the OpenSSL APIs.

We also have functions which operate on "global" information where the
information used or updated or changed
is not explicitly provided as an API parameter - e.g. all the byname
functions.

Adding the OPENSSL_CTX is basically providing where to get items from that
used to be "global".
When performing a lookup, the query string is a parameter to modify the
lookup being performed.

OPENSSL_CTX is a little different, as we are adding in effectively an
explicit parameter where there was an implicit (global)
usage in place. But the concept of adding parameters to functions over time
is one that we should have a policy for IMHO.

For many APIs we basically need the ability to add the OPENSSL_CTX that is
used to the constructor so that
it can be used for handling what used to be "global" information where such
things need the ability to
work with other-than-the-default OPENSSL_CTX (i.e. not the previous single
"global" usage).

That usage works without a query string - as it isn't a lookup as such - so
there is no modifier present.
For that form of API usage we have three choices as to where we place
things:

1) place the context first

TYPE *TYPE_new(OPENSSL_CTX *, [args])

2) place the context last

    TYPE *TYPE_new([args], OPENSSL_CTX *)

3) place the context neither first nor last

    TYPE *TYPE_new([some-args], OPENSSL_CTX *, [more-args])

Option 3 really isn't a sensible choice to make IMHO.
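As a concrete illustration of option 1, the constructor can capture the
context for later use. This is a hypothetical sketch - TYPE, the OPENSSL_CTX
stub, and all member names here are invented stand-ins, not actual OpenSSL
code:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for illustration only */
typedef struct openssl_ctx_st { int dummy; } OPENSSL_CTX;

typedef struct type_st {
    OPENSSL_CTX *libctx;   /* captured at construction; replaces "global" lookups */
    int flags;
} TYPE;

/* Option 1: the context is the first argument */
TYPE *TYPE_new(OPENSSL_CTX *ctx, int flags)
{
    TYPE *t = malloc(sizeof(*t));

    if (t == NULL)
        return NULL;
    t->libctx = ctx;
    t->flags = flags;
    return t;
}

void TYPE_free(TYPE *t)
{
    free(t);
}
```

The same sketch with the context moved to the end of the argument list
would be option 2; the behaviour is identical, only the calling convention
differs.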

When we are performing effectively a lookup that needs a query string, we
have a range of options.
If we basically state that for a given type you must use the OPENSSL_CTX
you were provided with on construction (not an unreasonable position to
take), then you don't need to modify the existing APIs.

If we want to allow for a different OPENSSL_CTX for a specific existing
function, then we have to add items.
Again we have a range of choices:

A) place the new arguments first


    rettype  TYPE_get_something(OPENSSL_CTX *, TYPE *, [args])
    rettype  TYPE_do_something(OPENSSL_CTX *, TYPE *, [args])

B) place the new arguments first after the TYPE



    rettype  TYPE_get_something(TYPE *, OPENSSL_CTX *, [args])
    rettype  TYPE_do_something(TYPE *, OPENSSL_CTX *, [args])
C) place the new arguments last


    rettype  TYPE_get_something(TYPE *, [args], OPENSSL_CTX *)
    rettype  TYPE_do_something(TYPE *, [args], OPENSSL_CTX *)

D) place the new arguments neither first nor last



    rettype  TYPE_get_something(TYPE *, [some-args], OPENSSL_CTX *, [more-args])
    rettype  TYPE_do_something(TYPE *, [some-args], OPENSSL_CTX *, [more-args])
Option D really isn't a sensible choice to make IMHO.
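Note that under the position described earlier - that a TYPE uses the
OPENSSL_CTX supplied at construction - none of options A-D is needed for
existing accessors, because their signatures stay unchanged. A hypothetical
sketch (all names are invented for illustration, not actual OpenSSL API):

```c
#include <stdlib.h>

/* Hypothetical stand-ins for illustration only */
typedef struct openssl_ctx_st { const char *name; } OPENSSL_CTX;

typedef struct type_st {
    OPENSSL_CTX *libctx;
} TYPE;

TYPE *TYPE_new(OPENSSL_CTX *ctx)
{
    TYPE *t = malloc(sizeof(*t));

    if (t == NULL)
        return NULL;
    t->libctx = ctx;
    return t;
}

/* Existing signature left untouched: the lookup that once consulted a
 * process-wide global now consults the context stored by TYPE_new(). */
const char *TYPE_get_something(const TYPE *t)
{
    return t->libctx->name;
}

void TYPE_free(TYPE *t)
{
    free(t);
}
```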

My view is that the importance of arguments is what sets their order - and
that is why for the TYPE functions the TYPE pointer
comes first. We could have just as easily specified it as last or some
other order, but we didn't.

Now when we need to add a different location from which to retrieve other
information we need to determine where this gets added.
I'd argue that it is at constructor time that we should be adding any
OPENSSL_CTX or query parameter for any existing TYPE usage
in OpenSSL. If we feel the need to cross OPENSSL_CTX's (logically that is
what is being done) in

Re: API renaming

2020-07-23 Thread Tim Hudson
Placing everything under EVP is reasonable in my view. It is just a prefix
and it really has no meaning these days as it became nothing more than a
common prefix to use.

I don't see any significant benefit in renaming at this point - even for
RAND.

Tim.

On Fri, 24 Jul 2020, 1:56 am Matt Caswell,  wrote:

>
>
> On 23/07/2020 16:52, Richard Levitte wrote:
> > On Thu, 23 Jul 2020 12:18:10 +0200,
> > Dr Paul Dale wrote:
> >> There has been a suggestion to rename EVP_RAND to OSSL_RAND.  This
> seems reasonable.  Would it
> >> also make sense to rename the other new APIs similarly.
> >> More specifically, EVP_MAC and EVP_KDF to OSSL_MAC and OSSL_KDF
> respectively?
> >
> > This is a good question...
> >
> > Historically speaking, even though EVP_MAC and EVP_KDF are indeed new
> > APIs, they have a previous history of EVP APIs, through EVP_PKEY.  The
> > impact of relocating them outside of the EVP "family" may be small,
> > but still, history gives me pause.
> >
> > RAND doesn't carry the same sort of history, which makes it much
> > easier for me to think "just do it and get it over with"...
>
> I have the same pause - so  I'm thinking just RAND for now.
>
> Matt
>
>


Vote results for PR#12089

2020-07-03 Thread Tim Hudson
topic: Change some words by accepting PR#12089

4 against, 3 for, no abstentions

The vote failed, the PR will now be closed.

Thanks,
Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
I suggest everyone takes a read through
https://en.wikipedia.org/wiki/Long-term_support as to what LTS is actually
meant to be focused on.

What you (Ben and Matt) are both describing is not LTS but STS ... these
are different concepts.

For LTS the focus is *stability *and *reduced risk of disruption *which
alters the judgement on what is appropriate to put into a release.
It then becomes a test of "is this bug fix worth the risk" with the general
focus on lowest possible risk which stops this sort of thing happening
unless a critical feature is broken.

All of the "edge case" examples presented all involve substantial changes
to implementations and carry an inherent risk of breaking something else -
and that is the issue.
It would be different if we had a full regression test suite across all the
platforms and variations on platforms that our users are operating on - but
we don't.

We don't compare performance between releases or for any updates in our
test suite so it isn't part of our scope for release readiness - if this is
such a critical thing that must be fixed then it is something that we
should be measuring and checking as a release condition - but we don't -
because it actually isn't that critical.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
On Sat, 20 Jun 2020, 8:14 am Benjamin Kaduk,  wrote:

> On Sat, Jun 20, 2020 at 08:11:16AM +1000, Tim Hudson wrote:
> > The general concept is to only fix serious bugs in stable releases.
> > Increasing performance is not fixing a bug - it is a feature.
>
> Is remediating a significant performance regression fixing a bug?
>

It would be a bug - but not a serious bug. So no.
It works. It was released.
Wholesale replacement of implementations of algorithms should not be
happening in LTS releases.

We make no performance guarantees or statements in our releases (in
general).

And performance isn't an issue for the vast majority of our users.

Those for whom performance is critical also tend to be building their own
releases in my experience.

Tim.


Re: Backports to 1.1.1 and what is allowed

2020-06-19 Thread Tim Hudson
The general concept is to only fix serious bugs in stable releases.
Increasing performance is not fixing a bug - it is a feature.

Swapping out one implementation of algorithm for another is a significant
change and isn't something that should go into an LTS in my view.

It would be less of an issue for users if our release cadence was more
frequent - but the principle applies - stability is what an LTS is aimed
at. We should be fixing significant bugs only.

Arguing that slower performance is a bug is specious - it works - and
performance is not something that we document and guarantee. You don't find
our release notes stating performance=X for a release - because such a
statement makes little sense for the vast majority of users.

Tim.


Re: Naming conventions

2020-06-18 Thread Tim Hudson
We have a convention that we mostly follow. Adding new stuff with
variations in the convention offers no benefit without also adjusting the
rest of the API. Inconsistencies really do not assist any developer.

Where APIs have been added that don't follow the conventions they should be
changed.

It really is that simple - each developer may have a different set of
personal preferences and if we simply allow any two people to pick their
own API pattern effectively at whim we end up with a real mess over time.

This example is a clear cut case where we should fix the unnecessary
variation in the pattern. It serves no benefit whatsoever to have such a
mix of API patterns.

We do have some variations that we should adjust - and for APIs that have
been in official releases dropping in backwards compatibility macros is
appropriate.

The argument that we aren't completely consistent is specious - it is
saying because we have a few mistakes that have slipped through the cracks
we have open season on API patterns.

It also would not hurt to have an automated check of API deviations on
commits to catch such things in future.

Tim.


Re: Reducing the security bits for MD5 and SHA1 in TLS

2020-06-17 Thread Tim Hudson
Given that this change impacts interoperability in a major way it should be
a policy vote of the OMC IMHO.

Tim.


On Thu, 18 Jun 2020, 5:57 am Kurt Roeckx,  wrote:

> On Wed, May 27, 2020 at 12:14:13PM +0100, Matt Caswell wrote:
> > PR 10787 proposed to reduce the number of security bits for MD5 and SHA1
> > in TLS (master branch only, i.e. OpenSSL 3.0):
> >
> > https://github.com/openssl/openssl/pull/10787
> >
> > This would have the impact of meaning that TLS < 1.2 would not be
> > available in the default security level of 1. You would have to set the
> > security level to 0.
> >
> > In my mind this feels like the right thing to do. The security bit
> > calculations should reflect reality, and if that means that TLS < 1.2 no
> > longer meets the policy for security level 1, then that is just the
> > security level doing its job. However this *is* a significant breaking
> > change and worthy of discussion. Since OpenSSL 3.0 is a major release it
> > seems that now is the right time to make such changes.
> >
> > IMO it seems appropriate to have an OMC vote on this topic (or should it
> > be OTC?). Possible wording:
>
> So should that be an OMC or OTC vote, or does it not need a vote?
>
>
> Kurt
>
>


Re: Cherry-pick proposal

2020-04-29 Thread Tim Hudson
Any change to the review gate check we have in place now that lowers it
will certainly not get my support.

If anything, that check before code gets approved should be raised, not
lowered.

Tim.

On Thu, 30 Apr 2020, 1:24 am Salz, Rich,  wrote:

> I suspect that the primary motivation for this proposal is that PR’s
> against master often become stale because nobody on the project looks at
> them. And then submitters are told to rebase, fix conflicts, etc. It gets
> disheartening. If that is the motivation, then perhaps the project should
> address that root cause.  Here are two ideas:
>
>
>
>1. Mark’s scanning tool complains to the OTC if it has been “X” weeks
>without OTC action.  I would pick X as 2.
>2. Change the submission rules to be one non-author OTC member review
>and allow OTC/OMC to put a hold for discussion during the 24-hour freeze
>period. That discussion must be concluded, perhaps by a vote, within “Y”
>weeks (four?).
>
>
>
>
>
>
>


Re: 1.1.1f

2020-03-26 Thread Tim Hudson
We don't guarantee constant time.

Tim.

On Fri, 27 Mar 2020, 5:41 am Bernd Edlinger, 
wrote:

> So I disagree, it is a bug when it is not constant time.
>
>
> On 3/26/20 8:26 PM, Tim Hudson wrote:
> > +1 for a release - and soon - and without bundling any more changes. The
> > circumstances justify getting this fix out. But I also think we need to
> > keep improvements that aren't bug fixes out of stable branches.
> >
> > Tim.
> >
> > On Fri, 27 Mar 2020, 3:12 am Matt Caswell,  wrote:
> >
> >> On 26/03/2020 15:14, Short, Todd wrote:
> >>> This type of API-breaking change should be reserved for something like
> >>> 3.0, not a patch release.
> >>>
> >>> Despite it being "incorrect", it is expected behavior.
> >>>
> >>
> >> Right - but the question now is not whether we should revert it (it has
> >> been reverted) - but whether this should trigger a 1.1.1f release soon?
> >>
> >> Matt
> >>
> >>> --
> >>> -Todd Short
> >>> // tsh...@akamai.com <mailto:tsh...@akamai.com>
> >>> // “One if by land, two if by sea, three if by the Internet."
> >>>
> >>>> On Mar 26, 2020, at 11:03 AM, Dr. Matthias St. Pierre
> >>>> mailto:matthias.st.pie...@ncp-e.com>>
> >>>> wrote:
> >>>>
> >>>> I agree, go ahead.
> >>>>
> >>>> Please also consider reverting the change for the 3.0 alpha release as
> >>>> well, see Daniel Stenberg's comment
> >>>>
> https://github.com/openssl/openssl/issues/11378#issuecomment-603730581
> >>>> <
> >>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openssl_openssl_issues_11378-23issuecomment-2D603730581&d=DwMGaQ&c=96ZbZZcaMF4w0F4jpN6LZg&r=QBEcQsqoUDdk1Q26CzlzNPPUkKYWIh1LYsiHAwmtRik&m=87AtfQDFl1z9cdRP12QeRUizmgnW6ejbufNT40Gip4Q&s=djWoIIXyggxwOfbwrmYGrSJdR5tWm06IdzY9x9tDxkA&e=
> >>>
> >>>>
> >>>> Matthias
> >>>>
> >>>>
> >>>> *From**:* openssl-project  >>>> <mailto:openssl-project-boun...@openssl.org>> *On Behalf Of *Dmitry
> >>>> Belyavsky
> >>>> *Sent:* Thursday, March 26, 2020 3:48 PM
> >>>> *To:* Matt Caswell mailto:m...@openssl.org>>
> >>>> *Cc:* openssl-project@openssl.org <mailto:openssl-project@openssl.org
> >
> >>>> *Subject:* Re: 1.1.1f
> >>>>
> >>>>
> >>>> On Thu, Mar 26, 2020 at 5:14 PM Matt Caswell  >>>> <mailto:m...@openssl.org>> wrote:
> >>>>
> >>>> The EOF issue (https://github.com/openssl/openssl/issues/11378
> >>>> <
> >>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openssl_openssl_issues_11378&d=DwMGaQ&c=96ZbZZcaMF4w0F4jpN6LZg&r=QBEcQsqoUDdk1Q26CzlzNPPUkKYWIh1LYsiHAwmtRik&m=87AtfQDFl1z9cdRP12QeRUizmgnW6ejbufNT40Gip4Q&s=MAiLjfGJWaKvnBvqnM4fcyvGVfUyj9CDANO_vh4wfco&e=
> >>> )
> >>>> has
> >>>> resulted in us reverting the original EOF change in the 1.1.1
> branch
> >>>> (https://github.com/openssl/openssl/pull/11400
> >>>> <
> >>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openssl_openssl_pull_11400&d=DwMGaQ&c=96ZbZZcaMF4w0F4jpN6LZg&r=QBEcQsqoUDdk1Q26CzlzNPPUkKYWIh1LYsiHAwmtRik&m=87AtfQDFl1z9cdRP12QeRUizmgnW6ejbufNT40Gip4Q&s=3hBU2pt84DQlrY1dCnSn9x1ah1gSzH6NEO_bNRH-6DE&e=
> >>> ).
> >>>>
> >>>> Given that this seems to have broken quite a bit of stuff, I
> propose
> >>>> that we do a 1.1.1f soon (possibly next Tuesday - 31st March).
> >>>>
> >>>> Thoughts?
> >>>>
> >>>>
> >>>> I strongly support this idea.
> >>>>
> >>>> --
> >>>> SY, Dmitry Belyavsky
> >>>
> >>
> >
>


Re: 1.1.1f

2020-03-26 Thread Tim Hudson
+1 for a release - and soon - and without bundling any more changes. The
circumstances justify getting this fix out. But I also think we need to
keep improvements that aren't bug fixes out of stable branches.

Tim.

On Fri, 27 Mar 2020, 3:12 am Matt Caswell,  wrote:

> On 26/03/2020 15:14, Short, Todd wrote:
> > This type of API-breaking change should be reserved for something like
> > 3.0, not a patch release.
> >
> > Despite it being "incorrect", it is expected behavior.
> >
>
> Right - but the question now is not whether we should revert it (it has
> been reverted) - but whether this should trigger a 1.1.1f release soon?
>
> Matt
>
> > --
> > -Todd Short
> > // tsh...@akamai.com 
> > // “One if by land, two if by sea, three if by the Internet."
> >
> >> On Mar 26, 2020, at 11:03 AM, Dr. Matthias St. Pierre
> >> mailto:matthias.st.pie...@ncp-e.com>>
> >> wrote:
> >>
> >> I agree, go ahead.
> >>
> >> Please also consider reverting the change for the 3.0 alpha release as
> >> well, see Daniel Stenberg's comment
> >> https://github.com/openssl/openssl/issues/11378#issuecomment-603730581
> >> <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openssl_openssl_issues_11378-23issuecomment-2D603730581&d=DwMGaQ&c=96ZbZZcaMF4w0F4jpN6LZg&r=QBEcQsqoUDdk1Q26CzlzNPPUkKYWIh1LYsiHAwmtRik&m=87AtfQDFl1z9cdRP12QeRUizmgnW6ejbufNT40Gip4Q&s=djWoIIXyggxwOfbwrmYGrSJdR5tWm06IdzY9x9tDxkA&e=
> >
> >>
> >> Matthias
> >>
> >>
> >> *From**:* openssl-project  >> > *On Behalf Of *Dmitry
> >> Belyavsky
> >> *Sent:* Thursday, March 26, 2020 3:48 PM
> >> *To:* Matt Caswell mailto:m...@openssl.org>>
> >> *Cc:* openssl-project@openssl.org 
> >> *Subject:* Re: 1.1.1f
> >>
> >>
> >> On Thu, Mar 26, 2020 at 5:14 PM Matt Caswell  >> > wrote:
> >>
> >> The EOF issue (https://github.com/openssl/openssl/issues/11378
> >> <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openssl_openssl_issues_11378&d=DwMGaQ&c=96ZbZZcaMF4w0F4jpN6LZg&r=QBEcQsqoUDdk1Q26CzlzNPPUkKYWIh1LYsiHAwmtRik&m=87AtfQDFl1z9cdRP12QeRUizmgnW6ejbufNT40Gip4Q&s=MAiLjfGJWaKvnBvqnM4fcyvGVfUyj9CDANO_vh4wfco&e=
> >)
> >> has
> >> resulted in us reverting the original EOF change in the 1.1.1 branch
> >> (https://github.com/openssl/openssl/pull/11400
> >> <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openssl_openssl_pull_11400&d=DwMGaQ&c=96ZbZZcaMF4w0F4jpN6LZg&r=QBEcQsqoUDdk1Q26CzlzNPPUkKYWIh1LYsiHAwmtRik&m=87AtfQDFl1z9cdRP12QeRUizmgnW6ejbufNT40Gip4Q&s=3hBU2pt84DQlrY1dCnSn9x1ah1gSzH6NEO_bNRH-6DE&e=
> >).
> >>
> >> Given that this seems to have broken quite a bit of stuff, I propose
> >> that we do a 1.1.1f soon (possibly next Tuesday - 31st March).
> >>
> >> Thoughts?
> >>
> >>
> >> I strongly support this idea.
> >>
> >> --
> >> SY, Dmitry Belyavsky
> >
>


Re: Check NULL pointers or not...

2019-11-29 Thread Tim Hudson
On Fri, Nov 29, 2019 at 7:08 PM Tomas Mraz  wrote:

> The "always check for NULL pointers" approach does not avoid
> catastrophical errors in applications.


I didn't say it avoided all errors (nor did anyone else on the thread that
I've read) - but it does avoid a whole class of errors.

And for that particular context there are many things you can do to
mitigate it - and incorrect handling of EVP_CipherUpdate itself is very
common - where error returns are completely ignored.
We could reasonably define that it should wipe out the output buffer on any
error condition - that would make the function safer in a whole pile of
contexts.
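The suggested mitigation could be sketched as a wrapper that enforces a
"wipe on error" contract. Everything below is hypothetical and invented for
illustration - it is not OpenSSL code, just a sketch of the idea that a
failed update should leave nothing usable in the output buffer:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical update primitive: returns 1 on success, 0 on error */
typedef int (*update_fn)(unsigned char *out, int *outlen,
                         const unsigned char *in, int inlen);

/* Wrapper enforcing the proposed contract: on any error, wipe the output
 * buffer so callers that ignore the return value cannot use stale or
 * partial data. */
int safe_update(update_fn fn, unsigned char *out, int outmax, int *outlen,
                const unsigned char *in, int inlen)
{
    if (fn == NULL || out == NULL || outlen == NULL || outmax < 0)
        return 0;
    if (fn(out, outlen, in, inlen))
        return 1;
    memset(out, 0, (size_t)outmax);   /* error: leave nothing behind */
    *outlen = 0;
    return 0;
}

/* Example failing primitive: scribbles into out, then reports failure */
static int demo_fail(unsigned char *out, int *outlen,
                     const unsigned char *in, int inlen)
{
    (void)in;
    (void)inlen;
    out[0] = 0xAA;   /* partial/garbage output before the failure */
    *outlen = 1;
    return 0;
}
```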

However that is talking about a different issue IMHO.

Tim.


Re: Check NULL pointers or not...

2019-11-29 Thread Tim Hudson
The way I view the issue is to look at what happens when things go wrong -
what is the impact - and evaluate the difference in behaviour between the
approaches.
You have to start from the premise that (in general) software is not tested
for all possible usage models - i.e. test coverage isn't at 100% - so there
are always code paths and circumstances that can occur that simply haven't
been hit during testing.

That means that having a special assert enabled build doesn't solve the
problem - in that a gap in testing always remains - so the "middle ground"
doesn't alter the fundamentals as such and does also lead to basically
testing something other than what is running in a default (non-debug)
build. Testing precisely what you will actually run is a fairly normal
principle to work from - but many overlook those sorts of differences.

In a production environment, it is almost never appropriate to simply crash
in an uncontrolled manner (i.e. dereferencing a NULL pointer).
There are many contexts that generate real issues where a more graceful
failure mode (i.e. anything other than crashing) results in a much better
user outcome.
Denial-of-service attacks often rely on this sort of issue - basically
recognising that testing doesn't cover everything and finding unusual
circumstances that create some form of load on a system that impacts other
things.
You are effectively adding in a whole pile of abort() references in the
code by not checking - that is the actual effect - and abort isn't
something that should ever be called in production code.

The other rather practical thing is that when you do check for incoming
NULL pointers, you end up with a lot less reports of issues in a library -
as you aren't sitting in the call chain of a crash - and that in itself
saves a lot of time when dealing with users. Many people will report issues
like this - but if they get an error return rather than a crash they do
(generally) keep looking. And developer time for OpenSSL (like many
projects) is the most scarce resource that we have. Anything that reduces
the impact on looking at crashes enables people to perform more useful work
rather than debugging a coding error of an OpenSSL user.

Another simple argument that helps to make a choice is that whatever we do
we need to be consistent in our handling of things - and at the moment many
functions check and many functions do not.
And if we make things consistent we certainly don't really have the option
of undoing any of the null checks because that will break code - it is
effectively part of the API contract - that it is safe to call certain
functions with NULL arguments. Adding extra checks from an API contract
point of view is harmless, removing them isn't an option. And if we want
consistency then we pretty much have only one choice - to check everywhere.
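The "check everywhere" pattern costs one comparison per entry point and
turns a would-be crash into an error return. A minimal sketch with invented
names (not actual OpenSSL code):

```c
#include <stddef.h>

#define ERR_NULL_ARG (-1)

/* Hypothetical type for illustration */
typedef struct widget_st { int value; } WIDGET;

/* Returns 1 on success, negative on error - never dereferences NULL,
 * so a caller's mistake produces an error code rather than a crash. */
int WIDGET_get_value(const WIDGET *w, int *out)
{
    if (w == NULL || out == NULL)
        return ERR_NULL_ARG;   /* graceful failure instead of abort() */
    *out = w->value;
    return 1;
}
```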

There are *very few places* where such checks will have a substantial
performance impact in real-world contexts.

Arguments about the C runtime library not checking simply aren't relevant.
The C runtime library doesn't set the API contracts and usage model that we
use. And there are C runtime functions that do check.

What we should be looking at in my view is the impact when things go wrong
- and a position that basically says the caller isn't allowed to make
mistakes ever for simple things like this which are easy to check isn't
appropriate. That it does also reduce the burden on the project (and it
also increases the users perception of the quality because they are always
looking at their own bugs and not thinking that OpenSSL has a major issue
with an uncontrolled crash - even if they don't report it publicly).

Tim.


Re: Commit access to openssl/tools and openssl/web

2019-10-04 Thread Tim Hudson
FYI - I have reviewed and added my approval. No need to back out anything.

Tim.

On Fri, Oct 4, 2019 at 5:50 PM Dr Paul Dale  wrote:

> I believed that it required two OMC approvals but was pointed to an
> earlier instance where only one was present and I flew with it without
> checking further.
> My apologies for merging prematurely and I’ll back out the changes if any
> OMC member wants.
>
> As for discussing this at the upcoming face to face, I agree
> wholeheartedly.
>
>
> Pauli
> --
> Dr Paul Dale | Distinguished Architect | Cryptographic Foundations
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
>
> On 4 Oct 2019, at 5:39 pm, Matt Caswell  wrote:
>
>
>
> On 04/10/2019 08:15, Dr. Matthias St. Pierre wrote:
>
> Dear OMC,
>
> while the process of merging and committing to openssl/openssl has been
> formalized,
> no similar (official) rules for pull requests by non-OMC-member seem to
> apply to the
> other two repositories openssl/tools and openssl/web. Probably it's
> because hardly
> anybody outside the OMC else ever raises them? Or is it the other way
> around?
>
>
> There are clear official rules. This vote was passed by the OMC over a
> year ago:
>
> topic: Openssl-web and tools repositories shall be under the same review
>   policy as per the openssl repository where the reviewers are OMC
> members
>
> So it needs two approvals from an OMC member. It looks like recent commits
> haven't obeyed those rules.
>
>
> I would like to raise the question whether it wouldn't be beneficial for
> all of us,
> if we would apply the same rules (commit access for all committers, plus
> the well
> known approval rules) to all of our repos. After all, the openssl/openssl
> repository
> is the most valuable of the three and I see no reason why the others would
> need
> more protection. In the case of the openssl/web repository which targets
> the
> official website, you might want to consider a 2OMC approval rule, but
> even there
> I don't see why the usual OMC veto rule wouldn't be sufficient.
>
>
> There is a lot of merit in that. Certainly for tools. I've added it to the
> OMC
> agenda for Nuremburg.
>
> Matt
>
>
>


Re: Reorganization of the header files (GitHub #9333)

2019-09-27 Thread Tim Hudson
Merge early is pretty much my default position ... and that applies to this
context in my view.

Tim.

On Sat, 28 Sep. 2019, 7:44 am Dr. Matthias St. Pierre, <
matthias.st.pie...@ncp-e.com> wrote:

> Hi,
>
> some of you might have loosely followed pull request #9333 (see [1]),
> where I am reorganizing the header files in a more consistent manner.
>
> Since I intend to do the reorganization both to master and the 1.1.1
> stable branch (in order to facilitate conflict-free cherry-picking to
> the 1.1.1 LTS branch), I decided to automate the changes. This decision
> turned out to be very convenient, because the heavy rearchitecturing on
> master frequently conflicted with my changes. Instead of resolving
> conflicts, I could just rerun the script after rebasing.
>
> When this pull request finally gets merged, it might happen that you
> in turn encounter more merge conflicts than usual, in particular if you
> are one of the 'replumbers' involved in the FIPS rearchitecturing.
> It might also happen if you are working on some long running pull
> request like the CMP integration.
>
> To check the impact of my changes on your work, I did some rebasing
> experiments, and as a result I wrote down some guidance about possible
> rebasing strategies in [2].
>
> The reason I write this mail is because I'd like to hear your opinion
> about how to proceed with this pull request. There are two possibilities:
>
> Variant I):
> Merge early in to get the reorganization integrated as soon
> as possible and not wait until shortly before the next release.
>
> Variant II):
> Wait a little bit more until the heavy rearchitecturing
> has settled down a little bit.
>
> What is your opinion? What are your preferences?
>
>
> Regards,
>
> Matthias
>
>
> [1] https://github.com/openssl/openssl/pull/9333
> [2] https://github.com/openssl/openssl/pull/9333#issuecomment-536105158
>


Re: Do we really want to have the legacy provider as opt-in only?

2019-07-17 Thread Tim Hudson
My view point (which has been stated elsewhere) is that OpenSSL-3.0 is
about internal restructuring to allow for the various things noted in the
design documents.
It is not about changing the feature set (in a feature reduction sense).

In future releases we will make the mixture of providers available more
capable and may adjust what algorithms are available and may even do things
like place national ciphers in separate providers.
But OpenSSL-3.0 is *not* the time to do any of those things.

We should be focused on the restructuring and getting the FIPS140 handling
in place and not making policy decisions about changing algorithm
availability or other such things.
The objective is that the vast majority of applications that use
OpenSSL-1.1 can use OpenSSL-3.0 with a simple recompilation without any
other code changes.

That I believe has been our consistent out-bound message in discussions as
a group and our overall driver.

In the future, things may become more dynamic and we may change the
algorithm sets and may use more configuration based approaches and may even
place each algorithm in a separate provider and allow for a whole range of
dynamic handling.
But those are for the future. OpenSSL-3.0 is basically an internally
restructured version of OpenSSL-1.1 with a FIPS140 solution.

Tim.


Re: punycode licensing

2019-07-10 Thread Tim Hudson
On Thu, Jul 11, 2019 at 12:37 AM Dmitry Belyavsky  wrote:

> Dear Tim,
>
> Formally I am a contributor with a signed CLA.
> I took a code definitely permitting any usage without any feedback,
> slightly modified it (at least by openssl-format-source and splitting
> between header and source), and submitted it as my feedback to OpenSSL.
>
> I still think that it will be a good idea if Adam signs the CLA, but if he
> declines, we still have a correct interpretation.
>

In your ICLA it contains the instructions you have to follow (reproduced
here to save you time):

7. Should You wish to submit work that is not Your original creation, You
may submit it to the Foundation separately from any Contribution,
identifying the complete details of its source and of any license or other
restriction (including, but not limited to, related patents, trademarks,
and license agreements) of which you are personally aware, and
conspicuously marking the work as "Submitted on behalf of a third-party:
[named here]".


Your current PR at https://github.com/openssl/openssl/pull/9199  does not
actually do this - basically you have to have punycode.c, punycode.h in a
separate submission not intermixed with anything else.

The reason for not intermixing the code should be pretty clear - as we need
to know which parts belong to someone else and aren't covered by your ICLA
and which parts are - with no possibility of confusion.

You would also need to include the *license* that those files are under -
which you have not done - and which according to the RFC is:

Regarding this entire document or any portion of it (including
the pseudocode and C code), the author makes no guarantees and
is not responsible for any damage resulting from its use.  The
author grants irrevocable permission to anyone to use, modify,
and distribute it in any way that does not diminish the rights
of anyone else to use, modify, and distribute it, provided that
redistributed derivative works do not contain misleading author or
version information.  Derivative works need not be licensed under
similar terms.

Separately, Rich Salz indicated he had email from the author with
respect to being willing to license under the Apache License 2.0 which
you would need to get directly from the author (or Rich would need to
be the submitter). Only the author (actually copyright owner) can
change the license terms of code they create. This isn't about the
license.

You really should reach out to the author to ask if he is willing to
sign an ICLA - that is the normal steps involved.
There is nothing particularly onerous in the ICLAs - they are
basically there to provide certainty and a legal background for the
project to be able to provide the code that it does.

You should also note that the license noted in the RFC misses many of
the provisions within the ICLA and within the Apache License 2.0
itself and is incompatible with the Apache License 2.0 because it
contains restrictions and conditions beyond those stated in this
license.

After all the work that the project did to be able to move to its
current license (and a lot of that work was Rich Salz's efforts) it is
important that we maintain the foundation of the clear license terms
for the entire code base.

Tim.


Re: punycode licensing

2019-07-10 Thread Tim Hudson
Previous assertions that if the license was compatible that we don't need a
CLA in order to accept a contribution were incorrect.
You are now questioning the entire purpose of contributor agreements and
effectively arguing they are superfluous and that our policy should be
different.

You are (of course) entitled to your opinion on the topic - however the
project view and policy on this is both clear and consistent even if it is
different from what you would like to see.

If someone else wants to create a derivative of the software and combine in
packages under other licenses (Apache License or otherwise) without having
CLAs in place then that is their choice to do so as long as they adhere to
the license agreement.
Again, all of this is use under the license. What our policies cover is
contributions that the project itself will distribute - an entirely
separate context from what others can do with the resulting package.

The CLAs are not the same as code being contributed under an Apache License
2.0.
There are many sound reasons for CLAs existing, and discussion of those
reasons isn't an appropriate topic IMHO for openssl-project.

Tim.



On Wed, Jul 10, 2019 at 8:08 PM Salz, Rich  wrote:

> Thank you for the reply.
>
>
>
> > The license under which the OpenSSL software is provided does not
> > require "permission" to be sought for use of the software.
>
> See https://www.openssl.org/source/apache-license-2.0.txt
> 
>
>
>
> Use, as defined by the license, doesn’t just mean end-users, and it is not
> limited to compiling, linking, and running executables.  A recipient can
> make derivative items, redistribute, and so on. All of those things are
> what OpenSSL would do if it “took in” code into the source base.
>
>
>
> So why does the project require permission from other Apache-licensed
> licensed software? In other words, why will the project not accept and use
> the rights, covered by copyright and license, that it grants to others?
>
>
>


Re: punycode licensing

2019-07-09 Thread Tim Hudson
On Wed, Jul 10, 2019 at 1:58 AM Salz, Rich  wrote:
> Thank you for the update. This brings to mind a few additional questions:
>
> 1. Does other code which is copyright/licensed under the Apache 2 license
also require CLAs?
See points 1-3 of previous email. CLAs are required for anything
non-trivial.

> 2. Does other code which is in the public domain also require CLAs?
See points 1-3 of previous email. CLAs are required for anything
non-trivial.

> 3. Does OpenSSL expect that anyone using OpenSSL and bundling it with
Apache 2 software must first ask the project for permission?

That is an entirely separate question and all the project states is the
license under which we offer the software.
That question can be more broadly worded as "Does OpenSSL expect that
anyone using OpenSSL must first ask the project for permission?"

The license under which the OpenSSL software is provided does not require
"permission" to be sought for use of the software.
See https://www.openssl.org/source/apache-license-2.0.txt

So in short the answer is "no" because the software is under a license that
doesn't require permission to be sought for its use.

> 4. Assuming #1 is yes and #3 is no, can you explain why there is a
difference?

Because 1 and 2 are about *contributing *code that the project then offers
under a license, whereas 3 is about *using* the produced code under its
license.
They are completely different contexts (one in-bound, one out-bound).
And they are completely different entities (1&2 are about requirements
the *project
*places on contributions, and 3 is about requirements the license places on
*users* of the software).

Tim.


Re: punycode licensing

2019-07-09 Thread Tim Hudson
From OMC internal discussions:

For all contributions that are made to OpenSSL there are three
circumstances that can exist:
1) the contribution is considered trivial - no CLA required
2) the contribution is non-trivial and the copyright is owned by the
submitter (or by the company they work for) - ICLA (and CCLA) required
3) the contribution is non-trivial and the copyright is owned by someone
other than the submitter and the copyright owner acknowledges that the
submission is on their behalf - ICLA (and CCLA) from the copyright owner
required.

Our CLA policy and the CLA documents themselves operate to cover
contributions as described above and the CLA policy itself notes no
exceptions for contributions outside of these circumstances.
The only mechanism for a contribution to be accepted that does not meet the
CLA policy is if the OMC votes to explicitly accept a contribution without
a CLA as a special exception to the CLA policy.

Notes:
a) the OMC has not to this date voted to approve inclusion of any
contribution without a CLA in place since the CLA policy was established in
June 2016;
b) the OMC does not currently have a policy to allow exceptions to the CLA
policy based on the license terms of a contribution

Thanks,
Tim.


On Fri, Jun 21, 2019 at 4:24 PM Tim Hudson  wrote:

> Unfortunately, the issue isn't the compatibility of the license - they do
> indeed look relatively compatible to me - and the discussion on this thread
> has so far been about that.
> However the contributor license agreement requires that the copyright
> owner grants such permission - it is the fundamental basis of contributor
> agreements.
>
> Both the CCLA and ICLA make that exceedingly clear the contributor
> (individual or company) is "*the copyright owner or legal entity
> authorized by the copyright owner*" and the grants in the CLA are not
> grants that the notice in the RFC provides.
>
> In this case, the person who raised the PR is unable to meet those
> requirements (please do correct me if I am wrong on that) and as such their
> contribution is unable to be accepted.
>
> Tim.
>
>
> On Fri, Jun 21, 2019 at 12:12 PM Dr Paul Dale 
> wrote:
>
>> It seems okay from here too.
>>
>> Pauli
>> --
>> Dr Paul Dale | Cryptographer | Network Security & Encryption
>> Phone +61 7 3031 7217
>> Oracle Australia
>>
>>
>>
>> > On 21 Jun 2019, at 11:59 am, Benjamin Kaduk  wrote:
>> >
>> > On Thu, Jun 20, 2019 at 12:27:38PM -0400, Viktor Dukhovni wrote:
>> >> On Thu, Jun 20, 2019 at 03:39:10PM +0100, Matt Caswell wrote:
>> >>
>> >>> PR 9199 incorporates the C punycode implementation from RFC3492:
>> >>>
>> >>> https://github.com/openssl/openssl/pull/9199
>> >>>
>> >>
>> >> I'd be comfortable with relicensing under Apache, while clearly
>> >> indicating the provenance of the code, and indicating that the
>> >> file is also available under the original terms.
>> >
>> > Me, too.
>> >
>> > -Ben
>>
>>


Re: punycode licensing

2019-06-22 Thread Tim Hudson
Unfortunately, the issue isn't the compatibility of the license - they do
indeed look relatively compatible to me - and the discussion on this thread
has so far been about that.
However the contributor license agreement requires that the copyright owner
grants such permission - it is the fundamental basis of contributor
agreements.

Both the CCLA and ICLA make that exceedingly clear the contributor
(individual or company) is "*the copyright owner or legal entity authorized
by the copyright owner*" and the grants in the CLA are not grants that the
notice in the RFC provides.

In this case, the person who raised the PR is unable to meet those
requirements (please do correct me if I am wrong on that) and as such their
contribution is unable to be accepted.

Tim.


On Fri, Jun 21, 2019 at 12:12 PM Dr Paul Dale  wrote:

> It seems okay from here too.
>
> Pauli
> --
> Dr Paul Dale | Cryptographer | Network Security & Encryption
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
> > On 21 Jun 2019, at 11:59 am, Benjamin Kaduk  wrote:
> >
> > On Thu, Jun 20, 2019 at 12:27:38PM -0400, Viktor Dukhovni wrote:
> >> On Thu, Jun 20, 2019 at 03:39:10PM +0100, Matt Caswell wrote:
> >>
> >>> PR 9199 incorporates the C punycode implementation from RFC3492:
> >>>
> >>> https://github.com/openssl/openssl/pull/9199
> >>>
> >>
> >> I'd be comfortable with relicensing under Apache, while clearly
> >> indicating the provenance of the code, and indicating that the
> >> file is also available under the original terms.
> >
> > Me, too.
> >
> > -Ben
>
>


Re: Removing function names from errors (PR 9058)

2019-06-13 Thread Tim Hudson
On Thu, Jun 13, 2019 at 6:40 PM Salz, Rich  wrote:

> The proper way to handle this, in my experience, is *DO NOT REUSE ERROR
> CODES.*


No. This is a path to a rather unacceptable outcome.
Taking your example and running forward with it, having a separate
out-of-memory error code for every possible context would lead to *589
error codes* just for handling out-of-memory in master at a single level.
And then on top of that you would need to cascade those errors up the call
chain potentially.

Each error condition that might require different treatment should have a
common code.
The concept of out-of-memory (as a simple example) is not context specific
(in terms of handling).
The concept of unable-to-open-file is also not context specific.

What the current error system does is to provide a context for the user
when error conditions occur to be able to have some idea as to what
happened.
It is very much like the java stack trace - and useful for a variety of
reasons.

The error system had two purposes:
1) allow for handling of an error by the calling function (or any function
up the call stack) in a different manner
2) provide the end user with context when things fail (as applications are
generally not very good at doing this)

Both of these are equally important - but aimed at different contexts
(developers, end-users).
Neither context should be ignored when considering changes IMHO.

Tim.
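The distinction being argued here - one shared reason code per error
*condition*, with the call-site context carried alongside it - can be
modelled in a few lines. This is a toy sketch of the idea only, not
OpenSSL's actual error machinery; all names below are invented:

```c
#include <assert.h>
#include <string.h>

/* One shared reason code per error condition, not per call site. */
#define ERR_R_MALLOC_FAILURE 1

/* Each queue entry pairs the shared reason with call-site context,
 * giving the "java stack trace"-like view described above. */
struct err_entry {
    int reason;
    const char *file;
    int line;
    const char *func;
};

static struct err_entry err_queue[16];
static int err_top;

static void err_push(int reason, const char *file, int line, const char *func)
{
    if (err_top < 16)
        err_queue[err_top++] = (struct err_entry){ reason, file, line, func };
}

#define RAISE(reason) err_push((reason), __FILE__, __LINE__, __func__)

/* Two different functions report the *same* condition; a handler can
 * match on the reason code while the user still sees where it arose. */
static void load_key(void)   { RAISE(ERR_R_MALLOC_FAILURE); }
static void parse_cert(void) { RAISE(ERR_R_MALLOC_FAILURE); }
```

A caller handling out-of-memory then tests a single reason code rather
than one of 589 context-specific ones, while the queued file/line/function
context still serves the end-user reporting purpose.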


Re: proposed change to committers policy

2019-05-24 Thread Tim Hudson
On Fri, May 24, 2019 at 7:34 PM Matt Caswell  wrote:

> On 24/05/2019 10:28, SHANE LONTIS wrote:
> > It doesn’t stop us both reviewing a PR. That doesn’t mean we both need
> to approve.
>
> Right...but in Matthias's version if you raise a PR, and then Pauli
> approves it,
> then you only then need to get a second committer approval. Otherwise you
> would
> need to get an OMC approval.
>

It works that way in the original wording too - which is more simply stated
IMHO.
You choose which approvals you combine. If there are three - select
whichever two make the set you want to "combine" as such.

I also didn't see a need for a separate OMC approval if a committer
submitted something and a same-organisation OMC member approved it - it
just needs one more -  so the combination of approvals can be made from
not-the-same-organisation.

Tim.


proposed change to committers policy

2019-05-24 Thread Tim Hudson
As part of various discussions, I've drafted a proposed (not yet put to a
formal vote) change to the committers policy to address the perception of a
potential conflict-of-interest situation. I don't believe that we have
actually encountered a conflict of interest in our current policy, but
avoiding the perception that the potential is there I think is worthwhile.

It also would encode formally the practice that individuals have been
following on an informal basis as a number of those individuals have noted.

Note the OSF and the OSS are noted in the bylaws - I haven't expanded that
here - but effectively that covers the organisations that are governed by
the OMC which means resources like the OpenSSL fellows are not impacted by
this policy change. i.e. one OpenSSL fellow can approve another OpenSSL
fellow proposed change (as is our regular practice).

I've tried to keep it simple and precise.

Tim.

diff --git a/policies/committers.html b/policies/committers.html
index 80e31c8..22d78b2 100644
--- a/policies/committers.html
+++ b/policies/committers.html
@@ -77,6 +77,11 @@
 including one from the OMC.
   

+   In considering approvals, the combined approvals must come
+   from individuals who work for separate organisations.
+   This condition does not apply where the organisation is the
+   OSF or OSS.
+
   This process may seem a little heavy, but OpenSSL is a
large,
   complicated codebase, and we think two reviews help prevent
   security bugs, as well as disseminate knowledge to the
growing


Re: No two reviewers from same company

2019-05-23 Thread Tim Hudson
We have discussed this at numerous OMC meetings in terms of how to manage
potential *perceived *conflicts of interest that might arise if people
outside of the fellows come from the same company and hence can effectively
turn the OMC review control mechanism into a single control rather than a
dual control.

We discussed tooling changes to make checking this possible given that in
each instance we have had the individuals involved make a commitment to
avoid that situation (through their own actions).
Occasionally that didn't happen and the person "corrected" it when pointed
out.

We haven't formally voted to make such a change - however it is something
that I think we should have in place and I do support.
Making a formal policy change of course will go through our usual decision
making process.

What I was expecting tooling-wise is that the scripts would detect this
situation and advise - at the very least warn - and potentially blocking
things.

The OpenSSL fellows are in a completely different context - the company
they work for is directed by the OMC - so there isn't a separate external
third party source of influence so there is no reasonable mechanism to
*perceive* a potential conflict of interest.

Note - this is all about *perceptions* of a *potential* situation - not
about something we are actually concerned about for the individuals
involved.
However it is prudent to address even the perception of a path for
potential conflicts of interest in my view.

Tim.




On Fri, May 24, 2019 at 8:16 AM Paul Dale  wrote:

> There hasn't been a vote about this, however both Shane and I have
> committed to not approve each other's PRs.
>
> I also asked Richard if this could be mechanically enforced, which I
> expect will happen eventually.
>
>
> Pauli
> --
> Oracle
> Dr Paul Dale | Cryptographer | Network Security & Encryption
> Phone +61 7 3031 7217
> Oracle Australia
>
>
> -Original Message-
> From: Salz, Rich [mailto:rs...@akamai.com]
> Sent: Friday, 24 May 2019 1:01 AM
> To: openssl-project@openssl.org
> Subject: Re: No two reviewers from same company
>
> > I understand that OpenSSL is changing things so that, by mechanism
> (and maybe by
> > policy although it’s not published yet), two members of the same
> company cannot
> > approve the same PR.  That’s great.  (I never approved Akamai
> requests unless it
> > was trivial back when I was on the OMC.)
>
> No such decision has been made as far as I know although it has been
> discussed
> at various times.
>
> In private email, and
> https://github.com/openssl/openssl/pull/8886#issuecomment-494624313 the
> implication is that this was a policy.
>
> > Should this policy be extended to OpenSSL’s fellows?
>
> IMO, no.
>
> Why not?  I understand build process is always handled by Matt and Richard
> (despite many attempts in the past to expand this), but I think if Oracle
> or Akamai can't "force a change" then it seems to me that the OMC shouldn't
> either.
>
>
>


Re: Thoughts on OSSL_ALGORITHM

2019-03-22 Thread Tim Hudson
"handle" is the wrong name for this - if you want to have private const
data then do that rather than something which might be abused for instance
specific information. It could just be an int even or a short. It doesn't
have to be a pointer.

That would reduce the likelihood of it being used to hold non-const data.

Tim.

On Sat, 23 Mar. 2019, 9:11 am Dr Paul Dale wrote:

> I’ve no issue having a provider data field there.  It will be useful for
> more than just this (S390 AES e.g. holds data differently to other
> implementations).
>
> I also don’t think forcing separate functions is a big problem — most
> providers will only implement one or two algorithm families which will help
> control the redundancy.
>
> I don’t think we should be doing a string lookup every time one of these
> is called.
>
>
> Of the three, the provider data feels clean and unique functions fast.
>
> I’d like to avoid mandating another level of indirection (it’s slow),
> which is a risk with provider data.
>
>
> My thought: add the provider data field.  Use that when it can be done
> directly, use unique functions otherwise.
> The example with key and iv lengths would be a direct use.  Code that
> dives through a function pointer or a switch statement would be an example
> of not.
>
>
>
> Pauli
> --
> Dr Paul Dale | Cryptographer | Network Security & Encryption
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
> On 23 Mar 2019, at 1:45 am, Matt Caswell  wrote:
>
> Currently we have the OSSL_ALGORITHM type defined as follows:
>
> struct ossl_algorithm_st {
>const char *algorithm_name;  /* key */
>const char *property_definition; /* key */
>const OSSL_DISPATCH *implementation;
> };
>
> I'm wondering whether we should add an additional member to this
> structure: a
> provider specific handle. i.e.
>
> struct ossl_algorithm_st {
>const char *algorithm_name;  /* key */
>const char *property_definition; /* key */
>const OSSL_DISPATCH *implementation;
>void *handle;
> };
>
> The reason to do this is because providers are likely to want to share the
> same
> implementation across multiple algorithms, e.g. the init/update/final
> functions
> for aes-128-cbc are likely to look identical to aes-256-cbc with the only
> difference being the key length. A provider could use the handle to point
> to
> some provider side structure which describes the details of the algorithm
> (key
> length, IV size etc). For example in the default provider we might have:
>
> typedef struct default_alg_handle_st {
>int nid;
>size_t keylen;
>size_t ivlen;
> } DEFAULT_ALG_HANDLE;
>
> DEFAULT_ALG_HANDLE aes256cbchandle = { NID_aes_256_cbc, 32, 16 };
> DEFAULT_ALG_HANDLE aes128cbchandle = { NID_aes_128_cbc, 16, 16 };
>
> static const OSSL_ALGORITHM deflt_ciphers[] = {
>{ "AES-256-CBC", "default=yes", aes_cbc_functions, &aes256cbchandle },
>{ "AES-128-CBC", "default=yes", aes_cbc_functions, &aes128cbchandle },
>{ NULL, NULL, NULL }
> };
>
> Then when the "init" function is called (or possibly at newctx), the core
> passes
> as an argument the provider specific handle associated with that algorithm.
>
> An alternative is for the provider to pass the algorithm name instead, but
> this
> potentially requires lots of strcmps to identify which algorithm we're
> dealing
> with which doesn't sound particularly attractive.
>
> A second alternative is for each algorithm to always have unique functions
> (which perhaps call some common functions underneath). This sounds like
> lots of
> unnecessary redundancy.
>
> Thoughts?
>
> Matt
>
>
>
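A minimal sketch of how the proposed handle field could be consumed by a
shared implementation follows. The simplified context type and init
signature are invented for illustration - the real provider interface
passes parameters differently:

```c
#include <assert.h>
#include <stddef.h>

/* Per-algorithm description, as in Matt's example (nid values invented). */
typedef struct default_alg_handle_st {
    int nid;
    size_t keylen;
    size_t ivlen;
} DEFAULT_ALG_HANDLE;

static DEFAULT_ALG_HANDLE aes256cbchandle = { 1, 32, 16 };
static DEFAULT_ALG_HANDLE aes128cbchandle = { 2, 16, 16 };

typedef struct cipher_ctx_st {
    size_t keylen;
    size_t ivlen;
} CIPHER_CTX;

/* One shared init services every AES-CBC variant: the per-algorithm
 * differences come in through the handle, not through duplicated code
 * or string comparisons on the algorithm name. */
static int aes_cbc_init(CIPHER_CTX *ctx, void *handle)
{
    DEFAULT_ALG_HANDLE *h = handle;

    ctx->keylen = h->keylen;
    ctx->ivlen = h->ivlen;
    return 1;
}
```

This is the "direct use" case: the handle is read once at init, with no
extra level of indirection on the per-block code path.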


function and typedef naming thoughts

2019-03-05 Thread Tim Hudson
Looking at PR#8287 I think we need to get some naming schemes written down
and documented and followed consistently. The naming used in this PR seems
to be somewhat inconsistent.

For me, I think the naming convention most often used is

return_type SOMETHING_whatever(SOMETHING *,...)

as a general rule for how we are naming things. There are lots of
exceptions to this in the code base - but this is also pretty much
consistent.

And we use typedef names in all capitals for what we expect users to work
with.
We avoid the use of pure lowercase in naming functions or typedefs that are
in the public API that we expect users to work with - all lowercase means
this is "internal" only usage.

And we reserve OSSL and OPENSSL as prefixes that we feel are safe to place
all new names under.

Tim.
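As a hypothetical illustration of the convention described above - the
`OSSL_WIDGET` type and its functions are invented for this sketch and are
not part of any OpenSSL API:

```c
#include <assert.h>
#include <stdlib.h>

/* Public typedef in all capitals: what users are expected to work with. */
typedef struct ossl_widget_st OSSL_WIDGET;

/* All-lowercase struct tag: internal usage only, kept opaque to users. */
struct ossl_widget_st {
    int refcount;
};

/* Public functions follow return_type SOMETHING_whatever(SOMETHING *, ...)
 * under the reserved OSSL prefix. */
OSSL_WIDGET *OSSL_WIDGET_new(void)
{
    OSSL_WIDGET *w = calloc(1, sizeof(*w));

    if (w != NULL)
        w->refcount = 1;
    return w;
}

int OSSL_WIDGET_up_ref(OSSL_WIDGET *w)
{
    return ++w->refcount;
}

void OSSL_WIDGET_free(OSSL_WIDGET *w)
{
    if (w != NULL && --w->refcount == 0)
        free(w);
}
```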


Fwd: openssl-announce post from hong...@gmail.com requires approval

2019-02-26 Thread Tim Hudson
Add a -r to your diff command so you recursively compare ... then you will
see the actual code changes.
Without the -r you are only comparing files in the top-level directory of
each tree.

diff *-r* -dup openssl-1.0.2q openssl-1.0.2r

Tim.

-- Forwarded message --
From: Hong Cho 
To: open...@openssl.org, openssl-project@openssl.org
Cc: OpenSSL User Support ML , OpenSSL Announce
ML 
Bcc:
Date: Wed, 27 Feb 2019 08:28:17 +0900
Subject: Re: [openssl-project] OpenSSL version 1.0.2q published

I see no code change between 1.0.2q and 1.0.2r.

--
# diff -dup openssl-1.0.2q openssl-1.0.2r |& grep '^diff' | awk '{print $4}'
openssl-1.0.2r/CHANGES
openssl-1.0.2r/Makefile
openssl-1.0.2r/Makefile.org
openssl-1.0.2r/NEWS
openssl-1.0.2r/README
openssl-1.0.2r/openssl.spec
hongch@hongch_bldx:~/downloads> diff -dup openssl-1.0.2q openssl-1.0.2r |& grep '^Only'
Only in openssl-1.0.2q: Makefile.bak
--

It's supposed to have a fix for CVE-2019-1559? Am I missing something?

Hong.


Re: Thoughts about library contexts

2019-02-18 Thread Tim Hudson
On Mon, Feb 18, 2019 at 8:36 PM Matt Caswell  wrote:

>
>
> On 18/02/2019 10:28, Tim Hudson wrote:
> > It should remain completely opaque.
> > As a general rule, I've never seen a context where someone regretted
> making a
> > structure opaque over time, but the converse is not true.
> > This is opaque and should remain opaque.
> > We need the flexibility to adjust the implementation at will over time.
>
> I think we're debating whether it is internally opaque or not. Externally
> opaque
> is a given IMO.
>


And my comments apply to internally opaque too - I was aware of that
context when I wrote them  - this is something that we will want to change
as it evolves over time.
And we shouldn't have a pile of knowledge of the internals of one part of
the library spreading over the other parts.

Tim.


Re: Thoughts about library contexts

2019-02-18 Thread Tim Hudson
It should remain completely opaque.
As a general rule, I've never seen a context where someone regretted making
a structure opaque over time, but the converse is not true.
This is opaque and should remain opaque.
We need the flexibility to adjust the implementation at will over time.

For anything where partial visibility is seen as desirable, it should be
raised specifically as to what issue would drive that - as frankly I don't
see a context where that would make sense.

Tim.


[openssl-project] inline functions

2019-01-27 Thread Tim Hudson
From https://github.com/openssl/openssl/pull/7721

Tim - I think inline functions in public header files simply shouldn't be
present.
Matt - I agree
Richard - I'm ambivalent... in the case of stack and lhash, the generated
functions we made static inline expressly to get better C type safety, and
to get away from the mkstack.pl horror.

It would be good to get a sense of the collective thoughts on the topic.

Tim.
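For context, the type-safety argument for the generated stack functions is
roughly the following pattern - a heavily simplified imitation, not the
real DEFINE_STACK_OF machinery from safestack.h: a generic void*-based
core is wrapped by macro-generated static inline functions so each element
type gets compile-time checking.

```c
#include <assert.h>
#include <stddef.h>

/* Generic, type-unsafe core (normally hidden inside the library). */
typedef struct stack_st { void *elems[32]; int num; } OPENSSL_STACK;

static int OPENSSL_sk_push(OPENSSL_STACK *sk, void *p)
{
    if (sk->num >= 32)
        return 0;
    sk->elems[sk->num++] = p;
    return sk->num;
}

static void *OPENSSL_sk_value(const OPENSSL_STACK *sk, int i)
{
    return (i < 0 || i >= sk->num) ? NULL : sk->elems[i];
}

/* Macro-generated static inline wrappers give per-type safety:
 * pushing the wrong pointer type becomes a compile-time diagnostic. */
#define DEFINE_STACK_OF(T)                                              \
    typedef struct stack_st_##T { OPENSSL_STACK sk; } STACK_OF_##T;     \
    static inline int sk_##T##_push(STACK_OF_##T *sk, T *p)             \
    { return OPENSSL_sk_push(&sk->sk, p); }                             \
    static inline T *sk_##T##_value(const STACK_OF_##T *sk, int i)      \
    { return (T *)OPENSSL_sk_value(&sk->sk, i); }

typedef struct { int serial; } X509ish;
DEFINE_STACK_OF(X509ish)
```

The question in the thread is whether such static inline definitions
belong in *public* headers, where they freeze implementation detail into
the installed API, or only in internal ones.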
___
openssl-project mailing list
openssl-project@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-project

Re: [openssl-project] [TLS] Yet more TLS 1.3 deployment updates

2019-01-24 Thread Tim Hudson
On Thu, Jan 24, 2019 at 9:45 PM Matt Caswell  wrote:

> > This notion of "handshake" is not supported by RFC 8446, which uses the
> > terms "the handshake", "a handshake", and "post-handshake".
> > "Post-handshake", in particular, implies KeyUpdate messages are after
> > the handshake, not part of it.
>
> I just don't agree with you here. About the best that can be said about
> RFC8446
> in this regards is that the term handshake is overloaded. It certainly
> does mean
> "the initial handshake" in the way that you describe (and I myself use the
> term
> to mean that). But it is *also* used in other contexts, such as "handshake
> messages" or "handshake protocol" where it is referring to things not
> necessarily constrained to the initial handshake.
>

I agree with Matt here - there is no such clear distinction made in RFC8446
- with "handshake" being used in *all* contexts.
If such a distinction was intended by the IETF WG then they failed to
achieve it in RFC8446 in numerous places.

Quoting RFC8446 ...

4.6.3.  Key and Initialization Vector Update

   The KeyUpdate *handshake message ...*


It doesn't help that it has a section 4.6, Post-Handshake Messages, which
states "after the main handshake" - also indicating that the handshake
messages are handshakes too - just not the "main handshake".

Tim.

Re: [openssl-project] To deprecate OpenSSL_version() or not

2018-12-05 Thread Tim Hudson
The function has been there for a long time (since then beginning) and it
is all about version related information - so both names aren't exactly
clearly descriptive.

OpenSSL_version_information() would be a better name.

I would also argue that the "version" program should be renamed "info" as
the same argument would equally apply.

However I do not think we should rename a function and deprecate a function
that is very widely used.

And the function should also cover everything that the current "version"
application covers (like seeding source etc). The ifdefs there are not
something we should expect applications to repeat.

Tim.

On Wed, 5 Dec. 2018, 5:50 pm Matt Caswell wrote:

> Richard and I are discussing whether OpenSSL_version() should be
> deprecated or
> not in favour of a new function OPENSSL_info() which does more or less the
> same
> thing. See:
>
> https://github.com/openssl/openssl/pull/7724#discussion_r239067887
>
> Richard's motivation for doing so is that he finds the old name "strongly
> misleading". I disagree and prefer not to deprecate functions like this
> just
> because we don't like the name (which eventually will lead to breakage
> when we
> remove the deprecated functions in some future major version (not 3.0.0))
>
> I'd appreciate more input on the discussion.
>
> Matt

Re: [openssl-project] Minimum C version

2018-10-07 Thread Tim Hudson
I don't see a *substantial benefit* from going to C99 and I've worked on
numerous embedded platforms where it is highly unlikely that C99 support
will ever be available.

Kurt - do you have a specific list of features you think would be
beneficial - or is it just a general sense to move forward?

We should ensure that C++ builds work - that is mostly keyword avoidance -
but sticking with the base C89/C90 in my experience is still a reasonable
position.

For Microsoft, reading
https://social.msdn.microsoft.com/Forums/en-US/fa17bcdd-7165-4645-a676-ef3797b95918/details-on-c99-support-in-msvc?forum=vcgeneral
and
https://blogs.msdn.microsoft.com/vcblog/2013/07/19/c99-library-support-in-visual-studio-2013/
may assist.

I know there are Microsoft platforms that require use of earlier compilers
than VS2013 to support (unfortunately).

Tim.


On Sun, Oct 7, 2018 at 11:33 PM Kurt Roeckx  wrote:

> On Sun, Oct 07, 2018 at 02:01:36PM +0100, David Woodhouse wrote:
> > Unfortunately Microsoft still does not support C99, I believe. Or did
> that get fixed eventually, in a version that can reasonably be required?
>
> That is a very good point, and they never intend to fix that.
>
> So would that mean we say that VC will be unsupported? Or that we
> should make it so that it can be build using C++?
>
>
> Kurt
>

Re: [openssl-project] Release strategy updates & other policies

2018-09-28 Thread Tim Hudson
On Fri, Sep 28, 2018 at 4:55 PM Matt Caswell  wrote:

> Either we go with semver and totally commit to it - or we stick with what
> we've already got. No
> half-way, "well we're kind of doing semver, but not really".
>

+1

I see no point in changing what we are doing *without* getting the benefit
of following the semantic versioning approach.
Right now things like 1.1.0 with a major API change and 1.1.1 with a major
feature update of TLSv1.3 are confusing to those who haven't been along for
the journey.

Our current handling of version numbering surprises developers and requires
careful explanation.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 11:02 PM Matt Caswell  wrote:

> You're right on this one. I misread the diff.
>

Not a problem - you are doing the look-at-what-we-did and how it would be
impacted - and that is certainly what we should be doing - working through
what impact this would have had.
Semantic versioning is a major change in behaviour with a focus on the API
being directly reflected in the versioning scheme.

It does mean that a lot of how we have handled things in the past in terms
of what was okay to go into a patch release changes.
Patch releases become *pure bug fix* releases with no API changes (of any
form) and that is very different.
We have taken a relatively flexible interpretation - and put in a lot more
than bug fixes into the letter (patch) releases - we have added upwards
compatible API additions.

It would also mean our LTS releases are MAJOR.MINOR - as the PATCH is the
fixes we will apply - so it isn't part of the LTS designation as such.
e.g. 5.0.x would be the marker - not 5.0.0 - so 5.0 in shorthand form.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 10:37 PM Matt Caswell  wrote:

> - Added some new macros:
> https://github.com/openssl/openssl/pull/6037


No we didn't change our public API for this one - we changed *internals*.
The change to include/openssl/crypto.h was entirely changing *comments* to
document that the internals used more of a reserved space - but the API
wasn't changed.

And it isn't a matter of what we did for each item - it is a matter of
whether we could have fixed certain things that we would classify as *bugs*
without impacting the API - we didn't have that as a rule - but if we did
then what would we have done.
And where we missed an API then it is not a PATCH. Where we got something
that wasn't spelled correctly - it has to wait for a MINOR to be fixed. A
patch doesn't change APIs.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 9:22 PM Matt Caswell  wrote:

> Lets imagine we release version 5.0.0. We create a branch for it and
> declare a support period. Its an LTS release. This is a *stable*
> release, so we shouldn't de-stabilise it by adding new features.
>
> Later we do some work on some new features in master. They are backwards
> compatible in terms of API so we release it as version 5.1.0. Its got a
> separate support period to the LTS release.
>
> We fix some bugs in 5.0.0, and release the bug fixes as 5.0.1. All fine
> so far. But then we realise there is a missing accessor in it. Its an
> LTS release so its going to be around for a long time - we really need
> to add the accessor. But we *can't* do it!! Technically speaking,
> according to the rules of semantic versioning, that is a change to our
> API - so it needs to be released as a MINOR version change. That would
> make the new version 5.1.0... but we already used that number for our
> backwards compatible feature release!
>

And that is what semantic versioning is about - it is about the API.
So if you add to the API you change either MINOR or MAJOR.
In your scenario the moment you need to add an API you are doing a MINOR
release and if you already have a MINOR release under the MAJOR then you
need to add a MINOR on top of the latest MINOR that you have.
You don't get to make API changing releases that expand the API behind the
existing releases that are out there.

That is not how a semantically versioned project behaves.

The rules are strict for a reason to stop some of the practices that we
have - where PATCH releases add APIs.

Part of the precondition for a semantically versioned project is that the
API (and in this sense this is the public API) is under "control" as such.

I think there are very few circumstances under which we have needed to add
APIs - and I think outside of accessor functions during the opacity changes
- I don't know that there were any API additions that weren't actually
avoidable by solving the problem without adding an API. I don't know - I
haven't checked - but none leap to the front on mind. We have done that
simply because we didn't have a strict rule in place about API additions -
we do about changes or deletions - but we tend to view additions as
relatively safe (and they are from a backwards compatible perspective - but
they are not from a semantic versioning point of view).

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 8:07 PM Matt Caswell  wrote:

> On 25/09/18 10:58, Tim Hudson wrote:
> > On Tue, Sep 25, 2018 at 7:23 PM Richard Levitte  > <mailto:levi...@openssl.org>> wrote:
> >
> > So what you suggest (and what I'm leaning toward) means that we will
> > change our habits.
> >
> >
> > Adoption of semantic versioning will indeed require us to change our
> > habits in a number of areas - that is the point of having a single clear
> > definition of what our version numbers mean.
> >
> > I also do think we need to document a few things like what we mean by
> > bug fix compared to a feature update as there are items which we could
> > argue could either be a PATCH or a MINOR update depending on how we
> > describe such things.
>
> Sometimes we need to add a function in order to fix a bug. How should
> this be handled? e.g. there are 60 functions defined in
> util/libcrypto.num in the current 1.1.0i release that weren't there in
> the original 1.1.0 release.
>


In semantic versioning those are MINOR releases. The API has changed in a
backward compatible manner.
They cannot be called PATCH releases - those are only for internal changes
that fix incorrect behaviour.
If you change the API you are either doing a MINOR or a MAJOR release.

Now I think the flexibility we added during 1.1.0 when the MAJOR change was
to make APIs opaque was a different context where our API remained unstable
(in semantic terms) yet we did a release (for other reasons).
Under semantic versioning, 1.1.0 would have been called 2.0.0 and we would
have had to batch up the accessor API additions into a 2.1.0 release and
not have those included in any 2.0.X PATCH release.

It is quite a change under semantic versioning in some areas as it
basically requires the version to reflect API guarantees.

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
On Tue, Sep 25, 2018 at 7:23 PM Richard Levitte  wrote:

> So what you suggest (and what I'm leaning toward) means that we will
> change our habits.
>

Adoption of semantic versioning will indeed require us to change our habits
in a number of areas - that is the point of having a single clear
definition of what our version numbers mean.

I also do think we need to document a few things like what we mean by bug
fix compared to a feature update as there are items which we could argue
could either be a PATCH or a MINOR update depending on how we describe such
things.
Getting those things documented so we can be consistent is a good thing
IMHO. The specifics of which we place in PATCH and which we place in MINOR
are less important than being consistent in handling the same item.

For example:
- adding an ASM implementation for performance reasons - is that PATCH or
MINOR
- changing an ASM implementation for performance reasons - is that PATCH or
MINOR
- adding an ASM implementation to get constant time behaviour - is that
PATCH or MINOR
- changing an ASM implementation for constant time behaviour - is that
PATCH or MINOR

For all four of the above examples the API is the same (assuming that the
low-level APIs are not actually exposed in the public interface for any of
these).
And deciding on those depends how you view performance - is it a bug that
something runs slower than it could - or is it a feature.

Good arguments can be made for always MINOR or for PATCH - but I think we
should have a clear view on how we will handle such things going forward
given the OMC members have differing views on the topic and we shouldn't
end up with different handling depending on which members in which timezone
are awake for a given pull request :-)

Tim.

Re: [openssl-project] Release strategy updates & other policies

2018-09-25 Thread Tim Hudson
A fairly common approach that is used is that you can only remove something
that has been marked for deprecation at a MAJOR release version boundary.

That is entirely independent of the semantic versioning view of things -
which also happens to say the same thing (that adding a deprecation
indication is at least a MINOR release and the removal is at a MAJOR
release).
PATCH versions should never change an API.
So we start warning whenever it is that we decide to deprecate in a MINOR
release, but any actual removal (actual non-backwards compatible API
change) doesn't come in place until a MAJOR release.

I see marking things for deprecation is a warning of a pending
non-backwards-compatible API change - i.e. that there is another way to
handle the functionality which is preferred which you should have switched
across to - but for now we are warning you as the final step before
removing an API at the next MAJOR release. We haven't committed we will
actually remove it - but we have warned you that we *expect* to remove it.

Tim

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 3:12 PM Viktor Dukhovni 
wrote:

> The proposal to move the minor version into nibbles 2 and 3 breaks this
> OpenSSH function.
>

No it doesn't - because I'm not talking about moving *anything* in the
current encoding for major and minor - see earlier post.
The positions don't change for minor version. We just stop using the
current PATCH and use the current FIX as PATCH.
And the logical test there remains valid in that it detects all
incompatible versions correctly - what changes is that some versions that
are compatible are seen as incompatible - but that is an incorrect
interpretation that is *safe.*

And note that the openssh code there is actually more conservative than it
needs to be already.

And under semantic versioning, it is only the MAJOR that changes when
breaking changes happen.

But what I've been referring to is that there is other code that uses our
documented encoding and parses it ... this isn't just an openssh issue.

Tim

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
If you accept that we have a MAJOR.MINOR.FIX.PATCH encoding currently
documented and in place and that the encoding in OPENSSL_VERSION_NUMBER is
documented then the semantic versioning is an easy change.
We do not need to change our encoding of MAJOR.MINOR - that is documented
and fixed. We just need to start using it the way semantic versioning says
we should.
We need to eliminate FIX - there is no such concept in semantic versioning.
And PATCH exists as a concept but in semantic versioning terms it is a
number.

That leads to two main choices in our current encoding as to how we achieve
that:
1) leave old FIX blank
2) move old PATCH to old FIX and leave old PATCH blank (i.e. old FIX is
renamed to new PATCH and old PATCH is eliminated)

Option 2 offers the *least surprise* to users - in that it is how most
users already think of OpenSSL in terms of a stable API - i.e. our current
MAJOR.MINOR.FIX is actually read as MAJOR.MINOR.PATCH or to be more precise
our users see FIX and PATCH as combined things - they don't see our MAJOR
and MINOR as combined things.

Under option 2 it does mean anyone that thinks a change of our current
MINOR means an ABI break would be interpreting things incorrectly - but
that interpretation is *entirely safe* - in that they would be assuming
something more conservative rather than less conservative. And leaving the
old PATCH blank doesn't hurt anyone.

I don't think that moving to semantic versioning requires us to change our
encoding of OPENSSL_VERSION_NUMBER except to move PATCH to FIX - as one of
those concepts disappears.
And then when we do a major release update (like 1.1.1) we end up with a
MAJOR version change.

Alternatively, we can elect to effectively say the OPENSSL_VERSION_NUMBER
encoding we have documented and supported is subject to yet another change
where we reinterpret things.
That is doable - but I also think entirely unnecessary and confusing for
our users.

Tim

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 11:55 AM Viktor Dukhovni 
wrote:

> this is an ad-hoc encoding with monitonicity as the
> the only constraint.
>

If you start from the position that the encoding of OPENSSL_VERSION_NUMBER
is free to change so long as the resulting value is larger than what we
have been using for all previous versions then a whole range of options
come into play.
But we shouldn't assert that we aren't changing the meaning of
OPENSSL_VERSION_NUMBER - and that is what others have been doing on this
thread - and what your previous email also asserted.

It is a *breaking* change in our comments and our documentation and in what
our users are expecting. Basically it is an API and ABI change - we said
what the number means and we are changing that.
The impact of the breaking change for those using it for pure feature
testing for API difference handling (where it isn't actually parsed) can be
minimised just by always having a larger number that all previous uses.
The impact of the breaking change on anyone actually following our
documented encoding cannot.
i.e. openssh, as one example Richard pointed out.
That isn't the only code that actually believes our header files and
documentation :-)

Semantic versioning is about MAJOR.MINOR.PATCH with specific meanings.
There is no FIX concept as such. There is PRERELEASE and METADATA.
One of our existing concepts disappears - we have MAJOR.MINOR.FIX.PATCH
currently and we also encode STATUS as a concept (which we can map mostly
to PRERELEASE).

And if we are changing to fit into semantic versioning we need to use its
concepts and naming and not our present ones.
A merge of concepts pretty much goes against the point of semantic
versioning.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, 22 Sep. 2018, 3:24 am Viktor Dukhovni, 
wrote:

> > On Sep 21, 2018, at 12:50 PM, Tim Hudson  wrote:
> > If that is the case then our current practice of allowing ABI breakage
> with
> > minor release changes (the middle number we document as the minor
> release number)
> > has to change.
> CORRECTION:  As advertised when 1.0.0 first came out, and repeated in our
> release
>  policy:
>   As of release 1.0.0 the OpenSSL versioning scheme was improved
>   to better meet developers' and vendors' expectations. Letter
>   releases, such as 1.0.2a, exclusively contain bug and security
>   fixes and no new features. Minor releases that change the
>   last digit, e.g. 1.1.0 vs. 1.1.1, can and are likely to
>   contain new features, but in a way that does not break binary
>   compatibility. This means that an application compiled and
>   dynamically linked with 1.1.0 does not need to be recompiled
>   when the shared library is updated to 1.1.1. It should be
>   noted that some features are transparent to the application
>   such as the maximum negotiated TLS version and cipher suites,
>   performance improvements and so on. There is no need to
>   recompile applications to benefit from these features.
>


I have to disagree here. We said things about *minor releases* and *major
releases* but we didn't say things about the version numbers or change the
documentation or code comments to say the first digit was meaningless and
that we have shifted the definition of what X.Y.Z means.

That parsing of history I think is at best a stretch and not supportable
and also not what our *users* think.

Our users don't call 1.0.2 the 0.2 release of OpenSSL. Our users don't call
1.0.0 the 0.0 release. There isn't a short hand or acceptance or a decision
communicated to conceptually drop the 1. off the front of our versioning.
Your logic and practice is saying you see the first two digits as the major
version number - that's fine - you are welcome to take an ABI compatible
short hand to refer to version - but that doesn't mean we changed the
definition of our versioning system.
What you are tracking there is effectively the SHLIB version.

So if our users don't think that, and our code comments don't say that,
and our pod documentation doesn't say that and users have the first part in
their masks and don't just ignore it then I don't think it is supportable
to claim we told everyone it was a meaningless first digit and changed our
definition of our versioning scheme back at the 1.0.0 release.

Currently when we make minor API changes that aren't breaking we update the
fix version number. When we make major API changes which we expect to be
breaking we update the minor version number.
Now think about the transition from 1.1.0 to 1.1.1 - that is by any
reasonable definition *a major release* - but we don't update the major
version number by either definition of the major version number.
We call it on our website a "feature release" - yet more terminology to
dance around our version numbering scheme.
Read the actual blog post
<https://www.openssl.org/blog/blog/2018/09/11/release111/> and try to parse
that as *a minor release* - it isn't - it is *a major release* - but we did
not change the *major release number *even if we accepted the theory and
your definition that the first number is meaningless and it is only the
second number that is the real major version and the third number isn't a
fix version is it actually the minor version. The implications of that view
point aren't supported by our actions.

We did not say we are redefining the concept of our release numbering
scheme - we just defined what API impacts there were and we used "major"
and "minor" about *release efforts and impacts* - not about changing the
definition of the parts of the OPENSSL_VERSION_NUMBER macro and the
corresponding meaning of what SSLeay_version_num() and now
OpenSSL_version_num() returns. This is a core part of our API.

Remember that SSLeay_version_num() was only renamed OpenSSL_version_num()
in 2015 in commit b0700d2c8de79252ba605748a075cf2e5d670da1
<https://github.com/openssl/openssl/commit/b0700d2c8de79252ba605748a075cf2e5d670da1>
as part of 1.1.0
The first update for the top nibble indicating the major version number had
changed was back in 2009 in commit 093f5d2c152489dd7733dcbb68cbf654988a496c
<https://github.com/openssl/openssl/commit/093f5d2c152489dd7733dcbb68cbf654988a496c>
which
is when 1.0.0 started.

Saying that our documentation in doc/crypto/OPENSSL_VERSION_NUMBER.pod
<https://github.com/openssl/openssl/blob/master/doc/man3/OPENSSL_VERSION_NUMBER.pod>
was just forgotten to be updated

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, 22 Sep. 2018, 2:29 am Viktor Dukhovni, 
wrote:

>
>
> > On Sep 21, 2018, at 12:14 PM, Matt Caswell  wrote:
> >
> > I support Richard's proposal with an epoch of 1.
> > Grudgingly I would accept an epoch in the 3-8 range.
> > I would oppose an epoch of 2.
>
> I can live with that, though it might mean that a minority of
> applications will interpret (based on obsolete nibble extraction)
>


It isn't obsolete nibble extraction - that is the documented way to get the
major version. In the header file and in the API docs. This is changing
things as we document the format of that number currently and some code
does use that.

What Richard proposes is a breaking change conceptually and it doesn't need
to be one.

We can get to semantic versioning and also not break existing documented
usage.

But what we will be doing is following a versioning scheme that states
precisely what it means for API changes.

It says nothing specifically about ABI changes unless you take the view
that ABI == API when reading the document too.

If that is the case then our current practice of allowing ABI breakage with
minor release changes (the middle number we document as the minor release
number) has to change.

And anyone depending on checking the minor version number for ABI
compatibility will break.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 1:39 AM Viktor Dukhovni 
wrote:
>  The only change needed is a minor one in applications that actually
> parse the nibbles

What I was suggesting is that we don't need to break the current encoding
at all.
We have a major.minor.fix version encoded and documented in the header file
that we can continue to represent in semantic terms.
So anyone decoding those items will actually get the correct result (from a
user perspective).
Those using it for conditional compilation checks continue to work.

BTW thanks for the correction on "a" to "1" for the patch release (wasn't
being careful there).

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 1:34 AM Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

>
> On 21.09.2018 17:27, Tim Hudson wrote:
> >
> > We cannot remove the current major version number - as that concept
> exists and we have used it all along.
> > We don't just get to tell our users for the last 20+ years what we
> called the major version (which was 0 for the first half and 1 for the
> second half) doesn't exist.
> >
>
> There has been a famous precedent in history though:  Java dropped the
> "1." prefix when going from "1.4" to "5.0"
> (https://en.wikipedia.org/wiki/Java_version_history#Versioning_change)
>

Unfortunately that isn't a good example - as the version number didn't
actually change - just the marketing and download packaging.
The APIs continue to return 1.5.0 etc.
And many Java developers continue to use the real version numbers.

That is something I wouldn't suggest makes sense as an approach - to change
the tarfile name and leave all the internals the same achieves nothing.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 1:16 AM Viktor Dukhovni 
wrote:

>
>
> > On Sep 21, 2018, at 11:00 AM, Tim Hudson  wrote:
> >
> > If you repeat that in semantic versioning concepts just using the labels
> for mapping you get:
> > - what is the major version number - the answer is clearly "1".
> > - what is the minor version number - the answer is clearly "0"
> > - what is the fix version number - there is no such thing
> > - what is the patch version number - the answer is clearly "2" (reusing
> the fix version)
>
> I'm afraid that's where you're simply wrong.  Ever since 1.0.0, OpenSSL
> has promised (and I think delivered) ABI stability for the *minor* version
> and feature stability (bug fixes only) for the patch letters.  Therefore,
> the semantic version number of "1.0.2a" is "1.0", its minor number is 2
> and its fix number is 1 ("a").
>

No it isn't - as you note, that isn't a valid mapping - 1.0 isn't a semantic
version and there is no such thing as a fix number.
You get three concepts and then on top of that the pre-release and the
build-metadata.

Semantic versioning is about the API not the ABI.

So we could redefine what we have been telling our user all along and
combine our current major+minor in a new major version.
Making 1.0.2a be 10.2.0 in semantic version terms.

We cannot remove the current major version number - as that concept exists
and we have used it all along.
We don't just get to tell our users for the last 20+ years what we called
the major version (which was 0 for the first half and 1 for the second
half) doesn't exist.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Sat, Sep 22, 2018 at 12:32 AM Viktor Dukhovni 
wrote:

> > On Sep 21, 2018, at 10:07 AM, Tim Hudson  wrote:
> >
> > And the output you get:
> >
> > 0x10102000
>
> The trouble is that existing software expects to potential ABI changes
> resulting from changes in the 2nd and 3rd nibbles, and if the major
> version is just in the first nibble, our minor version changes will
> look like major number changes to such software.
>

The problem is that we documented a major.minor.fix.patch as our versioning
scheme and semantic versioning is different.
Semantic versioning does not allow for breaking changes on the minor
version - and that is what we do in OpenSSL currently.
Part of the requests that have come in for semantic versioning is to
actually do what is "expected".

Basically our significant version changes alter the minor version field in
the OPENSSL_VERSION_NUMBER structure.
And we have loosely referred to those as major releases but not changed the
actual major version field - but the minor.

The simple way to look at this: for OpenSSL-1.0.2a:
- what is the major version number - the answer is clearly "1".
- what is the minor version number - the answer is clearly "0"
- what is the fix version number - the answer is clear "2"
- what is the patch version number - the answer is clearly "a"

If you repeat that in semantic versioning concepts just using the labels
for mapping you get:
- what is the major version number - the answer is clearly "1".
- what is the minor version number - the answer is clearly "0"
- what is the fix version number - there is no such thing
- what is the patch version number - the answer is clearly "2" (reusing the
fix version)

If you repeat that in semantic versioning concepts just using the actual
definitions:
- what is the major version number - the answer is clearly "1".
- what is the minor version number - the answer is probably "2"
- what is the fix version number - there is no such thing
- what is the patch version number - the answer is probably "0"

Semantic versioning does not have the concept of a fix version separate
from a patch version - there is just the patch version.
If you add or deprecate APIs you have a new minor version. If you fix bugs
without touching the API it is a patch version.
If you break anything in the API is a major version.
It is completely totally about the API itself.

And for OpenSSL we are API compatible for most releases - except when we
did the major change for 1.1.0 which in reality semantic versioning would
have called 2.0.0

Semantic versioning does not have the concept of an API and a ABI
distinction.
In OpenSSL we have if the major.minor is the same then we are ABI
compatible. That is what doesn't exist in the semantic versioning approach
- it is about the API.

Effectively the current "minor" version disappears.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
I think we have to keep OPENSSL_VERSION_NUMBER and it has to have
MAJOR.MINOR.FIX in it encoded as we currently have it (where FIX is PATCH
in semantic terms and our current alpha PATCH is left blank).
That is what I've been saying in the various emails - because we precisely
need to not change the definition of what that macro is - as people
interpret it.
I suggest we zero out all the other information in the
OPENSSL_VERSION_NUMBER macro.
And I did also suggest we make the OPENSSL_VERSION_TEXT field precisely
what semantic versioning would have us do - and either drop the things we
have that don't fit or encode them following the rules.

I would also suggest we make that macro up using macros that use the
semantic version terminology directly.
i.e. something like the following.

And the version number is encoded that way to not break the existing usage
(except that what we currently call a fix is actually semantically named a
patch).
One of the critically important parts of semantic versioning is that the
API is precisely only about the major.minor.patch.

The examples for pre-release and build-metadata are just showing that one
goes first with a hyphen and can have dot separated things, the other goes
second with a plus and also can have dot separated things.
If we wanted to keep the date concept in the version text macro then we
encode it correctly - or we can stop doing that sort of thing and leave it
out.
The pre-release can be blank. The build metadata can be blank.

In semantic versioning terms this is what it would mean.
And if you want to check release/alpha/beta status you look at the
OPENSSL_VERSION_PRE_RELEASE macro and we stop the release+alpha+beta
indicator usage in the OPENSSL_VERSION_NUMBER macro.
It was rather limiting in its encoding format. That more rightly belongs in
the semantic version string format.

#include <stdio.h>

#define OPENSSL_MSTR_HELPER(x) #x
#define OPENSSL_MSTR(x) OPENSSL_MSTR_HELPER(x)

#define OPENSSL_VERSION_MAJOR 1
#define OPENSSL_VERSION_MINOR 1
#define OPENSSL_VERSION_PATCH 2
#define OPENSSL_VERSION_PRE_RELEASE "-beta1.special"
#define OPENSSL_VERSION_BUILD_METADATA "+21Sep2018.optbuild.arm"

#define OPENSSL_VERSION_NUMBER \
    (long)((OPENSSL_VERSION_MAJOR << 28) \
           | (OPENSSL_VERSION_MINOR << 20) \
           | (OPENSSL_VERSION_PATCH << 12))
#define OPENSSL_VERSION_TEXT \
    OPENSSL_MSTR(OPENSSL_VERSION_MAJOR) "." \
    OPENSSL_MSTR(OPENSSL_VERSION_MINOR) "." \
    OPENSSL_MSTR(OPENSSL_VERSION_PATCH) \
    OPENSSL_VERSION_PRE_RELEASE OPENSSL_VERSION_BUILD_METADATA

int main(void)
{
    printf("0x%08lx\n", OPENSSL_VERSION_NUMBER);
    printf("%d.%d.%d\n", OPENSSL_VERSION_MAJOR, OPENSSL_VERSION_MINOR,
           OPENSSL_VERSION_PATCH);
    printf("%s\n", OPENSSL_VERSION_TEXT);
}

And the output you get:

0x10102000
1.1.2
1.1.2-beta1.special+21Sep2018.optbuild.arm

Tim.



On Fri, Sep 21, 2018 at 11:36 PM Richard Levitte 
wrote:

> In message <...w2o_njr8bfoor...@mail.gmail.com> on Fri, 21 Sep 2018
> 23:01:03 +1000, Tim Hudson said:
>
> > Semantic versioning is about a consistent concept of version handling.
> >
> > And that concept of consistency should be in all forms of the version
> > - be it text string or numeric.
> >
> > That you see them as two somewhat independent concepts isn't
> > something I support or think makes sense at all.
>
> In that case, we should probably just thrown away
> OPENSSL_VERSION_NUMBER and come up with a different name.  If we keep
> that macro around, it needs to be consistent with its semantics as
> we've done it since that FAQ update.  Otherwise, I fear we're making
> life much harder on those who want to use it for pre-processing, and
> those who want to check the encoded version number.
>
> I do get what you're after...  a clean 1:1 mapping between the version
> number in text form and in numeric encoding.  I get that.  The trouble
> is the incompatibilities that introduces, and I'm trying to take the
> middle ground.
>
> > Our users' code checks version information using the integer
> > representation and it should be in semantic form as such - i.e. the
> > pure numeric parts of the semantic version.
> >
> > This is the major point I've been trying to get across. Semantic
> versioning isn't about just one
> > identifier in text format - it is about how you handle versioning in
> general. And consistency is its
> > purpose.
>
> Sure.
>
> Would you mind writing up a quick proposal on a new encoding of the
> version?  (and just so you don't limit yourself too much, it's fine by
> me if that includes abandoning the macro OPENSSL_VERSION_NUMBER and
> inventing a new one, a better one, with a definition that we can keep
> more consistent than our current mess)
>
> Cheers,
> Richard
>
> --
> Richard Levi

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
Now I get the conceptual issue that Richard and Matt are differing on - and
it is about actually replacing OpenSSL's versioning concept with semantic
versioning compared to adopting semantic versioning principles without
actually being precisely a semantic version approach.

The whole concept of semantic versioning is that you define *precisely* what
you mean by a version.
Everywhere you have the concept of a version you must use the semantic form
of a *version encoding*.

That is the X.Y.Z[-prerelease][+buildmeta] format that is documented along
with the rules for X.Y.Z in terms of your public API.
And all other information about a version goes into prerelease and into
buildmeta.
Both prerelease and buildmeta are allowed to be a sequence of dot-separated
alphanumeric-and-hyphen identifiers.

This is the point of semantic versioning. All versions for all products are
all represented with the same sort of concepts and you know what the rules
are for the numeric X.Y.Z handling and the parsing rules for prerelease and
buildmeta.

Our concepts of versioning within OpenSSL if expressed in semantic form
MUST fit into this approach.
No prefixes. No suffixes. No special additional encoding. The idea is
consistency.

When dealing with API issues you only ever need to see X.Y.Z for any code
related testing - it precisely identifies a point in time API release.
There should never be any code ever that varies that requires looking at
prerelease or buildmeta in order to perform any action in terms of the code.

That maps to our concept of OPENSSL_VERSION_NUMBER

For the human reporting we should have the fill concept which is a text
string - and that should be OPENSSL_VERSION_TEXT and that means not having
anything else within the text other than the actual semantic version.
The syntax of the semantic version is fixed.

If you want to keep the concept of a build date in the macro you are
calling the version then it must be encoded in that format - or you move
that concept out of the macro that is the version.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
Semantic versioning is about a consistent concept of version handling.

And that concept of consistency should be in all forms of the version - be
it text string or numeric.

That you see them as two somewhat independent concepts isn't something I
support or think makes sense at all.

Our users' code checks version information using the integer representation,
and it should be in semantic form as such - i.e. the pure numeric parts of
the semantic version.

This is the major point I've been trying to get across. Semantic versioning
isn't about just one identifier in text format - it is about how you handle
versioning in general. And consistency is its purpose.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
So as a concrete example - taking master and the current
OPENSSL_VERSION_TEXT value.

"OpenSSL 1.1.2-dev  xx XXX "

would become

"1.1.2-dev+xx.XXX."

That is what I understand is the point of semantic versioning. You know how
to pull apart the version string.
-dev indicates a pre-release version known as "dev"
+xx.XXX. indicates build metadata.
The underlying release is major 1, minor 1, patch 2.

But for semantic versioning where we allow API breakage in our current
minor version we would have to shift everything left.
And we would then have "1.2.0-dev+xx.XXX." if the planned new release
wasn't guaranteed API non-breaking with the previous release (1.1.1).

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Fri, Sep 21, 2018 at 9:02 PM Matt Caswell  wrote:

> I think this is an incorrect interpretation of Richard's proposal. The
> OPENSSL_VERSION_NUMBER value is an *integer* value. It does not and
> cannot ever conform to semantic versioning because, because version
> numbers in that scheme are *strings* in a specific format, where
> characters such as "." and "-" have special meanings.
>

It is the version number. We have it in two forms within OpenSSL from a
code perspective - an integer encoding and a text string.
They are precisely what semantic versioning is about - making sure the
versioning concept is represented in what you are using versioning for.

For OpenSSL, codewise we have two macros and two functions that let us
access the build-time version of the macros:
  OPENSSL_VERSION_NUMBER
  OPENSSL_VERSION_TEXT
  OpenSSL_version_num()
  OpenSSL_version()

We also separately have another form of version number - for shared
libraries:
The macro:
  SHLIB_VERSION_NUMBER

We also encode the version number in release packages too.

What semantic versioning is about is sorting out how we represent the
version.
It should impact both OPENSSL_VERSION_NUMBER and OPENSSL_VERSION_TEXT and
it should be consistent.

For the semantic versioning document the status indicator is handled in the
pre-release indicator.
We could limit that to a numeric value and keep it in the
OPENSSL_VERSION_NUMBER, but I don't think that would help, and semantic
versioning strictly defines precedence handling.

So I would see the simple mapping from semantic versioning very differently
to Richard's write up - and in fact encoding something rather differently
into the OPENSSL_VERSION_NUMBER to my reading and thinking actually goes
against the principles of semantic versioning.

i.e. OPENSSL_VERSION_NUMBER should be X.Y.Z and OPENSSL_VERSION_TEXT should
be "X.Y.Z[-patch][+buildmeta]" and that would be a simple, direct, and
expected mapping to OpenSSL for semantic versioning.

A merged approach or keeping parts of our (non-semantic) approach while not
fully adopting semantic versioning to me at least would be missing the
point.

Tim.

Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Tim Hudson
On Fri, Sep 21, 2018 at 7:58 PM Richard Levitte  wrote:

> Our FAQ says that such changes *may* be part of a major
> release (we don't guarantee that breaking changes won't happen), while
> semantic versioning says that major releases *do* incur backward
> incompatible API changes.
>

I think you are misreading the semantic versioning usage - it states when
things MUST happen.
It does not state that you MUST NOT change a version if the trigger event
has not occurred.

Semantic versioning also requires you to explicitly declare what your
public API is in a "precise and comprehensive" manner.
What do you consider the public API of OpenSSL?

That is pretty much a prerequisite for actually adopting semantic
versioning.

I also think the concept of reinterpreting the current major version number
into an epoch as you propose is not something that we should be doing.
We have defined the first digit as our major version number - and changing
that in my view at least would be going completely against the principles
of semantic versioning.
The version itself is meant to be purely X.Y.Z[-PRERELEASE] or
X.Y.Z[+BUILDMETA] and your suggested encoding is not that at all.

What you have is EPOCH.X.Y.Z.FIX.STATUS - where EPOCH and STATUS are not
concepts contained within semantic versioning.

Basically adopting semantic versioning actually requires something
different to what has been proposed in my view.

I would suggest it means our current version encoding in an integer
of MNNFFPPS becomes simply MNNFF000 and the information for PP and S is
moved elsewhere as semantic versioning defines those concepts differently
(as noted above).

Part of our challenge is ensuring we don't cause unnecessary breakage for
users:

Vendors change the text string to add additional indicators for their
variations.
Other developers use the current integer version for feature testing -
and it needs to remain compatible enough.

I haven't seen any code actually testing the S field within the version or
doing anything specific with the PP version - other than reporting it to
the user.

Tim.

Re: [openssl-project] coverity defect release criteria (Fwd: New Defects reported by Coverity Scan for openssl/openssl)

2018-09-10 Thread Tim Hudson
On Mon, Sep 10, 2018 at 8:44 AM, Matt Caswell  wrote:

> As far as the release criteria go we only count the ones shown in the
> Coverity tool. That's not to say we shouldn't fix issues in the tests as
> well (and actually I'd suggest we stop filtering out problems in the
> tests if anyone knows how to do that...perhaps Tim?).
>

I have changed things to no longer exclude the tests area from the online
reports (that was a historical setting on the original project from
pre-2014).
The second project, which I got added to track master, has always emailed
all issues.
I have also now changed the OpenSSL-1.0.2 project to report all errors as
well via email - no longer excluding various components.

So now we should have both the online tool and the emails for both projects
configured the same and including the test programs.

Tim.

Re: [openssl-project] Please freeze the repo

2018-09-09 Thread Tim Hudson
Done.

Tim.


On Sun, Sep 9, 2018 at 8:34 PM, Matt Caswell  wrote:

> Please can someone freeze the repo:
>
> ssh openssl-...@git.openssl.org freeze openssl matt
>
>
> Thanks
>
> Matt
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] Release Criteria Update

2018-09-06 Thread Tim Hudson
All PRs except #7145 now reviewed and marked ready.

Tim

On Fri, Sep 7, 2018 at 8:41 AM, Matt Caswell  wrote:

> We currently have 8 1.1.1 PRs that are open. 3 of which are in the
> "ready" state. There are 2 which are alternative implementations of the
> same thing - so there are really on 4 issues currently being addressed:
>
> #7145 SipHash: add separate setter for the hash size
>
> Owner: Richard
> Awaiting review (CIs are failing)
>
>
> #7141 Ensure certificate callbacks work correctly in TLSv1.3
>
> Owner: Matt
> Trivial change. Awaiting review
>
>
> #7139 Remove a reference to SSL_force_post_handshake_auth()
>
> Owner: Matt
> Trivial change. Awaiting review
>
>
> #7114 Process KeyUpdate and NewSessionTicket messages after a close_notify
> Alternative implementation for #7058
>
> Owner: Matt
> Awaiting review. Anyone?
>
>
> There 5 1.1.1 issues open - 3 of which should be solved by outstanding
> PRS. The remaining 2 are:
>
>
> #7014 TLSv1.2 SNI hostname works in 1.1.0h, not in 1.1.1 master (as of
> 18-Aug)
>
> We thought we had a fix for this, but the PR in question does not seem
> to have solved the OPs issue
>
>
> #7133 X509_sign SIGSEGVs with NULL private key
>
> Should be an easy fix
>
>
> Matt
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] Release Criteria Update

2018-09-06 Thread Tim Hudson
We need to get this release out and available - there are a lot of people
waiting on the "production" release - and who won't go forward on a beta
(simple fact of life there).

I don't see the outstanding items as release blockers - and they will be
wrapped up in time.

Having the release date as a driver I think helps with a lot of focus - and
more stuff has gone into 1.1.1 than we originally anticipated because we
held it open waiting on TLSv1.3 finalisation.

So a +1 for keeping to the release date.

Tim.


On Fri, Sep 7, 2018 at 8:25 AM, Matt Caswell  wrote:

>
>
> On 06/09/18 17:32, Kurt Roeckx wrote:
> > On Tue, Sep 04, 2018 at 05:11:41PM +0100, Matt Caswell wrote:
> >> Current status of the 1.1.1 PRs/issues:
> >
> > Since we did make a lot of changes, including things that
> > applications can run into, would it make sense to have an other
> > beta release?
>
> I'm not keen on that. What do others think?
>
> Matt
>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] Release Criteria Update

2018-09-05 Thread Tim Hudson
 On Thu, Sep 6, 2018 at 8:59 AM, Matt Caswell  wrote:
> #7113 An alternative to address the SM2 ID issues
> (an alternative to the older PR, #6757)
>
> Updates made following earlier review. Awaiting another round of reviews.
> Owner: Paul Yang

All the previous comments have been addressed. I noted two missing SM2err
calls on malloc failure and a typo in SM2.pod.
I've approved it conditional on those being fixed.

Tim.

[openssl-project] New OMC Member and New Committers

2018-08-22 Thread Tim Hudson
Welcome to Paul Dale (OMC), and to Paul Yang and Nicola Tuveri (Committers).

See the blog post at https://www.openssl.org/blog/blog/2018/08/22/updates/

Thanks,
Tim.

Re: [openssl-project] Removal of NULL checks

2018-08-08 Thread Tim Hudson
We don't have a formal policy of no NULL checks - we just have a few
members that think we should have such a policy but it has never been voted
on as we had sufficiently varying views for a consensus approach to not be
possible.

Personally I'm in favour of high-level APIs having NULL checks as it (in my
experience) leads to less bogus error crash reports from users and simpler
handling in error cleanup logic.
Otherwise you end up writing a whole pile of if(x!=NULL) FOO_free(x); etc

But it is a style issue.

However in the context of removing such checks - that is something we should
*not* be doing. The behaviour of the APIs in this area should not be changed
outside of a major version increment - and even then I would not make the
change, because it introduces changes which in themselves serve no real
purpose - i.e. a style change can result in making a program that used to
work fine crash in bad ways, and that is something we shouldn't be doing
even across major version increments in my view.

Tim.


On Wed, Aug 8, 2018 at 8:08 PM, Matt Caswell  wrote:

> We've had a policy for a while of not requiring NULL checks in
> functions. However there is a difference between not adding them for new
> functions and actively removing them for old ones.
>
> See https://github.com/openssl/openssl/pull/6893
>
> In this case the removal of a NULL check in the stack code had the
> unintended consequence of a crash in a no-comp build. Is it wise to be
> actively removing existing NULL checks like this? It does have an impact
> on the behaviour of a function (even if that behaviour is undocumented
> and not encouraged). The concern I have is for our API/ABI compatibility
> guarantee. If we make changes like this then 1.1.1 may no longer be a
> drop in replacement for 1.1.0 for some apps.
>
> Matt
>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] Current votes FYI

2018-05-23 Thread Tim Hudson
No that vote does not pass. All votes require participation by a majority
of active members. Failure to have a majority participation causes a vote
to fail.

With only three out of eight members voting this vote simply did not pass.

Tim.


On Thu, 24 May 2018, 12:59 am Salz, Rich,  wrote:

> Another update
>
> VOTE: Remove the second paragraph ("Binary compatibility...improve
> security")
> from the release strategy.
>
>  +1: 2
>  0: 1
> -1: 0
> No vote: 5
>
> The vote passed.
>
>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-15 Thread Tim Hudson
Where we are stating that ABI compatibility is in place we should be
testing it.
i.e. the older release binaries should be run against the current release
libraries - and that should be put into CI in my view.

Going the other direction (i.e. downgrading) isn't something I have thought
we have ever guaranteed - but programs that don't use new APIs (i.e. for the
ABI) should also work - that is pretty much the definition of an ABI.
If we are unable to provide the forward ABI then we need to change the
version number of the release going forward. If we are unable to provide
backwards ABI then we need to think about how we are defining things and
make that clear.

And we need to be clear about what we mean by ABI - I don't think we have
written down a clear definition - and then have in CI the tests to match
what we are saying we do.

When it comes to TLSv1.3 it is highly desirable that an existing 1.1.0
application gets TLSv1.3 without API changes - i.e. ABI provides this.
There will be some cases where assumptions are invalidated and problems
arise - but we should work to minimise those (and I think we have
actually done so).
But again we should be testing this in CI - if we want old apps to all get
TLSv1.3 for zero effort (other than a shared library upgrade) then we
should test that in CI. Hoping it works or assuming it works and "manually"
watching for changes that might invalidate that is rather error prone.

What Richard is doing really helps add to the information for informed
decisions ... the more information we have the better in my view.

Tim.

Re: [openssl-project] FW: April Crypto Bulletin from Cryptosense

2018-04-03 Thread Tim Hudson
I'm less concerned about that access in this specific instance - as if we
had a test in place for that function then make test on the platform would
have picked up the issue trivially.
I don't know that we asked the reporter of the issue as to *how* it was
found - that would be interesting information.

Noting which platforms are supported to which level and what level of test
coverage we have are the more important items in my view.

Tim.


On Wed, Apr 4, 2018 at 1:39 AM, Richard Levitte  wrote:

> While I totally agree with the direction Tim is taking on this, we
> need to remember that there's another condition as well: access to the
> platform in question, either directly by one of us, or through someone
> in the community.  Otherwise, we can have as many tests as we want, it
> still won't test *that* code (be it assembler or something else)
>
> In message <w...@mail.gmail.com> on Tue, 03 Apr 2018 15:36:15 +, Tim
> Hudson <t...@cryptsoft.com> said:
>
> tjh> And it should have a test - which has nothing to do with ASM and
> tjh> everything to do with improving test coverage.
> tjh>
> tjh> Bugs are bugs - and any form of meaningful test would have caught this.
> tjh>
> tjh> For the majority of the ASM code - the algorithm implementations we
> tjh> have tests that cover things in a decent manner.
> tjh>
> tjh> Improving tests is the solution - not whacking ASM code. Tests will
> tjh> catch issues across *all* implementations.
> tjh>
> tjh> Tim.
> tjh>
> tjh> On Tue, 3 Apr. 2018, 8:29 am Salz, Rich,  wrote:
> tjh>
> tjh>  On 03/04/18 15:55, Salz, Rich wrote:
> tjh>  > This is one reason why keeping around old assembly code can have a
> cost. :(
> tjh>
> tjh>  Although in this case the code is <2 years old:
> tjh>
> tjh>  So? It's code that we do not test, and have not tested in years. And
> guess what? Critical CVE.
> tjh>
> tjh>  ___
> tjh>  openssl-project mailing list
> tjh>  openssl-project@openssl.org
> tjh>  https://mta.openssl.org/mailman/listinfo/openssl-project
> tjh>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] FW: April Crypto Bulletin from Cryptosense

2018-04-03 Thread Tim Hudson
And it should have a test - which has nothing to do with ASM and everything
to do with improving test coverage.

Bugs are bugs - and any form of meaningful test would have caught this.

For the majority of the ASM code - the algorithm implementations we have
tests that cover things in a decent manner.

Improving tests is the solution - not whacking ASM code. Tests will catch
issues across *all* implementations.

Tim.


On Tue, 3 Apr. 2018, 8:29 am Salz, Rich,  wrote:

>
> On 03/04/18 15:55, Salz, Rich wrote:
> > This is one reason why keeping around old assembly code can have a
> cost. :(
>
> Although in this case the code is <2 years old:
>
> So?  It's code that we do not test, and have not tested in years.  And
> guess what?  Critical CVE.
>
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

Re: [openssl-project] Is making tests faster a bugfix?

2018-03-29 Thread Tim Hudson
Improved testing to me is something that is a good thing - and a value
judgement.
It doesn't change libcrypto or libssl - and that to me is the way I think
of it.
Fixing tests and apps and Makefiles to me are different from adding
features to libcrypto or libssl.

On this one - the fuzz testing has been sufficiently slow to reduce its
usefulness - and this is a step in the right direction.

It is however also a bit outside of our current policy on such things - so
perhaps we need to update that.

Tim.


On Thu, Mar 29, 2018 at 11:45 PM, Richard Levitte 
wrote:

> In message  on Thu, 29
> Mar 2018 14:03:06 +0100, Matt Caswell  said:
>
> matt>
> matt>
> matt> On 29/03/18 14:00, Salz, Rich wrote:
> matt> > Please see https://github.com/openssl/openssl/pull/5788
> matt> >
> matt> > I don’t think it is, but I’d like to know what others think.
> matt>
> matt> I do think this should be applied. The tests in question are not just
> matt> slow but *really* slow to the point that I often exit them before
> they
> matt> have completed. This removes the benefits of having the tests in the
> matt> first place. From that perspective I view this as a bug fix.
>
> Something to remember is that no user will ever complain about this,
> because we don't deliver the contents of fuzz/corpora in our tarballs.
>
> In other words, this is a developer only change of our current tests,
> and you will only hear from developers who do engage in fuzz testing,
> i.e. those who do these tests as part of a release, just to pick a
> very recent example.
>
> Also, you may note that this test re-engages fuzz testing as part of
> our normal tests that are run for every PR, which means that we will
> catch errors that the fuzzers can detect much earlier.  Because the
> fuzz testing took so long time, we had them only engaged with
> [extended tests], something that's almost never used.
>
> So I would argue that faster fuzz testing means more fuzz testing, and
> hopefully better testing of stuff that's harder to catch otherwise.
>
> Cheers,
> Richard ( plus, from a very personal point of view, it's *my* time,
>   and Matt's, and whoever else's who tests for releases, that
>   gets substantially less wasted! )
>
> --
> Richard Levitte levi...@openssl.org
> OpenSSL Project http://www.openssl.org/~levitte/
> ___
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project
>

[openssl-project] wiki info for pre-1.0.2

2018-03-26 Thread Tim Hudson
Seeing "This was the original information, might still be valid for < 1.0.2
openssl versions :" at
https://wiki.openssl.org/index.php/Hostname_validation raises the question
as to whether or not the wiki should contain information for versions of
OpenSSL that are no longer supported.

Thoughts?

Tim.

[openssl-project] constification on already released branches

2018-03-25 Thread Tim Hudson
https://github.com/openssl/openssl/pull/2181
and
https://github.com/openssl/openssl/pull/1603#issuecomment-248649700

One thing that should be noted is that if you are building with -Wall
-Werror (which many projects do) and you are using OpenSSL, then const
changes break builds - and that ends up with people having to change code
against released versions.

Adding or removing const changes the API. It doesn't technically change the
ABI - although it can impact things, as compilers can and do warn for simple
const violations in some circumstances. The straightforward "fix" of a
cast at the call site actually doesn't achieve anything useful - in that it
simply masks the underlying issue where const was added for a reason.

We should have a clear policy on doing this sort of thing in released code
- either it is okay to do and we should accept patches - or it should never
actually be done. I don't see this as a case-by-case basis for
determination at all - basically it is a single type of API change we
either should be allowing or disallowing.

There is a similar type of API change which is adding typedefs in for
callbacks - which technically also don't change the ABI - and if we allow
any form of API change that also could be considered.

We should discuss this separate from any specific PR and vote to set a
policy one way or the other in my view.

Tim.
