Re: VOTE: Accept the Fully Pluggable TLSv1.3 KEM functionality

2020-10-13 Thread Nicola Tuveri
As defined by the [OTC Voting Procedures][0], I am declaring the
vote closed, as the number of uncast votes cannot affect the outcome of
the vote.

The vote is accepted.

topic: We should accept the Fully Pluggable TLSv1.3 KEM functionality as shown
in PR #13018 into the 3.0 release
Proposed by Matt Caswell
Public: yes
opened: 2020-10-08
closed: 2020-10-13
accepted:  yes  (for: 8, against: 1, abstained: 1, not voted: 1)

  Matt   [+1]
  Mark   [ 0]
  Pauli  [+1]
  Viktor [  ]
  Tim    [+1]
  Richard[+1]
  Shane  [-1]
  Tomas  [+1]
  Kurt   [+1]
  Matthias   [+1]
  Nicola [+1]


-- Nicola

[0]: https://www.openssl.org/policies/omc-bylaws.html


Re: VOTE: Weekly OTC meetings until 3.0 beta1 is released

2020-10-11 Thread Nicola Tuveri
As defined by the [OTC Voting Procedures][0], I am declaring the
vote closed, as the number of uncast votes cannot affect the outcome of
the vote.

The vote is accepted.

topic: Hold online weekly OTC meetings starting on Tuesday 2020-10-13
   and until 3.0 beta1 is released, in lieu of the weekly "developer
   meetings".
Proposed by Nicola Tuveri
Public: yes
opened: 2020-10-09
closed: 2020-10-11
accepted:  yes  (for: 9, against: 0, abstained: 0, not voted: 2)

  Matt   [+1]
  Mark   [+1]
  Pauli  [+1]
  Viktor [  ]
  Tim    [+1]
  Richard[  ]
  Shane  [+1]
  Tomas  [+1]
  Kurt   [+1]
  Matthias   [+1]
  Nicola [+1]


Nicola Tuveri

[0]: https://www.openssl.org/policies/omc-bylaws.html

On Fri, 9 Oct 2020 at 15:00, Nicola Tuveri  wrote:
>
> topic: Hold online weekly OTC meetings starting on Tuesday 2020-10-13
>and until 3.0 beta1 is released, in lieu of the weekly "developer
>    meetings".
> Proposed by Nicola Tuveri
> Public: yes
> opened: 2020-10-09
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [+1]


Re: OTC VOTE: The PR #11359 (Allow to continue with further checks on UNABLE_TO_VERIFY_LEAF_SIGNATURE) is acceptable for 1.1.1 branch

2020-10-11 Thread Nicola Tuveri
+1

I am basing my vote on the feedback provided by @DDvO [0] and @t8m [1].
In particular, I am convinced to vote in favor because I can see this
as a bug fix, fixing an undocumented inconsistency, and because it is
very unlikely to affect existing applications.


Nicola


[0]: https://github.com/openssl/openssl/pull/11359#issuecomment-706189632
[1]: https://github.com/openssl/openssl/pull/11359#issuecomment-706191205


On Fri, 9 Oct 2020 at 15:02, Tomas Mraz  wrote:
>
> topic: The PR #11359 (Allow to continue with further checks on
>  UNABLE_TO_VERIFY_LEAF_SIGNATURE) is acceptable for 1.1.1 branch
> As the change is borderline on bug fix/behaviour change OTC needs
> to decide whether it is acceptable for 1.1.1 branch.
> Proposed by Tomas Mraz
> Public: yes
> opened: 2020-10-09
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [+1]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>
> --
> Tomáš Mráz
> No matter how far down the wrong road you've gone, turn back.
>   Turkish proverb
> [You'll know whether the road is wrong if you carefully listen to your
> conscience.]
>
>


VOTE: Weekly OTC meetings until 3.0 beta1 is released

2020-10-09 Thread Nicola Tuveri
topic: Hold online weekly OTC meetings starting on Tuesday 2020-10-13
   and until 3.0 beta1 is released, in lieu of the weekly "developer
   meetings".
Proposed by Nicola Tuveri
Public: yes
opened: 2020-10-09
closed: 2020-mm-dd
accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)

  Matt   [  ]
  Mark   [  ]
  Pauli  [  ]
  Viktor [  ]
  Tim    [  ]
  Richard[  ]
  Shane  [  ]
  Tomas  [  ]
  Kurt   [  ]
  Matthias   [  ]
  Nicola [+1]


Re: [VOTE PROPOSAL] Weekly OTC meeting until 3.0 beta1 is released

2020-10-09 Thread Nicola Tuveri
I received no feedback, so I am keeping the vote text as proposed and
opening the vote shortly.


Nicola

On Wed, 7 Oct 2020 at 16:53, Nicola Tuveri  wrote:
>
> Background
> ----------
>
> Since the two virtual face-to-face OTC meetings of last week, and again
> in the two OTC meetings this week, we repeatedly discussed replacing the
> weekly "developer meetings" with official OTC meetings.
>
> The "developer meetings" have so far seen frequent participation from a
> subset of OTC members heavily involved in day-to-day tasks for the
> [`3.0 New Core + FIPS` project][new_core+FIPS], and have been open and
> advertised to all OTC members since the end of January, but they were
> not qualified as official OTC meetings: they were meant to assess the
> weekly progress within the project, and determine the coming week's
> priorities, discussing and resolving any difficulties encountered.
>
> Rationale
> ---------
>
> At this point in the development process, in which we are refining the
> state of the development branch to finally qualify for the transition into
> the beta release stage, upgrading these meetings to proper OTC meetings
> seems appropriate and desirable, as discussion and assessment of the
> remaining technical tasks increasingly requires discussion within the
> OTC and OTC decisions, and additionally provides OTC steering on the
> preferred way of completing some of the allocated tasks before the work
> is done, rather than after the fact.
>
> Personally, I also expect that upgrading the developer meetings into
> full-fledged OTC meetings will increase attendance by the wider OTC
> audience: this would provide the OTC as a whole with better oversight on
> the progress towards the upcoming release, and would further improve the
> quality of the release process by leveraging the plurality of OTC
> perspectives and insights.
>
> Meeting Schedule
> ----------------
>
> So far, the "developer meetings" have been regularly scheduled each
> Tuesday from 08:00 to 09:30 (UTC, 24h format), with a tendency to spill
> into the next half-hour.
> There have been cases in which we had to hit the 3h mark, but I wouldn't
> describe 3h as the norm.
> My proposal for the OTC meeting is to adopt the same 1h30m (+30m buffer)
> approach.
> Tuesday, 08:00 UTC as the starting time has so far been a good
> compromise between folks working in European and Australian timezones,
> and keeping it is the current proposal, but it is admittedly not the
> best time for attendance from the American continents.
>
> Vote: why and when?
> -------------------
>
> We are proposing a vote, rather than agreeing on it during the last
> call, to invite discussion from the OTC members that could not
> participate in the meetings during these two weeks, for example
> regarding alternative time slots to improve overall attendance.
>
> I plan to wait until Friday to open the actual vote: please provide
> feedback before then if you would like to amend the letter of the vote
> topic.
>
> One item on which I'd like to ask for feedback is whether the actual
> vote text should include the specifics of the timeslot or not.
>
> Proposed vote text
> ------------------
>
> Hold online weekly OTC meetings starting on Tuesday 2020-10-13
> and until 3.0 beta1 is released, in lieu of the weekly
> "developer meetings".
>
> Refs
> ----
>
> [new_core+FIPS]: https://github.com/openssl/openssl/projects/2


Re: VOTE: Technical Items still to be done

2020-10-08 Thread Nicola Tuveri
+1

On Thu, 8 Oct 2020 at 17:47, Matt Caswell  wrote:
>
> topic: The following items are required prerequisites for the first beta
> release:
>  1) EVP is the recommended API, it must be feature-complete compared with
> the functionality available using lower-level APIs.
>- Anything that isn’t available must be put to an OTC vote to exclude.
>- The apps are the minimum bar for this, subject to exceptions noted
> below.
>  2) Deprecation List Proposal: DH_, DSA_, ECDH_, ECDSA_, EC_KEY_, RSA_,
> RAND_METHOD_.
>- Does not include macros defining useful constants (e.g.
>  SHA512_DIGEST_LENGTH).
>- Excluded from Deprecation: `EC_`, `DSA_SIG_`, `ECDSA_SIG_`.
>- There might be some others.
>- Review for exceptions.
>- The apps are the minimum bar to measure feature completeness for
> the EVP
>  interface: rewrite them so they do not use internal nor deprecated
>  functions (except speed, engine, list, passwd -crypt and the code
> to handle
>  the -engine CLI option).  That is, remove the suppression of deprecated
>  defines.
>  - Proposal: drop passwd -crypt (OMC vote required)
>- Compile and link 1.1.1 command line app against the master headers and
>  library.  Run 1.1.1 app test cases against the chimera.  Treat this
> as an
>  external test using a special 1.1.1 branch. Deprecated functions
> used by
>  libssl should be moved to independent file(s), to limit the
> suppression of
>  deprecated defines to the absolute minimum scope.
>  3) Draft documentation (contents but not pretty)
>- Need a list of things we know are not present - including things we
> have
>  removed.
>- We need to have mapping tables for various d2i/i2d functions.
>- We need to have a mapping table from “old names” for things into the
>  OSSL_PARAMS names.
>  - Documentation addition to old APIs to refer to new ones (man7).
>  - Documentation needs to reference name mapping.
>  - All the legacy interfaces need to have their documentation
> pointing to
>the replacement interfaces.
>  4) Review (and maybe clean up) legacy bridge code.
>  5) Review TODO(3.0) items #12224.
>  6) Source checksum script.
>  7) Review of functions previously named _with_libctx.
>  8) Encoder fixes (PKCS#8, PKCS#1, etc).
>  9) Encoder DER to PEM refactor.
> 10) Builds and passes tests on all primary, secondary and FIPS platforms.
> 11) Query provider parameters (name, version, ...) from the command line.
> 12) Setup buildbot infrastructure and associated instructions.
> 13) Complete make fipsinstall.
> 14) More specific decoding selection (e.g. params or keys).
> 15) Example code covering replacements for deprecated APIs.
> 16) Drop C code output options from the apps (OMC approval required).
> 17) Address issues and PRs in the 3.0beta1 milestone.
> Proposed by .
> Public: yes
> opened: 2020-10-08
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [+1]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]


Re: VOTE: Accept the Fully Pluggable TLSv1.3 KEM functionality

2020-10-08 Thread Nicola Tuveri
+1

On Thu, Oct 8, 2020, 17:27 Matt Caswell  wrote:

> topic: We should accept the Fully Pluggable TLSv1.3 KEM functionality as
> shown in PR #13018 into the 3.0 release
> Proposed by Matt Caswell
> Public: yes
> opened: 2020-10-08
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [+1]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim    [  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [  ]
>   Nicola [  ]
>


Re: Vote proposal: Private keys can exist independently of public keys

2020-10-08 Thread Nicola Tuveri
An update to keep this thread in sync: we moved the discussion between
me and Richard partially offline and partially on
<https://github.com/openssl/openssl/issues/12612> (the issue that
triggered this vote proposal).

Part of the discussion was about our positions and rationales regarding
the actual vote and not the vote proposal, so it did not belong
here, but I feel it would be better to include in this thread a
point I made in the issue discussion in support of amending the vote
text to be more explicit regarding the amount of work required to
change the "keypair assumption" in a way that is safe for all of our
users.

The full details are at
<https://github.com/openssl/openssl/issues/12612#issuecomment-705500620>
but the relevant part that I think is worth including in this thread
is the following:

> Regardless of the fact that, in this particular instance, arguments
> are checked and we don't fail with a segfault, my point in the
> discussion about the vote proposal is exactly that, to propose
> changing the underlying assumption, we should first build test
> coverage to catch the segfaults where they could happen, and fix them!
>
> Currently our test coverage for "incomplete keys" is 0% or very close
> to it, because we rely greatly on data-driven tests (and PEM in
> particular), and while we do exercise encodings that omit the optional
> public key components, this still means we are not covering the
> codepaths that a user could hit if using the API to intentionally
> violate the "keypair assumption" as in the description of this issue.

To be fair, when I say close to 0%, I am aware we have tests specific
to provider-native keys, feeding the raw key data, that offer some
level of assurance here. What I feel is gravely undertested, if we are
to undertake this change in the timeframe of 3.0, is exhaustive unit
testing of our wide API, plus integration/system tests to ensure that
by removing this assumption we do not impact established patterns in
existing user codebases.

Yes, as #12612 shows, being strict about this also introduces breaking
changes for users who used the API to programmatically create
patterns that work in 1.1.1 even though they defied the documented
"keypair assumption"; but I see a big difference between breakage
leaning towards hardening and breakage leaning towards potential
run-time exceptions!

I am not saying we should never change the "keypair assumption", but
that, in voting to change this assumption in the 3.0 timeframe, we
should remind ourselves of the current poor coverage of some details,
the risks that could emerge from this change, the amount of work
required to reach testing exhaustive enough to quantify those risks,
and the work required to make all the existing codebase robust to
this change.

-- Nicola

On Thu, 8 Oct 2020 at 00:10, Richard Levitte  wrote:
>
> On Wed, 07 Oct 2020 21:25:57 +0200,
> Nicola Tuveri wrote:
> >
> > On Wed, 7 Oct 2020 at 20:45, Richard Levitte  wrote:
> > >
> > > I'm as culpable as anyone re pushing the "convention" that an EVP_PKEY
> > > with a private key should have a public key.  I was incorrect.
> >
> > Sure, my example was not about pointing fingers!
>
> I didn't try to imply that you did, just wanted to own up to my part
> in it.
>
> > It's just to give recent data points on how the documentation written
> > in 2006 is probably not as stale as one might think, because
> > until recently we were leveraging that assumption at least in some
> > sections of the lower layers.
>
> Were we?  Not in the operations...  however, this piqued my curiosity
> and got me to look at this from a different angle.  See, it seems that
> what we're mostly disputing is the behaviour of the keymgmt import
> function, which is related to loading keys from file, among others.
>
> When loading private keys from PEM files in 1.1.1, we do rely entirely
> on the related d2i functions, which do implement the ASN.1 structures
> quite faithfully, so our basis for deciding what's ok or not is
> influenced by the associated standardised ASN.1 key formats.
>
> I did just now look up the ASN.1 key format for ECC keys, and it seems
> that our EC keymgmt ends up violating that standard.  I've commented
> further on that in the issue:
> https://github.com/openssl/openssl/issues/12612#issuecomment-705162303
>
> > I think the lack of NULL checks is intertwined with this problem
> > insofar as, in some code that does access the public key component,
> > we made sure to dereference it without a NULL check because of the
> > assumption.
>
> Looking again at #12612, I don't quite see where the conclusion that
> we're missing NULL checks comes from.  EVP_DigestSignInit() fa

Re: Vote proposal: Private keys can exist independently of public keys

2020-10-07 Thread Nicola Tuveri
On Wed, 7 Oct 2020 at 20:45, Richard Levitte  wrote:
>
> I'm as culpable as anyone re pushing the "convention" that an EVP_PKEY
> with a private key should have a public key.  I was incorrect.

Sure, my example was not about pointing fingers!
It's just to give recent data points on how the documentation written
in 2006 is probably not as stale as one might think, because
until recently we were leveraging that assumption at least in some
sections of the lower layers.

> Regarding avoiding NULL checks, that's actually a separate story,
> even though it overlaps conveniently.  There was the idea, for a
> while, that we should avoid NULL checks everywhere (unless it's valid
> input), and indeed make it a programmer error if they happened to pass
> NULL where they shouldn't.
> One can see that as a hard assertion, and that has indeed produced
> crashes that uncovered bugs we might otherwise have missed.  The tide
> has changed though, and it seems like the fashion du jour is to check
> NULLs and error with an ERR_R_PASSED_NULL_PARAMETER.  I'm not sure
> that we've made an actual hard decision in either direction, though.
>
> However, I'm not sure where the NULL check problem comes in.
> Operations that don't use the public key parts don't need to look at
> the public key parts, so a NULL check there is simply not necessary.

I think the lack of NULL checks is intertwined with this problem
insofar as, in some code that does access the public key component, we
made sure to dereference it without a NULL check because of the
assumption.

The problem I see with spot-checking, rather than writing a
comprehensive suite of tests, is that, given our wide API surface and
the compromises taken in the transition from non-opaque stack-allocated
objects in 1.0.2 to opaque objects in 1.1.1+, we might have
non-obvious places where dereferencing the public key without checking
for NULL can bite us.


I would add, although it's feedback for the vote rather than the
proposal, that I am not opposed to changing the documented assumption
in principle, but I am opposed to changing it this late in the
development of 3.0.

I am giving feedback on the text proposal to ensure that during the
voting we don't underestimate the high risk of critical bugs that
comes with adopting this change at this stage, and to stress that it
calls for vastly extending our current test coverage beyond what we
were currently planning as acceptable.

We all feel quite under pressure due to the time constraints, and I
feel that in general we have privileged working on implementing the
required features (with a sufficient level of tests as per our policy)
rather than spending major resources on further developing our test
coverage with exhaustive positive and negative tests for regular and
far-fetched cases; this should probably also weigh on the decision of
committing to this change at this point.



Nicola


Re: Vote proposal: Private keys can exist independently of public keys

2020-10-07 Thread Nicola Tuveri
On Wed, 7 Oct 2020 at 20:17, Richard Levitte  wrote:
>
> There's no reason for the EVP layer to check for the presence or the
> absence or the public key, for one very simple reason: it doesn't have
> access to the backend key structure and therefore has no possibility
> to make any such checks.  Such checks should be made in the backend.

The very case that triggered this discussion happens because the
provider backend is doing the tests that libcrypto is not doing,
holding the core to the documented assumption.
On import/export the backend is enforcing that the private key alone is
never imported/exported without its public counterpart.


> That part of your proposed vote text that mentions checks in EVP
> doesn't make any sense with that in mind, and I fear that it would
> lead us into a rabbit hole that's not really necessary.  Also, this
> contradicts the intentions of the design for libcrypto vs providers,
> which was that libcrypto (and therefore EVP) would be a rather thin
> layer that simply passes stuff to the providers and gets results back
> (there are things that complicate this a bit, but this is essentially
> what we do), and leave it to the provider to decide what to do and how
> to do it.

Re: contradicting design principles

I would also remark here that pushing quirkiness into providers to
satisfy "weird" requirements that only serve to deal with the legacy
promises of libcrypto (or anyway the extra constraints enforced by the
libcrypto surface that does not directly face the providers) also
contradicts the libcrypto vs. providers design: libcrypto takes care
of the libcrypto user-facing and provider-facing APIs, while a
provider should only care about the new and well-defined core-provider
API.

Re: why test at EVP layer?

In my proposal, the testing at the EVP (and lower) layers is there
because it is part of taking care of the libcrypto user-facing API:
the long-standing (locally betrayed) assumption is established for
`EVP_PKEY` objects, so `EVP` is the natural place for a first layer of
testing to ensure we can pervasively guarantee that the currently
documented assumption can be safely disregarded as no longer relevant.
That assumption, for which we have long-standing documentation at the
EVP layer, has at some point (even in the past 4 years, in my limited
experience) been enforced also at lower levels (it is not particularly
relevant whether it chronologically percolated down to the lower
levels or, conversely, was already there all along, undocumented, and
was put in writing only when documenting `EVP_PKEY`): that is why my
proposal includes building coverage for this change not only in EVP
but also in the other layers.



I can agree with you that some of this testing at different layers
can appear quite redundant, especially in terms of which "security
critical" or "potentially failing" code is tested in the current
snapshot of the code. But my counter-argument is that, if we were less
worried about redundant tests and more rigorous in always including
both positive and negative tests for everything we put in our
documentation, maybe we would already have had the tests asserting
that the code reflects our documented assumption (and in which
layers), and we wouldn't have ended up accidentally violating our
documented assumption, creating something that works in 1.1.1 by
(partial) accident and that we are now considering a potential
regression in 3.0.


Nicola


Re: Vote proposal: Private keys can exist independently of public keys

2020-10-07 Thread Nicola Tuveri
Re: "enforcement"

My impression is that there is no such enforcement, because the
project has (had?) a stance of "avoid NULL checks" and "consider
things that break our documented assumptions as programmer errors".

The natural result of these two principles may very well be the reason
why "enforcement" cannot be found, and why we might perceive
documentation written in 2006 as not current.

I can say that in the past 4 years, within PRs I authored or
co-authored, even before becoming a Committer, I can recall reviews
requesting that the code be amended to avoid NULL checks on the public
key component, because it is our convention that an EVP_PKEY with key
material is either a public key or a key pair.


Nicola

On Wed, 7 Oct 2020 at 19:52, Richard Levitte  wrote:
>
> As far as I can tell, the existing "enforcement" is in the algorithm
> implementations.  I had a look through the following files in OpenSSL
> 1.1.1:
>
> crypto/rsa/rsa_ossl.c
> crypto/dsa/dsa_ossl.c
> crypto/dh/dh_key.c
> crypto/ec/ecdsa_ossl.c
> crypto/ec/ecdh_ossl.c
>
> I first had a look at the key computing functions in dh_key.c and
> ecdh_ossl.c, because I was told that the public key is looked at
> there.  This turns out to be somewhat false; they do look at a public
> key, the public key of the peer key that they get passed, which isn't
> quite the same.  However, when it comes to the DH or EC_KEY they work
> with, only the private half is looked at.
>
> Looking at dsa_ossl.c and ecdsa_ossl.c, I can see that the signing
> functions do NOT look at |pub_key|, and the verification functions do
> NOT look at |priv_key|.  This seems perfectly normal to me.
>
> Similarly, the signing functions in rsa_ossl.c do NOT seem to look at
> |d|, and the verification functions do NOT seem to look at |e|.
>
> I took a moment to search through the corresponding *meth.c files to
> see if I could quickly grep for some kind of check, and found none.
> Mind you, that was a *quick* grep, someone more thorough might want to
> confirm or contradict.
>
> So, having looked through that code, my sense is that the "enforcement"
> that's talked about is non-existent, or at least very unclear, and I
> suspect that if there is some sort of "enforcement" at that level,
> it's more accidental than by design...
>
> I'm not sure what to make of the documentation from 2006.  Considering
> the level of involvement there was from diverse people (2006 wasn't
> exactly the most eventful time), there's a possibility that it was a
> precaution ("don't touch that can of worms!") rather than a firm decision.
>
> Cheers,
> Richard
>
> On Wed, 07 Oct 2020 17:34:25 +0200,
> Nicola Tuveri wrote:
> >
> > I believe the current formulation is slightly concealing part of the 
> > problem.
> >
> > The way I see it, the intention if the vote passes is to do more than 
> > stated:
> > 1. change the long-documented assumption
> > 2. fix "regressions" in 3.0 limited to things that worked in a certain
> > way in 1.1.1 (and maybe before), _despite_ the documented assumption
> > 3. test all existing functions that access the public key component of
> > `EVP_PKEY` (or lower level) objects to ensure they don't rely on the
> > long-documented assumption
> > 4. fix all the places that intentionally relied on the long-documented
> > assumption and that were required to avoid "useless NULL checks"
> >
> > I believe that to change a widely and long-enforced assumption like
> > this, we can't rely on a single case or a list of single cases that
> > worked _despite_ the documented assumption as proof that things can
> > (and should) work a certain way or another, and that before taking the
> > decision this late in the development process the results of thorough
> > testing are a prerequisite to assess the extent of code changes
> > required by changing the assumption and the potential for instability
> > and disruption that this can cause for our direct and indirect users
> > if segfaults slip through our currently limited coverage as a
> > consequence of this change.
> >
> > I feel that we are currently underestimating the potential impact of
> > this change, and currently motivated by a handful of isolated cases in
> > which under a very specific set of conditions things aligned in a way
> > that allowed certain usage patterns to succeed despite the documented
> > assumption.
> > I also feel that the "burden of the proof" of improving the test
> > coverage to exhaustively cover all kinds of keys (both in their legacy
> > form and in provider-native form), u

Re: Vote proposal: Private keys can exist independently of public keys

2020-10-07 Thread Nicola Tuveri
I believe the current formulation is slightly concealing part of the problem.

The way I see it, the intention if the vote passes is to do more than stated:
1. change the long-documented assumption
2. fix "regressions" in 3.0 limited to things that worked in a certain
way in 1.1.1 (and maybe before), _despite_ the documented assumption
3. test all existing functions that access the public key component of
`EVP_PKEY` (or lower level) objects to ensure they don't rely on the
long-documented assumption
4. fix all the places that intentionally relied on the long-documented
assumption and that were required to avoid "useless NULL checks"

I believe that, to change a widely and long-enforced assumption like
this, we can't rely on a single case, or a list of single cases, that
worked _despite_ the documented assumption as proof that things can
(and should) work one way or another. Before taking this decision so
late in the development process, the results of thorough testing are a
prerequisite to assess the extent of the code changes required by
changing the assumption, and the potential for instability and
disruption that this can cause for our direct and indirect users if
segfaults slip through our currently limited coverage as a consequence
of this change.

I feel that we are currently underestimating the potential impact of
this change, and are motivated by a handful of isolated cases in
which, under a very specific set of conditions, things aligned in a
way that allowed certain usage patterns to succeed despite the
documented assumption.
I also feel that the "burden of proof" of improving the test coverage
to exhaustively cover all kinds of keys (both in their legacy form and
in provider-native form), under all possible operations at different
abstraction levels (e.g., limiting this example to sign/verify as the
operation, testing should not be limited to `EVP_DigestSign/Verify()`,
but should also go through
`EVP_DigestSign/Verify{Init,Update,Final}()`,
`EVP_PKEY_sign/verify*()`, `(EC)DSA_(do_)sign/verify()`, etc., plus
external testing with ENGINEs that might rely on the long-documented
assumption), should fall on whoever is proposing this change. We
should not merely commit to being able to increase our coverage enough
to prevent unforeseen consequences before we can decide to alter the
assumption.

So, to better capture the extent of the work required to apply the
change, and the risks associated with it, I would rephrase the vote
text as follows:

> We should change the 3.0 code to explicitly allow private components
> to exist in keys without the public components also being present,
> ensuring that, in `EVP` and in the lower abstraction layers, both in
> legacy and in provider-native codepaths, _every_ instance in which any
> public key component is dereferenced is preceded by a NULL pointer
> check or is guaranteed non-NULL by every caller.
> To change the long-documented assumption, we commit to improving test
> coverage of all public functions directly or indirectly triggering
> potential access to public key components, to prevent the risk of
> user-facing crashes.


Nicola




On Wed, 7 Oct 2020 at 14:29, Matt Caswell  wrote:
>
> Issue #12612 exposes a problem with how we handle keys that contain
> private components but not public components.
>
> There is a widespread assumption in the code that keys with private
> components must have public components. There is text in our public
> documentation that states this (and that text dates back to 2006).
>
> OTOH, the code has not always enforced this. Issue #12612 describes a
> scenario where this has not historically been enforced, and it now is in
> the current 3.0 code causing a regression.
>
> There are differences of opinion on how this should be handled. Some
> have the opinion that we should change the model so that we explicitly
> allow private keys to exist without the public components. Others feel
> that we should continue with the old model.
>
> It seems we need a vote to decide this. Here is my proposed vote text:
>
> We should change the 3.0 code to explicitly allow private components to
> exist in keys without the public components also being present.
>
> Feedback please on the proposed vote text.
>
> Matt


Re: Vote proposal: Technical items still to be done

2020-10-07 Thread Nicola Tuveri
I support the edit proposed by Tomas.

The first comment I have is that I'd prefer the first-level items to
be actually numbered, as done in the drafts: we might add a disclaimer
that the numbers are not indicative of priority and are just meant to
be used to address the (equally important) tasks by index.

The second comment is just a quirk of mine: I prefer plain-text emails
and Markdown formatting. If others share the feeling (and nobody
objects to Markdown for our records), I'd volunteer to reformat the
draft of this vote text using Markdown syntax.

Nicola

On Wed, 7 Oct 2020 at 14:58, Tomas Mraz  wrote:
>
> On Wed, 2020-10-07 at 12:35 +0100, Matt Caswell wrote:
> > I had an action from the OTC meeting today to raise a vote on the OTC
> > list of technical items still to be done. Here is my proposed vote
> > text.
> > There will be a subsequent vote on the "beta readiness checklist"
> > which
> > is a separate list.
> >
> > Feedback please on the proposed vote text below.
> >
> > The following items are required prerequisites for the first beta
> > release:
> > * EVP is the recommended API, it must be feature-complete compared
> > with
> > the functionality available using lower-level APIs.
> >   - Anything that isn’t available must be put to an OTC vote to
> > exclude.
> >   - The apps are the minimum bar for this, subject to exceptions
> > noted
> > below.
> > * Deprecation List Proposal: DH_, DSA_, ECDH_, ECDSA_, EC_KEY_, RSA_,
> > RAND_METHOD_.
> >   - Does not include macros defining useful constants (e.g.
> > SHA512_DIGEST_LENGTH).
> >   - Excluded from Deprecation: `EC_`, `DSA_SIG_`, `ECDSA_SIG_`.
> >   - There might be some others.
> >   - Review for exceptions.
> >   - The apps are the minimum bar to measure feature completeness for
> > the
> > EVP interface: rewrite them so they do not use internal nor
> > deprecated
> > functions (except speed, engine, list, passwd -crypt and the code to
> > handle the -engine CLI option).  That is, remove the suppression of
> > deprecated defines.
> > - Proposal: drop passwd -crypt (OMC vote required)
> >   - Compile and link 1.1.1 command line app against the master
> > headers
> > and library.  Run 1.1.1 app test cases against the chimera.  Treat
> > this
> > as an external test using a special 1.1.1 branch.
> > Deprecated functions used by libssl should be moved to independent
> > file(s), to limit the suppression of deprecated defines to the
> > absolute
> > minimum scope.
> > * Draft documentation (contents but not pretty)
> >   - Need a list of things we know are not present - including things
> > we
> > have removed.
> >   - We need to have mapping tables for various d2i/i2d functions.
> >   - We need to have a mapping table from “old names” for things into
> > the
> > OSSL_PARAMS names.
> > - Documentation addition to old APIs to refer to new ones (man7).
> > - Documentation needs to reference name mapping.
> > - All the legacy interfaces need to have their documentation
> > pointing to the replacement interfaces.
> > * Review (and maybe clean up) legacy bridge code.
> > * Review TODO(3.0) items #12224.
> > * Source checksum script.
> > * Review of functions previously named _with_libctx.
> > * Encoder fixers (PKCS#8, PKCS#1, etc).
> > * Encoder DER to PEM refactor.
> > * Builds and passes tests on all primary, secondary and FIPS
> > platforms.
> > * Query provider parameters (name, version, …) from the command line.
> > * Setup buildbot infrastructure and associated instructions.
> > * Complete make fipsinstall.
> > * More specific decoding selection (e.g. params or keys).
> > * Example code covering replacements for deprecated APIs.
> > * Drop C code output options from the apps (OMC approval required).
> > * Address 3.0beta1 milestones.
>
> Address issues and PRs in the 3.0beta1 milestone.
>
> --
> Tomáš Mráz
> No matter how far down the wrong road you've gone, turn back.
>   Turkish proverb
> [You'll know whether the road is wrong if you carefully listen to your
> conscience.]
>
>


[VOTE PROPOSAL] Weekly OTC meeting until 3.0 beta1 is released

2020-10-07 Thread Nicola Tuveri
Background
--

Since the two virtual face-to-face OTC meetings of last week, and again
in the two OTC meetings this week, we repeatedly discussed replacing the
weekly "developer meetings" with official OTC meetings.

The "developer meetings" have so far seen frequent participation from a
subset of OTC members heavily involved in day-to-day tasks for the
[`3.0 New Core + FIPS` project][new_core+FIPS], and have been open and
advertised to all OTC members since the end of January, but they did not
qualify as official OTC meetings: they were meant to assess the weekly
progress within the project and determine the coming week's priorities,
discussing and resolving any difficulties encountered.

Rationale
-

At this point in the development process, in which we are refining the
state of the development branch to finally qualify for the transition
into the beta release stage, upgrading these meetings to proper OTC
meetings seems appropriate and desirable: discussion and assessment of
the remaining technical tasks increasingly require OTC discussion and
OTC decisions, and official meetings additionally provide OTC steering
on the preferred way of completing some of the allocated tasks before
the work is done, rather than after the fact.

Personally, I also expect that upgrading the developer meetings into
full-fledged OTC meetings will increase attendance by the wider OTC
audience: this would provide the OTC as a whole with better oversight on
the progress towards the upcoming release, and would further improve the
quality of the release process by leveraging the plurality of OTC
perspectives and insights.

Meeting Schedule
----------------

So far, the "developer meetings" have been regularly scheduled each
Tuesday from 08:00 to 09:30 (UTC, 24h format), with a tendency to spill
into the next half-hour.
There have been cases in which we had to hit the 3h mark, but I wouldn't
describe 3h as the norm.
My proposal for the OTC meeting is to adopt the same 1h30m (+30m buffer)
approach.
Tuesday at 08:00 UTC as the starting time has so far been a good
compromise between folks working in European and Australian timezones;
keeping it is the current proposal, but it is admittedly not the best
time for attendance from the American continents.

Vote: why and when?
---

We are proposing a vote, rather than agreeing on it during the last
call, to invite discussion from the OTC members that could not
participate in the meetings during these two weeks, for example
regarding alternative time slots to improve overall attendance.

I plan to wait until Friday to open the actual vote: please provide
feedback before then if you would like to amend the letter of the vote
topic.

One item on which I'd like to ask for feedback is whether the actual
vote text should include the specifics of the timeslot.

Proposed vote text
--

Hold online weekly OTC meetings starting on Tuesday 2020-10-13
and until 3.0 beta1 is released, in lieu of the weekly
"developer meetings".

Refs
----

[new_core+FIPS]: https://github.com/openssl/openssl/projects/2


Re: [OTC] Agenda proposal for OTC meeting on 2020-10-06

2020-10-02 Thread Nicola Tuveri
 all subsequent versions.
03. Triage all open issues and decide:
not an issue, won’t fix, fix for beta or fix after beta for each.
04. Triage all TODO(current release) items:
done, won’t fix, fix for beta or fix after beta for each.
05. Triage all Coverity issues and close either:
by marking it as false positive or by fixing it.
06. Coveralls coverage has not decreased overall from the previous
release.
07. Have a pass through all new/changed APIs to ensure consistency,
fixing those that aren't.
08. Confirm that all new public APIs are documented.
09. Check that new exported symbols (in the static build) are named
correctly (`OSSL_` resp. `ossl_` prefix).
10. Run the previous (all relevant supported) release test suites
against the beta version.
11. Clean builds in Travis and Appveyor for two days.
12. run-checker.sh succeeds on 2 consecutive days before release.
13. A passing OTC "ready for beta" vote.


Proposed "Technical items still to be done" list
------------------------------------------------

1. The `EVP` interface is the recommended API, it must be
   feature-complete compared with the functionality available using
   lower-level APIs.
   - Anything that isn't available must be put to an OTC vote to
 exclude.
   - The `apps/` are the minimum bar for this, they should not use
 internal nor deprecated functions (except `speed`, `engine` and
 the code to handle the `-engine` CLI option).
   - Deprecated functions used by `libssl` should be moved to an
 independent file.
2. Deprecation List Proposal: The entire `DH_`, `DSA_`, `ECDH_`,
   `ECDSA_`, `EC_KEY_`, `RSA_`, and `RAND_METHOD_` interfaces.
   - Does not include macros defining useful values (e.g. DH maximum
 size).
   - Excluded from Deprecation: `EC_`, `DSA_SIG_`, `ECDSA_SIG_`.
   - There might be some others.
   - Review for exceptions.
   - Rewrite apps to use non-deprecated interfaces as far as possible.
   - Transfer old apps to test cases (either directly or via 1.1.1).
3. Compile a list of things we know are not present - including things
   we have removed.
4. We need to have a mapping table from “old names” for things into the
   `OSSL_PARAMS` names.
5. We need to have mapping tables for various `d2i`/`i2d` functions.
6. We need to rewrite the apps to use only the non-deprecated interfaces
   (except for the list, speed and engine apps and the engine parameter
   in various places).
7. All the legacy interfaces need to have their documentation pointing
   to the replacement interfaces.


---

Best regards,

Nicola Tuveri

[Release Strategy]: https://www.openssl.org/policies/releasestrat.html
[OpenSSL Bylaws]: https://www.openssl.org/policies/omc-bylaws.html
[3.0 design document]: https://www.openssl.org/docs/OpenSSL300Design.html

On Fri, 2 Oct 2020 at 23:55, Nicola Tuveri  wrote:
>
> I am writing this on behalf of the OTC, as an update on the latest OTC
> developments.
> This week we had three days of virtual face to face meetings (two OTC
> sessions with one Committers session in the middle) and we scheduled the
> next OTC meeting already next week, on Tuesday 2020-10-06.
>
> This email presents some of the points already under discussion since
> last week, and an agenda proposal for the next OTC meeting.
> This is done in the interest of transparency and so that the community
> can give us feedback during the discussion.
>
>
> Background
> --
>
> Understandably, it doesn't come as a surprise that this week we mostly
> discussed about the upcoming beta release and associated items.
> According to our [Release Schedule][], among our pre-releases, the
> transition from _alpha_ releases to _beta_ releases is defined by the
> following definitions:
>
> - an _alpha_ release means:
>   - Not (necessarily) feature complete
>   - Not necessarily all new APIs in place yet
> - a _beta_ release means:
>   - Feature complete/Feature freeze
>   - Bug fixes only
>
> From the discussion, it transpired that the OTC needs to better
> formalize, in technical terms, how to assess its level of confidence on
> beta readiness, and to collectively agree on which technical tasks still
> need to be completed before the transition to beta to ensure feature
> completeness, and which remaining tasks would count as bug fixes that
> can be planned for the beta stage.
>
> At the next OTC meeting, we plan to discuss mainly two items in our
> agenda:
>
> - discuss, refine and vote on the "Beta Release Readiness Checklist"
> - discuss, refine and vote on the "Technical items still to be done"
>   list
>
> The "Beta Release Readiness Checklist" aims at establishing the
> technical checks to be performed to assess the OTC degree of confidence
> on the state of the development branch before calling the OTC "Ready for
> beta" vote.

[OTC] Agenda proposal for OTC meeting on 2020-10-06

2020-10-02 Thread Nicola Tuveri
’t fix, fix for beta or fix after beta for each.
04. Triage all TODO(current release) items:
done, won’t fix, fix for beta or fix after beta for each.
05. Triage all Coverity issues and close either:
by marking it as false positive or by fixing it.
06. Coveralls coverage has not decreased overall from the previous
release.
07. Have a pass through all new/changed APIs to ensure consistency,
fixing those that aren't.
08. Confirm that all new public APIs are documented.
09. Check that new exported symbols (in the static build) are named
correctly (`OSSL_` resp. `ossl_` prefix).
10. Run the previous (all relevant supported) release test suites
against the beta version.
11. Clean builds in Travis and Appveyor for two days.
12. run-checker.sh succeeds on 2 consecutive days before release.
13. A passing OTC "ready for beta" vote.


Proposed "Technical items still to be done" list
------------------------------------------------

1. The `EVP` interface is the recommended API, it must be
   feature-complete compared with the functionality available using
   lower-level APIs.
   - Anything that isn't available must be put to an OTC vote to
 exclude.
   - The `apps/` are the minimum bar for this, they should not use
 internal nor deprecated functions (except `speed`, `engine` and
 the code to handle the `-engine` CLI option).
   - Deprecated functions used by `libssl` should be moved to an
 independent file.
2. Deprecation List Proposal: The entire `DH_`, `DSA_`, `ECDH_`,
   `ECDSA_`, `EC_KEY_`, `RSA_`, and `RAND_METHOD_` interfaces.
   - Does not include macros defining useful values (e.g. DH maximum
 size).
   - Excluded from Deprecation: `EC_`, `DSA_SIG_`, `ECDSA_SIG_`.
   - There might be some others.
   - Review for exceptions.
   - Rewrite apps to use non-deprecated interfaces as far as possible.
   - Transfer old apps to test cases (either directly or via 1.1.1).
3. Compile a list of things we know are not present - including things
   we have removed.
4. We need to have a mapping table from “old names” for things into the
   `OSSL_PARAMS` names.
5. We need to have mapping tables for various `d2i`/`i2d` functions.
6. We need to rewrite the apps to use only the non-deprecated interfaces
   (except for the list, speed and engine apps and the engine parameter
   in various places).
7. All the legacy interfaces need to have their documentation pointing
   to the replacement interfaces.


---

Best regards,

Nicola Tuveri

[Release Strategy]: https://www.openssl.org/policies/releasestrat.html
[OpenSSL Bylaws]: https://www.openssl.org/policies/omc-bylaws.html
[3.0 design document]: https://www.openssl.org/docs/OpenSSL300Design.html


Re: VOTE: Accept the OTC voting policy as defined:

2020-09-28 Thread Nicola Tuveri
+1, as expressed during the f2f meeting.

Nicola

On Mon, Sep 28, 2020, 15:02 Dr. Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

> topic: Accept the OTC voting policy as defined:
>
>The proposer of a vote is ultimately responsible for updating the
> votes.txt
>file in the repository.  Outside of a face to face meeting, voters
> MUST reply
>to the vote email indicating their preference and optionally their
> reasoning.
>Voters MAY update the votes.txt file in addition.
>
>The proposed vote text SHOULD be raised for discussion before
> calling the vote.
>
>Public votes MUST be called on the project list, not the OTC list
> and the
>subject MUST begin with "VOTE:".  Private votes MUST be called on
> the
>OTC list with "PRIVATE VOTE:" beginning subject.
>
> Proposed by Matthias St. Pierre (on behalf of the OTC)
> Public: yes
> opened: 2020-09-28
> closed: 2020-mm-dd
> accepted:  yes/no  (for: X, against: Y, abstained: Z, not voted: T)
>
>   Matt   [  ]
>   Mark   [  ]
>   Pauli  [  ]
>   Viktor [  ]
>   Tim[  ]
>   Richard[  ]
>   Shane  [  ]
>   Tomas  [  ]
>   Kurt   [  ]
>   Matthias   [+1]
>   Nicola [  ]
>
>


Re: New GitHub label for release blockers

2020-09-13 Thread Nicola Tuveri
I was not referring to this page in particular, but in general to the
various pages in the GitHub docs (and, more generally, among the first
Google results) about GitHub labels and milestones and how to use them.

There seem to be different documented approaches in the wild, probably
even influenced by the introduction of milestones over time and by the
different ways of using labels and milestones.

I think having a common understanding of how we want to use these tools
is a good idea, so I definitely welcome the proposal of talking about it
during the vf2f!

Nicola

On Sun, Sep 13, 2020, 17:26 Dr. Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

> Since we were able to collect some experiences working on the ‘3.0 New
> Core + FIPS’ project, I think that’s a perfect subject
> to be discussed on the face-to-face meeting. I will add it to the list of
> proposed topics. As for the official documentation you
> mentioned, are you talking about this one?
> https://github.com/features/project-management
>
> Matthias
>
>
> From: Nicola Tuveri 
> Sent: Sunday, September 13, 2020 4:17 PM
> To: Dr. Matthias St. Pierre 
> Cc: openssl-project@openssl.org
> Subject: Re: New GitHub label for release blockers
>
> Matthias overcredits me: I just wanted to know his opinion about when we
> should use labels and when milestones (and that is why I wrote to him
> off-list, as a very confused and shy pupil asking a sensei for wisdom
> pearls).
>
> All the alleged convincing was self-inflicted :P
>
>
> And now that my ignorance is out of the closet...
> ... I still have very confused ideas regarding the "best" conventional
> usage of github features like labels, milestones and projects: I read the
> official documentation about them and I grasp the general ideas behind
> them, but too often the boundaries are too foggy for me to navigate and
> pick the right tool for the job in a consistent and organic manner.
>
> Nicola
>


Re: New GitHub label for release blockers

2020-09-13 Thread Nicola Tuveri
Matthias overcredits me: I just wanted to know his opinion about when we
should use labels and when milestones (and that is why I wrote to him
off-list, as a very confused and shy pupil asking a sensei for wisdom
pearls).

All the alleged convincing was self-inflicted :P


And now that my ignorance is out of the closet...
... I still have very confused ideas regarding the "best" conventional
usage of github features like labels, milestones and projects: I read the
official documentation about them and I grasp the general ideas behind
them, but too often the boundaries are too foggy for me to navigate and
pick the right tool for the job in a consistent and organic manner.

Nicola

On Sun, Sep 13, 2020, 17:01 Dr. Matthias St. Pierre <
matthias.st.pie...@ncp-e.com> wrote:

> Nicola suggested and convinced me that it would be better to have a
> dedicated
> milestone for the 3.0.0 beta1 release instead of adding a new label.
>
> So here it is: I have already added all the tickets with the release
> blocker label and will remove the label again.
>
> https://github.com/openssl/openssl/milestone/17
>
> Matthias
>
>
>
> > -Original Message-
> > From: Dr. Matthias St. Pierre
> > Sent: Sunday, September 13, 2020 3:17 PM
> > To: openssl-project@openssl.org
> > Subject: New GitHub label for release blockers
> >
> > Hi all,
> >
> > taking up again the discussion from openssl-project where I suggested to
> (ab)use
> > the 3.0.0 milestone for release blockers, (see link and citation at the
> end of the mail),
> > I propose to add a new label for this purpose instead. In fact, I
> already created the label
> >
> >   [urgent: release blocker]   (see link below)
> >
> > and will add the mentioned tickets shortly. So you can take a
> look and tell
> > me whether you like it or not. (If not, no problem. I'll just delete the
> label again.)
> >
> > Matthias
> >
> >
> > BTW: It took me all my force of will to resist the temptation of making
> a pun
> >  by naming the label [urgent: beta blocker].
> >
> >
> > References:
> > ==
> >
> > [urgent: release blocker]:
> >
> https://github.com/openssl/openssl/labels/urgent%3A%20release%20blocker
> >
> > [openssl-project message]:
> >
> https://mta.openssl.org/pipermail/openssl-project/2020-September/002191.html
> >
> >
> > > > > For a more accurate and timely public overview over the current
> state of the blockers,
> > > > > it might be helpful to manage them via the 3.0.0  milestone
> > > > >
> > > > > https://github.com/openssl/openssl/milestone/15
> > > > >
> > > > > Some of the tickets listed below were already associated to the
> milestone, the others
> > > > > were added by me now.
> > > >
> > > > I think the 3.0.0 milestone is what we expect to be in the
> > > > 3.0.0 release, not the beta release. That is bug fixes don't need
> > > > to be in the beta release, but if it adds new functionallity it
> > > > needs to be in the beta release.
> > >
> > > I was aware of this subtlety but I thought that we just could (ab-)use
> the milestone for
> > > the beta1 release and reuse it later for the final release, instead of
> creating a new milestone.
> > >
> > > Practically all of the relevant PRs are associated to the [3.0 New
> Core + FIPS] GitHub Project
> > > anyway, so it would be possible to remove the post-beta PRs from the
> milestone and restore
> > > them later.  (In my mind, I see project managers running away
> screeming...)
> > >
> > > Matthias
> > >
> > >
> > > [3.0 New Core + FIPS]:  https://github.com/openssl/openssl/projects/2
>
>


Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Nicola Tuveri
On Sat, Sep 5, 2020, 14:01 Tim Hudson  wrote:

> On Sat, Sep 5, 2020 at 8:45 PM Nicola Tuveri  wrote:
>
>> Or is your point that we are writing in C, all the arguments are
>> positional, none is ever really optional, there is no difference between
>> passing a `(void*) NULL` or a valid `(TYPE*) ptr` as the value of a `TYPE
>> *` argument, so "importance" is the only remaining sorting criteria, hence
>> (libctx, propq) are always the most important and should go to the
>> beginning of the args list (with the exception of the `self/this` kind of
>> argument that always goes first)?
>>
>
> That's a reasonable way to express things.
>
> The actual underlying point I am making is that we should have a rule in
> place that is documented and that works within the programming language we
> are using and that over time doesn't turn into a mess.
> We do add parameters (in new function names) and we do like to keep the
> order of the old function - and ending up with a pile of things in the
> "middle" is, in my view, one of the messes that we should be avoiding.
>

We are already adding new functions with the "_ex" suffix, allowing
users to keep using the old versions. Given that users moving to the
"_ex" functions are already altering their source, why limit ourselves
from rationalizing the signature from the PoV of new and old users alike
(who are in general quite critical of our API usability), even if
applying both "required-ness" and "importance" as sorting criteria
sometimes means adding a parameter in the middle?

I don't think I am being dismissive of the needs of existing
applications here: if a maintainer is altering their code from using
"EVP_foo()" to "EVP_foo_ex()", they will likely also be looking at the
documentation of the old and the new API versions, and there shouldn't
really be any significant additional cost whether a parameter is added
at the edges or in the middle.

I think the importance argument is the one that helps for setting order and
> on your "required args" coming first approach the importance argument also
> applies because it is actually a required argument simply with an implied
> default - which again puts it not at the end of the argument list. The
> library context is required - always - there is just a default - and that
> parameter must be present because we are working in C.
>

I think I disagree with this: from the API consumer's PoV there is a
clear difference between a function where they don't need to pass a
valid libctx pointer and can instead pass NULL (because a default is
associated with passing NULL), and a function like a hypothetical
OSSL_LIBCTX_get_this(libctx) or OSSL_LIBCTX_set_that(libctx, value).
In the first case the function operates despite the programmer not
providing an explicit libctx (because a default one is used); in the
latter, the programmer is querying/altering the libctx directly, so one
must be provided.

*… actually only now that I wrote out the greyed text above I think I am
starting to understand what you meant all along!*

Your point is that any API that uses a `libctx` (and `propq`) is always
querying/altering a libctx, even if we use NULL as a shortcut to mean "the
global one", so if we are fetching an algorithm, getting/setting something
on a libctx scope, creating a new object, we are always doing so from a
given libctx.
In that sense, when an API consumer is passing NULL they are not passing
"nothing" (on which my "optional arguments" approach is based), but they
are passing NULL as a valid reference to a very specific libctx.

I now see what you meant, and I can see how it is a relevant thing to make
evident also in the function signature/usage.

But I guess we can also agree that passing NULL as a very specific
reference instead of as "nothing", can create a lot of confusion for
external consumers of the API, if it took this long for one of us — ­*ok,
it's me, so probably everyone else understood your point immediately, but
still…* — to understand the difference you were pointing out, even knowing
how these functions work internally and being somewhat familiar with
dealing with libctx-s.
If we want to convey this difference properly to our users, would it be
better to use a new define?

#define OSSL_DEFAULT ((void*)42)

- OSSL_DFLT could be a shorter alternative, if we prefer brevity.
- I am intentionally not specifying _LIBCTX, as we might reuse a similar
mechanism to avoid overloading NULL in other similar situations: e.g.
propq, where NULL is again a shortcut for the global default (as opposed
to passing "" as an empty properties query), seems like another good
candidate for a symbol that explicitly highlights that the default propq
will be used
- I would avoid making OSSL_DFLT an alias for NU
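The sentinel idea sketched in the email above could look like the following. All names here (`TOY_DEFAULT`, `toy_resolve_ctx`) are hypothetical stand-ins, not OpenSSL API; the sketch also uses the address of a private static object as the distinguished value rather than a magic integer like `(void*)42`, since comparing pointers forged from integers is formally murky in C:

```c
#include <assert.h>
#include <stddef.h>

/* Distinguished non-NULL marker: hypothetical stand-in for the
 * OSSL_DEFAULT idea discussed above. */
static char default_marker;
#define TOY_DEFAULT ((void *)&default_marker)

static int the_default_ctx = 1;

/* With a sentinel, "use the default" is spelled out explicitly, so
 * NULL no longer has to be overloaded: it can simply mean "no context"
 * (and could then be rejected as a programming error). */
static int *toy_resolve_ctx(void *ctx)
{
    if (ctx == TOY_DEFAULT)
        return &the_default_ctx; /* explicit request for the default */
    return (int *)ctx;           /* caller-supplied context, or NULL */
}
```

The usability point in the email is that a call site reading `toy_resolve_ctx(TOY_DEFAULT)` tells the reader "the global default is in play" where `toy_resolve_ctx(NULL)` does not.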

Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Nicola Tuveri
On Sat, Sep 5, 2020, 12:13 Tim Hudson  wrote:

> On Sat, Sep 5, 2020 at 6:38 PM Nicola Tuveri  wrote:
>
>> In most (if not all) cases in our functions, both libctx and propquery
>> are optional arguments, as we have global defaults for them based on the
>> loaded config file.
>> As such, explicitly passing non-NULL libctx and propquery is likely to
>> be an exceptional occurrence rather than the norm.
>>
>
> And that is where we have a conceptual difference, the libctx is *always* 
> used.
> If it is provided as a NULL parameter then it is a shortcut to make the
> call to get the default or to get the already set one.
> Conceptually it is always required for the function to operate.
>
> And the conceptual issue is actually important here - all of these
> functions require the libctx to do their work - if it is not available then
> they are unable to do their work.
> We just happen to have a default-if-NULL.
>
> If C offered the ability to default a parameter if not provided (and many
> languages offer that) I would expect we would be using it.
> But it doesn't - we are coding in C.
>
> So it is really where-is-what-this-function-needs-coming-from that
> actually is the important thing - the source of the information the
> function needs to make its choice.
> It isn't which algorithm is being selected - the critical thing is from
> which pool of algorithm implementations are we operating. The pool must be
> specified (this is C code), but we have a default value.
>
> And that is why I think the conceptual approach here is getting confused
> by the arguments appearing to be optional - conceptually they are not - we
> just have a defaulting mechanism and that isn't the same conceptually as
> the arguments actually being optional.
>
> Clearer?
>

The distinction you are trying to make is not yet clear to me.

I'll try to spell out what I extrapolated from your answer, and I
apologize in advance if I am misunderstanding your argument; please be
patient and correct me in that case, so I can better understand your
point!

It seems to me you are making a conceptual difference between
- a function that internally requires an instance of foo to work (and has a
default if foo is given as NULL among the arguments); e.g libctx is
necessary for a fetch, if a NULL libctx is given a mechanism is in place to
retrieve the global default one
- a function that internally uses an instance of foo only if a non-NULL
one is passed as an argument; e.g. bnctx: if the user provides it, it is
used by the function and passed on to its callees; if the user passes
NULL, the function creates a fresh one for itself and/or its callees


But as a consumer of the API, that difference is not visible and
probably not even interesting. Since we are programming in C and passing
pointers, certain arguments are required and must be passed as valid
pointers, while others appear optional because, as a consumer of that
API, I can pass NULL and let the function internally default to a
reasonable behavior (and whatever this "reasonable behavior" is —
whether the first or the second case from above, or another one I did
not list — it's part of the documentation of that API).

IMHO, from the consumer's PoV, libctx and propq are optional arguments
(even in C, where optional or default arguments do not technically exist
and the caller always needs to specify a value for each positional
argument) in the sense that the caller can pass NULL rather than a
pointer to a fully instantiated object of the required type.
Even more so given that, excluding a minority of cases, we can expect
consumers of the APIs taking libctx and propq as arguments to pass NULL
for both of them and rely on the openssl config mechanism.

So while I agree with Tim that it is sometimes valuable to distinguish
between the consequences of passing NULL as an argument to one kind of
function or another, I believe the place for that is the documentation,
not the signature.
The signature of the function should be designed for consumer usability,
and the conventional pattern there seems to be
- required args
- optional args
- callback+cb_args
and inside each group the "importance" factor should be the secondary
sorting criterion.
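A signature following that convention might look like the sketch below. All names are hypothetical (not actual OpenSSL API), and the stub body exists only so the ordering is concrete: required args first, then the optional libctx/propq pair where NULL selects the defaults, then the callback and its argument last:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*toy_cb)(int progress, void *cb_arg);
typedef struct toy_pkey_st { int bits; } TOY_PKEY;

/* Required args first, optional args (NULL = use defaults) next,
 * callback + cb_arg last. Stub body for illustration only. */
static TOY_PKEY *toy_pkey_generate(const char *alg, int bits, /* required  */
                                   void *libctx,              /* optional  */
                                   const char *propq,         /* optional  */
                                   toy_cb cb, void *cb_arg)   /* callbacks */
{
    static TOY_PKEY key;

    (void)libctx; (void)propq;   /* NULL would select the global defaults */
    if (alg == NULL || bits <= 0)
        return NULL;             /* required arguments must be valid */
    if (cb != NULL)
        cb(100, cb_arg);         /* the callback is genuinely optional */
    key.bits = bits;
    return &key;
}
```

With this ordering, the common call site simply reads `toy_pkey_generate("toy", 256, NULL, NULL, NULL, NULL)`, with all the optional trailing arguments visibly defaulted.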

"Importance" is probably going to be affected by the difference you are
making (or by my understanding of it): e.g. if a function took both
libctx and bnctx, the fact that a valid pre-existing libctx is required
for it to work (a global one is retrieved if none is given), while a
fresh short-lived bnctx is created only for the lifetime of the called
function if none is given, seems to indicate that libctx is of vital
importance for the API functionality, while bnctx is of minor relevance.

But... going this way as a generalize

Re: Reordering new API's that have a libctx, propq

2020-09-05 Thread Nicola Tuveri
Thanks Tim for the writeup!


I tend to agree with Tim's conclusions in general, but I fear the analysis
here is missing an important premise that could influence the outcome of
our decision.

In most (if not all) cases in our functions, both libctx and propquery are
optional arguments, as we have global defaults for them based on the loaded
config file.
As such, explicitly passing non-NULL libctx and propquery is likely to be
an exceptional occurrence rather than the norm.

For optional parameters, most developers coming from C and a variety of
other languages would expect them to come at the end of the parameter
list; this also follows the "importance" rule of thumb used by Tim to
pick 1) and B/A).

For this reason I would instead suggest going with 2) and C) in this
case (with the caveat of keeping the callback and its args as the very
last arguments; again, this is an unwritten convention, not only for us
but quite widespread).



Nicola



On Sat, Sep 5, 2020, 10:10 Tim Hudson  wrote:

> I place the OTC hold because I don't believe we should be making parameter
> reordering decisions without actually having documented what approach we
> want to take so there is clear guidance.
> This was the position I expressed in the last face to face OTC meeting -
> that we need to write such things down - so that we avoid this precise
> situation - where we have new functions that are added that introduce the
> inconsistency that has been noted here that PR#12778 is attempting to
> address.
>
> However, this is a general issue and not a specific one to OPENSSL_CTX and
> it should be discussed in the broader context and not just be a last minute
> (before beta1) API argument reordering.
> That does not provide anyone with sufficient time to consider whether or
> not the renaming makes sense in the broader context.
> I also think that things like API argument reordering should have been
> discussed on openssl-project so that the broader OpenSSL community has an
> opportunity to express their views.
>
> Below is a quick write up on APIs in an attempt to make it easier to hold
> an email discussion about the alternatives and implications of the various
> alternatives over time.
> I've tried to outline the different options.
>
> In general, the OpenSSL API approach is of the following form:
>
>
>
> rettype  TYPE_get_something(TYPE *, [args])
> rettype  TYPE_do_something(TYPE *, [args])
> TYPE    *TYPE_new([args])
>
> This isn't universally true, but it is the case for the majority of
> OpenSSL functions.
>
> In general, the majority of the APIs place the "important" parameters
> first, and the ancillary information afterwards.
>
> In general, for output parameters we tend to have those as the return
> value of the function or an output parameter that
> tends to be at the end of the argument list. This is less consistent in
> the OpenSSL APIs.
>
> We also have functions which operate on "global" information where the
> information used or updated or changed
> is not explicitly provided as an API parameter - e.g. all the byname
> functions.
>
> Adding the OPENSSL_CTX is basically providing where to get items from that
> used to be "global".
> When performing a lookup, the query string is a parameter to modify the
> lookup being performed.
>
> OPENSSL_CTX is a little different, as we are adding in effectively an
> explicit parameter where there was an implicit (global)
> usage in place. But the concept of adding parameters to functions over
> time is one that we should have a policy for IMHO.
>
> For many APIs we basically need the ability to add the OPENSSL_CTX that is
> used to the constructor so that
> it can be used for handling what used to be "global" information where
> such things need the ability to
> work with other-than-the-default OPENSSL_CTX (i.e. not the previous single
> "global" usage).
>
> That usage works without a query string - as it isn't a lookup as such -
> so there is no modifier present.
> For that form of API usage we have three choices as to where we place
> things:
>
> 1) place the context first
>
> TYPE *TYPE_new(OPENSSL_CTX *, [args])
>
> 2) place the context last
>
> TYPE *TYPE_new([args], OPENSSL_CTX *)
>
> 3) place the context neither first nor last
>
> TYPE *TYPE_new([some-args], OPENSSL_CTX *, [more-args])
>
> Option 3 really isn't a sensible choice to make IMHO.
>
> When we are performing effectively a lookup that needs a query string, we
> have a range of options.
> If we basically state that for a given type, you must use the OPENSSL_CTX
> you have been provided with on construction,
> (not an unreasonable position to take), then you don't need to modify the
> existing APIs.
>
> If we want to allow for a different OPENSSL_CTX for a specific existing
> function, then we have to add items.
> Again we have a range of choices:
>
> A) place the new arguments first
>
>
> rettype  TYPE_get_something(OPENSSL_CTX *, TYPE *, [args])
> rettype  TYPE_do_something(OPENSSL_CTX *, TYPE *, [args])

Re: Backports to 1.1.1 and what is allowed

2020-06-25 Thread Nicola Tuveri
Sorry, yes, I meant to refer to the open PR with the s390 support; I picked
the wrong number!

On Thu, Jun 25, 2020, 17:54 Matt Caswell  wrote:

>
>
> On 25/06/2020 15:33, Nicola Tuveri wrote:
> > In light of how the discussion evolved, I would say that not only is
> > there consensus on supporting the definition of a detailed policy on
> > backports and on defining the requirements for regular releases vs LTS
> > releases (other than the longer support timeframe), but the discussion
> > also highlights a need to do it sooner rather than later!
> >
> > This seems a job for the OMC, as it falls under:
> >
> >> makes all decisions regarding management and strategic direction of the
> project; including:
> >> - business requirements;
> >> - feature requirements;
> >> - platform requirements;
> >> - roadmap requirements and priority;
> >> - end-of-life decisions;
> >> - release timing and requirement decisions;
> >
> > My contribution to the discussion is to ask if the OMC has plans for
> > addressing this in the very short term.
>
> I think its unlikely we are going to get to a policy in the short term.
> It seems to me we are still some way away from a consensus here.
>
> > If working on a policy is going to be a medium-term effort, maybe it
> > would be opportune to call an OTC vote specific to #11968 under the
> > current release requirements defined by the OMC (or lack thereof).
>
> 11968 is already merged and, AFAIK, no one has proposed reverting it. If
> such a PR was raised then a vote might be a way forward for it.
>
> 11188 is the more pressing problem because that is currently unmerged
> and stuck. That has an OTC hold on it (placed there by me), so an OTC
> vote seems appropriate. If a vote is held it should be to decide whether
> backporting it is consistent with our current understanding of the
> policy such as it is. It is for the OMC to decide whether a different
> policy should be introduced at some point in the future.
>
> Matt
>
>
> >
> > We already saw a few comments in favor of evaluating backporting
> > #11968 as an exception, in light of the supporting arguments, even if
> > it was in conflict with the current policy understanding or the
> > upcoming policy formulation.
> > So if we could swiftly agree on this being an OTC or OMC vote, I would
> > urge to have a dedicated discussion/vote specific to #11968, while
> > more detailed policies and definitions are in the process of being
> > formulated.
> >
> > - What is the consensus on splitting the 2 discussions?
> > - If splitting the discussions, is deciding on #11968 an OTC or OMC
> matter?
> >
> >
> >
> > Cheers,
> >
> > Nicola
> >
>


Re: Backports to 1.1.1 and what is allowed

2020-06-25 Thread Nicola Tuveri
In light of how the discussion evolved, I would say that not only is there
consensus on supporting the definition of a detailed policy on backports
and on defining the requirements for regular releases vs LTS releases
(other than the longer support timeframe), but the discussion also
highlights a need to do it sooner rather than later!

This seems a job for the OMC, as it falls under:

> makes all decisions regarding management and strategic direction of the 
> project; including:
> - business requirements;
> - feature requirements;
> - platform requirements;
> - roadmap requirements and priority;
> - end-of-life decisions;
> - release timing and requirement decisions;

My contribution to the discussion is to ask if the OMC has plans for
addressing this in the very short term.
If working on a policy is going to be a medium-term effort, maybe it
would be opportune to call an OTC vote specific to #11968 under the
current release requirements defined by the OMC (or lack thereof).

We already saw a few comments in favor of evaluating backporting
#11968 as an exception, in light of the supporting arguments, even if
it was in conflict with the current policy understanding or the
upcoming policy formulation.
So if we could swiftly agree on this being an OTC or OMC vote, I would
urge to have a dedicated discussion/vote specific to #11968, while
more detailed policies and definitions are in the process of being
formulated.

- What is the consensus on splitting the 2 discussions?
- If splitting the discussions, is deciding on #11968 an OTC or OMC matter?



Cheers,

Nicola


Re: The hold on PR 12089

2020-06-10 Thread Nicola Tuveri
I believe the OMC is called into action because some name changes might be
seen as breaking API or ABI compatibility, and that has so far been
considered part of the first item in the OMC prerogatives list.

The matter of an OMC vs OTC vote also depends on what kind of hold Tim is
applying with his -1: is it an OMC or an OTC hold?
Of course the OMC can always override what the OTC decides, but the
discussion/vote should happen in the OTC or the OMC depending on which hat
Tim was wearing when placing the hold.


Cheers,

Nicola

On Wed, Jun 10, 2020, 10:43 Salz, Rich  wrote:

> What is the timetable for resolving
> https://github.com/openssl/openssl/pull/12089 ?
>
>
>
> The Beta is planned for a July 16 release.  There is a massive RAND/DRBG
> PR (https://github.com/openssl/openssl/pull/11682, the provider-friendly
> random) that is in the pipeline, and 12089 and 11682 will undoubtedly cause
> merge issues whichever gets merged first. That means extra time will be
> needed to reconcile. An OMC vote, once started, can be resolved in as
> little as 24 hours, but often takes one or two weeks if most people
> abstain.
>
>
>
> Being conservative, then, the OMC needs to discuss and vote, before the
> end of this month.
>
>
>
> An additional complication is around the question of who votes: the OMC or
> the OTC. It is hard to justify this as requiring OMC action, unless the
> project is committing to avoiding such language in the future as a policy.
> But if the project wants to decide that, it can do so. Regardless of the
> policy, PR 12089 could be seen as purely an OTC issue, and OMC involvement
> is over-reach: what in https://www.openssl.org/policies/omc-bylaws.html
> justifies OMC involvement? Nothing changes but some names; is the naming
> of things within OMC purview? I would love to know what OTC members think.
>
>
>
> So, what is the timetable, and what is the plan?
>
>
>


Re: addrev warning

2020-06-08 Thread Nicola Tuveri
Yes, I also got that since I updated my git installation from the upstream
source tarball.

With recent versions of git this warning has been showing for months
already, but I don't know enough about it to propose a fix!


Nicola

On Mon, Jun 8, 2020, 12:16 Matt Caswell  wrote:

> After upgrading Ubuntu over the weekend I'm suddenly seeing this warning
> from addrev. Is anyone else getting this?
>
> WARNING: git-filter-branch has a glut of gotchas generating mangled history
>  rewrites.  Hit Ctrl-C before proceeding to abort, then use an
>  alternative filtering tool such as 'git filter-repo'
>  (https://github.com/newren/git-filter-repo/) instead.  See the
>  filter-branch manual page for more details; to squelch this
> warning,
>  set FILTER_BRANCH_SQUELCH_WARNING=1.
> Proceeding with filter-branch...
>
>
> Matt
>
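
For what it's worth, the warning text quoted above names its own squelch.
This is only a sketch of that suggestion (it silences the warning; it does
not address the underlying deprecation of git-filter-branch in favour of
git-filter-repo):

```shell
# Set the squelch variable the warning itself mentions before running
# addrev; affects only the current shell session.
export FILTER_BRANCH_SQUELCH_WARNING=1
echo "squelch set to: $FILTER_BRANCH_SQUELCH_WARNING"
```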


Re: Some more extra tests

2020-05-07 Thread Nicola Tuveri
I would be interested in a PR showing what enabling these tests would
require!

I believe we do indeed need to test more thoroughly to ensure we are not
breaking the engine API!


Nicola

On Thu, May 7, 2020, 21:08 Dmitry Belyavsky  wrote:

> Dear colleagues,
>
> Let me draw your attention to a potentially reasonable set of extended
> tests for the openssl engines.
>
> The gost-engine project (https://github.com/gost-engine/engine) has some
> test scenarios robust enough for testing engine-provided algorithms and
> some basic RSA regression tests. It contains a rather eclectic set of C,
> Perl, and TCL(!) tests that are used by me on a regular basis.
>
> If these tests are included in the project extended test suite, they could
> reduce some regression that sometimes occurs (see
> https://github.com/gost-engine/engine/issues/232 as a current list of
> known problems).
>
> I will be happy to assist in enabling these tests as a part of openssl
> test suites.
> Many thanks!
>
> --
> SY, Dmitry Belyavsky
>


Re: Cherry-pick proposal

2020-04-29 Thread Nicola Tuveri
I think we changed enough things in the test infrastructure that there is a
chance of creating subtle failures by merging cherry-picked commits from
master directly.

From the burden perspective, from my point of view having a separate PR
that ran all the CI without failures is actually a benefit: I can do some
minimal testing on my machine before the final merge, instead of having to
push a branch to my personal github fork to run travis and appveyor to test
weird build options on platforms I don't have access to.

If we stick to opening the PR for backporting only after master is
completely approved, there shouldn't be too big a burden for the original
reviewers either: we can use `fixup!` commits if trivial changes are
required to clearly highlight what was changed compared to the original PR
for master, and other than that it's just a matter of checking that nothing
else changed from the originally approved changes. Approvals conditional on
the CI passing can also help to further reduce the burden of the grace
period for backport PRs.
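
The `fixup!` convention mentioned here is supported natively by git. A
minimal sketch in a throwaway repository (repository path, file name, and
commit messages are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name Reviewer
echo v1 > doc.txt
git add doc.txt
git commit -qm "Add doc"
# Record a trivial post-review change as a fixup of the original commit,
# so reviewers can see exactly what changed since their approval.
echo v2 > doc.txt
git commit -qa --fixup=HEAD
git log -1 --format=%s
```

Before the final merge, `git rebase -i --autosquash` folds the `fixup!`
commit back into the commit it amends.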


Nicola

On Wed, 29 Apr 2020 at 14:24, Dr Paul Dale  wrote:

> My concern is: are 1.1.1 and 3.0 really all that different?
> The core is quite different but the cryptographic algorithms aren’t.  CMS,
> x509, …?
>
> I’d rather not introduce a burden where it isn’t necessary.
>
> Pauli
> --
> Dr Paul Dale | Distinguished Architect | Cryptographic Foundations
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
>
> On 29 Apr 2020, at 10:08 pm, Matt Caswell  wrote:
>
>
> The OTC have received this proposal and a request that we vote on it:
>
> I would like to request that we do not allow cherry-picks between master
> and 1.1.1-stable because these two versions are now very different: even
> if a cherry-pick succeeds, there is no guarantee that the result will
> work, because we have no CI for the cherry-pick. If a cherry-pick is needed, a
> new PR for 1.1.1 should be done and approved independently.
>
> Before starting a vote I'd like to provide opportunity for comments, and
> also what the vote text should be.
>
> Thanks
>
> Matt
>
>
>


Re: Cherry-pick proposal

2020-04-29 Thread Nicola Tuveri
I can agree it is a good idea to always require backport as a separate PR,
with the following conditions:
- unless it's a 1.1.1-only issue, we MUST always wait to open the
backport-to-1.1.1 PR until after the master PR has been approved and merged
(to avoid splitting the discussion among different PRs, which makes review
and revisiting our history very hard)
- trivial documentation changes, such as fixing typos, can be exempted

It must be clear that although things have changed a lot in the inner
plumbing, we have so far managed to keep crypto implementations very much
in sync between 1.1.1 and master, by applying the policy of "first merge to
master, then possibly backport".

What I am afraid of in Bernd's proposal, and recent discussions, is that
committers might be tempted to open PRs changing implementations against
1.1.1 first (to avoid frequent rebases due to unrelated changes) and let
the `master` PR stagnate indefinitely because it feels like too much hassle
to keep up with the development pace of master if your PR collaterally
changes certain files.

An example of what can go wrong if we open a 1.1.1 PR concurrently with a
PR for master can be seen here:
- `master` PR: https://github.com/openssl/openssl/pull/10828
- `1.1.1` PR: https://github.com/openssl/openssl/pull/11411

I highlight the following problems related to the above example:
- as you can see, the `1.1.1` PR has been merged, even though the `master`
one has stalled while discussing which implementation we should pick (this
was my fault: I should have applied the `hold` label after stating that my
approval for 1.1.1 was conditional on approving the `master` counterpart)
- discussion that is integral part of the decision process was split among
the 2 PRs, with over 40 comments each
- there is no clear link between the `master` PR and the `1.1.1` PR for the
same feature (this makes it very difficult to review the state of the
"main" PR, and anyone who in some months or years needs to check why we did
things the way we did will have a hard time connecting the dots)

I also think that the proposal as written is a bit misleading: I would very
much like to keep seeing in 1.1.1 a majority of commits cherry-picked from
commits merged to master, with explicit tags in the 1.1.1 commit messages
(this helps keep the git history self-contained without too strong a
dependency on GitHub).
I would rephrase the proposal as follows:

Merge to 1.1.1 should only happen after approval of a dedicated PR
targeting the OpenSSL_1_1_1-stable branch.

Potential amendments that I'd like the OTC to consider are:
a) before the end of the sentence add ", with the optional exclusion of
trivial documentation-only changes"
b) after the end of the sentence add "In composing backport pull requests,
explicit cherry-picking (`git cherry-pick -x`) of relevant commits merged
to `master` or another stable branch is recommended and encouraged whenever
possible."
c) adopt a more general statement:

Merge to any stable branch should only happen after approval of a
dedicated PR targeting specifically that branch.




So, in summary, I would agree with the proposal, as I definitely think
Bernd has a good point about running the 1.1.1 CI for things we think
should be backported, but it requires careful consideration of additional
requirements to avoid duplicating review efforts, splitting discussions
that should be kept together, or disrupting our processes by making 1.1.1
and master diverge more than necessary.
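
The explicit cherry-picking with `git cherry-pick -x` recommended above
records the origin commit in the backport's message, keeping the history
self-contained. A sketch in a throwaway repository (branch and file names
are illustrative, not OpenSSL's actual setup):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo base > file.txt
git add file.txt
git commit -qm "Initial commit"
git branch stable                   # stand-in for OpenSSL_1_1_1-stable
echo fix >> file.txt
git commit -qam "Fix a bug on master"
fix=$(git rev-parse HEAD)
git checkout -q stable
# -x appends "(cherry picked from commit <sha>)" to the backport's message
git cherry-pick -x "$fix" > /dev/null
git log -1 --format=%B
```

The resulting commit message carries the original SHA, so the link back to
the `master` commit survives even outside GitHub.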


Cheers,


Nicola

On Wed, 29 Apr 2020 at 14:08, Matt Caswell  wrote:

>
> The OTC have received this proposal and a request that we vote on it:
>
> I would like to request that we do not allow cherry-picks between master
> and 1.1.1-stable because these two versions are now very different: even
> if a cherry-pick succeeds, there is no guarantee that the result will
> work, because we have no CI for the cherry-pick. If a cherry-pick is needed, a
> new PR for 1.1.1 should be done and approved independently.
>
> Before starting a vote I'd like to provide opportunity for comments, and
> also what the vote text should be.
>
> Thanks
>
> Matt
>


Re: Face to face

2020-03-04 Thread Nicola Tuveri
I would like to propose scheduling the OTC meeting somewhere close to
the projected release date for 3.0 alpha1.

Ideally it would be nice if OMC and OTC could coordinate the dates to be
close enough to ease the discussion of agenda items that might require
coordination between OMC and OTC.

My rationale for proposing a date next to the alpha1 release is:
- there might be open items for the release process that might benefit from
the attention of the whole OTC and be expedited with f2f discussions
- OMC and OTC might have items to discuss in both directions regarding the
final stages of the 3.0 release roadmap and the milestones for alpha2,
alpha3,  beta1

---

Regarding the time for the OTC meeting, I would personally prefer a slot
during the week.
Considering that the bulk of OTC members seem to be distributed between
Europe and Australia, would 7:00 (AM) UTC be a suitable time?

Keep in mind that dates between 29.03 and 05.04 fall between the daylight
saving time changes in Europe and Australia, so using 31.03 as the
tentative date for the meeting, 7:00 AM UTC would mean [0]:
- 8 AM in London
- 9 AM in Berlin/Brussels/Prague/Stockholm
- 5 PM in Brisbane

Nicola

[0]: http://j.mp/39oYWhw

On Tue, 3 Mar 2020 at 10:22, Dr Paul Dale  wrote:

> I propose that we don’t have either an OMC or OTC face to face meeting at
> ICMC.
> Travel restrictions and the availability list make it look unlikely.
>
> Instead, I’d suggest a video conference for both.
>
> This will probably require some kind of vote (for both).
>
>
> Any suggestions as to date and time.
>
>
> Pauli
> --
> Dr Paul Dale | Distinguished Architect | Cryptographic Foundations
> Phone +61 7 3031 7217
> Oracle Australia
>
>
>
>
>


Re: Errored: openssl/openssl#31939 (master - 34b1676)

2020-02-14 Thread Nicola Tuveri
On Fri, 14 Feb 2020 at 14:00, Matt Caswell  wrote:

>
> To be clear the build that is timing out uses *msan* not *asan*.
>
> As I understand it, msan detects uninitialised reads, whereas asan detects
> memory corruption, buffer overflows, use-after-free bugs, and memory leaks.
>
> The previous "home-made" checks only detected memory leaks, so it is not
> comparable with the functionality offered by msan.
>
>
Thanks for the clarification! I was indeed confused!



> The msan documentation
> (https://clang.llvm.org/docs/MemorySanitizer.html) suggests that a slow
> down of 3x is typical.
>
> It seems reasonable to me to disable msan checks in Travis entirely, and
> have them only in run-checker.
>
>
I agree with you.


> > Here is another idea that would be interesting if we restore the
> > previous checks:
> > I don't know what kind of options github offers on this, but would it be
> > possible to run triggered CI on something that is not Travis and does
> > not timeout and still have the results in the PR?
>
> I am sure there are hooks to do this. Richard has been talking for quite
> a while about setting up a buildbot infrastructure. If that could be
> integrated into github that would be really neat.
>
>
It would be neat indeed! When I first heard about buildbot I tried to set
aside some time to play with it, but failed to find the time needed.
But at least from what I read it does indeed seem like a very interesting
and useful tool for our purposes!

I have no doubt sooner or later Richard will be more successful than I was
at finding the time to work on this item as well!

Nicola


Re: Errored: openssl/openssl#31939 (master - 34b1676)

2020-02-14 Thread Nicola Tuveri
If ASAN is too slow to run in the CI, should we restore the previous
homemade checks for memory leaks as an alternative to be run in regular CI
runs, and leave ASAN builds to run-checker on the master branch only?

Here is another idea that would be interesting if we restore the previous
checks:
I don't know what kind of options github offers on this, but would it be
possible to run triggered CI on something that is not Travis and does not
timeout and still have the results in the PR?
If something like that were possible, we could move the ASAN builds to
extended_tests, rely on the previous memleak detection for the regular CI
runs, and then trigger the extended_tests with a script or GitHub Action
when the approval:done label is added.

That way, by the time something is ready to be merged we should have a full
picture!


Nicola

On Wed, Feb 5, 2020, 10:25 Matt Caswell  wrote:

> Since we fixed the Travis builds 4 out of the 8 builds on master that
> have taken place have errored due to a timeout.
>
> The msan build is consistently taking a *very* long time to run. If it
> gets to 50 minutes then Travis cuts it off and the build fails.
>
> Should we disable the msan build?
>
> Matt
>
>
>  Forwarded Message 
> Subject:Errored: openssl/openssl#31939 (master - 34b1676)
> Date:   Wed, 05 Feb 2020 00:02:01 +
> From:   Travis CI 
> To: openssl-comm...@openssl.org
>
>
>
> openssl/openssl, branch master: Build #31939 has errored
> (https://travis-ci.org/openssl/openssl/builds/646181069)
> Build time: 50 mins and 3 secs
>
> Commit 34b1676 (Pauli):
> 
>
> Make minimum size for secure memory a size_t.
>
> The minimum size argument to CRYPTO_secure_malloc_init() was an int but
> ought
> to be a size_t since it is a size.
>
> From an API perspective, this is a change. However, the minimum size is
> verified as being a positive power of two and it will typically be a small
> constant.
>
> Reviewed-by: David von Oheimb 
> (Merged from #11003)
>
>
>


Re: AW: [openssl] OpenSSL_1_1_1-stable update

2019-05-24 Thread Nicola Tuveri
I have always implicitly assumed Matt's view, but I am happy to conform to
what the consensus is.

I believe this discussion is very useful and could contribute a new entry
to the committer guidelines.

Nicola

On Fri, May 24, 2019, 07:21 Matt Caswell  wrote:

>
>
> On 24/05/2019 15:10, Richard Levitte wrote:
> > Not sure I see it as picking nits, it's rather about some fundamental
> > difference in what we thinking we're approving, and how we actually
> > act around that.
> >
> > My idea has always been that I approve a code change, i.e. essentially
> > a patch or a set of patches, without regard for the exact branches it
> > ends up in.  With that in mind, the exact branches it gets applied to is a
> > *separate* question.
>
> That's not the way I've ever thought of it. In my mind an approval is for a
> change applied to a specific branch. Where a PR lists more than one branch
> in it
> and you approve the PR then effectively you are approving it multiple
> times all
> in one go - once for each branch.
>
>
> > If we go with the idea that an approval also involves approving what
> > branches it goes to, then what happens if someone realises after some
> > time that a set of commits (a PR) that was applied to master only
> > should really also be applied to 1.1.1?  Should the approval process
> > start over from scratch, i.e. all approvals that went to master should
> > be scratched and replaced with a new set of approvals (in principle)?
>
> No. If the PR was approved for master and applied to master then no
> problem - it
> stays in master. If it is later realised that it needs to be backported to
> other
> branches then, yes, new approvals need to be sought for that change to
> *those
> branches*.
>
> As far as I was aware we've always done this.
>
> This is essential in my mind. A change for one branch does not always make
> sense
> in another branch. So you can't just say "I approve this change" and *then*
> worry about what branches it applies to. A change only makes sense in the
> context of the branch it applies to.
>
> Matt
>