On 05/22/2017 05:23 AM, Amar Tumballi wrote:
All,

NOTE: Currently sent to only maintainers list, planning to send it to
'gluster-devel@' by Wednesday (24th) if there are no serious comments.

Below is mainly a proposal, and I would like to hear people's thought on
these.

  * Over the years, we have added many test cases to our regression test
    suite, and the testing time currently stands at ~5hrs for one patch.
    But the current '*.t'-based regression can't be called a 'complete'
    test of the filesystem.

  * Correct me if I am wrong, but even when we have to make a release,
    we don't have much beyond the same set of tests to validate the
    build.

Yes, the above is true. There was a list curated by Pranith and Aravinda for 3.9, but that is not accessible at present.



Now, considering the above points and taking the 'Good Build' proposal from
Nigel
<http://lists.gluster.org/pipermail/gluster-devel/2017-March/052245.html> [1],
I am thinking of making the below changes to how we look at testing and
stability.

'What to test on nightly build':

  * Build verification
  * Run the full regression suite as it runs now.
      o Run CentOS regression
      o Run NetBSD regression
  * Run coverity
  * Run gcov/lcov (for coverage)
  * Run more tests with currently optional features enabled by default
    (brick multiplexing, etc.).
  * Open up the infra to community contribution, so anyone can write
    test cases to make sure GlusterFS passes their use cases, every night.
      o It should be possible to run a Python, Ruby, or Bash script; it
        need not be in a 'prove'-like setup.
  * <Add more here>
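As a sketch of such a community-contributed test, a standalone script only
needs to signal pass/fail through its exit status. Everything below (the
MOUNT variable, the case names) is a hypothetical illustration, not part of
the proposal; a real run would point MOUNT at an actual GlusterFS mount.

```shell
#!/bin/bash
# Hypothetical standalone smoke test. MOUNT falls back to a temp dir here
# so the sketch is self-contained; the harness's only contract is the
# exit status: 0 = pass, non-zero = fail.
MOUNT=${MOUNT:-$(mktemp -d)}

run_case() {
    desc=$1; shift
    if "$@"; then
        echo "PASS: $desc"
    else
        echo "FAIL: $desc"
        exit 1
    fi
}

run_case "mount point exists"  test -d "$MOUNT"
run_case "can create a file"   touch "$MOUNT/smoke-file"
run_case "can remove the file" rm "$MOUNT/smoke-file"
echo "all cases passed"
```

Because only the exit status matters, the same convention works for
Python or Ruby scripts without any 'prove'-style wrapper.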

+1 to all of the above.


'master' branch:

  * Make the overall regression lightweight.
      o Run what the NetBSD tests run now (i.e., 'basic' and 'features'
        in tests/).
      o Don't run NetBSD builds; instead, add a compilation test on a
        CentOS 32-bit machine to keep reminding ourselves how many
        warnings we get.
  * Make sure 'master' branch is well tested in 'Nightly'.
  * Let the approach of maintainers, and of the overall project, be to
    promote new changes instead of being overly sceptical about new
    patches and ideas.
  * Provide maintainers the option to run the whole nightly build suite
    against a given patchset, so when in doubt they can ask for the
    build to complete before merging. This mostly applies to new
    features or changes that fundamentally alter behaviour.
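The 32-bit compilation check above could be as simple as a warning-count
gate. The sketch below parses a fabricated build log; the log contents,
the baseline value, and the baseline-file convention are assumptions for
illustration (a real job would feed it the output of the 32-bit 'make').

```shell
#!/bin/bash
# Sketch of a warning-count gate: count compiler warnings in a build log
# and fail only if the count exceeds a recorded baseline.

count_warnings() {
    grep -c 'warning:' "$1"
}

# Fabricated 32-bit build log for the demo.
cat > /tmp/build32.log <<'EOF'
cc -m32 -c xlator.c
xlator.c:10:5: warning: unused variable 'ret'
cc -m32 -c dict.c
dict.c:42:9: warning: comparison between signed and unsigned
EOF

baseline=5   # last recorded warning count (illustrative)
current=$(count_warnings /tmp/build32.log)
echo "warnings: $current (baseline: $baseline)"
if [ "$current" -gt "$baseline" ]; then
    echo "FAIL: warning count regressed"
    exit 1
fi
echo "OK: within baseline"
```

Tightening the baseline over time turns the reminder into a gradual
cleanup mechanism rather than a hard gate.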

'release-x.y' branch:

  * During release planning, come out with a target number of Coverity
    issues and a line-coverage % to meet. Also consider the number of
    'glusto-tests' that must pass.
  * Agree to branch out early (at least 45 days, compared to the current
    30 days), so we can iron out the issues caused by making the
    'master' branch process lean.

The above is fine; post-branching, the activities for a release owner are lean (monitor fstat, ensure backports to other releases are made to the current one as well, monitor and curate the merge queue). Hence, growing this window from 30 to 45 days is viable and not process-heavy.

Of course, based on what is stated above, more things besides fstat need monitoring on a regular basis (coverage, Coverity, etc.), but that is not an issue.


  * Change the voting logic and add more tests now (example: fall back
    to the current regression suite).

+1; for release branches this is a better option, IMO.

  * On the first build, run the agreed performance tests and compare the
    numbers with previous versions.
  * Run the NetBSD regression at this stage.
      o Btw, I noticed the latest NetBSD package is for 3.8.9 (done in
        January).
      o Work with Emmanuel <[email protected]> for better options here.
  * Run nightly on the branch.
  * Block the release till we meet the metrics agreed at release
    planning (Coverity, glusto-tests, line coverage, etc.).

This is a tough one!

How about blocking feature growth till we hit the targets instead? Blocking a release on such goals has often led to a 1-2 month delay in the release.

I suggest we stick to the release calendar, but keep chasing the right goals and focus review and maintainer effort on the work that takes us closer to them.

IOW, review priority goes to things that help in the said focus areas, and only then is time spent curating other patches. Thoughts?

  * On the final build, run the agreed performance tests and publish the
    numbers. Make sure performance has not regressed.
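The "compare with previous versions" step reduces to a tolerance check on
the published numbers. A minimal sketch, where the throughput figures and
the 5% threshold are illustrative assumptions rather than agreed values:

```shell
#!/bin/bash
# Sketch of a performance-regression gate: flag a failure only if
# throughput drops more than an agreed tolerance versus the last release.

previous=1200   # MB/s from the previous release's run (illustrative)
current=1180    # MB/s from this build's run (illustrative)
tolerance=5     # allowed drop, in percent (illustrative)

# Integer percentage drop relative to the previous release.
drop=$(( (previous - current) * 100 / previous ))
echo "throughput drop: ${drop}%"
if [ "$drop" -gt "$tolerance" ]; then
    echo "FAIL: performance regressed beyond ${tolerance}%"
    exit 1
fi
echo "OK: within tolerance"
```

A tolerance band avoids failing builds on normal run-to-run noise while
still catching real regressions.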

(possibly some are already covered in the good build thread)

- Add install and upgrade tests to this mix (install should be covered when running the above tests, but upgrades are not)
  - Upgrade tests should exercise all(?) options

- Add, version compatibility tests

- Add testing of documented procedures into the mix; IOW, if we state "this is how you set up (/recover from/address) XYZ", then have a test case for that. This ensures documented procedures are right, or are improved over time.
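One way to keep documented procedures executable is to extract the command
lines from the doc itself and run them. The sketch below fabricates a tiny
doc and assumes a 4-space-indented command convention; the doc path and
format are hypothetical, not an existing Gluster convention.

```shell
#!/bin/bash
# Sketch: run every indented command line from a documented procedure,
# aborting on the first failure, so the docs stay correct over time.

doc=/tmp/procedure.md
cat > "$doc" <<'EOF'
To create a scratch directory:

    mkdir -p /tmp/gluster-doc-test
    touch /tmp/gluster-doc-test/marker
EOF

# Extract the 4-space-indented commands and execute them one by one.
while read -r cmd; do
    echo "running: $cmd"
    eval "$cmd" || { echo "FAIL: $cmd"; exit 1; }
done < <(grep -E '^    ' "$doc" | sed 's/^    //')
echo "documented procedure verified"
```

Running the doc's own commands means a procedure that rots (renamed flags,
changed defaults) fails the nightly instead of failing a user.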



It is critical that we all agree to this: as I noted (refer to the mail
on 90-day-old patches)
<http://lists.gluster.org/pipermail/gluster-devel/2017-May/052844.html> [2],
there were more than 600 old patches, and many of these are good patches
which would make the project better.

Please take time to review the points here and comment if you have any.
I am planning to raise a ticket with the Infra team to validate these
changes by June 1st. Let's move towards these changes by June 15th if
there are no serious concerns.


Regards,


[1] -
http://lists.gluster.org/pipermail/gluster-devel/2017-March/052245.html
[2] - http://lists.gluster.org/pipermail/gluster-devel/2017-May/052844.html
--
Amar Tumballi (amarts)


_______________________________________________
maintainers mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/maintainers
