Re: [Gluster-devel] Re-thinking gluster regression logging

2018-07-03 Thread Vijay Bellur
On Mon, Jul 2, 2018 at 4:59 AM Nigel Babu wrote:

> Hello folks,
>
> Deepshikha is working on getting the distributed-regression testing into
> production. This is a good time to discuss how we log our regression. We
> tend to go with the approach of "get as many logs as possible" and then we
> try to make sense of them when something fails.
>
> In a setup where we distribute the tests to 10 machines, that means
> fetching runs from 10 machines and trying to make sense of them. Granted, the
> number of files will most likely remain the same since a successful test is
> only run once, but a failed test is re-attempted two more times on
> different machines. So we will now have duplicates.
>
> I have a couple of suggestions and I'd like to see what people think.
> 1. We stop creating a tar of tars for the logs and just tar the
> /var/log/glusterfs folder at the end of the run. That will probably achieve
> better compression.
> 2. We could stream the logs to a service like ELK that we host. This means
> that no more tarballs. It also lets us test any logging improvements we
> plan to make for Gluster in one place.
> 3. I've been looking at Citellus[1] to write parsers that help us identify
> critical problems. This could be a way for us to build a repo of parsers
> that can identify common gluster issues.
>
> Perhaps our solution would be a mix of 2 and 3. Ideally, I'd like us
> to avoid archiving tarballs to debug regression issues in the future.
A combination of 2 and 3 sounds good to me.

If we could dogfood gluster somewhere in this setup (storage backend for
Elastic?), it would be even more awesome! :)
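For reference, suggestion 1 above (one archive of the whole log directory rather than a tar of tars) could be sketched roughly like this in Python; the directory and file names here are illustrative stand-ins, not the actual regression harness paths:

```python
import os
import tarfile
import tempfile

# Illustrative stand-in for /var/log/glusterfs on a regression node
logdir = tempfile.mkdtemp()
with open(os.path.join(logdir, "glusterd.log"), "w") as f:
    f.write("sample log line\n")

# One gzip-compressed archive of the whole log directory,
# instead of nesting per-test tarballs inside an outer tar
archive = os.path.join(tempfile.mkdtemp(), "glusterfs-logs.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(logdir, arcname="glusterfs")

# Inspect the archive contents
with tarfile.open(archive) as tar:
    members = tar.getnames()
print(members)
```

Compressing all the logs in one pass lets gzip share its dictionary across files, which is where the better-compression claim comes from.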

Thanks,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster Release cadence and version changes

2018-07-03 Thread Shyam Ranganathan
This announcement publishes changes to the upstream release cadence:
moving from quarterly (every 3 months) to every 4 months, and maintaining
all releases (no more LTM/STM distinction), with the maintenance and EOL
schedules described below.

Further, releases will be numbered with just a MAJOR version rather than
the "x.y.z" format that we currently employ.

1. Release cadence (Major releases)
- Release every 4 months
- Makes up 3 releases each year
- Each release (n) is maintained until release n+3 ships (IOW, for about a
year, thus retaining the EOL window for a release as it stands currently)
- Retain backward compatibility across releases, for ease of
migrations/upgrades

2. Update releases (minor update release containing bug fixes against a
major release)
- First 3 update releases will be done every month
- Further update releases will be made once every 2 months till EOL of
the major release
- Out of band releases for critical issues or vulnerabilities will be
done on demand
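The update schedule above (monthly for the first three updates, then every two months until EOL) works out to a fixed set of month offsets after a major release. A small sketch, assuming a 12-month maintenance window per the cadence section:

```python
def update_offsets(eol_months=12):
    """Months after a major release at which scheduled update releases
    ship: monthly for the first three, then every two months until EOL.
    Out-of-band releases for critical issues are not modeled here."""
    offsets = [1, 2, 3]          # first three updates, one per month
    month = 5                    # then every two months
    while month < eol_months:
        offsets.append(month)
        month += 2
    return offsets

print(update_offsets())
```

So a major release would normally see seven scheduled update releases over its lifetime, plus any on-demand ones.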

3. Release versions
- Releases will be versioned using a monotonically increasing number
starting at 5
- Hence future releases would be, release-5, release-6, and so on
- Update releases use minor numbers, like 5.x or 6.x, with x
monotonically increasing for every update
- RPM versions would look like .-..

4. Note on op-version
- op-versions were tied to release versions, and may undergo a change in
description to make them release-agnostic

Expect the Gluster release web page to undergo an update within a week.
( https://www.gluster.org/release-schedule/ )

Thanks,
Shyam