>
> It would be nice to have a graph in our weekly status of the number of
> issues reported against 4.0. A visual representation of the number of
> bugs in 4.0 over time would be really helpful to give us a sense of the
> progress on its stability.
>

Berenguer pointed out to me that we already have a graph to track those
things:

https://issues.apache.org/jira/secure/ConfigureReport.jspa?projectOrFilterId=filter-12347782&periodName=weekly&daysprevious=30&cumulative=true&versionLabels=none&selectedProjectId=12310865&reportKey=com.atlassian.jira.jira-core-reports-plugin%3Acreatedvsresolved-report&atl_token=A5KQ-2QAV-T4JA-FDED_fd75a3db98350d94229fbb4cf29cb50f3051d7ce_lin&Next=Next
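
For anyone who would rather pull the raw numbers and plot them themselves
(or break them down further), here is a rough sketch. It assumes anonymous
read access to the ASF JIRA REST search API and Python with the requests
library; the JQL is illustrative rather than the exact filter behind the
report above, so adjust the fixVersion and issue type as needed:

    import datetime
    import requests

    SEARCH = "https://issues.apache.org/jira/rest/api/2/search"
    # Illustrative JQL, not the exact saved filter behind the report above.
    BASE = 'project = CASSANDRA AND fixVersion = "4.0" AND type = Bug'

    def count(jql):
        # maxResults=0 asks JIRA for the match count only, no issue bodies
        r = requests.get(SEARCH, params={"jql": jql, "maxResults": 0})
        r.raise_for_status()
        return r.json()["total"]

    today = datetime.date.today()
    for weeks_back in range(8, 0, -1):
        start = today - datetime.timedelta(weeks=weeks_back)
        end = start + datetime.timedelta(weeks=1)
        created = count(f'{BASE} AND created >= "{start}" AND created < "{end}"')
        resolved = count(f'{BASE} AND resolved >= "{start}" AND resolved < "{end}"')
        print(f"{start}  created: {created:3d}  resolved: {resolved:3d}")

Feeding the weekly counts into any plotting tool gives the trend line, and
adding a component clause to the JQL would give the per-component breakdown
Benjamin mentions below.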



On Tue, Jun 30, 2020 at 10:20 AM Benjamin Lerer <benjamin.le...@datastax.com>
wrote:

> Thanks a lot for starting this thread, Dinesh.
>
>> As a baseline expectation, we thought big users of Cassandra should be
>> running the latest trunk internally and testing it out for their particular
>> use cases. We wanted them to file as many jiras as possible based on their
>> experience: operations such as host replacement, expansions, shrinks, etc.,
>> and obviously any issues with durability, performance, or availability.
>> This was expected to generate a body of work (jiras) that, once fixed,
>> would stabilize trunk over time. When the trickle of new jiras comes to a
>> halt, or at least nothing serious shows up, that's when the big users of
>> Cassandra would feel comfortable running the build in prod. This would be
>> a good time to cut the final stable release.
>>
>
> It would be nice to have a graph in our weekly status of the number of
> issues reported against 4.0. A visual representation of the number of
> bugs in 4.0 over time would be really helpful to give us a sense of the
> progress on its stability.
> It might also be interesting to see which components are the most affected,
> to help us determine where we should increase testing.
>
>> We also created a confluence doc for a test plan with major areas that
>> require testing. Shepherds were tentatively assigned [1]. The rationale
>> for this doc was that these areas have changed significantly and we need
>> more eyeballs on them to ensure stability. The shepherds would help guide
>> the testing for these areas.
>
>
> I had a quick look at the JIRAs associated with the different areas of the
> plan, and a lot of them appear to be blocked. I believe most people are
> unsure of what to test or how to test it, and want some feedback before
> starting to add tests.
> It would be great if, in the coming weeks, we could all help unblock those
> tickets by clarifying what needs to be done on each of them. I guess none
> of us has a complete picture, but sharing ideas would definitely help. :-)
>
>> The final concern was that some people felt the lack of visible activity
>> signals that the project is dead. While I don't fully agree with this
>> assessment, I suspect that sending a periodic update to the mailing list
>> on new issues or on the test runs people are doing would definitely help
>> keep everyone engaged. It also helps bring visibility to the community.
>> I am not 100% sure whether it is feasible for everyone to share what
>> they're doing internally, but I think that if you're working on something,
>> summarizing it on a weekly or biweekly basis can help the community. This
>> is just a thought, and if there are other suggestions, let's discuss them
>> without shooting down new ideas (assume positive intent).
>>
>
> Your suggestion makes sense to me. Hopefully, releasing 4.0-beta will also
> be a strong sign that the project is still active.
>
> On Mon, Jun 29, 2020 at 10:48 PM Dinesh Joshi <djo...@apache.org> wrote:
>
>> Hi all,
>>
>> I am starting a separate thread as the other thread has veered off in a
>> very different direction. The ground rules for this thread are that we are
>> not discussing branching models or release strategy here.
>>
>> Some folks in the community had the following questions and concerns:
>>
>> 1. Lack of clarity on how stability and quality are being measured.
>> 2. Lack of visibility into the progress toward stabilizing 4.0.
>> 3. Lack of clarity on what remains to get 4.0 to a stable state.
>>
>> My 2 cents on these 3 questions are as follows:
>>
>> As a baseline expectation, we thought big users of Cassandra should be
>> running the latest trunk internally and testing it out for their particular
>> use cases. We wanted them to file as many jiras as possible based on their
>> experience: operations such as host replacement, expansions, shrinks, etc.,
>> and obviously any issues with durability, performance, or availability.
>> This was expected to generate a body of work (jiras) that, once fixed,
>> would stabilize trunk over time. When the trickle of new jiras comes to a
>> halt, or at least nothing serious shows up, that's when the big users of
>> Cassandra would feel comfortable running the build in prod. This would be
>> a good time to cut the final stable release.
>>
>> We also created a confluence doc for a test plan with major areas that
>> require testing. Shepherds were tentatively assigned [1]. The rationale
>> for this doc was that these areas have changed significantly and we need
>> more eyeballs on them to ensure stability. The shepherds would help guide
>> the testing for these areas.
>>
>> I think the big missing piece is that we don't know who is actively
>> running trunk internally and how aggressive their timelines are for
>> getting to a stable 4.0. However, we can see new jiras being reported
>> every day. There are also a lot of open jiras that require attention, and
>> they are being reported by a diverse set of Cassandra users, which is
>> great. I think everyone would like to see a stable release ~6 months from
>> now. The quality of that release will depend on how aggressively everyone
>> in the community tests it in the coming weeks and months.
>>
>> The final concern was that some people felt the lack of visible activity
>> signals that the project is dead. While I don't fully agree with this
>> assessment, I suspect that sending a periodic update to the mailing list
>> on new issues or on the test runs people are doing would definitely help
>> keep everyone engaged. It also helps bring visibility to the community.
>> I am not 100% sure whether it is feasible for everyone to share what
>> they're doing internally, but I think that if you're working on something,
>> summarizing it on a weekly or biweekly basis can help the community. This
>> is just a thought, and if there are other suggestions, let's discuss them
>> without shooting down new ideas (assume positive intent).
>>
>> Thanks,
>>
>> Dinesh
>>
>> [1]
>> https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Quality%3A+Components+and+Test+Plans
>>
>>
