I have captured in writing everything I have learned so far about Launchable and how to do model training in a way that is most effective for the Jenkins project:
https://docs.google.com/document/d/12LdoAFA566P4LgHhu1L-fuOQuhipxk77tUDXQdNmhss/edit

Based on this document, we have three models to train, and training each one requires collecting builds and collecting test results. Collecting builds of BOM/PCT runs is the most complicated part and will likely require developing a new tool of some sort, so I will focus on the other areas first.

Once the models are trained, Launchable will be able to create subsets of each of our three test suites. It is up to us how to use that information to increase velocity, decrease costs, and/or shift left. I provided several examples in the above document. The least controversial of those is probably running a subset of Windows tests on core PR builds for a mild decrease in test time and cost. I plan to implement this as soon as the model has been trained on a decent amount of data, likely about one month after model training begins.

The other ideas mentioned above (feel free to suggest new ones in the Google Document as well!) can be discussed further once the models are trained and we can query Launchable for subsets and assess whether the provided subsets are acceptable. Note that BOM costs are unsustainable given budgetary constraints, so regardless of the quality of the subsets, a change in BOM maintenance practices will likely be needed one way or another.
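For anyone curious what "collecting builds and collecting test results" looks like in practice, here is a rough sketch of the Launchable CLI workflow as I understand it. The command names are from the Launchable CLI, but the exact flags, the build name, and the report path below are placeholders and should be checked against the Launchable documentation before anyone copies this into a Jenkinsfile:

```shell
# Sketch only -- verify flags against the Launchable CLI docs.

# 1. Record the build, associating this CI run with the checked-out commit(s).
#    $BUILD_TAG is a placeholder for however we uniquely name the run.
launchable record build --name "$BUILD_TAG"

# 2. After tests run, send the JUnit-format results for that build so the
#    model can learn which changes tend to break which tests.
launchable record tests --build "$BUILD_TAG" maven './**/target/surefire-reports'

# 3. Once the model is trained, request a subset before running tests,
#    e.g. targeting a fraction of the full suite's expected duration.
launchable subset --build "$BUILD_TAG" --target 60% maven src/test/java > subset.txt
```

The subset output is what we would feed back into the build (e.g. as a Surefire include list) for ideas like the Windows PR subset mentioned above.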