paulirwin opened a new issue, #1085:
URL: https://github.com/apache/lucenenet/issues/1085

   ### Is there an existing issue for this?
   
   - [X] I have searched the existing issues
   
   ### Task description
   
   This issue formalizes the plan for benchmarking with 
[BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet) that was started 
in PR #310 (#349 can likely be closed, as its work was effectively done as an 
update to #310), and tracks that work against the release. Note that this is 
specifically about BenchmarkDotNet benchmarks; it is not the same thing 
as the Lucene.Net.Benchmark project or its use in lucene-cli.
   
   First, we should get the project that benchmarks the Demos (PR #310) into the 
repo as a starting point, addressing the structural feedback in PR #349 so that 
it is set up for future projects. This will allow us to run the benchmarks 
between branches locally to watch for performance regressions as we go, and to 
compare them against roughly the last two or three published NuGet packages. We 
should also have CI scripts for GitHub and Azure DevOps that run this benchmark 
project, to ensure the benchmarks continue to work as future changes are made, 
although centralized benchmark reporting will come later. If the CI can 
trivially output a user-friendly file such as HTML that could be published as a 
build asset (and even visualized in, e.g., an Azure DevOps tab), that would be 
great; but to keep the scope reasonable, this would be limited to viewing the 
data from that single benchmark run. That latter part can be split out as a 
separate issue if needed. Having this initial benchmarking infrastructure in 
place should be a requirement for beta 18.
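   
   As a rough illustration (not the actual PR #310 code; the namespace, class 
names, and package version below are all hypothetical), a BenchmarkDotNet 
benchmark can compare the local source tree against a previously published 
NuGet package and emit an HTML report in a single run:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Exporters;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

// Hypothetical config: one job builds against the local source tree, another
// swaps in a published NuGet package so a run can be compared to a release.
// The HTML exporter writes a user-friendly report under
// BenchmarkDotNet.Artifacts/results/ that CI could publish as a build asset.
public class VersionComparisonConfig : ManualConfig
{
    public VersionComparisonConfig()
    {
        AddJob(Job.MediumRun.WithId("local"));
        AddJob(Job.MediumRun
            .WithNuGet("Lucene.Net", "4.8.0-beta00016") // placeholder version
            .WithId("beta00016"));
        AddExporter(HtmlExporter.Default);
    }
}

[Config(typeof(VersionComparisonConfig))]
[MemoryDiagnoser]
public class IndexingBenchmarks // hypothetical benchmark class
{
    [Benchmark]
    public void IndexSmallDocuments()
    {
        // Index a small fixed set of documents, in the spirit of the
        // Demo-based benchmarks.
        using var dir = new RAMDirectory();
        var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
        using var writer = new IndexWriter(
            dir, new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer));
        for (int i = 0; i < 100; i++)
        {
            var doc = new Document();
            doc.Add(new TextField("body", "the quick brown fox " + i, Field.Store.NO));
            writer.AddDocument(doc);
        }
    }
}

public static class Program
{
    // BenchmarkSwitcher lets CI select benchmarks via command-line filters.
    public static void Main(string[] args) =>
        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```

   This is only a sketch of the approach, not a proposed benchmark set; the 
actual benchmarks and project layout should follow from the feedback in #349.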
   
   Second, we should set up centralized benchmark reporting so that we can 
track benchmark performance data over time. While our first attempt should 
naturally start out _much_ smaller in scope, it would be nice to have something 
that aims to eventually be equivalent to [Lucene's nightly 
benchmarks](https://benchmarks.mikemccandless.com/). Where to publish this 
data, how to visualize it, etc., are TBD. This part will likely be a 
post-beta-18 item, and we can split it out as its own issue if needed.
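   
   Whatever form the reporting takes, BenchmarkDotNet's JSON exporter is one 
plausible building block: it emits machine-readable results that a nightly job 
could append to a time-series store and chart over time. A minimal sketch 
(class and method names are hypothetical):

```csharp
using BenchmarkDotNet.Attributes;

// The Full JSON report includes environment info and per-iteration
// measurements, which a nightly reporting job could ingest.
[JsonExporterAttribute.Full]
[MemoryDiagnoser]
public class DemoSearchBenchmarks
{
    [Benchmark]
    public void SearchFixedQuerySet()
    {
        // ... run a fixed query set against a prebuilt index ...
    }
}
```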
   
   Any additional benchmarks that we think would be useful can be logged as 
their own issues. Hopefully, having this infrastructure in place will encourage 
the community to contribute benchmarks and help us build out our benchmark 
test suite.

