mikemccand commented on issue #13768:
URL: https://github.com/apache/lucene/issues/13768#issuecomment-2349362383

> But I don't think we should block removing compress option due to non-SIMD results?

Actually, thinking about this more ... I'm changing my mind. I don't fully understand how poor our Panama/SIMD coverage is across the CPU types/versions "typically" in use by our users, e.g. ARM CPUs with their various versions of NEON instructions. What percentage of our users would hit the non-SIMD (non-Panama) path?

It's spooky that the likes of OpenSearch, Elasticsearch, and Solr need to pull in their own Panama FMA wrappers around native code to better optimize certain vectorized instruction cases (see the discussion on #13572). Ideally such optimizations would live in Lucene itself, so we could make decisions like this one (removing the `compress` option to simplify our API / reduce surface area) with more confidence.

I'd like to run benchmarks across many more CPUs before rushing to a decision here, and I think for now we should respect the non-SIMD results.

I love our new `aws-jmh` dev tool (thank you @rmuir)! I looked at its `playbook.yml` to see whether I could also add a step that checks out `luceneutil`, downloads the massive 95 GB `.vec` file, runs `knnPerfTest.py`, and summarizes the results, but I haven't made much progress so far ...
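
As a rough way to get a feel for who would land on the non-SIMD path, here is a minimal, hypothetical probe (the `PanamaProbe` class name is made up, and this is not Lucene's actual gating logic, which also checks the exact JDK version and other conditions): it uses reflection against the real `jdk.incubator.vector` classes to report whether the incubating Panama Vector API is usable in the running JVM and, if so, what the preferred float vector width is.

```java
// Hypothetical sketch, not Lucene code: probe whether the incubating Panama
// Vector API is usable in this JVM, and report the preferred float vector width.
// Run with:  java --add-modules jdk.incubator.vector PanamaProbe
import java.lang.reflect.Method;

public class PanamaProbe {
  public static void main(String[] args) {
    try {
      // Reflection keeps this class loadable even when the incubator module is absent.
      Class<?> floatVector = Class.forName("jdk.incubator.vector.FloatVector");
      Object preferred = floatVector.getField("SPECIES_PREFERRED").get(null);
      // Look the method up on the public VectorSpecies interface so invoke() is permitted.
      Method vectorBitSize =
          Class.forName("jdk.incubator.vector.VectorSpecies").getMethod("vectorBitSize");
      System.out.println("Panama Vector API available; preferred float vector width: "
          + vectorBitSize.invoke(preferred) + " bits");
    } catch (ReflectiveOperationException e) {
      // Module missing (older JDK, or --add-modules not passed): scalar fallback territory.
      System.out.println("Panama Vector API not available; this JVM would hit the non-SIMD path");
    }
  }
}
```

Even where the API is present, the reported width varies a lot across hardware (e.g. 128-bit NEON on many ARM cores vs. wider AVX on x86), which is part of why benchmarking across more CPU types seems worthwhile before deciding.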
>> But I don't think we should block removing compress option due to non-SIMD results? Actually, thinking about this more ... I'm changing my mind. I don't fully understand how poor our Panama/SIMD coverage is across CPU types/versions, "typically" in use by our users. E.g. for ARM CPUs (various versions of NEON instructions). What %tg of our users would hit the non-SIMD (non-Panama) path? It's spooky that the likes of OpenSearch, Elasticsearch, Solr are needing to pull in their own Panama FMA wrappers around native code to better optimize for certain vectorized instruction cases (see discussion on #13572). Ideally such optimizations would be in Lucene so we could make decisions like this (remove `compress` option to simplify our API / reduce surface area) with more confidence. I'd like to run benchmarks across many more CPUs before rushing to a decision here, and I think for now we should otherwise respect the non-SIMD results? I love our new `aws-jmh` dev tool (thank you @rmuir)! I looked at its `playbook.yml` to figure out if I could also add "go check out `luceneutil`, download this massive 95 GB `.vec` file, and run `knnPerfTest.py` and summarize the results" but I haven't made much progress so far ... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org