On 8/24/25 05:05, danny mcClanahan wrote:
> On Monday, August 18th, 2025 at 09:27, Bernhard Voelker <m...@bernhard-voelker.de> wrote:
>
>> I've rolled the change into a proper Git commit and would push it like
>> that in your name (unless you don't want that, or want another change).
>> Good to go?
>
> I haven't been able to identify another change yet!  danny mcClanahan
> <dmc2@amass.energy> is fine for attribution.
Thanks, pushed in your name:
https://cgit.git.savannah.gnu.org/cgit/findutils.git/commit/?id=fbbda507c68

> However, I'd be curious to know if multithreaded directory traversal
> would ever be in scope for the find and/or updatedb commands (I haven't
> looked at updatedb yet).

This is a matter of optimization, and the top-most rule to keep in mind is:
"Premature optimization is the root of all evil."

So the question is: where is the bottleneck?
Even if find(1) were able to process the entries faster and in parallel,
would the hardware and the kernel be able to provide that information
faster?  Or if find has to pass the file names to another tool - let's
take grep(1) as a simple example - would the overall processing improve?

I don't believe that multi-threaded directory traversal would help in a
significant way here, and hence it would not be worth the enormous added
complexity in the code.

Again about optimization: if we had numbers showing that multi-threading
would boost performance by e.g. 30% while the code would grow by only
e.g. 10%, then we could consider it.  Otherwise, keep searching for the
bottleneck in the whole chain.

Have a nice day,
Berny