sam-herman commented on issue #15420:
URL: https://github.com/apache/lucene/issues/15420#issuecomment-3528469116

   > I think if we want to make graph based indices (HNSW or others) build 
faster and merge faster, we should instead do something better with the 
algorithm.
   > 
   > There are many good ideas to explore around bootstrapping graph building 
through clustering (and merging clusters between segments can be REALLY 
fast....)
   > 
   > Or taking advantage of bipartite ordering (and making that better) so that 
graph building has a head start, since similar vectors end up near each 
other on disk.
   > 
   > I would say let's do the "harder" thing (arguably, given how Lucene is 
built, I doubt it's actually harder), and just make the graph-based indices 
better in a segment-based architecture.
   
   @benwtrent all good ideas, which I'm actively exploring. Any thoughts on 
minimizing write IO to disk for frequent merges? Take the previous extreme 
example: a graph of 1 billion vectors updated with a single vector — merging 
would cost writing back ~6TB of data to disk.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

