Pulkitg64 commented on issue #12403:
URL: https://github.com/apache/lucene/issues/12403#issuecomment-3711678543

   I tried implementing float16 vector encoding in PR #15549, using float16 
vectors only for storage and float32 vectors for all computations. The results 
are not encouraging so far: we are seeing a large latency regression (100% in 
the no-quantization case) and about a 20% regression in indexing time. However, 
float16 vectors may still be useful for quantization cases, where search runs 
on quantized vectors instead of full-precision vectors, if we can accept a hit 
to the indexing rate. Looking for feedback.
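   The store-as-float16, compute-as-float32 idea can be sketched roughly as 
below. This is not the PR's actual code, just a minimal illustration using the 
standard `Float.floatToFloat16` / `Float.float16ToFloat` conversions available 
since Java 20; the class and method names are hypothetical.

   ```java
   // Hypothetical sketch: store vectors as float16 bit patterns (short[]),
   // decode back to float32 before any similarity computation.
   public class Float16Demo {
     // Encode a float32 vector into float16 bit patterns (halves storage).
     static short[] encode(float[] v) {
       short[] out = new short[v.length];
       for (int i = 0; i < v.length; i++) out[i] = Float.floatToFloat16(v[i]);
       return out;
     }

     // Decode float16 bit patterns back to float32 for computation.
     static float[] decode(short[] h) {
       float[] out = new float[h.length];
       for (int i = 0; i < h.length; i++) out[i] = Float.float16ToFloat(h[i]);
       return out;
     }

     // Dot product runs entirely in float32, as in the PR's approach.
     static float dot(float[] a, float[] b) {
       float s = 0f;
       for (int i = 0; i < a.length; i++) s += a[i] * b[i];
       return s;
     }

     public static void main(String[] args) {
       // 0.125 and -1.5 are exactly representable in float16;
       // values like 0.3 incur a small rounding error on round trip.
       float[] v = {0.125f, -1.5f, 0.3f};
       float[] roundTrip = decode(encode(v));
       System.out.println(dot(roundTrip, roundTrip));
     }
   }
   ```

   The decode step on every distance computation is a plausible source of the 
latency regression mentioned above, since full-precision search touches many 
vectors per query.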


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

