Hello List!
I'm new to the list, and this is my first message.
We recently learned about Solr, and we are very excited about using it to
replace our current Elasticsearch infrastructure. Currently, our main issue
is the data and model size running on each machine.

*Our setup:*
1. We use a two-tier search architecture: the 1st tier is a fast search
(low response time) over the data most likely to be retrieved,
2. the 2nd tier holds the rest (including on-disk data).

We reviewed the features listed on the Solr webpage, and we would like to
ask about a few of them. Specifically, we would like to know:
1. Can we combine text search and vector similarity search?
2. Can we filter by metadata?
3. What about index/memory consumption? Our 1st tier needs around 4000M
embedding vectors (128-dim fp32) plus metadata stored in memory.
4. Can we execute models inside Solr itself (not externally)? We have
per-user models, and we need a way of executing TensorFlow models on the
database side to avoid moving data out of the DB.
5. Can we get sub-second queries?
6. Is real-time (or near-real-time) indexing of new data supported?
7. Is it easy to scale?
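
To put some numbers behind question 3, here is a back-of-the-envelope
estimate of the raw vector memory alone. This is just arithmetic on our
stated figures (4000M vectors, 128 fp32 dimensions); any real index (e.g.
Lucene's HNSW graph) would add overhead on top of this:

```python
# Rough memory estimate for the 1st-tier vectors alone, excluding
# metadata and any index-structure overhead (e.g. HNSW graph links).
NUM_VECTORS = 4_000_000_000   # ~4000M embeddings
DIMS = 128                    # 128-dimensional vectors
BYTES_PER_FP32 = 4            # fp32 = 4 bytes per component

raw_bytes = NUM_VECTORS * DIMS * BYTES_PER_FP32
raw_tib = raw_bytes / 2**40

print(f"raw vectors: {raw_bytes} bytes (~{raw_tib:.2f} TiB)")
# → raw vectors: 2048000000000 bytes (~1.86 TiB)
```

So even before metadata and index overhead, the 1st tier is roughly 2 TB
of vector data, which is why per-machine memory footprint matters to us.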
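
To make questions 1 and 2 concrete, here is a sketch of the kind of query
we have in mind, written as a small Python helper that builds request
parameters combining vector similarity with a metadata filter. It assumes
Solr 9+'s `{!knn}` query parser for dense vector fields; the field names
`vector_field` and `category` are placeholders, not real schema fields:

```python
def build_knn_params(query_vector, top_k=10, metadata_filter=None):
    """Build Solr request params combining vector similarity (the {!knn}
    query parser, available in Solr 9+) with an optional metadata filter
    query (fq). 'vector_field' is a placeholder field name.
    """
    # Solr expects the query vector as a bracketed list of floats.
    vec = "[" + ", ".join(str(v) for v in query_vector) + "]"
    params = {"q": f"{{!knn f=vector_field topK={top_k}}}{vec}"}
    if metadata_filter:
        # fq restricts the candidate set by metadata, e.g. "category:books".
        params["fq"] = metadata_filter
    return params

params = build_knn_params([0.1, 0.2, 0.3], top_k=5,
                          metadata_filter="category:books")
print(params["q"])   # {!knn f=vector_field topK=5}[0.1, 0.2, 0.3]
print(params["fq"])  # category:books
```

These params would then go to a running Solr instance's `/select` handler;
we are mainly asking whether this hybrid pattern is the intended usage.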

Thank you so much!!
