rahil-c commented on code in PR #14255:
URL: https://github.com/apache/hudi/pull/14255#discussion_r2530273290


##########
rfc/rfc-103/rfc-103.md:
##########
@@ -0,0 +1,178 @@
+   <!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-103: Support Vector Index on Hudi
+
+## Proposers
+
+- @suryaprasanna
+- @prashantwason
+
+## Approvers
+- @vinoth
+- @rahil-c
+
+## Status
+
+
+## Abstract
+As LLM applications rise, much of the focus has been on “operational” 
or “online” databases that have added vector search capabilities, and on 
specialized vector databases (Chroma, Pinecone, ..) that offer similar 
capabilities. Specialized vector databases claim better algorithms, 
optimized ingest/serving performance, and tighter integration with LLM 
application development frameworks such as LangChain and LlamaIndex.
+
+With its shared-storage/decoupled-compute model, the data lakehouse 
architecture has already proven its scalability and cost-effectiveness 
advantages over storing all data for analysis and processing in shared-nothing 
database architectures or data warehouses. We believe that extending data 
lakehouse storage and query engines with vector search capabilities can unlock 
the best of both worlds, with some exciting outcomes.
+
+Storing vector indexes in a data lakehouse offers several advantages:
+- **Infinitely scalable storage:** 
+  - Overcomes the pain of storing and scaling large volumes of embeddings in 
an online database indefinitely, reducing costs while also making the 
production database more efficient and easier to operate.
+- **Leverage scalable compute frameworks:** 
+  - There is already rich support in compute frameworks (Spark, Flink) for 
building ingest pipelines that maintain embeddings from upstream sources, as 
well as fast query engines (Presto, Trino, StarRocks) that can serve vector 
searches.
+- **Tiered serving layer:** 
+  - Given that much of the embedding data is updated periodically by data 
pipelines (versus real-time individual updates), we can also provide 
reasonably fast serving of vector queries (e.g. 80% of the speed at 80% lower 
cost) by either extending the lakehouse storage with a caching tier, or by a 
tiered-storage integration into existing production/online vector databases. 
These could serve applications with different end-user expectations, e.g. 
internal business apps vs. user-facing applications.
+
+By extending the multi-modal indexing subsystem, vector indexes can be stored 
as part of Hudi's metadata tables and can be served directly using
+
+
+## Background
+The following are the goals for this RFC:
+- Creating vector indexes on a base column of a table - either an 
embedding column or a text column.
+- Indexes are automatically kept up to date when the base column changes, 
consistent with transactional boundaries.
+- First-class SQL experience for creating and dropping indexes (Spark).
+- SQL extensions to query the index. (Spark, then Presto/Trino)
+
+Non-goals/Unclear:
+- Fast serving layer, directly usable from RAG applications (this can be left 
to existing ODBC/SQL gateways that can talk to Spark?)

Review Comment:
   @suryaprasanna For these RAG applications, can the RAG application just be 
a Python script run via PySpark? The developer could then directly invoke our 
vector search API from RFC 102. I saw this basic blog on RAG 
https://huggingface.co/learn/cookbook/en/rag_with_hf_and_milvus that shows an 
example with a vector database like Milvus, so I'm wondering if you could 
instead swap out Milvus with Hudi acting as the vector store, using PySpark as 
the execution layer.
   
   Or is Spark not suited for these kinds of use cases?
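
To make the idea concrete (purely illustrative, not an API from this PR or 
RFC 102): the retrieval step such a script would run, with Hudi standing in 
for Milvus as the vector store, boils down to a nearest-neighbor scan over an 
embeddings column. A minimal pure-Python sketch of that core computation 
follows; names like `rows`, `top_k`, and `query_vec` are hypothetical, and in 
practice `rows` would come from a PySpark job reading the Hudi table, with the 
proposed vector index pruning the scan:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(rows, query_vec, k=2):
    """rows: (doc_id, embedding) pairs, e.g. read from a Hudi table via
    PySpark (hypothetical). Returns the k doc_ids most similar to
    query_vec. This is a brute-force scan; the vector index proposed in
    this RFC would avoid scoring every row."""
    scored = [(doc_id, cosine_sim(vec, query_vec)) for doc_id, vec in rows]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Illustrative data: three tiny "document embeddings" and a query vector.
rows = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.9, 0.1])]
print(top_k(rows, [1.0, 0.0], k=2))  # → ['a', 'c']
```

The retrieved documents would then be stuffed into the LLM prompt, exactly as 
in the Milvus cookbook example, with Hudi/PySpark replacing the Milvus client.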



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
