rahil-c commented on code in PR #14255:
URL: https://github.com/apache/hudi/pull/14255#discussion_r2525963595


##########
rfc/rfc-103/rfc-103.md:
##########
@@ -0,0 +1,178 @@
+   <!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-103: Support Vector Index on Hudi
+
+## Proposers
+
+- @suryaprasanna
+- @prashantwason
+
+## Approvers
+- @vinoth
+- @rahil-c
+
+## Status
+
+
+## Abstract
+With the rise of LLM applications, much of the focus has been on "operational" or "online" databases that have added vector search capabilities, and on specialized vector databases (Chroma, Pinecone, etc.) that offer similar capabilities. Specialized vector databases claim better algorithms, optimized ingest/serving performance, and tighter integration with LLM application development frameworks such as LangChain and LlamaIndex.
+
+With its shared-storage/decoupled-compute model, the data lakehouse architecture has already proven its scalability and cost-effectiveness advantages over storing all data for analysis and processing in shared-nothing database architectures or data warehouses. We believe that extending data lakehouse storage and query engines with vector search capabilities can unlock the best of both worlds, with some exciting outcomes.
+
+Storing vector indexes in a data lakehouse offers several advantages:
+- **Infinitely scalable storage:** 
+  - Overcomes the pain of storing and scaling large amounts of embeddings in an online database indefinitely, reducing costs while also allowing the production database to be more efficient and easier to operate.
+- **Leverage scalable compute frameworks:** 
+  - There is already rich support in compute frameworks (Spark, Flink) for building ingest pipelines that maintain embeddings from upstream sources, as well as fast query engines (Presto, Trino, StarRocks) that can serve vector searches.
+- **Tiered serving layer:** 
+  - Since much of the embedding data is updated by data pipelines at periodic intervals (rather than by real-time individual updates), we can also provide reasonably fast serving of vector queries (e.g., 80% of the speed at 80% lower cost) by either extending the lakehouse storage with a caching tier or by a tiered storage integration into existing production/online vector databases. These could serve applications with different end-user expectations, e.g., internal business apps vs. user-facing applications.
+
+By extending the multi-modal indexing subsystem, vector indexes can be stored as part of Hudi's metadata tables and can be served directly by query engines.
+
+
+## Background
+Following are the goals of this RFC:
+- Creating vector indexes based on a base column in a table - either an 
embedding column or a text column.
+- Indexes are automatically kept up to date when the base column changes, 
consistent with transactional boundaries.
+- First-class SQL experience for creating and dropping indexes (Spark)
+- SQL extensions to query the index. (Spark, then Presto/Trino)
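+To make the SQL experience concrete, a flow along the following lines could work, loosely modeled on the `CREATE INDEX` syntax Hudi already exposes for secondary/functional indexes in Spark SQL. The `vector_index` index type, its options, and the `vector_search` table function are illustrative assumptions for this sketch, not finalized syntax:

```sql
-- Hypothetical: create a vector index on an embedding column
CREATE INDEX embedding_idx ON hudi_table
USING vector_index(embedding)
OPTIONS (metric = 'cosine', ann_algorithm = 'hnsw');

-- Hypothetical: retrieve the 10 nearest neighbors of a query vector
SELECT id, title
FROM vector_search('hudi_table', 'embedding_idx', array(0.12, 0.98, 0.33), 10);

-- Drop the index
DROP INDEX embedding_idx ON hudi_table;
```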
+
+Non-goals/Unclear:
+- Fast serving layer, directly usable from RAG applications (this can be left 
to existing ODBC/SQL gateways that can talk to Spark?)
+- Integration within the ecosystem (LangChain, LlamaIndex, …) and developer experience.
+
+Hudi, with its extensible multi-modal indexing subsystem, already offers many of the capabilities needed to build this end-to-end architecture. Hudi already supports column statistics, files, bloom filter, and record-level indexes in the 0.x release line, while 1.x adds functional indexes, secondary indexes, and text indexes, with integrations into Spark SQL.
+
+Specifically, the indexing infrastructure provides the following capabilities:
+- Async index creation, allowing for indexes to be (re)built without impacting 
the writers of the table.
+- Allowing queries to gracefully degrade via an index discovery mechanism.
+- Concurrency control mechanisms to keep index and data integrity intact, even in the face of failures.
+- Hudi table services automatically manage index data.
+- Extensible to allow new index types to be integrated easily.
+
+The overall idea is to see whether we can add a new vector search index into the Hudi indexing subsystem. We can directly leverage Hudi's strengths around table management, ingest tooling, and diverse writers, while doing some interesting extensions to turn Hudi into a direct serving layer.
+
+## Implementation
+
+### Approximate Nearest Neighbor (ANN) for Vector Indexing
+To efficiently find the nearest neighbors of a vector, the industry-standard 
approach is to use Approximate Nearest Neighbor (ANN) algorithms.
+
+While exact search methods (e.g., direct cosine distance computation) achieve 
100% accuracy, they are computationally expensive and scale poorly for large 
datasets. In contrast, ANN indexing structures typically provide around 95% 
accuracy while offering 10× to 100× faster retrieval times.
+
+Given these trade-offs, ANN-based indexing is the most practical and 
performant choice for implementing vector search in Apache Hudi. Recent 
advances in AI and vector search technologies have led to several efficient ANN 
algorithms. Depending on the use case and environment, the following are 
recommended options:
+
+1. HNSW (Hierarchical Navigable Small World) – implemented via hnswlib-ann
+   - Graph-based approach
+   - Excellent recall-speed tradeoff
+   - Widely used in production systems
+2. jVector
+   - Java-native implementation of ANN structures
+   - Suitable for JVM-based environments like Apache Hudi
+3. FAISS (Facebook AI Similarity Search)
+   - Highly optimized C++ library with Python bindings
+   - Ideal for large-scale GPU-accelerated similarity search
+
+For the initial implementation, we will look into the HNSW and jVector implementations.
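+To give some intuition for the graph-based approach, HNSW's core building block is a greedy best-first walk over a proximity graph. The sketch below is a toy, single-layer illustration in pure Python (real HNSW maintains a hierarchy of layers and a candidate beam); the graph and vectors are made up for the example:

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def greedy_search(graph, vectors, query, entry_point):
    """Move to whichever neighbor is closest to the query until no neighbor
    improves on the current node. This is the single-layer core of
    HNSW-style graph navigation."""
    current = entry_point
    best = cosine_distance(vectors[current], query)
    improved = True
    while improved:
        improved = False
        for nbr in graph[current]:
            d = cosine_distance(vectors[nbr], query)
            if d < best:
                current, best = nbr, d
                improved = True
    return current

# Unit vectors at 0, 10, 20, 30, 40 degrees; each node links to nearby nodes.
vectors = {i: [math.cos(math.radians(a)), math.sin(math.radians(a))]
           for i, a in enumerate([0, 10, 20, 30, 40])}
graph = {0: [1, 2], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [2, 3]}
query = [math.cos(math.radians(38)), math.sin(math.radians(38))]
print(greedy_search(graph, vectors, query, entry_point=0))  # -> 4
```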
+
+### Requirements:
+Before going over the architecture of the vector index, let us first go through the basic requirements:
+
+- **Multi-index support:** The implementation must support maintaining 
multiple vector indexes, each corresponding to different columns.
+- **Data mutation handling:** The vector index must support upserts, inserts, 
and deletes to stay consistent with the changes happening on the underlying 
dataset.
+- **Transactional consistency:** The implementation must support rolling back all vector index changes when commits are reverted or removed during restore or rollback operations.
+- **Index refresh capability:** The implementation must provide the ability to refresh or rebuild vector indexes after data or configuration changes to ensure accuracy and consistency.
+
+### Vector Index stored under Metadata table:
+Each Hudi dataset contains a Metadata table, which is stored under the .hoodie 
directory of the dataset. This Hudi Metadata table is another Hudi table which 
uses MOR table type. This internal table is used for storing Files metadata, 
Column Stats, Record indexes and Secondary Indexes etc related to the main Hudi 
dataset. Hudi Metadata table is not a centralized storage for all Hudi 
datasets, it is tightly coupled with the main dataset and every commit on the 
main table contain a corresponding commit in the Metadata table. This RFC 
proposes to store Vector Indexes under the Hudi Metadata table similar to 
Record and Secondary indexes.
+
+So, the vector index related information will be stored under the metadata folder.
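+For orientation, the on-storage layout could look roughly like the sketch below; the `vector_index` partition name is an illustrative assumption, following the pattern of existing metadata table partitions such as `files` and `column_stats`:

```
<table_base_path>/
└── .hoodie/
    └── metadata/           <- metadata table (an internal MOR Hudi table)
        ├── files/          <- existing partition: file listings
        ├── column_stats/   <- existing partition: column statistics
        ├── record_index/   <- existing partition: record-level index
        └── vector_index/   <- proposed partition for vector index data
```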
+
+### Key considerations:
+
+#### Clusters of Indexes (Horizontal Scalability for Vector Indexing)
+
+Many Approximate Nearest Neighbor (ANN) algorithms, such as HNSW, are graph-based structures inherently designed for in-memory computation. As a result, traditional ANN implementations typically achieve scalability through vertical scaling, i.e., by adding more memory and CPU resources to a single node.
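+One way to get horizontal scalability despite that in-memory bias is to partition the vectors into multiple smaller sub-indexes and scatter-gather at query time: search each sub-index independently (potentially on different executors) and merge the per-shard top-k results into a global top-k. A minimal pure-Python sketch of the merge step, with brute-force shard search standing in for a real ANN sub-index:

```python
import heapq
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.sqrt(sum(x * x for x in a)) *
                        math.sqrt(sum(x * x for x in b)))

def search_shard(shard, query, k):
    """Per-shard top-k, returned as a sorted list of (distance, key) pairs.
    In practice this would be one ANN sub-index on one node."""
    ranked = sorted((cosine_distance(vec, query), key) for key, vec in shard.items())
    return ranked[:k]

def scatter_gather(shards, query, k):
    """Query every shard, then merge the already-sorted partial results."""
    partials = [search_shard(shard, query, k) for shard in shards]
    merged = heapq.merge(*partials)
    return [key for _, key in list(merged)[:k]]

shards = [
    {"a": [1.0, 0.0], "b": [0.0, 1.0]},
    {"c": [0.9, 0.1], "d": [-1.0, 0.0]},
]
print(scatter_gather(shards, [1.0, 0.05], 2))  # -> ['a', 'c']
```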

Review Comment:
   If a base file contains multiple embedding columns, then yes I would assume 
that you can have indexes built on each one.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
