danny0405 commented on code in PR #14255: URL: https://github.com/apache/hudi/pull/14255#discussion_r2525700739
########## rfc/rfc-103/rfc-103.md:
##########
@@ -0,0 +1,178 @@
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License. You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

# RFC-103: Support Vector Index on Hudi

## Proposers

- @suryaprasanna
- @prashantwason

## Approvers

- @vinoth
- @rahil-c

## Status

## Abstract

As LLM applications rise in popularity, much of the focus has been on "operational" or "online" databases that have added vector search capabilities, and on specialized vector databases (Chroma, Pinecone, ...) that offer similar capabilities. Specialized vector databases claim better algorithms, optimized ingest/serving performance, and tighter integration with LLM application development frameworks such as LangChain, LlamaIndex, et al.

With its shared-storage/decoupled-compute model, the data lakehouse architecture has already proven its scalability and cost-effectiveness advantages over storing all data for analysis and processing in shared-nothing database architectures or data warehouses. We believe that extending data lakehouse storage and query engines with vector search capabilities can unlock the best of both worlds, with some exciting outcomes.
Storing vector indexes in a data lakehouse offers several advantages:

- **Infinitely scalable storage:**
  - Overcomes the pain of storing and scaling large amounts of embeddings in an online database indefinitely, reducing costs while also making the production database more efficient and easier to operate.
- **Leverage scalable compute frameworks:**
  - There is already rich support among compute frameworks (Spark, Flink) for building ingest pipelines that maintain embeddings from upstream sources, as well as fast query engines (Presto, Trino, StarRocks) that can serve vector searches.
- **Tiered serving layer:**
  - Since much of the embedding data is updated from data pipelines in periodic batches (versus real-time individual updates), we can also provide reasonably fast serving of vector queries (e.g., 80% of the speed at 80% lower cost) by either extending the lakehouse storage with a caching tier, or via a tiered storage integration into existing production/online vector databases. These could serve applications with different end-user expectations, e.g., internal business apps vs. user-facing applications.

By extending the multi-modal indexing subsystem, vector indexes can be stored as part of Hudi's metadata table and served directly by query engines.

## Background

The following are the goals of this RFC:

- Creating vector indexes based on a base column in a table - either an embedding column or a text column.
- Indexes are automatically kept up to date when the base column changes, consistent with transactional boundaries.
- First-class SQL experience for creating and dropping indexes (Spark).
- SQL extensions to query the index (Spark, then Presto/Trino).

Non-goals/unclear:

- Fast serving layer, directly usable from RAG applications (this can be left to existing ODBC/SQL gateways that can talk to Spark?)
- Integration within the ecosystem - LangChain, LlamaIndex, etc. - and developer experience.
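To make the SQL goals above concrete, a sketch of what the experience could look like follows. This is purely illustrative: the `vector_index` index type, the `similarity_search` function, and the option names are hypothetical placeholders, not syntax decided by this RFC. Hudi 1.x already exposes a `CREATE INDEX ... ON ... USING ...` pattern for its existing index types, which a vector index could plausibly extend.

```sql
-- Hypothetical sketch only: index type, function, and option names are
-- illustrative placeholders, not finalized syntax.

-- Build a vector index over an embedding column (built async, like other Hudi indexes).
CREATE INDEX doc_embedding_idx ON documents
USING vector_index (embedding)
OPTIONS (ann_algorithm = 'hnsw', distance_metric = 'cosine');

-- Query via a SQL extension, e.g. a table-valued similarity function
-- returning the top-k nearest neighbors of a query vector.
SELECT doc_id, score
FROM similarity_search('documents', 'doc_embedding_idx', array(0.12, 0.56, 0.91), 10);

-- Drop the index when no longer needed.
DROP INDEX doc_embedding_idx ON documents;
```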
Hudi, with its extensible multi-modal indexing subsystem, already offers many of the capabilities needed to build this e2e architecture. Hudi already supports column statistics, files, bloom filter, and record-level indexes in the 0.x release line, while 1.x adds functional indexes, secondary indexes, and text indexes, with integrations into Spark SQL.

Specifically, the indexing infrastructure provides the following capabilities:

- Async index creation, allowing indexes to be (re)built without impacting the writers of the table.
- Allowing queries to gracefully degrade, via an index discovery mechanism.
- Concurrency control mechanisms that keep index and data integrity intact, even in the face of failures.
- Hudi table services automatically manage index data.
- Extensibility that allows new index types to be integrated easily.

The overall idea is to add a new vector search index into the Hudi indexing subsystem. We can directly leverage Hudi's strengths around table management, ingest tooling, and diverse writers, while making some interesting extensions to turn Hudi into a direct serving layer.

## Implementation

### Approximate Nearest Neighbor (ANN) for Vector Indexing

To efficiently find the nearest neighbors of a vector, the industry-standard approach is to use Approximate Nearest Neighbor (ANN) algorithms.

While exact search methods (e.g., direct cosine distance computation) achieve 100% accuracy, they are computationally expensive and scale poorly for large datasets. In contrast, ANN indexing structures typically provide around 95% accuracy while offering 10× to 100× faster retrieval times.

Given these trade-offs, ANN-based indexing is the most practical and performant choice for implementing vector search in Apache Hudi. Recent advances in AI and vector search technologies have led to several efficient ANN algorithms. Depending on the use case and environment, the following are recommended options:

1. HNSW (Hierarchical Navigable Small World) – implemented via hnswlib-ann
   - Graph-based approach
   - Excellent recall-speed tradeoff
   - Widely used in production systems
2. jVector
   - Java-native implementation of ANN structures
   - Suitable for JVM-based environments like Apache Hudi
3. FAISS (Facebook AI Similarity Search)
   - Highly optimized C++ library with Python bindings
   - Ideal for large-scale GPU-accelerated similarity search

For the initial implementation, we will look into the HNSW and jVector implementations.

Review Comment:
   I'm wondering if Locality-Sensitive Hashing (LSH) is a good candidate for a hashing index, because of its efficiency; it may also be more friendly for streaming ingestion? https://www.cs.princeton.edu/courses/archive/spring13/cos598C/Gionis.pdf

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
