This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-ai.git


The following commit(s) were added to refs/heads/main by this push:
     new 5ca7a1c  docs(llm): synchronization with official documentation (#273)
5ca7a1c is described below

commit 5ca7a1cc05c703132f63062820a5aa322d782b51
Author: Linyu <94553312+weijing...@users.noreply.github.com>
AuthorDate: Mon Jun 16 14:27:56 2025 +0800

    docs(llm): synchronization with official documentation (#273)
    
    ## Key Updates
    
    Synchronization with official documentation.
    
    ---------
    
    Co-authored-by: imbajin <j...@apache.org>
---
 hugegraph-llm/README.md      | 16 ++++++++--------
 hugegraph-llm/quick_start.md | 10 ----------
 2 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/hugegraph-llm/README.md b/hugegraph-llm/README.md
index ae5e281..21b7172 100644
--- a/hugegraph-llm/README.md
+++ b/hugegraph-llm/README.md
@@ -30,9 +30,9 @@ graph systems and large language models.
    - Ensure you have Docker installed
    - We provide two container images:
     - **Image 1**: [hugegraph/rag](https://hub.docker.com/r/hugegraph/rag/tags)  
-       For building and running the RAG functionality, suitable for quick deployment and development
+       For building and running the RAG functionality; suitable for rapid deployment and direct source-code modification
     - **Image 2**: [hugegraph/rag-bin](https://hub.docker.com/r/hugegraph/rag-bin/tags)  
-       Binary version compiled with Nuitka for more stable and efficient performance in production
+       A binary version compiled with Nuitka (Python compiled via C) for better performance and efficiency
    - Pull the Docker images:
      ```bash
      docker pull hugegraph/rag:latest # Pull Image 1
@@ -40,8 +40,8 @@ graph systems and large language models.
      ```
    - Start the Docker container:
      ```bash
-     docker run -it --name rag -p 8001:8001 hugegraph/rag bash
-     docker run -it --name rag-bin -p 8001:8001 hugegraph/rag-bin bash
+     docker run -it --name rag -v /path/to/.env:/home/work/hugegraph-llm/.env -p 8001:8001 hugegraph/rag bash
+     docker run -it --name rag-bin -v /path/to/.env:/home/work/hugegraph-llm/.env -p 8001:8001 hugegraph/rag-bin bash
      ```
    - Start the Graph RAG demo:
      ```bash
@@ -60,7 +60,7 @@ graph systems and large language models.
     ```bash
    docker run -itd --name=server -p 8080:8080 hugegraph/hugegraph
     ```  
-   You can refer to the detailed documents [doc](https://hugegraph.apache.org/docs/quickstart/hugegraph-server/#31-use-docker-container-convenient-for-testdev) for more guidance.
+   You can refer to the detailed documents [doc](/docs/quickstart/hugegraph/hugegraph-server/#31-use-docker-container-convenient-for-testdev) for more guidance.
 
 2. Configuring the uv environment, Use the official installer to install uv, See the [uv documentation](https://docs.astral.sh/uv/configuration/installer/) for other installation methods   
     ```bash
@@ -80,7 +80,7 @@ graph systems and large language models.
     ```  
    If dependency download fails or too slow due to network issues, it is recommended to modify `hugegraph-llm/pyproject.toml`.
 
-5. Start the gradio interactive demo of **Graph RAG**, you can run with the following command and open http://127.0.0.1:8001 after starting
+5. To start the Gradio interactive demo for **Graph RAG**, run the following command, then open http://127.0.0.1:8001 in your browser.
     ```bash
     python -m hugegraph_llm.demo.rag_demo.app  # same as "uv run xxx"
     ```
@@ -97,7 +97,7 @@ graph systems and large language models.
     ```
    Note: `Litellm` support multi-LLM provider, refer [litellm.ai](https://docs.litellm.ai/docs/providers) to config it
 7. (__Optional__) You could use 
-    [hugegraph-hubble](https://hugegraph.apache.org/docs/quickstart/hugegraph-hubble/#21-use-docker-convenient-for-testdev) 
+    [hugegraph-hubble](/docs/quickstart/toolchain/hugegraph-hubble/#21-use-docker-convenient-for-testdev) 
     to visit the graph data, could run it via [Docker/Docker-Compose](https://hub.docker.com/r/hugegraph/hubble) 
     for guidance. (Hubble is a graph-analysis dashboard that includes data loading/schema management/graph traverser/display).
 8. (__Optional__) offline download NLTK stopwords  
@@ -107,7 +107,7 @@ graph systems and large language models.
 > [!TIP]   
 > You can also refer to our 
 > [quick-start](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-llm/quick_start.md)
 >  doc to understand how to use it & the basic query logic 🚧
 
-## 4 Examples
+## 4. Examples
 
 ### 4.1 Build a knowledge graph in HugeGraph through LLM
 
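A side note on the `-v /path/to/.env:...` mount the README diff above introduces: Docker has a well-known pitfall where bind-mounting a host path that does not yet exist creates it as a *directory*, which then breaks the app's config read inside the container. A minimal sketch of a safe invocation (the `ENV_FILE`/`RUN_CMD` variable names are illustrative, not part of the project; image name and container path are as documented):

```shell
# Pre-create the host .env before bind-mounting it: Docker creates a
# DIRECTORY at a missing host path, which would break the config read.
ENV_FILE="${ENV_FILE:-$(mktemp -d)/.env}"
[ -f "$ENV_FILE" ] || touch "$ENV_FILE"

# Compose the run command from the README (image name and container path as documented):
RUN_CMD="docker run -it --name rag -v ${ENV_FILE}:/home/work/hugegraph-llm/.env -p 8001:8001 hugegraph/rag bash"
echo "$RUN_CMD"
```

The same pattern applies to the `hugegraph/rag-bin` image, swapping the image and container name.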
diff --git a/hugegraph-llm/quick_start.md b/hugegraph-llm/quick_start.md
index dbbfe92..dab247c 100644
--- a/hugegraph-llm/quick_start.md
+++ b/hugegraph-llm/quick_start.md
@@ -17,8 +17,6 @@ Construct a knowledge graph, chunk vector, and graph vid vector from the text.
 
 
![image](https://github.com/user-attachments/assets/f3366d46-2e31-4638-94c4-7214951ef77a)
 
-
-
 ```mermaid
 graph TD;
     A[Raw Text] --> B[Text Segmentation]
@@ -30,11 +28,8 @@ graph TD;
     G --> H[Store graph in Graph Database, \nautomatically vectorize vertices \nand store in Vector Database]
     
     I[Retrieve vertices from Graph Database] --> J[Vectorize vertices and store in Vector Database \nNote: Incremental update]
-
 ```
 
-
-
 ### Four Input Fields:
 
 - **Doc(s):** Input text
@@ -96,8 +91,6 @@ graph TD;
     J --> K[Generate answer]
 ```
 
-
-
 ### Input Fields:
 
 - **Question:** Input the query
@@ -172,11 +165,8 @@ graph TD;
     
     F[Natural Language Query] --> G[Search for the most similar query \nin the Vector Database \n&#40If no Gremlin pairs exist in the Vector Database, \ndefault files will be automatically vectorized&#41 \nand retrieve the corresponding Gremlin]
     G --> H[Add the matched pair to the prompt \nand use LLM to generate the Gremlin \ncorresponding to the Natural Language Query]
-
 ```
 
-
-
 ### Input Fields for the Second Part:
 
 - **Natural Language** **Query**: Input the natural language text to be converted into Gremlin.
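A recurring change in this sync is rewriting absolute `https://hugegraph.apache.org/docs/...` links to site-relative `/docs/...` paths so the page renders correctly when embedded in the website. As a hedged sketch (not part of the repo or its CI), such regressions could be caught with a simple `grep` count; the sample text below is a stand-in, not the real README:

```shell
# Sample markdown standing in for README content: one relative link (good),
# one absolute doc link (the form this commit rewrites).
SAMPLE='[doc](/docs/quickstart/hugegraph/hugegraph-server/)
[hubble](https://hugegraph.apache.org/docs/quickstart/hugegraph-hubble/)'

# Count lines with absolute doc links; a docs check could fail when non-zero.
ABSOLUTE=$(printf '%s\n' "$SAMPLE" | grep -c 'https://hugegraph.apache.org/docs')
echo "absolute doc links: $ABSOLUTE"
```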
