GitHub user imbajin added a comment to the discussion: [Discussion] The 
selection of Agentic/Taskflow frame

> [@Aryankb](https://github.com/Aryankb?rgh-link-date=2025-02-28T07%3A12%3A17.000Z)
> I think most people won't be proficient enough to write their own queries.
> I worked quite a bit with Graph RAG during my internship, and at first even I
> had some trouble writing them. So I would suggest that we ask the user for a
> description of the knowledge they provide; if they don't supply one, we use an
> LLM by default to figure out what the knowledge or text is about, and then have
> an agent write the query for us and use that query.
> [@imbajin](https://github.com/imbajin?rgh-link-date=2025-02-28T07%3A12%3A17.000Z)
> sir, what is your opinion on this?

@chiruu12 @Aryankb 
First, regarding the `text2gql` part: it is an independent matter, and I 
understand it is not strongly related to the selection of the agentic framework 
or the workflow implementation.

Here is a brief description of the actual situation. Our earlier implementation 
and approach was to use model fine-tuning and **user templates** together (see 
the screenshot below; by default, we use the GQL query template to improve the 
text2gql results).

<img width="1610" alt="Image" 
src="https://github.com/user-attachments/assets/fc278898-4cbf-46b4-8d4d-90dcf0e7df6d";
 />
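
To make the template idea a bit more concrete, here is a minimal sketch of how a user-provided GQL/Gremlin template could be injected into the text2gql prompt. This is **not** the actual hugegraph-llm code; the prompt wording, the `call_llm` callable, and the template format are illustrative assumptions.

```python
# Minimal sketch (assumed names, not the hugegraph-llm API) of template-grounded text2gql.

GQL_TEMPLATE_PROMPT = """You are a graph query generator for HugeGraph.
Graph schema:
{schema}

Reference query templates (prefer adapting one of these):
{templates}

User question:
{question}

Return only the Gremlin query, no explanation."""


def text2gql(question: str, schema: str, templates: list[str], call_llm) -> str:
    """Generate a Gremlin query, grounded by user-provided query templates."""
    prompt = GQL_TEMPLATE_PROMPT.format(
        schema=schema,
        templates="\n".join(f"- {t}" for t in templates),
        question=question,
    )
    # `call_llm` is any chat-completion callable: str -> str (assumption).
    return call_llm(prompt).strip()


if __name__ == "__main__":
    demo_templates = [
        "g.V().has('person', 'name', '<NAME>').out('knows').values('name')",
        "g.V().hasLabel('<LABEL>').limit(10)",
    ]
    # Stub LLM so the sketch runs standalone.
    fake_llm = lambda p: "g.V().has('person', 'name', 'Alice').out('knows').values('name')"
    print(text2gql("Who does Alice know?", "person--knows-->person", demo_templates, fake_llm))
```

The idea is simply that the templates act as few-shot grounding for the generator, which complements (and is much cheaper than) fine-tuning alone.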

Fine-tuning a general `7-14B` model can be a significant task, especially when 
it comes to generating the GQL corpus (HugeGraph uses Gremlin queries by default 
and is compatible with most of the Cypher syntax); refer to the 
[wiki](https://github.com/apache/incubator-hugegraph-ai/wiki/HugeGraph-LLM-Roadmap#4-graph-query-core-1)
for more context.
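
On the corpus side, one rough way to bootstrap (question, Gremlin) pairs is to expand slot-filled templates. The templates, slot values, and JSONL layout below are assumptions for illustration only, not the project's actual corpus format.

```python
# Illustrative sketch of bootstrapping a small text-to-GQL fine-tuning corpus
# from slot-filled templates (assumed format, not the HugeGraph corpus format).
import json
import random

TEMPLATES = [
    # (natural-language pattern, Gremlin pattern)
    ("Who does {name} know?",
     "g.V().has('person','name','{name}').out('knows').values('name')"),
    ("List {n} {label} vertices",
     "g.V().hasLabel('{label}').limit({n})"),
]

NAMES = ["Alice", "Bob", "Carol"]
LABELS = ["person", "software"]


def sample_pair() -> dict:
    """Fill one template with random slot values to produce a (question, query) pair."""
    text, gql = random.choice(TEMPLATES)
    slots = {"name": random.choice(NAMES),
             "label": random.choice(LABELS),
             "n": random.randint(1, 10)}
    return {"question": text.format(**slots), "gremlin": gql.format(**slots)}


if __name__ == "__main__":
    # Emit a tiny JSONL corpus that a fine-tuning pipeline could consume.
    with open("text2gql_corpus.jsonl", "w", encoding="utf-8") as f:
        for _ in range(100):
            f.write(json.dumps(sample_pair(), ensure_ascii=False) + "\n")
```

In practice such synthetic pairs would still need review or LLM-based filtering before being used for fine-tuning.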



GitHub link: 
https://github.com/apache/incubator-hugegraph-ai/discussions/203#discussioncomment-12666601
