Croway commented on code in PR #169:
URL: https://github.com/apache/camel-spring-boot-examples/pull/169#discussion_r2565390163


##########
ai-agent/README.adoc:
##########
@@ -0,0 +1,172 @@
+== Spring Boot Example with Spring AI Agent
+
+=== Introduction
+
+This example demonstrates how to use Apache Camel with Spring AI to build AI-powered integration applications. The example showcases three key capabilities:
+
+1. *Chat* - Basic conversational AI using the Spring AI Chat component
+2. *Tools* - Function calling where the AI can invoke Camel routes as tools
+3. *Vector Store* - Semantic search using document embeddings with Qdrant
+
+The example uses Ollama as the AI model provider, with the granite4:3b model for chat and embeddinggemma:300m for embeddings, and Qdrant as the vector database.
+
+==== Design Principles
+
+This example follows Apache Camel Spring AI's design philosophy of separating concerns:
+
+* *Business Logic in Camel Routes* - The route definitions focus purely on integration logic and data flow
+* *Infrastructure Configuration in application.yaml* - All non-functional requirements, such as AI model selection, connection parameters, temperature settings, and external service endpoints, are externalized to configuration files
+
+This separation enables:
+
+* Easy switching between different AI providers (e.g., Ollama, OpenAI, Azure) without changing route code
+* Environment-specific configurations (dev, staging, production) with different models or endpoints
+* Clear distinction between what the integration does (routes) and how it connects (configuration)
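+
+To make the split concrete, an externalized configuration along these lines could be used. This is an illustrative sketch only: the property names below are the conventional Spring AI starter properties for Ollama and Qdrant and are not taken from this PR, so the actual file in the example may differ.

```yaml
# Hypothetical application.yaml fragment (property names assumed from the
# Spring AI Ollama and Qdrant starters; verify against the example's own file).
spring:
  ai:
    ollama:
      base-url: http://localhost:11434   # where the local Ollama server listens
      chat:
        options:
          model: granite4:3b             # chat model
          temperature: 0.7
      embedding:
        options:
          model: embeddinggemma:300m     # embedding model
    vectorstore:
      qdrant:
        host: localhost
        port: 6334
        collection-name: documents
```

+Swapping providers or environments then means changing only this file, never the route definitions.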
+
+=== Prerequisites
+
+==== Camel JBang
+Install Camel JBang to easily run infrastructure services:
+
+    $ curl -Ls https://sh.jbang.dev | bash -s - trust add https://github.com/apache/camel/
+    $ curl -Ls https://sh.jbang.dev | bash -s - app install --fresh --force camel@apache/camel
+
+Or if you already have JBang installed:
+
+    $ jbang app install camel@apache/camel
+
+==== Ollama
+You need to have Ollama installed and running locally. Install Ollama from https://ollama.ai and then pull the required models:
+
+    $ ollama pull granite4:3b
+    $ ollama pull embeddinggemma:300m
+
+The granite4:3b model is used for chat and the embeddinggemma:300m model is used for embeddings.

Review Comment:
   I've added a note about `camel infra run ollama`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
