branch: externals/minuet
commit 126af4dc86f68e037cdcb31b94a1e83ae0a9ebcf
Author: Milan Glacier <d...@milanglacier.com>
Commit: Milan Glacier <d...@milanglacier.com>

    doc: update instructions for Ollama users on choosing FIM models.
---
 README.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b5a1945142..b9c37c967b 100644
--- a/README.md
+++ b/README.md
@@ -171,7 +171,7 @@ The `gemini-flash` and `codestral` models offer high-quality output with free
 and fast processing. For optimal quality, consider using the `deepseek-chat`
 model, which is compatible with both `openai-fim-compatible` and
 `openai-compatible` providers. For local LLM inference, you can deploy either
-`qwen-coder` or `deepseek-coder` through Ollama using the
+`qwen-2.5-coder` or `deepseek-coder-v2` through Ollama using the
 `openai-fim-compatible` provider.
 
 # Prompt
@@ -475,6 +475,13 @@ For example, you can set the `end_point` to
 
 <details>
 
+Additionally, for Ollama users, it is essential to verify whether the model's
+template supports FIM completion. For example,
+[qwen2.5-coder's template](https://ollama.com/library/qwen2.5-coder/blobs/e94a8ecb9327)
+shows that it is a supported model. However, it may come as a surprise to
+some users that `deepseek-coder` does not support the FIM template; you
+should use `deepseek-coder-v2` instead.
+
 The following config is the default.
 
 ```lisp

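For readers who want to try this locally, here is a minimal sketch of pointing the `openai-fim-compatible` provider at an Ollama server running `qwen2.5-coder`. The symbol names used below (`minuet-provider`, `minuet-openai-fim-compatible-options`, `:end-point`, `:api-key`) are assumptions inferred from the provider name and the `end_point` option mentioned in the diff; the default config that follows in the README shows the exact keys.

```lisp
;; Sketch only: the variable and plist key names are assumptions, not
;; copied from the README; cross-check them against the default config.
(setq minuet-provider 'openai-fim-compatible)

;; Point the provider at a local Ollama server (default port 11434).
(plist-put minuet-openai-fim-compatible-options
           :end-point "http://localhost:11434/v1/completions")
(plist-put minuet-openai-fim-compatible-options
           :name "Ollama")

;; Ollama does not check API keys, but the provider expects one to be
;; configured; pointing it at a harmless environment variable such as
;; TERM is assumed to be enough here.
(plist-put minuet-openai-fim-compatible-options
           :api-key "TERM")

;; Use a model whose template supports FIM, e.g. qwen2.5-coder,
;; rather than deepseek-coder (see the note added in the diff above).
(plist-put minuet-openai-fim-compatible-options
           :model "qwen2.5-coder:7b")
```

Pull the model first (for example with `ollama pull qwen2.5-coder:7b`) so the completion endpoint has something to serve.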