xilexu opened a new issue, #59596:
URL: https://github.com/apache/doris/issues/59596

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Version
   
   4.0.2 (backend sends the wrong Content-Type to the large model; see below)
   
   ### What's Wrong?
   
   When AI functions like `AI_FILTER` or `AI_GENERATE` invoke the large model, 
the backend sends the request with `Content-Type: 
application/x-www-form-urlencoded` instead of `application/json`, so the model 
server rejects it with `400 Bad Request`. Has this been resolved?
   
   **Request example:**
   
   ```
   POST /v1/chat/completions HTTP/1.1
   Host: 192.168.1.2:8000
   Accept: */*
   Auth-Token: 49e81a55-bb37-40c9-b0d9-f87edb890494
   Content-Length: 298
   Content-Type: application/x-www-form-urlencoded
   
   {
     "model":"Qwen3-4B-Instruct-2507",
     "messages":[
       {"role":"system","content":"You are a creative text generator. You will 
generate a concise and highly relevant response based on the user's input; aim 
for maximum brevity...cut every non-essential word."},
       {"role":"user","content":"test"}
     ]
   }
   ```
   
   **Response:**
   
   ```
   HTTP/1.1 400 Bad Request
   date: Tue, 06 Jan 2026 08:41:15 GMT
   server: uvicorn
   content-length: 215
   content-type: application/json
   
   {
     "error": {
       "message": "1 validation error:\n  Unsupported Media Type: Only 
'application/json' is allowed [\"Unsupported Media Type: Only 
'application/json' is allowed\"]",
       "type": "Bad Request",
       "param": null,
       "code": 400
     }
   }
   ```
   
   **MySQL client error:**
   
   ```
   SQL Error [1105] [HY000]: errCode = 2, detailMessage = 
(192.168.100.11)[HTTP_ERROR]The requested URL returned error: 400, 
url=http://192.168.1.2:8000/v1/chat/completions
   ```
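
The request body above is valid JSON, but the advertised `Content-Type` tells the server to parse it as an HTML form, so the structure is unrecoverable. A small illustration of the mismatch (a simplified payload, not Doris code):

```python
import json
import urllib.parse

# A simplified version of the JSON body the BE sends.
body = ('{"model": "Qwen3-4B-Instruct-2507", '
        '"messages": [{"role": "user", "content": "test"}]}')

# Parsed as JSON -- what the vLLM server needs -- the structure survives.
payload = json.loads(body)
print(payload["model"])  # Qwen3-4B-Instruct-2507

# Parsed as application/x-www-form-urlencoded -- what the header advertises --
# nothing survives: the text contains no "key=value" pairs, so a form parser
# recovers an empty mapping.
form = urllib.parse.parse_qs(body)
print(form)  # {}
```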
   
   
   
   ### What You Expected?
   
   When AI functions like AI_FILTER or AI_GENERATE call the large model, the 
backend should send the request with `Content-Type: application/json`, and the 
model server should return the model's response. The request should not fail 
with a 400 Bad Request due to an unsupported content type.
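
Sketched with Python's standard library, the expected request shape would look like this; the endpoint and token are the placeholder values from the trace above, and no network call is made here — only the header construction is shown:

```python
import json
import urllib.request

payload = {
    "model": "Qwen3-4B-Instruct-2507",
    "messages": [{"role": "user", "content": "test"}],
}

# The request the BE should emit: a JSON body with an explicit
# Content-Type header, as the OpenAI-compatible endpoint requires.
req = urllib.request.Request(
    "http://192.168.1.2:8000/v1/chat/completions",  # endpoint from the trace
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Auth-Token": "49e81a55-bb37-40c9-b0d9-f87edb890494",  # token from the trace
    },
    method="POST",
)

# urllib normalizes header names to "Xxxx-yyyy" capitalization.
print(req.get_header("Content-type"))  # application/json
```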
   
   ### How to Reproduce?
   
   1. Deploy the model Qwen3-4B-Instruct-2507 using the 
vllm/vllm-openai:v0.10.0 image.
   
   2. Call an AI function such as AI_FILTER or AI_GENERATE, which internally 
sends a request to the model.
   
   3. Observe that the model server responds with a 400 Bad Request error 
indicating an unsupported media type.
   
   ### Anything Else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

