Jordens1 opened a new issue, #12665:
URL: https://github.com/apache/apisix/issues/12665

   ### Current Behavior
   
   Environment
   APISIX Version: 3.9.1
   APISIX Helm Chart: 2.8.1
   Kubernetes Version: 1.28.9-aliyun.1
   Platform: Aliyun
   Deployment: Kubernetes with Helm
   Description
   In SSE (Server-Sent Events) streaming scenarios, even with the proxy-control plugin configured to disable request buffering, requests to the same endpoint behave inconsistently across domains:
   http://xxx.com/assist-mcc/api/chat/messages - normal streaming response
   https://xxx.com/assist-mcc/api/chat/messages - data gets buffered and aggregated before being returned
   Route Configuration
   ```json
   {
     "uri": "/assist-mcc/api/*",
     "name": "xxx-dev",
     "methods": ["GET", "POST", "PUT", "DELETE", "PATCH"],
     "host": "xxx.xx.com",
     "plugins": {
       "gzip": {
         "_meta": {
           "disable": true
         }
       },
       "proxy-cache": {
         "_meta": {
           "disable": false
         },
         "cache_ttl": 1
       },
       "proxy-control": {
         "_meta": {
           "disable": false
         },
         "request_buffering": false
       }
     },
     "upstream_id": "585786019052258025",
     "status": 1
   }
   ```
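For completeness, a route like the one above is normally applied through the APISIX Admin API (`PUT /apisix/admin/routes/{id}`). A minimal sketch follows; the admin endpoint `127.0.0.1:9180`, the route id `1`, and the admin key are placeholders, not values from this report:

```python
import json
import urllib.request

# Route payload from the report: gzip disabled, proxy-cache enabled with a
# 1-second TTL, and proxy-control enabled with request buffering turned off.
route = {
    "uri": "/assist-mcc/api/*",
    "name": "xxx-dev",
    "methods": ["GET", "POST", "PUT", "DELETE", "PATCH"],
    "host": "xxx.xx.com",
    "plugins": {
        "gzip": {"_meta": {"disable": True}},
        "proxy-cache": {"_meta": {"disable": False}, "cache_ttl": 1},
        "proxy-control": {"_meta": {"disable": False}, "request_buffering": False},
    },
    "upstream_id": "585786019052258025",
    "status": 1,
}

def put_route_request(route_id, payload, admin_key="PLACEHOLDER_ADMIN_KEY"):
    """Build (but do not send) a PUT request for the Admin API."""
    return urllib.request.Request(
        f"http://127.0.0.1:9180/apisix/admin/routes/{route_id}",
        data=json.dumps(payload).encode(),
        headers={"X-API-KEY": admin_key, "Content-Type": "application/json"},
        method="PUT",
    )

req = put_route_request("1", route)
print(req.method, req.full_url)
# urllib.request.urlopen(req)  # uncomment only against a live Admin API
```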
   APISIX Configuration
   ```yaml
   nginx_config:
     error_log: "/dev/stderr"
     error_log_level: "warn"
     worker_processes: "auto"
     enable_cpu_affinity: true
     worker_rlimit_nofile: 20480
     event:
       worker_connections: 10620
     http:
       proxy_request_buffering: "off"
       client_max_body_size: 2048M
       client_body_buffer_size: 128k
       enable_access_log: true
       access_log: "/dev/stdout"
       access_log_format: '$remote_addr - $remote_user [$time_local] $http_host \"$request\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\"'
       access_log_format_escape: default
       keepalive_timeout: "5s"
       client_header_timeout: 60s
       client_body_timeout: 60s
       send_timeout: 10s
       underscores_in_headers: "on"
       real_ip_header: "X-Real-IP"
       real_ip_from:
         - 127.0.0.1
         - 'unix:'
       upstream:
         keepalive: 320
         keepalive_requests: 1000
         keepalive_timeout: 0s
   ```
   
   <img width="1071" height="872" alt="Image" src="https://github.com/user-attachments/assets/d9f84084-2614-482a-8f03-6d17e74d1648" />
   
   <img width="1037" height="1055" alt="Image" src="https://github.com/user-attachments/assets/8123a169-a7ee-46ea-b8fc-a953c3d20897" />
   
   Note: After adding the following nginx configuration in the location block, HTTPS streaming works normally:
   ```nginx
   proxy_buffering off;
   gzip off;  # Disable gzip compression to prevent data aggregation
   ```
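When APISIX is managed via Helm, the same directives can usually be injected without hand-editing the generated nginx.conf by using the snippet hooks under `nginx_config`. A sketch, assuming the `http_server_configuration_snippet` key from APISIX's config-default.yaml is available in this chart version (verify against your chart before relying on it):

```yaml
nginx_config:
  # Injected verbatim into the generated server {} blocks.
  http_server_configuration_snippet: |
    proxy_buffering off;
    gzip off;  # prevent gzip from aggregating SSE chunks
```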
   
   ### Expected Behavior
   
   SSE streaming responses should work consistently across all domains without 
data buffering or aggregation delays.
   
   
   ### Error Logs
   
   _No response_
   
   ### Steps to Reproduce
   
   1. Configure the route and plugins as shown above
   2. Access http://xxx.com/assist-mcc/api/chat/messages - Normal streaming
   3. Access https://xxx.com/assist-mcc/api/chat/messages - Buffering occurs
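Buffered vs. streamed behavior can be distinguished from the client side by timing when each SSE event arrives: a buffered response delivers all events in one burst, a streamed one delivers them with the upstream's real gaps. A self-contained local sketch (a stand-in SSE server replaces the real upstream, so the endpoint and timings are illustrative only):

```python
import http.server
import threading
import time
import urllib.request

class SSEHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in upstream: emits three SSE events 0.2s apart."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        for i in range(3):
            self.wfile.write(f"data: event-{i}\n\n".encode())
            self.wfile.flush()  # push each event immediately
            time.sleep(0.2)
    def log_message(self, *args):
        pass  # keep test output quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), SSEHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Record the arrival time of each "data:" line relative to request start.
arrivals = []
start = time.monotonic()
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    for raw in resp:
        if raw.decode().strip().startswith("data:"):
            arrivals.append(time.monotonic() - start)

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print("events:", len(arrivals), "max inter-event gap: %.2fs" % max(gaps))
# A near-zero max gap would indicate the response was buffered upstream.
server.shutdown()
```

Pointing the same loop (e.g. via `curl -N` or a real HTTP client) at the http:// and https:// endpoints should show large gaps on the former and a single burst on the latter under the reported misbehavior.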
   
   ### Environment
   
   - APISIX version (run `apisix version`): 3.9.1
   - Operating system (run `uname -a`): 5.10.134-16.1.al8.x86_64
   - OpenResty / Nginx version (run `openresty -V` or `nginx -V`): openresty/1.25.3.1
   - etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`): 3.5.10-debian-11-r2
   - APISIX Dashboard version, if relevant: 3.0.0-alpine
   - Plugin runner version, for issues related to plugin runners:
   - LuaRocks version, for installation issues (run `luarocks --version`):
   

