Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package aws-c-s3 for openSUSE:Factory 
checked in at 2025-12-01 11:14:46
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/aws-c-s3 (Old)
 and      /work/SRC/openSUSE:Factory/.aws-c-s3.new.14147 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "aws-c-s3"

Mon Dec  1 11:14:46 2025 rev:33 rq:1320660 version:0.11.2

Changes:
--------
--- /work/SRC/openSUSE:Factory/aws-c-s3/aws-c-s3.changes        2025-11-18 
15:35:17.143917935 +0100
+++ /work/SRC/openSUSE:Factory/.aws-c-s3.new.14147/aws-c-s3.changes     
2025-12-01 11:15:32.684153524 +0100
@@ -1,0 +2,11 @@
+Thu Nov 27 11:10:02 UTC 2025 - John Paul Adrian Glaubitz 
<[email protected]>
+
+- Update to version 0.11.2
+  * Fix the read window update from the same thread by @TingDaoK in (#601)
+- from version 0.11.1
+  * Deliver exact bytes for read window by @TingDaoK in (#600)
+- from version 0.11.0
+  * Fix the deadlock for pause/cancel by @TingDaoK in (#596)
+  * Accept memory limit setting from environment variable by @TingDaoK in 
(#598)
+
+-------------------------------------------------------------------

Old:
----
  v0.10.1.tar.gz

New:
----
  v0.11.2.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ aws-c-s3.spec ++++++
--- /var/tmp/diff_new_pack.2FkGAq/_old  2025-12-01 11:15:33.316180278 +0100
+++ /var/tmp/diff_new_pack.2FkGAq/_new  2025-12-01 11:15:33.316180278 +0100
@@ -19,7 +19,7 @@
 %define library_version 1.0.0
 %define library_soversion 0unstable
 Name:           aws-c-s3
-Version:        0.10.1
+Version:        0.11.2
 Release:        0
 Summary:        C99 library implementation for communicating with the S3 service
 License:        Apache-2.0

++++++ v0.10.1.tar.gz -> v0.11.2.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/aws-c-s3-0.10.1/README.md 
new/aws-c-s3-0.11.2/README.md
--- old/aws-c-s3-0.10.1/README.md       2025-11-08 00:06:18.000000000 +0100
+++ new/aws-c-s3-0.11.2/README.md       2025-11-25 02:17:00.000000000 +0100
@@ -3,17 +3,49 @@
 The AWS-C-S3 library is an asynchronous AWS S3 client focused on maximizing 
throughput and network utilization.
 
 ### Key features:
-- **Automatic Request Splitting**: Improves throughput by automatically 
splitting the request into part-sized chunks and performing parallel 
uploads/downloads of these chunks over multiple connections. There's a cap on 
the throughput of single S3 connection, the only way to go faster is multiple 
parallel connections.
-- **Automatic Retries**: Increases resilience by retrying individual failed 
chunks of a file transfer, eliminating the need to restart transfers from 
scratch after an intermittent error.
-- **DNS Load Balancing**: DNS resolver continuously harvests Amazon S3 IP 
addresses. When load is spread across the S3 fleet, overall throughput more 
reliable than if all connections are going to a single IP.
-- **Advanced Network Management**: The client incorporates automatic request 
parallelization, effective timeouts and retries, and efficient connection 
reuse. This approach helps to maximize throughput and network utilization, and 
to avoid network overloads.
-- **Thread Pools and Async I/O**: Avoids bottlenecks associated with 
single-thread processing.
-- **Parallel Reads**: When uploading a large file from disk, reads from 
multiple parts of the file in parallel. This is faster than reading the file 
sequentially from beginning to end.
+
+* **Automatic Request Splitting**: Improves throughput by automatically 
splitting the request into part-sized chunks and performing parallel 
uploads/downloads of these chunks over multiple connections. There's a cap on 
the throughput of a single S3 connection; the only way to go faster is to use 
multiple parallel connections.
+* **Automatic Retries**: Increases resilience by retrying individual failed 
chunks of a file transfer, eliminating the need to restart transfers from 
scratch after an intermittent error.
+* **DNS Load Balancing**: DNS resolver continuously harvests Amazon S3 IP 
addresses. When load is spread across the S3 fleet, overall throughput is more 
reliable than if all connections are going to a single IP.
+* **Advanced Network Management**: The client incorporates automatic request 
parallelization, effective timeouts and retries, and efficient connection 
reuse. This approach helps to maximize throughput and network utilization, and 
to avoid network overloads.
+* **Thread Pools and Async I/O**: Avoids bottlenecks associated with 
single-thread processing.
+* **Parallel Reads**: When uploading a large file from disk, reads from 
multiple parts of the file in parallel. This is faster than reading the file 
sequentially from beginning to end.
 
 ### Documentation
 
-- [GetObject](docs/GetObject.md): A visual representation of the GetObject 
request flow.
-- [Memory Aware Requests Execution](docs/memory_aware_request_execution.md): 
An in-depth guide on optimizing memory usage during request executions.
+* [GetObject](docs/GetObject.md): A visual representation of the GetObject 
request flow.
+* [Memory Aware Requests Execution](docs/memory_aware_request_execution.md): 
An in-depth guide on optimizing memory usage during request executions.
+
+### Configuration
+
+#### Memory Limit
+
+The S3 client uses a buffer pool to manage memory for concurrent transfers. 
You can control the memory limit in two ways:
+
+1. **Via Configuration** (Recommended): Set `memory_limit_in_bytes` in 
`aws_s3_client_config`:
+
+   ```c
+   struct aws_s3_client_config config = {
+       .memory_limit_in_bytes = GB_TO_BYTES(4), // 4 GiB limit
+       // ... other configuration
+   };
+   ```
+
+2. **Via Environment Variable**: Set the `AWS_CRT_S3_MEMORY_LIMIT_IN_GIB` 
environment variable:
+
+   ```bash
+   export AWS_CRT_S3_MEMORY_LIMIT_IN_GIB=4  # 4 GiB limit
+   ```
+
+**Priority**: The configuration value takes precedence over the environment 
variable. If `memory_limit_in_bytes` is set to a non-zero value in the config, 
the environment variable is ignored.
+
+**Default Behavior**: If neither is set (config is 0 and environment variable 
is not set), the client sets a default memory limit based on the target 
throughput.
+
+**Notes**:
+* The limit applies per client. If multiple clients are created, the limit 
applies to each separately.
+* The environment variable value must be a valid positive integer representing 
gigabytes (GiB).
+* The value is converted from GiB to bytes internally (1 GiB = 1024³ bytes).
+* Invalid values or overflow conditions will cause client creation to fail 
with `AWS_ERROR_INVALID_ARGUMENT`.
 
 ## License
 
@@ -86,14 +118,19 @@
 After installing all the dependencies, and building aws-c-s3, you can run the 
sample directly from the s3 build directory.
 
 To download:
+
 ```
 aws-c-s3/build/samples/s3/s3 cp s3://<bucket-name>/<object-name> 
<download-path> --region <region>
 ```
+
 To upload:
+
 ```
 aws-c-s3/build/samples/s3/s3 cp <upload-path> s3://<bucket-name>/<object-name> 
--region <region>
 ```
+
 To list objects:
+
 ```
 aws-c-s3/build/samples/s3/s3 ls s3://<bucket-name> --region <region>
 ```
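The memory-limit precedence described in the new Configuration section above (config value wins; otherwise the environment variable is parsed as GiB and converted with overflow checks) can be sketched in standalone C. This is an illustrative stand-in, not the library code: `resolve_memory_limit`, `strtoull`, and `__builtin_mul_overflow` are hypothetical substitutes for the aws-c-common helpers (`aws_get_env_nonempty`, `aws_byte_cursor_utf8_parse_u64`, `aws_mul_u64_checked`).

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for the precedence logic in aws_s3_client_new():
 * a non-zero config value wins; otherwise the env var value is parsed as
 * GiB and converted to bytes with overflow checks.
 * Returns 0 on success, -1 on parse error or overflow. */
static int resolve_memory_limit(uint64_t config_bytes, const char *env_gib, uint64_t *out_bytes) {
    if (config_bytes != 0) {
        *out_bytes = config_bytes; /* config takes precedence; env var ignored */
        return 0;
    }
    if (env_gib == NULL || *env_gib == '\0') {
        *out_bytes = 0; /* neither set: caller falls back to throughput-based default */
        return 0;
    }
    char *end = NULL;
    errno = 0;
    unsigned long long gib = strtoull(env_gib, &end, 10);
    if (errno != 0 || end == env_gib || *end != '\0') {
        return -1; /* only plain integers are accepted */
    }
    /* 1 GiB = 1024^3 bytes; three checked multiplications, as in the patch */
    uint64_t bytes = (uint64_t)gib;
    for (int i = 0; i < 3; ++i) {
        if (__builtin_mul_overflow(bytes, (uint64_t)1024, &bytes)) {
            return -1; /* overflow: client creation would fail */
        }
    }
    *out_bytes = bytes;
    return 0;
}
```

In the real client both failure paths raise `AWS_ERROR_INVALID_ARGUMENT` rather than returning an error code.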
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/aws-c-s3-0.10.1/include/aws/s3/private/s3_meta_request_impl.h 
new/aws-c-s3-0.11.2/include/aws/s3/private/s3_meta_request_impl.h
--- old/aws-c-s3-0.10.1/include/aws/s3/private/s3_meta_request_impl.h   
2025-11-08 00:06:18.000000000 +0100
+++ new/aws-c-s3-0.11.2/include/aws/s3/private/s3_meta_request_impl.h   
2025-11-25 02:17:00.000000000 +0100
@@ -66,6 +66,7 @@
         /* data for AWS_S3_META_REQUEST_EVENT_RESPONSE_BODY */
         struct {
             struct aws_s3_request *completed_request;
+            size_t bytes_delivered;
         } response_body;
 
         /* data for AWS_S3_META_REQUEST_EVENT_PROGRESS */
@@ -195,6 +196,9 @@
     /* If the buffer pool optimized for the specific size or not. */
     bool buffer_pool_optimized;
 
+    /* Track the number of requests being prepared for this meta request. */
+    struct aws_atomic_var num_request_being_prepared;
+
     struct {
         struct aws_mutex lock;
 
@@ -221,12 +225,15 @@
 
         /* Task for delivering events on the meta-request's io_event_loop 
thread.
          * We do this to ensure a meta-request's callbacks are fired 
sequentially and non-overlapping.
-         * If `event_delivery_array` has items in it, then this task is 
scheduled.
+         * If `event_delivery_task_scheduled` is true, then this task is 
scheduled.
          * If `event_delivery_active` is true, then this task is actively 
running.
          * Delivery is not 100% complete until `event_delivery_array` is empty 
AND `event_delivery_active` is false
          * (use aws_s3_meta_request_are_events_out_for_delivery_synced()  to 
check) */
         struct aws_task event_delivery_task;
 
+        /* Whether or not event delivery is currently scheduled. */
+        uint32_t event_delivery_task_scheduled : 1;
+
         /* Array of `struct aws_s3_meta_request_event` to deliver when the 
`event_delivery_task` runs. */
         struct aws_array_list event_delivery_array;
 
@@ -278,9 +285,6 @@
         /* True if this meta request is currently in the client's list. */
         bool scheduled;
 
-        /* Track the number of requests being prepared for this meta request. 
*/
-        size_t num_request_being_prepared;
-
     } client_process_work_threaded_data;
 
     /* Anything in this structure should only ever be accessed by the 
meta-request from its io_event_loop thread. */
@@ -292,6 +296,10 @@
 
         /* The range start for the next response body delivery */
         uint64_t next_deliver_range_start;
+
+        /* Total number of bytes that have been attempted to be delivered. 
(Will equal the sum of succeeded and
+         * failed.)*/
+        uint64_t num_bytes_delivery_completed;
     } io_threaded_data;
 
     const bool should_compute_content_md5;
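The header hunk above moves `num_request_being_prepared` out of the lock-protected `client_process_work_threaded_data` and into an `aws_atomic_var`, so it can be updated and read without the client lock. A minimal C11 sketch of that pattern (illustrative only; `meta_request_counters` and the function names are hypothetical, not aws-c-s3 API):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical model of the counter: bumped when a request enters
 * preparation, dropped when preparation finishes, and read lock-free to
 * cap how many requests one meta request has in preparation at once. */
struct meta_request_counters {
    atomic_size_t num_request_being_prepared;
};

static void prepare_begin(struct meta_request_counters *c) {
    atomic_fetch_add(&c->num_request_being_prepared, 1);
}

static void prepare_end(struct meta_request_counters *c) {
    atomic_fetch_sub(&c->num_request_being_prepared, 1);
}

/* A slightly stale read is acceptable here, as the patch comments note:
 * preparing one request too many at the meta-request level is harmless. */
static int can_prepare_more(struct meta_request_counters *c, size_t max_active) {
    return atomic_load(&c->num_request_being_prepared) < max_active;
}
```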
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/aws-c-s3-0.10.1/include/aws/s3/s3_client.h 
new/aws-c-s3-0.11.2/include/aws/s3/s3_client.h
--- old/aws-c-s3-0.10.1/include/aws/s3/s3_client.h      2025-11-08 
00:06:18.000000000 +0100
+++ new/aws-c-s3-0.11.2/include/aws/s3/s3_client.h      2025-11-25 
02:17:00.000000000 +0100
@@ -606,7 +606,8 @@
      *
      * WARNING: This feature is experimental.
      * Currently, backpressure is only applied to GetObject requests which are 
split into multiple parts,
-     * and you may still receive some data after the window reaches 0.
+     * - If you set body_callback, no more data will be delivered once the 
window reaches 0.
+     * - If you set body_callback_ex, you may still receive some data after 
the window reaches 0. TODO: fix it.
      */
     bool enable_read_backpressure;
 
@@ -1220,7 +1221,8 @@
  *
  * WARNING: This feature is experimental.
  * Currently, backpressure is only applied to GetObject requests which are 
split into multiple parts,
- * and you may still receive some data after the window reaches 0.
+ * - If you set body_callback, no more data will be delivered once the window 
reaches 0.
+ * - If you set body_callback_ex, you may still receive some data after the 
window reaches 0. TODO: fix it.
  */
 AWS_S3_API
 void aws_s3_meta_request_increment_read_window(struct aws_s3_meta_request 
*meta_request, uint64_t bytes);
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/aws-c-s3-0.10.1/source/s3_client.c 
new/aws-c-s3-0.11.2/source/s3_client.c
--- old/aws-c-s3-0.10.1/source/s3_client.c      2025-11-08 00:06:18.000000000 
+0100
+++ new/aws-c-s3-0.11.2/source/s3_client.c      2025-11-25 02:17:00.000000000 
+0100
@@ -21,8 +21,10 @@
 #include <aws/common/atomics.h>
 #include <aws/common/clock.h>
 #include <aws/common/device_random.h>
+#include <aws/common/environment.h>
 #include <aws/common/file.h>
 #include <aws/common/json.h>
+#include <aws/common/math.h>
 #include <aws/common/priority_queue.h>
 #include <aws/common/string.h>
 #include <aws/common/system_info.h>
@@ -90,6 +92,11 @@
  * count. S3 closes the idle connections in ~5 seconds. */
 static const uint32_t s_endpoints_cleanup_time_offset_in_s = 5;
 
+/**
+ * The environment variable name for memory limit control.
+ */
+static const char *s_memory_limit_env_var = "AWS_CRT_S3_MEMORY_LIMIT_IN_GIB";
+
 /* Called when ref count is 0. */
 static void s_s3_client_start_destroy(void *user_data);
 
@@ -321,7 +328,42 @@
         aws_raise_error(AWS_ERROR_INVALID_ARGUMENT);
         return NULL;
     }
+    uint64_t mem_limit_configured = 0;
+    if (client_config->memory_limit_in_bytes == 0) {
+        /* Try to read from the environment variable for memory limit */
+        struct aws_string *memory_limit_from_env_var = 
aws_get_env_nonempty(allocator, s_memory_limit_env_var);
+        if (memory_limit_from_env_var) {
+            uint64_t mem_limit_in_gib = 0;
+            if (aws_byte_cursor_utf8_parse_u64(
+                    aws_byte_cursor_from_string(memory_limit_from_env_var), 
&mem_limit_in_gib)) {
+                aws_string_destroy(memory_limit_from_env_var);
+                AWS_LOGF_ERROR(
+                    AWS_LS_S3_CLIENT,
+                    "Cannot create client from client_config; environment 
variable: %s, is not set correctly, only "
+                    "integers supported.",
+                    s_memory_limit_env_var);
+                aws_raise_error(AWS_ERROR_INVALID_ARGUMENT);
+                return NULL;
+            }
+            aws_string_destroy(memory_limit_from_env_var);
+            uint64_t mem_limit_in_bytes = 0;
+            /* Convert mem_limit_in_gib to bytes */
+            if (aws_mul_u64_checked(mem_limit_in_gib, 1024, 
&mem_limit_in_bytes) ||
+                aws_mul_u64_checked(mem_limit_in_bytes, 1024, 
&mem_limit_in_bytes) ||
+                aws_mul_u64_checked(mem_limit_in_bytes, 1024, 
&mem_limit_in_bytes)) {
+                AWS_LOGF_ERROR(
+                    AWS_LS_S3_CLIENT,
+                    "Cannot create client from client_config; environment 
variable: %s, overflow detected.",
+                    s_memory_limit_env_var);
+                aws_raise_error(AWS_ERROR_INVALID_ARGUMENT);
+                return NULL;
+            }
 
+            mem_limit_configured = mem_limit_in_bytes;
+        }
+    } else {
+        mem_limit_configured = client_config->memory_limit_in_bytes;
+    }
 #ifdef BYO_CRYPTO
     if (client_config->tls_mode == AWS_MR_TLS_ENABLED && 
client_config->tls_connection_options == NULL) {
         AWS_LOGF_ERROR(
@@ -338,7 +380,7 @@
     client->allocator = allocator;
 
     size_t mem_limit = 0;
-    if (client_config->memory_limit_in_bytes == 0) {
+    if (mem_limit_configured == 0) {
 #if SIZE_BITS == 32
         if (client_config->throughput_target_gbps > 25.0) {
             mem_limit = GB_TO_BYTES(2);
@@ -360,10 +402,10 @@
 #endif
     } else {
         // cap memory limit to SIZE_MAX
-        if (client_config->memory_limit_in_bytes > SIZE_MAX) {
+        if (mem_limit_configured > SIZE_MAX) {
             mem_limit = SIZE_MAX;
         } else {
-            mem_limit = (size_t)client_config->memory_limit_in_bytes;
+            mem_limit = (size_t)mem_limit_configured;
         }
     }
 
@@ -1906,8 +1948,9 @@
         return false;
     }
 
-    if 
(meta_request->client_process_work_threaded_data.num_request_being_prepared >=
-        aws_s3_client_get_max_active_connections(client, meta_request)) {
+    /* This is not 100% thread safe, but preparing a bit more at the meta 
request level won't actually hurt. */
+    size_t specific_request_being_prepared = 
aws_atomic_load_int(&meta_request->num_request_being_prepared);
+    if (specific_request_being_prepared >= 
aws_s3_client_get_max_active_connections(client, meta_request)) {
         /* Don't prepare more than it's allowed for the meta request */
         return false;
     }
@@ -2125,7 +2168,7 @@
                     request->tracked_by_client = true;
 
                     ++client->threaded_data.num_requests_being_prepared;
-                    
++meta_request->client_process_work_threaded_data.num_request_being_prepared;
+                    
aws_atomic_fetch_add(&meta_request->num_request_being_prepared, 1);
 
                     num_requests_in_flight =
                         
(uint32_t)aws_atomic_fetch_add(&client->stats.num_requests_in_flight, 1) + 1;
@@ -2192,6 +2235,7 @@
         request = aws_s3_request_release(request);
     }
 
+    aws_atomic_fetch_sub(&meta_request->num_request_being_prepared, 1);
     /* BEGIN CRITICAL SECTION */
     {
         aws_s3_client_lock_synced_data(client);
@@ -2223,8 +2267,6 @@
         struct aws_s3_request *request = 
aws_s3_client_dequeue_request_threaded(client);
         struct aws_s3_meta_request *meta_request = request->meta_request;
         const uint32_t max_active_connections = 
aws_s3_client_get_max_active_connections(client, meta_request);
-        /* As the request removed from the queue. Decrement the preparing 
track */
-        
--meta_request->client_process_work_threaded_data.num_request_being_prepared;
         if (request->is_noop) {
             /* If request is no-op, finishes and cleans up the request */
             s_s3_client_meta_request_finished_request(client, meta_request, 
request, AWS_ERROR_SUCCESS);
@@ -2250,8 +2292,6 @@
         } else {
             /* Push the request into the left-over list to be used in a future 
call of this function. */
             aws_linked_list_push_back(&left_over_requests, &request->node);
-            /* Increment the count as we put it back to the queue. */
-            
++meta_request->client_process_work_threaded_data.num_request_being_prepared;
         }
         client_max_active_connections = 
aws_s3_client_get_max_active_connections(client, NULL);
         num_requests_network_io = 
s_s3_client_get_num_requests_network_io(client, AWS_S3_META_REQUEST_TYPE_MAX);
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/aws-c-s3-0.10.1/source/s3_meta_request.c 
new/aws-c-s3-0.11.2/source/s3_meta_request.c
--- old/aws-c-s3-0.10.1/source/s3_meta_request.c        2025-11-08 
00:06:18.000000000 +0100
+++ new/aws-c-s3-0.11.2/source/s3_meta_request.c        2025-11-25 
02:17:00.000000000 +0100
@@ -200,6 +200,7 @@
     /* Set up reference count. */
     aws_ref_count_init(&meta_request->ref_count, meta_request, 
s_s3_meta_request_destroy);
     aws_atomic_init_int(&meta_request->num_requests_network, 0);
+    aws_atomic_init_int(&meta_request->num_request_being_prepared, 0);
     
aws_linked_list_init(&meta_request->synced_data.cancellable_http_streams_list);
     aws_linked_list_init(&meta_request->synced_data.pending_buffer_futures);
 
@@ -411,6 +412,8 @@
     /* Response will never approach UINT64_MAX, so do a saturating sum instead 
of worrying about overflow */
     meta_request->synced_data.read_window_running_total =
         aws_add_u64_saturating(bytes, 
meta_request->synced_data.read_window_running_total);
+    /* Try to schedule the delivery task again. */
+    aws_s3_meta_request_add_event_for_delivery_synced(meta_request, NULL);
 
     aws_s3_meta_request_unlock_synced_data(meta_request);
     /* END CRITICAL SECTION */
@@ -1881,12 +1884,13 @@
     const struct aws_s3_meta_request_event *event) {
 
     ASSERT_SYNCED_DATA_LOCK_HELD(meta_request);
+    if (event) {
+        
aws_array_list_push_back(&meta_request->synced_data.event_delivery_array, 
event);
+    }
 
-    aws_array_list_push_back(&meta_request->synced_data.event_delivery_array, 
event);
-
-    /* If the array was empty before, schedule task to deliver all events in 
the array.
-     * If the array already had things in it, then the task is already 
scheduled and will run soon. */
-    if (aws_array_list_length(&meta_request->synced_data.event_delivery_array) 
== 1) {
+    /* If the event delivery task is not scheduled before, and there are more 
to be delivered. */
+    if (!meta_request->synced_data.event_delivery_task_scheduled &&
+        aws_array_list_length(&meta_request->synced_data.event_delivery_array) 
> 0) {
         aws_s3_meta_request_acquire(meta_request);
 
         aws_task_init(
@@ -1895,6 +1899,7 @@
             meta_request,
             "s3_meta_request_event_delivery");
         aws_event_loop_schedule_task_now(meta_request->io_event_loop, 
&meta_request->synced_data.event_delivery_task);
+        meta_request->synced_data.event_delivery_task_scheduled = true;
     }
 }
 
@@ -1975,9 +1980,14 @@
     struct aws_array_list *event_delivery_array = 
&meta_request->io_threaded_data.event_delivery_array;
     AWS_FATAL_ASSERT(aws_array_list_length(event_delivery_array) == 0);
 
+    struct aws_array_list incomplete_deliver_events_array;
+    aws_array_list_init_dynamic(
+        &incomplete_deliver_events_array, meta_request->allocator, 1, 
sizeof(struct aws_s3_meta_request_event));
+
     /* If an error occurs, don't fire callbacks anymore. */
     int error_code = AWS_ERROR_SUCCESS;
     uint32_t num_parts_delivered = 0;
+    uint64_t bytes_allowed_to_deliver = 0;
 
     /* BEGIN CRITICAL SECTION */
     {
@@ -1985,14 +1995,21 @@
 
         aws_array_list_swap_contents(event_delivery_array, 
&meta_request->synced_data.event_delivery_array);
         meta_request->synced_data.event_delivery_active = true;
+        meta_request->synced_data.event_delivery_task_scheduled = false;
 
         if (aws_s3_meta_request_has_finish_result_synced(meta_request)) {
             error_code = AWS_ERROR_S3_CANCELED;
         }
 
+        bytes_allowed_to_deliver = 
meta_request->synced_data.read_window_running_total -
+                                   
meta_request->io_threaded_data.num_bytes_delivery_completed;
+
         aws_s3_meta_request_unlock_synced_data(meta_request);
     }
     /* END CRITICAL SECTION */
+    if (bytes_allowed_to_deliver > SIZE_MAX) {
+        bytes_allowed_to_deliver = SIZE_MAX;
+    }
 
     /* Deliver all events */
     for (size_t event_i = 0; event_i < 
aws_array_list_length(event_delivery_array); ++event_i) {
@@ -2002,15 +2019,44 @@
 
             case AWS_S3_META_REQUEST_EVENT_RESPONSE_BODY: {
                 struct aws_s3_request *request = 
event.u.response_body.completed_request;
+                size_t bytes_delivered_for_request = 
event.u.response_body.bytes_delivered;
                 AWS_ASSERT(meta_request == request->meta_request);
+                bool delivery_incomplete = false;
                 struct aws_byte_cursor response_body = 
aws_byte_cursor_from_buf(&request->send_data.response_body);
+                if (response_body.len == 0) {
+                    /* Nothing to deliver; finish this delivery event and 
break out. */
+                    
aws_atomic_fetch_sub(&client->stats.num_requests_streaming_response, 1);
+
+                    ++num_parts_delivered;
+                    request->send_data.metrics =
+                        
s_s3_request_finish_up_and_release_metrics(request->send_data.metrics, 
meta_request);
+
+                    aws_s3_request_release(request);
+                    break;
+                }
+
+                if (meta_request->body_callback && 
meta_request->client->enable_read_backpressure) {
+                    /* If the customer set the body callback, make sure we do 
not deliver more than asked via the
+                     * callback. */
+                    aws_byte_cursor_advance(&response_body, 
bytes_delivered_for_request);
+                    if (response_body.len > (size_t)bytes_allowed_to_deliver) {
+                        response_body.len = (size_t)bytes_allowed_to_deliver;
+                        delivery_incomplete = true;
+                    }
+                    /* Update the remaining bytes we are allowed to deliver. */
+                    bytes_allowed_to_deliver -= response_body.len;
+                } else {
+                    /* We should not have any incomplete delivery in this 
case. */
+                    AWS_FATAL_ASSERT(bytes_delivered_for_request == 0);
+                }
+                uint64_t delivery_range_start = request->part_range_start + 
bytes_delivered_for_request;
 
                 AWS_ASSERT(request->part_number >= 1);
                 if (request->part_number == 1) {
-                    meta_request->io_threaded_data.next_deliver_range_start = 
request->part_range_start;
+                    meta_request->io_threaded_data.next_deliver_range_start = 
delivery_range_start;
                 }
                 /* Make sure the response body is delivered in the sequential 
order */
-                AWS_FATAL_ASSERT(request->part_range_start == 
meta_request->io_threaded_data.next_deliver_range_start);
+                AWS_FATAL_ASSERT(delivery_range_start == 
meta_request->io_threaded_data.next_deliver_range_start);
                 meta_request->io_threaded_data.next_deliver_range_start += 
response_body.len;
 
                 if (error_code == AWS_ERROR_SUCCESS && response_body.len > 0) {
@@ -2054,7 +2100,7 @@
                                 meta_request,
                                 &response_body,
                                 (struct 
aws_s3_meta_request_receive_body_extra_info){
-                                    .range_start = request->part_range_start, 
.ticket = request->ticket},
+                                    .range_start = delivery_range_start, 
.ticket = request->ticket},
                                 meta_request->user_data)) {
                             error_code = aws_last_error_or_unknown();
                             AWS_LOGF_ERROR(
@@ -2066,7 +2112,7 @@
                         } else if (
                             meta_request->body_callback != NULL &&
                             meta_request->body_callback(
-                                meta_request, &response_body, 
request->part_range_start, meta_request->user_data)) {
+                                meta_request, &response_body, 
delivery_range_start, meta_request->user_data)) {
 
                             error_code = aws_last_error_or_unknown();
                             AWS_LOGF_ERROR(
@@ -2086,13 +2132,25 @@
                         }
                     }
                 }
-                
aws_atomic_fetch_sub(&client->stats.num_requests_streaming_response, 1);
+                event.u.response_body.bytes_delivered += response_body.len;
+                meta_request->io_threaded_data.num_bytes_delivery_completed += 
response_body.len;
 
-                ++num_parts_delivered;
-                request->send_data.metrics =
-                    
s_s3_request_finish_up_and_release_metrics(request->send_data.metrics, 
meta_request);
+                if (!delivery_incomplete || error_code != AWS_ERROR_SUCCESS) {
+                    /* We completed the delivery for this request. */
+                    
aws_atomic_fetch_sub(&client->stats.num_requests_streaming_response, 1);
+
+                    ++num_parts_delivered;
+                    request->send_data.metrics =
+                        
s_s3_request_finish_up_and_release_metrics(request->send_data.metrics, 
meta_request);
 
-                aws_s3_request_release(request);
+                    aws_s3_request_release(request);
+                } else {
+                    /* We didn't complete the delivery for this request and no 
error happened */
+                    /* Push to the front of the queue and wait for the next 
tick to deliver the rest of the bytes. */
+                    /* Note: we push to the front of the array since when we 
move those incomplete events back to the
+                     * synced_queue, we need to make sure it still has the 
same order. */
+                    
aws_array_list_push_front(&incomplete_deliver_events_array, &event);
+                }
             } break;
 
             case AWS_S3_META_REQUEST_EVENT_PROGRESS: {
@@ -2148,12 +2206,32 @@
         if (error_code != AWS_ERROR_SUCCESS) {
             aws_s3_meta_request_set_fail_synced(meta_request, NULL, 
error_code);
         }
+        if (aws_array_list_length(&incomplete_deliver_events_array) > 0) {
+            /* We only have incomplete parts if there was no window left to 
deliver the bytes. */
+            AWS_FATAL_ASSERT(bytes_allowed_to_deliver == 0);
+            /* Push the incomplete events back to the queue */
+            for (size_t i = 0; i < 
aws_array_list_length(&incomplete_deliver_events_array); ++i) {
+                struct aws_s3_meta_request_event event;
+                aws_array_list_get_at(&incomplete_deliver_events_array, 
&event, i);
+                /* Push the incomplete one to the front of the queue. */
+                
aws_array_list_push_front(&meta_request->synced_data.event_delivery_array, 
&event);
+            }
+            /* As we push to the event delivery array, we check if we need to 
schedule another delivery by if there
+             * is space to make the delivery or not. */
+            bytes_allowed_to_deliver = 
meta_request->synced_data.read_window_running_total -
+                                       
meta_request->io_threaded_data.num_bytes_delivery_completed;
+            if (bytes_allowed_to_deliver > 0 && error_code == 
AWS_ERROR_SUCCESS) {
+                /* We have more space now, let's try another delivery now. */
+                
aws_s3_meta_request_add_event_for_delivery_synced(meta_request, NULL);
+            }
+        }
 
         meta_request->synced_data.num_parts_delivery_completed += 
num_parts_delivered;
         meta_request->synced_data.event_delivery_active = false;
         aws_s3_meta_request_unlock_synced_data(meta_request);
     }
     /* END CRITICAL SECTION */
+    aws_array_list_clean_up(&incomplete_deliver_events_array);
 
     aws_s3_client_schedule_process_work(client);
     aws_s3_meta_request_release(meta_request);
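The "push to the front" note in the hunk above is subtle: incomplete events are push-fronted into a scratch array during delivery (reversing them), then push-fronted again into the synced queue, which restores the original order ahead of any newly queued events. A toy model of that double reversal (illustrative; `event_list`/`push_front` are stand-ins for `aws_array_list_push_front`):

```c
#include <stddef.h>
#include <string.h>

/* Toy front-insert list mirroring aws_array_list_push_front. */
#define EVENT_CAP 8
struct event_list {
    int items[EVENT_CAP];
    size_t len;
};

static void push_front(struct event_list *l, int v) {
    /* shift existing items right by one, then insert at index 0 */
    memmove(&l->items[1], &l->items[0], l->len * sizeof(int));
    l->items[0] = v;
    l->len++;
}
```

Delivering events 1 then 2 and push-fronting each incomplete one yields [2, 1]; walking that scratch array front-to-back and push-fronting into the synced queue yields [1, 2] again, preserving the sequential-delivery invariant.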
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/aws-c-s3-0.10.1/tests/CMakeLists.txt 
new/aws-c-s3-0.11.2/tests/CMakeLists.txt
--- old/aws-c-s3-0.10.1/tests/CMakeLists.txt    2025-11-08 00:06:18.000000000 
+0100
+++ new/aws-c-s3-0.11.2/tests/CMakeLists.txt    2025-11-25 02:17:00.000000000 
+0100
@@ -448,6 +448,12 @@
 add_net_test_case(client_meta_request_override_part_size)
 add_net_test_case(client_meta_request_override_multipart_upload_threshold)
 
+# Memory limit environment variable tests
+add_net_test_case(s3_client_memory_limit_from_env_var_valid)
+add_net_test_case(s3_client_memory_limit_config_takes_precedence)
+add_net_test_case(s3_client_memory_limit_from_env_var_invalid)
+add_net_test_case(s3_client_memory_limit_from_env_var_overflow)
+
 add_net_test_case(test_s3_default_get_without_content_length)
 
 set(TEST_BINARY_NAME ${PROJECT_NAME}-tests)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/aws-c-s3-0.10.1/tests/s3_client_memory_limit_env_var_test.c 
new/aws-c-s3-0.11.2/tests/s3_client_memory_limit_env_var_test.c
--- old/aws-c-s3-0.10.1/tests/s3_client_memory_limit_env_var_test.c     
1970-01-01 01:00:00.000000000 +0100
+++ new/aws-c-s3-0.11.2/tests/s3_client_memory_limit_env_var_test.c     
2025-11-25 02:17:00.000000000 +0100
@@ -0,0 +1,181 @@
+/**
+ * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ * SPDX-License-Identifier: Apache-2.0.
+ */
+
+#include "s3_tester.h"
+#include <aws/common/environment.h>
+#include <aws/s3/private/s3_default_buffer_pool.h>
+#include <aws/s3/private/s3_util.h>
+#include <aws/testing/aws_test_harness.h>
+
+#define TEST_CASE(NAME)                                                                                                \
+    AWS_TEST_CASE(NAME, s_test_##NAME);                                                                                \
+    static int s_test_##NAME(struct aws_allocator *allocator, void *ctx)
+
+static const char *s_memory_limit_env_var = "AWS_CRT_S3_MEMORY_LIMIT_IN_GIB";
+
+/* Copied from s3_default_buffer_pool.c */
+static const size_t s_buffer_pool_reserved_mem = MB_TO_BYTES(128);
+
+/**
+ * Test that memory limit can be set via environment variable when config value is 0
+ */
+TEST_CASE(s3_client_memory_limit_from_env_var_valid) {
+    (void)ctx;
+    struct aws_s3_tester tester;
+    ASSERT_SUCCESS(aws_s3_tester_init(allocator, &tester));
+
+    /* Set environment variable to 1 GiB */
+    struct aws_string *env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    struct aws_string *env_var_value = aws_string_new_from_c_str(allocator, "1");
+    ASSERT_SUCCESS(aws_set_environment_value(env_var_name, env_var_value));
+
+    struct aws_s3_client_config client_config = {
+        .part_size = MB_TO_BYTES(8),
+        .throughput_target_gbps = 10.0,
+        .memory_limit_in_bytes = 0, /* Will read from environment variable */
+    };
+
+    ASSERT_SUCCESS(aws_s3_tester_bind_client(
+        &tester, &client_config, AWS_S3_TESTER_BIND_CLIENT_REGION | AWS_S3_TESTER_BIND_CLIENT_SIGNING));
+
+    struct aws_s3_client *client = aws_s3_client_new(allocator, &client_config);
+    ASSERT_TRUE(client != NULL);
+
+    /* Verify that buffer pool was configured with the 1 GiB limit */
+    size_t expected_memory_limit = GB_TO_BYTES(1) - s_buffer_pool_reserved_mem;
+    ASSERT_TRUE(client->buffer_pool != NULL);
+    struct aws_s3_default_buffer_pool_usage_stats stats = aws_s3_default_buffer_pool_get_usage(client->buffer_pool);
+    ASSERT_UINT_EQUALS(stats.mem_limit, expected_memory_limit);
+
+    aws_s3_client_release(client);
+
+    /* Clean up environment variable so it doesn't leak into later tests */
+    ASSERT_SUCCESS(aws_unset_environment_value(env_var_name));
+    aws_string_destroy(env_var_name);
+    aws_string_destroy(env_var_value);
+
+    aws_s3_tester_clean_up(&tester);
+    return AWS_OP_SUCCESS;
+}
+
+/**
+ * Test that config value takes precedence over environment variable
+ */
+TEST_CASE(s3_client_memory_limit_config_takes_precedence) {
+    (void)ctx;
+    struct aws_s3_tester tester;
+    ASSERT_SUCCESS(aws_s3_tester_init(allocator, &tester));
+
+    /* Set environment variable to 1 GiB */
+    struct aws_string *env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    struct aws_string *env_var_value = aws_string_new_from_c_str(allocator, "1");
+    ASSERT_SUCCESS(aws_set_environment_value(env_var_name, env_var_value));
+    aws_string_destroy(env_var_name);
+    aws_string_destroy(env_var_value);
+
+    struct aws_s3_client_config client_config = {
+        .part_size = MB_TO_BYTES(8),
+        .throughput_target_gbps = 10.0,
+        .memory_limit_in_bytes = GB_TO_BYTES(2), /* Config value should take precedence */
+    };
+    ASSERT_SUCCESS(aws_s3_tester_bind_client(
+        &tester, &client_config, AWS_S3_TESTER_BIND_CLIENT_REGION | AWS_S3_TESTER_BIND_CLIENT_SIGNING));
+
+    struct aws_s3_client *client = aws_s3_client_new(allocator, &client_config);
+    ASSERT_TRUE(client != NULL);
+
+    /* The 2 GiB from config should be used, not the 1 GiB from env var */
+    ASSERT_TRUE(client->buffer_pool != NULL);
+    size_t expected_memory_limit = GB_TO_BYTES(2) - s_buffer_pool_reserved_mem;
+    struct aws_s3_default_buffer_pool_usage_stats stats = aws_s3_default_buffer_pool_get_usage(client->buffer_pool);
+    ASSERT_UINT_EQUALS(stats.mem_limit, expected_memory_limit);
+
+    aws_s3_client_release(client);
+
+    /* Clean up environment variable */
+    env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    ASSERT_SUCCESS(aws_unset_environment_value(env_var_name));
+    aws_string_destroy(env_var_name);
+
+    aws_s3_tester_clean_up(&tester);
+    return AWS_OP_SUCCESS;
+}
+
+/**
+ * Test that invalid environment variable value causes client creation to fail
+ */
+TEST_CASE(s3_client_memory_limit_from_env_var_invalid) {
+    (void)ctx;
+    struct aws_s3_tester tester;
+    ASSERT_SUCCESS(aws_s3_tester_init(allocator, &tester));
+
+    /* Set environment variable to invalid value */
+    struct aws_string *env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    struct aws_string *env_var_value = aws_string_new_from_c_str(allocator, "invalid");
+    ASSERT_SUCCESS(aws_set_environment_value(env_var_name, env_var_value));
+    aws_string_destroy(env_var_name);
+    aws_string_destroy(env_var_value);
+
+    struct aws_s3_client_config client_config = {
+        .part_size = MB_TO_BYTES(8),
+        .throughput_target_gbps = 10.0,
+        .memory_limit_in_bytes = 0, /* Will try to read from environment variable */
+    };
+    ASSERT_SUCCESS(aws_s3_tester_bind_client(
+        &tester, &client_config, AWS_S3_TESTER_BIND_CLIENT_REGION | AWS_S3_TESTER_BIND_CLIENT_SIGNING));
+
+    /* Client creation should fail due to invalid env var value */
+    struct aws_s3_client *client = aws_s3_client_new(allocator, &client_config);
+    ASSERT_TRUE(client == NULL);
+    ASSERT_INT_EQUALS(AWS_ERROR_INVALID_ARGUMENT, aws_last_error());
+    /* Client failed to set up. */
+    tester.bound_to_client = false;
+    /* Clean up environment variable */
+    env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    ASSERT_SUCCESS(aws_unset_environment_value(env_var_name));
+    aws_string_destroy(env_var_name);
+
+    aws_s3_tester_clean_up(&tester);
+    return AWS_OP_SUCCESS;
+}
+
+/**
+ * Test that an environment variable value causing overflow is handled properly
+ */
+TEST_CASE(s3_client_memory_limit_from_env_var_overflow) {
+    (void)ctx;
+    struct aws_s3_tester tester;
+    ASSERT_SUCCESS(aws_s3_tester_init(allocator, &tester));
+
+    /* Set environment variable to a very large value that would overflow when converted to bytes */
+    struct aws_string *env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    struct aws_string *env_var_value = aws_string_new_from_c_str(allocator, "18446744073709551615"); /* UINT64_MAX */
+    ASSERT_SUCCESS(aws_set_environment_value(env_var_name, env_var_value));
+    aws_string_destroy(env_var_name);
+    aws_string_destroy(env_var_value);
+
+    struct aws_s3_client_config client_config = {
+        .part_size = MB_TO_BYTES(8),
+        .throughput_target_gbps = 10.0,
+        .memory_limit_in_bytes = 0, /* Will try to read from environment variable */
+    };
+
+    ASSERT_SUCCESS(aws_s3_tester_bind_client(
+        &tester, &client_config, AWS_S3_TESTER_BIND_CLIENT_REGION | AWS_S3_TESTER_BIND_CLIENT_SIGNING));
+
+    /* Client creation should fail due to overflow during GiB to bytes conversion */
+    struct aws_s3_client *client = aws_s3_client_new(allocator, &client_config);
+    ASSERT_TRUE(client == NULL);
+    ASSERT_INT_EQUALS(AWS_ERROR_INVALID_ARGUMENT, aws_last_error());
+    /* Client failed to set up. */
+    tester.bound_to_client = false;
+
+    /* Clean up environment variable */
+    env_var_name = aws_string_new_from_c_str(allocator, s_memory_limit_env_var);
+    ASSERT_SUCCESS(aws_unset_environment_value(env_var_name));
+    aws_string_destroy(env_var_name);
+    aws_s3_tester_clean_up(&tester);
+    return AWS_OP_SUCCESS;
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/aws-c-s3-0.10.1/tests/s3_data_plane_tests.c new/aws-c-s3-0.11.2/tests/s3_data_plane_tests.c
--- old/aws-c-s3-0.10.1/tests/s3_data_plane_tests.c     2025-11-08 00:06:18.000000000 +0100
+++ new/aws-c-s3-0.11.2/tests/s3_data_plane_tests.c     2025-11-25 02:17:00.000000000 +0100
@@ -1726,6 +1726,7 @@
     size_t part_size,
     size_t window_initial_size,
     uint64_t window_increment_size) {
+    (void)part_size;
 
     /* Remember the last time something happened (we received download data, or incremented read window) */
     uint64_t last_time_something_happened;
@@ -1753,12 +1754,13 @@
         size_t received_body_size_delta = aws_atomic_exchange_int(&test_results->received_body_size_delta, 0);
         accumulated_data_size += (uint64_t)received_body_size_delta;
 
-        /* Check that we haven't received more data than the window allows.
-         * TODO: Stop allowing "hacky wiggle room". The current implementation
-         *       may push more bytes to the user (up to 1 part) than they've asked for. */
-        uint64_t hacky_wiggle_room = part_size;
-        uint64_t max_data_allowed = accumulated_window_increments + hacky_wiggle_room;
-        ASSERT_TRUE(accumulated_data_size <= max_data_allowed, "Received more data than the read window allows");
+        /* Check that we haven't received more data than the window allows */
+        uint64_t max_data_allowed = accumulated_window_increments;
+        ASSERT_TRUE(
+            accumulated_data_size <= max_data_allowed,
+            "Received more data than the read window allows accumulated_data_size: %zu, max_data_allowed: %zu",
+            (size_t)accumulated_data_size,
+            (size_t)max_data_allowed);
 
         /* If we're done, we're done */
         if (done) {
@@ -1891,7 +1893,7 @@
     size_t file_size = 1 * 1024 * 1024; /* Test downloads 1MB file */
     size_t part_size = file_size / 4;
     size_t window_initial_size = 1024;
-    uint64_t window_increment_size = part_size / 2;
+    uint64_t window_increment_size = part_size / 4;
     return s_test_s3_get_object_backpressure_helper(
         allocator, part_size, window_initial_size, window_increment_size, false);
 }
