Github user ekovacs commented on a diff in the pull request:
https://github.com/apache/nifi/pull/2991#discussion_r223290455
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/HandleHttpRequest.java ---
@@ -229,7 +252,25 @@
             .name("container-queue-size").displayName("Container Queue Size")
             .description("The size of the queue for Http Request Containers").required(true)
             .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR).defaultValue("50").build();
-
+    public static final PropertyDescriptor MAX_REQUEST_SIZE = new PropertyDescriptor.Builder()
+            .name("max-request-size")
+            .displayName("Max Request Size")
+            .description("The max size of the request. Only applies for requests with Content-Type: multipart/form-data, "
+                    + "and is used to prevent denial of service type of attacks, to prevent filling up the heap or disk space")
+            .required(true)
+            .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
+            .defaultValue("1 MB")
+            .build();
+    public static final PropertyDescriptor IN_MEMORY_FILE_SIZE_THRESHOLD = new PropertyDescriptor.Builder()
+            .name("in-memory-file-size-threshold")
+            .displayName("The threshold size, at which the contents of an incoming file would be written to disk. "
+                    + "Only applies for requests with Content-Type: multipart/form-data. "
+                    + "It is used to prevent denial of service type of attacks, to prevent filling up the heap or disk space.")
--- End diff --
I believe this still has to do with preventing DoS-type attacks, and it is tightly coupled with MULTIPART_REQUEST_MAX_SIZE.
If the request size were very high (e.g. your example of a 9 GB file) and this measure were not in place, the heap would fill up and bring down the JVM with an OutOfMemoryError.
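To illustrate the idea behind the threshold: a minimal, self-contained sketch of spill-to-disk buffering, where multipart bytes stay on the heap only until a configured size is reached and are then moved to a temp file. The class name and structure here are hypothetical, purely for illustration; they are not NiFi's or Jetty's actual implementation (Jetty's multipart handling uses a similar `fileSizeThreshold` concept).

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Illustrative only: buffers bytes in memory until a configured
 * threshold is exceeded, then spills everything to a temp file so a
 * very large upload cannot exhaust the heap.
 */
class ThresholdingOutputStream extends OutputStream {
    private final int threshold;
    private ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private OutputStream current;
    private File spillFile;

    ThresholdingOutputStream(int threshold) {
        this.threshold = threshold;
        this.current = memory;
    }

    @Override
    public void write(int b) throws IOException {
        // Spill once the next byte would push the heap buffer past the threshold.
        if (memory != null && memory.size() + 1 > threshold) {
            spillToDisk();
        }
        current.write(b);
    }

    private void spillToDisk() throws IOException {
        spillFile = File.createTempFile("upload", ".part");
        spillFile.deleteOnExit();
        OutputStream out = new FileOutputStream(spillFile);
        memory.writeTo(out);   // move the buffered bytes to disk
        memory = null;         // free the heap buffer
        current = out;
    }

    boolean isInMemory() {
        return memory != null;
    }

    @Override
    public void close() throws IOException {
        current.close();
    }
}
```

With a threshold like the processor's 1 MB default, a well-behaved request never touches disk, while an attacker streaming gigabytes only consumes disk space (which MULTIPART_REQUEST_MAX_SIZE then caps) rather than heap.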
---