jozahner commented on code in PR #10829:
URL: https://github.com/apache/nifi/pull/10829#discussion_r2743593538
##########
nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroReader.java:
##########
@@ -60,12 +60,27 @@ public class AvroReader extends SchemaRegistryService implements RecordReaderFac
             .required(true)
             .build();
+    static final PropertyDescriptor FAST_READER_ENABLED = new PropertyDescriptor.Builder()
+            .name("Fast Reader Enabled")
+            .description("""
+                    When enabled, the Avro library uses an optimized reader implementation that improves read performance
+                    by creating a detailed execution plan at initialization. However, this optimization can lead to
+                    significantly higher memory consumption, especially when using schema inference. If OutOfMemory errors
+                    occur during Avro processing, consider disabling this option.""")
+            .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+            .allowableValues("true", "false")
+            .defaultValue("true")
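
[Editorial aside: a minimal sketch of the Avro-side switch that a property like this would presumably be mapped to, assuming Avro 1.11+, where GenericData#setFastReaderEnabled controls the optimized reader. The class and method names below are hypothetical and are not the actual NiFi wiring.]

import java.io.IOException;
import java.io.InputStream;

import org.apache.avro.file.DataFileStream;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class FastReaderToggleSketch {

    // Reads every record from an Avro data file, toggling the fast reader.
    // When the flag is false, Avro falls back to the classic reader and skips
    // building the per-schema execution plan, trading read speed for memory.
    static void readAll(final InputStream in, final boolean fastReaderEnabled) throws IOException {
        final GenericData genericData = new GenericData();
        genericData.setFastReaderEnabled(fastReaderEnabled); // Avro 1.11+ API (assumption)

        final GenericDatumReader<GenericRecord> datumReader = new GenericDatumReader<>(null, null, genericData);
        try (DataFileStream<GenericRecord> stream = new DataFileStream<>(in, datumReader)) {
            while (stream.hasNext()) {
                final GenericRecord record = stream.next();
                // process record ...
            }
        }
    }
}
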
Review Comment:
We have a max heap size of 31 GB configured, and under normal circumstances we
use around 5 GB of heap memory. So I don't think it's related to the maximum
heap size settings.
I'm pretty sure you will get more and more OOM messages if you leave it set to
"true". It wasn't easy for us to find the problem; we had to enable our Java
profiler in production and take a few memory dumps to understand the issue.
But I'm just an end user who is now aware of the issue ;-)...
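
[Editorial aside: a standard way to capture the kind of memory dumps mentioned above, without attaching a profiler, is to have the JVM write a heap dump when an OutOfMemoryError occurs. In NiFi, these HotSpot flags can be added to conf/bootstrap.conf; the arg indices and dump path below are placeholders.]

# conf/bootstrap.conf -- pick unused java.arg indices
java.arg.20=-XX:+HeapDumpOnOutOfMemoryError
java.arg.21=-XX:HeapDumpPath=/var/lib/nifi/dumps
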