[ https://issues.apache.org/jira/browse/NIFI-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683860#comment-16683860 ]
ASF subversion and git services commented on NIFI-5805:
-------------------------------------------------------
Commit d3b16748139efe78373ca22ed116b0c7ed5dbbe3 in nifi's branch
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=d3b1674 ]
NIFI-5805: Pool the BinaryEncoders used by the
WriteAvroResultWithExternalSchema writer. Unfortunately, the writer that embeds
schemas does not allow for this optimization due to the Avro API.
This closes #3160.
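
For reference, below is a minimal sketch of the pooling approach the commit describes. The class and method names here are illustrative assumptions, not the actual NiFi implementation; only EncoderFactory and BinaryEncoder come from the Avro API.

{code:java}
import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

// Hypothetical pool of BinaryEncoders, bounded by a configurable size.
public class BinaryEncoderPool {
    private final BlockingQueue<BinaryEncoder> pool;

    public BinaryEncoderPool(final int maxPoolSize) {
        this.pool = new LinkedBlockingQueue<>(maxPoolSize);
    }

    // Borrow an encoder for the given stream. Passing a previously pooled
    // encoder as the 'reuse' argument re-initializes it against the new
    // stream, so its internal byte[] buffer is kept rather than
    // reallocated for every Writer.
    public BinaryEncoder borrow(final OutputStream out) {
        final BinaryEncoder reuse = pool.poll();
        return EncoderFactory.get().blockingBinaryEncoder(out, reuse);
    }

    // Return an encoder once the Writer is finished with it. If the pool
    // is already full, the encoder is simply dropped and garbage collected.
    public void returnEncoder(final BinaryEncoder encoder) {
        pool.offer(encoder);
    }
}
{code}

A BlockingQueue keeps the pool thread-safe, which matters because a RecordSetWriter factory may be shared across processor threads.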
> Avro Record Writer service creates byte buffer for every Writer created
> -----------------------------------------------------------------------
>
> Key: NIFI-5805
> URL: https://issues.apache.org/jira/browse/NIFI-5805
> Project: Apache NiFi
> Issue Type: Bug
> Reporter: Mark Payne
> Assignee: Mark Payne
> Priority: Major
> Fix For: 1.9.0
>
>
> When we use the Avro RecordSet Writer and do not embed the schema, the
> Writer uses the Avro BinaryEncoder object to serialize the data. This object
> can be reused, but instead we create a new one for each Writer. This
> results in allocating a new 64 KB byte[] each time. When we are writing many
> records to a given FlowFile, the overhead is negligible. However, when used in
> PublishKafkaRecord or similar processors, where a new Writer must be created
> for every Record, this can have a very significant performance impact.
> An improvement would be to let the user configure the maximum number of
> BinaryEncoder objects to pool and then use a simple pooling mechanism to
> reuse these objects.
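
To illustrate the underlying Avro mechanism, here is a small self-contained sketch. The loop simulating a new Writer per record is a hypothetical stand-in for PublishKafkaRecord; the reuse parameter of EncoderFactory is the real Avro API that makes pooling pay off.

{code:java}
import java.io.ByteArrayOutputStream;

import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class EncoderReuseExample {
    public static void main(final String[] args) throws Exception {
        BinaryEncoder encoder = null;

        // Simulate creating a new Writer (and stream) per record, as
        // PublishKafkaRecord does.
        for (int i = 0; i < 1_000; i++) {
            final ByteArrayOutputStream out = new ByteArrayOutputStream();

            // Passing the previous encoder as the 'reuse' argument re-points
            // it at the new stream and keeps its internal buffer, rather than
            // allocating a fresh one for every record.
            encoder = EncoderFactory.get().binaryEncoder(out, encoder);

            encoder.writeString("record-" + i);
            encoder.flush();
        }
    }
}
{code}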