[
https://issues.apache.org/jira/browse/NIFI-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Iván Ezequiel Rodriguez updated NIFI-6800:
------------------------------------------
Description:
The error is present in version 1.9.2 and earlier, in the class
StandardProcessSession.java.
The error is on line 3383, in the following code block:
if (session.countersOnCommit != null)
{ this.countersOnCommit.putAll(session.countersOnCommit); }
The putAll method is called, so the values for keys that already exist are
replaced rather than merged.
As we know, a HashMap does not allow duplicate keys, so when we combine the
maps this way, for each key present in both maps the value in map1 is
overwritten by the value for the same key in map2.
This replacement causes the counters to be overwritten when
highThroughputSession is active. The situation only occurs when the
"immediately" flag is false in the call to the adjustCounter method, since in
that case the counters are adjusted at commit time and stored at each
checkpoint.
I am leaving a video showing the error, along with the pull request containing
the fix I propose in the code:
[Nifi Bug StandardProcessSession error - hashmap counter overwritten in
highThroughputSession|https://youtu.be/DzwEQWmxNKc]
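Leaving NiFi's internals aside, the overwrite-vs-accumulate behavior can be reproduced with a plain java.util.HashMap. The following is a minimal standalone sketch (the counter name and values are invented for illustration, not taken from NiFi code):

```java
import java.util.HashMap;
import java.util.Map;

public class CounterMergeDemo {
    public static void main(String[] args) {
        // Counters already accumulated across earlier checkpoints
        // (hypothetical counter name, for illustration only)
        Map<String, Long> countersOnCommit = new HashMap<>();
        countersOnCommit.put("records.processed", 100L);

        // Counters from the session being checkpointed now
        Map<String, Long> sessionCounters = new HashMap<>();
        sessionCounters.put("records.processed", 25L);

        // Buggy behavior: putAll replaces the value for a duplicate key
        Map<String, Long> overwritten = new HashMap<>(countersOnCommit);
        overwritten.putAll(sessionCounters);
        System.out.println(overwritten.get("records.processed")); // prints 25 -- the 100 is lost

        // Accumulating instead: Map.merge with Long::sum adds values for duplicate keys
        Map<String, Long> summed = new HashMap<>(countersOnCommit);
        sessionCounters.forEach((k, v) -> summed.merge(k, v, Long::sum));
        System.out.println(summed.get("records.processed")); // prints 125
    }
}
```

A fix along these lines replaces the single putAll call with a per-entry Map.merge so that duplicate keys accumulate instead of being overwritten; see the linked pull request for the actual change proposed against NiFi.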
was:
StandardProcessSession.java
The error is on line 3383 in the following code block:
if (session.countersOnCommit != null)
{ this.countersOnCommit.putAll(session.countersOnCommit); }
The putAll method is called, so the values for keys that already exist are
replaced rather than merged.
As we know, a HashMap does not allow duplicate keys, so when we combine the
maps this way, for each key present in both maps the value in map1 is
overwritten by the value for the same key in map2.
This replacement causes the counters to be overwritten when
highThroughputSession is active. The situation only occurs when the
"immediately" flag is false in the call to the adjustCounter method, since in
that case the counters are adjusted at commit time and stored at each
checkpoint.
I am leaving a video showing the error, along with the pull request containing
the fix I propose in the code:
[Nifi Bug StandardProcessSession error - hashmap counter overwritten in
highThroughputSession|https://youtu.be/DzwEQWmxNKc]
> StandardProcessSession error - hashmap counter overwritten in
> highThroughputSession
> -----------------------------------------------------------------------------------
>
> Key: NIFI-6800
> URL: https://issues.apache.org/jira/browse/NIFI-6800
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.9.2
> Reporter: Iván Ezequiel Rodriguez
> Assignee: Iván Ezequiel Rodriguez
> Priority: Major
> Labels: bug, core, counters, fix, framework, high, nifi, session
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> The error is present in version 1.9.2 and earlier, in the class
> StandardProcessSession.java.
> The error is on line 3383, in the following code block:
> if (session.countersOnCommit != null)
> { this.countersOnCommit.putAll(session.countersOnCommit); }
> The putAll method is called, so the values for keys that already exist are
> replaced rather than merged.
> As we know, a HashMap does not allow duplicate keys, so when we combine the
> maps this way, for each key present in both maps the value in map1 is
> overwritten by the value for the same key in map2.
> This replacement causes the counters to be overwritten when
> highThroughputSession is active. The situation only occurs when the
> "immediately" flag is false in the call to the adjustCounter method, since in
> that case the counters are adjusted at commit time and stored at each
> checkpoint.
> I am leaving a video showing the error, along with the pull request containing
> the fix I propose in the code:
> [Nifi Bug StandardProcessSession error - hashmap counter overwritten in
> highThroughputSession|https://youtu.be/DzwEQWmxNKc]
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)