[ https://issues.apache.org/jira/browse/NIFI-14301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932438#comment-17932438 ]

ASF subversion and git services commented on NIFI-14301:
--------------------------------------------------------

Commit a2faf102ebc9f882a27baa016db53a52bee3d51d in nifi's branch 
refs/heads/main from Dariusz Seweryn
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a2faf102eb ]

NIFI-14301 Set HTTP Client Concurrency to Available Processors in 
ConsumeKinesisStream (#9754)

Signed-off-by: David Handermann <[email protected]>

> ConsumeKinesisStream decouple HttpClient maxConcurrency from "Max Concurrent 
> Tasks"
> -----------------------------------------------------------------------------------
>
>                 Key: NIFI-14301
>                 URL: https://issues.apache.org/jira/browse/NIFI-14301
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Dariusz Seweryn
>            Assignee: Dariusz Seweryn
>            Priority: Major
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> ConsumeKinesisStream currently uses the "Max Concurrent Tasks" value as the 
> maxConcurrency setting for the underlying NettyNioAsyncHttpClient.
> On the surface this seems fine from the framework perspective, but it is 
> wasteful and suboptimal: HTTP work is generally IO-bound, whereas the 
> framework assumes that tasks are compute-bound.
> The ConsumeKinesisStream onTrigger method does two things:
>  * on the first invocation it creates a new thread for the AWS Scheduler 
> class, which does all the heavy lifting for the Kinesis interaction
>  * once the Scheduler has already been created, it simply yields
> Having additional Concurrent Tasks that only yield is not a productive use 
> of framework threads.
> Proposal: set the NettyNioAsyncHttpClient maxConcurrency to the minimum of 
> `Runtime.getRuntime().availableProcessors()` and the processor's "Max 
> Concurrent Tasks"
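The proposed clamping can be sketched as below. This is a minimal illustration only, not the actual NiFi change: the class and method names are hypothetical, and the resolved value would be what gets passed to the AWS SDK's NettyNioAsyncHttpClient.builder().maxConcurrency(...).

```java
// Hypothetical helper illustrating the proposal from NIFI-14301:
// clamp the HTTP client's maxConcurrency to the smaller of the JVM's
// available processors and the processor's "Max Concurrent Tasks".
public class HttpClientConcurrency {

    // Returns the smaller of the two values; this is the number that
    // would be supplied to NettyNioAsyncHttpClient.builder().maxConcurrency(...).
    public static int resolveMaxConcurrency(int availableProcessors, int maxConcurrentTasks) {
        return Math.min(availableProcessors, maxConcurrentTasks);
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int maxConcurrentTasks = 16; // example value of the NiFi scheduling property
        System.out.println(resolveMaxConcurrency(cores, maxConcurrentTasks));
    }
}
```

With this clamping, raising "Max Concurrent Tasks" beyond the core count no longer inflates the HTTP client's connection concurrency, which matches the observation above that extra tasks only yield.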



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
