[
https://issues.apache.org/jira/browse/NIFI-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066966#comment-16066966
]
ASF GitHub Bot commented on NIFI-4122:
--------------------------------------
Github user pvillard31 commented on a diff in the pull request:
https://github.com/apache/nifi/pull/1948#discussion_r124612715
--- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java ---
@@ -99,6 +102,13 @@ public ValidationResult validate(final String subject, final String value, final
            .required(false)
            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
            .build();
+    static final PropertyDescriptor RESULTS_PER_FLOWFILE = new PropertyDescriptor.Builder()
+            .name("results-per-flowfile")
+            .displayName("Results Per FlowFile")
+            .description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.")
+            .required(false)
+            .addValidator(StandardValidators.INTEGER_VALIDATOR)
--- End diff ---
Quick remark: the validator you chose means that 0 is an acceptable value.
It also means that if the user sets this property to 0 (or a negative integer),
a single flow file will be generated containing a JSON array of all the
documents. If that's intentional, it should be stated in the description.
If it's not the desired behavior, I would suggest
``StandardValidators.POSITIVE_INTEGER_VALIDATOR``.
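The distinction between the two validators can be illustrated with a minimal standalone sketch. This is not NiFi code; `isValidInteger` and `isValidPositiveInteger` are hypothetical stand-ins for the acceptance logic of `INTEGER_VALIDATOR` and `POSITIVE_INTEGER_VALIDATOR`, respectively:

```java
// Standalone sketch (not NiFi code) of why INTEGER_VALIDATOR accepts
// values like "0" or "-5" that POSITIVE_INTEGER_VALIDATOR would reject.
public class ValidatorSketch {

    // Accepts any parseable integer, including zero and negatives,
    // analogous to an integer-only validator.
    static boolean isValidInteger(String value) {
        try {
            Integer.parseInt(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Accepts only integers strictly greater than zero,
    // analogous to a positive-integer validator.
    static boolean isValidPositiveInteger(String value) {
        try {
            return Integer.parseInt(value) > 0;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidInteger("0"));           // true
        System.out.println(isValidPositiveInteger("0"));   // false
        System.out.println(isValidPositiveInteger("100")); // true
    }
}
```

With the stricter check, a value of 0 fails validation at configuration time instead of silently triggering the "all documents in one flow file" behavior.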
> GetMongo should be able to group results into a set of flowfiles
> ----------------------------------------------------------------
>
> Key: NIFI-4122
> URL: https://issues.apache.org/jira/browse/NIFI-4122
> Project: Apache NiFi
> Issue Type: Improvement
> Reporter: Mike Thomsen
> Priority: Minor
> Labels: getmongo, mongodb, nifi
>
> GetMongo should be able to take a user-defined limit and group results by
> that size into flowfiles rather than having only the ability to do a 1:1
> relationship between result and flowfile.
> For example, if the user specifies 100, 100 results should be grouped
> together and turned into a JSON array that can be broken up later as needed.
> This need arose when doing a bulk data ingestion from Mongo. We had just shy
> of 400k documents, and the 1:1 generation of flowfiles blew right through our
> limits on the content repository. Adding this feature would make it feasible
> to control that sort of behavior more thoroughly for events like bulk
> ingestion.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)