[jira] [Commented] (NIFI-5841) PutHive3Streaming processor Mem Leak
[ https://issues.apache.org/jira/browse/NIFI-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16739160#comment-16739160 ] Kei Miyauchi commented on NIFI-5841: There's a memory leak issue in Hive Streaming itself, so PutHive3Streaming will still leak memory even after this issue is fixed. > PutHive3Streaming processor Mem Leak > > > Key: NIFI-5841 > URL: https://issues.apache.org/jira/browse/NIFI-5841 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0, 1.7.1 > Environment: Hive 3.1.* >Reporter: Advith Nagappa >Assignee: Kei Miyauchi >Priority: Major > Fix For: 1.9.0 > > Time Spent: 50m > Remaining Estimate: 0h > > NiFi versions: 1.7.1 and 1.8.0 > > nifi/nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/nifi/processors/hive/PutHive3Streaming.java, > line 417 seems redundant: > {code:java} > ShutdownHookManager.addShutdownHook(hiveStreamingConnection::close, > FileSystem.SHUTDOWN_HOOK_PRIORITY + 1){code} > > Whereas Hive 3.0.0 did not add a shutdown hook within its connect() method, Hive > 3.1.* does: > > hive/streaming/src/java/org/apache/hive/streaming/HiveStreamingConnection.java > {code:java} > ShutdownHookManager.addShutdownHook(streamingConnection::close, > FileSystem.SHUTDOWN_HOOK_PRIORITY + 1);{code} > This creates two references to the shutdown hook object per transaction, of > which only one is ever cleaned up, resulting in a slow/fast degradation of heap > space depending on the velocity of transactions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
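The leak described above can be sketched against the plain JDK shutdown-hook API. This is an illustrative stand-alone example, not NiFi or Hive code; the `Connection` class and `registerAndRemove` method are hypothetical names:

```java
// Not NiFi/Hive code -- a minimal JDK-level illustration of the leak
// described above. The JVM holds a strong reference to every registered
// shutdown hook until removeShutdownHook() is called, so registering one
// hook per transaction (let alone two, as with the duplicated
// addShutdownHook calls) retains a Thread plus its captured connection
// object for the life of the process.
public class ShutdownHookLeakDemo {

    static final class Connection {      // stand-in for a streaming connection
        void close() { /* release resources */ }
    }

    // The safe pattern: deregister the hook once the transaction completes,
    // so nothing accumulates. Returns how many transactions were processed.
    static int registerAndRemove(int transactions) {
        for (int i = 0; i < transactions; i++) {
            Connection conn = new Connection();
            Thread hook = new Thread(conn::close);
            Runtime.getRuntime().addShutdownHook(hook);
            // ... do transactional work, then close and deregister ...
            conn.close();
            Runtime.getRuntime().removeShutdownHook(hook);
        }
        return transactions;
    }

    public static void main(String[] args) {
        System.out.println("processed " + registerAndRemove(1000) + " transactions");
    }
}
```

Dropping the `removeShutdownHook` call in this sketch would leave every hook `Thread` reachable from the JVM's internal hook set, which is the degradation pattern the ticket describes.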
[jira] [Commented] (NIFI-4173) Processor to RemoveCache from DistributedMapCacheServer
[ https://issues.apache.org/jira/browse/NIFI-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16739161#comment-16739161 ] Ghanashyam Srinivas commented on NIFI-4173: --- I have created a processor for this. > Processor to RemoveCache from DistributedMapCacheServer > --- > > Key: NIFI-4173 > URL: https://issues.apache.org/jira/browse/NIFI-4173 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Gabriel Queiroz >Priority: Major > Attachments: Detect_Duplicate_V2_-_With_Remove_Cache.xml, > RemoveCacheExample.xml > > > I posted a question in the Hortonworks community asking for a way to remove a > cached identifier from DistributedMapCacheServer > (https://community.hortonworks.com/questions/110551/how-to-remove-a-cache-entry-identifier-from-distri.html) > and the user *kkawamura* answered the question and asked me to open this > issue requesting this functionality. > I attached two examples using the RemoveCache processor created by kkawamura > using InvokeScriptedProcessor. > Thank you very much! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5909) PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
[ https://issues.apache.org/jira/browse/NIFI-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738938#comment-16738938 ] ASF subversion and git services commented on NIFI-5909: --- Commit 3e52ae952d05867f3366dc4f796ae193a84b4b2c in nifi's branch refs/heads/master from Alex Savitsky [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3e52ae9 ] NIFI-5909 added optional settings for date, time, and timestamp formats used to write Records to Elasticsearch NIFI-5909 added content checks to the unit tests NIFI-5937 use explicit long value for test dates/times (to not depend on the timezone of test executor) NIFI-5937 tabs to spaces Fixing checkstyle violations introduced by https://github.com/apache/nifi/pull/3249 PR) NIFI-5937 adjusted property descriptions for consistency; limited EL scope to variable registry; added an appropriate validator along with its Maven dependency; moved format initialization to @OnScheduled NIFI-5909 tabs to spaces Signed-off-by: Ed This closes #3227 > PutElasticsearchHttpRecord doesn't allow to customize the timestamp format > -- > > Key: NIFI-5909 > URL: https://issues.apache.org/jira/browse/NIFI-5909 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > All timestamps are sent to Elasticsearch in the "yyyy-MM-dd HH:mm:ss" format, > coming from RecordFieldType.TIMESTAMP.getDefaultFormat(). There are plenty > of use cases that call for Elasticsearch data to be presented differently, > and the format should be customizable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5909) PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
[ https://issues.apache.org/jira/browse/NIFI-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ed Berezitsky resolved NIFI-5909. - Resolution: Fixed Fix Version/s: 1.9.0 This is addressed by PR #3227. > PutElasticsearchHttpRecord doesn't allow to customize the timestamp format > -- > > Key: NIFI-5909 > URL: https://issues.apache.org/jira/browse/NIFI-5909 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Fix For: 1.9.0 > > Time Spent: 2h > Remaining Estimate: 0h > > All timestamps are sent to Elasticsearch in the "yyyy-MM-dd HH:mm:ss" format, > coming from RecordFieldType.TIMESTAMP.getDefaultFormat(). There are plenty > of use cases that call for Elasticsearch data to be presented differently, > and the format should be customizable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5937) PutElasticsearchHttpRecord uses system default encoding
[ https://issues.apache.org/jira/browse/NIFI-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738937#comment-16738937 ] ASF subversion and git services commented on NIFI-5937: --- Commit 3e52ae952d05867f3366dc4f796ae193a84b4b2c in nifi's branch refs/heads/master from Alex Savitsky [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3e52ae9 ] NIFI-5909 added optional settings for date, time, and timestamp formats used to write Records to Elasticsearch NIFI-5909 added content checks to the unit tests NIFI-5937 use explicit long value for test dates/times (to not depend on the timezone of test executor) NIFI-5937 tabs to spaces Fixing checkstyle violations introduced by https://github.com/apache/nifi/pull/3249 PR) NIFI-5937 adjusted property descriptions for consistency; limited EL scope to variable registry; added an appropriate validator along with its Maven dependency; moved format initialization to @OnScheduled NIFI-5909 tabs to spaces Signed-off-by: Ed This closes #3227 > PutElasticsearchHttpRecord uses system default encoding > --- > > Key: NIFI-5937 > URL: https://issues.apache.org/jira/browse/NIFI-5937 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Fix For: 1.9.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > PutElasticsearchHttpRecord line 348: > {code:java} > json.append(out.toString()); > {code} > This results in the conversion being done using system default encoding, > possibly garbling non-ASCII characters in the output. Should use the encoding > configured in the processor in the toString call. > As a workaround, the "file.encoding" system property can be specified > explicitly in the bootstrap.conf: > {code:java} > java.arg.7=-Dfile.encoding=UTF-8{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5937) PutElasticsearchHttpRecord uses system default encoding
[ https://issues.apache.org/jira/browse/NIFI-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738936#comment-16738936 ] ASF subversion and git services commented on NIFI-5937: --- Commit 3e52ae952d05867f3366dc4f796ae193a84b4b2c in nifi's branch refs/heads/master from Alex Savitsky [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3e52ae9 ] NIFI-5909 added optional settings for date, time, and timestamp formats used to write Records to Elasticsearch NIFI-5909 added content checks to the unit tests NIFI-5937 use explicit long value for test dates/times (to not depend on the timezone of test executor) NIFI-5937 tabs to spaces Fixing checkstyle violations introduced by https://github.com/apache/nifi/pull/3249 PR) NIFI-5937 adjusted property descriptions for consistency; limited EL scope to variable registry; added an appropriate validator along with its Maven dependency; moved format initialization to @OnScheduled NIFI-5909 tabs to spaces Signed-off-by: Ed This closes #3227 > PutElasticsearchHttpRecord uses system default encoding > --- > > Key: NIFI-5937 > URL: https://issues.apache.org/jira/browse/NIFI-5937 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Fix For: 1.9.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > PutElasticsearchHttpRecord line 348: > {code:java} > json.append(out.toString()); > {code} > This results in the conversion being done using system default encoding, > possibly garbling non-ASCII characters in the output. Should use the encoding > configured in the processor in the toString call. > As a workaround, the "file.encoding" system property can be specified > explicitly in the bootstrap.conf: > {code:java} > java.arg.7=-Dfile.encoding=UTF-8{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
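The fix described in NIFI-5937 amounts to decoding the buffered bytes with an explicit charset instead of the platform default. A minimal stand-alone sketch against the plain JDK API (the `EncodingDemo` class and `toStringExplicit` helper are illustrative names, not the processor's actual code):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    // Decode the buffered bytes with an explicit charset instead of the
    // platform default -- the gist of the fix described above, sketched
    // against the plain JDK API rather than the processor source.
    static String toStringExplicit(ByteArrayOutputStream out) {
        return out.toString(StandardCharsets.UTF_8);   // Java 10+ overload
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes("héllo".getBytes(StandardCharsets.UTF_8)); // non-ASCII content
        // out.toString() with no argument uses file.encoding; on a JVM whose
        // default charset is not UTF-8 it would garble the accented character,
        // which is why the bootstrap.conf workaround above also works.
        System.out.println(toStringExplicit(out));
    }
}
```

The no-argument `toString()` is exactly the call flagged at line 348 of the processor; passing the configured charset removes the dependence on `file.encoding`.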
[GitHub] bdesert closed pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert closed pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/pom.xml b/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/pom.xml index d4536cd037..4db59f6078 100644 --- a/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/pom.xml +++ b/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/pom.xml @@ -122,6 +122,11 @@ language governing permissions and limitations under the License. --> <artifactId>jackson-databind</artifactId> <version>${jackson.version}</version> </dependency> +<dependency> +<groupId>org.apache.nifi</groupId> +<artifactId>nifi-standard-record-utils</artifactId> +<version>1.9.0-SNAPSHOT</version> +</dependency> diff --git a/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java b/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java index 52de42442a..d431960d3d 100644 --- a/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java +++ b/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java @@ -55,6 +55,7 @@ import org.apache.nifi.serialization.MalformedRecordException; import org.apache.nifi.serialization.RecordReader; import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.SimpleDateFormatValidator; import
org.apache.nifi.serialization.record.DataType; import org.apache.nifi.serialization.record.Record; import org.apache.nifi.serialization.record.RecordField; @@ -178,6 +179,38 @@ .required(true) .build(); +static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder() +.name("Date Format") +.description("Specifies the format to use when reading/writing Date fields. " ++ "If not specified, the default format '" + RecordFieldType.DATE.getDefaultFormat() + "' is used. " ++ "If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by " ++ "a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).") + .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.addValidator(new SimpleDateFormatValidator()) +.required(false) +.build(); +static final PropertyDescriptor TIME_FORMAT = new PropertyDescriptor.Builder() +.name("Time Format") +.description("Specifies the format to use when reading/writing Time fields. " ++ "If not specified, the default format '" + RecordFieldType.TIME.getDefaultFormat() + "' is used. " ++ "If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by " ++ "a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).") + .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.addValidator(new SimpleDateFormatValidator()) +.required(false) +.build(); +static final PropertyDescriptor TIMESTAMP_FORMAT = new PropertyDescriptor.Builder() +.name("Timestamp Format") +.description("Specifies the format to use when reading/writing Timestamp fields. " ++ "If not specified, the default format '" + RecordFieldType.TIMESTAMP.getDefaultFormat() + "' is used.
" ++ "If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by " ++ "a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by " ++ "a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).") + .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.addValidator(new SimpleDateFormatValidator()) +.required(false) +
[jira] [Commented] (NIFI-5909) PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
[ https://issues.apache.org/jira/browse/NIFI-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738933#comment-16738933 ] ASF subversion and git services commented on NIFI-5909: --- Commit 3e52ae952d05867f3366dc4f796ae193a84b4b2c in nifi's branch refs/heads/master from Alex Savitsky [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3e52ae9 ] NIFI-5909 added optional settings for date, time, and timestamp formats used to write Records to Elasticsearch NIFI-5909 added content checks to the unit tests NIFI-5937 use explicit long value for test dates/times (to not depend on the timezone of test executor) NIFI-5937 tabs to spaces Fixing checkstyle violations introduced by https://github.com/apache/nifi/pull/3249 PR) NIFI-5937 adjusted property descriptions for consistency; limited EL scope to variable registry; added an appropriate validator along with its Maven dependency; moved format initialization to @OnScheduled NIFI-5909 tabs to spaces Signed-off-by: Ed This closes #3227 > PutElasticsearchHttpRecord doesn't allow to customize the timestamp format > -- > > Key: NIFI-5909 > URL: https://issues.apache.org/jira/browse/NIFI-5909 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > All timestamps are sent to Elasticsearch in the "yyyy-MM-dd HH:mm:ss" format, > coming from RecordFieldType.TIMESTAMP.getDefaultFormat(). There are plenty > of use cases that call for Elasticsearch data to be presented differently, > and the format should be customizable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5937) PutElasticsearchHttpRecord uses system default encoding
[ https://issues.apache.org/jira/browse/NIFI-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738935#comment-16738935 ] ASF subversion and git services commented on NIFI-5937: --- Commit 3e52ae952d05867f3366dc4f796ae193a84b4b2c in nifi's branch refs/heads/master from Alex Savitsky [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3e52ae9 ] NIFI-5909 added optional settings for date, time, and timestamp formats used to write Records to Elasticsearch NIFI-5909 added content checks to the unit tests NIFI-5937 use explicit long value for test dates/times (to not depend on the timezone of test executor) NIFI-5937 tabs to spaces Fixing checkstyle violations introduced by https://github.com/apache/nifi/pull/3249 PR) NIFI-5937 adjusted property descriptions for consistency; limited EL scope to variable registry; added an appropriate validator along with its Maven dependency; moved format initialization to @OnScheduled NIFI-5909 tabs to spaces Signed-off-by: Ed This closes #3227 > PutElasticsearchHttpRecord uses system default encoding > --- > > Key: NIFI-5937 > URL: https://issues.apache.org/jira/browse/NIFI-5937 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Fix For: 1.9.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > PutElasticsearchHttpRecord line 348: > {code:java} > json.append(out.toString()); > {code} > This results in the conversion being done using system default encoding, > possibly garbling non-ASCII characters in the output. Should use the encoding > configured in the processor in the toString call. > As a workaround, the "file.encoding" system property can be specified > explicitly in the bootstrap.conf: > {code:java} > java.arg.7=-Dfile.encoding=UTF-8{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5909) PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
[ https://issues.apache.org/jira/browse/NIFI-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738934#comment-16738934 ] ASF subversion and git services commented on NIFI-5909: --- Commit 3e52ae952d05867f3366dc4f796ae193a84b4b2c in nifi's branch refs/heads/master from Alex Savitsky [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3e52ae9 ] NIFI-5909 added optional settings for date, time, and timestamp formats used to write Records to Elasticsearch NIFI-5909 added content checks to the unit tests NIFI-5937 use explicit long value for test dates/times (to not depend on the timezone of test executor) NIFI-5937 tabs to spaces Fixing checkstyle violations introduced by https://github.com/apache/nifi/pull/3249 PR) NIFI-5937 adjusted property descriptions for consistency; limited EL scope to variable registry; added an appropriate validator along with its Maven dependency; moved format initialization to @OnScheduled NIFI-5909 tabs to spaces Signed-off-by: Ed This closes #3227 > PutElasticsearchHttpRecord doesn't allow to customize the timestamp format > -- > > Key: NIFI-5909 > URL: https://issues.apache.org/jira/browse/NIFI-5909 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > All timestamps are sent to Elasticsearch in the "yyyy-MM-dd HH:mm:ss" format, > coming from RecordFieldType.TIMESTAMP.getDefaultFormat(). There are plenty > of use cases that call for Elasticsearch data to be presented differently, > and the format should be customizable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
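The Date/Time/Timestamp Format properties discussed in the NIFI-5909 thread accept Java SimpleDateFormat patterns. A minimal sketch of what such pattern-based formatting looks like (the `TimestampFormatDemo` class and `format` helper are illustrative names; the UTC timezone and US locale are assumptions for reproducibility, not processor behavior):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TimestampFormatDemo {
    // Formats an epoch-millis timestamp with a caller-supplied
    // SimpleDateFormat pattern, the same pattern language the new
    // Date/Time/Timestamp Format properties accept for record fields.
    static String format(long epochMillis, String pattern) {
        SimpleDateFormat sdf = new SimpleDateFormat(pattern, Locale.US);
        sdf.setTimeZone(TimeZone.getTimeZone("UTC"));  // fixed zone for a stable demo
        return sdf.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        long ts = 0L; // 1970-01-01T00:00:00Z
        System.out.println(format(ts, "yyyy-MM-dd HH:mm:ss")); // the old hard-coded default
        System.out.println(format(ts, "MM/dd/yyyy HH:mm:ss")); // a custom pattern
    }
}
```

Before this change, the first pattern (the `RecordFieldType.TIMESTAMP` default) was the only option; the new properties let a flow supply any pattern SimpleDateFormat understands.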
[GitHub] MAliNaqvi commented on issue #144: NIFIREG-209 Rebuild metadata DB from FlowPersistenceProvider when emp…
MAliNaqvi commented on issue #144: NIFIREG-209 Rebuild metadata DB from FlowPersistenceProvider when emp… URL: https://github.com/apache/nifi-registry/pull/144#issuecomment-452947813 @ijokarumawak @kevdoran @bbende I applied the patch for this PR and for #152 and was able to get the NiFi git sync working. I am able to create different branches, e.g. for separate JIRA issues, and the registry pushes correctly and also creates the remote branch. This works really well. Now, from what I understand, if we want to pull changes from git into a registry that is already running, we have to restart the registry, which then repopulates its internal DB. Is there another way? It would be great if the resync could be done via the NiFi Registry UI (e.g. a button under the "Actions" drop-down menu). This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] asfgit closed pull request #469: MINIFICPP-705: Previous fix for some travis failures. Will help isola…
asfgit closed pull request #469: MINIFICPP-705: Previous fix for some travis failures. Will help isola… URL: https://github.com/apache/nifi-minifi-cpp/pull/469 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/extensions/civetweb/processors/ListenHTTP.cpp b/extensions/civetweb/processors/ListenHTTP.cpp index 210ae1c3..469915eb 100644 --- a/extensions/civetweb/processors/ListenHTTP.cpp +++ b/extensions/civetweb/processors/ListenHTTP.cpp @@ -276,6 +276,10 @@ void ListenHTTP::Handler::set_header_attributes(const mg_request_info *req_info, bool ListenHTTP::Handler::handlePost(CivetServer *server, struct mg_connection *conn) { auto req_info = mg_get_request_info(conn); + if (!req_info) { + logger_->log_error("ListenHTTP handling POST resulted in a null request"); + return false; + } logger_->log_debug("ListenHTTP handling POST request of length %ll", req_info->content_length); if (!auth_request(conn, req_info)) { @@ -335,6 +339,10 @@ bool ListenHTTP::Handler::auth_request(mg_connection *conn, const mg_request_inf bool ListenHTTP::Handler::handleGet(CivetServer *server, struct mg_connection *conn) { auto req_info = mg_get_request_info(conn); + if (!req_info) { + logger_->log_error("ListenHTTP handling GET resulted in a null request"); + return false; + } logger_->log_debug("ListenHTTP handling GET request of URI %s", req_info->request_uri); if (!auth_request(conn, req_info)) { This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] ijokarumawak commented on issue #3248: NIFI-3611: Added ability to set Transaction Isolation Level on Database connections for QueryDatabaseTable processor
ijokarumawak commented on issue #3248: NIFI-3611: Added ability to set Transaction Isolation Level on Database connections for QueryDatabaseTable processor URL: https://github.com/apache/nifi/pull/3248#issuecomment-452943657 Hi @erichanson5 Thanks for trying to incorporate the review comments. However, this PR now has other unnecessary commits in it, and it looks like it no longer compiles. Would you clean up the commits? I don't use `merge` when updating the local branch with the latest master; instead I use `rebase`. To recover from this state, I would use the following steps: 1. Rename the current branch to a different name, to clean up the branch. e.g. `git branch -m NIFI-3611-bk` 2. Update the master branch to the latest. e.g. ``` $ git checkout master # the name 'upstream' may be different; it's pointing at github.com/apache/nifi.git $ git pull upstream master ``` 3. Create a new branch from the latest master. e.g. `git checkout -b NIFI-3611` 4. Port the 1st commit from the old branch. e.g. `git cherry-pick 27d91507a00e095e212c1c96856806f53e868b82` At this point, your new NIFI-3611 branch is synced with the latest master and has the 1st commit. 5. Apply the changes you wanted to make in the last commit. I recommend applying the changes manually instead of cherry-picking here. At this point, your branch has the updates that incorporate the review comments. 6. Push it to update this PR. e.g. `git push -f origin NIFI-3611` You need the `-f` option to force-push the new commits. Hope this helps. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] phrocker commented on a change in pull request #465: MINIFICPP-700: Add MSI Support via CPACK
phrocker commented on a change in pull request #465: MINIFICPP-700: Add MSI Support via CPACK URL: https://github.com/apache/nifi-minifi-cpp/pull/465#discussion_r246614252 ## File path: msi/LICENSE.txt ## @@ -0,0 +1,1633 @@ + Review comment: ah yeah. agreed. I'll get the cpack integrated into appveyor. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (NIFI-5941) LogMessage routes to nonexistent failure when log level is below logback allowed
[ https://issues.apache.org/jira/browse/NIFI-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738880#comment-16738880 ] Matthew Dinep commented on NIFI-5941: - [~mmo18] It does look like it overlaps, but judging from the code commit on that ticket, the fix just avoids the "relationship not specified" error being thrown by not processing anything if the log level isn't enabled, which means that log messages could potentially be lost by end users. The ProcessFlowFile method is only called if the isLogLevelEnabled check returns true, with no else case for a disabled log level. If this is the desired behavior, then this ticket can be closed. > LogMessage routes to nonexistent failure when log level is below logback > allowed > > > Key: NIFI-5941 > URL: https://issues.apache.org/jira/browse/NIFI-5941 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.7.0, 1.8.0, 1.7.1 >Reporter: Matthew Dinep >Priority: Major > > When using the LogMessage processor, if a message is configured to log at a > level that is below what is set in logback.xml (for example, logging at "info" > when the default log level is "warn"), the message doesn't get logged and an > error is thrown because the flowfile is unable to be routed to failure (the > only available route on the processor is Success). Since this is a user > configurable log level for a specific case, the level for the message should > be able to override the global log level in logback.xml so as to avoid this > behavior. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
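The control flow described in the comment above can be sketched as follows. This is a stand-alone illustration; the `Level` enum and the `isLogLevelEnabled`/`process` names merely mirror the description in the comment, not the actual processor source:

```java
public class LogLevelGuardDemo {
    // Minimal sketch of the flow described above: when the configured
    // message level is below the logger's threshold, isLogLevelEnabled()
    // returns false, the process step is never reached, and because there
    // is no else branch, the message is silently dropped.
    enum Level { DEBUG, INFO, WARN, ERROR }

    static boolean isLogLevelEnabled(Level threshold, Level msg) {
        return msg.ordinal() >= threshold.ordinal();
    }

    static String process(Level threshold, Level msg) {
        if (isLogLevelEnabled(threshold, msg)) {
            return "logged";   // stands in for processFlowFile(...)
        }
        return "dropped";      // no else case in the fix: the message is lost
    }

    public static void main(String[] args) {
        System.out.println(process(Level.WARN, Level.INFO));  // below threshold
        System.out.println(process(Level.WARN, Level.ERROR)); // at/above threshold
    }
}
```

Whether "dropped" is acceptable is exactly the open question in the comment: the guard prevents the routing error, but gives users no signal that their message never made it to the log.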
[jira] [Updated] (NIFI-5941) LogMessage routes to nonexistent failure when log level is below logback allowed
[ https://issues.apache.org/jira/browse/NIFI-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthew Dinep updated NIFI-5941: Affects Version/s: 1.8.0 > LogMessage routes to nonexistent failure when log level is below logback > allowed > > > Key: NIFI-5941 > URL: https://issues.apache.org/jira/browse/NIFI-5941 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.7.0, 1.8.0, 1.7.1 >Reporter: Matthew Dinep >Priority: Major > > When using the LogMessage processor, if a message is configured to log at a > level that is below what is set in logback.xml (for example logging at "info" > when the default log level is "warn"), the message doesn't get logged and an > error is thrown because the flowfile is unable to be routed to failure (the > only available route on the processor is Success). Since this is a user > configurable log level for a specific case, the level for the message should > be able to override the global log level in logback.xml so as to avoid this > behavior. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] bdesert commented on issue #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert commented on issue #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227#issuecomment-452924053 +1 LGTM. Pulled the changes and rebuilt the instance. Tested with different formats. Thank you for improvements! Merging to master. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Resolved] (MINIFICPP-697) Same named AST levels are collapse in c2 response
[ https://issues.apache.org/jira/browse/MINIFICPP-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri resolved MINIFICPP-697. --- Resolution: Fixed Fix Version/s: 0.6.0 > Same named AST levels are collapse in c2 response > - > > Key: MINIFICPP-697 > URL: https://issues.apache.org/jira/browse/MINIFICPP-697 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Mr TheSegfault >Assignee: Mr TheSegfault >Priority: Major > Fix For: 0.6.0 > > > May not always want this to be the case, so give the creator flexibility to > avoid problems. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (MINIFICPP-479) Incorporate property validation information into manifest
[ https://issues.apache.org/jira/browse/MINIFICPP-479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri updated MINIFICPP-479: -- Fix Version/s: 0.6.0 > Incorporate property validation information into manifest > - > > Key: MINIFICPP-479 > URL: https://issues.apache.org/jira/browse/MINIFICPP-479 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Mr TheSegfault >Priority: Major > Fix For: 0.6.0 > > Time Spent: 4.5h > Remaining Estimate: 0h > > High-level intent is to avoid round-trip to c2 to know that flow is valid > (or, invalid in common/trivial ways). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (MINIFICPP-479) Incorporate property validation information into manifest
[ https://issues.apache.org/jira/browse/MINIFICPP-479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri resolved MINIFICPP-479. --- Resolution: Fixed > Incorporate property validation information into manifest > - > > Key: MINIFICPP-479 > URL: https://issues.apache.org/jira/browse/MINIFICPP-479 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Mr TheSegfault >Priority: Major > Time Spent: 4.5h > Remaining Estimate: 0h > > High-level intent is to avoid round-trip to c2 to know that flow is valid > (or, invalid in common/trivial ways). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5945) Add password based login to kerberos utils in nifi-security-utils
[ https://issues.apache.org/jira/browse/NIFI-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-5945: -- Status: Patch Available (was: Open) > Add password based login to kerberos utils in nifi-security-utils > - > > Key: NIFI-5945 > URL: https://issues.apache.org/jira/browse/NIFI-5945 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > There is some utility code in nifi-commons security utils to do a kerberos > login for a principal using a keytab. It would be helpful to also be able to > login with a password. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] bbende opened a new pull request #3256: NIFI-5945 Add support for password login to kerberos code in nifi-sec…
bbende opened a new pull request #3256: NIFI-5945 Add support for password login to kerberos code in nifi-sec… URL: https://github.com/apache/nifi/pull/3256 …urity-utils Run "mvn clean install -Pcontrib-check,integration-tests" to run the IT that uses MiniKDC to test the kerberos functionality. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (NIFI-5945) Add password based login to kerberos utils in nifi-security-utils
Bryan Bende created NIFI-5945: - Summary: Add password based login to kerberos utils in nifi-security-utils Key: NIFI-5945 URL: https://issues.apache.org/jira/browse/NIFI-5945 Project: Apache NiFi Issue Type: Improvement Reporter: Bryan Bende Assignee: Bryan Bende There is some utility code in nifi-commons security utils to do a kerberos login for a principal using a keytab. It would be helpful to also be able to login with a password.
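A password-based Kerberos login in Java is typically done through JAAS with the JDK's Krb5LoginModule, supplying the principal and password via a CallbackHandler instead of pointing the module at a keytab. The following is an illustrative sketch, not NiFi's actual API; the class and names are hypothetical:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;

// Hypothetical handler that feeds a principal and password to the JDK's
// Krb5LoginModule during a JAAS login. This is the standard mechanism for a
// password-based Kerberos login; it is a sketch, not NiFi's implementation.
class PasswordCallbackHandler implements CallbackHandler {
    private final String principal;
    private final char[] password;

    PasswordCallbackHandler(String principal, char[] password) {
        this.principal = principal;
        this.password = password;
    }

    @Override
    public void handle(Callback[] callbacks) {
        for (Callback cb : callbacks) {
            if (cb instanceof NameCallback) {
                ((NameCallback) cb).setName(principal);
            } else if (cb instanceof PasswordCallback) {
                ((PasswordCallback) cb).setPassword(password);
            }
            // Other callback types are ignored in this sketch.
        }
    }
}
```

A LoginContext built with a Configuration naming com.sun.security.auth.module.Krb5LoginModule (without useKeyTab=true) would call login() using this handler; actually authenticating requires a reachable KDC, which is why the associated PR exercises it with MiniKDC in an integration test.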
[jira] [Created] (NIFI-5944) When NiFi restarts, if a component is expected to be running but is invalid, it never starts, even if it becomes valid later
Mark Payne created NIFI-5944: Summary: When NiFi restarts, if a component is expected to be running but is invalid, it never starts, even if it becomes valid later Key: NIFI-5944 URL: https://issues.apache.org/jira/browse/NIFI-5944 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne When NiFi starts up, if a component is expected to be running, NiFi will attempt to start it. However, if the component is invalid for any reason, NiFi throws an Exception and gives up instead of starting the component. Instead, it should wait until the component becomes valid and then begin scheduling (or, in the case of a controller service, enabling) the component.
[jira] [Commented] (NIFI-5943) Enhance Avro conversions
[ https://issues.apache.org/jira/browse/NIFI-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738697#comment-16738697 ] Mark Payne commented on NIFI-5943: -- [~alex_savitsky] can you elaborate on your thoughts here? From the Description, it sounds as if you're concluding that no enhancements are needed. > Enhance Avro conversions > > > Key: NIFI-5943 > URL: https://issues.apache.org/jira/browse/NIFI-5943 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Minor > > AvroTypeUtil has all the necessary information to support conversions of List > objects to ARRAY (currently only Java array is supported) as well as Map > objects to RECORD (currently only NiFi Record is supported)
[jira] [Commented] (NIFI-5941) LogMessage routes to nonexistent failure when log level is below logback allowed
[ https://issues.apache.org/jira/browse/NIFI-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738685#comment-16738685 ] Michael Moser commented on NIFI-5941: - This is possibly the same issue as NIFI-5652. Can you check that out [~coffeethulhu], and let us know? Thank you. > LogMessage routes to nonexistent failure when log level is below logback > allowed > > > Key: NIFI-5941 > URL: https://issues.apache.org/jira/browse/NIFI-5941 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.7.0, 1.7.1 >Reporter: Matthew Dinep >Priority: Major > > When using the LogMessage processor, if a message is configured to log at a > level that is below what is set in logback.xml (for example logging at "info" > when the default log level is "warn"), the message doesn't get logged and an > error is thrown because the flowfile is unable to be routed to failure (the > only available route on the processor is Success). Since this is a user > configurable log level for a specific case, the level for the message should > be able to override the global log level in logback.xml so as to avoid this > behavior.
[jira] [Created] (NIFI-5943) Enhance Avro conversions
Alex Savitsky created NIFI-5943: --- Summary: Enhance Avro conversions Key: NIFI-5943 URL: https://issues.apache.org/jira/browse/NIFI-5943 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Alex Savitsky Assignee: Alex Savitsky AvroTypeUtil has all the necessary information to support conversions of List objects to ARRAY (currently only Java array is supported) as well as Map objects to RECORD (currently only NiFi Record is supported)
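The normalization NIFI-5943 asks for can be illustrated without the Avro API: accept a java.util.List where an ARRAY writer expects a Java array, and a java.util.Map where a RECORD writer expects named fields. The helper below is a standalone sketch of that coercion, not AvroTypeUtil's actual code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the requested conversions: List -> Object[] (for Avro
// ARRAY) and Map -> ordered String-keyed map (for Avro RECORD field values).
// Class and method names are illustrative, not NiFi's API.
class AvroCoercion {
    /** A List becomes the Object[] form that an ARRAY writer already handles. */
    static Object[] listToArray(List<?> list) {
        return list.toArray(new Object[0]);
    }

    /** RECORD field names are strings, so Map keys are coerced to String. */
    static Map<String, Object> mapToRecordValues(Map<?, ?> map) {
        Map<String, Object> record = new LinkedHashMap<>();
        for (Map.Entry<?, ?> e : map.entrySet()) {
            record.put(String.valueOf(e.getKey()), e.getValue());
        }
        return record;
    }
}
```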
[GitHub] bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227#discussion_r246531708 ## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/pom.xml ## @@ -122,6 +122,11 @@ language governing permissions and limitations under the License. --> jackson-databind ${jackson.version} + Review comment: @SavtechSolutions , could you please change tabs to spaces here as well?
[GitHub] patricker opened a new pull request #3255: NIFI-5940 Cluster Node Offload Hangs if any RPG on flow is Disabled
patricker opened a new pull request #3255: NIFI-5940 Cluster Node Offload Hangs if any RPG on flow is Disabled URL: https://github.com/apache/nifi/pull/3255 If any Remote Process Group on the flow is disabled when a user starts a node Offload, the offload fails. This is because the Offload process tries to turn off all Remote Process Groups, even if they are already disabled, which causes an unexpected exception to be thrown. There are no test cases for Offload (there is the 'Problematic unit test' written in Groovy, but it's very problematic). ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
[jira] [Created] (NIFI-5942) error attempting to provision flow from registry in invalid state
Charlie Meyer created NIFI-5942: --- Summary: error attempting to provision flow from registry in invalid state Key: NIFI-5942 URL: https://issues.apache.org/jira/browse/NIFI-5942 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.8.0 Reporter: Charlie Meyer I have a versioned process group that has a processor that is event driven with 0 concurrent tasks set. I then took that process group and changed the processor to be timer driven, but left it at zero concurrent tasks. I then pushed the flow changes to the registry and it accepted them. When I attempted to provision the flow from the registry, I got an error that a timer driven processor cannot have zero concurrent tasks. I would expect that if a flow cannot be provisioned from the registry, it should not be allowed to be pushed to the registry in the first place.
[jira] [Created] (NIFI-5941) LogMessage routes to nonexistent failure when log level is below logback allowed
Matthew Dinep created NIFI-5941: --- Summary: LogMessage routes to nonexistent failure when log level is below logback allowed Key: NIFI-5941 URL: https://issues.apache.org/jira/browse/NIFI-5941 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.7.1, 1.7.0 Reporter: Matthew Dinep When using the LogMessage processor, if a message is configured to log at a level that is below what is set in logback.xml (for example logging at "info" when the default log level is "warn"), the message doesn't get logged and an error is thrown because the flowfile is unable to be routed to failure (the only available route on the processor is Success). Since this is a user configurable log level for a specific case, the level for the message should be able to override the global log level in logback.xml so as to avoid this behavior.
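The requested behavior can be modeled with a simple level check: if the requested level is below the logger's effective threshold, the message is suppressed and the FlowFile should still route to success rather than fail. This is an illustrative model, not NiFi's code; the level ordering follows logback's:

```java
import java.util.Arrays;
import java.util.List;

// Minimal model of the NIFI-5941 scenario (hypothetical class, not NiFi's
// implementation): decide whether a requested log level passes the logger's
// effective threshold. A suppressed message should be dropped silently, with
// the FlowFile still routed to success, since LogMessage has no "failure"
// relationship.
class LogLevelGate {
    // logback's level ordering, lowest to highest.
    static final List<String> LEVELS = Arrays.asList("TRACE", "DEBUG", "INFO", "WARN", "ERROR");

    /** true if a message at requestedLevel would actually be emitted. */
    static boolean isLoggable(String requestedLevel, String effectiveLevel) {
        return LEVELS.indexOf(requestedLevel) >= LEVELS.indexOf(effectiveLevel);
    }
}
```

With an effective level of "warn", isLoggable("INFO", "WARN") is false, which matches the "info below warn" case in the report; the processor could use such a check to skip logging without throwing.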
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246496358 ## File path: libminifi/include/core/state/Value.h ## @@ -74,94 +157,215 @@ class BoolValue : public Value { explicit BoolValue(bool value) : Value(value ? "true" : "false"), value(value) { +setTypeId(); + } + explicit BoolValue(const std::string ) + : Value(strvalue) { +bool l; +std::istringstream(strvalue) >> std::boolalpha >> l; +value = l; // avoid warnings } - bool getValue() { + + bool getValue() const { return value; } protected: + + virtual bool getValue(int ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(int64_t ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(uint64_t ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(bool ) { +ref = value; +return true; + } + bool value; }; -class Int64Value : public Value { +class UInt64Value : public Value { public: - explicit Int64Value(uint64_t value) + explicit UInt64Value(uint64_t value) : Value(std::to_string(value)), value(value) { +setTypeId(); + } + explicit UInt64Value(const std::string ) + : Value(strvalue), +value(std::stoull(strvalue)) { +setTypeId(); } - uint64_t getValue() { + + uint64_t getValue() const { return value; } protected: + + virtual bool getValue(int ) { +return false; + } + + virtual bool getValue(int64_t ) { +if (value < (std::numeric_limits::max)()) { Review comment: thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. 
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246495818 ## File path: libminifi/include/FlowController.h ## @@ -175,21 +175,27 @@ class FlowController : public core::controller::ControllerServiceProvider, publi bool applyConfiguration(const std::string , const std::string ); // get name - std::string getName() const{ + std::string getName() const { if (root_ != nullptr) return root_->getName(); else return ""; } - virtual std::string getComponentName() { + virtual std::string getComponentName() const { return "FlowController"; } + virtual std::string getComponentUUID() const { +utils::Identifier ident; +root_->getUUID(ident); +return ident.to_string(); + } + // get version virtual std::string getVersion() { Review comment: I didn't due to this existing a while, but will make the change in 707. thanks!
[jira] [Created] (NIFI-5940) Cluster Node Offload Hangs if any RPG on flow is Disabled
Peter Wicks created NIFI-5940: - Summary: Cluster Node Offload Hangs if any RPG on flow is Disabled Key: NIFI-5940 URL: https://issues.apache.org/jira/browse/NIFI-5940 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks If any Remote Process Group on the flow is disabled when a user starts a node Offload, the offload fails. This is because the Offload process tries to turn off all Remote Process Groups, even if they are already disabled. 2019-01-09 17:22:00,823 ERROR [Offload Flow Files from Node] org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Offload Flow Files from Node,5,main]: java.lang.IllegalStateException: 33a4935b-5800-360d-9250-2179e3ef5efe is not transmitting 2019-01-09 17:22:00,823 ERROR [Offload Flow Files from Node] org.apache.nifi.NiFi java.lang.IllegalStateException: 33a4935b-5800-360d-9250-2179e3ef5efe is not transmitting at org.apache.nifi.remote.StandardRemoteProcessGroup.verifyCanStopTransmitting(StandardRemoteProcessGroup.java:1333) at org.apache.nifi.remote.StandardRemoteProcessGroup.stopTransmitting(StandardRemoteProcessGroup.java:1036) at java.util.ArrayList.forEach(ArrayList.java:1249) at org.apache.nifi.controller.StandardFlowService.offload(StandardFlowService.java:706) at org.apache.nifi.controller.StandardFlowService.handleOffloadRequest(StandardFlowService.java:688) at org.apache.nifi.controller.StandardFlowService.access$400(StandardFlowService.java:105) at org.apache.nifi.controller.StandardFlowService$3.run(StandardFlowService.java:428) at java.lang.Thread.run(Thread.java:745)
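The shape of the fix is to guard the stopTransmitting() call so groups that are not transmitting are skipped rather than allowed to throw the IllegalStateException seen in the stack trace above. The classes below are simplified stand-ins for NiFi's StandardRemoteProcessGroup, not the actual code:

```java
import java.util.List;

// Sketch of the NIFI-5940 fix: only stop RPGs that are actually
// transmitting. Hypothetical simplified model, not NiFi's implementation.
class OffloadHelper {
    static class RemoteProcessGroup {
        private boolean transmitting;
        RemoteProcessGroup(boolean transmitting) { this.transmitting = transmitting; }
        boolean isTransmitting() { return transmitting; }
        void stopTransmitting() {
            // Mirrors verifyCanStopTransmitting(): stopping a non-transmitting
            // group is an error.
            if (!transmitting) {
                throw new IllegalStateException("is not transmitting");
            }
            transmitting = false;
        }
    }

    /** Stops only the transmitting RPGs; returns how many were stopped. */
    static int offload(List<RemoteProcessGroup> groups) {
        int stopped = 0;
        for (RemoteProcessGroup rpg : groups) {
            if (rpg.isTransmitting()) {   // the guard that prevents the failure
                rpg.stopTransmitting();
                stopped++;
            }
        }
        return stopped;
    }
}
```

Without the isTransmitting() guard, an already-disabled group in the list would throw and abort the whole offload, which is the reported hang.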
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246492926 ## File path: libminifi/src/core/PropertyValidation.cpp ## @@ -0,0 +1,40 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include "core/PropertyValidation.h" +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace core { + +std::shared_ptr StandardValidators::VALID = std::make_shared(true, "VALID"); +StandardValidators::StandardValidators() { + INVALID = std::make_shared(false, "INVALID"); + INTEGER_VALIDATOR = std::make_shared("INTEGER_VALIDATOR"); + LONG_VALIDATOR = std::make_shared("LONG_VALIDATOR"); + UNSIGNED_LONG_VALIDATOR = std::make_shared("LONG_VALIDATOR"); + SIZE_VALIDATOR = std::make_shared("DATA_SIZE_VALIDATOR"); Review comment: thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246492714 ## File path: libminifi/include/core/state/Value.h ## @@ -74,94 +157,215 @@ class BoolValue : public Value { explicit BoolValue(bool value) : Value(value ? "true" : "false"), value(value) { +setTypeId(); + } + explicit BoolValue(const std::string ) + : Value(strvalue) { +bool l; +std::istringstream(strvalue) >> std::boolalpha >> l; +value = l; // avoid warnings } - bool getValue() { + + bool getValue() const { return value; } protected: + + virtual bool getValue(int ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(int64_t ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(uint64_t ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(bool ) { +ref = value; +return true; + } + bool value; }; -class Int64Value : public Value { +class UInt64Value : public Value { public: - explicit Int64Value(uint64_t value) + explicit UInt64Value(uint64_t value) : Value(std::to_string(value)), value(value) { +setTypeId(); + } + explicit UInt64Value(const std::string ) + : Value(strvalue), +value(std::stoull(strvalue)) { +setTypeId(); } - uint64_t getValue() { + + uint64_t getValue() const { return value; } protected: + + virtual bool getValue(int ) { +return false; + } + + virtual bool getValue(int64_t ) { +if (value < (std::numeric_limits::max)()) { + ref = value; + return true; +} +return false; + } + + virtual bool getValue(uint64_t ) { +ref = value; +return true; + } + + virtual bool getValue(bool ) { +return false; + } + uint64_t value; }; +class 
Int64Value : public Value { + public: + explicit Int64Value(int64_t value) + : Value(std::to_string(value)), +value(value) { +setTypeId(); + } + explicit Int64Value(const std::string ) + : Value(strvalue), +value(std::stoll(strvalue)) { +setTypeId(); + } + + int64_t getValue() { +return value; + } + protected: -static inline std::shared_ptr createValue( -const bool ) { + virtual bool getValue(int ) { +return false; + } + + virtual bool getValue(int64_t ) { +ref = value; +return true; + } + + virtual bool getValue(uint64_t ) { +if (value >= 0) { + ref = value; + return true; +} +return true; Review comment: thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246492470 ## File path: libminifi/include/core/TypedValues.h ## @@ -0,0 +1,232 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#ifndef LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ +#define LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ + +#include "state/Value.h" +#include +#include "utils/StringUtils.h" +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace core { + +class TransformableValue { + public: + TransformableValue() { + } +}; + + +/** + * Purpose and Design: TimePeriodValue represents a time period that can be set via a numeric followed by + * a time unit string. This Value is based on uint64, but has the support to return + * the original string representation. Once set, both are immutable. 
+ */ +class TimePeriodValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit TimePeriodValue(const std::string ) + : state::response::UInt64Value(0) { +TimeUnit units; +StringToTime(timeString, value, units); +string_value = timeString; +ConvertTimeUnitToMS(value, units, value); + } + + explicit TimePeriodValue(uint64_t value) + : state::response::UInt64Value(value) { + } + + // Convert TimeUnit to MilliSecond + template + static bool ConvertTimeUnitToMS(T input, TimeUnit unit, T ) { +if (unit == MILLISECOND) { + out = input; + return true; +} else if (unit == SECOND) { + out = input * 1000; + return true; +} else if (unit == MINUTE) { + out = input * 60 * 1000; + return true; +} else if (unit == HOUR) { + out = input * 60 * 60 * 1000; + return true; +} else if (unit == DAY) { + out = 24 * 60 * 60 * 1000; + return true; +} else if (unit == NANOSECOND) { + out = input / 1000 / 1000; + return true; +} else { + return false; +} + } + + static bool StringToTime(std::string input, uint64_t , TimeUnit ) { +if (input.size() == 0) { + return false; +} + +const char *cvalue = input.c_str(); +char *pEnd; +auto ival = std::strtoll(cvalue, , 0); + +if (pEnd[0] == '\0') { + return false; +} + +while (*pEnd == ' ') { + // Skip the space + pEnd++; +} + +std::string unit(pEnd); +std::transform(unit.begin(), unit.end(), unit.begin(), ::tolower); + +if (unit == "sec" || unit == "s" || unit == "second" || unit == "seconds" || unit == "secs") { + timeunit = SECOND; + output = ival; + return true; +} else if (unit == "msec" || unit == "ms" || unit == "millisecond" || unit == "milliseconds" || unit == "msecs") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "min" || unit == "m" || unit == "mins" || unit == "minute" || unit == "minutes") { + timeunit = MINUTE; + output = ival; + return true; +} else if (unit == "ns" || unit == "nano" || unit == "nanos" || unit == 
"nanoseconds") { + timeunit = NANOSECOND; + output = ival; + return true; +} else if (unit == "ms" || unit == "milli" || unit == "millis" || unit == "milliseconds") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "h" || unit == "hr" || unit == "hour" || unit == "hrs" || unit == "hours") { + timeunit = HOUR; + output = ival; + return true; +} else if (unit == "d" || unit == "day" || unit == "days") { + timeunit = DAY; + output = ival; + return true; +} else + return false; + } +}; + +/** + * Purpose and Design: DataSizeValue represents a file system size value that extends + * Uint64Value. This means that a string is converted to uint64_t. The string is of the + * format . + */ +class DataSizeValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit DataSizeValue(const std::string ) + :
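The TimePeriodValue parsing quoted above (StringToTime plus ConvertTimeUnitToMS) can be sketched in Java as a single table-driven conversion. Note two caveats: the quoted C++ DAY branch computes `out = 24 * 60 * 60 * 1000` without multiplying by `input`, which looks like a bug, and this sketch requires whitespace between number and unit as a simplification (the C++ version also accepts "100ms"). Class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Illustrative Java port (not MiNiFi's code) of the time-period parsing
// quoted above: "<number> <unit>" is normalized to milliseconds.
class TimePeriod {
    private static final Map<String, Long> UNIT_MS = new HashMap<>();
    static {
        for (String u : new String[]{"ms", "msec", "msecs", "milli", "millis", "millisecond", "milliseconds"}) UNIT_MS.put(u, 1L);
        for (String u : new String[]{"s", "sec", "secs", "second", "seconds"}) UNIT_MS.put(u, 1000L);
        for (String u : new String[]{"m", "min", "mins", "minute", "minutes"}) UNIT_MS.put(u, 60_000L);
        for (String u : new String[]{"h", "hr", "hrs", "hour", "hours"}) UNIT_MS.put(u, 3_600_000L);
        for (String u : new String[]{"d", "day", "days"}) UNIT_MS.put(u, 86_400_000L);
    }

    /** Parses e.g. "30 sec" to 30000 ms; throws IllegalArgumentException on bad input. */
    static long toMillis(String input) {
        String[] parts = input.trim().split("\\s+", 2);
        if (parts.length != 2 || !UNIT_MS.containsKey(parts[1].toLowerCase(Locale.ROOT))) {
            throw new IllegalArgumentException("unparseable time period: " + input);
        }
        return Long.parseLong(parts[0]) * UNIT_MS.get(parts[1].toLowerCase(Locale.ROOT));
    }
}
```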
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246490376 ## File path: libminifi/include/core/Property.h ## @@ -129,21 +102,72 @@ class Property { std::string getDisplayName() const; std::vector getAllowedTypes() const; std::string getDescription() const; - std::string getValue() const; + std::shared_ptr getValidator() const; + const PropertyValue () const; bool getRequired() const; bool supportsExpressionLangauge() const; std::string getValidRegex() const; std::vector getDependentProperties() const; std::vector> getExclusiveOfProperties() const; - std::vector (); + std::vector getValues(); + + const PropertyValue () const { +return default_value_; + } + + template + void setValue(const T ) { +PropertyValue vn = default_value_; +vn = value; +if (validator_) { + vn.setValidator(validator_); + ValidationResult result = validator_->validate(name_, vn.getValue()); + if (!result.valid()) { +// throw some exception? + } +} else { + vn.setValidator(core::StandardValidators::VALID); +} +if (!is_collection_) { + values_.clear(); + values_.push_back(vn); +} else { Review comment: Thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (MINIFICPP-707) Should explore extending TypedValues for other types
Mr TheSegfault created MINIFICPP-707: Summary: Should explore extending TypedValues for other types Key: MINIFICPP-707 URL: https://issues.apache.org/jira/browse/MINIFICPP-707 Project: NiFi MiNiFi C++ Issue Type: Bug Reporter: Mr TheSegfault Assignee: Mr TheSegfault
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246490182 ## File path: libminifi/include/core/TypedValues.h ## @@ -0,0 +1,232 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#ifndef LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ +#define LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ + +#include "state/Value.h" +#include +#include "utils/StringUtils.h" +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace core { + +class TransformableValue { + public: + TransformableValue() { + } +}; + + +/** + * Purpose and Design: TimePeriodValue represents a time period that can be set via a numeric followed by + * a time unit string. This Value is based on uint64, but has the support to return + * the original string representation. Once set, both are immutable. 
+ */ +class TimePeriodValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit TimePeriodValue(const std::string ) + : state::response::UInt64Value(0) { +TimeUnit units; +StringToTime(timeString, value, units); +string_value = timeString; +ConvertTimeUnitToMS(value, units, value); + } + + explicit TimePeriodValue(uint64_t value) + : state::response::UInt64Value(value) { + } + + // Convert TimeUnit to MilliSecond + template + static bool ConvertTimeUnitToMS(T input, TimeUnit unit, T ) { +if (unit == MILLISECOND) { + out = input; + return true; +} else if (unit == SECOND) { + out = input * 1000; + return true; +} else if (unit == MINUTE) { + out = input * 60 * 1000; + return true; +} else if (unit == HOUR) { + out = input * 60 * 60 * 1000; + return true; +} else if (unit == DAY) { + out = 24 * 60 * 60 * 1000; + return true; +} else if (unit == NANOSECOND) { + out = input / 1000 / 1000; + return true; +} else { + return false; +} + } + + static bool StringToTime(std::string input, uint64_t , TimeUnit ) { +if (input.size() == 0) { + return false; +} + +const char *cvalue = input.c_str(); +char *pEnd; +auto ival = std::strtoll(cvalue, , 0); + +if (pEnd[0] == '\0') { + return false; +} + +while (*pEnd == ' ') { + // Skip the space + pEnd++; +} + +std::string unit(pEnd); +std::transform(unit.begin(), unit.end(), unit.begin(), ::tolower); + +if (unit == "sec" || unit == "s" || unit == "second" || unit == "seconds" || unit == "secs") { + timeunit = SECOND; + output = ival; + return true; +} else if (unit == "msec" || unit == "ms" || unit == "millisecond" || unit == "milliseconds" || unit == "msecs") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "min" || unit == "m" || unit == "mins" || unit == "minute" || unit == "minutes") { + timeunit = MINUTE; + output = ival; + return true; +} else if (unit == "ns" || unit == "nano" || unit == "nanos" || unit == 
"nanoseconds") { + timeunit = NANOSECOND; + output = ival; + return true; +} else if (unit == "ms" || unit == "milli" || unit == "millis" || unit == "milliseconds") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "h" || unit == "hr" || unit == "hour" || unit == "hrs" || unit == "hours") { + timeunit = HOUR; + output = ival; + return true; +} else if (unit == "d" || unit == "day" || unit == "days") { + timeunit = DAY; + output = ival; + return true; +} else + return false; + } +}; + +/** + * Purpose and Design: DataSizeValue represents a file system size value that extends + * Uint64Value. This means that a string is converted to uint64_t. The string is of the + * format . + */ +class DataSizeValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit DataSizeValue(const std::string ) + :
[GitHub] SavtechSolutions commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
SavtechSolutions commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
URL: https://github.com/apache/nifi/pull/3227#discussion_r246490098

## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
@@ -178,6 +178,36 @@
         .required(true)
         .build();
 
+    static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder()
+        .name("put-es-record-date-format")
+        .displayName("Date Format")
+        .description("Custom date format to use when converting fields of date type " +

Review comment: Also, I see the existing PDs using the SimpleDateFormatValidator to validate formats. It probably makes sense for me to use the same, but this means an additional POM dependency on nifi-standard-record-utils - is it OK to add that?

This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246489653

## File path: libminifi/include/core/state/Value.h
@@ -46,26 +56,99 @@ class Value {
     return string_value;
   }
 
-  bool empty(){
+  template<typename T>
+  bool convertValue(T &ref) {
+    return convertValueImpl<typename std::decay<T>::type>(ref);
+  }
+
+  bool empty() {
     return string_value.empty();
   }
 
+  std::type_index getTypeIndex() {
+    return type_id;
+  }
+
+  static const std::type_index UINT64_TYPE;
+  static const std::type_index INT64_TYPE;
+  static const std::type_index INT_TYPE;
+  static const std::type_index BOOL_TYPE;
+  static const std::type_index STRING_TYPE;
+
  protected:
-  std::string string_value;
+  template<typename T>
+  bool convertValueImpl(T &ref) {
+    return getValue(ref);
+  }
+
+  template<typename T>
+  void setTypeId() {
+    type_id = std::type_index(typeid(T));
+  }
+
+  virtual bool getValue(int &ref) {
+    ref = std::stol(string_value);
+    return true;
+  }
+
+  virtual bool getValue(int64_t &ref) {
+    ref = std::stoll(string_value);
+    return true;
+  }
+
+  virtual bool getValue(uint64_t &ref) {
+    ref = std::stoull(string_value);

Review comment: Yeah, and that's handled by the call path. Logs should make that evident when run in these cases (hopefully).
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246489520

## File path: libminifi/include/core/state/Value.h
@@ -46,26 +56,99 @@ class Value {
     return string_value;
   }
 
-  bool empty(){
+  template<typename T>
+  bool convertValue(T &ref) {
+    return convertValueImpl<typename std::decay<T>::type>(ref);
+  }
+
+  bool empty() {
     return string_value.empty();
  }
 
+  std::type_index getTypeIndex() {
+    return type_id;
+  }
+
+  static const std::type_index UINT64_TYPE;
+  static const std::type_index INT64_TYPE;
+  static const std::type_index INT_TYPE;
+  static const std::type_index BOOL_TYPE;
+  static const std::type_index STRING_TYPE;
+
  protected:
-  std::string string_value;
+  template<typename T>
+  bool convertValueImpl(T &ref) {
+    return getValue(ref);
+  }
+
+  template<typename T>
+  void setTypeId() {
+    type_id = std::type_index(typeid(T));
+  }
+
+  virtual bool getValue(int &ref) {
+    ref = std::stol(string_value);
+    return true;
+  }
+
+  virtual bool getValue(int64_t &ref) {
+    ref = std::stoll(string_value);
+    return true;
+  }
+
+  virtual bool getValue(uint64_t &ref) {
+    ref = std::stoull(string_value);
+    return true;
+  }
+
+  virtual bool getValue(bool &ref) {
+    std::istringstream(string_value) >> std::boolalpha >> ref;

Review comment: That would be inconsistent with prior behavior I'm afraid.
[GitHub] phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
phrocker commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246489153

## File path: libminifi/include/io/validation.h
@@ -27,6 +27,21 @@
  * A checker that will, at compile time, tell us
  * if the declared type has a size method.
  */
+template<typename T>
+class empty_function_functor_checker {

Review comment: yep just a switch to empty.
[jira] [Updated] (NIFI-4915) Add support for HBase 2.0.0
[ https://issues.apache.org/jira/browse/NIFI-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-4915: -- Status: Patch Available (was: Open) > Add support for HBase 2.0.0 > --- > > Key: NIFI-4915 > URL: https://issues.apache.org/jira/browse/NIFI-4915 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > The HBase community is gearing up for their 2.0.0 release and currently has a > 2.0.0-beta-1 release out. We should provide a new HBaseClientService that > uses the 2.0.0 client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] bbende opened a new pull request #3254: NIFI-4915 Adding HBase 2.x service bundle
bbende opened a new pull request #3254: NIFI-4915 Adding HBase 2.x service bundle
URL: https://github.com/apache/nifi/pull/3254

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
[GitHub] apiri commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
apiri commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246466034 ## File path: libminifi/src/core/PropertyValidation.cpp ## @@ -0,0 +1,40 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include "core/PropertyValidation.h" +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace core { + +std::shared_ptr StandardValidators::VALID = std::make_shared(true, "VALID"); +StandardValidators::StandardValidators() { + INVALID = std::make_shared(false, "INVALID"); + INTEGER_VALIDATOR = std::make_shared("INTEGER_VALIDATOR"); + LONG_VALIDATOR = std::make_shared("LONG_VALIDATOR"); + UNSIGNED_LONG_VALIDATOR = std::make_shared("LONG_VALIDATOR"); + SIZE_VALIDATOR = std::make_shared("DATA_SIZE_VALIDATOR"); Review comment: would be good to make this DATA_SIZE_VALIDATOR for consistency and to pair with the data size value This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. 
[jira] [Updated] (NIFI-5938) Allow Record Readers to Infer Schema on Read
[ https://issues.apache.org/jira/browse/NIFI-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-5938: - Status: Patch Available (was: Open) > Allow Record Readers to Infer Schema on Read > > > Key: NIFI-5938 > URL: https://issues.apache.org/jira/browse/NIFI-5938 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The introduction of record-oriented processors was a huge improvement for > NiFi in terms of usability. However, they only improve usability if you have > a schema for your data. There have been several comments along the lines of > "I would really love to use the record-oriented processors, but I don't have > a schema for my data." > Sometimes users have no schema because they don't want to bother with > creating the schemas. The schema becomes a usability issue. This is > especially true for very large documents that contain a lot of nested > Records. Other times, users cannot create a schema because they retrieve > arbitrary data from some source, and they have no idea what the data will > look like. > We do not want to remove the notion of a schema, however. Schemas provide for > a very powerful construct for many use cases, and it provides Processors a > much easier-to-use API. If we provide the ability to Infer the Schema on > Read, though, we can provide the best of both worlds. While we do have > processors for inferring schemas for JSON and CSV data, those are not always > sufficient. They cannot be used, for instance, by ConsumeKafkaRecord, > ExecuteSQL, etc. because those Processors need the schema before that. > Additionally, we have no ability to infer a schema for XML, logs, etc. > Finally, we need to consider processors that are designed to manipulate the > data. For example, UpdateRecord, JoltTransformRecord, LookupRecord (when used > for enrichment), and QueryRecord. 
These Processors follow a typical pattern > of "get reader's schema, then provide it to the writer in order to get > writer's schema." This means that if the Record Writer inherits the record's > schema, and we infer that schema, then any newly added fields will simply be > dropped by the writer because the writer's schema doesn't know about those > fields. As a result, we need to ensure that we first transform the first > record, get the schema for the transformed record, and then pass that > transformed record's schema to the Writer, so that the Writer inherits the > schema describing data after transformation. > Design/Implementation Goals should include: > - High performance: users should be impacted as little as is feasible. > - Usability: users should be able to infer schemas with as little > configuration as is reasonable. > - Ease of Development: code should be written in a way that makes it easy for > new Record Readers to provide schema inference that is fast, efficient, > correct, and consistent with how the other readers infer schemas. > - Implementations: At a minimum, we should provide the ability to infer > schemas for JSON, XML, and CSV data. > - Backward Compatibility: The new feature should not break backward > compatibility for any Record Reader. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5938) Allow Record Readers to Infer Schema on Read
[ https://issues.apache.org/jira/browse/NIFI-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-5938: - Fix Version/s: 1.9.0 > Allow Record Readers to Infer Schema on Read > > > Key: NIFI-5938 > URL: https://issues.apache.org/jira/browse/NIFI-5938 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.9.0 > > Time Spent: 10m > Remaining Estimate: 0h > > The introduction of record-oriented processors was a huge improvement for > NiFi in terms of usability. However, they only improve usability if you have > a schema for your data. There have been several comments along the lines of > "I would really love to use the record-oriented processors, but I don't have > a schema for my data." > Sometimes users have no schema because they don't want to bother with > creating the schemas. The schema becomes a usability issue. This is > especially true for very large documents that contain a lot of nested > Records. Other times, users cannot create a schema because they retrieve > arbitrary data from some source, and they have no idea what the data will > look like. > We do not want to remove the notion of a schema, however. Schemas provide for > a very powerful construct for many use cases, and it provides Processors a > much easier-to-use API. If we provide the ability to Infer the Schema on > Read, though, we can provide the best of both worlds. While we do have > processors for inferring schemas for JSON and CSV data, those are not always > sufficient. They cannot be used, for instance, by ConsumeKafkaRecord, > ExecuteSQL, etc. because those Processors need the schema before that. > Additionally, we have no ability to infer a schema for XML, logs, etc. > Finally, we need to consider processors that are designed to manipulate the > data. 
For example, UpdateRecord, JoltTransformRecord, LookupRecord (when used > for enrichment), and QueryRecord. These Processors follow a typical pattern > of "get reader's schema, then provide it to the writer in order to get > writer's schema." This means that if the Record Writer inherits the record's > schema, and we infer that schema, then any newly added fields will simply be > dropped by the writer because the writer's schema doesn't know about those > fields. As a result, we need to ensure that we first transform the first > record, get the schema for the transformed record, and then pass that > transformed record's schema to the Writer, so that the Writer inherits the > schema describing data after transformation. > Design/Implementation Goals should include: > - High performance: users should be impacted as little as is feasible. > - Usability: users should be able to infer schemas with as little > configuration as is reasonable. > - Ease of Development: code should be written in a way that makes it easy for > new Record Readers to provide schema inference that is fast, efficient, > correct, and consistent with how the other readers infer schemas. > - Implementations: At a minimum, we should provide the ability to infer > schemas for JSON, XML, and CSV data. > - Backward Compatibility: The new feature should not break backward > compatibility for any Record Reader. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] markap14 opened a new pull request #3253: NIFI-5938: Added ability to infer record schema on read from JsonTree…
markap14 opened a new pull request #3253: NIFI-5938: Added ability to infer record schema on read from JsonTreeReader, JsonPathReader, XML Reader, and CSV Reader.
URL: https://github.com/apache/nifi/pull/3253

- Updates to make UpdateRecord and RecordPath automatically update the Record schema when performing an update, and to perform the updates on the first record in UpdateRecord before obtaining the Writer schema. This allows the Writer to inherit the schema of the updated Record instead of the schema of the Record as it was when it was read.
- Updated JoltTransformRecord so that the schema is inferred on the first transformed object before passing the schema to the Record Writer, so that if the writer inherits the schema from the record, the schema that is inherited is the transformed schema.
- Updated LookupRecord to allow Record fields to be arbitrarily added.
- Implemented ContentClaimInputStream.
- Added a controller service for caching schemas.
- Updated QueryRecord to cache schemas automatically up to some number of schemas, which will significantly improve throughput in many cases, especially with inferred schemas.

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
[GitHub] SavtechSolutions commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
SavtechSolutions commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
URL: https://github.com/apache/nifi/pull/3227#discussion_r246458971

## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
@@ -178,6 +178,36 @@
         .required(true)
         .build();
 
+    static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder()
+        .name("put-es-record-date-format")
+        .displayName("Date Format")
+        .description("Custom date format to use when converting fields of date type " +

Review comment: Just to confirm, by "reusing" you mean copying what's in DateTimeUtils into my PDs, not referencing the existing PDs? They do share a lot in common, but have different fallback values (millisecond longs in CSV vs RecordFieldType.*.getDefaultFormat() for ES), so I don't think I can simply reference the existing PDs.
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246442774

## File path: libminifi/include/core/state/Value.h
@@ -74,94 +157,215 @@ class BoolValue : public Value {
   explicit BoolValue(bool value)
       : Value(value ? "true" : "false"),
         value(value) {
+    setTypeId<bool>();
+  }
+  explicit BoolValue(const std::string &strvalue)
+      : Value(strvalue) {
+    bool l;
+    std::istringstream(strvalue) >> std::boolalpha >> l;
+    value = l;  // avoid warnings
   }
-  bool getValue() {
+
+  bool getValue() const {
     return value;
   }
 
  protected:
+
+  virtual bool getValue(int &ref) {
+    if (ref == 1) {
+      ref = true;
+      return true;
+    } else if (ref == 0) {
+      ref = false;
+      return true;
+    } else {
+      return false;
+    }
+  }
+
+  virtual bool getValue(int64_t &ref) {
+    if (ref == 1) {
+      ref = true;
+      return true;
+    } else if (ref == 0) {
+      ref = false;
+      return true;
+    } else {
+      return false;
+    }
+  }
+
+  virtual bool getValue(uint64_t &ref) {
+    if (ref == 1) {
+      ref = true;
+      return true;
+    } else if (ref == 0) {
+      ref = false;
+      return true;
+    } else {
+      return false;
+    }
+  }
+
+  virtual bool getValue(bool &ref) {
+    ref = value;
+    return true;
+  }
+
   bool value;
 };
 
-class Int64Value : public Value {
+class UInt64Value : public Value {
  public:
-  explicit Int64Value(uint64_t value)
+  explicit UInt64Value(uint64_t value)
       : Value(std::to_string(value)),
         value(value) {
+    setTypeId<uint64_t>();
+  }
+  explicit UInt64Value(const std::string &strvalue)
+      : Value(strvalue),
+        value(std::stoull(strvalue)) {
+    setTypeId<uint64_t>();
   }
-  uint64_t getValue() {
+
+  uint64_t getValue() const {
    return value;
   }
 
  protected:
+
+  virtual bool getValue(int &ref) {
+    return false;
+  }
+
+  virtual bool getValue(int64_t &ref) {
+    if (value < (std::numeric_limits<int64_t>::max)()) {
+      ref = value;
+      return true;
+    }
+    return false;
+  }
+
+  virtual bool getValue(uint64_t &ref) {
+    ref = value;
+    return true;
+  }
+
+  virtual bool getValue(bool &ref) {
+    return false;
+  }
+
   uint64_t value;
 };
 
+class Int64Value : public Value {
+ public:
+  explicit Int64Value(int64_t value)
+      : Value(std::to_string(value)),
+        value(value) {
+    setTypeId<int64_t>();
+  }
+  explicit Int64Value(const std::string &strvalue)
+      : Value(strvalue),
+        value(std::stoll(strvalue)) {
+    setTypeId<int64_t>();
+  }
+
+  int64_t getValue() {
+    return value;
+  }
+
  protected:
-  static inline std::shared_ptr<Value> createValue(
-      const bool &object) {
+  virtual bool getValue(int &ref) {
+    return false;
+  }
+
+  virtual bool getValue(int64_t &ref) {
+    ref = value;
+    return true;
+  }
+
+  virtual bool getValue(uint64_t &ref) {
+    if (value >= 0) {
+      ref = value;
+      return true;
+    }
+    return true;

Review comment: I would expect this to be false.
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246433382

## File path: libminifi/include/core/FlowConfiguration.h
@@ -70,7 +70,11 @@ class FlowConfiguration : public CoreComponent {
         logger_(logging::LoggerFactory<FlowConfiguration>::getLogger()) {
     controller_services_ = std::make_shared();
     service_provider_ = std::make_shared(controller_services_, nullptr, configuration);
-    flow_version_ = std::make_shared("", "default", "");
+    std::string flowUrl = "", bucket_id = "default", flowId = "";

Review comment: Seems it's left undone
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246439710

## File path: libminifi/include/core/state/Value.h
@@ -46,26 +56,99 @@ class Value {
     return string_value;
   }
 
-  bool empty(){
+  template<typename T>
+  bool convertValue(T &ref) {
+    return convertValueImpl<typename std::decay<T>::type>(ref);
+  }
+
+  bool empty() {
     return string_value.empty();
  }
 
+  std::type_index getTypeIndex() {
+    return type_id;
+  }
+
+  static const std::type_index UINT64_TYPE;
+  static const std::type_index INT64_TYPE;
+  static const std::type_index INT_TYPE;
+  static const std::type_index BOOL_TYPE;
+  static const std::type_index STRING_TYPE;
+
  protected:
-  std::string string_value;
+  template<typename T>
+  bool convertValueImpl(T &ref) {
+    return getValue(ref);
+  }
+
+  template<typename T>
+  void setTypeId() {
+    type_id = std::type_index(typeid(T));
+  }
+
+  virtual bool getValue(int &ref) {
+    ref = std::stol(string_value);
+    return true;
+  }
+
+  virtual bool getValue(int64_t &ref) {
+    ref = std::stoll(string_value);
+    return true;
+  }
+
+  virtual bool getValue(uint64_t &ref) {
+    ref = std::stoull(string_value);

Review comment: These throw. What's the intent here? Are we happy with an unhandled exception in case the getValue call fails, or shall we catch and set the return value based on that?
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246432554

## File path: libminifi/include/FlowController.h
@@ -175,21 +175,27 @@ class FlowController : public core::controller::ControllerServiceProvider, publi
   bool applyConfiguration(const std::string &, const std::string &);
 
   // get name
-  std::string getName() const{
+  std::string getName() const {
     if (root_ != nullptr)
       return root_->getName();
     else
       return "";
   }
 
-  virtual std::string getComponentName() {
+  virtual std::string getComponentName() const {
     return "FlowController";
   }
 
+  virtual std::string getComponentUUID() const {
+    utils::Identifier ident;
+    root_->getUUID(ident);
+    return ident.to_string();
+  }
+
   // get version
   virtual std::string getVersion() {

Review comment: const, too?
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246448554

## File path: libminifi/src/c2/protocols/RESTProtocol.cpp
@@ -261,29 +280,49 @@ rapidjson::Value RESTProtocol::serializeJsonPayload(const C2Payload &payload, ra
   // get the name from the content
   rapidjson::Value json_payload(payload.isContainer() ? rapidjson::kArrayType : rapidjson::kObjectType);
 
-  std::map<std::string, std::vector<rapidjson::Value*>> children;
+  std::vector<ValueObject> children;
   for (const auto &nested_payload : payload.getNestedPayloads()) {
     rapidjson::Value* child_payload = new rapidjson::Value(serializeJsonPayload(nested_payload, alloc));
-    children[nested_payload.getLabel()].push_back(child_payload);
+    if (nested_payload.isCollapsible()) {
+      bool combine = false;

Review comment:
```
bool combine = false;
if (nested_payload.isCollapsible()) {
  for (auto &subordinate : children) {
    if (subordinate.name == nested_payload.getLabel()) {
      subordinate.values.push_back(child_payload);
      combine = true;
      break;
    }
  }
}
if (!combine) {
  ValueObject obj;
  obj.name = nested_payload.getLabel();
  obj.values.push_back(child_payload);
  children.push_back(obj);
}
```
Code duplication can be avoided this way.
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246434451

## File path: libminifi/include/core/Property.h
@@ -129,21 +102,72 @@ class Property {
   std::string getDisplayName() const;
   std::vector<std::string> getAllowedTypes() const;
   std::string getDescription() const;
-  std::string getValue() const;
+  std::shared_ptr<PropertyValidator> getValidator() const;
+  const PropertyValue &getValue() const;
   bool getRequired() const;
   bool supportsExpressionLangauge() const;
   std::string getValidRegex() const;
   std::vector<std::string> getDependentProperties() const;
   std::vector<std::pair<std::string, std::string>> getExclusiveOfProperties() const;
-  std::vector<std::string> &getValues();
+  std::vector<PropertyValue> getValues();
+
+  const PropertyValue &getDefaultValue() const {
+    return default_value_;
+  }
+
+  template<typename T>
+  void setValue(const T &value) {
+    PropertyValue vn = default_value_;
+    vn = value;
+    if (validator_) {
+      vn.setValidator(validator_);
+      ValidationResult result = validator_->validate(name_, vn.getValue());
+      if (!result.valid()) {
+        // throw some exception?
+      }
+    } else {
+      vn.setValidator(core::StandardValidators::VALID);
+    }
+    if (!is_collection_) {
+      values_.clear();
+      values_.push_back(vn);
+    } else {
+      values_.push_back(vn);
+    }
+  }
 
-  void setValue(std::string value);
+  void setValue(PropertyValue &vn) {
+    if (validator_) {
+      vn.setValidator(validator_);
+      ValidationResult result = validator_->validate(name_, vn.getValue());
+      if (!result.valid()) {
+        // throw some exception?
+      }
+    } else {
+      vn.setValidator(core::StandardValidators::VALID);
+    }
+    if (!is_collection_) {
+      values_.clear();
+      values_.push_back(vn);
+    } else {

Review comment: Same here.
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246449428 ## File path: libminifi/include/io/validation.h ## @@ -27,6 +27,21 @@ * A checker that will, at compile time, tell us * if the declared type has a size method. */ +template <typename T> +class empty_function_functor_checker { Review comment: What's the reason for this? To use empty(), which is guaranteed to be O(1), whereas size() isn't? This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
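The checker being asked about can be implemented as the classic compile-time member-detection idiom. A minimal sketch of such a detector (names and structure are illustrative, not the actual MiNiFi implementation):

```cpp
#include <string>
#include <type_traits>
#include <utility>
#include <cassert>

// Detects, at compile time, whether T has a callable empty() member.
template <typename T>
class has_empty_method {
  template <typename U>
  static auto test(int) -> decltype(std::declval<U>().empty(), std::true_type{});
  template <typename>
  static std::false_type test(...);

 public:
  static constexpr bool value = decltype(test<T>(0))::value;
};

// Prefer empty() (guaranteed O(1)) when available...
template <typename T>
typename std::enable_if<has_empty_method<T>::value, bool>::type
is_empty(const T &t) { return t.empty(); }

// ...otherwise fall back to comparing size() with zero.
template <typename T>
typename std::enable_if<!has_empty_method<T>::value, bool>::type
is_empty(const T &t) { return t.size() == 0; }
```

The reviewer's O(1) point is the usual motivation for such a checker: `empty()` is guaranteed constant-time on standard containers, while `size()` historically was not.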
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246442482 ## File path: libminifi/include/core/state/Value.h ## @@ -74,94 +157,215 @@ class BoolValue : public Value { explicit BoolValue(bool value) : Value(value ? "true" : "false"), value(value) { +setTypeId(); + } + explicit BoolValue(const std::string ) + : Value(strvalue) { +bool l; +std::istringstream(strvalue) >> std::boolalpha >> l; +value = l; // avoid warnings } - bool getValue() { + + bool getValue() const { return value; } protected: + + virtual bool getValue(int ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(int64_t ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(uint64_t ) { +if (ref == 1) { + ref = true; + return true; +} else if (ref == 0) { + ref = false; + return true; +} else { + return false; +} + } + + virtual bool getValue(bool ) { +ref = value; +return true; + } + bool value; }; -class Int64Value : public Value { +class UInt64Value : public Value { public: - explicit Int64Value(uint64_t value) + explicit UInt64Value(uint64_t value) : Value(std::to_string(value)), value(value) { +setTypeId(); + } + explicit UInt64Value(const std::string ) + : Value(strvalue), +value(std::stoull(strvalue)) { +setTypeId(); } - uint64_t getValue() { + + uint64_t getValue() const { return value; } protected: + + virtual bool getValue(int ) { +return false; + } + + virtual bool getValue(int64_t ) { +if (value < (std::numeric_limits::max)()) { Review comment: <= ? (just to make sure :) ) This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
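The boundary case behind the "<= ?" question can be made concrete: with `value < (std::numeric_limits::max)()`, the single largest representable int64_t value is rejected even though it converts safely. A small illustrative guard (not the PR's code):

```cpp
#include <cstdint>
#include <limits>
#include <cassert>

// Guard for a uint64_t -> int64_t conversion. '<=' accepts exactly the
// values representable as int64_t; '<' would wrongly reject INT64_MAX itself.
bool fits_in_int64(uint64_t value) {
  return value <= static_cast<uint64_t>((std::numeric_limits<int64_t>::max)());
}
```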
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246437513 ## File path: libminifi/include/core/TypedValues.h ## @@ -0,0 +1,232 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#ifndef LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ +#define LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ + +#include "state/Value.h" +#include +#include "utils/StringUtils.h" +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace core { + +class TransformableValue { + public: + TransformableValue() { + } +}; + + +/** + * Purpose and Design: TimePeriodValue represents a time period that can be set via a numeric followed by + * a time unit string. This Value is based on uint64, but has the support to return + * the original string representation. Once set, both are immutable. 
+ */ +class TimePeriodValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit TimePeriodValue(const std::string ) + : state::response::UInt64Value(0) { +TimeUnit units; +StringToTime(timeString, value, units); +string_value = timeString; +ConvertTimeUnitToMS(value, units, value); + } + + explicit TimePeriodValue(uint64_t value) + : state::response::UInt64Value(value) { + } + + // Convert TimeUnit to MilliSecond + template + static bool ConvertTimeUnitToMS(T input, TimeUnit unit, T ) { +if (unit == MILLISECOND) { + out = input; + return true; +} else if (unit == SECOND) { + out = input * 1000; + return true; +} else if (unit == MINUTE) { + out = input * 60 * 1000; + return true; +} else if (unit == HOUR) { + out = input * 60 * 60 * 1000; + return true; +} else if (unit == DAY) { + out = 24 * 60 * 60 * 1000; + return true; +} else if (unit == NANOSECOND) { + out = input / 1000 / 1000; + return true; +} else { + return false; +} + } + + static bool StringToTime(std::string input, uint64_t , TimeUnit ) { +if (input.size() == 0) { + return false; +} + +const char *cvalue = input.c_str(); +char *pEnd; +auto ival = std::strtoll(cvalue, , 0); + +if (pEnd[0] == '\0') { + return false; +} + +while (*pEnd == ' ') { + // Skip the space + pEnd++; +} + +std::string unit(pEnd); +std::transform(unit.begin(), unit.end(), unit.begin(), ::tolower); + +if (unit == "sec" || unit == "s" || unit == "second" || unit == "seconds" || unit == "secs") { + timeunit = SECOND; + output = ival; + return true; +} else if (unit == "msec" || unit == "ms" || unit == "millisecond" || unit == "milliseconds" || unit == "msecs") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "min" || unit == "m" || unit == "mins" || unit == "minute" || unit == "minutes") { + timeunit = MINUTE; + output = ival; + return true; +} else if (unit == "ns" || unit == "nano" || unit == "nanos" || unit == 
"nanoseconds") { + timeunit = NANOSECOND; + output = ival; + return true; +} else if (unit == "ms" || unit == "milli" || unit == "millis" || unit == "milliseconds") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "h" || unit == "hr" || unit == "hour" || unit == "hrs" || unit == "hours") { + timeunit = HOUR; + output = ival; + return true; +} else if (unit == "d" || unit == "day" || unit == "days") { + timeunit = DAY; + output = ival; + return true; +} else + return false; + } +}; + +/** + * Purpose and Design: DataSizeValue represents a file system size value that extends + * Uint64Value. This means that a string is converted to uint64_t. The string is of the + * format . + */ +class DataSizeValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit DataSizeValue(const std::string ) + :
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246436829 ## File path: libminifi/include/core/TypedValues.h ## @@ -0,0 +1,232 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#ifndef LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ +#define LIBMINIFI_INCLUDE_CORE_TYPEDVALUES_H_ + +#include "state/Value.h" +#include +#include "utils/StringUtils.h" +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace core { + +class TransformableValue { + public: + TransformableValue() { + } +}; + + +/** + * Purpose and Design: TimePeriodValue represents a time period that can be set via a numeric followed by + * a time unit string. This Value is based on uint64, but has the support to return + * the original string representation. Once set, both are immutable. 
+ */ +class TimePeriodValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit TimePeriodValue(const std::string ) + : state::response::UInt64Value(0) { +TimeUnit units; +StringToTime(timeString, value, units); +string_value = timeString; +ConvertTimeUnitToMS(value, units, value); + } + + explicit TimePeriodValue(uint64_t value) + : state::response::UInt64Value(value) { + } + + // Convert TimeUnit to MilliSecond + template + static bool ConvertTimeUnitToMS(T input, TimeUnit unit, T ) { +if (unit == MILLISECOND) { + out = input; + return true; +} else if (unit == SECOND) { + out = input * 1000; + return true; +} else if (unit == MINUTE) { + out = input * 60 * 1000; + return true; +} else if (unit == HOUR) { + out = input * 60 * 60 * 1000; + return true; +} else if (unit == DAY) { + out = 24 * 60 * 60 * 1000; + return true; +} else if (unit == NANOSECOND) { + out = input / 1000 / 1000; + return true; +} else { + return false; +} + } + + static bool StringToTime(std::string input, uint64_t , TimeUnit ) { +if (input.size() == 0) { + return false; +} + +const char *cvalue = input.c_str(); +char *pEnd; +auto ival = std::strtoll(cvalue, , 0); + +if (pEnd[0] == '\0') { + return false; +} + +while (*pEnd == ' ') { + // Skip the space + pEnd++; +} + +std::string unit(pEnd); +std::transform(unit.begin(), unit.end(), unit.begin(), ::tolower); + +if (unit == "sec" || unit == "s" || unit == "second" || unit == "seconds" || unit == "secs") { + timeunit = SECOND; + output = ival; + return true; +} else if (unit == "msec" || unit == "ms" || unit == "millisecond" || unit == "milliseconds" || unit == "msecs") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "min" || unit == "m" || unit == "mins" || unit == "minute" || unit == "minutes") { + timeunit = MINUTE; + output = ival; + return true; +} else if (unit == "ns" || unit == "nano" || unit == "nanos" || unit == 
"nanoseconds") { + timeunit = NANOSECOND; + output = ival; + return true; +} else if (unit == "ms" || unit == "milli" || unit == "millis" || unit == "milliseconds") { + timeunit = MILLISECOND; + output = ival; + return true; +} else if (unit == "h" || unit == "hr" || unit == "hour" || unit == "hrs" || unit == "hours") { + timeunit = HOUR; + output = ival; + return true; +} else if (unit == "d" || unit == "day" || unit == "days") { + timeunit = DAY; + output = ival; + return true; +} else + return false; + } +}; + +/** + * Purpose and Design: DataSizeValue represents a file system size value that extends + * Uint64Value. This means that a string is converted to uint64_t. The string is of the + * format . + */ +class DataSizeValue : public TransformableValue, public state::response::UInt64Value { + public: + static const std::type_index type_id; + + explicit DataSizeValue(const std::string ) + :
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246440108 ## File path: libminifi/include/core/state/Value.h ## @@ -46,26 +56,99 @@ class Value { return string_value; } - bool empty(){ + template <typename T> + bool convertValue(T &ref) { +return convertValueImpl::type>(ref); + } + + bool empty() { return string_value.empty(); } + std::type_index getTypeIndex() { +return type_id; + } + + static const std::type_index UINT64_TYPE; + static const std::type_index INT64_TYPE; + static const std::type_index INT_TYPE; + static const std::type_index BOOL_TYPE; + static const std::type_index STRING_TYPE; + protected: - std::string string_value; + template <typename T> + bool convertValueImpl(T &ref) { +return getValue(ref); + } + + template <typename T> + void setTypeId() { +type_id = std::type_index(typeid(T)); + } + + virtual bool getValue(int &ref) { +ref = std::stol(string_value); +return true; + } + + virtual bool getValue(int64_t &ref) { +ref = std::stoll(string_value); +return true; + } + + virtual bool getValue(uint64_t &ref) { +ref = std::stoull(string_value); +return true; + } + + virtual bool getValue(bool &ref) { +std::istringstream(string_value) >> std::boolalpha >> ref; Review comment: The stream's >> operator reports errors via the stream state; I guess that should drive the return value here. It's also not consistent with the std::stol approach above, which throws. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
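The reviewer's suggestion — report parse failure through the stream state rather than returning true unconditionally — could look like this (a sketch, not the PR's code):

```cpp
#include <sstream>
#include <string>
#include <cassert>

// Parse "true"/"false" into ref; success is reported from the stream's
// failure state instead of an unconditional 'return true'.
bool parse_bool(const std::string &string_value, bool &ref) {
  std::istringstream stream(string_value);
  stream >> std::boolalpha >> ref;
  return !stream.fail();  // extraction failed => tell the caller
}
```

Unlike the `std::stol`-style overloads above it, this reports failure by return value rather than by throwing, which is the inconsistency the comment points out.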
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246444145 ## File path: libminifi/include/io/validation.h ## @@ -27,6 +27,21 @@ * A checker that will, at compile time, tell us * if the declared type has a size method. Review comment: This comment belongs to the function below (size_function_functor_checker ) This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
arpadboda commented on a change in pull request #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#discussion_r246434173 ## File path: libminifi/include/core/Property.h ## @@ -129,21 +102,72 @@ class Property { std::string getDisplayName() const; std::vector getAllowedTypes() const; std::string getDescription() const; - std::string getValue() const; + std::shared_ptr getValidator() const; + const PropertyValue () const; bool getRequired() const; bool supportsExpressionLangauge() const; std::string getValidRegex() const; std::vector getDependentProperties() const; std::vector> getExclusiveOfProperties() const; - std::vector (); + std::vector getValues(); + + const PropertyValue () const { +return default_value_; + } + + template + void setValue(const T ) { +PropertyValue vn = default_value_; +vn = value; +if (validator_) { + vn.setValidator(validator_); + ValidationResult result = validator_->validate(name_, vn.getValue()); + if (!result.valid()) { +// throw some exception? + } +} else { + vn.setValidator(core::StandardValidators::VALID); +} +if (!is_collection_) { + values_.clear(); + values_.push_back(vn); +} else { Review comment: The else branch seems needless, just need to clear if it's not a collection. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
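The simplification suggested here — the push_back is common to both branches, so only the clear needs to be conditional — can be illustrated with a minimal stand-in (hypothetical types; the real code stores PropertyValue objects):

```cpp
#include <string>
#include <vector>
#include <cassert>

// Minimal stand-in for the property's value store: a non-collection
// property replaces its value, a collection appends. The if/else in the
// diff collapses to a conditional clear followed by one push_back.
void set_value(std::vector<std::string> &values, bool is_collection,
               const std::string &vn) {
  if (!is_collection) {
    values.clear();  // single-valued property keeps at most one entry
  }
  values.push_back(vn);  // common to both branches
}
```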
[GitHub] bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227#discussion_r246438403 ## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java ## @@ -178,6 +178,36 @@ .required(true) .build(); +static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-date-format") +.displayName("Date Format") +.description("Custom date format to use when converting fields of date type " + +"({\"type\": \"int\", \"logicalType\": \"date\"}).") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) +.build(); + +static final PropertyDescriptor TIME_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-time-format") +.displayName("Time Format") +.description("Custom time format to use when converting fields of time type " + +"({\"int\": \"long\", \"logicalType\": \"time-millis\"}).") Review comment: Since you provide Avro types, it would be accurate to also cover the second "time" type: the logical type "time" in Avro can be int (milliseconds) or long (microseconds). See also the comment for the date format. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227#discussion_r246439182 ## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java ## @@ -178,6 +178,36 @@ .required(true) .build(); +static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-date-format") +.displayName("Date Format") +.description("Custom date format to use when converting fields of date type " + Review comment: For consistency, the description can be reused from the existing properties of CSVReader, for instance. Please consider reusing the descriptions for time and timestamp as well. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227#discussion_r246439609 ## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java ## @@ -178,6 +178,36 @@ .required(true) .build(); +static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-date-format") +.displayName("Date Format") +.description("Custom date format to use when converting fields of date type " + +"({\"type\": \"int\", \"logicalType\": \"date\"}).") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) +.build(); + +static final PropertyDescriptor TIME_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-time-format") +.displayName("Time Format") +.description("Custom time format to use when converting fields of time type " + +"({\"int\": \"long\", \"logicalType\": \"time-millis\"}).") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) +.build(); + +static final PropertyDescriptor TIMESTAMP_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-ts-format") +.displayName("Timestamp Format") +.description("Custom timestamp format to use when converting fields of timestamp type " + Review comment: description can be reused from existing timestamp property descriptor for consistency of documentation. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
bdesert commented on a change in pull request #3227: NIFI-5909 PutElasticsearchHttpRecord doesn't allow to customize the timestamp format URL: https://github.com/apache/nifi/pull/3227#discussion_r246433976 ## File path: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java ## @@ -178,6 +178,36 @@ .required(true) .build(); +static final PropertyDescriptor DATE_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-date-format") +.displayName("Date Format") +.description("Custom date format to use when converting fields of date type " + +"({\"type\": \"int\", \"logicalType\": \"date\"}).") +.required(false) +.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR) + .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) +.build(); + +static final PropertyDescriptor TIME_FORMAT = new PropertyDescriptor.Builder() +.name("put-es-record-time-format") +.displayName("Time Format") +.description("Custom time format to use when converting fields of time type " + +"({\"int\": \"long\", \"logicalType\": \"time-millis\"}).") Review comment: is that a typo? "int":"long" ? This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] apiri commented on issue #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation
apiri commented on issue #460: MINIFICPP-479: Add processor property descriptor updates with c2 validation URL: https://github.com/apache/nifi-minifi-cpp/pull/460#issuecomment-452742435 reviewing This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (NIFI-1293) Better template creation controls
[ https://issues.apache.org/jira/browse/NIFI-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738312#comment-16738312 ] Jason Zondor commented on NIFI-1293: At least having the DistributedMapCacheServer and Reporting Tasks included would help with managing configs for MiNiFi as well. > Better template creation controls > - > > Key: NIFI-1293 > URL: https://issues.apache.org/jira/browse/NIFI-1293 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joseph Percivall >Priority: Minor > > When creating a template you select everything on the graph you want to > include or don't select anything and the system automatically selects > everything. > This causes two problems, for controller services like > DistributedMapCacheServer and all reporting tasks, there are no processors > that reference them so therefore it's impossible (barring manual xml > creation) to include them in a template. The second problem arose when I was > working on improving a template then needed to switch tasks. So I created a > new version of the template then destroyed my NiFi instance and rebuilt. > Unfortunately I accidentally had one processor selected when creating the > template and that one processor was all that was included. > So it would be very nice to have a fleshed out UI window for template > creation that includes being able to specifically select controller services > and reporting tasks. As well as give an overview of the template before > creation (count of processors, types, etc.). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] phrocker commented on issue #470: MINIFICPP-706 - RawSiteToSite: remove code duplication
phrocker commented on issue #470: MINIFICPP-706 - RawSiteToSite: remove code duplication URL: https://github.com/apache/nifi-minifi-cpp/pull/470#issuecomment-452712251 Changes inherently look fine. Will run some tests before approval. Hope to send an email soon re a release so I'll wait for 0.7 to merge it. Thanks This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Assigned] (NIFI-5909) PutElasticsearchHttpRecord doesn't allow to customize the timestamp format
[ https://issues.apache.org/jira/browse/NIFI-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Savitsky reassigned NIFI-5909: --- Assignee: Alex Savitsky > PutElasticsearchHttpRecord doesn't allow to customize the timestamp format > -- > > Key: NIFI-5909 > URL: https://issues.apache.org/jira/browse/NIFI-5909 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Assignee: Alex Savitsky >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > > All timestamps are sent to Elasticsearch in the "yyyy-MM-dd HH:mm:ss" format, > coming from the RecordFieldType.TIMESTAMP.getDefaultFormat(). There's plenty > of use cases that call for Elasticsearch data to be presented differently, > and the format should be customizable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-3611) Add option to select transaction isolation level for QueryDataBaseTable processor
[ https://issues.apache.org/jira/browse/NIFI-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738252#comment-16738252 ] Eric Hanson commented on NIFI-3611: --- Thanks [~ijokarumawak], I'll do that going forward > Add option to select transaction isolation level for QueryDataBaseTable > processor > - > > Key: NIFI-3611 > URL: https://issues.apache.org/jira/browse/NIFI-3611 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Eric Hanson >Assignee: Eric Hanson >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > In many organizations, when reading from transactional DB systems, it is > required to read from tables using the TRANSACTION_READ_UNCOMMITTED > transaction isolation level. The QueryDatabaseTable processor should have an > option in the properties to select from a list of transaction isolation > levels. Not all drivers will support all isolation levels, but there should > be an option to set this for the one's that do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (MINIFICPP-706) RawSiteToSite: remove code duplication
Arpad Boda created MINIFICPP-706: Summary: RawSiteToSite: remove code duplication Key: MINIFICPP-706 URL: https://issues.apache.org/jira/browse/MINIFICPP-706 Project: NiFi MiNiFi C++ Issue Type: Sub-task Reporter: Arpad Boda Assignee: Arpad Boda Fix For: 0.6.0 Some implementations are copy-pasted from the base class; these can be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5939) The docs for File Filter (Unpack processor) still need to be more clear for beginners
[ https://issues.apache.org/jira/browse/NIFI-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pete Rivett updated NIFI-5939: -- Component/s: Documentation & Website > The docs for File Filter (Unpack processor) still need to be more clear for > beginners > - > > Key: NIFI-5939 > URL: https://issues.apache.org/jira/browse/NIFI-5939 > Project: Apache NiFi > Issue Type: Bug > Components: Documentation Website >Affects Versions: 1.8.0 >Reporter: Pete Rivett >Priority: Minor > > An improvement was made in NIFI-3549 but that still resulted in only: > “Only files contained in the archive whose names match the given regular > expression will be extracted” > I’m guessing (based on looking at the code) that “names” here means the full > path of the entries within the archive, including any parent directories. > What’s less clear is what ends up being used as the directory separator. Is > it “/” or could it be “\” in the Windows environment? I'd hope the former in > order to make processes portable. > Either way I think the documentation could be improved, and some sample > regular expressions would help. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5939) The docs for File Filter (Unpack processor) still need to be more clear for beginners
Pete Rivett created NIFI-5939: - Summary: The docs for File Filter (Unpack processor) still need to be more clear for beginners Key: NIFI-5939 URL: https://issues.apache.org/jira/browse/NIFI-5939 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.8.0 Reporter: Pete Rivett An improvement was made in NIFI-3549 but that still resulted in only: “Only files contained in the archive whose names match the given regular expression will be extracted” I’m guessing (based on looking at the code) that “names” here means the full path of the entries within the archive, including any parent directories. What’s less clear is what ends up being used as the directory separator. Is it “/” or could it be “\” in the Windows environment? I'd hope the former in order to make processes portable. Either way I think the documentation could be improved, and some sample regular expressions would help. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] arpadboda opened a new pull request #470: MINIFICPP-706 - RawSiteToSite: remove code duplication
arpadboda opened a new pull request #470: MINIFICPP-706 - RawSiteToSite: remove code duplication
URL: https://github.com/apache/nifi-minifi-cpp/pull/470

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] vkcelik commented on a change in pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers
vkcelik commented on a change in pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers
URL: https://github.com/apache/nifi/pull/3246#discussion_r245968943

## File path: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java

@@ -210,17 +218,23 @@ private void setConnectionFactoryProperties(ConfigurationContext context) {
 if (descriptor.isDynamic()) {
 this.setProperty(propertyName, entry.getValue());
 } else {
-if (propertyName.equals(BROKER)) {
-String brokerValue = context.getProperty(descriptor).evaluateAttributeExpressions().getValue();
-if (context.getProperty(CONNECTION_FACTORY_IMPL).evaluateAttributeExpressions().getValue().startsWith("org.apache.activemq")) {
+if (descriptor == BROKER_URI) {
+String brokerValue = context.getProperty(BROKER_URI).evaluateAttributeExpressions().getValue();
+String connectionFactoryValue = context.getProperty(CONNECTION_FACTORY_IMPL).evaluateAttributeExpressions().getValue();
+if (connectionFactoryValue.startsWith("org.apache.activemq")) {
 this.setProperty("brokerURL", brokerValue);
+} else if (connectionFactoryValue.startsWith("com.tibco.tibjms")) {
+this.setProperty("serverUrl", brokerValue);
 } else {
+// Try to parse broker URI as colon separated host/port pair
 String[] hostPort = brokerValue.split(":");
+// If broker URI indeed was colon separated host/port pair
 if (hostPort.length == 2) {
 this.setProperty("hostName", hostPort[0]);
 this.setProperty("port", hostPort[1]);
-} else if (hostPort.length != 2) {
-this.setProperty("serverUrl", brokerValue); // for tibco
+} else if (connectionFactoryValue.startsWith("com.ibm.mq.jms")) {
+// Assuming IBM MQ style broker was specified, e.g. "myhost(1414)" and "myhost01(1414),myhost02(1414)"
+this.setProperty("connectionNameList", brokerValue);

Review comment:
The following two quotes are both from the same [IBM page](https://www.ibm.com/support/knowledgecenter/en/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/umj_pasm.html):

> Connection name lists can be used to connect to a single queue manager or to a multi-instance queue manager

> This property (connection name list) must only be used to allow connection to a multi-instance queue manager. It must not be used to allow connections to non-multi-instance queue managers as that can result in transaction integrity issues.

From another [IBM page](http://www-01.ibm.com/support/docview.wss?uid=swg21508357):

> Each entry in the list can correspond to either a stand-alone queue manager, or a multi-instance queue manager.

I think these quotes contradict each other regarding whether the connection name list should be used for a single queue manager (broker), but it would be safest not to use it. Furthermore it says:

> If 'Enter host and port information in the form of separate host and port values' is selected, the connection name list property cannot be used and the following properties can be used:
> - Host name
> - Port

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
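For readers skimming the thread, the dispatch logic under discussion can be condensed into a standalone sketch. This is not the actual NiFi code: `setProperty` is stubbed out as a plain map and the method name is invented, but the class-name prefixes and the property keys (`brokerURL`, `serverUrl`, `hostName`/`port`, `connectionNameList`) are the ones quoted in the patch.

```java
import java.util.HashMap;
import java.util.Map;

// Condensed, standalone sketch of the broker-URI dispatch shown in the
// diff above. Each branch maps the "Broker URI" value onto the property
// name expected by the respective vendor's ConnectionFactory.
public class BrokerUriDispatchSketch {

    static Map<String, String> mapBrokerUri(String cfImpl, String brokerValue) {
        Map<String, String> props = new HashMap<>();
        if (cfImpl.startsWith("org.apache.activemq")) {
            props.put("brokerURL", brokerValue);
        } else if (cfImpl.startsWith("com.tibco.tibjms")) {
            props.put("serverUrl", brokerValue);
        } else {
            // Try to parse the broker URI as a colon-separated host/port pair
            String[] hostPort = brokerValue.split(":");
            if (hostPort.length == 2) {
                props.put("hostName", hostPort[0]);
                props.put("port", hostPort[1]);
            } else if (cfImpl.startsWith("com.ibm.mq.jms")) {
                // IBM MQ style broker, e.g. "myhost(1414)" or a
                // connection name list "myhost01(1414),myhost02(1414)"
                props.put("connectionNameList", brokerValue);
            }
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(mapBrokerUri("com.ibm.mq.jms.MQConnectionFactory",
                "myhost01(1414),myhost02(1414)"));
        // {connectionNameList=myhost01(1414),myhost02(1414)}
    }
}
```

The point of the review comment is that the `connectionNameList` branch is reached for any IBM MQ broker value that is not a plain `host:port` pair, including a single-instance `myhost(1414)`, which the first IBM page warns against.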