[GitHub] [nifi] mtien-apache commented on pull request #4343: NIFI-7542 Override jackson-databind version.

2020-06-22 Thread GitBox


mtien-apache commented on pull request #4343:
URL: https://github.com/apache/nifi/pull/4343#issuecomment-647849078


   Thanks for looking at this so quickly, @MikeThomsen. But I still have 
commits for this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on pull request #4353: NIFI-7566: Avoid using Thread.sleep() to wait for Site-to-Site connec…

2020-06-22 Thread GitBox


markap14 commented on pull request #4353:
URL: https://github.com/apache/nifi/pull/4353#issuecomment-647814038


   Thanks @alopresto!







[jira] [Created] (NIFI-7575) Improve unit testing for TLS Toolkit

2020-06-22 Thread Andy LoPresto (Jira)
Andy LoPresto created NIFI-7575:
---

 Summary: Improve unit testing for TLS Toolkit
 Key: NIFI-7575
 URL: https://issues.apache.org/jira/browse/NIFI-7575
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Affects Versions: 1.11.4
Reporter: Andy LoPresto
Assignee: Andy LoPresto


Some of the tests for the TLS Toolkit can be improved for correctness and 
comprehensiveness. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alopresto commented on pull request #4206: NIFI-7304 Disabled CLF by default and allowed S2S & cluster comms to bypass length check

2020-06-22 Thread GitBox


alopresto commented on pull request #4206:
URL: https://github.com/apache/nifi/pull/4206#issuecomment-647798789


   Closed PR as it is superseded by [PR 
4354](https://github.com/apache/nifi/pull/4354). 







[GitHub] [nifi] alopresto closed pull request #4206: NIFI-7304 Disabled CLF by default and allowed S2S & cluster comms to bypass length check

2020-06-22 Thread GitBox


alopresto closed pull request #4206:
URL: https://github.com/apache/nifi/pull/4206


   







[GitHub] [nifi] alopresto commented on pull request #4354: NIFI-7304 Removed default value for nifi.web.max.content.size.

2020-06-22 Thread GitBox


alopresto commented on pull request #4354:
URL: https://github.com/apache/nifi/pull/4354#issuecomment-647798624


   @markap14 @thenatog I accidentally rebased and copied these changes to the 
`main` branch to resolve the merge conflicts in the previous PR 
(https://github.com/apache/nifi/pull/4206), so it was easier to just open a new 
PR with these changes. I'll close the other one. Please review. 







[GitHub] [nifi] alopresto opened a new pull request #4354: NIFI-7304 Removed default value for nifi.web.max.content.size.

2020-06-22 Thread GitBox


alopresto opened a new pull request #4354:
URL: https://github.com/apache/nifi/pull/4354


   Added Bundle#toString() method.
   Refactored implementation of filter addition logic.
   Added logging.
   Added unit tests to check for filter enablement.
   Introduced content-length exception handling in StandardPublicPort.
   Added filter bypass functionality for framework requests in 
ContentLengthFilter.
   Updated property documentation in Admin Guide.
   Renamed methods & added Javadoc to clarify purpose of filters in JettyServer.
   Cleaned up conditional logic in StandardPublicPort.
   Moved ContentLengthFilterTest to correct module.
   Refactored unit tests for accuracy and clarity.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Resolves problems introduced by the default values for the addition of the 
ContentLengthFilter._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
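The property semantics this PR describes (no default value for `nifi.web.max.content.size`, with the filter disabled when the property is unset) can be illustrated with a toy gate. The `accept` helper and its null-means-disabled convention below are a hypothetical sketch, not NiFi's actual ContentLengthFilter API:

```java
public class ContentLengthGate {
    // When no maximum is configured (null), the filter is effectively
    // disabled and every request passes; otherwise requests whose
    // declared length exceeds the maximum are rejected.
    static boolean accept(long contentLength, Long maxContentLength) {
        if (maxContentLength == null) {
            return true; // no default value => length check disabled
        }
        return contentLength <= maxContentLength;
    }

    public static void main(String[] args) {
        System.out.println(accept(1024, null));  // true: disabled
        System.out.println(accept(2048, 1024L)); // false: over limit
    }
}
```

The PR additionally lets framework (S2S and cluster) requests bypass the gate entirely, which in this sketch would simply mean never calling `accept` for those request paths.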
   







[jira] [Commented] (NIFI-7542) Upgrade jackson-databind dependency version

2020-06-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142434#comment-17142434
 ] 

ASF subversion and git services commented on NIFI-7542:
---

Commit 005d05f20bdecd8578b386d0eba3a6f2e05d95c0 in nifi's branch 
refs/heads/master from mtien
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=005d05f ]

NIFI-7542 Override jackson-databind version.
NIFI-7542 Override additional jackson-databind versions.
NIFI-7542 Upgrade jackson-databind dependency to 2.9.10.5 in the root pom.xml.

This closes #4343

Signed-off-by: Mike Thomsen 


> Upgrade jackson-databind dependency version
> ---
>
> Key: NIFI-7542
> URL: https://issues.apache.org/jira/browse/NIFI-7542
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: M Tien
>Assignee: M Tien
>Priority: Major
>  Labels: dependency
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Upgrade jackson-databind dependency version.
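An override of this kind typically lives in the root pom's `<dependencyManagement>` section. An illustrative fragment follows; the exact structure of NiFi's root pom.xml is not shown in this thread, only the target version 2.9.10.5 from the commit message:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin jackson-databind across all modules that inherit this pom -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```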





[jira] [Commented] (NIFI-7542) Upgrade jackson-databind dependency version

2020-06-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142435#comment-17142435
 ] 

ASF subversion and git services commented on NIFI-7542:
---

Commit 005d05f20bdecd8578b386d0eba3a6f2e05d95c0 in nifi's branch 
refs/heads/master from mtien
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=005d05f ]

NIFI-7542 Override jackson-databind version.
NIFI-7542 Override additional jackson-databind versions.
NIFI-7542 Upgrade jackson-databind dependency to 2.9.10.5 in the root pom.xml.

This closes #4343

Signed-off-by: Mike Thomsen 


> Upgrade jackson-databind dependency version
> ---
>
> Key: NIFI-7542
> URL: https://issues.apache.org/jira/browse/NIFI-7542
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: M Tien
>Assignee: M Tien
>Priority: Major
>  Labels: dependency
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Upgrade jackson-databind dependency version.





[jira] [Resolved] (NIFI-7542) Upgrade jackson-databind dependency version

2020-06-22 Thread Mike Thomsen (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-7542.

Fix Version/s: 1.12.0
   Resolution: Fixed

> Upgrade jackson-databind dependency version
> ---
>
> Key: NIFI-7542
> URL: https://issues.apache.org/jira/browse/NIFI-7542
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: M Tien
>Assignee: M Tien
>Priority: Major
>  Labels: dependency
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Upgrade jackson-databind dependency version.





[jira] [Commented] (NIFI-7542) Upgrade jackson-databind dependency version

2020-06-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142436#comment-17142436
 ] 

ASF subversion and git services commented on NIFI-7542:
---

Commit 005d05f20bdecd8578b386d0eba3a6f2e05d95c0 in nifi's branch 
refs/heads/master from mtien
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=005d05f ]

NIFI-7542 Override jackson-databind version.
NIFI-7542 Override additional jackson-databind versions.
NIFI-7542 Upgrade jackson-databind dependency to 2.9.10.5 in the root pom.xml.

This closes #4343

Signed-off-by: Mike Thomsen 


> Upgrade jackson-databind dependency version
> ---
>
> Key: NIFI-7542
> URL: https://issues.apache.org/jira/browse/NIFI-7542
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: M Tien
>Assignee: M Tien
>Priority: Major
>  Labels: dependency
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Upgrade jackson-databind dependency version.





[GitHub] [nifi] asfgit closed pull request #4343: NIFI-7542 Override jackson-databind version.

2020-06-22 Thread GitBox


asfgit closed pull request #4343:
URL: https://github.com/apache/nifi/pull/4343


   







[jira] [Updated] (NIFI-7535) Adding nar's to auto load nar directory unreadable by user running the nifi jvm fails to start nifi

2020-06-22 Thread Juan C. Sequeiros (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan C. Sequeiros updated NIFI-7535:

Summary: Adding nar's to auto load nar directory unreadable by user running 
the nifi jvm fails to start nifi  (was: Adding nar's to extensions directory 
unreadable by user running the nifi jvm fails to start nifi)

> Adding nar's to auto load nar directory unreadable by user running the nifi 
> jvm fails to start nifi
> ---
>
> Key: NIFI-7535
> URL: https://issues.apache.org/jira/browse/NIFI-7535
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
> Environment: NiFi 1.9
>Reporter: Juan C. Sequeiros
>Priority: Major
>
> If nar's are added to the extensions directory while NiFi is not running and 
> those nar's are not readable by the user set to run the nifi jvm, 
> nifi fails to start up and throws an NPE
>  
> {code:java}
> 2020-06-15 13:34:16,988 WARN [main] org.apache.nifi.web.server.JettyServer 
> Failed to start web server... shutting down. java.lang.NullPointerException: 
> null at 
> org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:62) at 
> org.apache.nifi.web.server.JettyServer.start(JettyServer.java:932) at 
> org.apache.nifi.NiFi.(NiFi.java:158) at 
> org.apache.nifi.NiFi.(NiFi.java:72) at 
> org.apache.nifi.NiFi.main(NiFi.java:297) 2020-06-15 13:34:16,988 INFO 
> [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server... 
> 2020-06-15 13:34:16,989 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server 
> shutdown completed (nicely or otherwise).
> {code}
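The NPE above is consistent with the classic `File.listFiles()` pitfall: it returns null rather than an empty array when a directory cannot be read. A minimal, self-contained sketch of the defensive pattern follows; the `countNars` helper is hypothetical, not NiFi's actual DocGenerator code:

```java
import java.io.File;

public class ListFilesNullCheck {
    // File.listFiles() returns null (not an empty array) when the
    // directory is unreadable or not a directory; dereferencing the
    // result without a check produces exactly this kind of NPE.
    static int countNars(File dir) {
        File[] files = dir.listFiles((d, name) -> name.endsWith(".nar"));
        if (files == null) {
            // Unreadable directory: treat as empty (or log a warning)
            // instead of failing the whole startup with an NPE.
            return 0;
        }
        return files.length;
    }

    public static void main(String[] args) {
        System.out.println(countNars(new File("/nonexistent-dir")));
    }
}
```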





[jira] [Created] (NIFI-7574) NiFi fails to start if there are nar's in nifi.nar.library.autoload.directory directory not readable by the user starting up the JVM

2020-06-22 Thread Juan C. Sequeiros (Jira)
Juan C. Sequeiros created NIFI-7574:
---

 Summary: NiFi fails to start if there are nar's in 
nifi.nar.library.autoload.directory directory not readable by the user 
starting up the JVM
 Key: NIFI-7574
 URL: https://issues.apache.org/jira/browse/NIFI-7574
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: I can replicate using Apache NiFi 1.9.0

Reporter: Juan C. Sequeiros


NiFi fails to start if there are nar's in the "extensions" directory that are not 
readable by the user starting up the JVM.

 

On 1.9.0 probably other versions too.

If I put nar's in the directory configured as the nar auto load dir:

 

nifi.nar.library.autoload.directory

 

that are not readable by the user starting up the NiFi JVM, NiFi fails to start 
when restarted, throwing an NPE

 
{code:java}
2020-06-15 13:34:16,988 WARN [main] org.apache.nifi.web.server.JettyServer 
Failed to start web server... shutting down. java.lang.NullPointerException: 
null at 
org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:62) at 
org.apache.nifi.web.server.JettyServer.start(JettyServer.java:932) at 
org.apache.nifi.NiFi.(NiFi.java:158) at 
org.apache.nifi.NiFi.(NiFi.java:72) at 
org.apache.nifi.NiFi.main(NiFi.java:297) 2020-06-15 13:34:16,988 INFO 
[Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server... 
2020-06-15 13:34:16,989 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server 
shutdown completed (nicely or otherwise).
{code}





[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-22 Thread GitBox


szaszm commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r443776103



##
File path: libminifi/include/core/PropertyValue.h
##
@@ -202,14 +196,50 @@ class PropertyValue : public state::response::ValueNode {
   auto operator=(const std::string ) -> typename std::enable_if<
   std::is_same::value ||
   std::is_same::value, PropertyValue&>::type {
-value_ = std::make_shared(ref);
-type_id = value_->getTypeIndex();
-return *this;
+validator_.clearValidationResult();
+return WithAssignmentGuard(ref, [&] () -> PropertyValue& {
+  value_ = std::make_shared(ref);
+  type_id = value_->getTypeIndex();
+  return *this;
+});
+  }
+
+ private:
+  template
+  T convertImpl(const char* const type_name) const {
+if (!isValueUsable()) {
+  throw utils::InvalidValueException("Cannot convert invalid value");
+}
+T res;
+if (value_->convertValue(res)) {
+  return res;
+}
+throw utils::ConversionException(std::string("Invalid conversion to ") + 
type_name + " for " + value_->getStringValue());
+  }

Review comment:
   Good point, I take my suggestions back.









[jira] [Updated] (NIFI-7558) Context path filtering does not work when behind a reverse proxy with a context path

2020-06-22 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-7558:

Fix Version/s: 1.12.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Context path filtering does not work when behind a reverse proxy with a 
> context path
> 
>
> Key: NIFI-7558
> URL: https://issues.apache.org/jira/browse/NIFI-7558
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Core Framework
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: context-path, filter, reverse-proxy
> Fix For: 1.12.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As discussed with [~markap14], the {{CatchAllFilter#init()}} method doesn't 
> call {{super.init()}} so this fails. Will make the change and improve related 
> terminology (with backward compatible configuration). 
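The bug class described here, an overriding `init()` that skips `super.init()` and so leaves state the base class would have populated as null, can be shown with a stripped-down sketch. The filter classes below are hypothetical illustrations, not the actual `CatchAllFilter`:

```java
// Base class that stores its configuration in init().
class BaseFilter {
    protected String param;
    public void init(String param) { this.param = param; }
}

// Overrides init() but forgets super.init(): param stays null,
// and later uses of it fail, mirroring the bug in the ticket.
class BrokenFilter extends BaseFilter {
    @Override
    public void init(String param) {
        // missing super.init(param)
    }
}

// The one-line fix: delegate to the base class first.
class FixedFilter extends BaseFilter {
    @Override
    public void init(String param) {
        super.init(param);
    }
}

public class FilterInitDemo {
    public static void main(String[] args) {
        BrokenFilter broken = new BrokenFilter();
        broken.init("allowed-context-paths");
        FixedFilter fixed = new FixedFilter();
        fixed.init("allowed-context-paths");
        System.out.println(broken.param == null);                       // true
        System.out.println("allowed-context-paths".equals(fixed.param)); // true
    }
}
```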





[jira] [Updated] (NIFI-7558) Context path filtering does not work when behind a reverse proxy with a context path

2020-06-22 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-7558:

Status: Patch Available  (was: Open)

> Context path filtering does not work when behind a reverse proxy with a 
> context path
> 
>
> Key: NIFI-7558
> URL: https://issues.apache.org/jira/browse/NIFI-7558
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Core Framework
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: context-path, filter, reverse-proxy
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As discussed with [~markap14], the {{CatchAllFilter#init()}} method doesn't 
> call {{super.init()}} so this fails. Will make the change and improve related 
> terminology (with backward compatible configuration). 





[GitHub] [nifi] asfgit closed pull request #4351: NIFI-7558 Fixed CatchAllFilter init logic by calling super.init().

2020-06-22 Thread GitBox


asfgit closed pull request #4351:
URL: https://github.com/apache/nifi/pull/4351


   







[jira] [Commented] (NIFI-7558) Context path filtering does not work when behind a reverse proxy with a context path

2020-06-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142359#comment-17142359
 ] 

ASF subversion and git services commented on NIFI-7558:
---

Commit 94c98c019f6704bea92ec15c1a5aa71b42e521ab in nifi's branch 
refs/heads/master from Andy LoPresto
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=94c98c0 ]

NIFI-7558 Fixed CatchAllFilter init logic by calling super.init().
Renamed legacy terms.
Updated documentation.

This closes #4351.

Signed-off-by: Mark Payne 


> Context path filtering does not work when behind a reverse proxy with a 
> context path
> 
>
> Key: NIFI-7558
> URL: https://issues.apache.org/jira/browse/NIFI-7558
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Core Framework
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: context-path, filter, reverse-proxy
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As discussed with [~markap14], the {{CatchAllFilter#init()}} method doesn't 
> call {{super.init()}} so this fails. Will make the change and improve related 
> terminology (with backward compatible configuration). 





[GitHub] [nifi] tpalfy commented on a change in pull request #4348: NIFI-7523: Use SSL Context Service for Atlas HTTPS connection in Atla…

2020-06-22 Thread GitBox


tpalfy commented on a change in pull request #4348:
URL: https://github.com/apache/nifi/pull/4348#discussion_r443719863



##
File path: 
nifi-commons/nifi-security-utils/src/main/java/org/apache/nifi/security/credstore/HadoopCredentialStore.java
##
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.security.credstore;
+
+import org.apache.nifi.processor.exception.ProcessException;
+
+import javax.crypto.spec.SecretKeySpec;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.security.KeyStore;
+import java.util.Map;
+
+public class HadoopCredentialStore {
+
+private static final String CRED_STORE_PASSWORD_ENVVAR = 
"HADOOP_CREDSTORE_PASSWORD";

Review comment:
   Could use the Hadoop constants.

##
File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/reporting/ReportLineageToAtlas.java
##
@@ -385,31 +401,50 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(String proper
 protected Collection customValidate(ValidationContext 
context) {
 final Collection results = new ArrayList<>();
 
-final boolean isSSLContextServiceSet = 
context.getProperty(KAFKA_SSL_CONTEXT_SERVICE).isSet();
+final SSLContextService sslContextService = 
context.getProperty(SSL_CONTEXT_SERVICE).asControllerService(SSLContextService.class);
 final ValidationResult.Builder invalidSSLService = new 
ValidationResult.Builder()
-
.subject(KAFKA_SSL_CONTEXT_SERVICE.getDisplayName()).valid(false);
+.subject(SSL_CONTEXT_SERVICE.getDisplayName()).valid(false);
 
+AtomicBoolean isAtlasApiSecure = new AtomicBoolean(false);
 String atlasUrls = 
context.getProperty(ATLAS_URLS).evaluateAttributeExpressions().getValue();
 if (!StringUtils.isEmpty(atlasUrls)) {
 Arrays.stream(atlasUrls.split(ATLAS_URL_DELIMITER))
 .map(String::trim)
 .forEach(input -> {
-final ValidationResult.Builder builder = new 
ValidationResult.Builder().subject(ATLAS_URLS.getDisplayName()).input(input);
 try {
-new URL(input);
-results.add(builder.explanation("Valid 
URI").valid(true).build());
+final URL url = new URL(input);
+if ("https".equalsIgnoreCase(url.getProtocol())) {
+isAtlasApiSecure.set(true);
+}
 } catch (Exception e) {
-results.add(builder.explanation("Contains invalid URI: 
" + e).valid(false).build());
+results.add(new 
ValidationResult.Builder().subject(ATLAS_URLS.getDisplayName()).input(input)
+.explanation("contains invalid URI: " + 
e).valid(false).build());
 }
 });
 }
 
+if (isAtlasApiSecure.get()) {
+if (sslContextService == null) {
+results.add(invalidSSLService.explanation("required for 
connecting to Atlas via HTTPS.").build());
+} else if 
(context.getControllerServiceLookup().isControllerServiceEnabled(sslContextService))
 {
+if (!sslContextService.isTrustStoreConfigured()) {
+results.add(invalidSSLService.explanation("no truststore 
configured which is required for connecting to Atlas via HTTPS.").build());
+} else if 
(!KEYSTORE_TYPE_JKS.equalsIgnoreCase(sslContextService.getTrustStoreType())) {
+results.add(invalidSSLService.explanation("truststore type 
is not JKS. Atlas client supports JKS truststores only.").build());
+}
+}
+}
+
 final String atlasAuthNMethod = 
context.getProperty(ATLAS_AUTHN_METHOD).getValue();
 final AtlasAuthN atlasAuthN = getAtlasAuthN(atlasAuthNMethod);
 results.addAll(atlasAuthN.validate(context));
 
-
-namespaceResolverLoader.forEach(resolver -> 

[GitHub] [nifi] turcsanyip commented on a change in pull request #4348: NIFI-7523: Use SSL Context Service for Atlas HTTPS connection in Atla…

2020-06-22 Thread GitBox


turcsanyip commented on a change in pull request #4348:
URL: https://github.com/apache/nifi/pull/4348#discussion_r443739026



##
File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/reporting/ReportLineageToAtlas.java
##
@@ -385,31 +401,50 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(String proper
 protected Collection customValidate(ValidationContext 
context) {
 final Collection results = new ArrayList<>();
 
-final boolean isSSLContextServiceSet = 
context.getProperty(KAFKA_SSL_CONTEXT_SERVICE).isSet();
+final SSLContextService sslContextService = 
context.getProperty(SSL_CONTEXT_SERVICE).asControllerService(SSLContextService.class);
 final ValidationResult.Builder invalidSSLService = new 
ValidationResult.Builder()
-
.subject(KAFKA_SSL_CONTEXT_SERVICE.getDisplayName()).valid(false);
+.subject(SSL_CONTEXT_SERVICE.getDisplayName()).valid(false);
 
+AtomicBoolean isAtlasApiSecure = new AtomicBoolean(false);
 String atlasUrls = 
context.getProperty(ATLAS_URLS).evaluateAttributeExpressions().getValue();
 if (!StringUtils.isEmpty(atlasUrls)) {
 Arrays.stream(atlasUrls.split(ATLAS_URL_DELIMITER))
 .map(String::trim)
 .forEach(input -> {
-final ValidationResult.Builder builder = new 
ValidationResult.Builder().subject(ATLAS_URLS.getDisplayName()).input(input);
 try {
-new URL(input);
-results.add(builder.explanation("Valid 
URI").valid(true).build());
+final URL url = new URL(input);
+if ("https".equalsIgnoreCase(url.getProtocol())) {
+isAtlasApiSecure.set(true);
+}
 } catch (Exception e) {
-results.add(builder.explanation("Contains invalid URI: 
" + e).valid(false).build());
+results.add(new 
ValidationResult.Builder().subject(ATLAS_URLS.getDisplayName()).input(input)
+.explanation("contains invalid URI: " + 
e).valid(false).build());
 }
 });
 }
 
+if (isAtlasApiSecure.get()) {
+if (sslContextService == null) {
+results.add(invalidSSLService.explanation("required for 
connecting to Atlas via HTTPS.").build());

Review comment:
   I'll remove these checks from customValidate().
   There are also runtime checks with warning messages and fallback to system 
truststore in setAtlasSSLConfig(). These were used only when the Atlas URL was 
configured in the Atlas property file (not on the reporting task) and 
customValidate() did not check the SSL config.
   However, it is more consistent if there is only one check (with system 
truststore fallback) for all cases, regardless of where the Atlas URL has been 
configured.
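The HTTPS detection discussed in this review amounts to a scheme check over the delimited Atlas URLs. A standalone sketch follows; the comma delimiter is an assumption here, and this is not the actual `ReportLineageToAtlas` code:

```java
import java.net.URL;

public class AtlasUrlCheck {
    // Returns true if any of the delimited Atlas URLs uses HTTPS,
    // i.e. the condition under which an SSL context (with a
    // configured truststore) becomes necessary per the discussion.
    static boolean anySecure(String atlasUrls) throws Exception {
        for (String part : atlasUrls.split(",")) {
            URL url = new URL(part.trim());
            if ("https".equalsIgnoreCase(url.getProtocol())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(anySecure("http://a:21000, https://b:21443"));
    }
}
```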









[GitHub] [nifi] markap14 commented on pull request #4351: NIFI-7558 Fixed CatchAllFilter init logic by calling super.init().

2020-06-22 Thread GitBox


markap14 commented on pull request #4351:
URL: https://github.com/apache/nifi/pull/4351#issuecomment-647672760


   @alopresto ha no worries, I've done it myself a few times. All looks good. 
I'm a +1







[jira] [Updated] (NIFI-7572) Add a ScriptedTransformRecord processor

2020-06-22 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-7572:
-
Description: 
NiFi has started to put a heavier emphasis on Record-oriented processors, as 
they provide many benefits including better performance and a better UX over 
their purely byte-oriented counterparts. It is common to see users wanting to 
transform a Record in some very specific way, but NiFi doesn't make this as 
easy as it should. There are methods using ExecuteScript, 
InvokedScriptedProcessor, ScriptedRecordWriter, and ScriptedRecordReader for 
instance.

But each of these requires that the Script writer understand a lot about NiFi 
and how to expose properties, create Property Descriptors, etc. and for fairly 
simple transformation we end up with scripts where the logic takes fewer lines 
of code than the boilerplate.

We should expose a Processor that allows a user to write a script that takes a 
Record and transforms that Record in some way. The processor should be 
configured with the following:
 * Record Reader (required)
 * Record Writer (required)
 * Script Language (required)
 * Script Body or Script File (one and only one of these required)

The script should implement a single method along the lines of:
{code:java}
void transform(Record record) throws Exception; {code}
The processor should have two relationships: "success" and "failure."

The script should not be allowed to expose any properties or define any 
relationships. The point is to keep the script focused purely on processing the 
record itself.

It's not entirely clear to me how easy the Record API works with some of the 
scripting languages. The Record object does expose a method named toMap() that 
returns a Map containing the underlying key/value pairs. 
However, the values in that Map may themselves be Records. It might make sense 
to expose a new method toNormalizedMap() or something along those lines that 
would return a Map where the values have been recursively 
normalized, in much the same way that we do for JoltTransformRecord. This would 
perhaps allow for cleaner syntax, but I'm not a scripting expert so I can't say 
for sure whether such a method is necessary.

  was:
NiFi has started to put a heavier emphasis on Record-oriented processors, as 
they provide many benefits including better performance and a better UX over 
their purely byte-oriented counterparts. It is common to see users wanting to 
transform a Record in some very specific way, but NiFi doesn't make this as 
easy as it should. There are methods using ExecuteScript, 
InvokedScriptedProcessor, ScriptedRecordWriter, and ScriptedRecordReader for 
instance.

But each of these requires that the Script writer understand a lot about NiFi 
and how to expose properties, create Property Descriptors, etc. and for fairly 
simple transformation we end up with scripts where the logic takes fewer lines 
of code than the boilerplate.

We should expose a Processor that allows a user to write a script that takes a 
Record and transforms that Record in some way. The processor should be 
configured with the following:
 * Record Reader (required)
 * Record Writer (required)
 * Script Language (required)
 * Script Body or Script File (one and only one of these required)

The script should implement a single method along the lines of:
{code:java}
void transform(Record record) throws Exception; {code}
The processor should have two relationships: "success" and "failure."

The script should not be allowed to expose any properties ore define any 
relationships. The point is to keep the script focused purely on processing the 
record itself.

It's not entirely clear to me how easy the Record API works with some of the 
scripting languages. The Record object does expose a method named toMap() that 
returns a Map containing the underlying key/value pairs. 
However, the values in that Map may themselves be Records. It might make sense 
to expose a new method toNormalizedMap() or something along those lines that 
would return a Map where the values have been recursively 
normalized, in much the same way that we do for JoltTransformRecord. This would 
perhaps allow for cleaner syntax, but I'm not a scripting expert so I can't say 
for sure whether such a method is necessary.


> Add a ScriptedTransformRecord processor
> ---
>
> Key: NIFI-7572
> URL: https://issues.apache.org/jira/browse/NIFI-7572
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Priority: Major
>
> NiFi has started to put a heavier emphasis on Record-oriented processors, as 
> they provide many benefits including better performance and a better UX over 
> their purely byte-oriented counterparts. It is common to see users wanting to 
> transform a Record in 

[GitHub] [nifi] mattyb149 commented on pull request #4350: NIFI-6934 In PutDatabaseRecord added Postgres UPSERT support

2020-06-22 Thread GitBox


mattyb149 commented on pull request #4350:
URL: https://github.com/apache/nifi/pull/4350#issuecomment-647661277


   Looks and works good; just some comments on extending the documentation 
to make things as clear as possible. Should be good to go after that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on a change in pull request #4350: NIFI-6934 In PutDatabaseRecord added Postgres UPSERT support

2020-06-22 Thread GitBox


mattyb149 commented on a change in pull request #4350:
URL: https://github.com/apache/nifi/pull/4350#discussion_r443710657



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
##
@@ -156,7 +162,7 @@
 + "FlowFile. The 'Use statement.type Attribute' option is 
the only one that allows the 'SQL' statement type. If 'SQL' is specified, the 
value of the field specified by the "
 + "'Field Containing SQL' property is expected to be a 
valid SQL statement on the target database, and will be executed as-is.")

Review comment:
   Also probably some text saying "please refer to the database 
documentation for a description of the behavior of each operation." This is 
because even the DBs that do support "upsert" may implement it differently; 
for instance, PostgreSQL requires a unique or exclusion constraint on the 
update key column(s).









[jira] [Updated] (NIFI-6934) Support Postgres 9.5+ Upsert in the standard PutDatabaseRecord processor

2020-06-22 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-6934:
---
Affects Version/s: (was: 1.10.0)
   Status: Patch Available  (was: Open)

> Support Postgres 9.5+ Upsert in the standard PutDatabaseRecord processor
> 
>
> Key: NIFI-6934
> URL: https://issues.apache.org/jira/browse/NIFI-6934
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Sergey Shcherbakov
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The PutDatabaseRecord processor is set up for batch record processing, both 
> for performance and to be able to use the JDBC batch API.
> Unfortunately, in this setup, in case if a single record in the batch fails 
> to be inserted into the target SQL database, the entire transaction with the 
> full DML batch gets rolled back.
>  That is, a single failure (e.g. because of a duplicate existing in the 
> database with the same unique business key constraint) makes using 
> PutDatabaseRecord processor unusable in batch mode.
> A common workaround for that is inserting a SplitRecord before the 
> PutDatabaseRecord processor. That approach works but obviously has disadvantages 
> (slow performance, higher resource consumption, downstream logic changes, 
> since the number of FlowFiles explodes).
> PostgreSQL, starting with version 9.5, supports a special SQL syntax 
> extension that effectively allows UPSERTs, that is, adding new records when 
> there is no constraint conflict and updating the existing record (or doing 
> nothing) when there is a conflict:
>  [http://www.postgresqltutorial.com/postgresql-upsert/]
> Such an upsert would solve the above-mentioned problems of the 
> PutDatabaseRecord processor.
>  Adding support for such an extension also looks fairly trivial in the 
> PutDatabaseRecord processor.
>  
>  
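For reference, the PostgreSQL upsert takes the form INSERT ... ON CONFLICT ... DO UPDATE. A hypothetical sketch of generating such a statement (not the actual PutDatabaseRecord code; the table and column names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of building a PostgreSQL 9.5+ upsert statement.
// Key columns become the conflict target; all other columns are updated
// from the proposed row via the EXCLUDED pseudo-table.
class UpsertSqlSketch {
    static String upsert(String table, List<String> columns, List<String> keyColumns) {
        String cols = String.join(", ", columns);
        String params = columns.stream().map(c -> "?").collect(Collectors.joining(", "));
        String updates = columns.stream()
                .filter(c -> !keyColumns.contains(c))
                .map(c -> c + " = EXCLUDED." + c)
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")"
                + " ON CONFLICT (" + String.join(", ", keyColumns) + ")"
                + " DO UPDATE SET " + updates;
    }
}
```

Note that PostgreSQL requires a unique or exclusion constraint on the conflict-target columns for this to work.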



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7573) Add a Azure Active Directory (AAD) User Group Provider

2020-06-22 Thread Seokwon Yang (Jira)
Seokwon Yang created NIFI-7573:
--

 Summary: Add a Azure Active Directory (AAD) User Group Provider 
 Key: NIFI-7573
 URL: https://issues.apache.org/jira/browse/NIFI-7573
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Seokwon Yang
Assignee: Seokwon Yang


NiFi in an Azure cloud environment may be integrated with AAD to define 
group-based authorization policies. Combined with OIDC-based user 
authentication, this will improve the user/group authorization policy 
management experience.

Reference:

[https://docs.microsoft.com/en-us/graph/api/resources/group?view=graph-rest-1.0]

[https://docs.microsoft.com/en-us/graph/api/group-list-members?view=graph-rest-1.0=http]
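Per the group-list-members reference above, a group's members are listed with a GET on /groups/{id}/members. A minimal hypothetical sketch of building that request (token acquisition and result paging omitted; the group id is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical sketch: constructing a Microsoft Graph request to list the
// members of an AAD group. Authentication (the bearer token) is assumed to
// be obtained separately, e.g. via the client-credentials flow.
class GraphMembersSketch {
    static HttpRequest listMembersRequest(String groupId, String accessToken) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/v1.0/groups/" + groupId + "/members"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
    }
}
```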

 





[GitHub] [nifi] mattyb149 commented on a change in pull request #4350: NIFI-6934 In PutDatabaseRecord added Postgres UPSERT support

2020-06-22 Thread GitBox


mattyb149 commented on a change in pull request #4350:
URL: https://github.com/apache/nifi/pull/4350#discussion_r443679370



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java
##
@@ -156,7 +162,7 @@
 + "FlowFile. The 'Use statement.type Attribute' option is 
the only one that allows the 'SQL' statement type. If 'SQL' is specified, the 
value of the field specified by the "
 + "'Field Containing SQL' property is expected to be a 
valid SQL statement on the target database, and will be executed as-is.")

Review comment:
   We should add some doc here explaining that some Statement Type values 
may not be supported depending on the value of the Database Type property, and 
can even include an example ("For example, UPSERT statements are not supported 
by Oracle" or something like that) 









[jira] [Created] (NIFI-7572) Add a ScriptedTransformRecord processor

2020-06-22 Thread Mark Payne (Jira)
Mark Payne created NIFI-7572:


 Summary: Add a ScriptedTransformRecord processor
 Key: NIFI-7572
 URL: https://issues.apache.org/jira/browse/NIFI-7572
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Mark Payne


NiFi has started to put a heavier emphasis on Record-oriented processors, as 
they provide many benefits including better performance and a better UX over 
their purely byte-oriented counterparts. It is common to see users wanting to 
transform a Record in some very specific way, but NiFi doesn't make this as 
easy as it should. There are methods using ExecuteScript, 
InvokedScriptedProcessor, ScriptedRecordWriter, and ScriptedRecordReader for 
instance.

But each of these requires that the script writer understand a lot about NiFi: 
how to expose properties, create Property Descriptors, etc. For a fairly 
simple transformation, we end up with scripts where the logic takes fewer 
lines of code than the boilerplate.

We should expose a Processor that allows a user to write a script that takes a 
Record and transforms that Record in some way. The processor should be 
configured with the following:
 * Record Reader (required)
 * Record Writer (required)
 * Script Language (required)
 * Script Body or Script File (one and only one of these required)

The script should implement a single method along the lines of:
{code:java}
void transform(Record record) throws Exception; {code}
The processor should have two relationships: "success" and "failure."

The script should not be allowed to expose any properties or define any 
relationships. The point is to keep the script focused purely on processing the 
record itself.

It's not entirely clear to me how easily the Record API works with some of the 
scripting languages. The Record object does expose a method named toMap() that 
returns a Map containing the underlying key/value pairs. 
However, the values in that Map may themselves be Records. It might make sense 
to expose a new method toNormalizedMap() or something along those lines that 
would return a Map where the values have been recursively 
normalized, in much the same way that we do for JoltTransformRecord. This would 
perhaps allow for cleaner syntax, but I'm not a scripting expert so I can't say 
for sure whether such a method is necessary.
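The recursive normalization mentioned above could look roughly like the following, again using nested Maps as a hypothetical stand-in for nested Records:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a toNormalizedMap()-style recursion: any nested
// Map value is treated as a "record" and normalized in turn, so the result
// contains only plain maps and scalar values all the way down.
class NormalizeSketch {
    @SuppressWarnings("unchecked")
    static Map<String, Object> normalize(Map<String, Object> record) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : record.entrySet()) {
            Object v = e.getValue();
            out.put(e.getKey(), v instanceof Map ? normalize((Map<String, Object>) v) : v);
        }
        return out;
    }
}
```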





[GitHub] [nifi] bbende commented on a change in pull request #4348: NIFI-7523: Use SSL Context Service for Atlas HTTPS connection in Atla…

2020-06-22 Thread GitBox


bbende commented on a change in pull request #4348:
URL: https://github.com/apache/nifi/pull/4348#discussion_r443600675



##
File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/reporting/ReportLineageToAtlas.java
##
@@ -385,31 +401,50 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(String proper
 protected Collection<ValidationResult> customValidate(ValidationContext context) {
 final Collection<ValidationResult> results = new ArrayList<>();
 
-final boolean isSSLContextServiceSet = 
context.getProperty(KAFKA_SSL_CONTEXT_SERVICE).isSet();
+final SSLContextService sslContextService = 
context.getProperty(SSL_CONTEXT_SERVICE).asControllerService(SSLContextService.class);
 final ValidationResult.Builder invalidSSLService = new 
ValidationResult.Builder()
-
.subject(KAFKA_SSL_CONTEXT_SERVICE.getDisplayName()).valid(false);
+.subject(SSL_CONTEXT_SERVICE.getDisplayName()).valid(false);
 
+AtomicBoolean isAtlasApiSecure = new AtomicBoolean(false);
 String atlasUrls = 
context.getProperty(ATLAS_URLS).evaluateAttributeExpressions().getValue();
 if (!StringUtils.isEmpty(atlasUrls)) {
 Arrays.stream(atlasUrls.split(ATLAS_URL_DELIMITER))
 .map(String::trim)
 .forEach(input -> {
-final ValidationResult.Builder builder = new 
ValidationResult.Builder().subject(ATLAS_URLS.getDisplayName()).input(input);
 try {
-new URL(input);
-results.add(builder.explanation("Valid 
URI").valid(true).build());
+final URL url = new URL(input);
+if ("https".equalsIgnoreCase(url.getProtocol())) {
+isAtlasApiSecure.set(true);
+}
 } catch (Exception e) {
-results.add(builder.explanation("Contains invalid URI: 
" + e).valid(false).build());
+results.add(new 
ValidationResult.Builder().subject(ATLAS_URLS.getDisplayName()).input(input)
+.explanation("contains invalid URI: " + 
e).valid(false).build());
 }
 });
 }
 
+if (isAtlasApiSecure.get()) {
+if (sslContextService == null) {
+results.add(invalidSSLService.explanation("required for 
connecting to Atlas via HTTPS.").build());

Review comment:
   Is it possible it would still work by falling back to the system 
truststore, or is it required to provide the truststore from the SSLContext?









[GitHub] [nifi] tpalfy commented on a change in pull request #4348: NIFI-7523: Use SSL Context Service for Atlas HTTPS connection in Atla…

2020-06-22 Thread GitBox


tpalfy commented on a change in pull request #4348:
URL: https://github.com/apache/nifi/pull/4348#discussion_r443637366



##
File path: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/reporting/ReportLineageToAtlas.java
##
@@ -322,9 +328,19 @@
 private static final String ATLAS_PROPERTY_CLUSTER_NAME = 
"atlas.cluster.name";
 private static final String ATLAS_PROPERTY_REST_ADDRESS = 
"atlas.rest.address";
 private static final String ATLAS_PROPERTY_ENABLE_TLS = "atlas.enableTLS";
+private static final String ATLAS_PROPERTY_TRUSTSTORE_FILE = 
"truststore.file";

Review comment:
   Could we use the Atlas constants instead?









[jira] [Commented] (NIFI-7501) Generate Flowfile does not scale

2020-06-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142171#comment-17142171
 ] 

ASF subversion and git services commented on NIFI-7501:
---

Commit 704260755a5de756aefc303029cf7b3dda556993 in nifi's branch 
refs/heads/master from dennisjaheruddin
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7042607 ]

NIFI-7501
Update nf-context-menu.js for an intuitive road to parameters
When right-clicking a process group the variables are shown, but parameters are 
not. This makes sense as they have a prerequisite, in the form of a parameter 
context. This change gives a more consistent experience for finding the 
functionality regarding parameters by ensuring the context menu shows the 
possibility to configure a parameter context. Once the parameter context has 
been created for a process group, the parameters text shows, so this is no 
longer visible. People would then need to click configure to change the 
context, just as they would be required to do now.

Added generateflowfile load tag and description
Added GenerateFlowFile load tag to be consistent with DuplicateFlowFile and 
updated description to refer to DuplicateFlowFile.

Revert "Update nf-context-menu.js for an intuitive road to parameters"

This reverts commit 3c44b1661f09fb6ae11d2f088550f81fb7a4b393.

This closes #4333

Signed-off-by: Mike Thomsen 


> Generate Flowfile does not scale
> 
>
> Key: NIFI-7501
> URL: https://issues.apache.org/jira/browse/NIFI-7501
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Dennis Jaheruddin
>Priority: Minor
> Fix For: 1.12.0
>
> Attachments: generationperformance.xml
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> One of the purposes of Generate Flowfile is load testing. However, 
> unfortunately it often appears to become the bottleneck itself. I have found 
> it not to scale well.
> Example result from my laptop:
> I want to generate messages and bring them to a single processor, let's call 
> it processor X.
> With 1 concurrent task, and a batch size of 1, and a message size of 10MB and 
> uniqueness false it can generate approximately 2 GB/sec.
> When allowing for more concurrent tasks, or a larger batch size, no 
> noticeable change is found.
> However, if instead of increasing the batchsize I route the success 
> relationship to multiple processors that do 'nothing' (like updateattribute), 
> and then bring the success relations of all these to processor X, I can get 
> much more than 2 GB/sec. 
>  
> In conclusion: I don't appear to be hitting a hardware limit as I am able to 
> generate the number of messages in this inelegant way, but no matter how I 
> set up my GenerateFlowFile processor, it just will not scale, suggesting 
> there may be a smarter way to generate data when uniqueness is not required.
>  
> I have attached a template to illustrate my findings.





[GitHub] [nifi] asfgit closed pull request #4333: NIFI-7501 Generateflowfile tag and description

2020-06-22 Thread GitBox


asfgit closed pull request #4333:
URL: https://github.com/apache/nifi/pull/4333


   







[jira] [Resolved] (NIFI-7501) Generate Flowfile does not scale

2020-06-22 Thread Mike Thomsen (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen resolved NIFI-7501.

Fix Version/s: 1.12.0
   Resolution: Fixed

> Generate Flowfile does not scale
> 
>
> Key: NIFI-7501
> URL: https://issues.apache.org/jira/browse/NIFI-7501
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Dennis Jaheruddin
>Priority: Minor
> Fix For: 1.12.0
>
> Attachments: generationperformance.xml
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> One of the purposes of Generate Flowfile is load testing. However, 
> unfortunately it often appears to become the bottleneck itself. I have found 
> it not to scale well.
> Example result from my laptop:
> I want to generate messages and bring them to a single processor, let's call 
> it processor X.
> With 1 concurrent task, and a batch size of 1, and a message size of 10MB and 
> uniqueness false it can generate approximately 2 GB/sec.
> When allowing for more concurrent tasks, or a larger batch size, no 
> noticeable change is found.
> However, if instead of increasing the batchsize I route the success 
> relationship to multiple processors that do 'nothing' (like updateattribute), 
> and then bring the success relations of all these to processor X, I can get 
> much more than 2 GB/sec. 
>  
> In conclusion: I don't appear to be hitting a hardware limit as I am able to 
> generate the number of messages in this inelegant way, but no matter how I 
> set up my GenerateFlowFile processor, it just will not scale, suggesting 
> there may be a smarter way to generate data when uniqueness is not required.
>  
> I have attached a template to illustrate my findings.





[jira] [Updated] (NIFI-7525) Create a null() Subjectless Functions and support null type on Expression Language

2020-06-22 Thread Fabrizio Spataro (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabrizio Spataro updated NIFI-7525:
---
Summary: Create a null() Subjectless Functions and support null type on 
Expression Language  (was: Create a null() Subjectless Functions to support 
null type on Expression Language)

> Create a null() Subjectless Functions and support null type on Expression 
> Language
> --
>
> Key: NIFI-7525
> URL: https://issues.apache.org/jira/browse/NIFI-7525
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Fabrizio Spataro
>Priority: Minor
>
> Reading (and using) the NiFi Expression Language, it could be useful to have 
> a function to set a "null value" easily.
>  Today, to resolve this, we can apply some tricks, for example: use an 
> UpdateRecord processor with an unknown RecordPath value, or convert the record 
> to text (JSON) and apply a ReplaceText processor using a regex + *null* value.
>   
>  This issue proposes a new subjectless function, called *nullValue()*, 
> to retrieve a real "null value"





[GitHub] [nifi-minifi-cpp] adam-markovics opened a new pull request #820: MINIFICPP-1183 - Cleanup C2 Update tests

2020-06-22 Thread GitBox


adam-markovics opened a new pull request #820:
URL: https://github.com/apache/nifi-minifi-cpp/pull/820


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-22 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r443587382



##
File path: libminifi/include/core/PropertyValue.h
##
@@ -75,64 +77,52 @@ class PropertyValue : public state::response::ValueNode {
   }
 
   std::shared_ptr getValidator() const {
-return validator_;
+return *validator_;
   }
 
  ValidationResult validate(const std::string& subject) const {
-if (validator_) {
-  return validator_->validate(subject, getValue());
-} else {
+auto cachedResult = validator_.isValid();
+if (cachedResult == CachedValueValidator::Result::SUCCESS) {
   return ValidationResult::Builder::createBuilder().isValid(true).build();
 }
+if (cachedResult == CachedValueValidator::Result::FAILURE) {
+  return 
ValidationResult::Builder::createBuilder().withSubject(subject).withInput(getValue()->getStringValue()).isValid(false).build();
+}
+auto result = validator_->validate(subject, getValue());
+validator_.setValidationResult(result.valid());
+return result;

Review comment:
   done
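For illustration, the cached-validation idea in this diff (validate lazily, cache the verdict, invalidate on assignment) can be sketched in Java as follows; the class and method names are hypothetical, not the MiNiFi C++ API:

```java
import java.util.function.Predicate;

// Hypothetical sketch of the cached-validation pattern: the first call to
// validate() runs the validator and caches SUCCESS/FAILURE; later calls
// return the cached verdict until set() invalidates it.
class CachedValidatedValue {
    private enum Cached { UNKNOWN, SUCCESS, FAILURE }

    private final Predicate<String> validator;
    private String value;
    private Cached cached = Cached.UNKNOWN;
    private int validatorCalls = 0; // instrumentation for the sketch only

    CachedValidatedValue(Predicate<String> validator) { this.validator = validator; }

    void set(String value) {
        this.value = value;
        this.cached = Cached.UNKNOWN; // invalidate cache on assignment
    }

    boolean validate() {
        if (cached == Cached.UNKNOWN) {
            validatorCalls++;
            cached = validator.test(value) ? Cached.SUCCESS : Cached.FAILURE;
        }
        return cached == Cached.SUCCESS;
    }

    int validatorCalls() { return validatorCalls; }
}
```

Repeated validation of an unchanged value then costs nothing beyond the cached lookup.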









[jira] [Commented] (NIFI-7516) Predictions model throws intermittent SingularMatrixExceptions

2020-06-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142030#comment-17142030
 ] 

ASF subversion and git services commented on NIFI-7516:
---

Commit 32fa9ae51bfa486f463862a97817e2cf1702532d in nifi's branch 
refs/heads/master from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=32fa9ae ]

NIFI-7516: Catch and log SingularMatrixExceptions in OrdinaryLeastSquares model 
(#4323)



> Predictions model throws intermittent SingularMatrixExceptions
> --
>
> Key: NIFI-7516
> URL: https://issues.apache.org/jira/browse/NIFI-7516
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Under some circumstances, the Connection Status Analytics model (specifically 
> the Ordinary Least Squares model) throws a SingularMatrix exception:
> org.apache.commons.math3.linear.SingularMatrixException: matrix is singular
> This can happen (usually intermittently) when the data points used to update 
> the model form a matrix that has no inverse (i.e. singular).
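For a simple least-squares fit y = a + b*x, the normal-equations matrix [[n, Σx], [Σx, Σx²]] is singular exactly when its determinant n·Σx² − (Σx)² is zero, e.g. when all x values coincide. A hypothetical sketch of detecting that condition and skipping the update instead of throwing (the actual model uses commons-math3 internally):

```java
// Hypothetical sketch: solve the 2x2 normal equations for y = a + b*x
// directly, returning null when the matrix is (near-)singular so the
// caller can log the condition and keep the previous model.
class OlsGuardSketch {
    static double[] fitOrNull(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sxx = 0, sy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sxx += x[i] * x[i]; sy += y[i]; sxy += x[i] * y[i];
        }
        double det = n * sxx - sx * sx; // determinant of [[n, sx], [sx, sxx]]
        if (Math.abs(det) < 1e-12) {
            return null; // singular matrix: no unique least-squares solution
        }
        double b = (n * sxy - sx * sy) / det;
        double a = (sy - b * sx) / n;
        return new double[] {a, b}; // intercept, slope
    }
}
```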





[GitHub] [nifi] YolandaMDavis merged pull request #4323: NIFI-7516: Catch and log SingularMatrixExceptions in OrdinaryLeastSquares model

2020-06-22 Thread GitBox


YolandaMDavis merged pull request #4323:
URL: https://github.com/apache/nifi/pull/4323


   







[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-22 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r443546920



##
File path: libminifi/include/core/PropertyValue.h
##
@@ -202,14 +196,50 @@ class PropertyValue : public state::response::ValueNode {
   auto operator=(const std::string& ref) -> typename std::enable_if<
   std::is_same::value ||
   std::is_same::value, PropertyValue&>::type {
-value_ = std::make_shared(ref);
-type_id = value_->getTypeIndex();
-return *this;
+validator_.clearValidationResult();
+return WithAssignmentGuard(ref, [&] () -> PropertyValue& {
+  value_ = std::make_shared(ref);
+  type_id = value_->getTypeIndex();
+  return *this;
+});
+  }
+
+ private:
+  template<typename T>
+  T convertImpl(const char* const type_name) const {
+if (!isValueUsable()) {
+  throw utils::InvalidValueException("Cannot convert invalid value");
+}
+T res;
+if (value_->convertValue(res)) {
+  return res;
+}
+throw utils::ConversionException(std::string("Invalid conversion to ") + 
type_name + " for " + value_->getStringValue());
+  }

Review comment:
   we could always slap a nice macro on it









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-22 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r443531079



##
File path: libminifi/include/core/PropertyValue.h
##
@@ -202,14 +196,50 @@ class PropertyValue : public state::response::ValueNode {
   auto operator=(const std::string& ref) -> typename std::enable_if<
   std::is_same::value ||
   std::is_same::value, PropertyValue&>::type {
-value_ = std::make_shared(ref);
-type_id = value_->getTypeIndex();
-return *this;
+validator_.clearValidationResult();
+return WithAssignmentGuard(ref, [&] () -> PropertyValue& {
+  value_ = std::make_shared(ref);
+  type_id = value_->getTypeIndex();
+  return *this;
+});
+  }
+
+ private:
+  template<typename T>
+  T convertImpl(const char* const type_name) const {
+if (!isValueUsable()) {
+  throw utils::InvalidValueException("Cannot convert invalid value");
+}
+T res;
+if (value_->convertValue(res)) {
+  return res;
+}
+throw utils::ConversionException(std::string("Invalid conversion to ") + 
type_name + " for " + value_->getStringValue());
+  }

Review comment:
   if we can live with `uint64_t` being printed as `unsigned long` on some 
platforms and `unsigned __int64` on others (windows), I can make the change, but 
then the error message would be invalid, as we do not want to convert to 
`unsigned long`; we want to convert to `uint64_t`. That the two coincide is a 
platform-dependent implementation detail.









[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-22 Thread GitBox


arpadboda commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r443518040



##
File path: extensions/rocksdb-repos/FlowFileRepository.cpp
##
@@ -122,27 +122,26 @@ void FlowFileRepository::run() {
 }
 
 void FlowFileRepository::prune_stored_flowfiles() {
-  rocksdb::DB* stored_database_ = nullptr;
-  utils::ScopeGuard db_guard([&stored_database_]() {
-delete stored_database_;
-  });
+  std::unique_ptr<rocksdb::DB> stored_database;
+  rocksdb::DB* used_database;
   bool corrupt_checkpoint = false;
   if (nullptr != checkpoint_) {
 rocksdb::Options options;
 options.create_if_missing = true;
 options.use_direct_io_for_flush_and_compaction = true;
 options.use_direct_reads = true;
-rocksdb::Status status = rocksdb::DB::OpenForReadOnly(options, 
FLOWFILE_CHECKPOINT_DIRECTORY, &stored_database_);
+rocksdb::Status status = rocksdb::DB::OpenForReadOnly(options, 
FLOWFILE_CHECKPOINT_DIRECTORY, &used_database);
+stored_database.reset(used_database);

Review comment:
   This makes little sense to me to do here, given that we release it 4 lines 
later. 
   I would suggest doing it only when the status is ok. It might be a few more 
lines, but it increases readability.









[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-22 Thread GitBox


arpadboda commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r443498165



##
File path: libminifi/include/core/PropertyValue.h
##
@@ -202,14 +196,50 @@ class PropertyValue : public state::response::ValueNode {
   auto operator=(const std::string& ref) -> typename std::enable_if<
   std::is_same<T, std::string>::value ||
   std::is_same<T, const char*>::value, PropertyValue&>::type {
-value_ = std::make_shared(ref);
-type_id = value_->getTypeIndex();
-return *this;
+validator_.clearValidationResult();
+return WithAssignmentGuard(ref, [&] () -> PropertyValue& {
+  value_ = std::make_shared(ref);
+  type_id = value_->getTypeIndex();
+  return *this;
+});
+  }
+
+ private:
+  template<typename T>
+  T convertImpl(const char* const type_name) const {
+if (!isValueUsable()) {
+  throw utils::InvalidValueException("Cannot convert invalid value");
+}
+T res;
+if (value_->convertValue(res)) {
+  return res;
+}
+throw utils::ConversionException(std::string("Invalid conversion to ") + type_name + " for " + value_->getStringValue());
+  }
+
+  bool isValueUsable() const {
+if (!value_) return false;
+if (validator_.isValid() == CachedValueValidator::Result::FAILURE) return false;
+if (validator_.isValid() == CachedValueValidator::Result::SUCCESS) return true;
+return validate("__unknown__").valid();
+  }
+
+  template<typename Fn>
+  auto WithAssignmentGuard(const std::string& ref, Fn&& functor) -> decltype(std::forward<Fn>(functor)()) {
+// TODO(adebreceni): as soon as c++17 comes jump to a RAII implementation
+// as we will need std::uncaught_exceptions()
+try {
+  return std::forward<Fn>(functor)();
+} catch(const utils::ValueException&) {
+  type_id = std::type_index(typeid(std::string));
+  value_ = minifi::state::response::createValue(ref);
+  throw;
+}
   }
 
  protected:
   std::type_index type_id;
-  std::shared_ptr validator_;
+  CachedValueValidator validator_;

Review comment:
   I can definitely live with this :)
   Property validation has always been lagging behind; it finally seems to be
done properly.
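
The TODO in the diff above mentions replacing the try/catch with an RAII implementation once C++17's `std::uncaught_exceptions()` is available. A minimal sketch of that idea follows; the names (`AssignmentGuard`, `assignOrFallback`) are illustrative, not from the MiNiFi codebase:

```cpp
#include <exception>
#include <stdexcept>
#include <string>
#include <utility>

// RAII guard: runs a rollback action on scope exit only if a new exception
// is unwinding the scope. Requires C++17 for std::uncaught_exceptions().
template<typename Rollback>
class AssignmentGuard {
 public:
  explicit AssignmentGuard(Rollback rollback)
      : rollback_(std::move(rollback)),
        exceptions_on_entry_(std::uncaught_exceptions()) {}
  ~AssignmentGuard() {
    // More uncaught exceptions than at construction means this destructor
    // is running as part of stack unwinding: roll back.
    if (std::uncaught_exceptions() > exceptions_on_entry_) {
      rollback_();
    }
  }
 private:
  Rollback rollback_;
  int exceptions_on_entry_;
};

// Usage: if the "conversion" throws, the guard falls back to the raw string,
// mirroring the catch block in WithAssignmentGuard above.
std::string assignOrFallback(const std::string& ref, bool should_throw) {
  std::string value = "<unset>";
  try {
    AssignmentGuard guard{[&value, &ref] { value = ref; }};
    if (should_throw) {
      throw std::runtime_error("invalid value");
    }
    value = "converted:" + ref;
  } catch (const std::runtime_error&) {
    // value has already been rolled back to the raw input by the guard
  }
  return value;
}
```

Counting exceptions at construction (rather than using the older boolean `std::uncaught_exception()`) is what makes the guard safe even when it is itself created during unwinding.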









[jira] [Resolved] (NIFI-2903) GetKafka cannot handle null value in Kafka offset cause NullPointerException

2020-06-22 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-2903.
--
Resolution: Won't Fix

The GetKafka processor is deprecated, as it is designed for old versions of Kafka.

> GetKafka cannot handle null value in Kafka offset cause NullPointerException
> 
>
> Key: NIFI-2903
> URL: https://issues.apache.org/jira/browse/NIFI-2903
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Kurt Hung
>Priority: Major
>
> The GetKafka processor does not handle a null value in a Kafka offset, which
> causes the processor to hang and generate a NullPointerException after 30
> seconds. It's easy to reproduce this issue: I used kafka-python to insert the
> key-value pair ("abc", None) and created a GetKafka processor to consume the
> topic. The issue happens immediately, and moreover, the processor can't
> consume the remaining offsets.
> As a temporary workaround, I customized a GetKafka processor with a "Failure"
> relationship that handles null values in Kafka offsets, to prevent this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-3378) PutKafka retries flowfiles that are too large instead of routing them to failure

2020-06-22 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-3378.
--
Resolution: Won't Fix

The PutKafka processor is deprecated, as it is designed for old versions of Kafka.

> PutKafka retries flowfiles that are too large instead of routing them to 
> failure
> 
>
> Key: NIFI-3378
> URL: https://issues.apache.org/jira/browse/NIFI-3378
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Ryan Persaud
>Priority: Major
>
> When using PutKafka, if the content size exceeds 1048576, an uncaught 
> exception is thrown by org.apache.nifi.stream.io.util.StreamDemarcator.fill() 
> (see stack trace below).  This results in the offending flowfile being 
> retried repeatedly instead of being routed to failure.
> The exception is thrown because maxRequestSize in PublishingContext is 
> hardcoded to 1048576 which is the "kafka default."  In actuality, I believe 
> the default should be 100 for Kafka 0.8 (see message.max.bytes at 
> https://kafka.apache.org/08/documentation.html#brokerconfigs), but that's 
> another issue. If we know that any content larger than maxRequestSize is 
> always going to cause an exception, would it make sense to check the fileSize 
> early in PutKafka and avoid many needless function calls, exceptions and 
> retries?  For example, something like:
> {code}
> @Override
> protected boolean rendezvousWithKafka(ProcessContext context, 
> ProcessSession session) throws ProcessException {
> boolean processed = false;
> FlowFile flowFile = session.get();
> if (flowFile != null) {
> if (flowFile.getSize() > 1048576) {
> session.transfer(session.penalize(flowFile), REL_FAILURE);
> }
> else {
> flowFile = this.doRendezvousWithKafka(flowFile, context, 
> session);
> if (!this.isFailedFlowFile(flowFile)) {
> session.getProvenanceReporter().send(flowFile,
> context.getProperty(SEED_BROKERS).getValue() + "/"
> + 
> context.getProperty(TOPIC).evaluateAttributeExpressions(flowFile).getValue());
> session.transfer(flowFile, REL_SUCCESS);
> } else {
> session.transfer(session.penalize(flowFile), REL_FAILURE);
> }
> }
> processed = true;
> }
> return processed;
> }
> {code}
> Thoughts? A RouteOnAttribute processor that examines the fileSize attribute 
> could be used to 'protect' PutKafka, but that seems rather cumbersome.
> 2017-01-20 02:48:12,008 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.kafka.PutKafka 
> java.lang.IllegalStateException: Maximum allowed data size of 1048576 
> exceeded.
> at 
> org.apache.nifi.stream.io.util.StreamDemarcator.fill(StreamDemarcator.java:153)
>  ~[nifi-utils-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.stream.io.util.StreamDemarcator.nextToken(StreamDemarcator.java:105)
>  ~[nifi-utils-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.processors.kafka.KafkaPublisher.publish(KafkaPublisher.java:126)
>  ~[nifi-kafka-0-8-processors-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.processors.kafka.PutKafka$1.process(PutKafka.java:313) 
> ~[nifi-kafka-0-8-processors-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1880)
>  ~[nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1851)
>  ~[nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.processors.kafka.PutKafka.doRendezvousWithKafka(PutKafka.java:309)
>  ~[nifi-kafka-0-8-processors-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.processors.kafka.PutKafka.rendezvousWithKafka(PutKafka.java:285)
>  ~[nifi-kafka-0-8-processors-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.processors.kafka.AbstractKafkaProcessor.onTrigger(AbstractKafkaProcessor.java:76)
>  ~[nifi-kafka-0-8-processors-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064)
>  [nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579]
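
The early size check proposed in the report above can be reduced to a generic fail-fast gate: reject an oversize payload before attempting the publish, instead of letting the publish path throw and the flowfile be retried indefinitely. This is a sketch of the idea only (in C++ for consistency with the other snippets here; the actual processor is Java), with the 1048576 limit taken from the hardcoded maxRequestSize mentioned in the report:

```cpp
#include <cstddef>

// Routing outcome for a flowfile, mirroring the REL_SUCCESS/REL_FAILURE
// relationships in the suggested PutKafka change.
enum class Route { Success, Failure };

// Hardcoded maxRequestSize from the report.
constexpr std::size_t kMaxRequestSize = 1048576;

Route routeBySize(std::size_t payload_size) {
  if (payload_size > kMaxRequestSize) {
    return Route::Failure;  // too large: fail fast, route to failure
  }
  return Route::Success;  // within the limit: proceed with the publish
}
```

Any payload known up front to exceed the broker limit is routed to failure without the needless function calls, exceptions, and retries the report describes.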

[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #819: MINIFICPP-1265 - Adding missing license header for Windows ext

2020-06-22 Thread GitBox


arpadboda closed pull request #819:
URL: https://github.com/apache/nifi-minifi-cpp/pull/819


   







[jira] [Created] (NIFI-7571) Deleted processors in the Controller Service

2020-06-22 Thread Martin (Jira)
Martin created NIFI-7571:


 Summary: Deleted processors in the Controller Service
 Key: NIFI-7571
 URL: https://issues.apache.org/jira/browse/NIFI-7571
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.11.4
Reporter: Martin


I am not able to enable one of my controller services. When I check the
inactive processors, I get this. So far, so good.

!https://files.slack.com/files-pri/T0L9SDNRZ-F015K0ZQB2N/image.png!

Now I click on these processors:

!https://files.slack.com/files-pri/T0L9SDNRZ-F015C8H6UFQ/image.png!

I then tried to find the processors by ID with the search functionality in the
UI, but got no results.

!https://files.slack.com/files-pri/T0L9SDNRZ-F015DMYLZML/image.png!

I also checked the processors in the list:

!https://files.slack.com/files-pri/T0L9SDNRZ-F0155MBU079/image.png!

 


