Re: NIFI HandleHttpRequest API - Health Check when API or Node Down

2020-09-04 Thread Etienne Jouvin
I do not know everything, but if I understood correctly, NiFi is based on a REST
API. For example, everything you do in the GUI is done through REST calls.
So I guess you can query each node to check whether the NiFi instance is up.

But this will not give you the status of your custom HandleHttpRequest. The NiFi
instance can be up while your processor is stopped or disabled.
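
For illustration, here is a minimal sketch (plain Java; the host, port, and
/ping path are assumptions matching the suggestion later in this thread) of the
probe a load balancer health check could run against each node:

import java.net.HttpURLConnection;
import java.net.URL;

public class NodeHealthProbe {

    // Returns true only if the node answers 200 on /ping within the timeout.
    // A 200 here also proves the HandleHttpRequest processor is running,
    // not just that the NiFi JVM is up.
    static boolean isNodeUp(String host, int port) {
        try {
            URL url = new URL("http://" + host + ":" + port + "/ping");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            conn.setRequestMethod("GET");
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            // connection refused, timeout, etc. => treat the node as down
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isNodeUp("node-1", 5112) ? "UP" : "DOWN");
    }
}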




On Fri, Sep 4, 2020 at 18:26, jgunvaldson  wrote:

> It seems a bit like a chicken-and-egg thing. Using ‘anything’ configured
> on the disconnected node as a health check is not unlike trying to get to
> the API (listening port) itself? Kinda.
>
> Anyway
>
> I was hoping that the NiFi infrastructure had a generalized, centralized
> endpoint (REST API or other) that would tell me whether this node is up and
> listening on this port, and that it could be called by a load balancer.
>
> ~John
>
>
>
> On Sep 4, 2020, at 9:19 AM, Etienne Jouvin 
> wrote:
>
> Since you implemented a HandleHttpRequest listener, why don't you
> configure a handler on something like http(s)://server/ping
> and have the response just be pong?
>
>
>
> On Fri, Sep 4, 2020 at 18:02, jgunvaldson  wrote:
>
>> Hi,
>>
>> Our network administrators are unable to wire up an advanced load balancer
>> (AWS Application Load Balancer or Apache reverse proxy) to leverage a
>> NiFi API that may be listening on a port across several nodes.
>>
>> For instance, a HandleHttpRequest listening on Node-1 on port 5112, Node-2
>> on 5112, Node-3 on 5112, and so on and so forth…
>>
>> In the event a node is down (or the API stops listening; it happens) or
>> disconnected, a call to that node and port will fail and be a pretty bad
>> experience for the customer.
>>
>> So
>>
>> What we would like to have is an external Load Balancer be able to use
>> Round Robin (Advanced Features) to redirect the request to an UP Node, but
>> to do this the Load Balancer needs a proper health check.
>>
>> What is a proper “Health Check” for this scenario? How would it be
>> created and wired up?
>>
>> Right now, a request to an API that is hosted on NiFi and proxied by our
>> API Manager (WSO2) will fail on the down node and not recover - the user
>> will probably get a 500. APIM is not a good load balancer.
>>
>> Thanks in advance for this discussion
>>
>>
>> Best Regards
>> John Gunvaldson
>>
>>
>


Re: NIFI HandleHttpRequest API - Health Check when API or Node Down

2020-09-04 Thread Etienne Jouvin
Since you implemented a HandleHttpRequest listener, why don't you
configure a handler on something like http(s)://server/ping
and have the response just be pong?



On Fri, Sep 4, 2020 at 18:02, jgunvaldson  wrote:

> Hi,
>
> Our network administrators are unable to wire up an advanced load balancer
> (AWS Application Load Balancer or Apache reverse proxy) to leverage a
> NiFi API that may be listening on a port across several nodes.
>
> For instance, a HandleHttpRequest listening on Node-1 on port 5112, Node-2
> on 5112, Node-3 on 5112, and so on and so forth…
>
> In the event a node is down (or the API stops listening; it happens) or
> disconnected, a call to that node and port will fail and be a pretty bad
> experience for the customer.
>
> So
>
> What we would like to have is an external Load Balancer be able to use
> Round Robin (Advanced Features) to redirect the request to an UP Node, but
> to do this the Load Balancer needs a proper health check.
>
> What is a proper “Health Check” for this scenario? How would it be created
> and wired up?
>
> Right now, a request to an API that is hosted on NiFi and proxied by our
> API Manager (WSO2) will fail on the down node and not recover - the user
> will probably get a 500. APIM is not a good load balancer.
>
> Thanks in advance for this discussion
>
>
> Best Regards
> John Gunvaldson
>
>


Re: Need help SSL LDAP Nifi Registry

2020-06-30 Thread Etienne Jouvin
Got it thanks to
https://community.cloudera.com/t5/Community-Articles/Setting-Up-a-Secure-NiFi-to-Integrate-with-a-Secure-NiFi/ta-p/247765

Next steps would be to have NiFi and Registry on different hosts and see
how connections are made.



On Tue, Jun 30, 2020 at 11:43, Etienne Jouvin  wrote:

> But now, I have NiFi and Registry with secure access (LDAP + SSL)
>
> I need to find out how to configure the Registry in NiFi, because for now
> I did not have to specify login.
> And even if my first bucket is Public, it is not accessible from NiFi.
>
>
> On Tue, Jun 30, 2020 at 11:29, Etienne Jouvin  wrote:
>
>> Hi Josef.
>>
>> No I did not try that.
>> And well done, with that I can access the UI, and can connect with LDAP
>> identity.
>>
>> Thanks a lot.
>>
>> Cheers
>>
>> Etienne
>>
>>
>>
>> On Tue, Jun 30, 2020 at 11:15,  wrote:
>>
>>> Hi Etienne
>>>
>>>
>>>
>>> Did you try the following in «nifi-registry.properties»:
>>>
>>> nifi.registry.security.needClientAuth=false
>>>
>>>
>>>
>>> Cheers Josef
>>>
>>>
>>>
>>>
>>>
>>> *From: *Etienne Jouvin 
>>> *Reply to: *"users@nifi.apache.org" 
>>> *Date: *Tuesday, 30 June 2020 at 10:46
>>> *To: *"users@nifi.apache.org" 
>>> *Subject: *Need help SSL LDAP Nifi Registry
>>>
>>>
>>>
>>> Hello all.
>>>
>>>
>>>
>>> I am trying to setup LDAP authentication on NiFi Registry.
>>>
>>> I followed some links, like
>>> https://community.cloudera.com/t5/Community-Articles/Setting-Up-a-Secure-Apache-NiFi-Registry/ta-p/247753
>>>
>>>
>>>
>>> But each time, it requires that a certificate is installed on client
>>> side. I had this "problem" for NiFi but because I did not provided
>>> the nifi.security.user.login.identity.provider
>>>
>>>
>>>
>>> But for the registry, I remember that and did it.
>>>
>>>
>>>
>>> For summary, what I have in nifi-registry.properties
>>>
>>> nifi.registry.security.keystore=./conf/keystore.jks
>>> nifi.registry.security.keystoreType=jks
>>> nifi.registry.security.keystorePasswd=password
>>> nifi.registry.security.keyPasswd=password
>>> nifi.registry.security.truststore=./conf/truststore.jks
>>> nifi.registry.security.truststoreType=jks
>>> nifi.registry.security.truststorePasswd=password
>>>
>>>
>>>
>>> (All of those informations were given by the tls-toolkit, when executed
>>> for NiFi)
>>>
>>> Then I put this
>>>
>>> #nifi.registry.security.identity.provider=
>>> nifi.registry.security.identity.provider=ldap-identity-provider
>>>
>>>
>>>
>>> In the file identity-providers.xml
>>>
>>> I setup the LDAP provider
>>>
>>> <provider>
>>>     <identifier>ldap-identity-provider</identifier>
>>>     <class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
>>>     <property name="Authentication Strategy">SIMPLE</property>
>>>
>>>     <property name="Manager DN">uid=admin,ou=system</property>
>>>     <property name="Manager Password">secret</property>
>>>
>>>     <property name="TLS - Keystore"></property>
>>>     <property name="TLS - Keystore Password"></property>
>>>     <property name="TLS - Keystore Type"></property>
>>>     <property name="TLS - Truststore"></property>
>>>     <property name="TLS - Truststore Password"></property>
>>>     <property name="TLS - Truststore Type"></property>
>>>     <property name="TLS - Client Auth"></property>
>>>     <property name="TLS - Protocol"></property>
>>>     <property name="TLS - Shutdown Gracefully"></property>
>>>
>>>     <property name="Referral Strategy">FOLLOW</property>
>>>     <property name="Connect Timeout">10 secs</property>
>>>     <property name="Read Timeout">10 secs</property>
>>>
>>>     <property name="Url">ldap://localhost:10389</property>
>>>     <property name="User Search Base">ou=users,dc=test,dc=ch</property>
>>>     <property name="User Search Filter">uid={0}</property>
>>>
>>>     <property name="Identity Strategy">USE_DN</property>
>>>     <property name="Authentication Expiration">12 hours</property>
>>> </provider>
>>>
>>>
>>>
>>> And finally in authorizers.xml
>>>
>>> <userGroupProvider>
>>>     <identifier>file-user-group-provider</identifier>
>>>     <class>org.apache.nifi.registry.security.authorization.file.FileUserGroupProvider</class>
>>>     <property name="Users File">./conf/users.xml</property>
>>>     <property name="Initial User Identity 1">uid=firstuser,ou=users,dc=test,dc=ch</property>
>>> </userGroupProvider>
>>>
>>> <accessPolicyProvider>
>>>     <identifier>file-access-policy-provider</identifier>
>>>     <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
>>>     <property name="User Group Provider">file-user-group-provider</property>
>>>     <property name="Authorizations File">./conf/authorizations.xml</property>
>>>     <property name="Initial Admin Identity">uid=firstuser,ou=users,dc=test,dc=ch</property>
>>> </accessPolicyProvider>
>>>
>>>
>>>
>>>
>>>
>>> Starting Registry is OK.
>>>
>>>
>>>
>>> But when I want to access throw Chrome, I have a certificate error
>>> : ERR_BAD_SSL_CLIENT_AUTH_CERT
>>>
>>>
>>>
>>> How can I force the authentication to not request a client side
>>> certificate ?
>>>
>>>
>>>
>>> Thanks for any input.
>>>
>>>
>>>
>>> Etienne Jouvin
>>>
>>>
>>>
>>


Re: Need help SSL LDAP Nifi Registry

2020-06-30 Thread Etienne Jouvin
But now, I have NiFi and Registry with secure access (LDAP + SSL)

I need to find out how to configure the Registry in NiFi, because so far I
did not have to specify a login.
And even though my first bucket is public, it is not accessible from NiFi.


On Tue, Jun 30, 2020 at 11:29, Etienne Jouvin  wrote:

> Hi Josef.
>
> No I did not try that.
> And well done, with that I can access the UI, and can connect with LDAP
> identity.
>
> Thanks a lot.
>
> Cheers
>
> Etienne
>
>
>
> On Tue, Jun 30, 2020 at 11:15,  wrote:
>
>> Hi Etienne
>>
>>
>>
>> Did you try the following in «nifi-registry.properties»:
>>
>> nifi.registry.security.needClientAuth=false
>>
>>
>>
>> Cheers Josef
>>
>>
>>
>>
>>
>> *From: *Etienne Jouvin 
>> *Reply to: *"users@nifi.apache.org" 
>> *Date: *Tuesday, 30 June 2020 at 10:46
>> *To: *"users@nifi.apache.org" 
>> *Subject: *Need help SSL LDAP Nifi Registry
>>
>>
>>
>> Hello all.
>>
>>
>>
>> I am trying to setup LDAP authentication on NiFi Registry.
>>
>> I followed some links, like
>> https://community.cloudera.com/t5/Community-Articles/Setting-Up-a-Secure-Apache-NiFi-Registry/ta-p/247753
>>
>>
>>
>> But each time, it requires that a certificate is installed on client
>> side. I had this "problem" for NiFi but because I did not provided
>> the nifi.security.user.login.identity.provider
>>
>>
>>
>> But for the registry, I remember that and did it.
>>
>>
>>
>> For summary, what I have in nifi-registry.properties
>>
>> nifi.registry.security.keystore=./conf/keystore.jks
>> nifi.registry.security.keystoreType=jks
>> nifi.registry.security.keystorePasswd=password
>> nifi.registry.security.keyPasswd=password
>> nifi.registry.security.truststore=./conf/truststore.jks
>> nifi.registry.security.truststoreType=jks
>> nifi.registry.security.truststorePasswd=password
>>
>>
>>
>> (All of those informations were given by the tls-toolkit, when executed
>> for NiFi)
>>
>> Then I put this
>>
>> #nifi.registry.security.identity.provider=
>> nifi.registry.security.identity.provider=ldap-identity-provider
>>
>>
>>
>> In the file identity-providers.xml
>>
>> I setup the LDAP provider
>>
>> <provider>
>>     <identifier>ldap-identity-provider</identifier>
>>     <class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
>>     <property name="Authentication Strategy">SIMPLE</property>
>>
>>     <property name="Manager DN">uid=admin,ou=system</property>
>>     <property name="Manager Password">secret</property>
>>
>>     <property name="TLS - Keystore"></property>
>>     <property name="TLS - Keystore Password"></property>
>>     <property name="TLS - Keystore Type"></property>
>>     <property name="TLS - Truststore"></property>
>>     <property name="TLS - Truststore Password"></property>
>>     <property name="TLS - Truststore Type"></property>
>>     <property name="TLS - Client Auth"></property>
>>     <property name="TLS - Protocol"></property>
>>     <property name="TLS - Shutdown Gracefully"></property>
>>
>>     <property name="Referral Strategy">FOLLOW</property>
>>     <property name="Connect Timeout">10 secs</property>
>>     <property name="Read Timeout">10 secs</property>
>>
>>     <property name="Url">ldap://localhost:10389</property>
>>     <property name="User Search Base">ou=users,dc=test,dc=ch</property>
>>     <property name="User Search Filter">uid={0}</property>
>>
>>     <property name="Identity Strategy">USE_DN</property>
>>     <property name="Authentication Expiration">12 hours</property>
>> </provider>
>>
>>
>>
>> And finally in authorizers.xml
>>
>> <userGroupProvider>
>>     <identifier>file-user-group-provider</identifier>
>>     <class>org.apache.nifi.registry.security.authorization.file.FileUserGroupProvider</class>
>>     <property name="Users File">./conf/users.xml</property>
>>     <property name="Initial User Identity 1">uid=firstuser,ou=users,dc=test,dc=ch</property>
>> </userGroupProvider>
>>
>> <accessPolicyProvider>
>>     <identifier>file-access-policy-provider</identifier>
>>     <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
>>     <property name="User Group Provider">file-user-group-provider</property>
>>     <property name="Authorizations File">./conf/authorizations.xml</property>
>>     <property name="Initial Admin Identity">uid=firstuser,ou=users,dc=test,dc=ch</property>
>> </accessPolicyProvider>
>>
>>
>>
>>
>>
>> Starting Registry is OK.
>>
>>
>>
>> But when I want to access throw Chrome, I have a certificate error
>> : ERR_BAD_SSL_CLIENT_AUTH_CERT
>>
>>
>>
>> How can I force the authentication to not request a client side
>> certificate ?
>>
>>
>>
>> Thanks for any input.
>>
>>
>>
>> Etienne Jouvin
>>
>>
>>
>


Re: Need help SSL LDAP Nifi Registry

2020-06-30 Thread Etienne Jouvin
Hi Josef.

No, I did not try that.
And well done: with that I can access the UI and connect with my LDAP
identity.

Thanks a lot.

Cheers

Etienne



On Tue, Jun 30, 2020 at 11:15,  wrote:

> Hi Etienne
>
>
>
> Did you try the following in «nifi-registry.properties»:
>
> nifi.registry.security.needClientAuth=false
>
>
>
> Cheers Josef
>
>
>
>
>
> *From: *Etienne Jouvin 
> *Reply to: *"users@nifi.apache.org" 
> *Date: *Tuesday, 30 June 2020 at 10:46
> *To: *"users@nifi.apache.org" 
> *Subject: *Need help SSL LDAP Nifi Registry
>
>
>
> Hello all.
>
>
>
> I am trying to setup LDAP authentication on NiFi Registry.
>
> I followed some links, like
> https://community.cloudera.com/t5/Community-Articles/Setting-Up-a-Secure-Apache-NiFi-Registry/ta-p/247753
>
>
>
> But each time, it requires that a certificate is installed on client side.
> I had this "problem" for NiFi but because I did not provided
> the nifi.security.user.login.identity.provider
>
>
>
> But for the registry, I remember that and did it.
>
>
>
> For summary, what I have in nifi-registry.properties
>
> nifi.registry.security.keystore=./conf/keystore.jks
> nifi.registry.security.keystoreType=jks
> nifi.registry.security.keystorePasswd=password
> nifi.registry.security.keyPasswd=password
> nifi.registry.security.truststore=./conf/truststore.jks
> nifi.registry.security.truststoreType=jks
> nifi.registry.security.truststorePasswd=password
>
>
>
> (All of those informations were given by the tls-toolkit, when executed
> for NiFi)
>
> Then I put this
>
> #nifi.registry.security.identity.provider=
> nifi.registry.security.identity.provider=ldap-identity-provider
>
>
>
> In the file identity-providers.xml
>
> I setup the LDAP provider
>
> <provider>
>     <identifier>ldap-identity-provider</identifier>
>     <class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
>     <property name="Authentication Strategy">SIMPLE</property>
>
>     <property name="Manager DN">uid=admin,ou=system</property>
>     <property name="Manager Password">secret</property>
>
>     <property name="TLS - Keystore"></property>
>     <property name="TLS - Keystore Password"></property>
>     <property name="TLS - Keystore Type"></property>
>     <property name="TLS - Truststore"></property>
>     <property name="TLS - Truststore Password"></property>
>     <property name="TLS - Truststore Type"></property>
>     <property name="TLS - Client Auth"></property>
>     <property name="TLS - Protocol"></property>
>     <property name="TLS - Shutdown Gracefully"></property>
>
>     <property name="Referral Strategy">FOLLOW</property>
>     <property name="Connect Timeout">10 secs</property>
>     <property name="Read Timeout">10 secs</property>
>
>     <property name="Url">ldap://localhost:10389</property>
>     <property name="User Search Base">ou=users,dc=test,dc=ch</property>
>     <property name="User Search Filter">uid={0}</property>
>
>     <property name="Identity Strategy">USE_DN</property>
>     <property name="Authentication Expiration">12 hours</property>
> </provider>
>
>
>
> And finally in authorizers.xml
>
> <userGroupProvider>
>     <identifier>file-user-group-provider</identifier>
>     <class>org.apache.nifi.registry.security.authorization.file.FileUserGroupProvider</class>
>     <property name="Users File">./conf/users.xml</property>
>     <property name="Initial User Identity 1">uid=firstuser,ou=users,dc=test,dc=ch</property>
> </userGroupProvider>
>
> <accessPolicyProvider>
>     <identifier>file-access-policy-provider</identifier>
>     <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
>     <property name="User Group Provider">file-user-group-provider</property>
>     <property name="Authorizations File">./conf/authorizations.xml</property>
>     <property name="Initial Admin Identity">uid=firstuser,ou=users,dc=test,dc=ch</property>
> </accessPolicyProvider>
>
>
>
>
>
> Starting Registry is OK.
>
>
>
> But when I want to access throw Chrome, I have a certificate error
> : ERR_BAD_SSL_CLIENT_AUTH_CERT
>
>
>
> How can I force the authentication to not request a client side
> certificate ?
>
>
>
> Thanks for any input.
>
>
>
> Etienne Jouvin
>
>
>


Need help SSL LDAP Nifi Registry

2020-06-30 Thread Etienne Jouvin
Hello all.

I am trying to set up LDAP authentication on NiFi Registry.
I followed some links, like
https://community.cloudera.com/t5/Community-Articles/Setting-Up-a-Secure-Apache-NiFi-Registry/ta-p/247753

But each time, it requires that a certificate be installed on the client side.
I had this "problem" with NiFi, but that was because I had not provided
the nifi.security.user.login.identity.provider property.

But for the Registry, I remembered that and did it.

In summary, here is what I have in nifi-registry.properties:
nifi.registry.security.keystore=./conf/keystore.jks
nifi.registry.security.keystoreType=jks
nifi.registry.security.keystorePasswd=password
nifi.registry.security.keyPasswd=password
nifi.registry.security.truststore=./conf/truststore.jks
nifi.registry.security.truststoreType=jks
nifi.registry.security.truststorePasswd=password

(All of this information was generated by the tls-toolkit when executed for
NiFi.)
Then I put this
#nifi.registry.security.identity.provider=
nifi.registry.security.identity.provider=ldap-identity-provider

In the file identity-providers.xml
I set up the LDAP provider:

<provider>
    <identifier>ldap-identity-provider</identifier>
    <class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>

    <property name="Manager DN">uid=admin,ou=system</property>
    <property name="Manager Password">secret</property>

    <property name="TLS - Keystore"></property>
    <property name="TLS - Keystore Password"></property>
    <property name="TLS - Keystore Type"></property>
    <property name="TLS - Truststore"></property>
    <property name="TLS - Truststore Password"></property>
    <property name="TLS - Truststore Type"></property>
    <property name="TLS - Client Auth"></property>
    <property name="TLS - Protocol"></property>
    <property name="TLS - Shutdown Gracefully"></property>

    <property name="Referral Strategy">FOLLOW</property>
    <property name="Connect Timeout">10 secs</property>
    <property name="Read Timeout">10 secs</property>

    <property name="Url">ldap://localhost:10389</property>
    <property name="User Search Base">ou=users,dc=test,dc=ch</property>
    <property name="User Search Filter">uid={0}</property>

    <property name="Identity Strategy">USE_DN</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>

And finally in authorizers.xml

<userGroupProvider>
    <identifier>file-user-group-provider</identifier>
    <class>org.apache.nifi.registry.security.authorization.file.FileUserGroupProvider</class>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial User Identity 1">uid=firstuser,ou=users,dc=test,dc=ch</property>
</userGroupProvider>

<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
    <property name="User Group Provider">file-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">uid=firstuser,ou=users,dc=test,dc=ch</property>
</accessPolicyProvider>





Starting Registry is OK.

But when I try to access it through Chrome, I get a certificate error:
ERR_BAD_SSL_CLIENT_AUTH_CERT

How can I configure authentication so that it does not request a client-side
certificate?

Thanks for any input.

Etienne Jouvin


Re: Messed with PutS3Object, add signature on top on content

2020-06-26 Thread Etienne Jouvin
Hello all.

Got it.
It was a matter of signature version.
Because I use s3Ninja, I needed to set it to V2:
[image: image.png]
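
For reference, the equivalent of that setting in plain AWS SDK for Java v1
code looks roughly like the sketch below (the s3Ninja endpoint, credentials,
and key are placeholders, not values I have verified):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3NinjaUpload {
    public static void main(String[] args) {
        // Force the legacy V2 signer so the client does not send
        // aws-chunked V4 uploads that s3Ninja cannot verify.
        ClientConfiguration config = new ClientConfiguration()
                .withSignerOverride("S3SignerType"); // "S3SignerType" = V2

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "http://localhost:9444/s3", "us-east-1")) // placeholder endpoint
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("access-key", "secret-key")))
                .withPathStyleAccessEnabled(true) // virtual-host style rarely works locally
                .build();

        s3.putObject("renditions", "test.pdf", new java.io.File("test.pdf"));
    }
}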





On Fri, Jun 26, 2020 at 15:43, Etienne Jouvin  wrote:

> Hello all.
>
> I am trying to use S3 storage (s3Ninja for "unit test") and I am really
> messed about how it works.
> Simple process
> [image: image.png]
>
> I fetch a simple PDF content and try to post to s3.
> But in the resulting content, I found the first line :
>
> 2;chunk-signature=c0d176ad5f54df79ca61a1de74d71731f3bf2e92972d7c5ca25f79d83b99b63f
>
> I am trying to google this, and I found that it may come from chunk upload.
> But the file is small, less than 1 Mo.
>
> It is also the case with simple text file.
>
> I use Wireshark to track traffic, and I found that there is the signature
> added during the call.
>
> Also, I always have an exception saying the signature is not valid :
> com.amazonaws.SdkClientException: Unable to verify integrity of data
> upload. Client calculated content hash (contentMD5:
> RJHSWwvsGtV1rweS3gmThQ== in base 64) didn't match hash (etag:
> 9a86aae10bb9318d22c3ddea1f93fd07 in hex) calculated by Amazon S3.  You may
> need to delete the data stored in Amazon S3. (metadata.contentMD5: null,
> md5DigestStream:
> com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@239e8a25,
> bucketName: renditions, key: 43cc8b40-0f36-4bb4-9ad6-ea079ecbd5a1.pdf)
>
> Any idea or input for investigation ?
>
> Regards
>
> Etienne Jouvin
>
>


Messed with PutS3Object, add signature on top on content

2020-06-26 Thread Etienne Jouvin
Hello all.

I am trying to use S3 storage (s3Ninja for "unit tests") and I am really
confused about how it works.
Simple process
[image: image.png]

I fetch a simple PDF content and try to post to s3.
But in the resulting content, I found the first line :
2;chunk-signature=c0d176ad5f54df79ca61a1de74d71731f3bf2e92972d7c5ca25f79d83b99b63f

I tried to google this, and I found that it may come from chunked upload.
But the file is small, less than 1 MB.

It is also the case with a simple text file.

I used Wireshark to track the traffic, and I found that the signature is
added during the call.

Also, I always have an exception saying the signature is not valid :
com.amazonaws.SdkClientException: Unable to verify integrity of data
upload. Client calculated content hash (contentMD5:
RJHSWwvsGtV1rweS3gmThQ== in base 64) didn't match hash (etag:
9a86aae10bb9318d22c3ddea1f93fd07 in hex) calculated by Amazon S3.  You may
need to delete the data stored in Amazon S3. (metadata.contentMD5: null,
md5DigestStream:
com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@239e8a25,
bucketName: renditions, key: 43cc8b40-0f36-4bb4-9ad6-ea079ecbd5a1.pdf)

Any idea or input for investigation ?

Regards

Etienne Jouvin


Re: Custom service in NAR generation failure

2020-06-26 Thread Etienne Jouvin
Some update.

I wanted to reproduce the error in a fresh new project, but there was no way
to trigger it again.
So for the moment, I am not able to show an example.

I will give it a try later.

Sorry about that


On Fri, Jun 19, 2020 at 15:48, Bryan Bende  wrote:

> I haven't fully evaluated the fix, at a quick glance it seems correct, but
> I'm trying to figure out if something else is not totally correct in your
> poms because many other projects are using the latest NAR plugin and not
> having this issue, so there must be some difference that makes it work in
> some cases.
>
> We have Maven archetypes for the processor and service bundles. I wonder
> if you could compare the resulting projects/poms with yours to see what
> seems different?
>
>
> https://cwiki.apache.org/confluence/display/NIFI/Maven+Projects+for+Extensions
>
>
> On Fri, Jun 19, 2020 at 9:30 AM Etienne Jouvin 
> wrote:
>
>> My parent pom has this as declaration :
>>
>> <parent>
>>     <groupId>org.apache.nifi</groupId>
>>     <artifactId>nifi-nar-bundles</artifactId>
>>     <version>1.11.4</version>
>> </parent>
>>
>> When I studied the maven plugin, I found the following in class
>> org.apache.nifi.extension.definition.extraction.ExtensionClassLoaderFactory.java
>> private String determineProvidedEntityVersion(final Set<Artifact>
>> artifacts, final String groupId, final String artifactId) throws
>> ProjectBuildingException, MojoExecutionException {
>> getLog().debug("Determining provided entities for " + groupId +
>> ":" + artifactId);
>> for (final Artifact artifact : artifacts) {
>> if (artifact.getGroupId().equals(groupId) &&
>> artifact.getArtifactId().equals(artifactId)) {
>> return artifact.getVersion();
>> }
>> }
>> return findProvidedDependencyVersion(artifacts, groupId,
>> artifactId);
>> }
>> In this case, it searches for the artifact in the dependencies.
>>
>> If not found, it checks the provided dependencies (in fact the artifacts
>> that the current artifact depends on, if I understood correctly).
>> And the function is:
>> private String findProvidedDependencyVersion(final Set<Artifact>
>> artifacts, final String groupId, final String artifactId) {
>> final ProjectBuildingRequest projectRequest = new
>> DefaultProjectBuildingRequest();
>> projectRequest.setRepositorySession(repoSession);
>> projectRequest.setSystemProperties(System.getProperties());
>> projectRequest.setLocalRepository(localRepo);
>> for (final Artifact artifact : artifacts) {
>> final Set<Artifact> artifactDependencies = new HashSet<>();
>> try {
>> final ProjectBuildingResult projectResult =
>> projectBuilder.build(artifact, projectRequest);
>> gatherArtifacts(projectResult.getProject(),
>> artifactDependencies);
>> getLog().debug("For Artifact " + artifact + ", found the
>> following dependencies:");
>> artifactDependencies.forEach(dep ->
>> getLog().debug(dep.toString()));
>>
>> for (final Artifact dependency : artifactDependencies) {
>> if (dependency.getGroupId().equals(groupId) &&
>> dependency.getArtifactId().equals(artifactId)) {
>> getLog().debug("Found version of " + groupId +
>> ":" + artifactId + " to be " + artifact.getVersion());
>> return artifact.getVersion();
>> }
>> }
>> } catch (final Exception e) {
>> getLog().warn("Unable to construct Maven Project for " +
>> artifact + " when attempting to determine the expected version of NiFi
>> API");
>> getLog().debug("Unable to construct Maven Project for " +
>> artifact + " when attempting to determine the expected version of NiFi
>> API", e);
>> }
>> }
>> return null;
>> }
>>
>> And again, if I understood the code correctly, it searches the artifacts to
>> match the one with the specific group and artifact ids, for example nifi-api.
>> But the version returned is not the one from the found dependency, but from
>> the source artifact.
>>
>> So that's why I explicitly set dependencies in the artifact pom to work
>> around the difficulty temporarily.
>>
>> In the PR, I made the following change :
>> private String findProvidedDependencyVersion(final Set<Artifact>
>> artifacts, final String gro

Re: How to upgrade custom nar version

2020-06-19 Thread Etienne Jouvin
So simple, just drag the new version, restart and everything is updated.

Nice, thanks.

On Fri, Jun 19, 2020 at 18:53, Bryan Bende  wrote:

> If you don't need to run processors from 1.0 and 1.1 at the same time,
> then you can stop NiFi and remove the old NAR and add the new one, then
> start again, and all the existing processors would be auto-upgraded.
>
> If you need some on 1.0 and some on 1.1, you can add the 1.1 NAR to the
> lib directory and restart, or to the auto-load directory which does not
> require a restart, but requires a hard refresh of the UI. Then you go to
> each processor from 1.0 and right-click and select "Change Version" and
> pick 1.1.
>
> There is currently no bulk upgrade for processors, but would be a nice
> future enhancement.
>
>
>
> On Fri, Jun 19, 2020 at 12:46 PM Etienne Jouvin 
> wrote:
>
>> Hello all.
>>
>> Imagine I have a custom nar, with custom processors, installed in version
>> 1.0.
>> Then I release a new version 1.1.
>> How do I migrate existing processes to take the new version 1.1 ?
>> Should I delete the nar in version 1.0 and just push the 1.1 ?
>> Should I go to each process ?
>>
>> In fact, there is like NiFi upgrade, but for now I did not see how to
>> update all process with a new version.
>>
>> Regards
>>
>> Etienne Jouvin
>>
>


How to upgrade custom nar version

2020-06-19 Thread Etienne Jouvin
Hello all.

Imagine I have a custom nar, with custom processors, installed in version
1.0.
Then I release a new version 1.1.
How do I migrate existing processors to take the new version 1.1?
Should I delete the NAR in version 1.0 and just push the 1.1?
Should I go to each processor?

In fact, it is like a NiFi upgrade, but so far I have not seen how to
update all processors to a new version.

Regards

Etienne Jouvin


Re: Custom service in NAR generation failure

2020-06-19 Thread Etienne Jouvin
expected version of NiFi
API", e);
}
}
return null;
}

I do not know if this is the correct fix; I will await the pull request review.

Etienne



On Fri, Jun 19, 2020 at 15:19, Bryan Bende  wrote:

> If you are not using nifi-nar-bundles as your parent (which is fine), then
> you should be explicitly setting versions for nifi-api and
> nifi-framework-api.
>
> Otherwise how would it know to use 1.11.4 ?
>
>
> On Fri, Jun 19, 2020 at 9:09 AM Etienne Jouvin 
> wrote:
>
>> Ok, will try to just post simple thing.
>>
>> The project has the following :
>> <project xmlns="http://maven.apache.org/POM/4.0.0"
>>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>>          https://maven.apache.org/xsd/maven-4.0.0.xsd">
>>     <modelVersion>4.0.0</modelVersion>
>>     <parent>
>>         <groupId>ch.amexio.nifi.transform</groupId>
>>         <artifactId>nifi-transform-nar-bundles</artifactId>
>>         <version>0.0.1-SNAPSHOT</version>
>>     </parent>
>>
>>     <artifactId>nifi-transform-service-api</artifactId>
>>     <packaging>jar</packaging>
>>
>>     <dependencies>
>>         <dependency>
>>             <groupId>org.apache.nifi</groupId>
>>             <artifactId>nifi-api</artifactId>
>>         </dependency>
>>     </dependencies>
>> </project>
>>
>> the nar project ::
>>
>> <project xmlns="http://maven.apache.org/POM/4.0.0"
>>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>>          https://maven.apache.org/xsd/maven-4.0.0.xsd">
>>     <modelVersion>4.0.0</modelVersion>
>>     <parent>
>>         <groupId>ch.amexio.nifi.transform</groupId>
>>         <artifactId>nifi-transform-nar-bundles</artifactId>
>>         <version>0.0.1-SNAPSHOT</version>
>>     </parent>
>>
>>     <artifactId>nifi-transform-service-api-nar</artifactId>
>>     <packaging>nar</packaging>
>>
>>     <properties>
>>         <maven.javadoc.skip>true</maven.javadoc.skip>
>>         <source.skip>true</source.skip>
>>     </properties>
>>
>>     <dependencies>
>>         <dependency>
>>             <groupId>ch.amexio.nifi.transform</groupId>
>>             <artifactId>nifi-transform-service-api</artifactId>
>>             <version>0.0.1-SNAPSHOT</version>
>>             <scope>compile</scope>
>>         </dependency>
>>
>>         <dependency>
>>             <groupId>org.apache.nifi</groupId>
>>             <artifactId>nifi-standard-services-api-nar</artifactId>
>>             <type>nar</type>
>>         </dependency>
>>     </dependencies>
>> </project>
>>
>> It was then in failure.
>> What I did, is to change the my parent pom and add the following in
>> dependencies
>> <dependencies>
>>     <dependency>
>>         <groupId>org.apache.nifi</groupId>
>>         <artifactId>nifi-api</artifactId>
>>     </dependency>
>>     <dependency>
>>         <groupId>org.apache.nifi</groupId>
>>         <artifactId>nifi-framework-api</artifactId>
>>     </dependency>
>> </dependencies>
>>
>>
>> By the way, I submit a Pull Request on nifi-maven
>> https://github.com/apache/nifi-maven/pull/13
>> With following change :
>> https://github.com/apache/nifi-maven/pull/13/files
>>
>> Etienne
>>
>>
>>
>> On Fri, Jun 19, 2020 at 13:52, Mike Thomsen  wrote:
>>
>>> Without seeing your POM(s), it could be several things. Try posting your
>>> POMs here or as a GitHub gist.
>>>
>>> On Fri, Jun 19, 2020 at 3:36 AM Etienne Jouvin 
>>> wrote:
>>>
>>>> Hello all.
>>>>
>>>> Do not know where to post the message, guide me if I should send to
>>>> another mailing list.
>>>> A simple summary in first step.
>>>> I created a simple project to build a new service.
>>>> I extend the nifi-nar-bundles artifact with version 1.11.4.
>>>> My project version is currently 0.0.1-SNAPSHOT.
>>>>
>>>> During NAR generation, it failed for the documentation with message :
>>>> org.apache.maven.plugin.MojoExecutionException: Failed to create
>>>> Extension Documentation
>>>> Caused by: org.apache.maven.plugin.MojoExecutionException: Could not
>>>> resolve local dependency org.apache.nifi:nifi-api:jar:0.0.1-SNAPSHOT
>>>>
>>>> I am currently looking in source code of nifi-maven project, specially
>>>> class ExtensionClassLoaderFactory.
>>>>
>>>> What I do not understand is why it searches for version 0.0.1-SNAPSHOT
>>>> on nifi-api, and not the version 1.11.4
>>>>
>>>> Let me know if I should discuss about this in another thread.
>>>>
>>>> Regards
>>>>
>>>> Etienne
>>>>
>>>


Re: Custom service in NAR generation failure

2020-06-19 Thread Etienne Jouvin
Ok, I will try to just post a simple thing.

The project has the following :
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>ch.amexio.nifi.transform</groupId>
        <artifactId>nifi-transform-nar-bundles</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </parent>

    <artifactId>nifi-transform-service-api</artifactId>
    <packaging>jar</packaging>

    <dependencies>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-api</artifactId>
        </dependency>
    </dependencies>
</project>

The NAR project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>ch.amexio.nifi.transform</groupId>
        <artifactId>nifi-transform-nar-bundles</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </parent>

    <artifactId>nifi-transform-service-api-nar</artifactId>
    <packaging>nar</packaging>

    <properties>
        <maven.javadoc.skip>true</maven.javadoc.skip>
        <source.skip>true</source.skip>
    </properties>

    <dependencies>
        <dependency>
            <groupId>ch.amexio.nifi.transform</groupId>
            <artifactId>nifi-transform-service-api</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-standard-services-api-nar</artifactId>
            <type>nar</type>
        </dependency>
    </dependencies>
</project>

It was then failing.
What I did was change my parent pom and add the following in the
dependencies section:

<dependencies>
    <dependency>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-api</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-framework-api</artifactId>
    </dependency>
</dependencies>

By the way, I submit a Pull Request on nifi-maven
https://github.com/apache/nifi-maven/pull/13
With following change :
https://github.com/apache/nifi-maven/pull/13/files

Etienne



On Fri, Jun 19, 2020 at 13:52, Mike Thomsen  wrote:

> Without seeing your POM(s), it could be several things. Try posting your
> POMs here or as a GitHub gist.
>
> On Fri, Jun 19, 2020 at 3:36 AM Etienne Jouvin 
> wrote:
>
>> Hello all.
>>
>> Do not know where to post the message, guide me if I should send to
>> another mailing list.
>> A simple summary in first step.
>> I created a simple project to build a new service.
>> I extend the nifi-nar-bundles artifact with version 1.11.4.
>> My project version is currently 0.0.1-SNAPSHOT.
>>
>> During NAR generation, it failed for the documentation with message :
>> org.apache.maven.plugin.MojoExecutionException: Failed to create
>> Extension Documentation
>> Caused by: org.apache.maven.plugin.MojoExecutionException: Could not
>> resolve local dependency org.apache.nifi:nifi-api:jar:0.0.1-SNAPSHOT
>>
>> I am currently looking in source code of nifi-maven project, specially
>> class ExtensionClassLoaderFactory.
>>
>> What I do not understand is why it searches for version 0.0.1-SNAPSHOT on
>> nifi-api, and not the version 1.11.4
>>
>> Let me know if I should discuss about this in another thread.
>>
>> Regards
>>
>> Etienne
>>
>


Re: Custom service in NAR generation failure

2020-06-19 Thread Etienne Jouvin
Just for information, and for me to remember.

Found here :
https://github.com/apache/nifi-maven/blob/master/src/main/java/org/apache/nifi/extension/definition/extraction/ExtensionClassLoaderFactory.java


private String determineProvidedEntityVersion(final Set<Artifact>
> artifacts, final String groupId, final String artifactId) throws
> ProjectBuildingException, MojoExecutionException {
> getLog().debug("Determining provided entities for " + groupId +
> ":" + artifactId);
> for (final Artifact artifact : artifacts) {
> if (artifact.getGroupId().equals(groupId) &&
> artifact.getArtifactId().equals(artifactId)) {
> return artifact.getVersion();
> }
> }
> return findProvidedDependencyVersion(artifacts, groupId,
> artifactId);
> }
> private String findProvidedDependencyVersion(final Set<Artifact>
> artifacts, final String groupId, final String artifactId) {
> final ProjectBuildingRequest projectRequest = new
> DefaultProjectBuildingRequest();
> projectRequest.setRepositorySession(repoSession);
> projectRequest.setSystemProperties(System.getProperties());
> projectRequest.setLocalRepository(localRepo);
> for (final Artifact artifact : artifacts) {
> final Set<Artifact> artifactDependencies = new HashSet<>();
> try {
> final ProjectBuildingResult projectResult =
> projectBuilder.build(artifact, projectRequest);
> gatherArtifacts(projectResult.getProject(),
> artifactDependencies);
> getLog().debug("For Artifact " + artifact + ", found the
> following dependencies:");
> artifactDependencies.forEach(dep ->
> getLog().debug(dep.toString()));
> for (final Artifact dependency : artifactDependencies) {
> if (dependency.getGroupId().equals(groupId) &&
> dependency.getArtifactId().equals(artifactId)) {
> getLog().debug("Found version of " + groupId + ":"
> + artifactId + " to be " + artifact.getVersion());
> return artifact.getVersion();
> }
> }
> } catch (final Exception e) {
> getLog().warn("Unable to construct Maven Project for " +
> artifact + " when attempting to determine the expected version of NiFi
> API");
> getLog().debug("Unable to construct Maven Project for " +
> artifact + " when attempting to determine the expected version of NiFi
> API", e);
> }
> }
> return null;
> }


Should it not be return dependency.getVersion()?
Because artifact is the currently parsed artifact, and not the
dependency.
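
Concretely, the change I am proposing in the PR amounts to the following
(sketched from the method above):

for (final Artifact dependency : artifactDependencies) {
    if (dependency.getGroupId().equals(groupId)
            && dependency.getArtifactId().equals(artifactId)) {
        getLog().debug("Found version of " + groupId + ":" + artifactId
                + " to be " + dependency.getVersion());
        // was: return artifact.getVersion();
        return dependency.getVersion();
    }
}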



On Fri, Jun 19, 2020 at 09:35, Etienne Jouvin  wrote:

> Hello all.
>
> Do not know where to post the message, guide me if I should send to
> another mailing list.
> A simple summary in first step.
> I created a simple project to build a new service.
> I extend the nifi-nar-bundles artifact with version 1.11.4.
> My project version is currently 0.0.1-SNAPSHOT.
>
> During NAR generation, it failed for the documentation with message :
> org.apache.maven.plugin.MojoExecutionException: Failed to create Extension
> Documentation
> Caused by: org.apache.maven.plugin.MojoExecutionException: Could not
> resolve local dependency org.apache.nifi:nifi-api:jar:0.0.1-SNAPSHOT
>
> I am currently looking in source code of nifi-maven project, specially
> class ExtensionClassLoaderFactory.
>
> What I do not understand is why it searches for version 0.0.1-SNAPSHOT on
> nifi-api, and not the version 1.11.4
>
> Let me know if I should discuss about this in another thread.
>
> Regards
>
> Etienne
>


Custom service in NAR generation failure

2020-06-19 Thread Etienne Jouvin
Hello all.

Do not know where to post the message, guide me if I should send to another
mailing list.
A simple summary in first step.
I created a simple project to build a new service.
I extend the nifi-nar-bundles artifact with version 1.11.4.
My project version is currently 0.0.1-SNAPSHOT.

During NAR generation, it failed for the documentation with message :
org.apache.maven.plugin.MojoExecutionException: Failed to create Extension
Documentation
Caused by: org.apache.maven.plugin.MojoExecutionException: Could not
resolve local dependency org.apache.nifi:nifi-api:jar:0.0.1-SNAPSHOT

I am currently looking at the source code of the nifi-maven project,
specifically the class ExtensionClassLoaderFactory.

What I do not understand is why it searches for version 0.0.1-SNAPSHOT on
nifi-api, and not the version 1.11.4

Let me know if I should discuss about this in another thread.

Regards

Etienne


Re: Creating parameters from exported template?

2020-05-19 Thread Etienne Jouvin
Hmm, I did not see that, because I am still using the "old" (1.9.2) version
where parameters are not available.

But I understand it the same way you do.

On Tue, May 19, 2020 at 13:56, James  wrote:

> Hi
>
> Thanks for the response.
>
> According to this, it's not recommended:
> https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Variables
>
> "Variables and the nifi.variable.registry.properties property will be
> removed in a future release. As a result, it is highly recommended to
> switch to Parameters."
>
> Or am I thinking of the wrong thing?
>
> thanks
>
> On 2020/05/19 08:11:14, Etienne Jouvin  wrote:
> > Hi.
> >
> > In this case why don't you use variables ?
> >
> > Etienne
> >
> >
> > > On Tue, May 19, 2020 at 09:41, James  wrote:
> >
> > > Hi
> > >
> > > For the life of me, I can't find anything related to exporting
> templates
> > > to new systems/environments along with the parameters.
> > >
> > > I don't need the parameter values necessarily, but, importing a new
> > > template and then needing to create the parameters manually while
> setting
> > > them seems a little tedious. Is this only possible using a registry
> perhaps?
> > >
> > > Any guidance would be helpful.
> > >
> > > thanks
> > >
> >
>


Re: Creating parameters from exported template?

2020-05-19 Thread Etienne Jouvin
Hi.

In this case why don't you use variables ?

Etienne


On Tue, May 19, 2020 at 09:41, James  wrote:

> Hi
>
> For the life of me, I can't find anything related to exporting templates
> to new systems/environments along with the parameters.
>
> I don't need the parameter values necessarily, but, importing a new
> template and then needing to create the parameters manually while setting
> them seems a little tedious. Is this only possible using a registry perhaps?
>
> Any guidance would be helpful.
>
> thanks
>


Re: [Blog] Sending Multipart / Form Data with InvokeHTTP in Apache NiFi 1.12-SNAPSHOT

2020-04-29 Thread Etienne Jouvin
Nice.

Will help a lot

On Thu, Apr 30, 2020 at 05:46, Otto Fowler  wrote:

> As promised.
>
> Sending Multipart / Form Data with InvokeHTTP in Apache NiFi 1.12-SNAPSHOT
> 
>
>
>


Re: POST multipart/form-data with Invokehttp

2020-04-27 Thread Etienne Jouvin
Hello.

I did it with an ExecuteGroovyScript processor.

The script body is something like :

import org.apache.http.entity.mime.MultipartEntityBuilder
import org.apache.http.entity.ContentType

flowFileList = session.get(100)
if (!flowFileList.isEmpty()) {
  flowFileList.each { flowFile ->
    def multipart
    // read the current FlowFile content (the JSON part) before overwriting it
    String text = flowFile.read().getText("UTF-8")

    flowFile.write { streamIn, streamOut ->
      multipart = MultipartEntityBuilder.create()
          // specify multipart entries here
          .addTextBody("object", text, ContentType.APPLICATION_JSON)
          .addBinaryBody("content",
              new File(flowFile.'document.content.path'),
              ContentType.create(flowFile.'document.mime.type'),
              flowFile.'document.name')
          .build()
      multipart.writeTo(streamOut)
    }
    // set the `document.content.type` attribute to be used as
    // `Content-Type` in InvokeHTTP
    flowFile.'document.content.type' = multipart.getContentType().getValue()
    session.transfer(flowFile, REL_SUCCESS)
  }
}


Attributes are:

   - document.content.path : path to the binary content
   - document.mime.type : mime type of the binary content
   - document.name : name of the binary content


Output attribute:
document.content.type : the multipart content type, to be used as the
Content-Type header in InvokeHTTP.

You need some extra libraries:

   - httpcore-4.4.12.jar
   - httpmime-4.5.10.jar


This will build the multipart as the FlowFile content, and you can use it for
the InvokeHTTP call afterwards.


Etienne


On Mon, Apr 27, 2020 at 19:21, Luis Carmona  wrote:

> Hi everyone,
>
> Hoping everybody is doing ok, wherever you are, need some help please.
>
> Has anyone sent a file and parameters to a REST endpoint using
> InvokeHTTP with multipart/form-data as the mime-type?
>
> I can't figure out how to include the -F , speaking in terms
> of curl syntax.
>
> I really need this done through NiFi, so any help will be highly
> appreciated.
>
> Thanks in advance.
>
> LC
>
>


Re: Adding Nested Properties/JSON

2020-03-31 Thread Etienne Jouvin
With Jolt transformation, you can do something like :


Input
{
  "name": "this and that",
  "field": "value"
}

Transformation 1
[
   {
  "operation":"modify-overwrite-beta",
  "spec":{
 "others":{
"[0]":[
   {
  "name":"here and there"
   }
]
 }
  }
   }
]

Transformation 2
[
   {
  "operation":"modify-overwrite-beta",
  "spec":{
 "others":{
"[1]":[
   {
  "name":"one and two"
   }
]
 }
  }
   }
]



Or in one raw :
[
   {
  "operation":"modify-overwrite-beta",
  "spec":{
 "others":{
"[0]":[
   {
  "name":"here and there"
   }
],
"[1]":[
   {
  "name":"one and two"
   }
]
 }
  }
   }
]

But with this, the difficult part is to send the processor the value [0],
[1] 
You have to put a variable with the complete value and put it in the JOLT
specification like
  [
   {
  "operation":"modify-overwrite-beta",
  "spec":{
 "others":{
"${myVar}":[
   {
  "name":"here and there"
   }
]
 }
  }
   }
]
Where myVar contains [0]

But maybe the value to add is not a constant.
That is another issue, but it is possible (I do it a lot of the time).




On Tue, Mar 31, 2020 at 15:40, Darren Govoni  wrote:

> Sure. Thank you.
>
> Processor #1 creates this JSON
>
> {
>"name":"this and that",
>"field":"value"
> }
>
> passes to Processor #2 which adds a record to a sub-field
>
> {
>"name":"this and that",
>"field":"value",
>"others": [
>   {"name":"here and there"}
> ]
> }
>
> passes to Processor #3 which also adds a record to "others".
>
> {
>"name":"this and that",
>"field":"value",
>"others": [
>   {"name":"here and there"},
>   {"name":"one and two"},
> ]
> }
>
> which is the final output. So it's more building a JSON than transforming,
> sorta.
> --
> *From:* Etienne Jouvin 
> *Sent:* Tuesday, March 31, 2020 9:37 AM
> *To:* users@nifi.apache.org 
> *Subject:* Re: Adding Nested Properties/JSON
>
> Can you post example of input and expected result.
>
> For adding, you can use default or modify-overwrite-beta
>
>
>
On Tue, Mar 31, 2020 at 15:30, Darren Govoni  wrote:
>
> Hi. Thank you.
>
> In looking at the Jolt docs these are the operations:
>
> shift, sort, cardinality, modify-default-beta, modify-overwrite-beta,
> modify-define-beta, or remove
>
> I primarily need "add" such that I can add nested elements or add elements
> to an array already in the JSON.
>
> Can a single Jolt processor do this? Or do I need to merge two inputs to
> join them into a single JSON?
>
> thanks in advance!
> Darren
>
>
> --
> *From:* Etienne Jouvin 
> *Sent:* Tuesday, March 31, 2020 8:52 AM
> *To:* users@nifi.apache.org 
> *Subject:* Re: Adding Nested Properties/JSON
>
> Hello.
>
> Jolt transformation.
>
> Etienne
>
> Le mar. 31 mars 2020 à 14:40, Darren Govoni  a
> écrit :
>
> Hi,
>I want to use Nifi to design a flow that modifies, updates, etc a
> nested JSON document (or that can finally output one at the end).
>
> For example:
>
> {
>"name":"this and that",
>"field":"value",
>"others": [
>{"name":"here and there"},
>...
>]
> }
>
> What's the best approach to this using Nifi?
>
> Thanks in advance!
> Darren
>
>
>
>


Re: Adding Nested Properties/JSON

2020-03-31 Thread Etienne Jouvin
Can you post example of input and expected result.

For adding, you can use default or modify-overwrite-beta


On Tue, Mar 31, 2020 at 15:30, Darren Govoni  wrote:

> Hi. Thank you.
>
> In looking at the Jolt docs these are the operations:
>
> shift, sort, cardinality, modify-default-beta, modify-overwrite-beta,
> modify-define-beta, or remove
>
> I primarily need "add" such that I can add nested elements or add elements
> to an array already in the JSON.
>
> Can a single Jolt processor do this? Or do I need to merge two inputs to
> join them into a single JSON?
>
> thanks in advance!
> Darren
>
>
> --
> *From:* Etienne Jouvin 
> *Sent:* Tuesday, March 31, 2020 8:52 AM
> *To:* users@nifi.apache.org 
> *Subject:* Re: Adding Nested Properties/JSON
>
> Hello.
>
> Jolt transformation.
>
> Etienne
>
> On Tue, Mar 31, 2020 at 14:40, Darren Govoni  wrote:
>
> Hi,
>I want to use Nifi to design a flow that modifies, updates, etc a
> nested JSON document (or that can finally output one at the end).
>
> For example:
>
> {
>"name":"this and that",
>"field":"value",
>"others": [
>{"name":"here and there"},
>...
>]
> }
>
> What's the best approach to this using Nifi?
>
> Thanks in advance!
> Darren
>
>


Re: Adding Nested Properties/JSON

2020-03-31 Thread Etienne Jouvin
Hello.

Jolt transformation.

Etienne

On Tue, Mar 31, 2020 at 14:40, Darren Govoni  wrote:

> Hi,
>I want to use Nifi to design a flow that modifies, updates, etc a
> nested JSON document (or that can finally output one at the end).
>
> For example:
>
> {
>"name":"this and that",
>"field":"value",
>"others": [
>{"name":"here and there"},
>...
>]
> }
>
> What's the best approach to this using Nifi?
>
> Thanks in advance!
> Darren
>


Re: Apache NiFi 1.9.2 InferAvroSchema on csv file header with :

2020-03-11 Thread Etienne Jouvin
Ok thanks.

Finally, in my case this is not an issue,
because the CSV reader does the job.



On Wed, Mar 11, 2020 at 10:48, Edward Armes  wrote:

> Hi Jouvin,
>
> I believe you are correct that the inferAvroSchema and the convert record
> processor do work differently. I believe this is because the
> inferAvroSchema uses Apache Kite and the convert record derives the schema
> from the record reader itself.
>
> As an aside, I have also noticed that when you use a ValidateRecord with
> different types of reader and writer record handlers (i.e. JSON in, Avro
> out), you get different results; while I'm not surprised by this, I think
> it's worth flagging up for future reference.
>
> Edward
>
> On Wed, 11 Mar 2020, 09:35 Etienne Jouvin, 
> wrote:
>
>> Hello all.
>>
>> Just in case someone "can test".
>>
>> I have NiFi 1.9.2 and need to convert CSV to JSON. I do not planned to
>> upgrade for now (because of deployment procedure)
>> In the CSV, I have a column with value like prop:Name
>>
>> i set true for the property Get CSV Header Definition From Data
>>
>> The processor failed because of the name.
>>
>> But if I use a convertRecord with a CSV Reader, that infer schema, and a
>> JSON writer, this is working fine.
>>
>> Not the same algorithm to get infer schema from InferAvroSchema and the
>> reader ?
>>
>> Regards
>>
>> Etienne Jouvin
>>
>>
>>
>>
>


Apache NiFi 1.9.2 InferAvroSchema on csv file header with :

2020-03-11 Thread Etienne Jouvin
Hello all.

Just in case someone "can test".

I have NiFi 1.9.2 and need to convert CSV to JSON. I do not plan to
upgrade for now (because of the deployment procedure).
In the CSV, I have a column with value like prop:Name

I set true for the property Get CSV Header Definition From Data.

The processor failed because of the name.

But if I use a ConvertRecord with a CSV reader that infers the schema, and a
JSON writer, it works fine.

Is the schema inference algorithm not the same between InferAvroSchema and the
reader?

Regards

Etienne Jouvin




Re: On ExecuteSQL (1.9.2) failure, infinite loop

2020-02-10 Thread Etienne Jouvin
Ok.
That's not my use case, so I put the penalty to 0.

Understood, thanks a lot

On Mon, Feb 10, 2020 at 17:29, Pierre Villard  wrote:

> Etienne,
>
> The penalty duration is particularly useful when you have the relationship
> going back to the ExecuteSQL processor (self-loop). In that case, you don't
> want to constantly hit the database and give some time before trying again.
>
> HTH,
> Pierre
>
> On Mon, Feb 10, 2020 at 06:03, Etienne Jouvin  wrote:
>
>> Put 0 seconds to penalized and it goes to the logger processor without
>> waiting.
>>
>> That's fine.
>>
>> I just do not understand weel why there is a penalty or yield failure,
>> but now that I know this, this is ok.
>>
>>
>> On Mon, Feb 10, 2020 at 14:58, Etienne Jouvin  wrote:
>>
>>> Mark,
>>>
>>> Hum fine, I was looking the source code and touch this point ;)
>>>
>>> Thanks a lot.
>>> I am going to play with that.
>>>
>>> Etienne
>>>
>>>
>>> On Mon, Feb 10, 2020 at 14:55, Mark Payne  wrote:
>>>
>>>> Etienne,
>>>>
>>>> When a FlowFile fails, ExecuteSQL penalizes the FlowFile. This allows
>>>> you to loop failures without constantly hitting the database. By default,
>>>> the FlowFile will be penalized for 60 seconds. See [1] for more information
>>>> on how penalization works and how to configure the penalty duration.
>>>>
>>>> Thanks
>>>> -Mark
>>>>
>>>> [1]
>>>> http://nifi.apache.org/docs/nifi-docs/html/user-guide.html#settings-tab
>>>>
>>>> > On Feb 10, 2020, at 8:47 AM, Etienne Jouvin 
>>>> wrote:
>>>> >
>>>> > Hello All.
>>>> >
>>>> > Here is an extract of my process
>>>> > 
>>>> >
>>>> > If the executeSQL failed (invalid SQL for example), the flowfile goes
>>>> to the failure relation.
>>>> > At very first, I call the update attribute and I saw that the
>>>> flowfile is kept into the relation, never proceeded.
>>>> > So I try to put intermediate processor, LogMessage, but this is the
>>>> same.
>>>> >
>>>> > Notice that I do not have this in the success relation.
>>>> >
>>>> > Does someone have this also ?
>>>> >
>>>> > Regards
>>>> >
>>>> > Etienne Jouvin
>>>> >
>>>>
>>>>


Re: On ExecuteSQL (1.9.2) failure, infinite loop

2020-02-10 Thread Etienne Jouvin
I put 0 seconds as the penalty and it goes to the logger processor without
waiting.

That's fine.

I just did not understand well why there is a penalty or yield on failure, but
now that I know this, it is OK.


On Mon, Feb 10, 2020 at 14:58, Etienne Jouvin  wrote:

> Mark,
>
> Hum fine, I was looking the source code and touch this point ;)
>
> Thanks a lot.
> I am going to play with that.
>
> Etienne
>
>
> On Mon, Feb 10, 2020 at 14:55, Mark Payne  wrote:
>
>> Etienne,
>>
>> When a FlowFile fails, ExecuteSQL penalizes the FlowFile. This allows you
>> to loop failures without constantly hitting the database. By default, the
>> FlowFile will be penalized for 60 seconds. See [1] for more information on
>> how penalization works and how to configure the penalty duration.
>>
>> Thanks
>> -Mark
>>
>> [1]
>> http://nifi.apache.org/docs/nifi-docs/html/user-guide.html#settings-tab
>>
>> > On Feb 10, 2020, at 8:47 AM, Etienne Jouvin 
>> wrote:
>> >
>> > Hello All.
>> >
>> > Here is an extract of my process
>> > 
>> >
>> > If the executeSQL failed (invalid SQL for example), the flowfile goes
>> to the failure relation.
>> > At very first, I call the update attribute and I saw that the flowfile
>> is kept into the relation, never proceeded.
>> > So I try to put intermediate processor, LogMessage, but this is the
>> same.
>> >
>> > Notice that I do not have this in the success relation.
>> >
>> > Does someone have this also ?
>> >
>> > Regards
>> >
>> > Etienne Jouvin
>> >
>>
>>


Re: On ExecuteSQL (1.9.2) failure, infinite loop

2020-02-10 Thread Etienne Jouvin
Mark,

Hmm, fine; I was looking at the source code and had touched on this point ;)

Thanks a lot.
I am going to play with that.

Etienne


On Mon, Feb 10, 2020 at 14:55, Mark Payne  wrote:

> Etienne,
>
> When a FlowFile fails, ExecuteSQL penalizes the FlowFile. This allows you
> to loop failures without constantly hitting the database. By default, the
> FlowFile will be penalized for 60 seconds. See [1] for more information on
> how penalization works and how to configure the penalty duration.
>
> Thanks
> -Mark
>
> [1]
> http://nifi.apache.org/docs/nifi-docs/html/user-guide.html#settings-tab
>
> > On Feb 10, 2020, at 8:47 AM, Etienne Jouvin 
> wrote:
> >
> > Hello All.
> >
> > Here is an extract of my process
> > 
> >
> > If the executeSQL failed (invalid SQL for example), the flowfile goes to
> the failure relation.
> > At very first, I call the update attribute and I saw that the flowfile
> is kept into the relation, never proceeded.
> > So I try to put intermediate processor, LogMessage, but this is the same.
> >
> > Notice that I do not have this in the success relation.
> >
> > Does someone have this also ?
> >
> > Regards
> >
> > Etienne Jouvin
> >
>
>


On ExecuteSQL (1.9.2) failure, infinite loop

2020-02-10 Thread Etienne Jouvin
Hello All.

Here is an extract of my process
[image: image.png]

If the ExecuteSQL fails (invalid SQL, for example), the flowfile goes to
the failure relation.
At very first I call UpdateAttribute, and I saw that the flowfile is
kept in the relation, never processed.
So I tried to put an intermediate processor, LogMessage, but it is the same.

Notice that I do not have this on the success relation.

Does someone have this also ?

Regards

Etienne Jouvin


Re: InvokeHTTP & AttributesToJSON Help

2020-02-01 Thread Etienne Jouvin
Hi.

If I understand well, you can configure InvokeHTTP to always output the
response.
In this case, the response will be routed to the response relation in any
case.
Then you can do a RouteOnAttribute to check the response status: if the HTTP
code (available in the invokehttp.status.code attribute) is 500, go to a
specific relation; if 200, to another; and so on.
But be careful: the retry and error relations are still active, so you can
auto-terminate them and just work from the response relation.

Etienne


On Sat, Feb 1, 2020 at 15:26, Darren Govoni  wrote:

> Hi,
>   I have probably 2 easy problems I can't seem to solve (still new).
>
>
>1. I want to route a status 500 to Failure. Not retry. The response
>contains a JSON message.
>2. Currently, I am routing the InvokeHTTP Retry with code 500 to
>AttributesToJSON to pull the response JSON from "invokehttp.response.body"
>and put it as the flow file. However, it does not work the way I expect.
>   1. I want the response body to become the flow file. Instead I get
>  1. { "invokehttp.response.body": "my json encoded json response"
>  }
>  2. I do not want the outer "invokehttp.response.body" field
>   3. I then tried to unwrap this using SplitJSON, but I cannot seems
>to use this JSON path
>   1. $.invokehttp.response.body - Because the dot notation used by
>   Nifi has different semantics to JSONPath.
>
> Any easy fixes to these conundrums?
>
> thank you!
> D
>


Re: How to preserve the previous flowfile content using InvokeHttp

2020-01-27 Thread Etienne Jouvin
With Notify, you can specify attributes to copy from the Notify to the Wait.
At first, that is what I used. But you have some "latency" between the two
processors, because of threading, so I prefer to use the distributed cache. But
be careful also: the distributed cache can be accessed from other points...

On Mon, Jan 27, 2020 at 20:45, Contacto Control Cobros <contactocontrolcob...@gmail.com> wrote:

> Thanks Etienne for your response,
>
> I tried both approaches. The second goes better with what I need, because
> I require the sessionID and with the first approach I didn't find how to
> share the sessionID after the Notify.
>
> I just hope that storing the content in the distributed cache is better
> than having it as an attribute.
>
>
> On Mon, Jan 27, 2020 at 11:14, Etienne Jouvin (<lapinoujou...@gmail.com>) wrote:
>
>> Hello.
>>
>> I manage to do this in two ways :
>> 1. Use Wait / Notify process. Put the original content in the wait, and
>> after your post is finished call the Notify to free the original one.
>> 2. Or use a distribute cache map. Store the original content in the
>> cache, do your stuff and then request the original content from the cache.
>>
>> Hope it helps
>>
>>
>> Le lun. 27 janv. 2020 à 17:12, Contacto Control Cobros <
>> contactocontrolcob...@gmail.com> a écrit :
>>
>>> Hello community
>>>
>>> I need to read a JSON file that is obtained as an attachment to an email
>>> with a specific subject, so far there is no problem. Then I need to make a
>>> POST to get a sessionID, with username and password that I have as a
>>> parameter.
>>>
>>> Finally I need to send the JSON content using another POST request,
>>> sending as header the sessionID that I obtained.
>>>
>>> My problem is how to make an InvokeHTTP (POST), without losing the
>>> content I read in the first step. Nifi always requires that the body for
>>> the POST be the Flowfile content. I would not like to load the JSON as an
>>> attribute, it can be very large.
>>>
>>> I appreciate your help.
>>>
>>


Re: How to preserve the previous flowfile content using InvokeHttp

2020-01-27 Thread Etienne Jouvin
Hello.

I managed to do this in two ways:
1. Use the Wait/Notify processors. Park the original content at the Wait, and
after your POST is finished, call the Notify to free the original one.
2. Or use a distributed map cache. Store the original content in the cache,
do your stuff, and then fetch the original content back from the cache (a
sketch is below).
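For the second approach, a minimal sketch of the round trip (the cache.key
attribute name is made up; the properties are from the standard
PutDistributedMapCache / FetchDistributedMapCache processors):

    UpdateAttribute          : cache.key = ${UUID()}
    PutDistributedMapCache   : Cache Entry Identifier = ${cache.key}
    InvokeHTTP (your POSTs)  : the response flowfile inherits cache.key
    FetchDistributedMapCache : Cache Entry Identifier = ${cache.key}

The fetch restores the original JSON as the flowfile content after the calls.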

Hope it helps


On Mon, Jan 27, 2020 at 5:12 PM, Contacto Control Cobros <
contactocontrolcob...@gmail.com> wrote:

> Hello community
>
> I need to read a JSON file that is obtained as an attachment to an email
> with a specific subject, so far there is no problem. Then I need to make a
> POST to get a sessionID, with username and password that I have as a
> parameter.
>
> Finally I need to send the JSON content using another POST request,
> sending as header the sessionID that I obtained.
>
> My problem is how to make an InvokeHTTP (POST), without losing the content
> I read in the first step. Nifi always requires that the body for the POST
> be the Flowfile content. I would not like to load the JSON as an attribute,
> it can be very large.
>
> I appreciate your help.
>


Re: SSLContextService configuration

2020-01-05 Thread Etienne Jouvin
Hello.

Thanks for the point of view.

Nice for parameters; I did not see this new feature. It looks pretty cool
and should solve my purpose (a sketch is below). Unfortunately, I did not
expect to have to make an upgrade for my clients, but that is another "problem".
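For reference, a minimal sketch of a parameterized SSLContextService (the
parameter names are hypothetical; #{...} is how NiFi references a parameter
in a property value):

    Keystore Password   : #{ssl.keystore.password}
    Truststore Password : #{ssl.truststore.password}

Each environment then defines a parameter context once with its real values.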

For NiFi Registry, this is what I do when possible.
In the current case, this is not possible, because the development cycle is
the following:
* Develop on my laptop.
* Validate the process on my instance.
* Then go to the client and give them the process. The client does the
installation; I do not have the right to install it myself.
* Moreover, all three instances are separated. The dev instance cannot
communicate with integration and production. We could not have a registry
accessible by those three, nor from my laptop.

For versioning, I understand. The 1.9.3 was just "a hope", but I understand
it would be a bug-fix-only release, if there is one. I am OK with that.

And finally you give the "answer": you do not anticipate such a feature. So
no need to spend time on such things, because you are not in favor of it.
That's OK, and if I really need it, I can always write my own service; that's
the power of open source ;)

Thanks for all.

Etienne

On Sat, Jan 4, 2020 at 12:05 AM, Andy LoPresto  wrote:

> I am not sure I follow all of the issues you are describing, but I will
> try to address them as I understand them.
>
> In the 1.10.0 release, parameters [1] were introduced, which allow any
> component property to use a reference to externally-defined values (this
> does not rely on Expression Language, which must be explicitly enabled for
> each property, and allows for sensitive parameter values so it can be used
> for passwords). You should be able to define a single controller service
> with the proper parameters specified for the passwords, and then each
> environment will have a parameter context defined once which contains the
> appropriate passwords. Any time a new flow is loaded or updated that
> references the controller service, no changes will be required.
>
> We do not recommend using templates for flow deployment as NiFi Registry
> [2] is available and is a more robust solution.
>
> There is currently no plan for a 1.9.3 release, and even if it came to
> fruition, it would not include this behavior as it would be a bug fix only,
> not a feature-bearing release. We use semantic versioning [3], so the next
> release which would contain new features is 1.11.0.
>
> I do not anticipate adding a feature for a “proxy” controller service
> which wraps another controller service in this manner because I don’t see
> this addressing the problem you have. I believe there was a fix [4] in the
> most recent version of NiFi Registry which addresses a similar issue.
>
> [1] https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters
> [2] https://nifi.apache.org/docs/nifi-registry-docs/index.html
> [3] https://semver.org
> [4] https://issues.apache.org/jira/browse/NIFIREG-288
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Jan 3, 2020, at 10:50 AM, Etienne Jouvin 
> wrote:
>
> Hello all.
>
> These last days, I spent some time deploying a process to my client, using
> a Template.
> In the template, I have many InvokeHTTP processors, and some services
> related to the SSLContextService.
>
> My client has 3 environments, and for two of them I cannot configure
> the SSLContextService, because I am not allowed to know the passwords for the
> keystore and truststore.
>
> So we decided to set up an SSLContextService at "root" level in NiFi once and
> for all.
> Each time I deploy the Template, a new service is deployed (this was a
> previous question by someone here).
> I just have to delete the service created during the template import, and
> then modify all the processors.
>
> I was thinking of something that may help me, and I ask here whether you think
> it could be nice to have in a future release (ideally in 1.9.3, if planned).
> We could have a sort of "proxy" for SSLContextService. The only property
> would be an instance of SSLContextService.
> Each call on the proxy would be delegated to the "wrapped" instance.
>
> Like this, during deployment, I would just have to update the instance set on
> the "proxy".
> For other usages, we would be able to switch easily between SSL contexts.
>
> The problem could be an implementation that produces circular
> references. But it is not our fault if users do stupid things.
>
>
> What do you think about that ?
>
>
> Regards
>
> Etienne Jouvin
>
>
>


SSLContextService configuration

2020-01-03 Thread Etienne Jouvin
Hello all.

These last days, I spent some time deploying a process to my client, using a
Template.
In the template, I have many InvokeHTTP processors, and some services
related to the SSLContextService.

My client has 3 environments, and for two of them I cannot configure the
SSLContextService, because I am not allowed to know the passwords for the
keystore and truststore.

So we decided to set up an SSLContextService at "root" level in NiFi once and
for all.
Each time I deploy the Template, a new service is deployed (this was a
previous question by someone here).
I just have to delete the service created during the template import, and
then modify all the processors.

I was thinking of something that may help me, and I ask here whether you think
it could be nice to have in a future release (ideally in 1.9.3, if planned).
We could have a sort of "proxy" for SSLContextService. The only property
would be an instance of SSLContextService.
Each call on the proxy would be delegated to the "wrapped" instance.

Like this, during deployment, I would just have to update the instance set on
the "proxy".
For other usages, we would be able to switch easily between SSL contexts.

The problem could be an implementation that produces circular
references. But it is not our fault if users do stupid things.


What do you think about that?


Regards

Etienne Jouvin


Re: Question about data provenance menu in processors

2019-12-18 Thread Etienne Jouvin
When you request data provenance from the processor, you should have
the component ID in the "Component ID" filter.

On Wed, Dec 18, 2019 at 2:40 PM, Dieter Scholz  wrote:

> Hello,
>
> the search filter is empty. Is that the problem? What should be the
> default value if any?
>
> Thanks.
>
> Dieter
>
> On Dec 18, 2019 at 1:30 PM, Etienne Jouvin wrote:
>
> Ok sorry.
>
> I've never seen that. At least, the list is sometimes empty and I have to go
> into the filters to refresh.
>
> Did you check the search filter after displaying the data provenance for a
> processor?
>
>
> On Wed, Dec 18, 2019 at 1:21 PM, Dieter Scholz  wrote:
>
>
> Hello,
>
> exactly this is my problem. When I right click on a processor I see the
> provenance entries of all processors instead of only these from the
> processor I right click on.
>
> A few days ago it worked like you wrote it in your answer. But suddenly
> the behaviour changed.
>
> Thanks.
>
> Dieter
>
> On Dec 18, 2019 at 11:36 AM, Etienne Jouvin wrote:
>
> Hello.
>
> You have this when you right-click on the processor and then call the data
> provenance.
>
> Data provenance from the main menu is for the instance globally.
>
> Etienne
>
>
> On Wed, Dec 18, 2019 at 11:21 AM, Dieter Scholz  wrote:
>
>
> Hello,
>
> in the past, when I opened the data provenance menu of a processor, only the
> provenance entries from that processor were listed. Now when I use
> this menu, the data provenance entries of all processors are listed.
>
> I made no update and I'm not aware of a configuration change I did.
>
> How can I restore the old behaviour? Where did I change something by
> mistake.
>
> Thanks for your help.
>
> Dieter
>
>
>
>
>
>
>
>
>


Re: Question about data provenance menu in processors

2019-12-18 Thread Etienne Jouvin
Ok sorry.

I've never seen that. At least, the list is sometimes empty and I have to go
into the filters to refresh.

Did you check the search filter after displaying the data provenance for a
processor?


On Wed, Dec 18, 2019 at 1:21 PM, Dieter Scholz  wrote:

> Hello,
>
> exactly this is my problem. When I right click on a processor I see the
> provenance entries of all processors instead of only these from the
> processor I right click on.
>
> A few days ago it worked like you wrote it in your answer. But suddenly
> the behaviour changed.
>
> Thanks.
>
> Dieter
>
> On Dec 18, 2019 at 11:36 AM, Etienne Jouvin wrote:
>
> Hello.
>
> You have this when you right-click on the processor and then call the data
> provenance.
>
> Data provenance from the main menu is for the instance globally.
>
> Etienne
>
>
> On Wed, Dec 18, 2019 at 11:21 AM, Dieter Scholz  wrote:
>
>
> Hello,
>
> in the past, when I opened the data provenance menu of a processor, only the
> provenance entries from that processor were listed. Now when I use
> this menu, the data provenance entries of all processors are listed.
>
> I made no update and I'm not aware of a configuration change I did.
>
> How can I restore the old behaviour? Where did I change something by
> mistake.
>
> Thanks for your help.
>
> Dieter
>
>
>
>
>
>


Re: Question about data provenance menu in processors

2019-12-18 Thread Etienne Jouvin
Hello.

You have this when you right-click on the processor and then call the data
provenance.

Data provenance from the main menu is for the instance globally.

Etienne


On Wed, Dec 18, 2019 at 11:21 AM, Dieter Scholz  wrote:

> Hello,
>
> in the past, when I opened the data provenance menu of a processor, only the
> provenance entries from that processor were listed. Now when I use
> this menu, the data provenance entries of all processors are listed.
>
> I made no update and I'm not aware of a configuration change I did.
>
> How can I restore the old behaviour? Where did I change something by
> mistake.
>
> Thanks for your help.
>
> Dieter
>
>
>


Re: XSL transformation date formating

2019-12-13 Thread Etienne Jouvin
So for information.

This is not "possible".

See this document : http://www.saxonica.com/products/PD9.9/HE.pdf
Chapter 8

> Run-time localization support for formatting of dates and numbers, and
> sorting and comparison of strings, building on the capabilities of the Java
> Virtual Machine. For more details see: Unicode collation, Localizing
> numbers and dates.


And regarding the PE edition : http://www.saxonica.com/products/PD9.9/PE.pdf
Chapter 15

> Run-time localization support for formatting of dates and numbers, and
> sorting and comparison of strings, building on the capabilities of the
> ICU-J library. The Advanced level also includes APIs which allow additional
> languages to be supported.



So it is clear: with the HE version of Saxon, there is no way to have
localization functions for dates in the XSL.

I need to check another way to do it now; one possible workaround is sketched
below.
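One option that stays on Saxon-HE is to format the date pieces yourself (a
sketch under the assumption that an XSLT 2.0 sequence lookup is acceptable;
the variable names are made up):

    <xsl:variable name="frMonths"
        select="('janv.','févr.','mars','avr.','mai','juin',
                 'juil.','août','sept.','oct.','nov.','déc.')"/>
    <!-- e.g. "04 janv. 2019", without relying on locale data -->
    <xsl:value-of select="concat(
        format-dateTime($value, '[D01] '),
        $frMonths[month-from-dateTime($value)],
        format-dateTime($value, ' [Y0001]'))"/>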

Etienne Jouvin


On Fri, Dec 13, 2019 at 1:21 PM, Etienne Jouvin  wrote:

> Hello all
>
> Currently working with XSL transformation.
> With this, we can use a function format-dateTime like this :
> format-dateTime(value, $dateFormatPattern, 'fr', (), ())
>
> I tried this with some online tools and everything works like a charm.
>
> Here, I try to force the locale to get French labels.
> But in the result, I have something like: [Language: en]04 Jan. 2019
>
> I found that Saxon is used to do the transformation.
> And some information in the documentation tells us that when the locale is
> not found, the default English locale is used.
> And it requires ICU4J to be loaded to achieve the formatting.
>
> For now, I just tried to copy the ICU4J library, after downloading it, into
> the lib folder. But nothing got better.
>
> I will try to put it in the nar file and see what happens.
>
> Does someone have the same requirements ?
>
> Regards
>
> Etienne Jouvin
>
>


XSL transformation date formating

2019-12-13 Thread Etienne Jouvin
Hello all

Currently working with XSL transformation.
With this, we can use a function format-dateTime like this :
format-dateTime(value, $dateFormatPattern, 'fr', (), ())

I tried this with some online tools and everything works like a charm.

Here, I try to force the locale to get French labels.
But in the result, I have something like: [Language: en]04 Jan. 2019

I found that Saxon is used to do the transformation.
And some information in the documentation tells us that when the locale is not
found, the default English locale is used.
And it requires ICU4J to be loaded to achieve the formatting.

For now, I just tried to copy the ICU4J library, after downloading it, into the
lib folder. But nothing got better.

I will try to put it in the nar file and see what happens.

Does someone have the same requirements ?

Regards

Etienne Jouvin


Re: promote NiFi flows from DEV to PROD and controllers

2019-12-06 Thread Etienne Jouvin
Hello.

Why don't you use NiFi Registry?
I discovered it a couple of weeks ago, and it is really cool.

On Fri, Dec 6, 2019 at 7:14 PM, Boris Tyukin  wrote:

> Hi,
>
> We have a single NiFi Registry and DEV/PROD NiFi clusters. When we deploy
> changes to PROD NiFi (after initial commit), we often have to repoint
> controllers, especially on custom groovy processors as NiFi would not
> recognize them by name.
>
> It does not happen with standard processors/controllers.
>
> Is there a trick to make custom processors behave the same way?
>
> thanks!
>


Re: Split a flow file to multiples, related by common key and with split count

2019-12-05 Thread Etienne Jouvin
On "official site" ;)

https://jolt-demo.appspot.com/#inception

On Thu, Dec 5, 2019 at 3:01 PM, James McMahon  wrote:

> Absolutely. I am going to do that. When you started working with it, were
> there any particularly helpful examples of its application you used to
> learn it that you recommend?
>
> On Thu, Dec 5, 2019 at 8:57 AM Etienne Jouvin 
> wrote:
>
>> Hello.
>>
>> You are right. If it works and you are satisfied, you should keep your
>> solution.
>> By the way, Jolt transformations may be difficult at the very beginning. But
>> they are very powerful, and with some practice it begins to be easy.
>>
>> For study, you may give it a try.
>>
>> Regards.
>>
>> Etienne Jouvin
>>
>> On Thu, Dec 5, 2019 at 2:40 PM, James McMahon  wrote:
>>
>>> Hello Etienne. Yes, Matt may have mentioned that approach and I started
>>> to look into it.
>>>
>>> My initial thought was this: is it much of a savings? My rudimentary
>>> process works in three process steps - each simple in configuration. The
>>> JoltTransformationJSON would eliminate only one processor, and it looks
>>> fairly complex to configure. It appears to require a Custom Transformation
>>> Class Name, a Custom Module Directory, and a Jolt Specification. For folks
>>> who have done it before those may be an afterthought. But as is often the
>>> case with NiFi, if you've never used a processor sometimes it is hard to
>>> find concrete examples to configure NiFi processors, services, schemas, etc
>>> etc. I opted to take the more familiar path, not being familiar with the
>>> Jolt transformation processor.
>>>
>>> Am happy to learn and will see if there's much out there in the way of
>>> examples to configure JoltTransformJSON. For now I'll use my less
>>> elegant solution that works and gets me where I need to be: pumping data
>>> through my production system.
>>>
>>> Good suggestion. Thanks again.
>>>
>>> On Thu, Dec 5, 2019 at 8:20 AM Etienne Jouvin 
>>> wrote:
>>>
>>>> Hello.
>>>>
>>>> Why don't you use a JoltTransformJSON processor first to produce
>>>> multiple elements in the JSON, according to the values in the array,
>>>> duplicating the common attributes for all of them?
>>>> And then you do the split.
>>>>
>>>> Etienne
>>>>
>>>>
>>>>
>>>>


Re: Split a flow file to multiples, related by common key and with split count

2019-12-05 Thread Etienne Jouvin
Hello.

You are right. If it works and you are satisfied, you should keep your
solution.
By the way, Jolt transformations may be difficult at the very beginning. But
they are very powerful, and with some practice it begins to be easy.

For study, you may give it a try.

Regards.

Etienne Jouvin

On Thu, Dec 5, 2019 at 2:40 PM, James McMahon  wrote:

> Hello Etienne. Yes, Matt may have mentioned that approach and I started to
> look into it.
>
> My initial thought was this: is it much of a savings? My rudimentary
> process works in three process steps - each simple in configuration. The
> JoltTransformJSON would eliminate only one processor, and it looks
> fairly complex to configure. It appears to require a Custom Transformation
> Class Name, a Custom Module Directory, and a Jolt Specification. For folks
> who have done it before those may be an afterthought. But as is often the
> case with NiFi, if you've never used a processor sometimes it is hard to
> find concrete examples to configure NiFi processors, services, schemas, etc
> etc. I opted to take the more familiar path, not being familiar with the
> Jolt transformation processor.
>
> Am happy to learn and will see if there's much out there in the way of
> examples to configure JoltTransformJSON. For now I'll use my less
> elegant solution that works and gets me where I need to be: pumping data
> through my production system.
>
> Good suggestion. Thanks again.
>
> On Thu, Dec 5, 2019 at 8:20 AM Etienne Jouvin 
> wrote:
>
>> Hello.
>>
>> Why don't you use a JoltTransformJSON processor first to produce multiple
>> elements in the JSON, according to the values in the array, duplicating the
>> common attributes for all of them?
>> And then you do the split.
>>
>> Etienne
>>
>>
>>
>>


Re: Split a flow file to multiples, related by common key and with split count

2019-12-05 Thread Etienne Jouvin
Hello.

Why don't you use a JoltTransformJSON processor first to produce multiple
elements in the JSON, according to the values in the array, duplicating the
common attributes for all of them?
And then you do the split; a sketch of the spec is below.
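A minimal shift spec along those lines (written against the sample JSON later
in this thread; the @(3,...) hops and output field names are my assumptions):

    [
      {
        "operation": "shift",
        "spec": {
          "FNAMES": {
            "*": {
              "@": "[&1].FNAMES",
              "@(3,KEY1)": "[&1].KEY1",
              "@(3,KEY2)": "[&1].KEY2",
              "@(3,KEY4)": "[&1].KEY4"
            }
          }
        }
      }
    ]

This turns {"KEY1":"v","FNAMES":["A","B"],"KEY4":2} into an array of one
object per FNAMES value, each carrying the common keys, ready for SplitJson
on $.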

Etienne


On Thu, Dec 5, 2019 at 2:11 PM, James McMahon  wrote:

> Daeho and Matt, thank you for all your suggestions. You helped me get to a
> solution. Here is how I unwound my incoming JSON with a simple flow,
>
> My incoming JSON flowfile looks like this:
> {
>   "KEY1":"value1",
>   "KEY2":"value2",
>   "FNAMES":["A","B","C","D"],
>   "KEY4":2
> }
> My goal is to have a flowfile for each of A, B, C, and D, with attribute
> THIS_NAME set to each singular FNAMES value, and also preserving KEY1,
> KEY2, KEY3 as flowfile attributes with their values pulled from the JSON.
>
> Final flow: ListFile->FetchFile->EvaluateJsonPath->SplitJson->ExtractText
>
> EvaluateJsonPath grabs all JSON key/values to attributes. At this point
> though, FNAMES attribute is ["A","B","C","D"] -- not quite what we require.
>
> SplitJson creates four flowfiles from one, its configuration setting
> JsonPath Expression as $.FNAMES . This results in four flowfiles. We're
> almost home.
>
> The flowfile content is now just each of the singular values from FNAMES.
> ExtractText creates attribute THIS_NAME configured like this:
> Include Capture Group 0 false
> Dynamic property added is THIS_NAME, configured to regex pattern (.*) .
> (Bad idea in general in any situation where content length may vary to
> large content, but not in our case where we know the values in the original
> JSON list are no larger than half a KB.)
>
> After this ExtractText step we have all our attributes, including
> fragment.count of 4 and a common fragment.identifier we can later use to
> reunite all after individual processing, with a MergeContent or similar.
>
> Thank you once again.
>
> On Thu, Dec 5, 2019 at 6:36 AM 노대호Daeho Ro 
> wrote:
>
>> Hm, I might be wrong.
>>
>> It wouldn't preserve other keys, so you have to evaluate other keys
>> first, and split FNAMES and evaluate again. Sorry for the confusion.
>>
>> On Thu, Dec 5, 2019 at 8:29 PM, James McMahon wrote:
>>
>>> Typo in my initial reply. I did use $.FNAMES. It drops all the other
>>> key/value pairs in the output split result flowfiles.
>>> I configured my SplitJSON like so:
>>> JsonPath Expression: $.FNAMES
>>> Null Value Representation: empty string
>>>
>>> If there are two values in the json array for that key FNAMES, I do
>>> get two output flowfiles. But the only value present in the output is the
>>> value from the split of the value list of FNAMES. All my other JSON keys
>>> and values are not present. How do I tell SplitJSON to also retain all the
>>> key/values I did not split on?
>>>
>>> On Thu, Dec 5, 2019 at 6:15 AM 노대호Daeho Ro 
>>> wrote:
>>>
 Path to be $.FNAMES, that will work I guess.

 On Thu, Dec 5, 2019 at 8:10 PM, James McMahon wrote:

> I should add that I also tried this for JsonPathExpression $.*
> That result also wasn't what I require, because it gave me 14
> different flowfiles each with only one value - - the two that resulted 
> from
> the FNAME key, and one for each of the other 12 keys that had only one
> value.
> My incoming JSON flowfile looks like this:
> {
>   "KEY1":"value1",
>   "KEY2":"value2",
>.
>.
>   "FNAMES":["A","B"],
>   "KEY13":2
> }
>
> This is what I need as output:
> {
>   "KEY1":"value1",
>   "KEY2":"value2",
>.
>.
>   "FNAMES":"A",,
>   "KEY13":2
> }
>
> and
>
> {
>   "KEY1":"value1",
>   "KEY2":"value2",
>.
>.
>   "FNAMES":"B",
>   "KEY13":2
> }
>
> How does one configure SplitJSON to accomplish that?
>
> On Thu, Dec 5, 2019 at 5:59 AM James McMahon 
> wrote:
>
>> Daeho, I configured my SplitJSON like so:
>> JsonPath Expression: $.FNAME
>> Null Value Representation: empty string
>>
>> If there are two values in the json array for that key FNAME, I do
>> get two output flowfiles. But the only value present in the output is the
>> value from the split of the list. All my other JSON keys and values are 
>> not
>> present. How do I tell SplitJSON to also retain all the key/values I did
>> not split on?
>>
>>
>> On Wed, Dec 4, 2019 at 9:26 PM 노대호Daeho Ro 
>> wrote:
>>
>>> Of course.
>>>
>>> There is a processor, the name is SplitJson. It can split the JSON
>>> text by defined key. For example, if there is a key name is 'fname' and 
>>> has
>>> the value [a, b, c]. Once you split the JSON by that processor, the
>>> resulted JSON will have the same key and values for others but 'fname' 
>>> will
>>> be a for the first JSON , b for the second and so on.
>>>
>>> After that, do the EvaluateJsonPath for FNAME then it will have a
>>> and b and c for 

Re: LookupRecord with RestLookupService

2019-12-04 Thread Etienne Jouvin
Hello all.

Found a solution.
On the LookupRecord, I put these properties:
mime.type : toString('application/json', 'UTF-8')
request.body : toString('{...}', 'UTF-8')
request.method : toString('POST', 'UTF-8')

One big difficulty: the request.body value must not contain any carriage
returns.

It seems to send the properties correctly.

But now I am not able to set the connect and read timeouts on the HTTP call.
The requested service is pretty slow... so I am always hitting the timeout.

Do you know if there is a header or something I can send to set up
different timeouts?

Regards

Etienne Jouvin







On Wed, Dec 4, 2019 at 8:31 PM, Etienne Jouvin  wrote:

> Hello all.
>
> I am trying a little bit the LookupRecord with RestLookupService.
>
> When watching the code for RestLookupService, we can see the following in
> the lookup function :
> final String endpoint = determineEndpoint(coordinates);
> final String mimeType = (String)coordinates.get(MIME_TYPE_KEY);
> final String method = ((String)coordinates.getOrDefault(METHOD_KEY, "get")).trim().toLowerCase();
> final String body = (String)coordinates.get(BODY_KEY);
>
> Ok it is looking nice, we can send the body and method type.
>
> So I build the request body in an attribute of the flowfile, then set
> the attributes like this:
> request.body :  {}
> request.method :  POST
>
>
> BUT those values are not sent to the service.
> When I analyzed the LookupRecord code, I found the route function, where
> in fact the coordinates are only built from record path evaluation.
>
> Does it mean the mime.type, method, and body should be retrieved from a
> record path evaluated on the flowfile body content?
>
> Regards.
>
> Etienne Jouvin
>
>


LookupRecord with RestLookupService

2019-12-04 Thread Etienne Jouvin
Hello all.

I am trying a little bit the LookupRecord with RestLookupService.

When watching the code for RestLookupService, we can see the following in
the lookup function :
final String endpoint = determineEndpoint(coordinates);
final String mimeType = (String)coordinates.get(MIME_TYPE_KEY);
final String method = ((String)coordinates.getOrDefault(METHOD_KEY, "get")).trim().toLowerCase();
final String body = (String)coordinates.get(BODY_KEY);

Ok it is looking nice, we can send the body and method type.

So I build the request body in an attribute of the flowfile, then set
the attributes like this:
request.body :  {}
request.method :  POST


BUT those values are not sent to the service.
When I analyzed the LookupRecord code, I found the route function, where in
fact the coordinates are only built from record path evaluation.

Does it mean the mime.type, method, and body should be retrieved from a
record path evaluated on the flowfile body content?

Regards.

Etienne Jouvin


Re: Jolt specification registry

2019-11-20 Thread Etienne Jouvin
Does that mean the AvroSchemaRegistry is "not a good idea", actually?

I am pretty sure I misunderstood, because in that case there is a kind of
compilation of the schema.

But you are right, a registry for Jolt specifications would just be storage
of blobs.

On Wed, Nov 20, 2019 at 4:36 PM, Mark Payne  wrote:

> I would recommend that we also be careful about the naming here and tying
> this to Jolt. Really, this is just a mechanism for externalizing a big blob
> of text (or bytes). There are several other processors and controller
> services that do this, such as scripted components, Hadoop related
> processors that need things like core-site.xml, etc.
>
> It may be advantageous to consider this as a more generic way to access
> any such resource. A simple implementation would be purely configured
> through the UI but there could be other future implementations that are
> based on fetching from remote services, etc.
>
> Thanks
> -Mark
>
> Sent from my iPhone
>
> On Nov 20, 2019, at 10:28 AM, Joe Witt  wrote:
>
> 
> Yeah filing a JIRA would be good.  Contributing a PR for it would be even
> better.  It should have no impact on the schema registry controller
> service.  This is different.
>
> Thanks
>
> On Wed, Nov 20, 2019 at 10:26 AM Etienne Jouvin 
> wrote:
>
>> Yes, it would be a ControllerService as you described.
>>
>> There are currently three implementations:
>> * AvroSchemaRegistry
>> * ConfluentSchemaRegistry
>> * HortonworksSchemaRegistry
>>
>> It could be based on something like them.
>>
>> Maybe I should file something in Jira or somewhere else to submit the
>> idea to the NiFi developers?
>>
>> But it also means that the JoltTransformJSON and
>> JoltTransformRecord processors would have to be changed.
>>
>>
>>
>>
>>
>>
>>
>>> On Wed, Nov 20, 2019 at 4:08 PM, Joe Witt  wrote:
>>
>>> Hello
>>>
>>> Is the idea to have a place to store Jolt specifications that you could
>>> then access in various components?
>>>
>>> If so a simple ControllerService such as 'JoltSpecControllerService'
>>> which has a list of keys (names of specs) and values (the spec) would
>>> probably do the trick.
>>>
>>> Thanks
>>>
>>> On Wed, Nov 20, 2019 at 10:04 AM Otto Fowler 
>>> wrote:
>>>
>>>> I think that is a great idea, I’d suggest the same thing for protobuf
>>>> specs as well.
>>>>
>>>> Even if the first step is the registry supporting raw bytes access and
>>>> support….
>>>>
>>>>
>>>>
>>>>
>>>> On November 20, 2019 at 09:28:23, Etienne Jouvin (
>>>> lapinoujou...@gmail.com) wrote:
>>>>
>>>> Hello all.
>>>>
>>>>
>>>> For readers and writers, there is the possibility to store the schema
>>>> inside a schema registry.
>>>> What do you think about having this type of mechanism for
>>>> Jolt transformations?
>>>> Currently, I can put Jolt specifications in variables and get them from
>>>> there, but I think it could be nice to have the same as the schema registry.
>>>>
>>>> Regards.
>>>>
>>>> Etienne Jouvin
>>>>
>>>>
>>>>


Re: Jolt specification registry

2019-11-20 Thread Etienne Jouvin
For the PR...

If only I had enough time for that ;)
Soon it will be the end-of-year holidays, maybe...



On Wed, Nov 20, 2019 at 4:28 PM, Joe Witt  wrote:

> Yeah filing a JIRA would be good.  Contributing a PR for it would be even
> better.  It should have no impact on the schema registry controller
> service.  This is different.
>
> Thanks
>
> On Wed, Nov 20, 2019 at 10:26 AM Etienne Jouvin 
> wrote:
>
>> Yes, it would be a ControllerService as you described.
>>
>> There are currently three implementations:
>> * AvroSchemaRegistry
>> * ConfluentSchemaRegistry
>> * HortonworksSchemaRegistry
>>
>> It could be based on something like them.
>>
>> Maybe I should file something in Jira or somewhere else to submit the
>> idea to the NiFi developers?
>>
>> But it also means that the JoltTransformJSON and
>> JoltTransformRecord processors would have to be changed.
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Nov 20, 2019 at 4:08 PM, Joe Witt  wrote:
>>
>>> Hello
>>>
>>> Is the idea to have a place to store Jolt specifications that you could
>>> then access in various components?
>>>
>>> If so a simple ControllerService such as 'JoltSpecControllerService'
>>> which has a list of keys (names of specs) and values (the spec) would
>>> probably do the trick.
>>>
>>> Thanks
>>>
>>> On Wed, Nov 20, 2019 at 10:04 AM Otto Fowler 
>>> wrote:
>>>
>>>> I think that is a great idea, I’d suggest the same thing for protobuf
>>>> specs as well.
>>>>
>>>> Even if the first step is the registry supporting raw bytes access and
>>>> support….
>>>>
>>>>
>>>>
>>>>
>>>> On November 20, 2019 at 09:28:23, Etienne Jouvin (
>>>> lapinoujou...@gmail.com) wrote:
>>>>
>>>> Hello all.
>>>>
>>>>
>>>> For readers and writers, there is the possibility to store the schema
>>>> inside a schema registry.
>>>> What do you think about having this type of mechanism for
>>>> Jolt transformations?
>>>> Currently, I can put Jolt specifications in variables and get them from
>>>> there, but I think it could be nice to have the same as the schema registry.
>>>>
>>>> Regards.
>>>>
>>>> Etienne Jouvin
>>>>
>>>>
>>>>


Re: Jolt specification registry

2019-11-20 Thread Etienne Jouvin
Yes, it would be a ControllerService as you described.

There are currently three implementations:
* AvroSchemaRegistry
* ConfluentSchemaRegistry
* HortonworksSchemaRegistry

It could be based on something like them.

Maybe I should file something in Jira or somewhere else to submit the idea
to the NiFi developers?

But it also means that the JoltTransformJSON and
JoltTransformRecord processors would have to be changed.







On Wed, Nov 20, 2019 at 4:08 PM, Joe Witt  wrote:

> Hello
>
> Is the idea to have a place to store Jolt specifications that you could
> then access in various components?
>
> If so a simple ControllerService such as 'JoltSpecControllerService' which
> has a list of keys (names of specs) and values (the spec) would probably do
> the trick.
>
> Thanks
>
> On Wed, Nov 20, 2019 at 10:04 AM Otto Fowler 
> wrote:
>
>> I think that is a great idea, I’d suggest the same thing for protobuf
>> specs as well.
>>
>> Even if the first step is the registry supporting raw bytes access and
>> support….
>>
>>
>>
>>
>> On November 20, 2019 at 09:28:23, Etienne Jouvin (lapinoujou...@gmail.com)
>> wrote:
>>
>> Hello all.
>>
>>
>> For readers and writers, there is the possibility to store the schema
>> inside a schema registry.
>> What do you think about having this type of mechanism for
>> Jolt transformations?
>> Currently, I can put Jolt specifications in variables and get them from
>> there, but I think it could be nice to have the same as the schema registry.
>>
>> Regards.
>>
>> Etienne Jouvin
>>
>>
>>


Jolt specification registry

2019-11-20 Thread Etienne Jouvin
Hello all.


For readers and writers, there is the possibility to store the schema inside
a schema registry.
What do you think about having this type of mechanism for Jolt
transformations?
Currently, I can put Jolt specifications in variables and get them from there
(a sketch is below), but I think it could be nice to have the same as the
schema registry.
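For context, a minimal sketch of the variable approach (the variable name is
made up; the Jolt Specification property supports Expression Language, which
is what makes this work):

    JoltTransformJSON
      Jolt Specification : ${my.jolt.spec}

where my.jolt.spec is defined on the process group and holds the full spec text.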

Regards.

Etienne Jouvin


Merge content Max Bin Age optional ?

2019-10-23 Thread Etienne Jouvin
Hello all.

Still working on merging content.
But I have a doubt.


Class BinFiles.java
Function onTrigger.
Got a call to migrateBins when is scheduled.

Function migrateBins, first lines are :

> for (final Bin bin : binManager.removeReadyBins(true)) {
>     this.readyBins.add(bin);
>     added++;
> }



Class BinManager
Function removeReadyBins
There is a loop

> for (final Bin bin : group.getValue()) {
>     if (relaxFullnessConstraint && (bin.isFullEnough() || bin.isOlderThan(maxBinAgeSeconds.get(), TimeUnit.SECONDS))) {
>         readyBins.add(bin);
>     } else if (!relaxFullnessConstraint && bin.isFull()) {
>         readyBins.add(bin);
>     } else {
>         remainingBins.add(bin);
>     }
>     ...
> }


In this case relaxFullnessConstraint is true, so it checks whether the bin is
full enough or older than the max age.

Class Bin
Function isOlderThan

> public boolean isOlderThan(final int duration, final TimeUnit unit) {
>     final long ageInNanos = System.nanoTime() - creationMomentEpochNs;
>     return ageInNanos > TimeUnit.NANOSECONDS.convert(duration, unit);
> }


So if I set 0 seconds for the max age, or keep it empty,
this function will always return true: TimeUnit.NANOSECONDS.convert(0,
SECONDS) is 0, and ageInNanos is positive for any bin that has existed for
at least one nanosecond.

So the bin can be considered full, but that is wrong.

First call, with fragment identifier --> FRAG1
Second call, with fragment identifier --> FRAG2
But during the second call, the check for the first bin will return true. It
is not fullEnough, but it is older than the "specified" max age.

Do you confirm that?

If yes, is this the expected behavior?

Regards.

Etienne Jouvin


Re: Merge content Defrag with high activity

2019-10-22 Thread Etienne Jouvin
Thanks for the answer.

When I read the source code, I saw how complex and powerful it is.

In fact, I need multiple tasks, and I may loop over the "same" file.
To summarize, it is something like this:

An object has X versions.
For the first version:
* Fork the object to one branch in order to do an external call and get the
result.
* Fork the object to do "nothing", then merge the result from the first fork
into it.
Increase the version number and do it again.
And so on, looping over all versions.

And for "performance", I try to do this on 50 concurrent objects.
In most cases it works during the merge. But sometimes... the bin was fired
as "ready" without having reached the expected content.

Anyway, as you said, I had to work on all the parameters.
I changed the fragment identifier to include the version number, and
not only a value that is common to all versions (see the sketch below).
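A minimal sketch of that change (the object.id / object.version attribute
names are from my flow, i.e. assumptions; fragment.identifier is the standard
attribute MergeContent's Defragment strategy bins on):

    UpdateAttribute
      fragment.identifier : ${object.id}-${object.version}

so each version defragments into its own bin instead of colliding.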

I also tried setting the merge's concurrent tasks to 1 and a Run Schedule
of 0.1 seconds.
But it was "pretty slow", not as fast as I expected.

But now, working with the new identifier, and setting the maximum number of
bins really greater than what should be possible (6x more), it works as
expected.

I will keep an eye on it anyway.

Thanks

Etienne Jouvin










On Tue, Oct 22, 2019 at 4:59 PM, Joe Witt  wrote:

> Hello
>
> You should only have 1 or a few tasks at most for this processor.
> Scheduling can be frequent but choosing different options and seeing for
> your case is best.
>
> This processor is relatively difficult to configure correctly as it is a
> complex case and has powerful options.  What you will need to watch out for
> is the maximum number of bins it can track at once.  If each bin is to hold
> at least and at most 2 things and lots of data is arriving then what you
> need are lots of bins so focus on that setting.
>
> Thanks
>
> On Tue, Oct 22, 2019 at 10:49 AM Etienne Jouvin 
> wrote:
>
>> Hi,
>>
>> Here is the case.
>>
>> High activity, and using a MergeContent processor.
>> I set up the MergeContent with 300 concurrent tasks and no schedule,
>> meaning Run Schedule set to 0.
>>
>>
>> Minimum Number of Entries : 2
>> Maximum Number of Entries : 2
>>
>>
>> No limit on the size.
>>
>>
>> In some cases, I reach this exception:
>> because the expected number of fragments is 2 but found only 1 fragments
>>
>>
>>
>> What I believe is that I am hitting a side effect.
>> Maybe I have multiple executions at the same time, and some bins are
>> considered full and returned to be processed. But when returned, the
>> bin does not contain all the expected flowfiles, and during the execution
>> of the processBins function in class BinFiles I reach the exception.
>>
>>
>> It seems that I manage to avoid this error when setting concurrent tasks
>> to 1.
>> But it slows the process down a little.
>>
>> Should I keep 300 concurrent tasks and set some schedule, something like
>> 0.1 seconds?
>>
>> Regards
>>
>> Etienne Jouvin
>>
>


Merge content Defrag with high activity

2019-10-22 Thread Etienne Jouvin
Hi,

Here is the case.

High activity, and using a MergeContent processor.
I set up the MergeContent with 300 concurrent tasks and no schedule,
meaning Run Schedule set to 0.


Minimum Number of Entries : 2
Maximum Number of Entries : 2


No limit on the size.


In some cases, I reach this exception:
because the expected number of fragments is 2 but found only 1 fragments



What I believe is that I am hitting a side effect.
Maybe I have multiple executions at the same time, and some bins are
considered full and returned to be processed. But when returned, the
bin does not contain all the expected flowfiles, and during the execution
of the processBins function in class BinFiles I reach the exception.


It seems that I manage to avoid this error when setting concurrent tasks to 1.
But it slows the process down a little.

Should I keep 300 concurrent tasks and set some schedule, something like
0.1 seconds?

Regards

Etienne Jouvin