Re: Nifi 1.16.1 migration failed for encryption of sensitive values

2022-05-04 Thread sanjeet rath
Thanks David, it worked after making the suggested manual changes in both
files.

Thanks a lot for the help
Sanjeet
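
For reference, the manual change is on the encryption attribute of each
protected property in authorizers.xml and login-identity-providers.xml; a
minimal sketch, where the property name and ciphertext are placeholders:

Before (as produced by the encrypt-config toolkit under 1.12):

<property name="Keystore Password" encryption="aes/gcm/256">...ciphertext...</property>

After (accepted by NiFi 1.16.1, per David's follow-up below):

<property name="Keystore Password" encryption="AES_GCM">...ciphertext...</property>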

On Wed, 4 May 2022 at 10:24 PM, David Handermann <
exceptionfact...@apache.org> wrote:

> Hi Sanjeet,
>
> Following up on my previous reply, the potential workaround would actually
> require changing "aes/gcm/256" to "AES_GCM". I am looking into addressing
> this problem in a Jira issue.
>
> Regards,
> David Handermann
>
> On Wed, May 4, 2022 at 11:41 AM David Handermann <
> exceptionfact...@apache.org> wrote:
>
>> Hi Sanjeet,
>>
>> Reviewing the implementation related to the error message you provided,
>> it looks like this could be a bug with decrypting values in authorizers.xml.
>>
>> As a workaround, can you try manually editing authorizers.xml and
>> login-identity-providers.xml, changing "aes/gcm/256" to just "aes/gcm"?
>>
>> The protection scheme resolver should match the standard value, but there
>> may be a problem with the comparison of encryption scheme names.  Changing
>> the "encryption" attribute value to "aes/gcm" may work around the problem,
>> but it sounds like this may need to be addressed in a Jira issue.
>>
>> Regards,
>> David Handermann
>>
>> On Wed, May 4, 2022 at 11:22 AM sanjeet rath 
>> wrote:
>>
>>> Hi Isha,
>>>
>>> We are using the same Java installation.
>>>
>>> Our Java version is OpenJDK 11.
>>>
>>> On the same system we are able to encrypt with aes/gcm/256 for our old
>>> 1.12.1 NiFi version.
>>>
>>> Thanks,
>>> Sanjeet
>>>
>>>
>>> On Wed, 4 May 2022 at 8:40 PM, Isha Lamboo <
>>> isha.lam...@virtualsciences.nl> wrote:
>>>
>>>> Hi Sanjeet,
>>>>
>>>>
>>>>
>>>> Are you performing the toolkit encryption using the same java
>>>> installation that’s running the NiFi server?
>>>>
>>>>
>>>>
>>>> If not, you may be running into problems because of encryption
>>>> limitations on the java version on your NiFi server.
>>>>
>>>> I think AES256 needs the “Unlimited Strength Encryption” policy and
>>>> that may not be enabled (or even allowed to be enabled in your country).
>>>>
>>>>
>>>>
>>>> If you run the toolkit with the same java installation as the server,
>>>> you can verify this. It should either use aes/gcm/128 or give the same
>>>> error if it tries to use aes/gcm/256.
>>>>
>>>>
>>>>
>>>> Another thing to check is whether you're using Java 8u251 or newer, as
>>>> the migration guidance states.
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>>
>>>>
>>>> Isha
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* sanjeet rath 
>>>> *Sent:* Wednesday, 4 May 2022 17:09
>>>> *To:* users@nifi.apache.org
>>>> *Subject:* Re: Nifi 1.16.1 migration failed for encryption of
>>>> sensitive values
>>>>
>>>>
>>>>
>>>> Thanks Pierre for the quick response. I have followed the same doc, and
>>>> this is the 3rd version upgrade I am doing for NiFi.
>>>>
>>>>
>>>>
>>>> Actually, if you see the last line of the error, it looks like aes/gcm/256
>>>> is not supported.
>>>>
>>>>
>>>>
>>>> So if you could point out something I am doing wrong for this specific
>>>> 1.16.1 version, it would be really helpful for me.
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Sanjeet
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, 4 May 2022 at 8:20 PM, Pierre Villard <
>>>> pierre.villard...@gmail.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>>
>>>>
>>>> I recommend reading the migration guidance documentation:
>>>>
>>>> https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance

Re: Nifi 1.16.1 migration failed for encryption of sensitive values

2022-05-04 Thread sanjeet rath
Hi Isha,

We are using the same Java installation.

Our Java version is OpenJDK 11.

On the same system we are able to encrypt with aes/gcm/256 for our old
1.12.1 NiFi version.
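
One quick way to verify the key-length limit of a given JVM is a sketch like
the one below; Cipher.getMaxAllowedKeyLength is a standard JCE call and
returns Integer.MAX_VALUE when the unlimited-strength policy is active,
which is the default on OpenJDK 11:

import javax.crypto.Cipher;

public class CheckAesLimit {
    public static void main(String[] args) throws Exception {
        // Prints 2147483647 when AES-256 is allowed on this JVM
        System.out.println(Cipher.getMaxAllowedKeyLength("AES"));
    }
}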

Thanks,
Sanjeet


On Wed, 4 May 2022 at 8:40 PM, Isha Lamboo 
wrote:

> Hi Sanjeet,
>
>
>
> Are you performing the toolkit encryption using the same java installation
> that’s running the NiFi server?
>
>
>
> If not, you may be running into problems because of encryption limitations
> on the java version on your NiFi server.
>
> I think AES256 needs the “Unlimited Strength Encryption” policy and that
> may not be enabled (or even allowed to be enabled in your country).
>
>
>
> If you run the toolkit with the same java installation as the server, you
> can verify this. It should either use aes/gcm/128 or give the same error if
> it tries to use aes/gcm/256.
>
>
>
> Another thing to check is whether you're using Java 8u251 or newer, as the
> migration guidance states.
>
>
>
> Regards,
>
>
>
> Isha
>
>
>
>
>
> *From:* sanjeet rath 
> *Sent:* Wednesday, 4 May 2022 17:09
> *To:* users@nifi.apache.org
> *Subject:* Re: Nifi 1.16.1 migration failed for encryption of sensitive
> values
>
>
>
> Thanks Pierre for the quick response. I have followed the same doc, and
> this is the 3rd version upgrade I am doing for NiFi.
>
>
>
> Actually, if you see the last line of the error, it looks like aes/gcm/256 is
> not supported.
>
>
>
> So if you could point out something I am doing wrong for this specific 1.16.1
> version, it would be really helpful for me.
>
>
>
> Thanks,
>
> Sanjeet
>
>
>
>
>
>
>
> On Wed, 4 May 2022 at 8:20 PM, Pierre Villard 
> wrote:
>
> Hi,
>
>
>
> I recommend reading the migration guidance documentation:
>
> https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance
>
>
>
> HTH,
>
> Pierre
>
>
>
> On Wed, 4 May 2022 at 16:46, sanjeet rath  wrote:
>
> Hi ,
>
>
>
> I am facing an issue in migration from 1.12 to 1.16.1.
>
> I have created a 1.16.1 cluster and copied the flow.xml, authorizers and
> authorization user files from my previous 1.12 cluster to this new
> cluster.
>
>
>
> When I start the cluster with all the keystore passwords in the
> authorizers and login-identity-providers files and the NiFi sensitive key
> value left unencrypted in the nifi.properties file, the cluster comes up without any issue.
>
>
>
> When I encrypt using the encrypt-config toolkit, all the properties are
> successfully encrypted. However, while starting the cluster I get an error:
>
>
>
> Error creating bean with name 'authorizer': factory bean threw exception
> on object creation; nested exception is org.apache.nifi.properties.
> SensitivePropertyProtectionException: protection scheme [aes/gcm/256] is
> not supported.
>
>
>
> Any hint is really helpful, as I have been trying for the last 2 days.
>
>
>
> Thanks and regards
>
> Sanjeet
>
>
>
>
>
>
>
>
>
> --
>
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
> --
>
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Nifi 1.16.1 migration failed for encryption of sensitive values

2022-05-04 Thread sanjeet rath
Thanks Pierre for the quick response. I have followed the same doc, and this
is the 3rd version upgrade I am doing for NiFi.

Actually, if you see the last line of the error, it looks like aes/gcm/256 is
not supported.

So if you could point out something I am doing wrong for this specific 1.16.1
version, it would be really helpful for me.

Thanks,
Sanjeet



On Wed, 4 May 2022 at 8:20 PM, Pierre Villard 
wrote:

> Hi,
>
> I recommend reading the migration guidance documentation:
> https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance
>
> HTH,
> Pierre
>
> On Wed, 4 May 2022 at 16:46, sanjeet rath  wrote:
>
>> Hi ,
>>
>> I am facing an issue in migration from 1.12 to 1.16.1.
>> I have created a 1.16.1 cluster and copied the flow.xml, authorizers and
>> authorization user files from my previous 1.12 cluster to this new
>> cluster.
>>
>> When I start the cluster with all the keystore passwords in the
>> authorizers and login-identity-providers files and the NiFi sensitive key
>> value left unencrypted in the nifi.properties file, the cluster comes up without any issue.
>>
>> When I encrypt using the encrypt-config toolkit, all the properties are
>> successfully encrypted. However, while starting the cluster I get an error:
>>
>> Error creating bean with name 'authorizer': factory bean threw
>> exception on object creation; nested exception is org.apache.nifi.properties.
>> SensitivePropertyProtectionException: protection scheme [aes/gcm/256] is
>> not supported.
>>
>> Any hint is really helpful, as I have been trying for the last 2 days.
>>
>> Thanks and regards
>> Sanjeet
>>
>>
>>
>>
>> --
>> Sanjeet Kumar Rath,
>> mob- +91 8777577470
>>
>> --
Sanjeet Kumar Rath,
mob- +91 8777577470


Nifi 1.16.1 migration failed for encryption of sensitive values

2022-05-04 Thread sanjeet rath
Hi ,

I am facing an issue in migration from 1.12 to 1.16.1.
I have created a 1.16.1 cluster and copied the flow.xml, authorizers and
authorization user files from my previous 1.12 cluster to this new
cluster.

When I start the cluster with all the keystore passwords in the authorizers
and login-identity-providers files and the NiFi sensitive key value left
unencrypted in the nifi.properties file, the cluster comes up without any issue.

When I encrypt using the encrypt-config toolkit, all the properties are
successfully encrypted. However, while starting the cluster I get an error:

Error creating bean with name 'authorizer': factory bean threw exception
on object creation; nested exception is org.apache.nifi.properties.
SensitivePropertyProtectionException: protection scheme [aes/gcm/256] is
not supported.

Any hint is really helpful, as I have been trying for the last 2 days.

Thanks and regards
Sanjeet




-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: SQLServer Active Directory Error

2022-03-25 Thread sanjeet rath
Hi Scott,

I have also faced the same issue in the NiFi 1.12.1 version. I checked
with an unencrypted NiFi setup (http) and it works fine; the issue only
appears in the encrypted (https) setup.
I tried a few things, but nothing has worked so far. I don't know whether it
is a bug or not.

Thanks,
Sanjeet

On Thu, 24 Mar 2022, 11:20 pm scott,  wrote:

> Hi group,
> I'm having some trouble getting an SQL connection to work. I am trying to
> connect NiFi v1.14 to an Azure SQL Server using Active Directory auth. I've
> tried several combinations of connection strings and jdbc drivers, but have
> not had any success. The latest error I'm getting now is
> "java.sql.SQLException: Cannot create PoolableConnectionFactory (Failed to
> load ADAL4J Java library for performing ActiveDirectoryPassword
> authentication.)". The strange thing with this error is that I confirmed
> ADAL4J is part of the extensions in NiFi.
> Just wondering if anyone has had similar challenges or could point me to a
> solution or workaround.
>
> Thanks,
> Scott
>
>


How to restrict custom processor execution time

2021-08-10 Thread sanjeet rath
Hi ,

I am building a custom processor, and there is a restriction I want to put
on the processor: it should not be scheduled to run more than once in a
1-hour period.

I can achieve this by setting the "run schedule" to 60 minutes.

Is there any other way I can do this in my custom processor code, so that
it won't allow the user to select a "run schedule" of less than 60 minutes?
Basically similar to how we can restrict a processor to execute on the
primary node.

Any other thought is really helpful.
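
One possible approach, within the standard processor API, is a minimal
sketch like the one below (an assumption, not something NiFi provides out of
the box): it cannot stop the user from configuring a shorter run schedule,
but it keeps the processor from doing work more than once per hour by
yielding early calls.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

public class HourlyRestrictedProcessor extends AbstractProcessor {

    private static final long MIN_INTERVAL_NANOS = TimeUnit.MINUTES.toNanos(60);

    // Timestamp of the last real run on this node; 0 means "never ran"
    private final AtomicLong lastRunNanos = new AtomicLong(0L);

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session)
            throws ProcessException {
        final long now = System.nanoTime();
        final long last = lastRunNanos.get();

        // Scheduled again before 60 minutes have elapsed: yield instead of working
        if (last != 0L && now - last < MIN_INTERVAL_NANOS) {
            context.yield();
            return;
        }
        if (!lastRunNanos.compareAndSet(last, now)) {
            context.yield(); // another thread won the race
            return;
        }

        // ... the processor's actual work goes here ...
    }
}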

Thanks,
Sanjeet


Re: Sensitive context parameter from ExecuteScript

2021-07-21 Thread sanjeet rath
Hi Etienne,

I also have a similar use case of using a sensitive parameter context in a
non-sensitive property, like ExecuteSQL's "SQL select query" property
field.

Did you find any way of doing that?
Any other suggestion from anyone is really appreciated.

Thanks,
Sanjeet

On Mon, 19 Jul 2021, 9:18 pm Etienne Jouvin, 
wrote:

> Hi All.
>
> I was wondering what is the best way to access a sensitive context
> parameter from ExecuteScript.
> When adding a property on the ExecuteScript processor, I cannot use the
> sensitive property.
>
> Thanks for any input.
>
> Regards
>
> Etienne Jouvin
>


Re: Regarding jira issue: NIFI-7856

2021-06-18 Thread sanjeet rath
Hi,

Any help or input on the mail below is really appreciated.
I have been almost stuck here for the past 2 weeks.
I have tried almost all the options I have.

Thank you everyone in advance.
Regards,
Sanjeet

On Tue, 15 Jun 2021, 7:56 pm sanjeet rath,  wrote:

> Hi,
>
> I am observing the symptoms mentioned in the Jira issue
> (https://issues.apache.org/jira/browse/NIFI-7856) in one of our PROD
> clusters.
>
> ERROR [Compress Provenance Logs-1-thread-2] o.a.n.p.s.EventFileCompressor 
> Failed to compress ./provenance_repository/1693519.prov on rollover
> java.io.FileNotFoundException: ./provenance_repository/1693519.prov (No such 
> file or directory)
>
>
> I saw the code was fixed in the 1.13 version with the below file changes in
> nifi-provenance-repository-bundle.
>
>
> nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/store/RecordWriterLease.java
>
> nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/store/WriteAheadStorePartition.java
> So I have applied the above 2 file changes on top of the 1.12.1 version of
> nifi-provenance-repository-bundle and built the nifi-provenance-repository
> NAR file. I will then deploy this new NAR file to the /lib folder of the
> 1.12.1 version of NiFi.
>
> Is my above approach correct?
>
> Second thing: I am facing an issue in the 1.12 version; I am unable to
> replicate the provenance error in the lower environment (tried with the
> 7856.xml template attached to the Jira by Mark).
> So I am not able to tell whether the change I made works or not,
> as I cannot directly deploy the NAR to the prod env where the error is
> constantly occurring every hour.
>
> Could you please help me replicate this issue in the 1.12.1 version?
> Along with the template, are there any other config changes I need to make
> to replicate the issue?
>
>
> Regards,
> --
> Sanjeet Kumar Rath,
>
>
>


Regarding jira issue: NIFI-7856

2021-06-15 Thread sanjeet rath
Hi,

I am observing the symptoms mentioned in the Jira issue
(https://issues.apache.org/jira/browse/NIFI-7856) in one of our PROD
clusters.

ERROR [Compress Provenance Logs-1-thread-2]
o.a.n.p.s.EventFileCompressor Failed to compress
./provenance_repository/1693519.prov on rollover
java.io.FileNotFoundException: ./provenance_repository/1693519.prov
(No such file or directory)


I saw the code was fixed in the 1.13 version with the below file changes in
nifi-provenance-repository-bundle.

nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/store/RecordWriterLease.java

nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/store/WriteAheadStorePartition.java

So I have applied the above 2 file changes on top of the 1.12.1 version of
nifi-provenance-repository-bundle and built the nifi-provenance-repository
NAR file. I will then deploy this new NAR file to the /lib folder of the
1.12.1 version of NiFi.

Is my above approach correct?
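
For reference, the build-and-deploy step described above would look roughly
like this; a sketch assuming a checkout of the NiFi 1.12.1 source tree with
the two patched files applied, and $NIFI_HOME as a placeholder:

# rebuild only the provenance repository bundle and what it needs
mvn -pl nifi-nar-bundles/nifi-provenance-repository-bundle -am clean install -DskipTests

# copy the rebuilt NAR into the running NiFi's lib folder
cp nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-provenance-repository-nar/target/nifi-provenance-repository-nar-1.12.1.nar "$NIFI_HOME/lib/"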

Second thing: I am facing an issue in the 1.12 version; I am unable to replicate
the provenance error in the lower environment (tried with the 7856.xml
template attached to the Jira by Mark).
So I am not able to tell whether the change I made works or not,
as I cannot directly deploy the NAR to the prod env where the error is
constantly occurring every hour.

Could you please help me replicate this issue in the 1.12.1 version?
Along with the template, are there any other config changes I need to make to
replicate the issue?


Regards,
-- 
Sanjeet Kumar Rath,


DataDog reporting task is not working on AWS EKS

2021-05-19 Thread sanjeet rath
Hi ,

My 3-node NiFi cluster is running on AWS EKS, and for monitoring I have added
the Datadog reporting task.
Our structure is: the dd-agent runs as a DaemonSet pod on the host, and
from the NiFi pod the Datadog reporting task sends the metrics to the
dd_agent_host over UDP on port 8125.

I saw that the below code works only when the dd-agent is installed on the
local pod/instance, so I built a custom reporting task with the below
modification.
Code reference: nifi-1.12.1/nifi-nar-bundles/nifi-datadog-bundle/nifi-datadog-reporting-task/src/main/java/org/apache/nifi/reporting/datadog/DDMetricRegistryBuilder.java

private DatadogReporter createDatadogReporter(MetricRegistry metricRegistry) throws IOException {
    DatadogReporter reporter = DatadogReporter.forRegistry(metricRegistry)
            // original: .withHost(InetAddress.getLocalHost().getHostName())
            .withHost(System.getenv("env variable for dd-agent host"))  // my new code
            .withTransport(transport)
            .withTags(tags)
            .build();
    return reporter;
}

So the above modified code gets the dd-agent host from an env variable,
which is the host address of the DaemonSet pod where the dd-agent is running.
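
For reference, a common way to expose that DaemonSet host address to the NiFi
pod is the Kubernetes downward API; a sketch, where the variable name
DD_AGENT_HOST is an arbitrary choice:

env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP   # IP of the node hosting the dd-agent DaemonSet pod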
I have added a few logs and can see that the NiFi metrics are created with
the configs I provided in the reporting task setup (in the nifi-app.log
file).
However, I am not able to see the NiFi metrics in the Datadog UI.

For testing I executed the below command from the pod where NiFi is running:

echo -n "test:custom_metric:60|g" > /dev/udp/"${DD_AGENT_HOST}"/8125

I am able to see the test metric in the Datadog UI, which confirms the
connection is fine.

Could someone please help me understand anything I am doing wrong or
missing?

Regards
-- 
Sanjeet Kumar Rath,


Re: Concern regarding whether NiFi processor state is captured during upgrade of NiFi cluster

2021-02-24 Thread sanjeet rath
Thanks Mark for the detailed explanation.
Along with NiFi I am also upgrading ZooKeeper from version 3.4 to 3.5.
So my NiFi 1.8 runs on ZooKeeper 3.4 and the new NiFi 1.12.1 runs on
ZooKeeper 3.5, and both ZooKeepers are running on different Linux
servers.

That is why I need to transfer the state from the ZooKeeper 3.4 instance to
the 3.5 instance.

But my original question is: when I just copy the flow.xml.gz file from the
1.8 NiFi instance to the new 1.12 NiFi instance (and please note this new
1.12 NiFi instance runs on a different ZooKeeper instance), how is the state
id present in the 1.12 NiFi ListS3 processor, such that it starts
processing where it left off in the 1.8 environment? I have not applied
zk-migrator.sh to this new 3.5 ZooKeeper instance, so my expectation is the
state id should not be present here.





On Wed, 24 Feb 2021, 8:03 pm Mark Payne,  wrote:

> Sanjeet,
>
> For this use case, you should not be using the zk-migrator.sh tool. That
> tool is not intended to be used when upgrading nifi. Rather, the tool is to
> be used if you’re migrating nifi away from one zookeeper and onto another.
> For example, if you have a ZooKeeper instance that is shared by many other
> services, and you decide that you want to run a separate ZooKeeper instance
> purely for NiFi, then you could use that tool to copy the nifi state from
> one zookeeper instance to another.
>
> NiFi has two types of state: Local and Cluster state. ListS3, for example,
> would use Cluster state because if the Primary Node changes, the new node
> in the cluster needs to have that same state. So the state must be shared
> across the cluster. So the state itself is stored in ZooKeeper.
>
> However, you could also have something like ListFile running on every node
> in the cluster, listing files on the local disk. In such a case, if Node 1
> performs some listing, it does not make sense to share that state with Node
> 2, because Node 2 has a completely different listing (a completely
> different disk). So the state is then stored in the ./state/local directory.
>
> So when you upgrade from 1.8 to 1.12.1 you should also copy over the state
> directory to avoid losing any local state. But as long as the 1.12.1
> cluster is pointing at the same zookeeper, there is no need to migrate the
> zookeeper state.
>
> Hope this helps!
> -Mark
>
>
> On Feb 24, 2021, at 8:56 AM, sanjeet rath  wrote:
>
> Hi,
>
> My use case is to upgrade the NiFi cluster from 1.8 to 1.12.1 with state (we
> are using an external ZooKeeper, in a 3-node cluster).
>
> So the approach I followed:
> -> created a 3-node Linux box setup and installed NiFi 1.12.1 & ZooKeeper 3.5.8
> -> brought the flow.xml.gz, users.xml and authorizations.xml from the old 1.8
> env to the newly created 1.12 cluster (both clusters are on different Linux boxes)
> -> then used the zk-migrator.sh utility to create a zk-source-data.json in
> the dev 1.8 env and applied it in the 1.12 cluster to apply the states.
>
> Everything is fine with the above approach and state is captured in the
> processor.
>
>
> I am seeing another weird behaviour, which is that *without*
> using zk-migrator.sh the state is captured properly by bringing over just
> the flow.xml.gz.
>
> Steps:
> -> created a 3-node Linux box setup and installed NiFi 1.12.1 & ZooKeeper 3.5.8
> -> brought the flow.xml.gz, users.xml and authorizations.xml from the old 1.8
> env to the newly created 1.12 cluster, and I have *not* used zk-migrator.sh
> in the newly created 1.12.1 env
> -> when I look at processors like ListS3 and ListSFTP, the state id is
> captured properly in the newly created 1.12.1 env; also, when I run
> the processor it pulls the data after the captured timestamp only,
> which is perfect.
>
> Could someone help me understand how the state id is captured in
> the ListS3 processor in the 1.12.1 env, because I see it is not present in
> flow.xml.gz.
> Does it mean we can migrate the state by just bringing the flow.xml file
> without using the zk-migrator.sh utility?
> One more question: where does the zk-migrator.sh utility write the states
> in the destination cluster in the external ZooKeeper configuration?
> Is it inside /nifi/state/local/* ?
>
> Thanks in advance,
> --
> Sanjeet Kumar Rath,
>
>
>


Concern regarding whether NiFi processor state is captured during upgrade of NiFi cluster

2021-02-24 Thread sanjeet rath
Hi,

My use case is to upgrade the NiFi cluster from 1.8 to 1.12.1 with state (we
are using an external ZooKeeper, in a 3-node cluster).

So the approach I followed:
-> created a 3-node Linux box setup and installed NiFi 1.12.1 & ZooKeeper 3.5.8
-> brought the flow.xml.gz, users.xml and authorizations.xml from the old 1.8
env to the newly created 1.12 cluster (both clusters are on different Linux boxes)
-> then used the zk-migrator.sh utility to create a zk-source-data.json in the
dev 1.8 env and applied it in the 1.12 cluster to apply the states, as
sketched below
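
The two zk-migrator.sh invocations in that last step look roughly like this;
a sketch where the hostnames, ports and the /nifi root path are placeholders:

# read the cluster state from the old 3.4 ensemble into a JSON file
zk-migrator.sh -r -z old-zk-host:2181/nifi -f /tmp/zk-source-data.json

# send that state to the new 3.5 ensemble
zk-migrator.sh -s -z new-zk-host:2181/nifi -f /tmp/zk-source-data.json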

Everything is fine with the above approach, and state is captured in the
processor.


I am seeing another weird behaviour, which is that *without*
using zk-migrator.sh the state is captured properly by bringing over just the
flow.xml.gz.

Steps:
-> created a 3-node Linux box setup and installed NiFi 1.12.1 & ZooKeeper 3.5.8
-> brought the flow.xml.gz, users.xml and authorizations.xml from the old 1.8
env to the newly created 1.12 cluster, and I have *not* used zk-migrator.sh
in the newly created 1.12.1 env
-> when I look at processors like ListS3 and ListSFTP, the state id is
captured properly in the newly created 1.12.1 env; also, when I run
the processor it pulls the data after the captured timestamp only,
which is perfect.

Could someone help me understand how the state id is captured in
the ListS3 processor in the 1.12.1 env, because I see it is not present in
flow.xml.gz.
Does it mean we can migrate the state by just bringing the flow.xml file
without using the zk-migrator.sh utility?
One more question: where does the zk-migrator.sh utility write the states
in the destination cluster in the external ZooKeeper configuration?
Is it inside /nifi/state/local/* ?

Thanks in advance,
-- 
Sanjeet Kumar Rath,


Re: Nifi 1.12.1 cluster is getting hung after a few days (15 days)

2021-01-07 Thread sanjeet rath
Hi All,

Could someone please give me thoughts on the issue in the mail below, so I
can do my further analysis?

Regards,
Sanjeet

On Wed, 6 Jan 2021, 7:40 pm sanjeet rath,  wrote:

> Hi All,
>
> Happy New Year :)
>
> I upgraded our cluster from 1.8 to 1.12.1 a few days ago, and everything
> has been working fine. However, I observed that NiFi hangs after running for
> a few days (nearly 15 days after the NiFi service starts). The issue is that
> after login the browser keeps on loading, and in bootstrap.log I
> saw this message: "Apache nifi is running at PID () but not responding to
> ping requests".
> This happened to only one node of a 3-node cluster.
>
> This issue happened 3 times, on different clusters and different nodes.
>
> Every time the issue got fixed by restarting the NiFi service.
>
> During the hung state I tried to see the resource utilisation:
>
> -> top -n 1 -H -p 943785 (the NiFi process id)
>
>
> top - 08:26:36 up 40 days, 3:48, 2 users, load average: 5.28, 5.38, 5.43
> Threads: 239 total, 4 running, 235 sleeping, 0 stopped, 0 zombie
> %Cpu(s): 98.7 us, 1.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
> MiB Mem : 15829.5 total, 610.8 free, 10823.7 used, 4395.0 buff/cache
> MiB Swap: 0.0 total, 0.0 free, 0.0 used. 4456.1 avail Mem
>
>    PID USER PR NI  VIRT  RES   SHR S %CPU %MEM    TIME+ COMMAND
> 943806 root 20  0 12.5g 9.4g 18692 R 88.9 60.7 12698:50 GC Thread#1
> 943807 root 20  0 12.5g 9.4g 18692 R 88.9 60.7 12698:48 GC Thread#2
> 943808 root 20  0 12.5g 9.4g 18692 R 88.9 60.7 12698:58 GC Thread#3
> 943787 root 20  0 12.5g 9.4g 18692 R 83.3 60.7 12698:51 GC Thread#0
> 943785 root 20  0 12.5g 9.4g 18692 S  0.0 60.7  0:00.00 java
>
>
> We have a 4-core CPU, and all 4 GC threads stay in this state,
> consuming more CPU. The cluster stays in the hung state for 2 days. Then
> after 2 days I saw these threads move on and NiFi come out of the hung state
> for this node, but another node from the same cluster moved into the hung
> state in a similar fashion: 4 threads busy in GC and consuming more CPU.
>
>
> Could you please help me identify what could be the possible reason.
>
> Details:
>
> Nifi 1.12.1
>
> Jdk 11
>
> Zookeeper 3.5.8
>
> 16g memory
>
>
>
> Thanks,
> --
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
>
>


Nifi 1.12.1 cluster is getting hung after a few days (15 days)

2021-01-06 Thread sanjeet rath
Hi All,

Happy New Year :)

I upgraded our cluster from 1.8 to 1.12.1 a few days ago, and everything
has been working fine. However, I observed that NiFi hangs after running for
a few days (nearly 15 days after the NiFi service starts). The issue is that
after login the browser keeps on loading, and in bootstrap.log I
saw this message: "Apache nifi is running at PID () but not responding to
ping requests".
This happened to only one node of a 3-node cluster.

This issue happened 3 times, on different clusters and different nodes.

Every time the issue got fixed by restarting the NiFi service.

During the hung state I tried to see the resource utilisation:

 -> top -n 1 -H -p 943785 (the NiFi process id)


top - 08:26:36 up 40 days, 3:48, 2 users, load average: 5.28, 5.38, 5.43
Threads: 239 total, 4 running, 235 sleeping, 0 stopped, 0 zombie
%Cpu(s): 98.7 us, 1.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 15829.5 total, 610.8 free, 10823.7 used, 4395.0 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 4456.1 avail Mem

   PID USER PR NI  VIRT  RES   SHR S %CPU %MEM    TIME+ COMMAND
943806 root 20  0 12.5g 9.4g 18692 R 88.9 60.7 12698:50 GC Thread#1
943807 root 20  0 12.5g 9.4g 18692 R 88.9 60.7 12698:48 GC Thread#2
943808 root 20  0 12.5g 9.4g 18692 R 88.9 60.7 12698:58 GC Thread#3
943787 root 20  0 12.5g 9.4g 18692 R 83.3 60.7 12698:51 GC Thread#0
943785 root 20  0 12.5g 9.4g 18692 S  0.0 60.7  0:00.00 java


We have a 4-core CPU, and all 4 GC threads stay in this state,
consuming more CPU. The cluster stays in the hung state for 2 days. Then after
2 days I saw these threads move on and NiFi come out of the hung state for
this node, but another node from the same cluster moved into the hung state in
a similar fashion: 4 threads busy in GC and consuming more CPU.


Could you please help me identify what could be the possible reason.
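
Two standard JDK commands that can confirm heap exhaustion on the stuck node;
a sketch using the NiFi PID 943785 from the top output above:

# sample GC utilisation every 5 s; old gen (O) pinned near 100% with a
# climbing FGC count indicates the heap is full and GC is thrashing
jstat -gcutil 943785 5000

# capture a heap dump for offline analysis (e.g. with Eclipse MAT)
jcmd 943785 GC.heap_dump /tmp/nifi-heap.hprof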

Details:

Nifi 1.12.1

Jdk 11

Zookeeper 3.5.8

16g memory



Thanks,
-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Regarding setting up multiple DistributedMapCacheServer controller services

2020-12-16 Thread sanjeet rath
Thanks Mark for clarifying.

On Wed, 16 Dec 2020, 9:20 pm Mark Payne,  wrote:

> Sanjeet,
>
> You can certainly setup multiple instances of the
> DistributedMapCacheServer. I think the point that the article was trying to
> get at is probably that adding a second DistributedMapCacheClient does not
> necessitate adding a second server. Multiple clients can certainly use the
> same server.
>
> That said, there may be benefits to having multiple servers. Specifically,
> for DetectDuplicate, there may be some things to consider. Because the
> server is configured with a max number of elements to add, if you have two
> flows, and Flow A processes 1 million FlowFiles per hour, and Flow B
> processes 100 FlowFiles per hour, you will almost certainly want two
> different servers. That’s because you could have a FlowFile come into Flow
> B, not a duplicate. Then Flow A fills up the cache with 10,000 FlowFiles of
> its own. Then a duplicate comes into Flow B, but the cache doesn’t know
> about it because Flow A has already filled the cache. So in that case, it
> would help to have two. The only downside is that now you have to manage two
> different Controller Services (generally not a problem) and ensure that you
> have firewalls opened, etc., to access it.
>
> Thanks
> -Mark
>
> On Dec 16, 2020, at 10:37 AM, sanjeet rath  wrote:
>
> Hi All,
>
> Hope you are well.
> I need one clarification regarding the DistributedMapCacheServer controller
> service.
> Our build structure is: on the same cluster, 2 teams are working in 2
> different PGs.
> Now both teams are using the DetectDuplicate processor, for which they need
> a DistributedMapCacheClient.
>
> My question is: should I set up 2 different DistributedMapCacheServer
> instances on 2 different ports, or should I use 1
> DistributedMapCacheServer with one port (let's say the default 4557), with
> that port used by both teams (both PGs)?
>
> I have gone through previous internet articles and community discussions,
> where it is mentioned that the DistributedMapCacheServer should be set up
> only once per cluster, with one port, and multiple DMC clients can access this port.
>
> Please advise whether there is any restriction on setting up multiple
> DistributedMapCacheServer instances
> in a cluster.
>
> Thank you in advance,
> Sanjeet
>
>
>


Regarding setting up multiple DistributedMapCacheServer controller services

2020-12-16 Thread sanjeet rath
Hi All,

Hope you are well.
I need one clarification regarding the DistributedMapCacheServer controller
service.
Our build structure is: on the same cluster, 2 teams are working in 2
different PGs.
Now both teams are using the DetectDuplicate processor, for which they need
a DistributedMapCacheClient.

My question is: should I set up 2 different DistributedMapCacheServer
instances on 2 different ports, or should I use 1
DistributedMapCacheServer with one port (let's say the default 4557), with
that port used by both teams (both PGs)?

I have gone through previous internet articles and community discussions,
where it is mentioned that the DistributedMapCacheServer should be set up
only once per cluster, with one port, and multiple DMC clients can access this port.

Please advise whether there is any restriction on setting up multiple
DistributedMapCacheServer instances
in a cluster.

Thank you in advance,
Sanjeet


Nifi 1.12 is not auto-upgrading a default processor if it has a different version of a custom processor

2020-10-14 Thread sanjeet rath
Hi All,

I am facing one issue during the NiFi cluster upgrade from the 1.8 version to
the 1.12 version.

I have a custom processor for the AWSCredentialsProviderControllerService
controller service; this has been built on top of the 1.8 version. The
structure for the custom processor in the flow.xml.gz file is:

<controllerService>
  <name>AWSCredentialsProviderControllerService</name>
  <class>org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderControllerService</class>
  <bundle>
    <group>com.xxx.xx1234</group>
    <artifact>nifi-custom-ping-credentials-controller-service</artifact>
    <version>1.0.0</version>
  </bundle>
</controllerService>


There is also a default AWSCredentialsProviderControllerService controller
service of the 1.8 version present, which has the below configuration in
flow.xml:

<controllerService>
  <name>AWSCredentialsProviderControllerService</name>
  <class>org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderControllerService</class>
  <bundle>
    <group>org.apache.nifi</group>
    <artifact>nifi-aws-nar</artifact>
    <version>1.8.0</version>
  </bundle>
</controllerService>

So I am upgrading the NiFi cluster, which means putting this flow.xml.gz
file from the 1.8 cluster into the 1.12 cluster.

After the cluster is up, I am seeing that the default
AWSCredentialsProviderControllerService (1.8 version) controller is not
auto-upgraded to the 1.12 bundle and becomes invalid with an error:

Error:

missing controller service: validated against "any property" is invalid
because the controller service is of type
org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderControllerService,
but this is not a valid reporting task type.

log I am seeing :

2020-10-14 17:14:56,042 ERROR [main] o.a.nifi.controller.ExtensionBuilder
Could not create Controller Service of type
org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderControllerService
for ID 25defb18-0175-1000-5bb4-febb1b1a21db due to: Unable to find bundle
for coordinate org.apache.nifi:nifi-aws-nar:1.8.0; creating "Ghost"
implementation

2020-10-14 17:14:56,042 INFO [main] o.a.nifi.groups.StandardProcessGroup
StandardControllerServiceNode[service=GhostControllerService[id=25defb18-0175-1000-5bb4-febb1b1a21db,
type=org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderControllerService],
versionedComponentId=null,
processGroup=StandardProcessGroup[identifier=8cb90667-0174-1000-8741-3bfe7f19db7f],
active=false] added to
StandardProcessGroup[identifier=8cb90667-0174-1000-8741-3bfe7f19db7f]

There is no issue with the custom processor
(nifi-custom-ping-credentials-controller-service 1.0.0), as the 1.0.0 version
NAR file is present in the 1.12 cluster. There is also no issue with the other
1.8 version processors & controller services; all are auto-upgraded to the
1.12 version.

Could you please let me know what should be done to avoid this type of
issue in an upgrade?

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Modifying putElasticSearchHTTP processor to use AWS IAM role based awscredentialprovider service for access

2020-09-09 Thread sanjeet rath
Thank you Mike for the quick reply.
I was really struggling with this functionality.
I have gone through the code; what I understood is that I should use the
"nifi-elastic-search-restapi-processor" project.

In it, the JsonQueryElasticsearch processor uses the "Client
Service" controller service, and I need to modify this controller service
to use the AWS code which I shared with you in the mail chain below.

Is my understanding correct?

Regards,
Sanjeet



On Thu, Sep 10, 2020 at 3:18 AM Mike Thomsen  wrote:

> Sanjeet,
>
> As provided, this won't integrate well with the existing NiFi
> processors. You would need to implement it as a controller service
> object and update the processors to use it. Also, if you want to use
> processors based on the official Elasticsearch client API, the ones
> under the "REST API bundle" are the best fit because they already use
> controller services that use the official Elastic clients.
>
> Thanks,
>
> Mike
>
> On Wed, Sep 9, 2020 at 12:14 PM sanjeet rath 
> wrote:
> >
> > Hi ,
> >
> > We are using AWS managed Elasticsearch and our NiFi is hosted on EC2.
> > I have a use case of building a custom processor on top of
> > putElasticSearchHTTP, where it will use an AWS IAM role based
> > AWSCredentialsProvider service to connect to AWS Elasticsearch.
> > This will be similar to PutSQS, where we are using an IAM role based
> > AWSCredentialsProvider service to connect to SQS and it is working fine.
> >
> > But there is no AWSCredentialsProvider controller service available in
> > putElasticSearchHTTP.
> >
> > So my plan is to add an AWSCredentialsProvider controller service to
> > putElasticSearchHTTP, where I will use the below code to connect to
> > Elasticsearch.
> >
> > Is my approach correct ? Could you provide any better thought on this ?
> >
> > public class AmazonElasticsearchServiceSample {
> >
> >     private static String serviceName = "es";
> >     private static String region = "us-west-1";
> >     private static String aesEndpoint = "https://domain.us-west-1.es.amazonaws.com";
> >
> >     private static String payload = "{ \"type\": \"s3\", \"settings\": { \"bucket\": \"your-bucket\", \"region\": \"us-west-1\", \"role_arn\": \"arn:aws:iam::123456789012:role/TheServiceRole\" } }";
> >     private static String snapshotPath = "/_snapshot/my-snapshot-repo";
> >
> >     private static String sampleDocument = "{\"title\":\"Walk the Line\",\"director\":\"James Mangold\",\"year\":\"2005\"}";
> >     private static String indexingPath = "/my-index/_doc";
> >
> >     static final AWSCredentialsProvider credentialsProvider = new DefaultAWSCredentialsProviderChain();
> >
> >     public static void main(String[] args) throws IOException {
> >         RestClient esClient = esClient(serviceName, region);
> >
> >         // Register a snapshot repository
> >         HttpEntity entity = new NStringEntity(payload, ContentType.APPLICATION_JSON);
> >         Request request = new Request("PUT", snapshotPath);
> >         request.setEntity(entity);
> >         // request.addParameter(name, value); // optional parameters
> >         Response response = esClient.performRequest(request);
> >         System.out.println(response.toString());
> >
> >         // Index a document
> >         entity = new NStringEntity(sampleDocument, ContentType.APPLICATION_JSON);
> >         String id = "1";
> >         request = new Request("PUT", indexingPath + "/" + id);
> >         request.setEntity(entity);
> >         // Using a String instead of an HttpEntity sets Content-Type to application/json automatically.
> >         // request.setJsonEntity(sampleDocument);
> >         response = esClient.performRequest(request);
> >         System.out.println(response.toString());
> >     }
> >
> >     public static RestClient esClient(String serviceName, String region) {
> >         AWS4Signer signer = new AWS4Signer();
> >         signer.setServiceName(serviceName);
> >         signer.setRegionName(region);
> >         HttpRequestInterceptor interceptor =
> >                 new AWSRequestSigningApacheInterceptor(serviceName, signer, credentialsProvider);
> >         return RestClient.builder(HttpHost.create(aesEndpoint))
> >                 .setHttpClientConfigCallback(hacb -> hacb.addInterceptorLast(interceptor))
> >                 .build();
> >     }
> > }
> >
> https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-request-signing.html
> >
> >
> >
> > Regards,
> > Sanjeet
> >
> > --
> > Sanjeet Kumar Rath,
> > mob- +91 8777577470
> >
>


-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Modifying putElasticSearchHTTP processor to use AWS IAM role based awscredentialprovider service for access

2020-09-09 Thread sanjeet rath
Hi ,

We are using *AWS managed Elasticsearch* and our *NiFi is hosted on EC2*.
I have a use case of building a custom processor on top of
putElasticSearchHTTP, where it will use an AWS IAM role based
AWSCredentialsProvider service to connect to AWS Elasticsearch.
This will be similar to PutSQS, where we are using an IAM role based
AWSCredentialsProvider service to connect to SQS and it is working fine.

But there is no AWSCredentialsProvider controller service available
in putElasticSearchHTTP.

So my plan is to add an AWSCredentialsProvider controller service to
putElasticSearchHTTP, where I will use the below code to connect to
Elasticsearch.

Is my approach correct? Could you provide any better thought on this?

public class AmazonElasticsearchServiceSample {

    private static String serviceName = "es";
    private static String region = "us-west-1";
    private static String aesEndpoint = "https://domain.us-west-1.es.amazonaws.com";

    private static String payload = "{ \"type\": \"s3\", \"settings\": { \"bucket\": \"your-bucket\", \"region\": \"us-west-1\", \"role_arn\": \"arn:aws:iam::123456789012:role/TheServiceRole\" } }";
    private static String snapshotPath = "/_snapshot/my-snapshot-repo";

    private static String sampleDocument = "{\"title\":\"Walk the Line\",\"director\":\"James Mangold\",\"year\":\"2005\"}";
    private static String indexingPath = "/my-index/_doc";

    static final AWSCredentialsProvider credentialsProvider = new DefaultAWSCredentialsProviderChain();

    public static void main(String[] args) throws IOException {
        RestClient esClient = esClient(serviceName, region);

        // Register a snapshot repository
        HttpEntity entity = new NStringEntity(payload, ContentType.APPLICATION_JSON);
        Request request = new Request("PUT", snapshotPath);
        request.setEntity(entity);
        // request.addParameter(name, value); // optional parameters
        Response response = esClient.performRequest(request);
        System.out.println(response.toString());

        // Index a document
        entity = new NStringEntity(sampleDocument, ContentType.APPLICATION_JSON);
        String id = "1";
        request = new Request("PUT", indexingPath + "/" + id);
        request.setEntity(entity);
        // Using a String instead of an HttpEntity sets Content-Type to application/json automatically.
        // request.setJsonEntity(sampleDocument);
        response = esClient.performRequest(request);
        System.out.println(response.toString());
    }

    public static RestClient esClient(String serviceName, String region) {
        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName(serviceName);
        signer.setRegionName(region);
        HttpRequestInterceptor interceptor =
                new AWSRequestSigningApacheInterceptor(serviceName, signer, credentialsProvider);
        return RestClient.builder(HttpHost.create(aesEndpoint))
                .setHttpClientConfigCallback(hacb -> hacb.addInterceptorLast(interceptor))
                .build();
    }
}
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-request-signing.html



Regards,
Sanjeet

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Jks password migration issue

2020-08-27 Thread sanjeet rath
Hi All,

I am facing an issue during my migration from 1.8 to 1.11.4.

My 1.8 env's JKS key password is "xyz".
The newly created 1.11.4 env's JKS password is "abc".

The encryption key used in the bootstrap file is the same for both envs.


I have modified the password of the 1.11.4 env's JKS file to "xyz" using
keytool.
However, when I change its values ("xyz") in nifi.properties and
authorizers.xml in the 1.11.4 env, I get the below error:

Error in creating authorizer bean:
IllegalArgumentException: cannot decrypt a ciphertext less than 17
characters.

When I copy the encrypted values for the JKS password from the 1.8 env's
nifi.properties and replace them directly in nifi.properties and
authorizers.xml of the 1.11.4 env (as the encryption key is the same in both), I get:

java.security.UnrecoverableKeyException: Get Key failed: Given final
block not properly padded. Such issues can arise if a bad key is used
during decryption.

Could you please help me with how I can use my old JKS password here?

Thanks in advance.

Sanjeet


NIFI migration behaviour for templates (inside the flow.xml.gz file)

2020-08-20 Thread sanjeet rath
Hi All,

I have migrated the flows from the 1.8 to the 1.11.4 version.
I have realised that all the processors are auto-upgraded to 1.11.4; however,
the below two types are not auto-upgraded:

1 -> all the processors & controllers of a *template* are not auto-upgraded
to 1.11.4 in the flow.xml.gz file.

2 -> processors in an invalid state (invalid meaning a relationship is not
connected or not auto-terminated).

Please clarify whether this is expected behaviour or a bug.

Thanks a lot for your help in the migration process.
Sanjeet


Re: Nifi 1.11.4 three node cluster is taking a long time to come up after migration from 1.8

2020-08-14 Thread sanjeet rath
Thanks Mark for the update, I will use Dynatrace to analyse the heap memory
and also look for the flows as you suggested.


On Fri, Aug 14, 2020 at 7:40 PM Mark Payne  wrote:

> Sanjeet,
>
> It’s hard to say what is triggering the OutOfMemoryError. I was able to
> create a similarly sized flow and startup just fine using a 2 GB heap.
> Since you’ve got 8 GB of heap, it’s unlikely related to the size of just
> the flow itself.
>
> The only way to really definitively diagnose an OutOfMemoryError would be
> for you to get a heap dump and analyze that to understand what’s using up
> the heap space.
> But that said, quite often the reason that we see OutOfMemoryError is
> because users tend to extract FlowFile content into attributes, using
> something like ExtractText or EvaluateJsonPath. These processors are
> extremely useful for pulling out small pieces of data such as a timestamp
> or an “id” field or something like that from data and promoting it to an
> attribute. But abusing these types of processors result in huge amounts of
> information being added to FlowFile attributes. This then takes up a huge
> amount of heap. So if you’re using a lot of that type of pattern, I’d
> recommend fixing the flow to avoid that.
>
> As for an upgrade from 1.8 to 1.11, I would guess that you were already at
> the tipping point on 1.8 and just operating below the point of hitting
> OutOfMemoryError. They may well be things in 1.11.4 that take a bit more
> memory but nothing that I know of that would result in very significant
> differences in memory footprint.
>
> Thanks
> -Mark
>
> On Aug 14, 2020, at 9:18 AM, sanjeet rath  wrote:
>
> Hi Mark/Team
>
> Any thoughts on this? Where should i analyse further.
>
>
> Regards,
> Sanjeet
>
>
> On Thu, 13 Aug 2020, 10:32 pm sanjeet rath, 
> wrote:
>
>> Hi Mark,
>> Thanks for the response.
>>
>> My flow.xml.gz file size is 14 Mb.
>>
>> Regards,
>> Sanjeet
>>
>> On Thu, 13 Aug 2020, 10:27 pm Mark Payne,  wrote:
>>
>>> Actually, I take back what I said. I was a little too quick to jump to
>>> conclusions about what the issue was. There was an issue addressed that
>>> should improve startup time. But what you’re seeing here is unrelated, as
>>> you're encountering OutOfMemoryError. The long time is likely related to
>>> garbage collection. How large is your flow.xml.gz file?
>>>
>>> Thanks
>>> -Mark
>>>
>>>
>>> On Aug 13, 2020, at 12:48 PM, Mark Payne  wrote:
>>>
>>> Sanjeet,
>>>
>>> I believe this should be addressed in 1.12.0, which should be released
>>> very soon.
>>>
>>> Thanks
>>> -Mark
>>>
>>> On Aug 13, 2020, at 4:13 AM, sanjeet rath 
>>> wrote:
>>>
>>> Hi Team,
>>>
>>> I have migrated my flow.xml.gz, users.xml and authorizations.xml from the
>>> 1.8 env to the 1.11.4 environment (12k processors are there in the flow).
>>> There are no errors in the log file; the issue is that it takes 10 minutes
>>> for the server to come up with all nodes connected.
>>>
>>> The warning logs which I suspect are causing the delay are
>>> mentioned below.
>>> -> I have set max & min JVM heap to 8gb
>>> -> nifi.cluster.node.connection.timeout=30 sec
>>> -> nifi.cluster.node.read.timeout=30 sec
>>> -> nifi.zookeeper.connect.timeout=15 sec
>>> -> nifi.zookeeper.session.timeout=15 sec
>>>
>>> When I use the default values of 5 and 3 sec respectively, it takes
>>> much longer for the nodes to connect.
>>>
>>> Could you please help me identify why it takes as long as 10
>>> minutes to bring up the NiFi cluster? Thanks in advance.
>>>
>>> 1st suspect warning (appearing 6 to 7 times in the log; disappears once the
>>> server is up with nodes connected):
>>>
>>> WARN [Process Cluster Protocol Request-10] 
>>> o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message 
>>> from “**HOSTIP address**”com due to 
>>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 
>>> protocol message in response to message type: CONNECTION_REQUEST due to 
>>> javax.net.ssl.SSLException: Broken pipe (Write failed)
>>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 
>>> protocol message in response to message type: CONNECTION_REQUEST due to 
>>> javax.net.ssl.SSLException: Broken pipe (Write failed)
>>> at 
>>> org.apache.nifi.cluster.protocol.impl

Re: Nifi 1.11.4 three node cluster is taking a long time to come up after migration from 1.8

2020-08-14 Thread sanjeet rath
Hi Mark/Team

Any thoughts on this? Where should i analyse further.


Regards,
Sanjeet


On Thu, 13 Aug 2020, 10:32 pm sanjeet rath,  wrote:

> Hi Mark,
> Thanks for the response.
>
> My flow.xml.gz file size is 14 Mb.
>
> Regards,
> Sanjeet
>
> On Thu, 13 Aug 2020, 10:27 pm Mark Payne,  wrote:
>
>> Actually, I take back what I said. I was a little too quick to jump to
>> conclusions about what the issue was. There was an issue addressed that
>> should improve startup time. But what you’re seeing here is unrelated, as
>> you're encountering OutOfMemoryError. The long time is likely related to
>> garbage collection. How large is your flow.xml.gz file?
>>
>> Thanks
>> -Mark
>>
>>
>> On Aug 13, 2020, at 12:48 PM, Mark Payne  wrote:
>>
>> Sanjeet,
>>
>> I believe this should be addressed in 1.12.0, which should be released
>> very soon.
>>
>> Thanks
>> -Mark
>>
>> On Aug 13, 2020, at 4:13 AM, sanjeet rath  wrote:
>>
>> Hi Team,
>>
>> I have migrated my flow.xml.gz, users.xml and authorizations.xml from the
>> 1.8 env to the 1.11.4 environment (12k processors are there in the flow).
>> There are no errors in the log file; the issue is that it takes 10 minutes
>> for the server to come up with all nodes connected.
>>
>> The warning logs which I suspect are causing the delay are
>> mentioned below.
>> -> I have set max & min JVM heap to 8gb
>> -> nifi.cluster.node.connection.timeout=30 sec
>> -> nifi.cluster.node.read.timeout=30 sec
>> -> nifi.zookeeper.connect.timeout=15 sec
>> -> nifi.zookeeper.session.timeout=15 sec
>>
>> When I use the default values of 5 and 3 sec respectively, it takes
>> much longer for the nodes to connect.
>>
>> Could you please help me identify why it takes as long as 10
>> minutes to bring up the NiFi cluster? Thanks in advance.
>>
>> 1st suspect warning (appearing 6 to 7 times in the log; disappears once the
>> server is up with nodes connected):
>>
>> WARN [Process Cluster Protocol Request-10] 
>> o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message 
>> from “**HOSTIP address**”com due to 
>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 
>> protocol message in response to message type: CONNECTION_REQUEST due to 
>> javax.net.ssl.SSLException: Broken pipe (Write failed)
>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 
>> protocol message in response to message type: CONNECTION_REQUEST due to 
>> javax.net.ssl.SSLException: Broken pipe (Write failed)
>> at 
>> org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:184)
>> at 
>> org.apache.nifi.io.socket.SocketListener$2$1.run(SocketListener.java:136)
>> at 
>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>> at 
>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>> at java.base/java.lang.Thread.run(Thread.java:834)
>> Caused by: javax.net.ssl.SSLException: Broken pipe (Write failed)
>> at 
>> java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
>> at 
>> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:326)
>> at 
>> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
>> at 
>> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
>> at 
>> java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:980)
>> at 
>> java.base/java.io.DataOutputStream.write(DataOutputStream.java:107)
>> at 
>> java.base/java.io.FilterOutputStream.write(FilterOutputStream.java:108)
>> at 
>> org.apache.nifi.cluster.protocol.jaxb.JaxbProtocolContext$1.marshal(JaxbProtocolContext.java:86)
>> at 
>> org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:182)
>> ... 4 common frames omitted
>> Suppressed: java.net.SocketException: Broken pipe (Write failed)
>> at java.base/java.net.SocketOutputStream.socketWrite0(Native 
>> Method)
>> at 
>> java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
>> at 
>> java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
>> at 
>

Re: Nifi 1.11.4 three node cluster is taking a long time to come up after migration from 1.8

2020-08-13 Thread sanjeet rath
Hi Mark,
Thanks for the response.

My flow.xml.gz file size is 14 Mb.

Regards,
Sanjeet

On Thu, 13 Aug 2020, 10:27 pm Mark Payne,  wrote:

> Actually, I take back what I said. I was a little too quick to jump to
> conclusions about what the issue was. There was an issue addressed that
> should improve startup time. But what you’re seeing here is unrelated, as
> you're encountering OutOfMemoryError. The long time is likely related to
> garbage collection. How large is your flow.xml.gz file?
>
> Thanks
> -Mark
>
>
> On Aug 13, 2020, at 12:48 PM, Mark Payne  wrote:
>
> Sanjeet,
>
> I believe this should be addressed in 1.12.0, which should be released
> very soon.
>
> Thanks
> -Mark
>
> On Aug 13, 2020, at 4:13 AM, sanjeet rath  wrote:
>
> Hi Team,
>
> I have migrated my flow.xml.gz, users.xml and authorizations.xml from the
> 1.8 env to the 1.11.4 environment (12k processors are there in the flow).
> There are no errors in the log file; the issue is that it takes 10 minutes
> for the server to come up with all nodes connected.
>
> The warning logs which I suspect are causing the delay are
> mentioned below.
> -> I have set max & min JVM heap to 8gb
> -> nifi.cluster.node.connection.timeout=30 sec
> -> nifi.cluster.node.read.timeout=30 sec
> -> nifi.zookeeper.connect.timeout=15 sec
> -> nifi.zookeeper.session.timeout=15 sec
>
> When I use the default values of 5 and 3 sec respectively, it takes
> much longer for the nodes to connect.
>
> Could you please help me identify why it takes as long as 10
> minutes to bring up the NiFi cluster? Thanks in advance.
>
> 1st suspect warning (appearing 6 to 7 times in the log; disappears once the
> server is up with nodes connected):
>
> WARN [Process Cluster Protocol Request-10] 
> o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from 
> “**HOSTIP address**”com due to 
> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 
> protocol message in response to message type: CONNECTION_REQUEST due to 
> javax.net.ssl.SSLException: Broken pipe (Write failed)
> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 
> protocol message in response to message type: CONNECTION_REQUEST due to 
> javax.net.ssl.SSLException: Broken pipe (Write failed)
> at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:184)
> at org.apache.nifi.io.socket.SocketListener$2$1.run(SocketListener.java:136)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: javax.net.ssl.SSLException: Broken pipe (Write failed)
> at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
> at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:326)
> at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
> at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
> at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:980)
> at java.base/java.io.DataOutputStream.write(DataOutputStream.java:107)
> at java.base/java.io.FilterOutputStream.write(FilterOutputStream.java:108)
> at org.apache.nifi.cluster.protocol.jaxb.JaxbProtocolContext$1.marshal(JaxbProtocolContext.java:86)
> at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:182)
> ... 4 common frames omitted
> Suppressed: java.net.SocketException: Broken pipe (Write failed)
> at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
> at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
> at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
> at java.base/sun.security.ssl.SSLSocketOutputRecord.encodeAlert(SSLSocketOutputRecord.java:81)
> at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:357)
> ... 11 common frames omitted
> Caused by: java.net.SocketException: Broken pipe (Write failed)
> at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
> at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
> at java.base/java.net.SocketOutputStream.write(SocketOutputStream

Nifi 1.11.4 three node cluster is taking longer time to up after migration from 1.8

2020-08-13 Thread sanjeet rath
Hi Team,

I have migrated my flow.xml.gz, users.xml, authorization.xml from a 1.8 env
to a 1.11.4 environment (there are 12k processors in the flow).
There are no errors in the log file; the issue is that it takes 10 minutes
for the server to come up with all nodes connected.

The warning logs which I suspect are causing the delay are mentioned
below:
-> I have set max & min JVM heap to 8 GB
-> nifi.cluster.node.connection.timeout=30 sec
-> nifi.cluster.node.read.timeout=30 sec
-> nifi.zookeeper.connect.timeout=15 sec
-> nifi.zookeeper.session.timeout=15 sec

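(For comparison, the stock defaults for these properties in
conf/nifi.properties are:

nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs)
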
With the default values (5 & 3 sec respectively), it takes much longer for
the nodes to connect.

Could you please help me identify why it takes a full 10 minutes for the
NiFi cluster to come up. Thanks in advance.

1st suspect warning (appearing 6 to 7 times in the log and disappearing once
the server is up and the nodes are connected):

WARN [Process Cluster Protocol Request-10]
o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol
message from “**HOSTIP address**”com due to
org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
protocol message in response to message type: CONNECTION_REQUEST due
to javax.net.ssl.SSLException: Broken pipe (Write failed)
org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
protocol message in response to message type: CONNECTION_REQUEST due
to javax.net.ssl.SSLException: Broken pipe (Write failed)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:184)
at org.apache.nifi.io.socket.SocketListener$2$1.run(SocketListener.java:136)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: javax.net.ssl.SSLException: Broken pipe (Write failed)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:326)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:980)
at java.base/java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.base/java.io.FilterOutputStream.write(FilterOutputStream.java:108)
at org.apache.nifi.cluster.protocol.jaxb.JaxbProtocolContext$1.marshal(JaxbProtocolContext.java:86)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.dispatchRequest(SocketProtocolListener.java:182)
... 4 common frames omitted
Suppressed: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/sun.security.ssl.SSLSocketOutputRecord.encodeAlert(SSLSocketOutputRecord.java:81)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:357)
... 11 common frames omitted
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/sun.security.ssl.SSLSocketOutputRecord.deliver(SSLSocketOutputRecord.java:319)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:975)
... 8 common frames omitted
2020-08-13 00:23:51,002 WARN [Process Cluster Protocol Request-6]
org.apache.nifi.io.socket.SocketListener Dispatching socket request
encountered exception due to: java.lang.OutOfMemoryError: Java heap
space
java.lang.OutOfMemoryError: Java heap space

2nd suspect warning:

WARN [Process Cluster Protocol Request-6]
org.apache.nifi.io.socket.SocketListener Dispatching socket request
encountered exception due to: java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.Arrays.copyOf(Arrays.java:3745)
at java.base/java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:120)
at java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:95)
at java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:156)
at com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.flushBuffer(UTF8XmlOutput.java:418)
at com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.text(UTF8XmlOutput.java:371)
at

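Since the failure above is a Java heap space error while marshalling the
flow, one mitigation (a sketch only, assuming the stock conf/bootstrap.conf
argument numbering; size the values for your own flow) is to raise the JVM
heap:

# conf/bootstrap.conf -- JVM memory settings (example values)
java.arg.2=-Xms8g
java.arg.3=-Xmx8g
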
Re: Nifi 1.8 to 1.11.4 migration issue

2020-07-30 Thread sanjeet rath
Thanks Mark for the update.
Could you please share the Jira ticket number where this issue was fixed?

Regards,
Sanjeet

On Fri, 31 Jul 2020, 12:51 am Mark Payne,  wrote:

> OK, I suspect you’re probably hitting an issue that was introduced in
> 1.11.x related to fingerprint handling. A change was made to improve how
> some things were handled in the flow fingerprinting logic but resulted in
> very poor performance in some particular cases. I.e., some flows will load
> quickly while others load very slowly. The good news is that this has since
> been fixed, though the fix hasn’t yet been released. I would expect the
> next release to come pretty soon, though.
>
> Thanks
> -Mark
>
> On Jul 30, 2020, at 3:15 PM, sanjeet rath  wrote:
>
> Hi Mark,
> Thanks for responding.
> Sorry, I can't share the log, due to our company policy.
> It keeps printing these 2 warnings in nifi-app.log; once the server is up
> the warnings are gone.
> 1-> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
> 'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.
> 2-> o.a.nifi.fingerprint.FingerprintFactory unable to get the controller
> service of type org.apache.nifi.processors.aws.s3.PutS3Object; its default
> properties will be fingerprinted instead of being ignored
>
> Thanks Boris for the response, but I am not facing that issue; I am facing
> a node disconnection issue.
>
>
>
> On Fri, Jul 31, 2020 at 12:18 AM Mark Payne  wrote:
>
>> Sanjeet,
>>
>> I’d recommend grabbing a thread dump while NiFi is starting up (after
>> it’s been going for 3 minutes or so) and providing that to understand why
>> it’s taking 30 minutes to startup. Specifically, the “main” thread will be
>> of interest.
>>
>> Thanks
>> -Mark
>>
>>
>> On Jul 30, 2020, at 2:04 PM, sanjeet rath  wrote:
>>
>> Hi ,
>> Yes, I am using external ZooKeeper version 3.5. I have already followed all
>> the documents you have shared.
>> Now I am seeing that after waiting around 30 mins the node gets
>> connected; after that there is no node disconnection issue.
>> When I restart, it again takes 30 min for the node to connect to the
>> cluster.
>>
>> I am not able to figure out where the issue is,
>> and the same warning below also comes:
>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
>> 'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.
>>
>> On Thu, Jul 30, 2020 at 8:01 PM Nathan Gough  wrote:
>>
>>> Hi Sanjeet,
>>>
>>> One thing to do is make sure you're using at least Zookeeper 3.5.5:
>>>
>>> Migrating from 1.x.x to 1.10.0
>>>>
>>>>- The RPM creation mechanism appears broken for both Java 8 and
>>>>Java 11 binaries if you wanted to build those yourself.  Should be 
>>>> resolved
>>>>in a later release.
>>>>- We've removed the following nars from the default convenience
>>>>binary.  These include kite-nar, kafka-0-8-nar, flume-nar, media-nar,
>>>>druid-controller-service-api-nar, druid-nar, other-graph-services-nar.  
>>>> You
>>>>can still get them from the various artifact repositories and use them 
>>>> in
>>>>your flows but we cannot bundle them due to space limitations by 
>>>> default.
>>>>- The "Auto-Create Partitions" property was removed from the
>>>>PutHive3Streaming processor, causing existing instances of this 
>>>> processor
>>>>to become invalid. The property would appear as a unsupported 
>>>> user-defined
>>>>property and must be removed to return the processor to a valid state.
>>>>- The Zookeeper dependency that NiFi uses for state management and
>>>>cluster elections was upgraded to v3.5.5. From v3.5.x onwards, 
>>>> *Zookeeper
>>>>changed the zookeeper.properties file format and as a result NiFi users
>>>>using an existing embedded zookeeper will need to adjust their existing
>>>>zookeeper.properties file accordingly*. More details here:
>>>>
>>>> https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html#sc_reconfig_clientport
>>>>    .
>>>>For new deployments of the 1.10.0 release onwards, NiFi will be
>>>>packaged with an upda

Re: Nifi 1.8 to 1.11.4 migration issue

2020-07-30 Thread sanjeet rath
Hi Mark,
Thanks for responding.
Sorry, I can't share the log, due to our company policy.
It keeps printing these 2 warnings in nifi-app.log; once the server is up the
warnings are gone.
1-> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.
2-> o.a.nifi.fingerprint.FingerprintFactory unable to get the controller
service of type org.apache.nifi.processors.aws.s3.PutS3Object; its default
properties will be fingerprinted instead of being ignored

Thanks Boris for the response, but I am not facing that issue; I am facing a
node disconnection issue.



On Fri, Jul 31, 2020 at 12:18 AM Mark Payne  wrote:

> Sanjeet,
>
> I’d recommend grabbing a thread dump while NiFi is starting up (after it’s
> been going for 3 minutes or so) and providing that to understand why it’s
> taking 30 minutes to startup. Specifically, the “main” thread will be of
> interest.
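>
> For example (paths relative to the NiFi install; both forms are standard
> tooling):
>
>   ./bin/nifi.sh dump thread-dump.txt
>   # or, with the JDK tooling:
>   jstack <nifi-pid> > thread-dump.txt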
>
> Thanks
> -Mark
>
>
> On Jul 30, 2020, at 2:04 PM, sanjeet rath  wrote:
>
> Hi ,
> Yes, I am using external ZooKeeper version 3.5. I have already followed all
> the documents you have shared.
> Now I am seeing that after waiting around 30 mins the node gets connected;
> after that there is no node disconnection issue.
> When I restart, it again takes 30 min for the node to connect to the
> cluster.
>
> I am not able to figure out where the issue is,
> and the same warning below also comes:
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
> 'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.
>
> On Thu, Jul 30, 2020 at 8:01 PM Nathan Gough  wrote:
>
>> Hi Sanjeet,
>>
>> One thing to do is make sure you're using at least Zookeeper 3.5.5:
>>
>> Migrating from 1.x.x to 1.10.0
>>>
>>>- The RPM creation mechanism appears broken for both Java 8 and Java
>>>11 binaries if you wanted to build those yourself.  Should be resolved 
>>> in a
>>>later release.
>>>- We've removed the following nars from the default convenience
>>>binary.  These include kite-nar, kafka-0-8-nar, flume-nar, media-nar,
>>>druid-controller-service-api-nar, druid-nar, other-graph-services-nar.  
>>> You
>>>can still get them from the various artifact repositories and use them in
>>>your flows but we cannot bundle them due to space limitations by default.
>>>- The "Auto-Create Partitions" property was removed from the
>>>PutHive3Streaming processor, causing existing instances of this processor
>>>to become invalid. The property would appear as a unsupported 
>>> user-defined
>>>property and must be removed to return the processor to a valid state.
>>>- The Zookeeper dependency that NiFi uses for state management and
>>>cluster elections was upgraded to v3.5.5. From v3.5.x onwards, *Zookeeper
>>>changed the zookeeper.properties file format and as a result NiFi users
>>>using an existing embedded zookeeper will need to adjust their existing
>>>zookeeper.properties file accordingly*. More details here:
>>>
>>> https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html#sc_reconfig_clientport
>>>.
>>>For new deployments of the 1.10.0 release onwards, NiFi will be
>>>packaged with an updated template zookeeper.properties file.
>>>To update an existing zookeeper.properties file however, edit the
>>>conf/zookeeper.properties file:
>>>   1. Remove the clientPort=2181 line (or whatever your port number
>>>   may be)
>>>   2. Add the client port to the end of the server string eg:
>>>   server.1=localhost:2888:3888;2181
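>>>
>>> For example, an existing conf/zookeeper.properties would change roughly
>>> like this (hostnames and ports are illustrative):
>>>
>>>   # before (ZooKeeper 3.4.x style)
>>>   clientPort=2181
>>>   server.1=localhost:2888:3888
>>>
>>>   # after (ZooKeeper 3.5.x style; the client port moves onto the server line)
>>>   server.1=localhost:2888:3888;2181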
>>>
>>>
>>  You may also need to ensure your server certificates contain a DNS entry
>> in the Subject Alternative Name (SAN) for their respective hosts.
>>
>> On Thu, Jul 30, 2020 at 10:17 AM sanjeet rath 
>> wrote:
>>
>>> Hi,
>>>
>>> The error is,
>>>
>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
>>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
>>> 'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.
>>>
>>> Before migration there was no issue in the cluster and the cluster nodes
>>> were connected. I just brought flow.xml.gz, users.xml & authorization.xml
>>> over from another 1.8
>>> 

Re: Nifi 1.8 to 1.11.4 migration issue

2020-07-30 Thread sanjeet rath
Hi ,
Yes, I am using external ZooKeeper version 3.5. I have already followed all
the documents you have shared.
Now I am seeing that after waiting around 30 mins the node gets connected;
after that there is no node disconnection issue.
When I restart, it again takes 30 min for the node to connect to the cluster.

I am not able to figure out where the issue is, and the same warning below
also comes:
o.apache.nifi.controller.FlowController Failed to send heartbeat due to
org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.

On Thu, Jul 30, 2020 at 8:01 PM Nathan Gough  wrote:

> Hi Sanjeet,
>
> One thing to do is make sure you're using at least Zookeeper 3.5.5:
>
> Migrating from 1.x.x to 1.10.0
>>
>>- The RPM creation mechanism appears broken for both Java 8 and Java
>>11 binaries if you wanted to build those yourself.  Should be resolved in 
>> a
>>later release.
>>- We've removed the following nars from the default convenience
>>binary.  These include kite-nar, kafka-0-8-nar, flume-nar, media-nar,
>>druid-controller-service-api-nar, druid-nar, other-graph-services-nar.  
>> You
>>can still get them from the various artifact repositories and use them in
>>your flows but we cannot bundle them due to space limitations by default.
>>- The "Auto-Create Partitions" property was removed from the
>>PutHive3Streaming processor, causing existing instances of this processor
>>to become invalid. The property would appear as a unsupported user-defined
>>property and must be removed to return the processor to a valid state.
>>- The Zookeeper dependency that NiFi uses for state management and
>>cluster elections was upgraded to v3.5.5. From v3.5.x onwards, *Zookeeper
>>changed the zookeeper.properties file format and as a result NiFi users
>>using an existing embedded zookeeper will need to adjust their existing
>>zookeeper.properties file accordingly*. More details here:
>>
>> https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html#sc_reconfig_clientport
>>.
>>For new deployments of the 1.10.0 release onwards, NiFi will be
>>packaged with an updated template zookeeper.properties file.
>>To update an existing zookeeper.properties file however, edit the
>>conf/zookeeper.properties file:
>>   1. Remove the clientPort=2181 line (or whatever your port number
>>   may be)
>>   2. Add the client port to the end of the server string eg:
>>   server.1=localhost:2888:3888;2181
>>
>>
>  You may also need to ensure your server certificates contain a DNS entry
> in the Subject Alternative Name (SAN) for their respective hosts.
>
> On Thu, Jul 30, 2020 at 10:17 AM sanjeet rath 
> wrote:
>
>> Hi,
>>
>> The error is,
>>
>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
>> org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
>> 'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.
>>
>> Before migration there was no issue in the cluster and the cluster nodes
>> were connected. I just brought flow.xml.gz, users.xml & authorization.xml
>> over from another 1.8 server.
>>
>> Thanks & Regards
>> Sanjeet
>>
>>
>> On Thu, 30 Jul 2020, 7:37 pm sanjeet rath, 
>> wrote:
>>
>>> In the nifi-app.log file, I am getting a constant error:
>>>
>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
>>> org.apache.nifi.cluster.protocol.ProtocolException
>>>
>>> On Thu, 30 Jul 2020, 6:54 pm Mark Payne,  wrote:
>>>
>>>> Sanjeet,
>>>>
>>>> What error are you receiving?
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On Jul 30, 2020, at 9:21 AM, sanjeet rath 
>>>> wrote:
>>>>
>>>> 
>>>> Hi Team,
>>>>
>>>> Any help on my trailing mail query will be highly appreciated. I am
>>>> still struggling to fix the issue.
>>>>
>>>> In the nifi-app.log file, I am getting a constant error:
>>>>
>>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
>>>> org.apache.nifi.cluster.protocol.ProtocolException.
>>>>
>>>> Thanks & Regards,
>>>> Sanjeet
>>>>
>>>> On Tue, 28 Jul 2020, 10:21 pm sanjeet rath, 
>>>> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> I am facing a 

Re: Nifi 1.8 to 1.11.4 migration issue

2020-07-30 Thread sanjeet rath
Hi,

The error is,

o.apache.nifi.controller.FlowController Failed to send heartbeat due to
org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling
'HEARTBEAT' protocol message due to javax.net.ssl.SSLException: Read timed out.

Before migration there was no issue in the cluster and the cluster nodes were
connected. I just brought flow.xml.gz, users.xml & authorization.xml over
from another 1.8 server.

Thanks & Regards
Sanjeet


On Thu, 30 Jul 2020, 7:37 pm sanjeet rath,  wrote:

> In the nifi-app.log file, I am getting a constant error:
>
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
> org.apache.nifi.cluster.protocol.ProtocolException
>
> On Thu, 30 Jul 2020, 6:54 pm Mark Payne,  wrote:
>
>> Sanjeet,
>>
>> What error are you receiving?
>>
>> Sent from my iPhone
>>
>> On Jul 30, 2020, at 9:21 AM, sanjeet rath  wrote:
>>
>> 
>> Hi Team,
>>
>> Any help on my trailing mail query will be highly appreciated. I am still
>> struggling to fix the issue.
>>
>> In the nifi-app.log file, I am getting a constant error:
>>
>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
>> org.apache.nifi.cluster.protocol.ProtocolException.
>>
>> Thanks & Regards,
>> Sanjeet
>>
>> On Tue, 28 Jul 2020, 10:21 pm sanjeet rath, 
>> wrote:
>>
>>> Hi Team,
>>>
>>> I am facing a weird issue while doing a migration from 1.8 to 1.11.4.
>>> I am bringing flow.xml.gz, users.xml and authorization.xml from a NiFi
>>> cluster (1.8 version) to a newly created 1.11.4 cluster.
>>>
>>> When I start the 1.11.4 cluster with these new files, I get a node
>>> disconnected error (a "this node is not connected to the cluster" pop-up).
>>>
>>> However, with only users.xml and authorization.xml from 1.8 and the
>>> default 1.11.4 flow.xml.gz, I get no error and all 3 nodes connect in the
>>> new cluster.
>>> In fact, when I bring in another 1.8 environment's flow.xml.gz file, it
>>> works fine.
>>>
>>> So it is only for this specific flow.xml.gz file that my nodes get
>>> disconnected.
>>>
>>> It would be really helpful if you could point out where I need to check
>>> to resolve this issue.
>>>
>>> I am using external ZooKeeper version 3.5.
>>>
>>> Thanks in advance,
>>> Sanjeet
>>>
>>>


Re: Nifi 1.8 to 1.11.4 migration issue

2020-07-30 Thread sanjeet rath
In the nifi-app.log file, I am getting a constant error:

o.apache.nifi.controller.FlowController Failed to send heartbeat due to
org.apache.nifi.cluster.protocol.ProtocolException

On Thu, 30 Jul 2020, 6:54 pm Mark Payne,  wrote:

> Sanjeet,
>
> What error are you receiving?
>
> Sent from my iPhone
>
> On Jul 30, 2020, at 9:21 AM, sanjeet rath  wrote:
>
> 
> Hi Team,
>
> Any help on my trailing mail query will be highly appreciated. I am still
> struggling to fix the issue.
>
> In the nifi-app.log file, I am getting a constant error:
>
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to
> org.apache.nifi.cluster.protocol.ProtocolException.
>
> Thanks & Regards,
> Sanjeet
>
> On Tue, 28 Jul 2020, 10:21 pm sanjeet rath, 
> wrote:
>
>> Hi Team,
>>
>> I am facing a weird issue while doing a migration from 1.8 to 1.11.4.
>> I am bringing flow.xml.gz, users.xml and authorization.xml from a NiFi
>> cluster (1.8 version) to a newly created 1.11.4 cluster.
>>
>> When I start the 1.11.4 cluster with these new files, I get a node
>> disconnected error (a "this node is not connected to the cluster" pop-up).
>>
>> However, with only users.xml and authorization.xml from 1.8 and the default
>> 1.11.4 flow.xml.gz, I get no error and all 3 nodes connect in the new
>> cluster.
>> In fact, when I bring in another 1.8 environment's flow.xml.gz file, it
>> works fine.
>>
>> So it is only for this specific flow.xml.gz file that my nodes get
>> disconnected.
>>
>> It would be really helpful if you could point out where I need to check to
>> resolve this issue.
>>
>> I am using external ZooKeeper version 3.5.
>>
>> Thanks in advance,
>> Sanjeet
>>
>>


Re: Nifi 1.8 to 1.11.4 migration issue

2020-07-30 Thread sanjeet rath
Hi Team,

Any help on my trailing mail query will be highly appreciated. I am still
struggling to fix the issue.

In the nifi-app.log file, I am getting a constant error:

o.apache.nifi.controller.FlowController Failed to send heartbeat due to
org.apache.nifi.cluster.protocol.ProtocolException.

Thanks & Regards,
Sanjeet

On Tue, 28 Jul 2020, 10:21 pm sanjeet rath,  wrote:

> Hi Team,
>
> I am facing a weird issue while doing a migration from 1.8 to 1.11.4.
> I am bringing flow.xml.gz, users.xml and authorization.xml from a NiFi
> cluster (1.8 version) to a newly created 1.11.4 cluster.
>
> When I start the 1.11.4 cluster with these new files, I get a node
> disconnected error (a "this node is not connected to the cluster" pop-up).
>
> However, with only users.xml and authorization.xml from 1.8 and the default
> 1.11.4 flow.xml.gz, I get no error and all 3 nodes connect in the new
> cluster.
> In fact, when I bring in another 1.8 environment's flow.xml.gz file, it
> works fine.
>
> So it is only for this specific flow.xml.gz file that my nodes get
> disconnected.
>
> It would be really helpful if you could point out where I need to check to
> resolve this issue.
>
> I am using external ZooKeeper version 3.5.
>
> Thanks in advance,
> Sanjeet
>
>


Nifi 1.8 to 1.11.4 migration issue

2020-07-28 Thread sanjeet rath
Hi Team,

I am facing a weird issue while doing a migration from 1.8 to 1.11.4.
I am bringing flow.xml.gz, users.xml and authorization.xml from a NiFi
cluster (1.8 version) to a newly created 1.11.4 cluster.

When I start the 1.11.4 cluster with these new files, I get a node
disconnected error (a "this node is not connected to the cluster" pop-up).

However, with only users.xml and authorization.xml from 1.8 and the default
1.11.4 flow.xml.gz, I get no error and all 3 nodes connect in the new
cluster.
In fact, when I bring in another 1.8 environment's flow.xml.gz file, it works
fine.

So it is only for this specific flow.xml.gz file that my nodes get
disconnected.

It would be really helpful if you could point out where I need to check to
resolve this issue.

I am using external ZooKeeper version 3.5.

Thanks in advance,
Sanjeet


Re: 3 node of nifi generating 3 different flow files

2020-06-18 Thread sanjeet rath
Hi Andy,
I already tried deleting the flow.xml.gz, users.xml and authorization.xml
files on all 3 nodes,
but I am still getting the same error.

nifi-app.log is showing an exception that the "local flow is different than
cluster flow".

Also, the new flow.xml.gz file has a different 'rootGroupId' on each node.
Thanks.
Sanjeet

On Fri, 19 Jun 2020, 5:16 am Andy LoPresto,  wrote:

> Sanjeet,
>
> If this is for a new cluster, you can delete the flow.xml.gz file from all
> nodes and restart NiFi. When the nodes start up again, they will create the
> new flow definition file on each node respectively with the synced root
> process group ID.
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> He/Him
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Jun 18, 2020, at 4:44 PM, sanjeet rath  wrote:
>
> Thanks, Andy for your quick response.
> I did it as you suggested; now NiFi is using its default encryption.
>
> But I am still getting the error: nifi-app.log shows an exception that the
> "local flow is different than cluster flow".
>
> As per my analysis, the issue is that there are 3 different flow.xml.gz
> files, one on each node. The 'rootGroupId' of each flow.xml.gz file is
> different.
>
> As per my understanding, the flow.xml.gz file should be the same across all
> nodes, so I copied the flow.xml.gz file from one node to the other two
> nodes to sync them, but I am still getting the same error.
> The ZooKeeper is connected properly to all nodes.
>
> Thanks,
> Sanjeet
>
>
>
> On Thu, 18 Jun 2020, 10:55 pm Andy LoPresto,  wrote:
>
>> You do not need to manually run this command, as it migrates the
>> encryption key used for sensitive processor properties (e.g. database
>> passwords in the flow) that are stored in your flow.xml.gz file. You only
>> need this command when you have a cluster which has been using one
>> encryption key to secure these values and you want to migrate to a new key.
>>
>> When starting a new cluster, set the nifi.sensitive.props.key value to
>> the desired value on all cluster nodes, and NiFi will automatically encrypt
>> and decrypt the sensitive processor properties with it.
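>>
>> For example (the key value here is only a placeholder), every node's
>> conf/nifi.properties would carry the identical line:
>>
>>   nifi.sensitive.props.key=someSharedSecretValue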
>>
>>
>> Andy LoPresto
>> alopre...@apache.org
>> *alopresto.apa...@gmail.com *
>> He/Him
>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>
>> On Jun 18, 2020, at 3:55 AM, sanjeet rath  wrote:
>>
>> Hi ,
>>
>> I got an issue while starting a 3 node NiFi cluster: nifi-app.log is
>> showing an exception that the "local flow file is different than cluster
>> flow". The issue is that I got 3 different flow files, one on each node;
>> the 'rootGroupId' of each flow file is different.
>>
>> Previously I faced no issue creating a 3 node cluster, but this time I
>> made a change: during creation of the cluster I encrypted the flow file
>> using the NiFi toolkit (encrypt-config.sh):
>>
>>  encrypt-config.sh -f /nifi/flow.xml.gz -n nifi/nifi.properties
>> --bootstrapconf /nifi/bootstrap.conf -s password -x
>>
>> Please note that there are no flows available on the canvas; it's a new
>> NiFi cluster.
>>
>> Could you please help me understand what I am doing wrong?
>>
>> Thanks a lot in advance for helping me.
>>
>> Regards,
>> Sanjeet
>>
>>
>>
>


Re: 3 node of nifi generating 3 different flow files

2020-06-18 Thread sanjeet rath
Thanks, Andy for your quick response.
I did it as you suggested; now NiFi is using its default encryption.

But I am still getting the error: nifi-app.log shows an exception that the
"local flow is different than cluster flow".

As per my analysis, the issue is that there are 3 different flow.xml.gz
files, one on each node. The 'rootGroupId' of each flow.xml.gz file is
different.

As per my understanding, the flow.xml.gz file should be the same across all
nodes, so I copied the flow.xml.gz file from one node to the other two nodes
to sync them, but I am still getting the same error.
The ZooKeeper is connected properly to all nodes.

Thanks,
Sanjeet



On Thu, 18 Jun 2020, 10:55 pm Andy LoPresto,  wrote:

> You do not need to manually run this command, as it migrates the
> encryption key used for sensitive processor properties (e.g. database
> passwords in the flow) that are stored in your flow.xml.gz file. You only
> need this command when you have a cluster which has been using one
> encryption key to secure these values and you want to migrate to a new key.
>
> When starting a new cluster, set the nifi.sensitive.props.key value to the
> desired value on all cluster nodes, and NiFi will automatically encrypt and
> decrypt the sensitive processor properties with it.
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> He/Him
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Jun 18, 2020, at 3:55 AM, sanjeet rath  wrote:
>
> Hi ,
>
> I got an issue while starting a 3 node NiFi cluster: nifi-app.log is
> showing an exception that the "local flow file is different than cluster
> flow". The issue is that I got 3 different flow files, one on each node;
> the 'rootGroupId' of each flow file is different.
>
> Previously I faced no issue creating a 3 node cluster, but this time I made
> a change: during creation of the cluster I encrypted the flow file using
> the NiFi toolkit (encrypt-config.sh):
>
>  encrypt-config.sh -f /nifi/flow.xml.gz -n nifi/nifi.properties
> --bootstrapconf /nifi/bootstrap.conf -s password -x
>
> Please note that there are no flows available on the canvas; it's a new
> NiFi cluster.
>
> Could you please help me understand what I am doing wrong?
>
> Thanks a lot in advance for helping me.
>
> Regards,
> Sanjeet
>
>
>


3 node of nifi generating 3 different flow files

2020-06-18 Thread sanjeet rath
Hi ,

I got an issue while starting a 3 node NiFi cluster: nifi-app.log is showing
an exception that the "local flow file is different than cluster flow".
The issue is that I got 3 different flow files, one on each node; the
'rootGroupId' of each flow file is different.

Previously I faced no issue creating a 3 node cluster, but this time I made a
change: during creation of the cluster I encrypted the flow file using the
NiFi toolkit (encrypt-config.sh):

 encrypt-config.sh -f /nifi/flow.xml.gz -n nifi/nifi.properties
--bootstrapconf /nifi/bootstrap.conf -s password -x

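(For reference: in the stock toolkit the bootstrap flag is spelled
-b or --bootstrapConf, so a corrected invocation, with example paths and an
example key, would look like:

encrypt-config.sh -f /nifi/flow.xml.gz -n /nifi/nifi.properties \
  -b /nifi/bootstrap.conf -s password -x)
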
Please note that there are no flows available on the canvas; it's a new
NiFi cluster.

Could you please help me understand what I am doing wrong?

Thanks a lot in advance for helping me.

Regards,
Sanjeet


AWS custom processor build failing on upgrading nifi version to 1.11 from 1.8

2020-06-05 Thread sanjeet rath
Hi,

I have created a custom AWS controller service in a new Java project, where
I created a new Java file that is the same as
AWSCredentialsProviderControllerService with some of my changes:

public class AWSCredentialsProviderControllerService
        extends AbstractControllerService
        implements AWSCredentialsProviderService {
    // ... my changes ...
}

In my project I added nifi-aws-processors 1.8 as a pom dependency to make it
work and build the NAR file, and this was working fine.

However, when I change the pom version of nifi-aws-processors to 1.11.4,

it gives me a compilation error on AWSCredentialsProviderService.

When I explicitly add nifi-aws-service-api as a dependency in my pom file
along with nifi-aws-processors, it works fine.

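For reference, the explicit dependency that makes the build pass looks
roughly like this (the version and scope shown are assumptions based on the
setup described above):

<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-aws-service-api</artifactId>
    <version>1.11.4</version>
    <scope>provided</scope>
</dependency>
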
My question is: nifi-aws-service-api
is already a pom dependency in nifi-aws-processors' own pom file.

So adding only nifi-aws-processors as a dependency in my pom file should work,
and it did work in the 1.8 version.


Could you please help me understand:
are there build-related changes in the 1.11.4 version?

Thanks and regards
Sanjeet


Re: Storing output of shellscript in s3 bucket

2020-04-10 Thread sanjeet rath
Hi,

Thanks for your quick reply. Yeah, I am using ExecuteStreamCommand to execute
the below script:

zk-migrator.sh -s -z
destinationHostname:destinationClientPort/destinationRootPath/components
-f /path/to/export/zk-source-data.json

Can the zk-source-data.json file be written as the output flow file of the
above processor? If yes, please let me know how.

Many thanks
sanjeet

On Fri, 10 Apr 2020, 9:25 pm Bryan Bende,  wrote:

> Hello,
>
> Assuming you are using the ExecuteStreamCommand processors, then the
> output of the command is written to the flow file content. So if your
> command writes the JSON to stdout, then it should end up in the flow file.
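>
> For example, a small wrapper script (hostnames and paths here are
> hypothetical) run by ExecuteStreamCommand could do the work and then print
> the JSON, so that its stdout becomes the outgoing flow file routed straight
> to PutS3Object:
>
>   #!/bin/sh
>   # do the real work, writing the JSON result to a temp file
>   zk-migrator.sh -s -z destHost:2181/destRootPath/components \
>       -f /tmp/zk-source-data.json
>   # print the JSON to stdout; ExecuteStreamCommand captures stdout
>   # as the content of the outgoing flow file
>   cat /tmp/zk-source-data.json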
>
> Thanks,
>
> Bryan
>
>
> On Fri, Apr 10, 2020 at 11:23 AM sanjeet rath 
> wrote:
>
>> Hi,
>>
>> I have a scenario where I have to execute a shell script; the output of
>> the script is a JSON file and I want to put the file in an S3 bucket.
>>
>> I am able to do it by building 2 flows:
>>
>> One flow to run ExecuteStreamCommand and store the JSON file in a folder
>> on the system.
>>
>> Then another flow to get the file from the system and use PutS3Object to
>> put it in the S3 bucket. But building the write and the read in separate
>> flows is a bit risky, and there is no way I can make them dependent, as
>> GetFile has no incoming connection.
>>
>> Is it possible not to store the JSON file (the shell script output) on the
>> file system, and instead move it to the S3 bucket as a flow file?
>> That is, in one flow, execute the shell script and store the output in S3.
>>
>> Regards,
>> Sanjeet
>>
>>


Storing output of shellscript in s3 bucket

2020-04-10 Thread sanjeet rath
Hi,

I have a scenario where I have to execute a shell script; the output of the
script is a JSON file and I want to put the file in an S3 bucket.

I am able to do it by building 2 flows:

One flow to run ExecuteStreamCommand and store the JSON file in a folder on
the system.

Then another flow to get the file from the system and use PutS3Object to put
it in the S3 bucket. But building the write and the read in separate flows
is a bit risky, and there is no way I can make them dependent, as GetFile has
no incoming connection.


Is it possible not to store the JSON file (the shell script output) on the
file system, and instead move it to the S3 bucket as a flow file?
That is, in one flow, execute the shell script and store the output in S3.

Regards,
Sanjeet


Re: Nifi zookeeper state migration to different cluster

2020-03-27 Thread sanjeet rath
Hi,

If someone could help me with my trailing query, it would be
really helpful.
I eagerly await your response.

Regards,
Sanjeet

On Thu, Mar 26, 2020 at 3:20 PM sanjeet rath  wrote:

>
> Hi Team,
>
> I am trying to achieve Nifi migration to a different cluster using
> zookeeper state migration (in case of failure of cluster)
>
> Steps i am doing.
>
> zk-migrator.sh -r -z
> sourceHostname:sourceClientPort/sourceRootPath/components -f
> /path/to/export/zk-source-data.json
>
> zk-migrator.sh -s -z
> destinationHostname:destinationClientPort/destinationRootPath/components -f
> /path/to/export/zk-source-data.json
> also moved the flow.xml.gz, users.xml, authorization.xml from source
> cluster to the destination cluster.
>
> Then it works fine, meaning it captures the processor state IDs from the
> old cluster.
>
> But my question is:
>
> Is the above approach correct for moving to a new cluster with state when
> there is a failure in the old cluster?
> And one more thing: as per the ZK migration doc, the flow needs to be
> stopped before running the ZK migration export script.
> Is that mandatory?
>
> If there is any other approach to achieve this, please suggest it.
>
> Thanks,
> Sanjeet
>
>
>
>
>
>
>
>
>
>
> --
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
>

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Nifi zookeeper state migration to different cluster

2020-03-26 Thread sanjeet rath
Hi Team,

I am trying to achieve Nifi migration to a different cluster using
zookeeper state migration (in case of failure of cluster)

Steps i am doing.

zk-migrator.sh -r -z
sourceHostname:sourceClientPort/sourceRootPath/components -f
/path/to/export/zk-source-data.json

zk-migrator.sh -s -z
destinationHostname:destinationClientPort/destinationRootPath/components -f
/path/to/export/zk-source-data.json
also moved the flow.xml.gz, users.xml, authorization.xml from source
cluster to the destination cluster.

Then it works fine, meaning it captures the processor state IDs from the old
cluster.

But my question is:

Is the above approach correct for moving to a new cluster with state when
there is a failure in the old cluster?
And one more thing: as per the ZK migration doc, the flow needs to be stopped
before running the ZK migration export script.
Is that mandatory?

If there is any other approach to achieve this, please suggest it.

Thanks,
Sanjeet










-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Exception is showing in nifi UI users page

2020-03-09 Thread sanjeet rath
Hi team,

I have the AD group name from the authorizers.xml file; I want to delete the
corresponding unique group identifier value from the authorizations.xml file.

Could you please help me find the file where the relation map between the AD
group name and the corresponding unique group identifier value is kept?

Thanks a lot,
Sanjeet

On Fri, 6 Mar 2020, 8:07 pm sanjeet rath,  wrote:

> Yes Matt,
>
> But I can't upgrade NiFi for the next 2 months.
> As I am not able to add users, it is a blocker.
>
> That's why I am asking how the AD group name from the authorizers file is
> mapped to the group identifier in the authorizations.xml file.
>
> Is there any place where we are storing this mapping?
>
> So that I can delete it from authorizations.xml.
>
> Is there any other workaround?
>
> Thanks,
> Sanjeet
>
> On Fri, 6 Mar 2020, 7:24 pm Matt Gilman,  wrote:
>
>> I would recommend upgrading. If I recall correctly, there were actually a
>> couple of JIRAs addressing different but similar issues dealing with users
>> or groups being removed from a directory server. Even if you could identify
>> and remove the problematic entries, you could possibly hit this again if
>> another user or group were removed later.
>>
>> Thanks
>>
>> On Fri, Mar 6, 2020 at 6:35 AM sanjeet rath 
>> wrote:
>>
>>> Hi ,
>>>
>>> The AD group names are present in the authorizers.xml file.
>>> In the authorizations.xml file, the group identifier (unique ID) is
>>> present inside the policy identifier tag.
>>> So could you please help me understand where the AD group name in
>>> authorizers.xml and the group identifier in authorizations.xml map
>>> together, so that I can delete it in authorizers.xml?
>>>
>>> Thanks,
>>> Sanjeet
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Mar 6, 2020 at 12:04 AM sanjeet rath 
>>> wrote:
>>>
>>>> Thanks Matt,
>>>>
>>>> For this quick response.
>>>>
>>>> Thanks a lot ,
>>>>
>>>> Sanjeet
>>>>
>>>> On Thu, 5 Mar 2020, 11:43 pm Matt Gilman, 
>>>> wrote:
>>>>
>>>>> I just responded to your StackOverflow post:
>>>>>
>>>>>
>>>>> https://stackoverflow.com/questions/60551242/nifi-user-addition-gives-u-null-pointer-exception/60551638#60551638
>>>>>
>>>>> I believe you'll need to upgrade to a version that addresses the BUG.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> On Thu, Mar 5, 2020 at 1:10 PM sanjeet rath 
>>>>> wrote:
>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> I have been using a NiFi cluster for 1 month & I am able to add new
>>>>>> users, policies, everything. It is LDAP-based user addition.
>>>>>> But suddenly, for the last 2 days, on the NiFi user addition page
>>>>>> (after clicking on Users in the NiFi UI) I am getting the error message
>>>>>> "An unexpected error has occurred. Please click logs for more details."
>>>>>> In nifi-user.log I found the below log.
>>>>>>
>>>>>> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has
>>>>>> occurred: java.lang.NullPointerException. Returning Internal Server Error
>>>>>> response. java.lang.NullPointerException: null at
>>>>>> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
>>>>>> at
>>>>>> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>>>>>> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) 
>>>>>> at
>>>>>> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at
>>>>>> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>>>>>> at
>>>>>> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>>>>>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
>>>>>> at
>>>>>> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>>>>>>
>>>>>> I am not able to figure out where I should start looking.
>>>>>> Could someone please point me to a starting point: where to check
>>>>>> and what needs to be checked?
>>>>>>
>>>>>> Thanks,
>>>>>> Sanjeet
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Sanjeet Kumar Rath,
>>>>>> mob- +91 8777577470
>>>>>>
>>>>>>
>>>
>>> --
>>> Sanjeet Kumar Rath,
>>> mob- +91 8777577470
>>>
>>>


Re: Exception is showing in nifi UI users page

2020-03-06 Thread sanjeet rath
Yes Matt,

But I can't upgrade NiFi for the next 2 months.
As I am not able to add users, it is a blocker.

That's why I am asking how the AD group name from the authorizers file is
mapped to the group identifier in the authorizations.xml file.

Is there any place where we are storing this mapping?

So that I can delete it from authorizations.xml.

Is there any other workaround?

Thanks,
Sanjeet

On Fri, 6 Mar 2020, 7:24 pm Matt Gilman,  wrote:

> I would recommend upgrading. If I recall correctly, there were actually a
> couple of JIRAs addressing different but similar issues dealing with users
> or groups being removed from a directory server. Even if you could identify
> and remove the problematic entries, you could possibly hit this again if
> another user or group were removed later.
>
> Thanks
>
> On Fri, Mar 6, 2020 at 6:35 AM sanjeet rath 
> wrote:
>
>> Hi ,
>>
>> The AD group names are present in the authorizers.xml file.
>> In the authorizations.xml file, the group identifier (unique ID) is
>> present inside the policy identifier tag.
>> So could you please help me understand where the AD group name in
>> authorizers.xml and the group identifier in authorizations.xml map
>> together, so that I can delete it in authorizers.xml?
>>
>> Thanks,
>> Sanjeet
>>
>>
>>
>>
>>
>>
>> On Fri, Mar 6, 2020 at 12:04 AM sanjeet rath 
>> wrote:
>>
>>> Thanks Matt,
>>>
>>> For this quick response.
>>>
>>> Thanks a lot ,
>>>
>>> Sanjeet
>>>
>>> On Thu, 5 Mar 2020, 11:43 pm Matt Gilman, 
>>> wrote:
>>>
>>>> I just responded to your StackOverflow post:
>>>>
>>>>
>>>> https://stackoverflow.com/questions/60551242/nifi-user-addition-gives-u-null-pointer-exception/60551638#60551638
>>>>
>>>> I believe you'll need to upgrade to a version that addresses the BUG.
>>>>
>>>> Thanks!
>>>>
>>>> On Thu, Mar 5, 2020 at 1:10 PM sanjeet rath 
>>>> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> I have been using a NiFi cluster for 1 month & I am able to add new
>>>>> users, policies, everything. It is LDAP-based user addition.
>>>>> But suddenly, for the last 2 days, on the NiFi user addition page (after
>>>>> clicking on Users in the NiFi UI) I am getting the error message "An
>>>>> unexpected error has occurred. Please click logs for more details."
>>>>> In nifi-user.log I found the below log.
>>>>>
>>>>> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has
>>>>> occurred: java.lang.NullPointerException. Returning Internal Server Error
>>>>> response. java.lang.NullPointerException: null at
>>>>> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
>>>>> at
>>>>> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>>>>> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) at
>>>>> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at
>>>>> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>>>>> at
>>>>> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>>>>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
>>>>> at
>>>>> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>>>>>
>>>>> I am not able to figure out where I should start looking.
>>>>> Could someone please point me to a starting point: where to check
>>>>> and what needs to be checked?
>>>>>
>>>>> Thanks,
>>>>> Sanjeet
>>>>>
>>>>>
>>>>> --
>>>>> Sanjeet Kumar Rath,
>>>>> mob- +91 8777577470
>>>>>
>>>>>
>>
>> --
>> Sanjeet Kumar Rath,
>> mob- +91 8777577470
>>
>>


Re: Exception is showing in nifi UI users page

2020-03-06 Thread sanjeet rath
Hi ,

The AD group names are present in the authorizers.xml file.
In the authorizations.xml file, the group identifier (unique ID) is present
inside the policy identifier tag.
So could you please help me understand where the AD group name in
authorizers.xml and the group identifier in authorizations.xml map together,
so that I can delete it in authorizers.xml?

Thanks,
Sanjeet






On Fri, Mar 6, 2020 at 12:04 AM sanjeet rath  wrote:

> Thanks Matt,
>
> For this quick response.
>
> Thanks a lot ,
>
> Sanjeet
>
> On Thu, 5 Mar 2020, 11:43 pm Matt Gilman,  wrote:
>
>> I just responded to your StackOverflow post:
>>
>>
>> https://stackoverflow.com/questions/60551242/nifi-user-addition-gives-u-null-pointer-exception/60551638#60551638
>>
>> I believe you'll need to upgrade to a version that addresses the BUG.
>>
>> Thanks!
>>
>> On Thu, Mar 5, 2020 at 1:10 PM sanjeet rath 
>> wrote:
>>
>>> Hi Team,
>>>
>>> I have been using a NiFi cluster for 1 month & I am able to add new
>>> users, policies, everything. It is LDAP-based user addition.
>>> But suddenly, for the last 2 days, on the NiFi user addition page (after
>>> clicking on Users in the NiFi UI) I am getting the error message "An
>>> unexpected error has occurred. Please click logs for more details."
>>> In nifi-user.log I found the below log.
>>>
>>> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has
>>> occurred: java.lang.NullPointerException. Returning Internal Server Error
>>> response. java.lang.NullPointerException: null at
>>> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
>>> at
>>> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>>> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) at
>>> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at
>>> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>>> at
>>> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at
>>> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>>>
>>> I am not able to figure out where I should start looking.
>>> Could someone please point me to a starting point: where to check
>>> and what needs to be checked?
>>>
>>> Thanks,
>>> Sanjeet
>>>
>>>
>>> --
>>> Sanjeet Kumar Rath,
>>> mob- +91 8777577470
>>>
>>>

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Exception is showing in nifi UI users page

2020-03-05 Thread sanjeet rath
Thanks Matt,

For this quick response.

Thanks a lot ,

Sanjeet

On Thu, 5 Mar 2020, 11:43 pm Matt Gilman,  wrote:

> I just responded to your StackOverflow post:
>
>
> https://stackoverflow.com/questions/60551242/nifi-user-addition-gives-u-null-pointer-exception/60551638#60551638
>
> I believe you'll need to upgrade to a version that addresses the BUG.
>
> Thanks!
>
> On Thu, Mar 5, 2020 at 1:10 PM sanjeet rath 
> wrote:
>
>> Hi Team,
>>
>> I have been using a NiFi cluster for 1 month & I am able to add new users,
>> policies, everything. It is LDAP-based user addition.
>> But suddenly, for the last 2 days, on the NiFi user addition page (after
>> clicking on Users in the NiFi UI) I am getting the error message "An
>> unexpected error has occurred. Please click logs for more details."
>> In nifi-user.log I found the below log.
>>
>> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred:
>> java.lang.NullPointerException. Returning Internal Server Error response.
>> java.lang.NullPointerException: null at
>> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
>> at
>> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) at
>> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at
>> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>> at
>> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at
>> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>>
>> I am not able to figure out where I should start looking.
>> Could someone please point me to a starting point: where to check and
>> what needs to be checked?
>>
>> Thanks,
>> Sanjeet
>>
>>
>> --
>> Sanjeet Kumar Rath,
>> mob- +91 8777577470
>>
>>


Exception is showing in nifi UI users page

2020-03-05 Thread sanjeet rath
Hi Team,

I have been using a NiFi cluster for 1 month & I am able to add new users,
policies, everything. It is LDAP-based user addition.
But suddenly, for the last 2 days, on the NiFi user addition page (after
clicking on Users in the NiFi UI) I am getting the error message "An
unexpected error has occurred. Please click logs for more details."
In nifi-user.log I found the below log:

o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred:
java.lang.NullPointerException. Returning Internal Server Error response.
java.lang.NullPointerException: null
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)

I am not able to figure out where I should start looking.
Could someone please point me to a starting point: where to check and what
needs to be checked?

Thanks,
Sanjeet


-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Regarding NiFi 1.10 parameter context

2020-01-08 Thread sanjeet rath
Hi ,

Previously I used nipyapi.canvas.update_variable_registry() to update my
process group variables. I have upgraded to NiFi 1.10, so I want to use the
new parameter context feature present in NiFi 1.10.

I am able to update these parameters, both sensitive and non-sensitive
values, using the NiFi CLI.

But I am not able to update parameter context parameters using nipyapi.

Could someone please suggest how to do it?

Any example or reference where this has been implemented would be really
helpful.

Thanks and regards,
Sanjeet
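
A minimal sketch of such an update with nipyapi follows. It assumes a nipyapi
release new enough to ship the nipyapi.parameters module; the endpoint URL,
context name and parameter are hypothetical, and the exact function
signatures should be verified against the nipyapi documentation:

import nipyapi

nipyapi.utils.set_endpoint('https://nifi-host:8443/nifi-api')  # example URL

# look up the existing context by name, then upsert one parameter into it
context = nipyapi.parameters.get_parameter_context('my-context')
param = nipyapi.parameters.prepare_parameter('db.password', 'secret',
                                             sensitive=True)
nipyapi.parameters.upsert_parameter_to_context(context, param)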


Need help regarding processor group

2019-12-17 Thread sanjeet rath
Hi ,

Is there any API to get the details of all process groups (considering
multiple process groups in hierarchical order) by using the InvokeHTTP
processor?
For example, I have to send email notifications for different process groups
when their processors reach the back pressure threshold value.

Thanks ,
Sanjeet
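
For reference, NiFi's REST API can report process group status recursively; a
sketch of the call that InvokeHTTP (or curl) could make, with a hypothetical
host and the root group as the starting point:

curl -k -H "Authorization: Bearer $TOKEN" \
  "https://nifi-host:8443/nifi-api/flow/process-groups/root/status?recursive=true"

The returned JSON includes per-connection queue counts, which can be compared
against the configured back pressure thresholds.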


Re: Clarification regarding NiFi registry

2019-12-10 Thread sanjeet rath
Hi Bryan,

I am facing the issue in version 1.8.
Currently I am debugging the NiFi code to figure it out.
In the meantime, could you please tell me: when we add the registry name and
URL in the NiFi Settings UI, in which location is this value actually stored
locally? I mean the persistent storage on the local node.

Regards,
Sanjeet

On Tue, 10 Dec, 2019, 8:18 PM Bryan Bende,  wrote:

> One other use has reported this, but I have been unable to reproduce it:
>
> https://issues.apache.org/jira/browse/NIFI-6767
>
> Can you come up with steps that someone could take to reproduce the
> problem?
>
> Starting with step 1 - download version xyz of Nifi
>
> On Tue, Dec 10, 2019 at 9:30 AM sanjeet rath 
> wrote:
> >
> > Thanks Andy for this quick response.
> > Actually I am going into details because while restarting NiFi I am
> > getting incorrect info in NiFi Settings --> Registry Location --> Location
> > as
> > https://null or sometimes
> > https://host-1
> >
> > Are there any suggestions for this issue?
> > thanks ,
> > sanjeet
> >
> >
> >
> > On Tue, 10 Dec, 2019, 7:22 PM Andy LoPresto, 
> wrote:
> >>
> >> The NiFi Registry documentation [1] explains the interaction between
> the two services in more detail, but the basic process is that a Registry
> Client is configured in NiFi which contains the information necessary for
> NiFi to connect to the registry. Assets (currently flow definitions or
> extensions) are persisted or retrieved to/from the registry instance over
> HTTP(S).
> >>
> >> In NiFi Registry, those assets are persisted in either Git or a
> relational database configured locally.
> >>
> >> When NiFi restarts, the flow controller (global scheduling system) uses
> the configured Registry Client to connect to the registry instance as
> necessary.
> >>
> >> [1] https://nifi.apache.org/docs/nifi-registry-docs/index.html
> >>
> >>
> >> Andy LoPresto
> >> alopre...@apache.org
> >> alopresto.apa...@gmail.com
> >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >>
> >> On Dec 10, 2019, at 1:51 PM, sanjeet rath 
> wrote:
> >>
> >> Hi ,
> >>
> >> I just have confusion: nowhere am I finding proper documentation
> regarding how NiFi nodes integrate with NiFi Registry, i.e. how it works
> internally. As per the code, the node calls an API to get the registry
> details and stores the entity in a RegistryDTO object.
> >>
> >> So my question is: is it stored in persistent storage like a DB, or in
> memory?
> >> If it is stored in memory, then when we restart the NiFi application,
> how does it connect to the registry again? Does it call the API again?
> >>
> >> I am sorry if my understanding is wrong.
> >> please let me know.
> >>
> >> Thanks in advance
> >> Regards,
> >> Sanjeet
> >>
> >>
>


Re: Clarification regarding NiFi registry

2019-12-10 Thread sanjeet rath
Thanks Andy for this quick response.
Actually I am going into details because while restarting NiFi I am getting
incorrect info in NiFi Settings --> Registry Location --> Location, as
https://null or sometimes
https://host-1

Are there any suggestions for this issue?
thanks ,
sanjeet



On Tue, 10 Dec, 2019, 7:22 PM Andy LoPresto,  wrote:

> The NiFi Registry documentation [1] explains the interaction between the
> two services in more detail, but the basic process is that a Registry
> Client is configured in NiFi which contains the information necessary for
> NiFi to connect to the registry. Assets (currently flow definitions or
> extensions) are persisted or retrieved to/from the registry instance over
> HTTP(S).
>
> In NiFi Registry, those assets are persisted in either Git or a relational
> database configured locally.
>
> When NiFi restarts, the flow controller (global scheduling system) uses
> the configured Registry Client to connect to the registry instance as
> necessary.
>
> [1] https://nifi.apache.org/docs/nifi-registry-docs/index.html
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Dec 10, 2019, at 1:51 PM, sanjeet rath  wrote:
>
> Hi,
>
> I am a bit confused, and nowhere am I finding proper documentation on
> how NiFi nodes integrate with NiFi Registry, i.e. how it works
> internally. As per the code, the node calls an API to get the registry
> details and stores the entity in a RegistryDTO object.
>
> So my question is: is it stored in persistent storage like a DB, or in
> memory?
> If it is stored in memory, then when we restart the NiFi application,
> how does it connect to the registry again? Does it call the API again?
>
> I am sorry if my understanding is wrong.
> Please let me know.
>
> Thanks in advance
> Regards,
> Sanjeet
>
>
>
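For reference, the HTTP(S) interaction Andy describes can be checked directly
from a NiFi node; a minimal sketch against an unsecured registry (the host
name and the default port 18080 are assumptions):

    curl http://registry-host:18080/nifi-registry-api/buckets

The registry client configured in NiFi is part of the flow configuration, so
it survives restarts; the flow controller simply reconnects over this same
REST API.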


Clarification regarding NiFi registry

2019-12-10 Thread sanjeet rath
Hi,

I am a bit confused, and nowhere am I finding proper documentation on how
NiFi nodes integrate with NiFi Registry, i.e. how it works internally. As per
the code, the node calls an API to get the registry details and stores the
entity in a RegistryDTO object.

So my question is: is it stored in persistent storage like a DB, or in
memory?
If it is stored in memory, then when we restart the NiFi application, how
does it connect to the registry again? Does it call the API again?

I am sorry if my understanding is wrong.
Please let me know.

Thanks in advance
Regards,
Sanjeet


Re: Clarification regarding swap of flow file to storage

2019-12-06 Thread sanjeet rath
Thanks Bryan,

Your clarification is much appreciated.
Your suggestions are always helpful for me.
We have set up our projects with almost all the default parameters present
in nifi.properties & the bootstrap config file.

I am trying to improve performance, so I am trying to understand each
parameter. I have also gone through the NiFi deep dive document.

Could you please suggest any attributes in those config files which I can
play around with to improve performance?

Suggestions of any tools to identify performance bottlenecks would be very
helpful.

Currently, for on-prem cluster log monitoring we are using Kibana & Dynatrace,
and for the AWS cloud cluster DataDog is used.

Once again, thanks a lot for your support,

Sanjeet

On Fri, 6 Dec, 2019, 7:13 PM Bryan Bende,  wrote:

> Hello,
>
> Swap files are written under the flow file repo, in a sub directory
> called swap. I don't think it is configurable anymore.
>
> When a queue has flow files in it, the content of those flow files is
> not in memory in the queue; it's just Java objects representing the
> pointers to the flow files.
>
> So swapping is to help with memory when you have too many Java
> objects: it writes out the information about what was in the queue to
> a swap file, but again it's not the content of those flow files, which
> is why it is under the flow file repo.
>
> Thanks,
>
> Bryan
>
> On Fri, Dec 6, 2019 at 5:10 AM sanjeet rath 
> wrote:
> >
> > Hi Team,
> >
> > Any help on my trailed mail query will be very helpful. One more thing:
> > if I add a new directory for swap storage, will it improve performance
> > in processing?
> >
> > Thanks & Regards,
> > Sanjeet
> >
> > On Thu, 5 Dec, 2019, 12:06 PM sanjeet rath, 
> wrote:
> >>
> >> Hi Team,
> >>
> >> As per the NiFi document, once the number of flow files in a connection
> >> queue reaches the threshold value, they are removed from the hash map
> >> (Java heap space) and moved to disk.
> >> Could you please confirm which repository it moves to by default: the
> >> content repository or the flowfile repository?
> >>
> >> Could I configure the swap directory using the nifi.swap.storage.directory
> >> parameter?
> >>
> >> Thanks,
> >> sanjeet
>
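For reference, the swap threshold that triggers this behavior is set in
nifi.properties; a minimal sketch with the shipped defaults (the swap files
land in a "swap" subdirectory under the flowfile repository, as described
above):

    nifi.queue.swap.threshold=20000
    nifi.flowfile.repository.directory=./flowfile_repository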


Re: Clarification regarding swap of flow file to storage

2019-12-06 Thread sanjeet rath
Hi Team,

Any help on my trailed mail query will be very helpful. One more thing: if I
add a new directory for swap storage, will it improve performance in
processing?

Thanks & Regards,
Sanjeet

On Thu, 5 Dec, 2019, 12:06 PM sanjeet rath,  wrote:

> Hi Team,
>
> As per the NiFi document, once the number of flow files in a connection
> queue reaches the threshold value, they are removed from the hash map (Java
> heap space) and moved to disk.
> Could you please confirm which repository it moves to by default: the
> content repository or the flowfile repository?
>
> Could I configure the swap directory using the nifi.swap.storage.directory
> parameter?
>
> Thanks,
> sanjeet
>


Clarification regarding swap of flow file to storage

2019-12-04 Thread sanjeet rath
Hi Team,

As per the NiFi document, once the number of flow files in a connection
queue reaches the threshold value, they are removed from the hash map (Java
heap space) and moved to disk.
Could you please confirm which repository it moves to by default: the
content repository or the flowfile repository?

Could I configure the swap directory using the nifi.swap.storage.directory
parameter?

Thanks,
sanjeet


Re: CI/CD for custom processor in aws nifi cluster

2019-11-18 Thread sanjeet rath
Thanks Bryan for your quick response.
We are using NiFi version 1.8. Could you please suggest something that works
for the 1.8 version?

Regards,
Sanjeet

On Mon, 18 Nov, 2019, 10:21 PM Bryan Bende,  wrote:

> If you are using NiFi 1.9.x or newer, then there is a new feature to
> auto-load new NARs in a directory specified by
> nifi.nar.library.autoload.directory in nifi.properties.
>
> You could make part of your NiFi flow run ListS3 -> FetchS3 -> PutFile
> into the auto-load directory, then NiFi will update itself and you
> won't need to restart.
>
> You'll want to make ListS3 run on all nodes since each node needs to
> get its own copy of the NAR.
>
> On Mon, Nov 18, 2019 at 11:42 AM sanjeet rath 
> wrote:
> >
> >
> > Hi Team,
> >
> > I have a requirement of building a CI/CD pipeline using Jenkins & Concourse
> > to build and deploy custom NARs in an AWS NiFi cluster.
> > The CI part is completed successfully, but in the CD part, while doing the
> > deployment using the Concourse pipeline, we have a problem: we cannot SSH
> > to port 22 of the NiFi cluster nodes due to restrictions.
> > So we planned to copy the NAR files from Artifactory (JFrog) to an S3
> > bucket, and from the S3 bucket we will copy them to the custom_lib folder
> > of the nodes of the AWS NiFi cluster.
> > We have another restriction: avoid using crontab (run on each NiFi cluster
> > node) to pull from S3.
> >
> > Could you please suggest how to copy files from S3 to the custom_lib folder
> > of the NiFi cluster and restart the NiFi service using Jenkins and Concourse?
> >
> > Thanks
> > --
> > Sanjeet Kumar Rath,
> > mob- +91 8777577470
> >
>
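For reference, the auto-load mechanism described above is driven by a single
nifi.properties entry; a minimal sketch (the directory value shown is the
usual default, but treat it as illustrative):

    nifi.nar.library.autoload.directory=./extensions

With that set on every node, a flow of ListS3 -> FetchS3Object -> PutFile
writing into ./extensions hot-loads new NARs without a restart. This applies
to NiFi 1.9.x or newer only, so it would not help the 1.8 cluster above
without an upgrade.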


CI/CD for custom processor in aws nifi cluster

2019-11-18 Thread sanjeet rath
Hi Team,

I have a requirement of building a CI/CD pipeline using Jenkins & Concourse
to build and deploy custom NARs in an AWS NiFi cluster.
The CI part is completed successfully, but in the CD part, while doing the
deployment using the Concourse pipeline, we have a problem: we cannot SSH to
port 22 of the NiFi cluster nodes due to restrictions.
So we planned to copy the NAR files from Artifactory (JFrog) to an S3 bucket,
and from the S3 bucket we will copy them to the custom_lib folder of the
nodes of the AWS NiFi cluster.
We have another restriction: avoid using crontab (run on each NiFi cluster
node) to pull from S3.

Could you please suggest how to copy files from S3 to the custom_lib folder
of the NiFi cluster and restart the NiFi service using Jenkins and Concourse?

Thanks
-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Default Retry mechanism for NIFI puts3Object processor

2019-11-12 Thread sanjeet rath
Thanks Peter for helping me out.

Regards,
Sanjeet

On Tue, 12 Nov, 2019, 6:49 PM Peter Turcsanyi,  wrote:

> Hi Sanjeet,
>
> There is an open issue [1] about retry handling in AWS processors with a
> pull request available [2] that might be interesting for you / solve your
> problem. Unfortunately it has not been merged yet.
>
> This would be a more generic solution for all AWS processors which also
> adds an option to configure the retry policy.
>
> Regards,
> Peter
>
> [1] https://issues.apache.org/jira/browse/NIFI-6486
> [2] https://github.com/apache/nifi/pull/3612
>
> On Mon, Nov 11, 2019 at 6:15 PM sanjeet rath 
> wrote:
>
>> Hi Team,
>>
>> I am using the PutS3Object processor of NiFi to upload objects from
>> on-prem to an AWS S3 bucket. I believe we have two types of upload:
>> single-part upload and multipart upload, as per the threshold value
>> defined for multipart.
>>
>> For multipart, three steps are followed:
>> 1) s3.initiateMultipartUpload, 2) s3.uploadPart, 3) s3.completeMultipartUpload
>>
>> While checking the code I found that, in the s3.completeMultipartUpload
>> method, if there is any server-side exception (5xx), then it retries 3
>> times (in the CompleteMultipartUploadRetryCondition class of the AWS SDK,
>> MAX_RETRY_ATTEMPTS is a constant with value 3) using a do-while loop.
>>
>> I have a few questions:
>>
>> a) Is this default retry mechanism (value 3) only used in the
>> s3.completeMultipartUpload method? I don't find any code for retry used in
>> single-object upload.
>>
>> b) If I change the MaxErrorRetry value in the AWS ClientConfiguration,
>> will that change the retry count on an S3 exception (5xx) to the value I
>> have set, given that the other one is a constant value of 3? Please confirm.
>>
>> c) If the answer to (b) is yes, will
>> clientConfiguration.setMaxErrorRetry(myCustomValue) alone work, or do I
>> have to add the below code for the retry policy also?
>>
>> clientConfiguration.setRetryPolicy(new
>> RetryPolicy(config.getRetryPolicy().getRetryCondition(),
>> config.getRetryPolicy().getBackoffStrategy(), myCustomValue, true));
>>
>>
>> Thanks,
>>
>> Sanjeet
>>
>>
>>
>>
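For reference, a minimal sketch of raising the SDK-level retry count via the
client configuration (AWS SDK for Java v1; the count of 5 is illustrative,
and whether this also overrides the hard-coded multipart-complete retry
constant is exactly the open question in this thread):

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.retry.PredefinedRetryPolicies;
    import com.amazonaws.retry.RetryPolicy;

    ClientConfiguration config = new ClientConfiguration();
    // Simple form: the SDK's default policy honors this count.
    config.setMaxErrorRetry(5);
    // Explicit form: the final "true" tells the policy to honor the
    // maxErrorRetry set on the ClientConfiguration.
    config.setRetryPolicy(new RetryPolicy(
            PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION,
            PredefinedRetryPolicies.DEFAULT_BACKOFF_STRATEGY,
            5, true));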


Default Retry mechanism for NIFI puts3Object processor

2019-11-11 Thread sanjeet rath
Hi Team,

I am using the PutS3Object processor of NiFi to upload objects from
on-prem to an AWS S3 bucket. I believe we have two types of upload:
single-part upload and multipart upload, as per the threshold value defined
for multipart.

For multipart, three steps are followed:
1) s3.initiateMultipartUpload, 2) s3.uploadPart, 3) s3.completeMultipartUpload

While checking the code I found that, in the s3.completeMultipartUpload
method, if there is any server-side exception (5xx), then it retries 3 times
(in the CompleteMultipartUploadRetryCondition class of the AWS SDK,
MAX_RETRY_ATTEMPTS is a constant with value 3) using a do-while loop.

I have a few questions:

a) Is this default retry mechanism (value 3) only used in the
s3.completeMultipartUpload method? I don't find any code for retry used in
single-object upload.

b) If I change the MaxErrorRetry value in the AWS ClientConfiguration, will
that change the retry count on an S3 exception (5xx) to the value I have
set, given that the other one is a constant value of 3? Please confirm.

c) If the answer to (b) is yes, will
clientConfiguration.setMaxErrorRetry(myCustomValue) alone work, or do I have
to add the below code for the retry policy also?

clientConfiguration.setRetryPolicy(new
RetryPolicy(config.getRetryPolicy().getRetryCondition(),
config.getRetryPolicy().getBackoffStrategy(), myCustomValue, true));


Thanks,

Sanjeet


Re: Regarding AWScredentialproviderService

2019-10-29 Thread sanjeet rath
Hi,

Please ignore my previous mail, as I got the answer.

On Wed, 30 Oct, 2019, 8:16 AM sanjeet rath,  wrote:

> Hi,
>  I have built  custom controller service(AWS-CREDENTIAL-CONTROLER-SERVICE)
> and use this one in processor service(NIFI-AWS-SERVICE).for puts3Object.
> In this controller service i have used custom parameters to connect to aws
> and getting token , also using this token i am able to to use connect AWS
> and able to upload object using puts3object processor.
>
> But i am not able to understand few things ,
> 1) How token refresh works in custom controller service.
> 2) Does it refresh after time out.
>
> Regards,
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
>


Regarding AWScredentialproviderService

2019-10-29 Thread sanjeet rath
Hi,
I have built a custom controller service (AWS-CREDENTIAL-CONTROLER-SERVICE)
and use it in a processor service (NIFI-AWS-SERVICE) for PutS3Object.
In this controller service I have used custom parameters to connect to AWS
and get a token; using this token I am able to connect to AWS and upload
objects using the PutS3Object processor.

But I am not able to understand a few things:
1) How does token refresh work in a custom controller service?
2) Does it refresh after a timeout?

Regards,
Sanjeet Kumar Rath,
mob- +91 8777577470
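For reference on the refresh question: assuming the AWS SDK for Java v1 used
by the AWS processors, refresh behavior normally lives inside the
AWSCredentialsProvider implementation the controller service returns. A
minimal sketch using the SDK's assume-role provider, which renews its session
credentials automatically (the role ARN and session name are placeholders):

    import com.amazonaws.auth.AWSCredentialsProvider;
    import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;

    AWSCredentialsProvider provider =
            new STSAssumeRoleSessionCredentialsProvider.Builder(
                    "arn:aws:iam::123456789012:role/example-role",
                    "nifi-session")
                .build();
    // getCredentials() transparently re-assumes the role when the
    // session token nears expiry.

A custom controller service that hands out a one-time token will not refresh
on its own; the token fetch has to be wrapped in a provider like this.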


Regarding limit of Puts3Object processor

2019-10-29 Thread sanjeet rath
Hi Team,

I have a question: why does PutS3Object limit itself to 14 msgs/sec? Can I
identify bottlenecks using some logs?
Any code explanation would be very helpful.

Thanks
-- 
Sanjeet,
mob- +91 8777577470


Re: Unit test is failing for custom Awscredentialprovidercontrolerservice

2019-10-22 Thread sanjeet rath
Thanks Bryan, got your point.
Now everything is working fine.


On Tue, 22 Oct, 2019, 7:46 PM Bryan Bende,  wrote:

> You shouldn't be modifying the service code to make the test pass.
>
> You need to set whatever properties are needed to make it valid by
> using Runner.setProperty(service, property name, value)
>
> On Tue, Oct 22, 2019 at 6:54 AM Otto Fowler 
> wrote:
> >
> >
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/credentials/provider/service/AWSCredentialsProviderControllerServiceTest.java
> >
> > It may be helpful for you to look at other things in the code base. Here
> are the tests for the current service
> >
> >
> >
> >
> > On October 21, 2019 at 22:31:38, sanjeet rath (rath.sanj...@gmail.com)
> wrote:
> >
> > Hi Team,
> >
> > In our project structure we have a custom controller service,
> > Awscredentialprovidercontrolerservice, to connect to AWS with our five
> > defined attributes, i.e. properties. (This is present in a separate
> > project, NIFI-AWS-CUSTOM_PING_CONTROLER.)
> >
> > In the NiFi UI this controller service is working fine, but in unit
> > testing I am using the below code to test:
> >
> > @Test
> > public void testAwscredentialprovidercontrolerservice() throws Exception {
> >     final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
> >     final Awscredentialprovidercontrolerservice serviceImpl =
> >             new Awscredentialprovidercontrolerservice();
> >     runner.addControllerService("creds", serviceImpl);
> >     // Setting my 5 properties which I have created for my custom
> >     // controller service
> >     runner.setProperty(serviceImpl, …);
> >     runner.enableControllerService(serviceImpl);
> >     // will do assert
> > }
> >
> > Here, enabling the controller service gives a null pointer exception in
> > the customValidate method; it is expecting all the default properties to
> > be declared as well, like AccessKey, SecretKey, etc. (which are present
> > in the default Awscredentialprovidercontrolerservice class), in my custom
> > Awscredentialprovidercontrolerservice.
> >
> > After declaring the default properties in my custom
> > Awscredentialprovidercontrolerservice, the unit test is working fine. But
> > the problem is these parameters then appear in the NiFi UI of my custom
> > Awscredentialprovidercontrolerservice.
> >
> > So I have two options: either, after declaring the default properties,
> > find a way to stop displaying them in the NiFi UI,
> >
> > or, as it works fine in the UI flow without setting default properties in
> > the custom Awscredentialprovidercontrolerservice, set something in the
> > unit test case to make it pass.
> >
> > Thanks & Regards
> > --
> > Sanjeet Kumar Rath,
> > mob- +91 8777577470
> >
>
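For reference, a minimal sketch of the pattern Bryan describes, against the
nifi-mock TestRunner API (the service class and property name here are
placeholders for the custom controller service):

    final TestRunner runner = TestRunners.newTestRunner(PutS3Object.class);
    final MyCredentialsService service = new MyCredentialsService();
    // The service must be registered before it can be configured or enabled.
    runner.addControllerService("aws-creds", service);
    // Set every property the service's customValidate() requires.
    runner.setProperty(service, "My Property", "some-value");
    runner.enableControllerService(service);
    runner.assertValid(service);

enableControllerService() fails if the service is not valid, which is why the
properties have to be set on the runner first rather than by modifying the
service code itself.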


Unit test is failing for custom Awscredentialprovidercontrolerservice

2019-10-21 Thread sanjeet rath
Hi Team,

In our project structure we have a custom controller service,
Awscredentialprovidercontrolerservice, to connect to AWS with our five
defined attributes, i.e. properties. (This is present in a separate project,
NIFI-AWS-CUSTOM_PING_CONTROLER.)

In the NiFi UI this controller service is working fine, but in unit testing
I am using the below code to test:

@Test
public void testAwscredentialprovidercontrolerservice() throws Exception {
    final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
    final Awscredentialprovidercontrolerservice serviceImpl =
            new Awscredentialprovidercontrolerservice();
    runner.addControllerService("creds", serviceImpl);
    // Setting my 5 properties which I have created for my custom
    // controller service
    runner.setProperty(serviceImpl, …);
    runner.enableControllerService(serviceImpl);
    // will do assert
}

Here, enabling the controller service gives a null pointer exception in the
customValidate method; it is expecting all the default properties to be
declared as well, like AccessKey, SecretKey, etc. (which are present in the
default Awscredentialprovidercontrolerservice class), in my custom
Awscredentialprovidercontrolerservice.

After declaring the default properties in my custom
Awscredentialprovidercontrolerservice, the unit test is working fine. But the
problem is these parameters then appear in the NiFi UI of my custom
Awscredentialprovidercontrolerservice.

So I have two options: either, after declaring the default properties, find
a way to stop displaying them in the NiFi UI,

or, as it works fine in the UI flow without setting default properties in
the custom Awscredentialprovidercontrolerservice, set something in the unit
test case to make it pass.
Thanks & Regards
-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Clarification Regarding custom controller service & AWS-CREDENTIAL-CONTROLER-SERVICE

2019-10-20 Thread sanjeet rath
Thanks Joe, and I am extremely sorry for being impatient (because today I
have a solution meeting to give the details).
I have gone through the NiFi developer guide, but I am not able to understand
how the PutS3Object processor is getting my custom AWS credential provider
controller dependency, as I have never given this dependency in the pom. And
the document says the implementation of the controller service is loaded
dynamically by the class loader.

Whenever you have time, please explain how the NIFI-AWS-PROCESSOR service is
getting the dependency of the AWS-CRDENTAIL-CONTROLER custom controller.
The pom of the AWS-CRDENTAIL-CONTROLER custom controller already contains a
dependency on the custom NIFI-AWS-PROCESSOR.

Regards,
Sanjeet



On Mon, Oct 21, 2019 at 8:12 AM Joe Witt  wrote:

> Sanjeet
>
> Your e-mail was sent 10 hours ago and is during the weekend.  Please be
> patient.
>
> I would do two nars.  The controller service nar and the processor nar
> which depends on it.  This way you can have other processor nars that also
> depend on that controller service nar if necessary.
>
> Thanks
>
> On Sun, Oct 20, 2019 at 10:26 PM sanjeet rath 
> wrote:
>
>> Hi Team,
>>
>> Please clarify my trailed mail query.
>>
>>
>> On Sun, 20 Oct, 2019, 9:35 PM sanjeet rath, 
>> wrote:
>>
>>> Hi Team,
>>>
>>> I have a requirement of building a custom controller service
>>> (AWS-CREDENTIAL-CONTROLER-SERVICE) and a custom processor service
>>> (NIFI-AWS-SERVICE) for PutS3Object.
>>> The code changes are already done for both of them, and I am building two
>>> separate NARs.
>>> The custom controller service (AWS-CREDENTIAL-CONTROLER-SERVICE) NAR
>>> contains a dependency on the custom processor service (NIFI-AWS-SERVICE).
>>>
>>> So the PutS3Object processor should use my custom controller service.
>>>
>>> My question is: should I build two separate NARs, one for the custom
>>> processor service (NIFI-AWS-SERVICE) and one for the custom controller
>>> service (AWS-CREDENTIAL-CONTROLER-SERVICE), and put both NARs in the lib
>>> folder to make NiFi work?
>>>
>>> Or should I build only one NAR file, for the custom controller service
>>> (AWS-CREDENTIAL-CONTROLER-SERVICE), as it has a pom dependency on the
>>> custom processor service (NIFI-AWS-SERVICE)?
>>>
>>> Please clarify.
>>>
>>> Regards,
>>> Sanjeet
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Sanjeet Kumar Rath,
>>> mob- +91 8777577470
>>>
>>>

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470
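For reference, a minimal sketch of Joe's two-NAR layout: the processor NAR's
pom.xml declares a dependency of type "nar" on the controller-service NAR,
which is what wires up the parent class loader at runtime (groupId,
artifactId, and version are placeholders):

    <dependency>
        <groupId>com.example</groupId>
        <artifactId>aws-credential-controller-service-nar</artifactId>
        <version>1.0.0</version>
        <type>nar</type>
    </dependency>

Both NARs then go into the lib (or auto-load) directory; other processor
NARs can reuse the same controller-service NAR the same way.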


Re: Clarification Regarding custom controller service & AWS-CREDENTIAL-CONTROLER-SERVICE

2019-10-20 Thread sanjeet rath
Hi Team,

Please clarify my trailed mail query.


On Sun, 20 Oct, 2019, 9:35 PM sanjeet rath,  wrote:

> Hi Team,
>
> I have a requirement of building a custom controller service
> (AWS-CREDENTIAL-CONTROLER-SERVICE) and a custom processor service
> (NIFI-AWS-SERVICE) for PutS3Object.
> The code changes are already done for both of them, and I am building two
> separate NARs.
> The custom controller service (AWS-CREDENTIAL-CONTROLER-SERVICE) NAR
> contains a dependency on the custom processor service (NIFI-AWS-SERVICE).
>
> So the PutS3Object processor should use my custom controller service.
>
> My question is: should I build two separate NARs, one for the custom
> processor service (NIFI-AWS-SERVICE) and one for the custom controller
> service (AWS-CREDENTIAL-CONTROLER-SERVICE), and put both NARs in the lib
> folder to make NiFi work?
>
> Or should I build only one NAR file, for the custom controller service
> (AWS-CREDENTIAL-CONTROLER-SERVICE), as it has a pom dependency on the
> custom processor service (NIFI-AWS-SERVICE)?
>
> Please clarify.
>
> Regards,
> Sanjeet
>
>
>
>
>
> --
> Sanjeet Kumar Rath,
> mob- +91 8777577470
>
>


Clarification Regarding custom controller service & AWS-CREDENTIAL-CONTROLER-SERVICE

2019-10-20 Thread sanjeet rath
Hi Team,

I have a requirement of building a custom controller service
(AWS-CREDENTIAL-CONTROLER-SERVICE) and a custom processor service
(NIFI-AWS-SERVICE) for PutS3Object.
The code changes are already done for both of them, and I am building two
separate NARs.
The custom controller service (AWS-CREDENTIAL-CONTROLER-SERVICE) NAR
contains a dependency on the custom processor service (NIFI-AWS-SERVICE).

So the PutS3Object processor should use my custom controller service.

My question is: should I build two separate NARs, one for the custom
processor service (NIFI-AWS-SERVICE) and one for the custom controller
service (AWS-CREDENTIAL-CONTROLER-SERVICE), and put both NARs in the lib
folder to make NiFi work?

Or should I build only one NAR file, for the custom controller service
(AWS-CREDENTIAL-CONTROLER-SERVICE), as it has a pom dependency on the custom
processor service (NIFI-AWS-SERVICE)?

Please clarify.

Regards,
Sanjeet





-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Re: Doubt regarding building email nar file

2019-10-20 Thread sanjeet rath
Thanks Mike :)

On Sun, Oct 20, 2019 at 9:34 PM Mike Thomsen  wrote:

> The pom associated with the bundle.
>
> On Sun, Oct 20, 2019 at 11:41 AM sanjeet rath 
> wrote:
>
>>
>> Hi Team,
>>
>> My requirement is to build a custom email processor for extracting the
>> body (code changes are already done).
>> I have implemented CI in this project using Jenkins.
>>
>> So my project structure is:
>> NIFI-EMAIL-BOUNDLE
>> ->NIFI-EMAIL-PROCESSOR
>> ->NIFI-EMAIL-NAR
>>
>> Each of the above projects has a pom file, and my expected NAR file is
>> generated in the NIFI-EMAIL-NAR project's target folder.
>> So, in the Jenkins job, which pom file path should be given to generate
>> the NAR file?
>>
>> Thanks,
>> Sanjeet
>>
>> --
>> Sanjeet Kumar Rath,
>> mob- +91 8777577470
>>
>>

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470
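For reference, a minimal sketch of what "the pom associated with the bundle"
means for the Jenkins job (the path is illustrative): pointing Maven at the
bundle's aggregator pom builds the processor and NAR modules together:

    mvn -f NIFI-EMAIL-BOUNDLE/pom.xml clean package

The finished NAR then shows up under NIFI-EMAIL-NAR/target/, which is the
artifact the job should archive.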


Doubt regarding building email nar file

2019-10-20 Thread sanjeet rath
Hi Team,

My requirement is to build a custom email processor for extracting the body
(code changes are already done).
I have implemented CI in this project using Jenkins.

So my project structure is:
NIFI-EMAIL-BOUNDLE
->NIFI-EMAIL-PROCESSOR
->NIFI-EMAIL-NAR

Each of the above projects has a pom file, and my expected NAR file is
generated in the NIFI-EMAIL-NAR project's target folder.
So, in the Jenkins job, which pom file path should be given to generate the
NAR file?

Thanks,
Sanjeet

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470


Unable to do unit testing Awscredentialprovidercontrolerservice

2019-10-17 Thread sanjeet rath
Hi Team,

I am new to this community; I joined today and this is my first query
(I have already been stuck here for the last 15 days).

I have built a NiFi custom processor to put an object in an AWS S3 bucket
(just a modified PutS3Object processor in the NIFI-AWS-PROCESSOR project).
In this processor, Awscredentialprovidercontrolerservice is the default one
to connect to AWS.

But in our project structure we have a custom controller service,
Awscredentialprovidercontrolerservice, to connect to AWS with our defined
attributes. (This is present in a separate project,
NIFI-AWS-CUSTOM_PING_CONTROLER.)
I want to use this custom controller service instead of the default
Awscredentialprovidercontrolerservice, which is present in the
NIFI-AWS-PROCESSOR project.

My question is: I want to do the unit testing (using JUnit & Mockito) to
validate whether this custom Awscredentialprovidercontrolerservice is
working or not, e.g. if I get an invalid bucket name then the test case
should fail.

My code structure:
@Test
public void testRetryLogin() throws Exception {
    final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
    final Awscredentialprovidercontrolerservice serviceImpl =
            new Awscredentialprovidercontrolerservice();
    runner.addControllerService("creds", serviceImpl);
    runner.setProperty(serviceImpl, …);
    runner.enableControllerService(serviceImpl);
    runner.run();
    // will do assert
}

This gives me a compiler error on the Awscredentialprovidercontrolerservice
instance-creation line, as the custom controller service project is not
available to this processor service project. If we add a dependency in the
pom file to make it available, then it will be a circular dependency,
because in the custom controller service pom file the custom processor is
already added as a dependency.

Thanks in advance :)

-- 
Sanjeet Kumar Rath,
mob- +91 8777577470
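For reference, the usual way out of this kind of cycle is a third, minimal
"service API" module that holds only the controller service interface; the
processor project and the controller-service implementation both depend on
the API artifact, and neither depends on the other. A sketch of that shared
dependency (the names are placeholders), mirroring how the standard AWS
bundle splits nifi-aws-service-api from the implementation:

    <dependency>
        <groupId>com.example</groupId>
        <artifactId>nifi-aws-custom-service-api</artifactId>
        <version>1.0.0</version>
        <scope>provided</scope>
    </dependency>

The test can then be written against the interface from the API module, with
the concrete service supplied at runtime (or in the test) without a
compile-time cycle.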