Re: Controlling number of threads in distributed mode

2019-02-13 Thread Milan Das
Hi Joe,
My use case: I have to query an HTTP service, and NiFi can only run 3
queries against it at a time.

The queries to be executed come from a Kafka topic, and we have a 6-node
cluster. Ideally I would like to run the 3 HTTP queries from 3 different
nodes; otherwise one node (the primary node) will be overloaded.
Thanks,
Milan Das
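NiFi itself can only run a component on all nodes or on the primary node (as Joe explains below), but the cap Milan describes — at most 3 HTTP queries in flight — is just a semaphore. A minimal Python sketch of that cap (the service and all names are hypothetical; this is an illustration, not NiFi code):

```python
import threading

MAX_CONCURRENT = 3  # the downstream HTTP service allows only 3 queries at once
sem = threading.BoundedSemaphore(MAX_CONCURRENT)
lock = threading.Lock()
in_flight = 0   # queries currently running
peak = 0        # highest concurrency observed

def run_query(query_id):
    """Stand-in for the real HTTP call; the semaphore enforces the cap."""
    global in_flight, peak
    with sem:                       # blocks while 3 queries are already in flight
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... issue the HTTP request for `query_id` here ...
        with lock:
            in_flight -= 1

# Simulate 10 queries arriving from the Kafka topic
threads = [threading.Thread(target=run_query, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrency:", peak)   # never exceeds MAX_CONCURRENT
```

In a cluster the same idea would need a shared counter (e.g. in ZooKeeper or a distributed cache) rather than a process-local semaphore.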



> On Feb 13, 2019, at 4:38 PM, Joe Witt  wrote:
> 
> Milan
> 
> The only mechanism to limit whether a given component executes on a given
> nifi node or not is whether it runs on all or whether it runs on primary
> node.
> 
> Can you describe the use case that would lead to wanting to run a component
> on a specified number of nodes as opposed to all nodes?  It would defy the
> clustering logic/data distribution mechanisms of site to site and load
> balanced connections for example.
> 
> Thanks
> 
> On Wed, Feb 13, 2019 at 4:34 PM Milan Das  wrote:
> 
>> Hello,
>> Is there a way to limit the number of threads to fewer than the number of
>> cluster members?
>> I have a 5-node NiFi cluster and I want to run 3 instances of the
>> processor in distributed mode.
>> 
>> Thanks,
>> Milan Das
>> 
>> 



Controlling number of threads in distributed mode

2019-02-13 Thread Milan Das
Hello,
Is there a way to limit the number of threads to fewer than the number of
cluster members?
I have a 5-node NiFi cluster and I want to run 3 instances of the processor in
distributed mode.

Thanks,
Milan Das



Re: NIFI DBCP connection pool not working for oracle.

2019-02-01 Thread Milan Das
1. Did you configure the Oracle JDBC driver in the DBCP Connection Pool
(Database Driver Class Name and the driver location)?
2. Is the JDBC driver JAR accessible to the NiFi run-as user?

Thanks,
Milan Das
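Milan's second question is usually the culprit with "Cannot load JDBC driver class": the JAR path given in the pool's "Database Driver Location(s)" property is not readable by the account NiFi runs as. A self-contained Python sketch of that check (the path is hypothetical; a stand-in file is created so the snippet runs anywhere):

```python
import os
import tempfile

# Hypothetical driver path: in the DBCP Connection Pool this is the value of
# the "Database Driver Location(s)" property. Create a stand-in file so the
# check is self-contained.
driver_jar = os.path.join(tempfile.gettempdir(), "ojdbc8.jar")
with open(driver_jar, "wb"):
    pass

# The NiFi run-as user must be able to read the JAR, or driver loading fails
# with "Cannot load JDBC driver class 'oracle.jdbc.driver.OracleDriver'".
readable = os.access(driver_jar, os.R_OK)
print("driver JAR readable:", readable)
```

On a real server, run the equivalent check as the NiFi service account (e.g. `sudo -u nifi test -r /path/to/ojdbc8.jar`).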


> On Feb 1, 2019, at 2:18 AM, vishwa  wrote:
> 
> Hi, 
> 
> I'm trying to create an Oracle connection pool via the DBCP Connection Pool
> controller service.
> 
> Database Connection URL -- jdbc:oracle:thin:@TTTEST:1521:SID
> Database Driver Class Name  -- oracle.jdbc.driver.OracleDriver
> Database User- Username
> Database User- Password
> 
> I am not able to read the data; I am getting the error below.
> 
> ".QueryDatabaseTable
> QueryDatabaseTable[id=a3a9501e-0168-1000-a5fc-4a85d8aa8cef] Unable to
> execute SQL select query SELECT * FROM emp due to
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException:
> Cannot load JDBC driver class 'oracle.jdbc.driver.OracleDriver':
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException:
> Cannot load JDBC driver class 'oracle.jdbc.driver.OracleDriver'
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException:
> Cannot load JDBC driver class 'oracle.jdbc.driver.OracleDriver'"
> 
> Can someone help with this?
> 
> 
> 
> --
> Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/



Re: NIFI install and setup

2019-01-04 Thread Milan Das
1. Are you trying to set up a standalone single server or a cluster? If
standalone, just untar and start NiFi. If a cluster, more configuration steps
are needed.
2. Do you want any security?
3. If a cluster, do you want to use an embedded or external ZooKeeper?

Thanks,
Milan

> On Jan 4, 2019, at 4:49 PM, Monark Pandey  
> wrote:
> 
> Hi Team,
> 
> I am reaching out to you as I need guidance on installation steps for NIFI.
> 
> I am currently trying to accomplish an MVP using NiFi for our ETL platform.
> 
> Regards,
> Monark Pandey
> 
> Engineer
> Commercial Data Engineering
> Phoenix, AZ
> 
> American Express made the following annotations 
> 
> "This message and any attachments are solely for the intended recipient and 
> may contain confidential or privileged information. If you are not the 
> intended recipient, any disclosure, copying, use, or distribution of the 
> information 
> 
> included in this message and any attachments is prohibited. If you have 
> received this communication in error, please notify us by reply e-mail and 
> immediately and permanently delete this message and any attachments. Thank 
> you." 
> 



Re: Set root level access policies using NiFi REST API

2018-12-27 Thread Milan Das
Thank you. I used the Chrome developer tools to track all the API calls.
I documented all the steps in my blog, since many users need this.
https://milandas.wordpress.com/2018/12/27/nifi-grant-all-access-to-initial-admin-user/
 

Thanks,
Milan Das
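The approach in the blog — replaying the calls the UI makes — comes down to POSTing policy entities to `/nifi-api/policies`. A hedged sketch of building such a request body (the IDs and identity are placeholders, and the exact field names should be verified against your NiFi version's REST API, e.g. via the same dev-tools trace):

```python
import json

def policy_payload(resource, action, user_id, identity):
    """Build the JSON body for POST /nifi-api/policies.

    Field names here reflect what the NiFi UI sends as observed in browser
    dev tools; confirm the schema against your NiFi version.
    """
    return {
        "revision": {"version": 0},
        "component": {
            "resource": resource,  # e.g. "/flow", "/policies", "/tenants"
            "action": action,      # "read" or "write"
            "users": [
                {"id": user_id,
                 "component": {"id": user_id, "identity": identity}}
            ],
        },
    }

body = json.dumps(policy_payload("/flow", "read", "1234-abcd", "nifiadmin"))
print(body)
```

The same payload shape works for granting the cluster-node identities access, which matters once the instance is clustered.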



> On Dec 18, 2018, at 4:50 PM, Andy LoPresto  wrote:
> 
> You can also use tools like the NiFi CLI [1] or community-provided tools like 
> NiPyAPI [2] to exercise the REST API via command-line actions rather than 
> having to execute individual curl commands, etc. 
> 
> [1] https://github.com/apache/nifi/tree/master/nifi-toolkit/nifi-toolkit-cli
> [2] https://nipyapi.readthedocs.io/en/latest/readme.html
> 
> 
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
>> On Dec 18, 2018, at 11:05 AM, Bryan Bende  wrote:
>> 
>> Anything you can do from the UI you can do from the REST API.
>> 
>> You can open something like the Chrome Dev tools and watch the network tab
>> while performing the desired action in the UI. Then you can see what API
>> calls the UI makes.
>> On Tue, Dec 18, 2018 at 1:53 PM Milan Das  wrote:
>>> 
>>> I am wondering if it is possible to set root level access policies using 
>>> NiFi REST API.
>>> 
>>> 
>>> 
>>> There is an unanswered forum thread.
>>> 
>>> https://community.hortonworks.com/answers/213913/post.html
>>> 
>>> 
>>> 
>>> Thanks,
>>> 
>>> Milan Das
>>> 
> 



Set root level access policies using NiFi REST API

2018-12-18 Thread Milan Das
I am wondering if it is possible to set root level access policies using NiFi 
REST API. 

 

There is an unanswered forum thread.

https://community.hortonworks.com/answers/213913/post.html

 

Thanks,

Milan Das



Re: HortonworksSchemaRegistry kerberos/TLS support

2018-12-17 Thread Milan Das
Sure, I will give it a try. Is there an open ticket for Kerberos?
If not, I can log an issue and may be able to work on it as well.

Thanks,
Milan

On 12/17/18, 12:08 PM, "Pierre Villard"  wrote:

Hey Milan,

You might be interested in this JIRA [1]. There is a pull request, and
feedback would certainly be appreciated if you can give it a try.

[1] https://issues.apache.org/jira/browse/NIFI-5753

Thanks,
Pierre

On Mon, Dec 17, 2018 at 17:36, Milan Das wrote:

> Hello Team,
>
> I do not see an option to use Kerberos or an SSL truststore in
> HortonworksSchemaRegistry. Is there a way to use Kerberos/TLS with it?
>
>
>
> Thanks,
>
    > Milan
>
>
>
> Milan Das
> Sr. System Architect
> email: m...@interset.com
> mobile: +1 678 216 5660
> www.interset.com
>
>
>
>
>
>





HortonworksSchemaRegistry kerberos/TLS support

2018-12-17 Thread Milan Das
Hello Team,

I do not see an option to use Kerberos or an SSL truststore in
HortonworksSchemaRegistry. Is there a way to use Kerberos/TLS with it?

 

Thanks,

Milan

 

Milan Das
Sr. System Architect
email: m...@interset.com
mobile: +1 678 216 5660
www.interset.com

 

 



Re: Secured NIFI (clustered) error on ListFile viewstate

2018-12-14 Thread Milan Das
Hi Andy,
I have tested it with NiFi 1.8.0 and get a different error:

Unable to get the state for the specified component 
afc9148c-0167-1000--a97bf4b6: java.io.IOException: Failed to obtain 
value from ZooKeeper for component with ID afc9148c-0167-1000--a97bf4b6 
with exception code CONNECTIONLOSS



Thanks,
Milan Das

On 11/8/18, 6:41 PM, "Andy LoPresto"  wrote:

There was an issue with malformed replicated requests causing a timeout [1] 
in previous versions, and this was resolved in 1.8.0. Could you try against 
1.8.0? This issue had nothing to do with ZK though, so if the only difference 
between working/failing is whether ZK is secured, I think the problem is there. 
Are you sure that your ZK configs are correct?

[1] 
https://github.com/apache/nifi/commit/748cf745628dab20b7e71f12b5dcfe6ed0bbf134

 
Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Nov 9, 2018, at 10:35 AM, Milan Das  wrote:
> 
> Hi Andy,
> Thanks for your response.
> I have tested it with 1.4, 1.6, and 1.7.1.
> It is an external ZooKeeper.
> It works when ZooKeeper is not secured.
> ZooKeeper is up and running; otherwise the NiFi cluster state would not show
running.
    > 
> Thanks,
> Milan Das
> 
> 
> 
> On 11/8/18, 6:32 PM, "Andy LoPresto"  wrote:
> 
>Hi Milan,
> 
>What version of NiFi are you using? Are you using the internal ZK 
instance or a standalone instance? Did this work before/without ZK JaaS?
> 
>It appears the error is on request replication to another node, and 
the node is not listening/responding to the request. 
> 
>Andy LoPresto
>alopre...@apache.org
>alopresto.apa...@gmail.com
>PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
>> On Nov 9, 2018, at 2:07 AM, Milan Das  wrote:
>> 
>> Hello Nifi team,
>> 
>> Wondering if it is something more I need to do.
>> 
>> Otherwise I am planning to log a defect in Jira
>> 
>> 
>> 
>> Thanks,
>> 
>> Milan
>> 
>> 
>> 
>> From: Milan Das 
>> Date: Friday, November 2, 2018 at 10:08 AM
>> To: "dev@nifi.apache.org" 
>> Subject: Secured NIFI (clustered) error on ListFile viewstate
>> 
>> 
>> 
>> I have a (Kerberos) secured NiFi cluster connecting to a SASL-secured
>> ZooKeeper. zookeeper-jaas is configured in bootstrap.conf, and the cluster
>> starts clean.
>> 
>> Not sure if there is something else that needs to be done.
>> 
>> 
>> 
>> Configurations are below:
>> 
>> 
>> 
>> conf/state-management.xml
>> 
>>     <cluster-provider>
>>         <id>zk-provider</id>
>>         <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
>>         <property name="Connect String"></property>
>>         <property name="Root Node">/nifi</property>
>>         <property name="Session Timeout">10 seconds</property>
>>         <property name="Access Control">Open</property>
>>     </cluster-provider>
>> 
>> 
>> 
>> 
>> 
>> conf/bootstrap.conf
>> 
>> java.arg.16=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf
>> 
>> 
>> 
>> 
>> 
>> Error Message:
>> 
>> 
>> 
>> 2018-11-02 13:58:22,829 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
/nifi-api/processors/cc7b96b9-0166-1000--87736df4/state to 
hdp265-secured-i56.interset.com:9443 due to javax.ws.rs.ProcessingException: 
java.net.SocketTimeoutException: Read timed out
>> 
>> 2018-11-02 13:58:22,831 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator
>> 
>> javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Read 
timed out
>> 
>>   at 
org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)
>> 
>>   at 
org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
>> 
>>   at 
org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
>> 
>>   at 
org.glassfish.jersey.internal.Errors.process(Errors.java:31

Re: Custom processors/controller services without Maven

2018-11-15 Thread Milan Das
Trust me, you are asking for trouble; I am not sure any user has done that.

You may follow this guide and then try building the NAR (a zip) on your own:
https://www.nifi.rocks/developing-a-custom-apache-nifi-processor-json/


Thanks,
Milan Das
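If you do go without Maven, the artifact you must reproduce by hand is the layout the nifi-nar-maven-plugin normally builds: a NAR is a ZIP/JAR with bundle coordinates in its manifest and its dependency JARs under `META-INF/bundled-dependencies/`. A rough Python sketch of that structure (coordinates and file names are made up; inspect a plugin-built NAR for the full set of manifest entries):

```python
import io
import zipfile

# Hypothetical bundle coordinates for an example NAR.
manifest = (
    "Manifest-Version: 1.0\r\n"
    "Nar-Id: my-processors-nar\r\n"
    "Nar-Group: com.example\r\n"
    "Nar-Version: 1.0.0\r\n"
    "\r\n"
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as nar:
    nar.writestr("META-INF/MANIFEST.MF", manifest)
    # The compiled processor classes ship as a JAR bundled here, along with
    # every runtime dependency that Maven would normally resolve for you.
    nar.writestr("META-INF/bundled-dependencies/my-processors.jar", b"")

print("entries:", zipfile.ZipFile(buf).namelist())
```

Collecting those bundled dependencies manually is exactly the chore Maven automates, which is why this route is discouraged above.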

On 11/15/18, 4:54 PM, "Arun kumar"  wrote:

Hi,

I am looking for ways to write custom processors and controller services
and generate a NAR from Eclipse without Maven dependencies. Please provide
pointers.

Thanks.





Re: Secured NIFI (clustered) error on ListFile viewstate

2018-11-08 Thread Milan Das
Hi Andy,
Thanks for your response.
I have tested it with 1.4, 1.6, and 1.7.1.
It is an external ZooKeeper.
It works when ZooKeeper is not secured.
ZooKeeper is up and running; otherwise the NiFi cluster state would not show
running.

Thanks,
Milan Das



On 11/8/18, 6:32 PM, "Andy LoPresto"  wrote:

Hi Milan,

What version of NiFi are you using? Are you using the internal ZK instance 
or a standalone instance? Did this work before/without ZK JaaS?

It appears the error is on request replication to another node, and the 
node is not listening/responding to the request. 

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Nov 9, 2018, at 2:07 AM, Milan Das  wrote:
> 
> Hello Nifi team,
> 
> Wondering if it is something more I need to do.
> 
> Otherwise I am planning to log a defect in Jira
> 
> 
> 
> Thanks,
> 
> Milan
> 
> 
> 
> From: Milan Das 
> Date: Friday, November 2, 2018 at 10:08 AM
> To: "dev@nifi.apache.org" 
> Subject: Secured NIFI (clustered) error on ListFile viewstate
> 
> 
> 
> I have a (Kerberos) secured NiFi cluster connecting to a SASL-secured
> ZooKeeper. zookeeper-jaas is configured in bootstrap.conf, and the cluster
> starts clean.
> 
> Not sure if there is something else that needs to be done.
> 
> 
> 
> Configurations are below:
> 
> 
> 
> conf/state-management.xml
> 
>     <cluster-provider>
>         <id>zk-provider</id>
>         <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
>         <property name="Connect String"></property>
>         <property name="Root Node">/nifi</property>
>         <property name="Session Timeout">10 seconds</property>
>         <property name="Access Control">Open</property>
>     </cluster-provider>
> 
> 
> 
> 
> 
> conf/bootstrap.conf
> 
> java.arg.16=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf
> 
> 
> 
> 
> 
> Error Message:
> 
> 
> 
> 2018-11-02 13:58:22,829 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
/nifi-api/processors/cc7b96b9-0166-1000--87736df4/state to 
hdp265-secured-i56.interset.com:9443 due to javax.ws.rs.ProcessingException: 
java.net.SocketTimeoutException: Read timed out
> 
> 2018-11-02 13:58:22,831 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator
> 
> javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Read 
timed out
> 
>at 
org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)
> 
>at 
org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
> 
>at 
org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
> 
>at 
org.glassfish.jersey.internal.Errors.process(Errors.java:316)
> 
>at 
org.glassfish.jersey.internal.Errors.process(Errors.java:298)
> 
>at 
org.glassfish.jersey.internal.Errors.process(Errors.java:229)
> 
>at 
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)
> 
>at 
org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)
> 
>at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:661)
> 
>at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:875)
> 
>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 
>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 
>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 
>at java.lang.Thread.run(Thread.java:748)
> 
> Caused by: java.net.SocketTimeoutException: Read timed out
> 
>at java.net.SocketInputStream.socketRead0(Native Method)
> 
>at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 
>at 
java.net.SocketInputStream.read(SocketInputStream.java:171)
> 
>

Re: Secured NIFI (clustered) error on ListFile viewstate

2018-11-08 Thread Milan Das
Hello Nifi team,

Wondering if it is something more I need to do.

Otherwise I am planning to log a defect in Jira

 

Thanks,

Milan

 

From: Milan Das 
Date: Friday, November 2, 2018 at 10:08 AM
To: "dev@nifi.apache.org" 
Subject: Secured NIFI (clustered) error on ListFile viewstate

 

I have a (Kerberos) secured NiFi cluster connecting to a SASL-secured
ZooKeeper. zookeeper-jaas is configured in bootstrap.conf, and the cluster
starts clean.

Not sure if there is something else that needs to be done.

 

Configurations are below:

 

conf/state-management.xml

    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String"></property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">10 seconds</property>
        <property name="Access Control">Open</property>
    </cluster-provider>

 

 

conf/bootstrap.conf

java.arg.16=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf

 

 

Error Message:

 

2018-11-02 13:58:22,829 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
/nifi-api/processors/cc7b96b9-0166-1000--87736df4/state to 
hdp265-secured-i56.interset.com:9443 due to javax.ws.rs.ProcessingException: 
java.net.SocketTimeoutException: Read timed out

2018-11-02 13:58:22,831 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator

javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Read timed out

at 
org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)

at 
org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)

at 
org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)

at org.glassfish.jersey.internal.Errors.process(Errors.java:316)

at org.glassfish.jersey.internal.Errors.process(Errors.java:298)

at org.glassfish.jersey.internal.Errors.process(Errors.java:229)

at 
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)

at 
org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)

at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:661)

at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:875)

at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.net.SocketTimeoutException: Read timed out

at java.net.SocketInputStream.socketRead0(Native Method)

at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)

at java.net.SocketInputStream.read(SocketInputStream.java:171)

at java.net.SocketInputStream.read(SocketInputStream.java:141)

at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)

at sun.security.ssl.InputRecord.read(InputRecord.java:503)

at 
sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)

at 
sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)

at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)

at 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246)

at 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286)

at 
java.io.BufferedInputStream.read(BufferedInputStream.java:345)

at 
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)

at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)

at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)

at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)

at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)

at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)

at 
org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:390)

at 
org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:282)

... 14 common frames omitted



Re: Contributing to NIFI (Hazelcast Custom processor)

2018-11-04 Thread Milan Das
Thanks Mike.
I have created https://issues.apache.org/jira/browse/NIFI-5786
I will create a PR with NIFI-5786 and request for review.

Thanks,
Milan Das

On 11/4/18, 8:00 PM, "Mike Thomsen"  wrote:

Milan,

Yeah, add at least a Jira ticket that describes what you did. Doesn't have
to be multiple, unless you just want to break it down into smaller and
faster reviews. (ex. if the PR is 10k LOC and 4 processors, might be faster
to make it 4 tickets)

Thanks,

Mike

On Sun, Nov 4, 2018 at 7:46 PM Milan Das  wrote:

> Hello PMC,
>
> I am writing a custom processor to use Hazelcast through NIFI.  I went
>  through the complete documentation at
> 
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-HowtocontributetoApacheNiFi
> .
>
> Before I create a PR do I need to open any JIRA tickets for added features
> ?
>
    >
>
> Thanks,
>
> Milan Das
>
>
>
>





Contributing to NIFI (Hazelcast Custom processor)

2018-11-04 Thread Milan Das
Hello PMC,

I am writing a custom processor to use Hazelcast through NiFi. I went through
the complete documentation at
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-HowtocontributetoApacheNiFi.

Before I create a PR, do I need to open any JIRA tickets for the added features?

 

Thanks,

Milan Das

 



Secured NIFI (clustered) error on ListFile viewstate

2018-11-02 Thread Milan Das
I have a (Kerberos) secured NiFi cluster connecting to a SASL-secured
ZooKeeper. zookeeper-jaas is configured in bootstrap.conf, and the cluster
starts clean.

Not sure if there is something else that needs to be done.

 

Configurations are below:

 

conf/state-management.xml

    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String"></property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">10 seconds</property>
        <property name="Access Control">Open</property>
    </cluster-provider>

 

 

conf/bootstrap.conf

java.arg.16=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf
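For reference, the file that `java.arg.16` points at is a standard ZooKeeper client JAAS configuration. A typical zookeeper-jaas.conf for a SASL/Kerberos client looks like the following (the keytab path, principal, and realm are placeholders for your environment):

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/nifi.service.keytab"
    principal="nifi/nifi-host.example.com@EXAMPLE.COM";
};
```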

 

 

Error Message:

 

2018-11-02 13:58:22,829 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
/nifi-api/processors/cc7b96b9-0166-1000--87736df4/state to 
hdp265-secured-i56.interset.com:9443 due to javax.ws.rs.ProcessingException: 
java.net.SocketTimeoutException: Read timed out

2018-11-02 13:58:22,831 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator

javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Read timed out

    at 
org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)

    at 
org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)

    at 
org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)

    at org.glassfish.jersey.internal.Errors.process(Errors.java:316)

    at org.glassfish.jersey.internal.Errors.process(Errors.java:298)

    at org.glassfish.jersey.internal.Errors.process(Errors.java:229)

    at 
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)

    at 
org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)

    at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:661)

    at 
org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:875)

    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

    at java.util.concurrent.FutureTask.run(FutureTask.java:266)

    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

    at java.lang.Thread.run(Thread.java:748)

Caused by: java.net.SocketTimeoutException: Read timed out

    at java.net.SocketInputStream.socketRead0(Native Method)

    at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)

    at java.net.SocketInputStream.read(SocketInputStream.java:171)

    at java.net.SocketInputStream.read(SocketInputStream.java:141)

    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)

    at sun.security.ssl.InputRecord.read(InputRecord.java:503)

    at 
sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)

    at 
sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)

    at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)

    at 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246)

    at 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286)

    at 
java.io.BufferedInputStream.read(BufferedInputStream.java:345)

    at 
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)

    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)

    at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)

    at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)

    at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)

    at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)

    at 
org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:390)

    at 
org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:282)

    ... 14 common frames omitted



Re: Hi

2018-10-30 Thread Milan Das
I meant the ParseEvtx processor.

On 10/30/18, 7:37 AM, "Milan Das"  wrote:

Did you try the WinEventLogProcessor?
Thanks,
    Milan Das

On 10/29/18, 11:17 PM, "find" <907611...@qq.com> wrote:

Hi,
Sorry to disturb you. I want to parse a .evtx (Windows event log) file
using NiFi (https://github.com/apache/nifi), but I don't know how to use it
(what specific code to implement). Do you have a demo? Thanks in advance.







Re: Hi

2018-10-30 Thread Milan Das
Did you try the WinEventLogProcessor?
Thanks,
Milan Das

On 10/29/18, 11:17 PM, "find" <907611...@qq.com> wrote:

Hi,
Sorry to disturb you. I want to parse a .evtx (Windows event log) file
using NiFi (https://github.com/apache/nifi), but I don't know how to use it
(what specific code to implement). Do you have a demo? Thanks in advance.




Controller Service not loading ERROR: The service APIs should not be bundled with the implementations.

2018-10-29 Thread Milan Das
Hello NIFI Dev,

I am trying to add two new controller services and am getting an error with
one of them. I am not sure what went wrong.

I am guessing it is possibly the Maven dependency scope of the NiFi JARs I
have added, like nifi-hbase-client-service-api and
nifi-distributed-cache-client-service-api.

 

 

2018-10-29 21:35:17,614 WARN [main] org.apache.nifi.nar.ExtensionManager 
Controller Service com.interset.nifi.hbase.CDHHBase_ClientService is bundled 
with its supporting APIs com.interset.nifi.hbase.CDHHBaseClientService. The 
service APIs should not be bundled with the implementations.

2018-10-29 21:35:17,614 ERROR [main] org.apache.nifi.nar.ExtensionManager 
Skipping Controller Service com.interset.nifi.hbase.CDHHBase_ClientService 
because it is bundled with its supporting APIs and requires instance class 
loading.

2018-10-29 21:35:21,523 WARN [main] o.a.n.d.html.HtmlDocumentationWriter Could 
not link to com.interset.nifi.hbase.CDHHBase_ClientService because no bundles 
were found
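The error above usually means the service API interface was packaged inside the implementation NAR. The standard remedy is to split the bundle so the API lives in its own module/NAR and the implementation depends on it without bundling it — a typical Maven module layout (module names here are hypothetical):

```
cdh-hbase-bundle/
├── cdh-hbase-client-service-api/   # interface only; packaged in its own API NAR
├── cdh-hbase-client-service/       # implementation; depends on the API module
│                                   # with <scope>provided</scope> so the API
│                                   # classes are NOT bundled into this NAR
└── cdh-hbase-client-service-nar/   # NAR packaging for the implementation;
                                    # declares the API NAR as its parent
```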

 

 

Thanks,

Milan Das

 



Re: Load issues

2018-10-23 Thread Milan Das
Did you increase the no. of Concurrent Tasks and Run Duration on slow 
processors ?

Thanks,
Milan Das

On 10/23/18, 8:26 PM, "Phil H"  wrote:

Hi team,

My NiFi is struggling to process data (lots of back pressure in various
queues) but it is only using a very small amount (2-5%) of CPU according to
top.

Any ideas?





Re: Unable to List Queue

2018-10-15 Thread Milan Das
Hi Brian,
Yes, that was the problem.
I didn't know that the cluster node identities also needed to be added. After
adding them, it worked.
Thanks a lot.

Thanks,
Milan Das
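Bryan's point below — that each cluster node proxies the user's request and must itself hold the "view the data" policy — can be modeled as a check over the whole proxy chain. A conceptual Python sketch (identities are examples; this models NiFi's behavior, it is not NiFi code):

```python
# Identities granted the "view the data" policy: the end user plus the
# certificate identity of every cluster node that may proxy the request.
policy_users = {
    "nifiadmin@example.com",
    "node1.example.com",
    "node2.example.com",
}

def can_view_data(chain, users=policy_users):
    """Every identity in the proxy chain must be authorized, or the whole
    request is rejected with Forbidden."""
    return all(identity in users for identity in chain)

print(can_view_data(["nifiadmin@example.com", "node1.example.com"]))   # user + known node
print(can_view_data(["nifiadmin@example.com", "badnode.example.com"]))  # unknown node fails
```

This is why the error only appeared with `is.cluster` enabled: standalone requests have a one-element chain.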

On 10/15/18, 5:44 PM, "Bryan Bende"  wrote:

Just to confirm, the cluster nodes are also granted access to "view the 
data"?

That is the main difference between clustered vs non-clustered, so I
would think something is not correct with the access policies for the
nodes.
On Mon, Oct 15, 2018 at 5:29 PM Milan Das  wrote:
>
> Hi Bryan
> Thanks for your response.
> The user has all access, including "view the data", at the root process group
level. It works when is.cluster is false; it does not work when is.cluster is true.
    >
> Thanks,
> Milan Das
>
>
> On 10/15/18, 2:56 PM, "Bryan Bende"  wrote:
>
> The error message is saying your user does not have permission to view
> the data for the given processor.
>
> There is a specific policy for viewing data which is described in the
> admin guide component policies [1], the policy named "view the data".
>
> I think you should be able to create the "view the data" policy on the
> root process group to allow the user to see all data, but I can't
> remember off the top of my head.
>
> I think the users representing the nodes also might need to be in that
> policy as well, since in a cluster the requests are being proxied and
> it needs to ensure the node proxying the user is also authorized to
> receive the data.
>
> [1] 
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#component-level-access-policies
> On Mon, Oct 15, 2018 at 2:20 PM Milan Das  wrote:
> >
> > Hello Nifi Team,
> >
> > I am having an issue only when cluster mode is on.
> >
> >
> >
> > Issue is, I am unable to list Queue on secured cluster. It is 
communicating on sasl with Zookeeper and the cluster is configured with TLS 
encryption and nifi.security.user.login.identity.provider=kerberos-provider
> >
> >
> >
> >  Queue on Success Queue: My flow is simple GenerateFlowFile 
(success) --> Funnel.
> >
> >
> >
> > Yes I added all policies at root level to user nifiadmin1. This 
works when I set the cluster to false.
> >
> >
> >
> > NIFI version : 1.6.0
> >
> >
> >
> >
> >
> >
> >
> > Error:
> >
> >
> >
> > 2018-10-14 15:03:21,620 INFO [NiFi Web Server-38] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for 
nifiadm...@interset.com
> >
> > 2018-10-14 15:03:21,621 INFO [NiFi Web Server-38] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Unable to 
view the data for Processor with ID 7312084e-0166-1000--6ef08dd3. 
Returning Forbidden response.
> >
> > 2018-10-14 15:03:21,623 INFO [NiFi Web Server-40] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Node 
ip-172-30-1-235.ec2.internal:8443 is unable to fulfill this request due to: 
Unable to view the data for Processor with ID 
7312084e-0166-1000--6ef08dd3. Contact the system administrator. 
Returning Forbidden response.
> >
> > 2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Attempting request for 
() POST 
https://ip-172-30-1-235.ec2.internal:8443/nifi-api/flowfile-queues/73121f31-0166-1000--24972726/listing-requests
 (source ip: 172.30.1.235)
> >
> > 2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifiadmin1@
> >
> >
> >
> > Thanks,
> >
> > Milan Das
> >
>
>
>





Re: Unable to List Queue

2018-10-15 Thread Milan Das
Hi Bryan
Thanks for your response.
The user has all access, including "view the data", at the root process group
level. It works when is.cluster is false; it does not work when is.cluster is
true.

Thanks,
Milan Das


On 10/15/18, 2:56 PM, "Bryan Bende"  wrote:

The error message is saying your user does not have permission to view
the data for the given processor.

There is a specific policy for viewing data which is described in the
admin guide component policies [1], the policy named "view the data".

I think you should be able to create the "view the data" policy on the
root process group to allow the user to see all data, but I can't
remember off the top of my head.

I think the users representing the nodes also might need to be in that
policy as well, since in a cluster the requests are being proxied and
it needs to ensure the node proxying the user is also authorized to
receive the data.

[1] 
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#component-level-access-policies
On Mon, Oct 15, 2018 at 2:20 PM Milan Das  wrote:
>
> Hello Nifi Team,
>
> I am having an issue only when cluster mode is on.
>
>
>
> Issue is, I am unable to list Queue on secured cluster. It is 
communicating on sasl with Zookeeper and the cluster is configured with TLS 
encryption and nifi.security.user.login.identity.provider=kerberos-provider
>
>
>
>  Queue on Success Queue: My flow is simple GenerateFlowFile (success) --> 
Funnel.
>
>
>
> Yes I added all policies at root level to user nifiadmin1. This works 
when I set the cluster to false.
>
>
>
> NIFI version : 1.6.0
>
>
>
>
>
>
>
> Error:
>
>
>
> 2018-10-14 15:03:21,620 INFO [NiFi Web Server-38] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for 
nifiadm...@interset.com
>
> 2018-10-14 15:03:21,621 INFO [NiFi Web Server-38] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Unable to 
view the data for Processor with ID 7312084e-0166-1000--6ef08dd3. 
Returning Forbidden response.
>
> 2018-10-14 15:03:21,623 INFO [NiFi Web Server-40] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Node 
ip-172-30-1-235.ec2.internal:8443 is unable to fulfill this request due to: 
Unable to view the data for Processor with ID 
7312084e-0166-1000--6ef08dd3. Contact the system administrator. 
Returning Forbidden response.
>
> 2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Attempting request for 
() POST 
https://ip-172-30-1-235.ec2.internal:8443/nifi-api/flowfile-queues/73121f31-0166-1000--24972726/listing-requests
 (source ip: 172.30.1.235)
>
> 2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifiadmin1@
>
>
>
> Thanks,
>
> Milan Das
>





Unable to List Queue

2018-10-15 Thread Milan Das
Hello Nifi Team,

I am having an issue only when cluster mode is on. 

 

Issue is, I am unable to list Queue on secured cluster. It is communicating on 
sasl with Zookeeper and the cluster is configured with TLS encryption and 
nifi.security.user.login.identity.provider=kerberos-provider

 

 Queue on Success Queue: My flow is simple GenerateFlowFile (success) --> 
Funnel. 

 

Yes I added all policies at root level to user nifiadmin1. This works when I 
set the cluster to false.

 

NIFI version : 1.6.0

 

 

 

Error:

 

2018-10-14 15:03:21,620 INFO [NiFi Web Server-38] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for 
nifiadm...@interset.com

2018-10-14 15:03:21,621 INFO [NiFi Web Server-38] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Unable to 
view the data for Processor with ID 7312084e-0166-1000--6ef08dd3. 
Returning Forbidden response.

2018-10-14 15:03:21,623 INFO [NiFi Web Server-40] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Node 
ip-172-30-1-235.ec2.internal:8443 is unable to fulfill this request due to: 
Unable to view the data for Processor with ID 
7312084e-0166-1000--6ef08dd3. Contact the system administrator. 
Returning Forbidden response.

2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Attempting request for 
() POST 
https://ip-172-30-1-235.ec2.internal:8443/nifi-api/flowfile-queues/73121f31-0166-1000--24972726/listing-requests
 (source ip: 172.30.1.235)

2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifiadmin1@

 

Thanks,

Milan Das



Re: NIFI single node in cluster mode

2018-10-14 Thread Milan Das
Thanks all for the advice.
Found the problem: I was adding two -D parameters on a single java.arg.N line. After 
splitting them onto two separate lines, it worked.

Now I see a problem when I list the success queue: my flow is simply 
GenerateFlowFile (success) --> Funnel.
Yes I added all policies at root level to user nifiadmin1. This works when I 
set the cluster to false.

NIFI version : 1.6.0



Error:

2018-10-14 15:03:21,620 INFO [NiFi Web Server-38] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for 
nifiadm...@interset.com
2018-10-14 15:03:21,621 INFO [NiFi Web Server-38] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Unable to 
view the data for Processor with ID 7312084e-0166-1000--6ef08dd3. 
Returning Forbidden response.
2018-10-14 15:03:21,623 INFO [NiFi Web Server-40] 
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[nifiadm...@interset.com], 
groups[] does not have permission to access the requested resource. Node 
ip-172-30-1-235.ec2.internal:8443 is unable to fulfill this request due to: 
Unable to view the data for Processor with ID 
7312084e-0166-1000--6ef08dd3. Contact the system administrator. 
Returning Forbidden response.
2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Attempting request for 
() POST 
https://ip-172-30-1-235.ec2.internal:8443/nifi-api/flowfile-queues/73121f31-0166-1000--24972726/listing-requests
 (source ip: 172.30.1.235)
2018-10-14 15:03:21,633 INFO [NiFi Web Server-138] 
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifiadmin1@

Regards,
Milan Das


Milan Das
Sr. System Architect
email: m...@interset.com
<https://www.linkedin.com/in/milandas/>
www.interset.com <http://www.interset.com/>
 


On 10/13/18, 2:39 PM, "Jeff"  wrote:

Milan,

If you haven't already done so, please take a look at the NiFi Admin
Guide's sections "Securing Zookeeper" [1] and "Kerberizing NiFi’s ZooKeeper
Client" [2], which should help you configure NiFi to use a kerberized
ZooKeeper.

[1]

https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#securing_zookeeper
[2]

https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#zk_kerberos_client
    
On Sat, Oct 13, 2018 at 9:38 AM Milan Das  wrote:

> Problem is I am using Kerbrized zookeeper and it is failing to create nifi
> basepath. Even if TGT is getting created Authentication is failing.
>
>
> 2018-10-13 13:33:53,573 INFO [Thread-12] org.apache.zookeeper.Login TGT
> refresh thread started.
> 2018-10-13 13:33:53,576 INFO [Thread-12] org.apache.zookeeper.Login TGT
> valid starting at:Sat Oct 13 13:33:53 UTC 2018
> 2018-10-13 13:33:53,576 INFO [Thread-12] org.apache.zookeeper.Login TGT
> expires:  Sun Oct 14 13:33:53 UTC 2018
> 2018-10-13 13:33:53,577 INFO [Thread-12] org.apache.zookeeper.Login TGT
> refresh sleeping until: Sun Oct 14 09:38:53 UTC 2018
> 2018-10-13 13:33:53,577 INFO
> [main-SendThread(ip-172-30-1-132.ec2.internal:2181)]
> o.a.zookeeper.client.ZooKeeperSaslClient Client will use GSSAPI as SASL
> mechanism.
> 2018-10-13 13:33:53,606 INFO [main-EventThread]
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2018-10-13 13:33:53,616 ERROR
> [main-SendThread(ip-172-30-1-132.ec2.internal:2181)]
> o.a.zookeeper.client.ZooKeeperSaslClient SASL authentication failed using
> login context 'Client'.
> 2018-10-13 13:33:53,723 WARN [main]
> o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine the Elected
> Leader for role 'Cluster Coordinator' due to
> org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode 
=
> AuthFailed for /nifi/leaders/Cluster Coordinator; assuming no leader has
> been elected
> 2018-10-13 13:33:53,724 INFO [Curator-Framework-0]
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2018-10-13 13:33:53,726 INFO [main]
> o.apache.nifi.controller.FlowController It appears that no Cluster
> Coordinator has been Elected yet. Registering for Cluster Coordinator 
Role.
>
>
> Thanks,
> Milan Das
>
>
> On 10/12/18, 6:26 PM, "Bryan Bende"  wrote:
>
> There is also another property for the # of candidates to wait for 
when
> voting, if it sees the # of candidates first it will short circuit the
> time
> period. So setting the candidates to 1 for a single node cluster 
should
> start immediately.
>
> On Fri, Oct 12, 2018 at 5:59 PM Jon Logan  wrote:
>
> > It waits for elect

Re: NIFI single node in cluster mode

2018-10-13 Thread Milan Das
The problem is that I am using a Kerberized ZooKeeper, and NiFi is failing to create its 
base path. Even though the TGT is created, authentication fails.


2018-10-13 13:33:53,573 INFO [Thread-12] org.apache.zookeeper.Login TGT refresh 
thread started.
2018-10-13 13:33:53,576 INFO [Thread-12] org.apache.zookeeper.Login TGT valid 
starting at:Sat Oct 13 13:33:53 UTC 2018
2018-10-13 13:33:53,576 INFO [Thread-12] org.apache.zookeeper.Login TGT 
expires:  Sun Oct 14 13:33:53 UTC 2018
2018-10-13 13:33:53,577 INFO [Thread-12] org.apache.zookeeper.Login TGT refresh 
sleeping until: Sun Oct 14 09:38:53 UTC 2018
2018-10-13 13:33:53,577 INFO 
[main-SendThread(ip-172-30-1-132.ec2.internal:2181)] 
o.a.zookeeper.client.ZooKeeperSaslClient Client will use GSSAPI as SASL 
mechanism.
2018-10-13 13:33:53,606 INFO [main-EventThread] 
o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2018-10-13 13:33:53,616 ERROR 
[main-SendThread(ip-172-30-1-132.ec2.internal:2181)] 
o.a.zookeeper.client.ZooKeeperSaslClient SASL authentication failed using login 
context 'Client'.
2018-10-13 13:33:53,723 WARN [main] o.a.n.c.l.e.CuratorLeaderElectionManager 
Unable to determine the Elected Leader for role 'Cluster Coordinator' due to 
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed for /nifi/leaders/Cluster Coordinator; assuming no leader has been 
elected
2018-10-13 13:33:53,724 INFO [Curator-Framework-0] 
o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2018-10-13 13:33:53,726 INFO [main] o.apache.nifi.controller.FlowController It 
appears that no Cluster Coordinator has been Elected yet. Registering for 
Cluster Coordinator Role.


Thanks,
Milan Das
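For context, the login context 'Client' named in the SASL error above comes from the JAAS file passed to the JVM. A minimal sketch of the ZooKeeper client JAAS entry described in the admin guide (the keytab path and principal here are placeholders, not values from this thread):

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/path/to/nifi.keytab"
    storeKey=true
    useTicketCache=false
    principal="nifi@EXAMPLE.COM";
};
```

NiFi picks this file up via a bootstrap.conf entry such as java.arg.16=-Djava.security.auth.login.config=/path/to/jaas.conf, with each -D flag on its own java.arg.N line.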


On 10/12/18, 6:26 PM, "Bryan Bende"  wrote:

There is also another property for the # of candidates to wait for when
voting, if it sees the # of candidates first it will short circuit the time
period. So setting the candidates to 1 for a single node cluster should
start immediately.
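The two settings described above are nifi.properties values; a sketch for a single-node cluster (the property names are from the admin guide, the values are illustrative):

```
nifi.cluster.flow.election.max.wait.time=30 secs
nifi.cluster.flow.election.max.candidates=1
```

With max.candidates set to 1, the election completes as soon as the lone node registers instead of waiting out the full wait time.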

On Fri, Oct 12, 2018 at 5:59 PM Jon Logan  wrote:

> It waits for election for a specific period of time, which if I recall is
> fairly high (I think 5 minutes?). If you lower this it'll still wait for 
an
> election but will complete faster (we do 30 seconds, but you could do
> lower). There's a property controlling this.
>
> On Fri, Oct 12, 2018 at 5:41 PM Milan Das  wrote:
>
> > Hello Nifi team,
> >
> > Is it possible to run a single NiFi node in cluster mode? I have this
> > requirement because we will add other nodes soon down the line.
> >
> > I tried that by setting “nifi.cluster.is.node” and the zookeeper
> > settings, but it seems to wait forever for the election.
> >
> >
> >
    > > Appreciate your thoughts.
> >
> >
> >
> > Thanks,
> >
> > Milan Das
> >
> >
>
-- 
Sent from Gmail Mobile





ListFile is slow when it scan NFS mount

2018-09-20 Thread Milan Das
Hello All,

We are using NiFi 1.5 for one of our clients. ListFile seems to be very slow 
when it scans a directory hosted on an NFS mount.

A plain `ls` lists around 200 files/sec, but ListFile takes 20 minutes to list 
200 files.

 

Is there any known issue, or do custom settings need to be configured on ListFile?

 

Thanks,

Milan Das

 

 



How to Add DFM user /group ?

2018-08-19 Thread Milan Das
Since the “Initial Admin Identity” does not have the “access the controller – modify” 
permission by default, I need a way to add a DFM user.

I searched the documentation; it describes the DFM user role in detail, but nowhere 
does it mention how to add a DFM user or group.

 

The “Initial Admin Identity” user cannot add any NiFi component by default, because it 
only has read access via the “view the user interface” global policy.

 

Basically, I want to add a DFM user or grant DFM permissions to the “Initial Admin 
Identity”. I am using NiFi version 1.5.

 

Appreciate any help.

 

Thanks,

Milan Das

 

 



NIFI Multiple Kerberos configuration

2018-06-21 Thread Milan Das
Hello Team,

I have a fairly unique problem. We are integrating two Kerberized Hadoop systems, and 
each has its own Kerberos setup.

Is it possible to have two Kerberos KDC configurations in NiFi? The integration is 
Kafka on one Hadoop cluster to Kafka on the second.

Really appreciate any thoughts.

 

Regards,

Milan Das

 

Milan Das
Sr. System Architect
email: m...@interset.com
mobile: +1 678 216 5660
www.interset.com
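One approach worth considering (a generic Kerberos sketch, not NiFi-specific; all realm and KDC names below are made up for illustration) is to define both realms in the single krb5.conf file that nifi.kerberos.krb5.file points at:

```
[realms]
  HADOOP1.EXAMPLE.COM = {
    kdc = kdc1.hadoop1.example.com
  }
  HADOOP2.EXAMPLE.COM = {
    kdc = kdc2.hadoop2.example.com
  }

[domain_realm]
  .hadoop1.example.com = HADOOP1.EXAMPLE.COM
  .hadoop2.example.com = HADOOP2.EXAMPLE.COM
```

Each Kafka processor's principal (e.g. nifi@HADOOP1.EXAMPLE.COM) then selects which realm, and therefore which KDC, is used.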

 

 



Re: How to run NiFi on HTTPS

2018-05-23 Thread Milan Das
I use the following command to generate the certificate:

openssl req -new -keyout out/$I.key -out out/$I.request -days 365 -nodes -subj 
"/C=US/ST=California/L=Irvine/O=example com/CN=${I}" -newkey rsa:2048

The complete script I use (you can refer to it, but I cannot support it for your 
needs):
https://github.com/dmilan77/scripts/blob/master/ssl/createSSLCert.sh


You can verify using nmap or openssl:

openssl s_client -connect hostname:443 -tls1_2

nmap --script ssl-enum-ciphers -p 443 hostname



Regards,
Milan Das


On 5/23/18, 8:03 AM, "Brajendra Mishra" <brajendra_mis...@persistent.com> 
wrote:

I am using openssl for creating SSL certificate but I am unable to figure 
out which option(s) is required to make TLS supported certificate.
Could you please help me out on this. I am not aware much about these 
certificates and all?

Brajendra Mishra
Persistent Systems Ltd.

-Original Message-
From: Milan Das <m...@interset.com> 
Sent: Wednesday, May 23, 2018 5:21 PM
To: dev@nifi.apache.org; alopre...@apache.org
Subject: Re: How to run NiFi on HTTPS

Hi Bajendra,
TLS vs SSL is all depends on how you generate the SSL certificates.
If the SSL certificate is TLS 1.2 then JVM SSL protocol will use TLS.

Regards,
Milan
Sr. Architect-- Interset Inc

On 5/23/18, 7:40 AM, "Brajendra Mishra" <brajendra_mis...@persistent.com> 
wrote:

Hi Andy,

Thanks a lot for you valuable inputs. I could found the actual 
resolution after debugging the issue.
Earlier I was using IBM java 8 instead of Oracle Java 8/open JDK 8, 
where SSLv3 Protocol is not enabled by default, hence I was facing this issue.

Now I am able to see the NiFi UI securely but please let me know the 
way where I can run this nifi on TLS 1.0, 1.2 protocol instead of SSLv3 so I 
can use IBM Java 8 as well?

Brajendra Mishra
Persistent Systems Ltd.

From: Andy LoPresto <alopre...@apache.org>
Sent: Wednesday, May 23, 2018 4:16 AM
To: dev@nifi.apache.org
Subject: Re: How to run NiFi on HTTPS

Apache NiFi does not support Basic Authentication in any scenario. 
There are multiple options for user authentication to the NiFi UI/API, 
including LDAP, Kerberos, client certificates, Apache Knox, and OpenID Connect. 
More details about configuring these options are available in the Admin Guide 
[1].

As for your TLS error, my guess is that there is an error with the 
certificate you generated. The error “No overlapping cipher suites available” 
can occur when the certificate is expired or otherwise invalid, and all the 
available cipher suites require an RSA key for signing or encryption. To 
further debug this, you can use the OpenSSL s_client tool to attempt to make a 
connection via the command line, and enable the JSSE SSL debugging via a 
modification to bootstrap.conf. Once you restart, you should see additional 
TLS/SSL debug output in logs/nifi-bootstrap.conf.

For us to be able to offer further advice, you’ll need to provide more 
information, like stacktraces from your logs, or the openssl output from 
examining the certificates. Images do not come through on the list, so please 
copy and paste text output instead. There are other possible explanations, such 
as OS-limited cipher suites available, older browser versions, etc. but these 
are much less common.

Add this line to bootstrap.conf:

java.arg.15=-Djavax.net.debug=ssl,handshake

[1] 
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#user_authentication


Andy LoPresto
alopre...@apache.org<mailto:alopre...@apache.org>
alopresto.apa...@gmail.com<mailto:alopresto.apa...@gmail.com>
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

On May 22, 2018, at 5:53 AM, Brajendra Mishra 
<brajendra_mis...@persistent.com<mailto:brajendra_mis...@persistent.com>> wrote:

Team I need to know the implementation of basic authentication with 
HTTPS as well.

Brajendra Mishra
Persistent Systems Ltd.

From: Brajendra Mishra 
<brajendra_mis...@persistent.com<mailto:brajendra_mis...@persistent.com>>
Sent: Tuesday, May 22, 2018 6:22 PM
To: dev@nifi.apache.org<mailto:dev@nifi.apache.org>
Subject: How to run NiFi on HTTPS

Hi Team,

I have used tlstoolkit to create required files (nifi.properties, 
keystore and truststore files) to run NiFi on HTTPS.
I also configured successfully and ran the NiFi service correctly which 
show it is running on Https protocol.
But once I tried to see its UI I am facing follo

Re: How to run NiFi on HTTPS

2018-05-23 Thread Milan Das
Hi Brajendra,
Whether TLS or SSL is used depends on how you generate the SSL certificates.
If the SSL certificate is issued for TLS 1.2, then the JVM's SSL protocol will use TLS.

Regards,
Milan
Sr. Architect-- Interset Inc

On 5/23/18, 7:40 AM, "Brajendra Mishra"  
wrote:

Hi Andy,

Thanks a lot for you valuable inputs. I could found the actual resolution 
after debugging the issue.
Earlier I was using IBM java 8 instead of Oracle Java 8/open JDK 8, where 
SSLv3 Protocol is not enabled by default, hence I was facing this issue.

Now I am able to see the NiFi UI securely but please let me know the way 
where I can run this nifi on TLS 1.0, 1.2 protocol instead of SSLv3 so I can 
use IBM Java 8 as well?

Brajendra Mishra
Persistent Systems Ltd.

From: Andy LoPresto 
Sent: Wednesday, May 23, 2018 4:16 AM
To: dev@nifi.apache.org
Subject: Re: How to run NiFi on HTTPS

Apache NiFi does not support Basic Authentication in any scenario. There 
are multiple options for user authentication to the NiFi UI/API, including 
LDAP, Kerberos, client certificates, Apache Knox, and OpenID Connect. More 
details about configuring these options are available in the Admin Guide [1].

As for your TLS error, my guess is that there is an error with the 
certificate you generated. The error “No overlapping cipher suites available” 
can occur when the certificate is expired or otherwise invalid, and all the 
available cipher suites require an RSA key for signing or encryption. To 
further debug this, you can use the OpenSSL s_client tool to attempt to make a 
connection via the command line, and enable the JSSE SSL debugging via a 
modification to bootstrap.conf. Once you restart, you should see additional 
TLS/SSL debug output in logs/nifi-bootstrap.conf.

For us to be able to offer further advice, you’ll need to provide more 
information, like stacktraces from your logs, or the openssl output from 
examining the certificates. Images do not come through on the list, so please 
copy and paste text output instead. There are other possible explanations, such 
as OS-limited cipher suites available, older browser versions, etc. but these 
are much less common.

Add this line to bootstrap.conf:

java.arg.15=-Djavax.net.debug=ssl,handshake

[1] 
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#user_authentication


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

On May 22, 2018, at 5:53 AM, Brajendra Mishra 
> wrote:

Team I need to know the implementation of basic authentication with HTTPS 
as well.

Brajendra Mishra
Persistent Systems Ltd.

From: Brajendra Mishra 
>
Sent: Tuesday, May 22, 2018 6:22 PM
To: dev@nifi.apache.org
Subject: How to run NiFi on HTTPS

Hi Team,

I have used tlstoolkit to create required files (nifi.properties, keystore 
and truststore files) to run NiFi on HTTPS.
I also configured successfully and ran the NiFi service correctly which 
show it is running on Https protocol.
But once I tried to see its UI I am facing following error on all browsers 
(IE, Firefox and Chrome):

"Secure Connection Failed - An error occurred during a connection to 
localhost:9090. Cannot communicate securely with peer: no common encryption 
algorithm(s). Error code: SSL_ERROR_NO_CYPHER_OVERLAP"


Could you please let me know how can I see NiFi UI in this case? I have 
already tried all possible options (spread on internet) to get rid this issue 
on browsers but no luc


Brajendra Mishra
Persistent Systems Ltd.

DISCLAIMER
==
This e-mail may contain privileged and confidential information which is 
the property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.






Decommissioning nifi node from cluster

2018-05-15 Thread Milan Das
Hi All,

I have a 5-node NiFi cluster running in a client environment. We have decided to keep 
running with 4 nodes by taking one down.

NIFI is using embedded zookeeper.

The question is: how can we take one of the NiFi nodes out of the cluster? I can simply 
stop and remove it, but I want to make sure all in-flight transactions are processed 
and not lost (they should be processed by another cluster member).

 

Is the correct process to use “remove” via the node manager? Will that process 
preserve all in-flight transactions?

 

Thanks,

 

Milan Das
Sr. System Architect
email: m...@interset.com
mobile: +1 678 216 5660
www.interset.com

 

 



Re: published by PublishKafkaRecord_0_10 doesn't embed schema.

2018-03-26 Thread Milan Das
Hi Bryan,
We are using NIFI 1.4.0. Can we backport this fix to NIFI 1.4?

Thanks,
Milan Das

On 3/26/18, 11:26 AM, "Bryan Bende" <bbe...@gmail.com> wrote:

Hello,

What version of NiFi are you using?

This should be fixed in 1.5.0:

https://issues.apache.org/jira/browse/NIFI-4639

Thanks,

Bryan


On Sun, Mar 25, 2018 at 6:45 PM, Milan Das <m...@interset.com> wrote:
> Hello Nifi Users,
>
> Apparently, it seems like PublishKafkaRecord_0_10 doesn't embed schema 
even if it Avro Record writer is configured with “Embed Avro Schema”.
>
> I have seen the following post from Bryan Brende.  Wondering if it is a 
known issue or if I am missing anything here.
>
>
>
> 
https://community.hortonworks.com/questions/110652/cant-consume-avro-messages-whcih-are-published-by.html
>
>
>
>
>
> This is how message looks in Kafka-console-consumer, when published using 
“PublishKafkaRecord_0_11”
>
>
>
>
>
>  $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
--topic test --from-beginning
>
>
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$�zpA
>
>   
  �=h*�p��l
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$+�=�ت��;�.Y7
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$�D�p��"B��
   r0
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$ekLl�;]�,Y�͙�
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$�z��klŤ�1�'�z�
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$���ξu��5�V}>�_
>
> 
�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$��=%��VbK�
>
> 
��'~���X�controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$��0
>
>
>
>
>
> When I publish the message using my java class output from console 
consumer prints the avro schema.
>
>
>
> 
Objavro.schema�P{"type":"record","name":"ActiveDirectoryRecord","namespace":"com..schema","doc":"for
 more info, refer to 
http://docs.splunk.com/Documentation/CIM/4.2.0/User/Resource","fields":[{"name":"action","type":"string","doc":"The
 action performed on the resource."},{"name":"dest","type":"string","doc":"The 
target involved in the authentication. May be aliased from more specific 
fields, such as dest_host, dest_ip, or 
dest_nt_host."},{"name":"signature_id","type":"int","doc":"Description of the 
change performed (integer)"},{"name":"time","type":"string","doc":"ISO 8601 
timestamp of the eventl <TRUNCATED…..  
>,{"name":"privileges","type":["null",{"type":"array","items":"string"}],"doc":"The
 list of privileges associated with a Privilege Escalation 
event","default":null},{"name":"subcode","type":["null","string"],"doc":"The 
error subcode for 
auth;��d��g%�z�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00usedid1Pz��
>
>
>
>
>
> Regards,
>
> Milan Das
>
>





published by PublishKafkaRecord_0_10 doesn't embed schema.

2018-03-25 Thread Milan Das
Hello Nifi Users,

It seems that PublishKafkaRecord_0_10 doesn't embed the schema even when its Avro 
Record writer is configured with “Embed Avro Schema”.

I have seen the following post from Bryan Bende. I am wondering if this is a known 
issue or if I am missing something here.

 

https://community.hortonworks.com/questions/110652/cant-consume-avro-messages-whcih-are-published-by.html

 

 

This is how message looks in Kafka-console-consumer, when published using 
“PublishKafkaRecord_0_11”

 

 

 $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test 
--from-beginning

 

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$�zpA

    
�=h*�p��l

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$+�=�ت��;�.Y7

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$�D�p��"B��
   r0

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$ekLl�;]�,Y�͙�

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$�z��klŤ�1�'�z�

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$���ξu��5�V}>�_

�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$��=%��VbK�

��'~���X�controller1.ad.interset.com�H22018-03-15T09:07:04-04:00CONTROLLER1$��0

 

 

When I publish the message using my java class output from console consumer 
prints the avro schema.

 

Objavro.schema�P{"type":"record","name":"ActiveDirectoryRecord","namespace":"com..schema","doc":"for
 more info, refer to 
http://docs.splunk.com/Documentation/CIM/4.2.0/User/Resource","fields":[{"name":"action","type":"string","doc":"The
 action performed on the resource."},{"name":"dest","type":"string","doc":"The 
target involved in the authentication. May be aliased from more specific 
fields, such as dest_host, dest_ip, or 
dest_nt_host."},{"name":"signature_id","type":"int","doc":"Description of the 
change performed (integer)"},{"name":"time","type":"string","doc":"ISO 8601 
timestamp of the eventl <TRUNCATED…..  
>,{"name":"privileges","type":["null",{"type":"array","items":"string"}],"doc":"The
 list of privileges associated with a Privilege Escalation 
event","default":null},{"name":"subcode","type":["null","string"],"doc":"The 
error subcode for 
auth;��d��g%�z�SUCCESS6controller1.ad.interset.com�H22018-03-15T09:07:04-04:00usedid1Pz��

 

 

Regards,

Milan Das




Re: Finding Performance bottleneck issue

2018-03-17 Thread Milan Das
Thanks for your help Mike Thomsen & Joe Witt.
The performance problem was in the way I was reading each line from the flow file 
input. My logic was: io.InputStream -> io.BufferedReader -> a while reader.readLine() loop.
Now I am using the Guava library.
I have changed the code to use “String inputString = CharStreams.toString(new 
InputStreamReader(in, "UTF-8"));”
I gained a 20x performance improvement.


Regards,
Milan Das
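As a rough sketch of the change (JDK-only, since Guava's CharStreams.toString(new InputStreamReader(in, "UTF-8")) is equivalent to the single bulk read below; the class and method names are just for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BulkRead {
    // Instead of looping over BufferedReader.readLine(), read the whole
    // stream in one pass and split/process the resulting String afterwards.
    static String readAll(InputStream in) throws IOException {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        String event = "Security ID: %1\nAccount Name: %2\n";
        InputStream in = new ByteArrayInputStream(event.getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(in).equals(event)); // prints: true
    }
}
```

The win comes from replacing many small per-line reads with one buffered bulk read; inside a NiFi processor the InputStream would be the one handed to the session.read(...) callback.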





On 3/16/18, 12:07 PM, "Mike Thomsen" <mikerthom...@gmail.com> wrote:

That seems like a very reasonable use case. You said:

> I see that my processor is actually queuing up  records at source.

Are you saying that the processor isn't able to process them that quickly
such that you're seeing a big backlog in the input queue?

On Fri, Mar 16, 2018 at 11:56 AM, Milan Das <m...@interset.com> wrote:

> Hi Mike,
> My processor is processing windows Text event as below and creating a JSON
> out of it.
> Also I am applying simple JoltTransformer (Simple just Shift and Default)
> to convert to different  JSON (no hierarchy) .
>
> Output have the following:
> 1. Original text
> 2. Converted JSON
> 3. JOLT transformed JSON
> 4. Failure
>
>
> Steps in program:
> 1. Converting the event to Java Map (using regex: "([^:=]*)[:=]([^:=]*)")
> 2. Map to Json (using Gson)
> 3. Jolt transfeormation
>
>
>
> Example event:
>
> Examples of 4626
> User / Device claims information.
>
> Subject:
> Security ID: %1
> Account Name:%2
> Account Domain:  %3
> Logon ID:%4
>
> Logon Type:  %9
>
> New Logon:
> Security ID: %5
> Account Name:%6
> Account Domain:  %7
> Logon ID:%8
>
> Event in sequence:   %10 of %11
>
> User Claims: %12
>
> Device Claims:   %13
>
> The subject fields indicate the account on the local system which
> requested the logon. This is most commonly a service such as the Server
> service, or a local process such as Winlogon.exe or Services.exe.
>
> The logon type field indicates the kind of logon that occurred. The most
> common types are 2 (interactive) and 3 (network).
>
> The New Logon fields indicate the account for whom the new logon was
> created, i.e. the account that was logged on.
>
> This event is generated when the Audit User/Device claims subcategory is
> configured and the user’s logon token contains user/device claims
> information. The Logon ID field can be used to correlate this event with
> the corresponding user logon event as well as to any other security audit
> events generated during this logon session.
>
>
>
> Regards,
> Milan Das
>
>
> On 3/16/18, 10:56 AM, "Mike Thomsen" <mikerthom...@gmail.com> wrote:
>
> Milan,
>
> Can you share some details about where you are running into problems?
> Like
> a basic description of what it's trying to do?
>
> On Fri, Mar 16, 2018 at 10:39 AM, Milan Das <m...@interset.com> wrote:
>
> > I have a custom processor, it works as expected. But I feel there is
> some
>     > performance measure need to be done. I see that my processor is
> actually
> > queuing up  records at source.
> >
> > Is there a run a load  test and do performance measure for Custom
> > Processor?
> >
> >
> >
> > Regards,
> >
> > Milan Das
> >
> >
>
>
>
>





Re: Finding Performance bottleneck issue

2018-03-16 Thread Milan Das
Hi Mike,
My processor processes Windows text events like the one below and creates JSON 
out of them.
I am also applying a simple Jolt transform (just Shift and Default) to convert 
it to a different, flat JSON (no hierarchy).

Output have the following:
1. Original text
2. Converted JSON
3. JOLT transformed JSON
4. Failure


Steps in program:
1. Converting the event to Java Map (using regex: "([^:=]*)[:=]([^:=]*)")
2. Map to Json (using Gson)
3. Jolt transformation
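The first two steps above can be sketched with plain JDK classes; Gson is swapped out here for a hand-rolled JSON writer to keep the snippet dependency-free, and all names are illustrative rather than taken from the actual processor:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the parsing steps described above: split each
// "key: value" line of a Windows event into a map entry, then render a
// flat JSON object (the original uses Gson for this second step).
class EventParser {
    private static final Pattern KV = Pattern.compile("([^:=]*)[:=]([^:=]*)");

    static Map<String, String> parse(String event) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String line : event.split("\\r?\\n")) {
            Matcher m = KV.matcher(line);
            if (m.matches()) {
                fields.put(m.group(1).trim(), m.group(2).trim());
            }
        }
        return fields;
    }

    static String toJson(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 1) sb.append(",");
            sb.append('"').append(e.getKey()).append("\":\"")
              .append(e.getValue()).append('"');
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        String event = "Security ID: %1\nAccount Name: %2";
        // prints {"Security ID":"%1","Account Name":"%2"}
        System.out.println(toJson(parse(event)));
    }
}
```

Note that a hand-rolled writer like this does not escape quotes or backslashes in values, which is one reason the real processor uses Gson.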



Example event:

Examples of 4626
User / Device claims information.

Subject:
Security ID: %1
Account Name:%2
Account Domain:  %3
Logon ID:%4

Logon Type:  %9

New Logon:
Security ID: %5
Account Name:%6
Account Domain:  %7
Logon ID:%8

Event in sequence:   %10 of %11

User Claims: %12

Device Claims:   %13

The subject fields indicate the account on the local system which requested the 
logon. This is most commonly a service such as the Server service, or a local 
process such as Winlogon.exe or Services.exe.

The logon type field indicates the kind of logon that occurred. The most common 
types are 2 (interactive) and 3 (network).

The New Logon fields indicate the account for whom the new logon was created, 
i.e. the account that was logged on.

This event is generated when the Audit User/Device claims subcategory is 
configured and the user’s logon token contains user/device claims information. 
The Logon ID field can be used to correlate this event with the corresponding 
user logon event as well as to any other security audit events generated during 
this logon session.



Regards,
Milan Das


On 3/16/18, 10:56 AM, "Mike Thomsen" <mikerthom...@gmail.com> wrote:

Milan,

Can you share some details about where you are running into problems? Like
a basic description of what it's trying to do?

On Fri, Mar 16, 2018 at 10:39 AM, Milan Das <m...@interset.com> wrote:

> I have a custom processor, it works as expected. But I feel there is some
> performance measure need to be done. I see that my processor is actually
> queuing up  records at source.
>
> Is there a run a load  test and do performance measure for Custom
> Processor?
    >
>
>
> Regards,
>
> Milan Das
>
>





Finding Performance bottleneck issue

2018-03-16 Thread Milan Das
I have a custom processor and it works as expected, but I feel some performance 
measurement needs to be done: I see that my processor is actually 
queuing up records at the source.

Is there a way to run a load test and measure performance for a custom processor?
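One low-effort option is a unit-test micro-benchmark. This is only a sketch, assuming the nifi-mock TestRunner API and a hypothetical `MyProcessor` class; it measures onTrigger throughput in isolation, outside a running NiFi instance:

```java
// Micro-benchmark sketch using NiFi's nifi-mock TestRunner. "MyProcessor"
// and the payload are illustrative stand-ins, not real names.
TestRunner runner = TestRunners.newTestRunner(MyProcessor.class);
for (int i = 0; i < 10_000; i++) {
    runner.enqueue(("sample windows event " + i).getBytes(StandardCharsets.UTF_8));
}
long start = System.nanoTime();
runner.run(10_000);                            // one onTrigger call per flowfile
long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
System.out.println("events/sec ~ " + (10_000 * 1000L / Math.max(1, elapsedMs)));
```

This won't reflect cluster or repository overhead, but it isolates the per-event cost of the processor logic (e.g. regex compilation or Jolt transforms inside onTrigger), which is usually where source-side queuing originates.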

 

Regards,

Milan Das



Re: HiveConnectionPool URL with trustStorePasswrd

2018-03-14 Thread Milan Das
We are going to use a custom controller service and mark the URL as a sensitive 
property. The client doesn't want any password to be visible.

 

Regards,

Milan

 

From: Andy LoPresto <alopre...@apache.org>
Reply-To: <dev@nifi.apache.org>
Date: Monday, March 12, 2018 at 4:00 PM
To: <dev@nifi.apache.org>
Subject: Re: HiveConnectionPool URL with trustStorePasswrd

 

Milan,

 

I am also not aware of any way to use an encrypted value in the JDBC connection 
string. In my understanding, the truststore password is only used to verify the 
integrity of the truststore which is used locally (i.e. not transmitted) to 
accept the remote endpoint’s TLS certificate. 

 

You could probably write a custom controller service replacing 
HiveConnectionPool [1] which implemented HiveDBCPService and marked the 
connection string as a sensitive property, so it would be encrypted on disk by 
NiFi and not revealed over the API, but it might be difficult to use in this 
way because the entire connection string would be hidden in the UI. You could 
also theoretically have separate property descriptors for the connection string 
and truststore password and construct the connection string yourself 
internally, but this is probably overkill.  
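A minimal sketch of the sensitive-property approach described above, assuming the standard NiFi component API; the property name, description, and validator here are illustrative:

```java
// Marking the descriptor sensitive makes NiFi encrypt the value at rest
// and mask it in the UI and over the REST API. Names are illustrative.
public static final PropertyDescriptor CONNECTION_URL = new PropertyDescriptor.Builder()
        .name("database-connection-url")
        .displayName("Database Connection URL")
        .description("Full JDBC URL, including the truststore password")
        .required(true)
        .sensitive(true)   // encrypted on disk, masked over the API
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .build();
```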

 

[1] 
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.5.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html

 

 

Andy LoPresto

alopre...@apache.org

alopresto.apa...@gmail.com

PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

 

On Mar 12, 2018, at 12:27 PM, Pierre Villard <pierre.villard...@gmail.com> 
wrote:

 

Hi Milan,

As far as I know, there is not. It's the same when you connect with the
beeline client from a node.
Note that you can set the chmod/chown of the truststore file to be only
readable by the user running NiFi. It should help prevent unauthorized
access.

Pierre

2018-03-12 14:49 GMT+01:00 Milan Das <m...@interset.com>:



Hello folks,

I am connecting to Hive environment with TLS security on. In order to do
that need to send trustStorePasswrd  in Hive2 URL . As the configuration is
in controller services, not able to find a way to set the
trustStorePassword in encrypted format.

Wondering if there is a way to set trustStorePassword in encrypted format ?



Database ConnectionUrl: jdbc:hive2://ip-xxx-xx-x-xxx.
ec2.internal:1/default;principal=hive/_h...@co.acme.com
;ssl=true;sslTrustStore=/etc/hadoop/ssl/truststore.jks;
trustStorePassword=password



Regards,



[image: graph]

*Milan Das*
Sr. System Architect

email: m...@interset.com
mobile: +1 678 216 5660 <(678)%20216-5660>

[image: dIn icon] <https://www.linkedin.com/in/milandas/>

www.interset.com

 



HiveConnectionPool URL with trustStorePasswrd

2018-03-12 Thread Milan Das
Hello folks,

I am connecting to a Hive environment with TLS security on. In order to do that I 
need to send the trustStorePassword in the Hive2 URL. As the configuration is in 
controller services, I am not able to find a way to set the trustStorePassword in 
encrypted format.

Is there a way to set the trustStorePassword in encrypted format?

 

Database ConnectionUrl: 
jdbc:hive2://ip-xxx-xx-x-xxx.ec2.internal:1/default;principal=hive/_h...@co.acme.com;ssl=true;sslTrustStore=/etc/hadoop/ssl/truststore.jks;trustStorePassword=password

 

Regards,

 

Milan Das
Sr. System Architect
email: m...@interset.com
mobile: +1 678 216 5660
www.interset.com

 

 



Re: Hive backward compatibily

2018-03-06 Thread Milan Das
Daniel,
This helps. I am able to compile with the CDH libraries. 
Thanks a lot.
~ Milan

On 3/6/18, 4:08 AM, "Daniel Chaffelson" <chaffel...@gmail.com> wrote:

Hi Milan,
Apologies for a link as an answer, but I'm currently traveling and
responding from mobile.

I worked through a similar problem in NiFi-1.1 and shared details of my
method which hopefully should help you.


https://stackoverflow.com/questions/39200903/apache-nifi-hive-processors-with-hive-1-1-cdh-5-7-1

Cheers,
Dan.

On Mon, 5 Mar 2018, 18:21 Matt Burgess, <mattyb...@apache.org> wrote:

> Milan,
>
> Unfortunately no, without code changes the Hive processors are not
> compatible with Hive 1.1.x.  There were API changes both to the Java code
> and the Thrift code (the latter adds the client_protocol field you
> mentioned), so you can't change the version of the Hive dependencies and
> get the NAR to compile. You'd have to make the code changes yourself and
> build with the appropriate version of Hive. There are some vendor-specific
> profiles (hortonworks, cloudera, mapr) that you can use when specifying
> vendor-specific Hive versions.
>
> Regards,
> Matt
>
>
> On Mon, Mar 5, 2018 at 12:29 PM, Milan Das <m...@interset.com> wrote:
>
> > Hello NIFI Dev,
> >
> > I am trying to use SelectHiveQL with Hive version hive-1.1.0 (it is part
> > of the Cloudera 5.13.1)
> >
> > https://www.cloudera.com/documentation/enterprise/
> > release-notes/topics/cdh_vd_cdh_package_tarball_513.html#tarball_5131
> >
> >
> >
> > Seems like because SelectHiveQL uses hive-1.2.1 client library I am
> > getting following error. My NIFI version is 1.4.0
> >
> >
> >
> > Caused by: java.sql.SQLException: Could not establish connection to
> > jdbc:hive2://ip-NNN-NN-N-NNN.ec2.internal:1/default;
> > principal=hive/_h...@.interset.com: Required field 'client_protocol' is
> > unset! Struct:TOpenSessionReq(client_protocol:null,
> > configuration:{use:database=default})
> >
> > at org.apache.hive.jdbc.HiveConnection.openSession(
> > HiveConnection.java:594)
> >
> > at org.apache.hive.jdbc.HiveConnection.<init>(
> > HiveConnection.java:192)
> >
> >
> >
> > hive.version-> pom.xml
> >
> > https://github.com/apache/nifi/blob/rel/nifi-1.4.0/pom.xml#L105
> >
> >
> >
> >
> >
> >
> >
> > Wondering if there is a way for backward compatibility.
> >
> >
> >
> > Regards,
> >
> >
> >
> > [image: graph]
> >
> > *Milan Das*
> > Sr. System Architect
> >
> > email: m...@interset.com
> > mobile: +1 678 216 5660
> >
> > [image: dIn icon] <https://www.linkedin.com/in/milandas/>
> >
> > www.interset.com
> >
> >
> >
> >
> >
> >
> >
> >
> >
>





Hive backward compatibily

2018-03-05 Thread Milan Das
Hello NIFI Dev,

I am trying to use SelectHiveQL with Hive version hive-1.1.0 (it is part of the 
Cloudera 5.13.1)

https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_package_tarball_513.html#tarball_5131

 

It seems that, because SelectHiveQL uses the hive-1.2.1 client library, I am getting 
the following error. My NiFi version is 1.4.0.

 

Caused by: java.sql.SQLException: Could not establish connection to 
jdbc:hive2://ip-NNN-NN-N-NNN.ec2.internal:1/default;principal=hive/_h...@.interset.com:
 Required field 'client_protocol' is unset! 
Struct:TOpenSessionReq(client_protocol:null, 
configuration:{use:database=default})

    at 
org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:594)

    at 
org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192)

 

hive.version-> pom.xml

https://github.com/apache/nifi/blob/rel/nifi-1.4.0/pom.xml#L105

 

 

 

I am wondering if there is a way to achieve backward compatibility. 

 

Regards,

 

Milan Das
Sr. System Architect
email: m...@interset.com
mobile: +1 678 216 5660
www.interset.com

 

 

 

 



Re: Will you accept contributions in Scala?

2018-02-13 Thread Milan Das
I think we should not blindly add just any language, but we should be open to adding a 
couple of languages like Scala.
In the big-data world, Scala/Python/Java are widely accepted.

Thanks,
Milan Das
Interset 

On 2/13/18, 10:20 AM, "Weiss, Adam" <adam.we...@perkinelmer.com> wrote:

I think it makes the most sense to me for us to publish a separate repo 
with a module and nar build for now and post when it's available in the users 
group.

Thanks for the discussion everyone, hopefully we can start making some 
helpful contributions soon.

-Adam


On 2018/02/10 23:43:31, Tony Kurc <t...@gmail.com<mailto:t...@gmail.com>> 
wrote:
> It is like Matt read my mind.>
>
> On Sat, Feb 10, 2018 at 6:26 PM, Matt Burgess 
<ma...@apache.org<mailto:ma...@apache.org>> wrote:>
>
> > I'm fine with a vote, but I'll be voting to keep Java as the single>
> > language for the (non-test) code. I share the same concerns as many of>
> > the other folks as far as accepting other languages, it's mainly the>
> > "slippery slope" argument that I don't want to turn into a>
> > JVM-language flame war.  If Scala, why not Groovy? Certainly the>
> > syntax is closer to Java, and the community has accepted it as a valid>
> > language for writing unit tests, although we stopped short for>
> > allowing it for the deployable NiFi codebase, for the same reasons>
> > IIRC.  If Scala and/or Groovy, why not Kotlin?  The same argument>
> > (albeit more tenuous) goes for Clojure and just about every other JVM>
> > language (although I don't expect a call for LuaJ processors lol).>
> >>
> > Whether we decide to support various languages ad-hoc or not, I would>
> > strenuously object to multiple/hybrid build systems for the deployed>
> > artifacts. If I could switch NiFi completely to Gradle I would, but I>
> > realize there are good reasons for not doing so (yet?) in the Apache>
> > NiFi community, and I would never want any hybrid Maven/Gradle build>
> > for the deployable code, likewise for SBT, Leiningen, etc. With a>
> > custom Mojo for Maven NAR builds, and the complexity for hybrid builds>
> > in general, I think this would create a maintenance nightmare.>
> >>
> > The language thing is a tough decision though, it's not awesome that>
> > specifying a single language can be a barrier to a more diverse>
> > community, certainly Scala-based bundles would be more than welcome in>
> > the overall NiFi ecosystem, I just think the cons outweigh the pros>
> > for the baseline code. I've written Groovy processors/NARs using>
> > Gradle as the build system, and I'm good with keeping them in my own>
> > repo, especially when the Extension Registry becomes a thing. I can>
> > see the Extension Registry perhaps making this a moot point, but>
> > clearly we need to have this discussion in the meantime.>
> >>
> > Regards,>
> > Matt>
> >>
> >>
> > On Sat, Feb 10, 2018 at 5:23 PM, Andrew Grande 
<ap...@gmail.com<mailto:ap...@gmail.com>> wrote:>
> > > Wasn't there a warning trigger about the NiFi distro size from Apache>
> > > recently? IMO, before talking alternative languages, solve the 
modularity>
> > > and NAR distribution problem. I think the implementation of a module>
> > won't>
> > > matter much then, the point being not everything has to go in the 
core,>
> > > base distribution, but can still be easily sourced from a known repo, 
for>
> > > example.>
> > >>
> > > I have a feeling NiFi 1.6+ can be approaching 2GB distro size soon :)>
> > >>
> > > Andrew>
> > >>
> > > On Sat, Feb 10, 2018, 5:12 PM Joey Frazee 
<jo...@icloud.com<mailto:jo...@icloud.com>>>
> > wrote:>
> > >>
> > >> This probably necessitates a vote, yeah?>
> > >>>
> > >> Frankly, I’m usually happier writing Scala, and I’ve not encountered 
any>
> > >> problems using processors written in Scala, but I think it’ll be>
> > important>
> > >> to tread lightly.>
> > >>>
> > >> There’s a few things that pop into my head:>
> > >>>
> > >> - Maintainability and reviewability. A very very good Java developer

Nifi "event driven" scheduling strategy Custom Processor

2017-12-27 Thread Milan Das
I am trying to build a custom processor with the “Event Driven” scheduling strategy. I added the 
“@EventDriven” annotation, and EventDriven now shows up in the scheduling strategy.

Question: how do I send an event to this processor to initiate the flow? What are 
my options?
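As far as I understand the framework (a hedged sketch, not an authoritative answer), the "event" for an @EventDriven processor is simply a FlowFile arriving on one of its incoming connections: the framework schedules the processor when upstream data appears, provided the event-driven thread pool is sized above zero in the controller settings. A minimal skeleton, with illustrative class and relationship names:

```java
// Sketch of an event-driven processor skeleton, assuming the standard
// NiFi processor API. There is no timer: onTrigger runs when an upstream
// flowfile arrives on an incoming connection.
@EventDriven
public class MyEventDrivenProcessor extends AbstractProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .build();

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return; // nothing to do until an upstream flowfile arrives
        }
        // ... process the event here ...
        session.transfer(flowFile, REL_SUCCESS);
    }
}
```

In other words, to initiate the flow you connect an upstream source (e.g. a listener or fetch processor) to this processor; the incoming flowfile is the event.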

 

Regards,

 

Milan Das
Sr. System Architect
email: m...@interset.com

www.interset.com

 

 

 



Re: DBCP Connection Pooling using multiple OJDBC Drivers

2017-12-27 Thread Milan Das
Hi Nadeem,
I think you can do something very similar to the Kafka implementation. For different 
versions, NiFi has two different processors, for Kafka 0.10 and 0.11: “PublishKafka_0_10” and 
“PublishKafka_0_11”.

Regards,


Milan Das
Sr. System Architect
email: m...@interset.com
<https://www.linkedin.com/in/milandas/>
www.interset.com <http://www.interset.com/>
 


On 12/27/17, 7:08 AM, "Mohammed Nadeem" <nadeemm...@gmail.com> wrote:

I'm building a custom processor where i need to execute PL/SQL Procedures
with the help of DBCP Connection Pooling Controller Service. The custom
processor which executes PL/SQL Procedures needs to connect to different
Oracle Databases like 11g and Oracle 8i. 

The Problem i'm facing here is that these oracle databases needs different
ojdbc jars . For example Oracle 11g needs ojdbc7.jar and Oracle 8i needs
ojdbc14.jar . The Custom processor needs ojdbc7.jar as maven dependency to
execute complex Oracle jdbc types such as ARRAY ,STRUCT etc.  When I load
two dbcp controller services which uses different ojdbc.jar's for the same
custom processor it is working for one oracle database but not for other.

Detail Description.

If I connect to Oracle Database 11g, where I give the driver location as
ojdbc7.jar in the DBCP controller service, then it's throwing an error saying "
java.sql.Exception : can't wrapped connection to requested interface".
To resolve this issue i added ojdbc7.jar in nifi lib folder and the error
went.

Now, when i connect to Oracle 8i with ojdbc14.jar in dbcp controller
service.. It is throwing an error saying " ArrayOutOfBound Exception 7" . I
guess it is trying to use incompatible jar which was given in the lib folder
(ojdbc7.jar) . If I add ojdbc14.jar in the lib then earlier one is not
working giving same error "  java.sql.Exception : can't wrapped connection
to requested interface".

Could you please help me out there.. Not Sure how nifi classloader works ..

Thanks  in advance


Regards,
Nadeem

 



--
Sent from: http://apache-nifi-developer-list.39713.n7.nabble.com/





NIFI-4715 : ListS3 list duplicate files when incoming file throughput to S3 is high

2017-12-22 Thread Milan Das
I have logged a defect in NiFi: ListS3 is generating duplicate flowfiles when the S3 
throughput is high.

 

Root cause: 
The issue occurs when a file is uploaded to S3 at the same time a ListS3 run is in progress.
In onTrigger, maxTimestamp is initialized to 0L.
This clears the listed keys, as per the code below.

When the lastModifiedTime on an S3 object is the same as currentTimestamp for an 
already-listed key, it should be skipped. Because the key set is cleared, the same file is 
loaded again. 
I think the fix should be to initialize maxTimestamp with currentTimestamp, not 0L.
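A simplified, runnable model of that skip logic (not the actual ListS3 source; all names are illustrative) shows why seeding maxTimestamp with 0L re-lists a key whose lastModified equals currentTimestamp, while seeding with currentTimestamp skips it:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the listing state described above, not NiFi's real code.
class ListingState {
    long currentTimestamp;                     // timestamp of the previous listing
    Set<String> currentKeys = new HashSet<>(); // keys already listed at that timestamp

    ListingState(long ts) { currentTimestamp = ts; }

    // Returns true if the object would be emitted as a new flowfile.
    boolean shouldList(String key, long lastModified, long maxTimestampSeed) {
        long maxTimestamp = maxTimestampSeed;
        if (lastModified > maxTimestamp) {
            maxTimestamp = lastModified;
            // With a 0L seed this branch fires even for an unchanged
            // timestamp, wiping the keys that would prevent a duplicate.
            currentKeys.clear();
        }
        if (lastModified == currentTimestamp && currentKeys.contains(key)) {
            return false;                      // already listed in the prior cycle
        }
        currentKeys.add(key);
        currentTimestamp = maxTimestamp;
        return true;
    }

    public static void main(String[] args) {
        ListingState buggy = new ListingState(100);
        buggy.currentKeys.add("bucket/file.txt");
        System.out.println(buggy.shouldList("bucket/file.txt", 100, 0L));   // prints true (duplicate)

        ListingState fixed = new ListingState(100);
        fixed.currentKeys.add("bucket/file.txt");
        System.out.println(fixed.shouldList("bucket/file.txt", 100, 100L)); // prints false (skipped)
    }
}
```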

 

 

 

https://issues.apache.org/jira/browse/NIFI-4715

 

The fix I made already seems OK and is working for us.

long maxTimestamp = currentTimestamp;

 

I wanted to check thoughts from other experts, or whether there is any other known fix.

 

 

Regards,

 

Milan Das
Sr. System Architect
email: m...@interset.com
mobile: +1 678 216 5660
www.interset.com

 

 



Please add me to nifi mailing list

2017-12-19 Thread Milan Das
Hello All,
I would like to start contributing to the NiFi project. Please add me and guide
me to be a part of the team.
My introduction is quick: I work with Interset we are heavily using NIFI.

I have started by creating a bug, NIFI-4715, and I also have the fix.

Regards,
Milan Das