Re: NiFi copying windows .part files

2017-12-05 Thread Joe Witt
Imagine a filename construct where you wanted to pick up any file that
begins with the phrase 'start' but does NOT end in the phrase 'part'.

The name is of the form 'begin.middle.end'.

This filename start.middle.ok would get picked up.

This filename start.middle.part would not.

The pattern for that example would be
  start\..+\.(?!part)

The key part of that is the negative lookahead, which ensures the name does
not end in 'part'.
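
As a rough illustration (not from the original thread), here is how one might
sanity-check such a pattern with java.util.regex; the trailing .+ is an added
assumption so the pattern can consume the rest of the name when the filter
requires a full match:

  import java.util.regex.Pattern;

  public class FileFilterCheck {
      public static void main(String[] args) {
          // Hypothetical filter: name starts with "start" and its last
          // dot-separated segment is not "part".
          Pattern filter = Pattern.compile("start\\..+\\.(?!part$).+");

          System.out.println(filter.matcher("start.middle.ok").matches());   // true
          System.out.println(filter.matcher("start.middle.part").matches()); // false
      }
  }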

Thanks

On Wed, Dec 6, 2017 at 2:29 AM, Ravi Papisetti (rpapiset)
 wrote:
> Yeah, that is a good idea, but we are already using this option to copy files 
> with a certain prefix. Not sure how I can use this field to meet both the 
> exclusion and inclusion criteria.
>
> Any thoughts?
>
> Thanks,
> Ravi Papisetti
>
> On 06/12/17, 1:26 AM, "Joe Witt"  wrote:
>
> Ravi
>
> Please use the 'File Filter' property of ListFile to control ignoring
> filenames until they no longer end in 'part'.
>
> Thanks
>
> On Wed, Dec 6, 2017 at 2:14 AM, Ravi Papisetti (rpapiset)
>  wrote:
> > Hi,
> >
> >
> >
> > We are using Apache NiFi 1.3.0
> >
> >
> >
> > We have a process flow to copy files from NFS to HDFS (with processors
> > ListFile, FetchFile and PutHDFS)
> >
> >
> >
> > In the NiFi process flow, ListFile is configured to listen to a 
> directory on
> > NFS. When a file (ex: x.csv) is being copied from a windows machine to 
> NFS
> > (while transfer is in the middle), a part file(x.csv.part) is created 
> at NFS
> > until transfer is complete.
> >
> >
> >
> > ListFile has picked up this x.csv.part file and fetchFile picked up 
> this to
> > transfer to HDFS, didn’t update the file name back to x.csv in HDFS when
> > transfer is complete.
> >
> >
> >
> > But, in case a file from linux file system, while file copy to NFS is in
> > progress it created (.x.csv) and when transfer is complete, at both NFS 
> and
> > HDFS, filename is updated to x.csv (from .x.csv).
> >
> >
> >
> > Any thought how we can configure ListFile not to pickup these part 
> files or
> > any configurations in NiFi that fixes file names for these windows part
> > files?
> >
> >
> >
> > Appreciate your help.
> >
> >
> >
> > Thanks,
> >
> > Ravi Papisetti
>
>


Re: NiFi copying windows .part files

2017-12-05 Thread Joe Witt
Ravi

Please use the 'File Filter' property of ListFile so that filenames are
ignored until they no longer end in 'part'.
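
For example (an untested sketch, assuming the File Filter regex must match the
whole filename, as with the default value [^\.].*), something along these
lines would skip both dot-prefixed temp files and names ending in '.part':

  ^(?!\.)(?!.*\.part$).*$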

Thanks

On Wed, Dec 6, 2017 at 2:14 AM, Ravi Papisetti (rpapiset)
 wrote:
> Hi,
>
>
>
> We are using Apache NiFi 1.3.0
>
>
>
> We have a process flow to copy files from NFS to HDFS (with processors
> ListFile, FetchFile and PutHDFS)
>
>
>
> In the NiFi process flow, ListFile is configured to listen to a directory on
> NFS. When a file (ex: x.csv) is being copied from a windows machine to NFS
> (while transfer is in the middle), a part file(x.csv.part) is created at NFS
> until transfer is complete.
>
>
>
> ListFile has picked up this x.csv.part file and fetchFile picked up this to
> transfer to HDFS, didn’t update the file name back to x.csv in HDFS when
> transfer is complete.
>
>
>
> But, in case a file from linux file system, while file copy to NFS is in
> progress it created (.x.csv) and when transfer is complete, at both NFS and
> HDFS, filename is updated to x.csv (from .x.csv).
>
>
>
> Any thought how we can configure ListFile not to pickup these part files or
> any configurations in NiFi that fixes file names for these windows part
> files?
>
>
>
> Appreciate your help.
>
>
>
> Thanks,
>
> Ravi Papisetti


NiFi copying windows .part files

2017-12-05 Thread Ravi Papisetti (rpapiset)
Hi,

We are using Apache NiFi 1.3.0

We have a process flow to copy files from NFS to HDFS (with processors 
ListFile, FetchFile and PutHDFS)

In the NiFi process flow, ListFile is configured to listen to a directory on 
NFS. When a file (e.g. x.csv) is being copied from a Windows machine to NFS 
(while the transfer is in progress), a part file (x.csv.part) is created on NFS 
until the transfer is complete.

ListFile picked up this x.csv.part file and FetchFile picked it up to transfer 
to HDFS, but the file name was not updated back to x.csv in HDFS when the 
transfer completed.

But in the case of a file copied from a Linux file system, a hidden file 
(.x.csv) is created while the copy to NFS is in progress, and when the transfer 
is complete the filename is updated to x.csv (from .x.csv) on both NFS and HDFS.

Any thoughts on how we can configure ListFile not to pick up these part files, 
or any configuration in NiFi that fixes the file names for these Windows part 
files?

Appreciate your help.

Thanks,
Ravi Papisetti


Re: unable to start InvokeHTTP processor in secure Nifi 1.4.0 cluster....

2017-12-05 Thread Josh Anderton
Hi Dan/Joe,

I have encountered the same issue, and after a bit of digging it appears that
during the update to OkHttp3 a bug was introduced in the setSslFactoryMethod.
The issue is that the method attempts to prepare a keystore even if no
keystore properties are defined in the SSLContextFactory. The exception is
thrown around line 571 of InvokeHTTP, where a keystore is initialized without
a keystore type.

The good news is that there appears to be an easy workaround (not fully tested
yet): define a keystore in your SSLContextFactory. You can even use the same
properties already defined for your truststore, and I believe your processor
will start working.
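
As a minimal sketch of that workaround (property names assume the
StandardSSLContextService controller service; paths and passwords are
placeholders), reusing the truststore material in the keystore slots would
look like:

  Keystore Filename:   /opt/nifi/certs/truststore.jks
  Keystore Password:   <same password as the truststore>
  Keystore Type:       JKS
  Truststore Filename: /opt/nifi/certs/truststore.jks
  Truststore Password: <same password as the truststore>
  Truststore Type:     JKS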

Please let me know if I have misdiagnosed or if there are issues with the
workaround.

Thanks,
Josh

On Tue, Dec 5, 2017 at 9:42 AM, dan young  wrote:

> Hello Joe,
>
> Here's the JIRA. LMK if you need additional details.
>
> https://issues.apache.org/jira/browse/NIFI-4655
>
> Regards,
>
> Dano
>
> On Mon, Dec 4, 2017 at 10:46 AM Joe Witt  wrote:
>
>> Dan
>>
>> Please share as much of your config for the processor as you can.
>> Also, please file a JIRA for this.  There is definitely a bug that
>> needs to be addressed if you can make an NPE happen.
>>
>> Thanks
>>
>> On Mon, Dec 4, 2017 at 12:27 PM, dan young  wrote:
>> > Hello,
>> >
>> >
>> > I'm working on migrating some flows over to a secure cluster with OIDC.
>> When
>> > I try to start an InvokeHTTP processor, I'm getting the following
>> errors in
>> > the logs.  Is there some permission/policy that I need to set for this
>> to
>> > work?  or is this something else?
>> >
>> >
>> > Nifi 1.4.0
>> >
>> >
>> > 2017-12-04 17:20:03,972 ERROR [StandardProcessScheduler Thread-8]
>> > o.a.nifi.processors.standard.InvokeHTTP
>> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5]
>> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5] failed to invoke
>> > @OnScheduled method due to java.lang.RuntimeException: Failed while
>> > executing one of processor's OnScheduled task.; processor will not be
>> > scheduled to run for 30 seconds: java.lang.RuntimeException: Failed
>> while
>> > executing one of processor's OnScheduled task.
>> >
>> > java.lang.RuntimeException: Failed while executing one of processor's
>> > OnScheduled task.
>> >
>> > at
>> > org.apache.nifi.controller.StandardProcessorNode.invokeTaskA
>> sCancelableFuture(StandardProcessorNode.java:1483)
>> >
>> > at
>> > org.apache.nifi.controller.StandardProcessorNode.access$000(
>> StandardProcessorNode.java:103)
>> >
>> > at
>> > org.apache.nifi.controller.StandardProcessorNode$1.run(Stand
>> ardProcessorNode.java:1302)
>> >
>> > at
>> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> >
>> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> >
>> > at
>> > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFu
>> tureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>> >
>> > at
>> > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFu
>> tureTask.run(ScheduledThreadPoolExecutor.java:293)
>> >
>> > at
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1149)
>> >
>> > at
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:624)
>> >
>> > at java.lang.Thread.run(Thread.java:748)
>> >
>> > Caused by: java.util.concurrent.ExecutionException:
>> > java.lang.reflect.InvocationTargetException
>> >
>> > at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>> >
>> > at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>> >
>> > at
>> > org.apache.nifi.controller.StandardProcessorNode.invokeTaskA
>> sCancelableFuture(StandardProcessorNode.java:1466)
>> >
>> > ... 9 common frames omitted
>> >
>> > Caused by: java.lang.reflect.InvocationTargetException: null
>> >
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >
>> > at
>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce
>> ssorImpl.java:62)
>> >
>> > at
>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
>> thodAccessorImpl.java:43)
>> >
>> > at java.lang.reflect.Method.invoke(Method.java:498)
>> >
>> > at
>> > org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnota
>> tions(ReflectionUtils.java:137)
>> >
>> > at
>> > org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnota
>> tions(ReflectionUtils.java:125)
>> >
>> > at
>> > org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnota
>> tions(ReflectionUtils.java:70)
>> >
>> > at
>> > org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnota
>> tion(ReflectionUtils.java:47)
>> >
>> > at
>> > org.apache.nifi.controller.StandardProcessorNode$1$1.call(
>> 

Re: PutParquet with S3

2017-12-05 Thread Bryan Bende
Take a look at the MergeRecord processor; you can use it before PutParquet to
create appropriately sized files.
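
For example (a rough sketch only; exact values depend on your record sizes and
how long you can afford to wait for a bin to fill), MergeRecord might be
configured with something like:

  Merge Strategy:   Bin-Packing Algorithm
  Minimum Bin Size: 512 MB
  Maximum Bin Size: 1 GB
  Max Bin Age:      10 min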

On Tue, Dec 5, 2017 at 10:36 PM Madhukar Thota 
wrote:

> Thanks Joey,
>
> It worked. Do you know how to control the parquet file size when it writes
> to S3. I see lot of small files to s3. Is it possible to right either 512mb
> or 1GB size file?
>
>
> On Tue, Dec 5, 2017 at 8:57 PM, Joey Frazee 
> wrote:
>
>> PutParquet doesn't have the AWS S3 SDK included in it itself but it
>> provides an "Additional Classpath Resources" property that you need to
>> point at a directory with all the S3 dependencies. I just tested this the
>> other day with the following jars:
>>
>> aws-java-sdk-1.7.4.jar
>> hadoop-aws-2.7.3.jar
>> hadoop-common-2.7.3.jar
>> httpclient-4.5.3.jar
>> httpcore-4.4.4.jar
>> jackson-annotations-2.6.0.jar
>> jackson-core-2.6.1.jar
>> jackson-databind-2.6.1.jar
>>
>> So just grab those from maven central and you should be good to go.
>>
>> -joey
>>
>> On Dec 5, 2017, 6:53 PM -0600, Madhukar Thota ,
>> wrote:
>>
>> Hi
>>
>> Is it possible to use PutParquet processor to write files into S3? I
>> tried by setting s3 bucket in core-site.xml file but i am getting *No
>> FileSystem for scheme: s3a*
>>
>> *core-site.xml*
>>
>> 
>> 
>> 
>>
>> 
>>
>> 
>> 
>> fs.defaultFS
>> s3a://testing
>> 
>> 
>> fs.s3a.access.key
>> 
>> 
>> 
>> fs.s3a.secret.key
>> xxx
>> 
>> 
>>
>>
> --
Sent from Gmail Mobile


Re: PutParquet with S3

2017-12-05 Thread Madhukar Thota
Thanks Joey,

It worked. Do you know how to control the Parquet file size when it writes
to S3? I see a lot of small files in S3. Is it possible to write either 512 MB
or 1 GB files?


On Tue, Dec 5, 2017 at 8:57 PM, Joey Frazee  wrote:

> PutParquet doesn't have the AWS S3 SDK included in it itself but it
> provides an "Additional Classpath Resources" property that you need to
> point at a directory with all the S3 dependencies. I just tested this the
> other day with the following jars:
>
> aws-java-sdk-1.7.4.jar
> hadoop-aws-2.7.3.jar
> hadoop-common-2.7.3.jar
> httpclient-4.5.3.jar
> httpcore-4.4.4.jar
> jackson-annotations-2.6.0.jar
> jackson-core-2.6.1.jar
> jackson-databind-2.6.1.jar
>
> So just grab those from maven central and you should be good to go.
>
> -joey
>
> On Dec 5, 2017, 6:53 PM -0600, Madhukar Thota ,
> wrote:
>
> Hi
>
> Is it possible to use PutParquet processor to write files into S3? I tried
> by setting s3 bucket in core-site.xml file but i am getting *No
> FileSystem for scheme: s3a*
>
> *core-site.xml*
>
> 
> 
> 
>
> 
>
> 
> 
> fs.defaultFS
> s3a://testing
> 
> 
> fs.s3a.access.key
> 
> 
> 
> fs.s3a.secret.key
> xxx
> 
> 
>
>


Re: PutParquet with S3

2017-12-05 Thread Joey Frazee
PutParquet doesn't include the AWS S3 SDK itself, but it provides an 
"Additional Classpath Resources" property that you need to point at a directory 
containing all the S3 dependencies. I just tested this the other day with the 
following jars:

aws-java-sdk-1.7.4.jar
hadoop-aws-2.7.3.jar
hadoop-common-2.7.3.jar
httpclient-4.5.3.jar
httpcore-4.4.4.jar
jackson-annotations-2.6.0.jar
jackson-core-2.6.1.jar
jackson-databind-2.6.1.jar

So just grab those from maven central and you should be good to go.
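
For instance (a sketch with made-up paths; it assumes the jars above sit
together in one directory on every node), the relevant PutParquet properties
would look something like:

  Hadoop Configuration Resources: /opt/nifi/conf/core-site.xml
  Additional Classpath Resources: /opt/nifi/extra-libs
  Directory:                      s3a://testing/output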

-joey

On Dec 5, 2017, 6:53 PM -0600, Madhukar Thota , wrote:
> Hi
>
> Is it possible to use PutParquet processor to write files into S3? I tried by 
> setting s3 bucket in core-site.xml file but i am getting No FileSystem for 
> scheme: s3a
>
> core-site.xml
>
> 
> 
> 
>
> 
>
> 
> 
> fs.defaultFS
> s3a://testing
> 
> 
> fs.s3a.access.key
> 
> 
> 
> fs.s3a.secret.key
> xxx
> 
> 
>


RE: [EXT] CDC like updates on Nifi

2017-12-05 Thread Peter Wicks (pwicks)
Alberto,

Since it sounds like you have control over the structure of the tables, this 
should be doable.

If you have a changelog table for each table this will probably be easier, and 
in your changelog table you’ll need to make sure you have a good transaction 
timestamp column and a change type column (I/U/D). Then use QueryDatabaseTable 
to tail your change log table, one copy of QueryDatabaseTable for each change 
table.

Now your changes are in easy-to-ingest Avro files. For Hive I’d probably use an 
external table with the Avro schema; this makes it easy to use PutHDFS to load 
the file and make it accessible from Hive. I haven’t used Phoenix, sorry.
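
As a rough sketch of that per-table variant (table, column, and path names are
made up), the flow could look like:

  QueryDatabaseTable (Table Name: orders_changelog, Maximum-value Columns: txn_ts)
    -> PutHDFS (Directory: /datalake/orders_changelog)

with a Hive external table declared over /datalake/orders_changelog using the
Avro schema, so each new change file is queryable as soon as PutHDFS lands it.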

If you have a single change table for all tables, then you can still use the 
above pattern, but you’ll need a middle step where you extract and rebuild the 
changes. Maybe if you store the changes in JSON you could extract them using 
one of the Record parsers and then rebuild the data row. Much harder, though.

Thanks,
  Peter

From: Alberto Bengoa [mailto:albe...@propus.com.br]
Sent: Wednesday, December 06, 2017 06:24
To: users@nifi.apache.org
Subject: [EXT] CDC like updates on Nifi

Hey folks,

I read about Nifi CDC processor for MySQL and other CDC "solutions" with Nifi 
found on Google, like these:

https://community.hortonworks.com/idea/53420/apache-nifi-processor-to-address-cdc-use-cases-for.html
https://community.hortonworks.com/questions/88686/change-data-capture-using-nifi-1.html
https://community.hortonworks.com/articles/113941/change-data-capture-cdc-with-apache-nifi-version-1-1.html

I'm trying a different approach to acquire fresh information from tables, using 
triggers on source database's tables to write changes to a "changelog table".

This is done, but my questions are:

Would Nifi be capable to read this tables, transform these data to generate a 
SQL equivalent query (insert/update/delete) to send to Hive and/or Phoenix with 
current available processors?

Which would be the best / suggested flow?

The objective is to keep tables on the Data Lake as up-to-date as possible for 
real time analyses.

Cheers,
Alberto


PutParquet with S3

2017-12-05 Thread Madhukar Thota
Hi

Is it possible to use the PutParquet processor to write files to S3? I tried
setting the S3 bucket in the core-site.xml file, but I am getting *No
FileSystem for scheme: s3a*

*core-site.xml*

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>s3a://testing</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value></value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>xxx</value>
  </property>
</configuration>

CDC like updates on Nifi

2017-12-05 Thread Alberto Bengoa
Hey folks,

I read about Nifi CDC processor for MySQL and other CDC "solutions" with
Nifi found on Google, like these:

https://community.hortonworks.com/idea/53420/apache-nifi-
processor-to-address-cdc-use-cases-for.html
https://community.hortonworks.com/questions/88686/change-
data-capture-using-nifi-1.html
https://community.hortonworks.com/articles/113941/change-
data-capture-cdc-with-apache-nifi-version-1-1.html

I'm trying a different approach to acquire fresh information from tables,
using triggers on source database's tables to write changes to a "changelog
table".

This is done, but my questions are:

Would NiFi be capable of reading these tables and transforming the data to
generate an equivalent SQL query (insert/update/delete) to send to Hive and/or
Phoenix with the currently available processors?

Which would be the best / suggested flow?

The objective is to keep tables on the Data Lake as up-to-date as possible
for real time analyses.

Cheers,
Alberto


NiFi - replace custom script by available processors

2017-12-05 Thread tzhu
Hi,

My task is to analyze some log files, count the occurrences of certain strings,
and record the count in a SQL table. If the table does not exist, NiFi should
create a new SQL table and put the data in. I have the process set up as in the
following picture:
[flow screenshot not preserved in the archive]

In ExecuteScript, I write a few lines using a callback to read the content and
write a new flowfile based on the result. The output is a string which can be
used as a SQL query. If PutSQL is successful, then the process terminates; if
it fails, then it goes to another custom script which contains the "create
table" query combined with the information from the previous flowfile (the
earlier insert query).

I want to know if I can improve the process by replacing the ExecuteScript
processors with other processors.
I have two failed approaches:
1. In the analysis part, use SplitText to split the contents into lines, then
use RouteOnContent to find the lines with the desired string, then use
UpdateCounter to record the count. But then I am stuck on how to get the
counter value into a flowfile.
2. To put the flowfile into a SQL table, I want to use another format such as
JSON, then use ConvertJSONToSQL to create the SQL query. But I got an error
message saying "1st arg can't be coerced to int". Maybe I can't use a
dictionary as the JSON file.

Or maybe there are better ways to improve the process. Any help is
appreciated!

Thanks,
Tina



--
Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/


Re: unable to start InvokeHTTP processor in secure Nifi 1.4.0 cluster....

2017-12-05 Thread dan young
Hello Joe,

Here's the JIRA. LMK if you need additional details.

https://issues.apache.org/jira/browse/NIFI-4655

Regards,

Dano

On Mon, Dec 4, 2017 at 10:46 AM Joe Witt  wrote:

> Dan
>
> Please share as much of your config for the processor as you can.
> Also, please file a JIRA for this.  There is definitely a bug that
> needs to be addressed if you can make an NPE happen.
>
> Thanks
>
> On Mon, Dec 4, 2017 at 12:27 PM, dan young  wrote:
> > Hello,
> >
> >
> > I'm working on migrating some flows over to a secure cluster with OIDC.
> When
> > I try to start an InvokeHTTP processor, I'm getting the following errors
> in
> > the logs.  Is there some permission/policy that I need to set for this to
> > work?  or is this something else?
> >
> >
> > Nifi 1.4.0
> >
> >
> > 2017-12-04 17:20:03,972 ERROR [StandardProcessScheduler Thread-8]
> > o.a.nifi.processors.standard.InvokeHTTP
> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5]
> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5] failed to invoke
> > @OnScheduled method due to java.lang.RuntimeException: Failed while
> > executing one of processor's OnScheduled task.; processor will not be
> > scheduled to run for 30 seconds: java.lang.RuntimeException: Failed while
> > executing one of processor's OnScheduled task.
> >
> > java.lang.RuntimeException: Failed while executing one of processor's
> > OnScheduled task.
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1483)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:103)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1302)
> >
> > at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> >
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >
> > at java.lang.Thread.run(Thread.java:748)
> >
> > Caused by: java.util.concurrent.ExecutionException:
> > java.lang.reflect.InvocationTargetException
> >
> > at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> >
> > at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1466)
> >
> > ... 9 common frames omitted
> >
> > Caused by: java.lang.reflect.InvocationTargetException: null
> >
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >
> > at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >
> > at java.lang.reflect.Method.invoke(Method.java:498)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1306)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1302)
> >
> > ... 6 common frames omitted
> >
> > Caused by: java.lang.NullPointerException: null
>


RE: NIfi User Details

2017-12-05 Thread Willmer, Alex (UK Defence)
You can use Apache Ranger, with the NiFi plugin.

https://cwiki.apache.org/confluence/display/RANGER/NiFi+Plugin

Ranger performs authorisation of NiFi users, and keeps an audit of user 
actions. The audit is written to a SOLR instance for interactive 
browsing/searching, and can also be written to long-term storage (e.g. HDFS).

Regards, Alex

From: mohit.j...@open-insights.co.in [mailto:mohit.j...@open-insights.co.in]
Sent: 05 December 2017 07:17
To: users@nifi.apache.org
Subject: NIfi User Details

Hi all,

Is there any way to capture which user has run a NiFi process, apart from the 
nifi-user.log file?

Regards,
Mohit