Re: Nifi compatibility with Hadoop Version

2016-09-10 Thread Andre
Shashi,

To the best of my knowledge, you should be able to use Hadoop 2.7 against
1.0.0 and master. This should also include the builds packaged for third-party
vendors via profiles:

-Pmapr
-Pcloudera

I also suggest you cherry-pick 80224e3e5ed7ee7b09c4985a920a7fa393bff26c, as
this commit added a few new properties to control the versions used during
compilation.

The way to compile against a particular Hadoop version is to build NiFi
from source and use the -Dhadoop.version=X parameter.
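For what it's worth, a sketch of those steps as concrete commands (the commit hash and the -Pmapr profile are from the points above; Hadoop 2.7.3 is just an illustrative version):

```shell
# On a checkout of the NiFi sources (master or the 1.0.0 release tag)
git cherry-pick 80224e3e5ed7ee7b09c4985a920a7fa393bff26c

# Build against a specific Hadoop version
mvn clean install -DskipTests -Dhadoop.version=2.7.3

# Or build with one of the vendor profiles instead
mvn clean install -DskipTests -Pmapr
```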

Cheers

On Sat, Sep 10, 2016 at 11:53 PM, Shashi Vishwakarma <
shashi.vish...@gmail.com> wrote:

> Hi All
>
> I have a very basic question about NiFi. I see that NiFi has the
> default PutHDFS and GetHDFS processors.
>
> Does NiFi depend on the Hadoop version present on the cluster?
>
> For example, is NiFi 0.6 compatible with Hadoop 2.7, and so on?
>
> Is there such a compatibility matrix, or does it purely depend on the
> Hadoop configuration that we provide?
>
> Thanks
> Shashi
>


Re: Provenance expiration error

2016-09-10 Thread Adam J. Shook
Thank you for the reply!  I take it upgrading from 0.7.0 to 1.0.0 follows the
same upgrade process as defined on the wiki?  Are there any additional
items due to it being a major release upgrade?

Thanks,
--Adam
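(For anyone following along, the general shape of the upgrade is: stop the old instance, unpack the new one, and carry state across. The outline below is only a hedged sketch with hypothetical paths; the wiki's upgrade guide remains the authority.)

```shell
# Hypothetical layout: /opt/nifi-0.7.0 (old) and /opt/nifi-1.0.0 (new)
/opt/nifi-0.7.0/bin/nifi.sh stop
tar xzf nifi-1.0.0-bin.tar.gz -C /opt

# Carry the flow across; merge nifi.properties by hand rather than copying it,
# since the new release adds properties the old file does not have
cp /opt/nifi-0.7.0/conf/flow.xml.gz /opt/nifi-1.0.0/conf/

/opt/nifi-1.0.0/bin/nifi.sh start
```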

On Sat, Sep 10, 2016 at 1:08 PM, Joe Percivall 
wrote:

> Hello Adam,
>
>
> Sorry no one has responded yet.
>
> Taking a look at the stack trace, I think you are running into
> NIFI-2087[1]. This was addressed in 1.0.0.
> [1] https://issues.apache.org/jira/browse/NIFI-2087
>
>
>
> Joe
> - - - - - -
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
>
>
>
> On Saturday, September 10, 2016 12:42 AM, Adam J. Shook <
> adamjsh...@gmail.com> wrote:
>
>
>
> --bump--
>
> Any ideas on the below issue?
>
> Thanks,
> --Adam
>
>
> On Wed, Aug 31, 2016 at 4:46 PM, Adam J. Shook 
> wrote:
>
> Hello,
> >
> >
> >I continue to receive the below error regarding deleting entries from the
> provenance repository.  The Googles aren't returning anything too helpful.
> >
> >
> >NiFi v0.7.0 on RHEL 6.8, JDK 1.8.0_60
> >
> >
> >Any ideas?
> >
> >
> >Thanks,
> >--Adam
> >
> >
> >2016-08-31 16:42:17,763 WARN [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository Failed to perform Expiration Action org.apache.nifi.provenance.lucene.DeleteIndexAction@4aff1156 on Provenance Event file /data01/nifi/provenance_repository/5190858.prov.gz due to java.lang.IllegalArgumentException: Cannot skip to block -1 because the value is negative; will not perform additional Expiration Actions on this file at this time
> >2016-08-31 16:42:17,763 WARN [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository
> >java.lang.IllegalArgumentException: Cannot skip to block -1 because the value is negative
> >at org.apache.nifi.provenance.StandardRecordReader.skipToBlock(StandardRecordReader.java:111) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
> >at org.apache.nifi.provenance.StandardRecordReader.getMaxEventId(StandardRecordReader.java:458) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
> >at org.apache.nifi.provenance.lucene.DeleteIndexAction.execute(DeleteIndexAction.java:52) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
> >at org.apache.nifi.provenance.PersistentProvenanceRepository.purgeOldEvents(PersistentProvenanceRepository.java:907) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
> >at org.apache.nifi.provenance.PersistentProvenanceRepository$2.run(PersistentProvenanceRepository.java:261) [nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
> >at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_60]
> >at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_60]
> >at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_60]
> >at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_60]
> >at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
> >at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
> >at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> >
> >
>


Re: Nifi compatibility with Hadoop Version

2016-09-10 Thread Bryan Bende
It doesn't change too frequently; I believe we have been on 2.6.x for quite
some time, as shown by the links Manish provided.

It really comes down to figuring out when a new stable client is released,
when that version becomes the most widely used, how compatible it is with
other versions, and so on.
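As a concrete way to check, one can read the hadoop.version property out of a release tag's root pom.xml (the property name matches the poms Manish links to below; the clone step is assumed):

```shell
# Assuming a local clone of the Apache NiFi repository
git clone https://github.com/apache/nifi && cd nifi
git checkout rel/nifi-1.0.0
grep 'hadoop.version' pom.xml
```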

On Sat, Sep 10, 2016 at 3:29 PM, Manish Gupta 8 
wrote:

> I generally prefer to check Github for such information. Look for project
> properties in pom.xml for Hadoop version.
>
>
>
> NiFi 1.0: https://github.com/apache/nifi/blob/rel/nifi-1.0.0/pom.xml
>
> NiFi 0.7: https://github.com/apache/nifi/blob/rel/nifi-0.7.0/pom.xml
>
> NiFi 0.6: https://github.com/apache/nifi/blob/rel/nifi-0.6.1/pom.xml
>
>
>
> Regards,
>
> Manish
>
>
>
> *From:* Shashi Vishwakarma [mailto:shashi.vish...@gmail.com]
> *Sent:* Saturday, September 10, 2016 3:20 PM
> *To:* users@nifi.apache.org
> *Subject:* Re: Nifi compatibility with Hadoop Version
>
>
>
> Thanks a lot. Does the Hadoop client version change with the NiFi version?
> Where can I find more information about which version of NiFi is packaged
> with which version of Hadoop?
>
> Thanks Shashi
>
>
>
> On 11 Sep 2016 12:20 am, "Bryan Bende"  wrote:
>
> Shashi,
>
>
>
> Apache NiFi is currently built with the Apache Hadoop 2.6.2 client, so
> generally it will work with any version of Hadoop that this client is
> compatible with.
>
>
>
> NiFi does not use any libraries from the target cluster, apart from the
> config files giving the locations of its services; the client itself is
> bundled with the NiFi build.
>
>
>
> There have been some efforts recently to provide build profiles for NiFi
> for those who want to build a version of NiFi that uses vendor specific
> libraries (i.e. MapR, CDH, HDP, etc.), but I can't fully speak to the
> current state of that effort.
>
>
>
> Thanks,
>
>
>
> Bryan
>
>
>
> On Sat, Sep 10, 2016 at 9:53 AM, Shashi Vishwakarma <
> shashi.vish...@gmail.com> wrote:
>
> Hi All
>
>
>
> I have a very basic question about NiFi. I see that NiFi has the
> default PutHDFS and GetHDFS processors.
>
> Does NiFi depend on the Hadoop version present on the cluster?
>
> For example, is NiFi 0.6 compatible with Hadoop 2.7, and so on?
>
> Is there such a compatibility matrix, or does it purely depend on the
> Hadoop configuration that we provide?
>
>
>
> Thanks
>
> Shashi
>
>
>
>


Re: Nifi compatibility with Hadoop Version

2016-09-10 Thread Bryan Bende
Shashi,

Apache NiFi is currently built with the Apache Hadoop 2.6.2 client, so
generally it will work with any version of Hadoop that this client is
compatible with.

NiFi does not use any libraries from the target cluster, apart from the
config files giving the locations of its services; the client itself is
bundled with the NiFi build.

There have been some recent efforts to provide build profiles for NiFi
for those who want to build a version of NiFi that uses vendor-specific
libraries (i.e. MapR, CDH, HDP, etc.), but I can't fully speak to the
current state of that effort.

Thanks,

Bryan

On Sat, Sep 10, 2016 at 9:53 AM, Shashi Vishwakarma <
shashi.vish...@gmail.com> wrote:

> Hi All
>
> I have a very basic question about NiFi. I see that NiFi has the
> default PutHDFS and GetHDFS processors.
>
> Does NiFi depend on the Hadoop version present on the cluster?
>
> For example, is NiFi 0.6 compatible with Hadoop 2.7, and so on?
>
> Is there such a compatibility matrix, or does it purely depend on the
> Hadoop configuration that we provide?
>
> Thanks
> Shashi
>


Re: Provenance expiration error

2016-09-10 Thread Joe Percivall
Hello Adam,


Sorry no one has responded yet.

Taking a look at the stack trace, I think you are running into NIFI-2087[1]. 
This was addressed in 1.0.0.
[1] https://issues.apache.org/jira/browse/NIFI-2087



Joe 
- - - - - - 
Joseph Percivall
linkedin.com/in/Percivall
e: joeperciv...@yahoo.com



On Saturday, September 10, 2016 12:42 AM, Adam J. Shook  
wrote:



--bump--

Any ideas on the below issue?

Thanks,
--Adam


On Wed, Aug 31, 2016 at 4:46 PM, Adam J. Shook  wrote:

Hello,
>
>
>I continue to receive the below error regarding deleting entries from the 
>provenance repository.  The Googles aren't returning anything too helpful.
>
>
>NiFi v0.7.0 on RHEL 6.8, JDK 1.8.0_60
>
>
>Any ideas?
>
>
>Thanks,
>--Adam
>
>
>2016-08-31 16:42:17,763 WARN [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository Failed to perform Expiration Action org.apache.nifi.provenance.lucene.DeleteIndexAction@4aff1156 on Provenance Event file /data01/nifi/provenance_repository/5190858.prov.gz due to java.lang.IllegalArgumentException: Cannot skip to block -1 because the value is negative; will not perform additional Expiration Actions on this file at this time
>2016-08-31 16:42:17,763 WARN [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository
>java.lang.IllegalArgumentException: Cannot skip to block -1 because the value is negative
>at org.apache.nifi.provenance.StandardRecordReader.skipToBlock(StandardRecordReader.java:111) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
>at org.apache.nifi.provenance.StandardRecordReader.getMaxEventId(StandardRecordReader.java:458) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
>at org.apache.nifi.provenance.lucene.DeleteIndexAction.execute(DeleteIndexAction.java:52) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
>at org.apache.nifi.provenance.PersistentProvenanceRepository.purgeOldEvents(PersistentProvenanceRepository.java:907) ~[nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
>at org.apache.nifi.provenance.PersistentProvenanceRepository$2.run(PersistentProvenanceRepository.java:261) [nifi-persistent-provenance-repository-0.7.0.jar:0.7.0]
>at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_60]
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_60]
>at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_60]
>at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_60]
>at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
>at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
>
>


Re: Nifi | Multiple nar dependency in pom

2016-09-10 Thread Shashi Vishwakarma
Thanks. The above information was very useful.

Thanks
Shashi

On Tue, Sep 6, 2016 at 6:16 PM, Matt Gilman  wrote:

> That is correct. Currently, each NAR can only have a single NAR
> dependency. Typically we either package the Controller Service APIs
> together or establish a chain. Establishing a chain builds a transitive
> NAR dependency: any Controller Service APIs bundled in ancestor NARs will
> be available.
>
> Note, I'm specifically calling out Controller Service APIs because the
> implementations of the Controller Services do not need to be in the NAR
> dependency chain I'm describing. They can be bundled in separate adjacent
> NARs that share the same Controller Service API NAR dependency.
>
> Thanks
>
> Matt
>
> On Tue, Sep 6, 2016 at 6:27 AM, Shashi Vishwakarma <
> shashi.vish...@gmail.com> wrote:
>
>> Hi
>>
>> I am developing two custom processors, one with a dependency on
>> controller service 1 and another with a dependency on controller
>> service 2.
>>
>> In the processor NAR POM, I tried to include both dependencies as below.
>>
>> <dependency>
>>     <groupId>com.abc.nifi</groupId>
>>     <artifactId>nifi-custom1-service-api-nar</artifactId>
>>     <version>0.3.0-SNAPSHOT</version>
>>     <type>nar</type>
>> </dependency>
>>
>> <dependency>
>>     <groupId>com.abc.nifi.services</groupId>
>>     <artifactId>nifi-custom2-services-nar</artifactId>
>>     <version>0.3.0-SNAPSHOT</version>
>>     <type>nar</type>
>> </dependency>
>>
>> After compiling, I get the following error:
>>
>> Failed to execute goal org.apache.nifi:nifi-nar-maven-plugin:1.1.0:nar
>> (default-nar) on project nifi-custom-nar: Error assembling NAR: Each NAR
>> represents a ClassLoader. A NAR dependency allows that NAR's ClassLoader to
>> be used as the parent of this NAR's ClassLoader. As a result, only a single
>> NAR dependency is allowed.
>>
>> Does that mean I cannot include two NAR dependencies? Is there any
>> workaround for this?
>>
>> Thanks
>> Shashi
>>
>
>
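To illustrate the chain Matt describes using Shashi's coordinates, a sketch under the assumption that the custom2 services NAR is made to depend on the custom1 service API NAR (hypothetical arrangement, not the project's actual layout):

```xml
<!-- In nifi-custom2-services-nar/pom.xml: depend on the custom1 service API NAR,
     making it the parent ClassLoader -->
<dependency>
    <groupId>com.abc.nifi</groupId>
    <artifactId>nifi-custom1-service-api-nar</artifactId>
    <version>0.3.0-SNAPSHOT</version>
    <type>nar</type>
</dependency>

<!-- In the processor NAR's pom.xml: the single allowed NAR dependency; the
     custom1 API is then visible through the ancestor NAR's ClassLoader -->
<dependency>
    <groupId>com.abc.nifi.services</groupId>
    <artifactId>nifi-custom2-services-nar</artifactId>
    <version>0.3.0-SNAPSHOT</version>
    <type>nar</type>
</dependency>
```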


Re: PermissionBasedStatusMergerSpec is failing

2016-09-10 Thread Jeff
Tijo,

Have you modified ProcessorStatusSnapshotDTO.java or
PermissionBasedStatusMergerSpec.groovy?
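When chasing a single failing spec like this, it can help to run just that test class; a sketch, where the module path and Surefire's handling of the Groovy spec via -Dtest are both assumptions about the current build layout:

```shell
# From the root of the NiFi checkout
cd nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster
mvn test -Dtest=PermissionBasedStatusMergerSpec
```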

On Sat, Sep 10, 2016 at 7:48 AM Tijo Thomas  wrote:

> Hi Jeff
>
> I recently rebased from master.
> Then I cloned again and ran mvn package.
>
> Tijo
>
> On 09-Sep-2016 9:12 pm, "Jeff"  wrote:
>
>> Tijo,
>>
>> I just ran this test on master and it's passing for me.  Can you provide
>> some details about the branch you're on when running the tests?  I see that
>> tasksDuration is 00:30:00.000 when it's expecting 00:00:00.000, and that's
>> why the JSON isn't matching.
>>
>> On Thu, Sep 8, 2016 at 4:58 PM Tijo Thomas  wrote:
>>
>>> Hi
>>> A NiFi test case is failing (PermissionBasedStatusMergerSpec).
>>> It is written in Groovy, which I am not comfortable with.
>>>
>>> Running org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec
>>> Tests run: 20, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.922 sec <<< FAILURE! - in org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec
>>> Merge ProcessorStatusSnapshotDTO[0](org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec)  Time elapsed: 0.144 sec  <<< FAILURE!
>>> org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:
>>>
>>> returnedJson == expectedJson
>>> ||  |
>>> ||  {"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:00:00.000","activeThreadCount":0}
>>> |false
>>> |1 difference (99% similarity)
>>> |{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:(3)0:00.000","activeThreadCount":0}
>>> |{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:(0)0:00.000","activeThreadCount":0}
>>> {"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:30:00.000","activeThreadCount":0}
>>>
>>> at org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec.Merge ProcessorStatusSnapshotDTO(PermissionBasedStatusMergerSpec.groovy:257)
>>>
>>> Merge ProcessorStatusSnapshotDTO[1](org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec)  Time elapsed: 0.01 sec  <<< FAILURE!
>>> org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:
>>>
>>> Tijo
>>>
>>>
>>>
>>>


Re: PermissionBasedStatusMergerSpec is failing

2016-09-10 Thread Tijo Thomas
Hi Jeff

I recently rebased from master.
Then I cloned again and ran mvn package.

Tijo

On 09-Sep-2016 9:12 pm, "Jeff"  wrote:

> Tijo,
>
> I just ran this test on master and it's passing for me.  Can you provide
> some details about the branch you're on when running the tests?  I see that
> tasksDuration is 00:30:00.000 when it's expecting 00:00:00.000, and that's
> why the JSON isn't matching.
>
> On Thu, Sep 8, 2016 at 4:58 PM Tijo Thomas  wrote:
>
>> Hi
>> A NiFi test case is failing (PermissionBasedStatusMergerSpec).
>> It is written in Groovy, which I am not comfortable with.
>>
>> Running org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec
>> Tests run: 20, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.922 sec <<< FAILURE! - in org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec
>> Merge ProcessorStatusSnapshotDTO[0](org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec)  Time elapsed: 0.144 sec  <<< FAILURE!
>> org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:
>>
>> returnedJson == expectedJson
>> ||  |
>> ||  {"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:00:00.000","activeThreadCount":0}
>> |false
>> |1 difference (99% similarity)
>> |{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:(3)0:00.000","activeThreadCount":0}
>> |{"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:(0)0:00.000","activeThreadCount":0}
>> {"id":"hidden","groupId":"hidden","name":"hidden","type":"hidden","bytesRead":0,"bytesWritten":0,"read":"0 bytes","written":"0 bytes","flowFilesIn":0,"bytesIn":0,"input":"0 (0 bytes)","flowFilesOut":0,"bytesOut":0,"output":"0 (0 bytes)","taskCount":0,"tasksDurationNanos":0,"tasks":"0","tasksDuration":"00:30:00.000","activeThreadCount":0}
>>
>> at org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec.Merge ProcessorStatusSnapshotDTO(PermissionBasedStatusMergerSpec.groovy:257)
>>
>> Merge ProcessorStatusSnapshotDTO[1](org.apache.nifi.cluster.manager.PermissionBasedStatusMergerSpec)  Time elapsed: 0.01 sec  <<< FAILURE!
>> org.spockframework.runtime.SpockComparisonFailure: Condition not satisfied:
>>
>> Tijo
>>
>>
>>
>>