Joey,

Yes. The client must be installed and set up (this is a requirement for the
compiled NiFi as well).

Without the client installed and configured, the MapR libraries (Java and
native) would not know which ZooKeeper to connect to in order to get
information about the CLDB (their alternative to the NameNode).
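
For reference, on a typical MapR client install the cluster definition lives
in /opt/mapr/conf/mapr-clusters.conf (path and format assumed from a standard
client setup); each line names a cluster and its CLDB nodes, e.g.:

my.cluster.com secure=false cldb1.example.com:7222 cldb2.example.com:7222

The cluster name and hostnames above are placeholders; without an entry like
this the libraries have no way to locate the CLDB.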

Cheers

On Mon, Mar 26, 2018 at 1:20 AM, Joey Frazee <joey.fra...@icloud.com> wrote:

> I'm kinda going on memory here because I lost some notes I had about doing
> this, but I think compiling against the mapr libs presumes you also have
> the C-based mapr client libs on your machine at compile time and run
> time. I skimmed that blog post, albeit very quickly, and didn't see that
> explicitly mentioned in there.
>
> Using the additional jars in PutHDFS would presumably require them too.
>
> Andre, that's correct isn't it?
>
> On Mar 24, 2018, 8:26 AM -0500, Mark Payne <marka...@hotmail.com>, wrote:
>
> Andre,
>
> I knew this was possible but had no idea how. Thanks for the great
> explanation and associated caveats!
>
> -Mark
>
>
> On Mar 24, 2018, at 1:04 AM, Andre <andre-li...@fucs.org> wrote:
>
> Ravi,
>
> There are two ways of solving this.
>
> One of them (suggested to me by MapR representatives) is to deploy MapR's
> FUSE client to your NiFi nodes, use the PutFile processor instead of
> PutHDFS, and let the MapR client coordinate the API engagement with
> MapR-FS. This is a very clean and robust approach; however, it may have
> licensing implications, as the FUSE client is licensed (per node, if I
> recall correctly).
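>
> (For illustration: with the FUSE client the cluster typically appears under
> a mount point such as /mapr/<cluster_name>, so PutFile's Directory property
> would point at something like /mapr/my.cluster.com/data/ingest. The mount
> point and path here are assumptions based on a default FUSE client setup.)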
>
> The other one is to use the out-of-box PutHDFS processor with a bit of
> configuration (it works on both secure and insecure clusters).
>
> Try this out:
>
> Instead of recompiling, simply point PutHDFS to the mapr-client jars and
> use a core-site.xml with the following content:
>
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>maprfs:///</value>
>   </property>
> </configuration>
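>
> (For reference, and assuming a typical client install: in PutHDFS the
> core-site.xml above goes into the "Hadoop Configuration Resources" property,
> and the mapr-client jars can be supplied via "Additional Classpath
> Resources", e.g. pointing at /opt/mapr/lib.)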
>
> Please note the MapR clients don't play nicely with Kerberos, and you will
> be required to use a MapR ticket to access the system. This can easily be
> done by:
>
> sudo -u <whatever_user_nifi_uses> sh -c "kinit -kt /path/to/your/keytab &&
> maprlogin kerberos"
>
> Cheers
>
> [1] https://lists.apache.org/thread.html/af9244266e89990618152bb59b5bf95c9a49dc2428ea3fa0e6aaa682@%3Cusers.nifi.apache.org%3E
> [2] https://cwiki.apache.org/confluence/x/zI5zAw
>
>
>
> On Fri, Mar 23, 2018 at 5:05 AM, Ravi Papisetti (rpapiset) <
> rpapi...@cisco.com> wrote:
>
>> Hi,
>>
>>
>>
>> I have recompiled NiFi with the MapR dependencies as per the instructions
>> at http://hariology.com/integrating-mapr-fs-and-apache-nifi/
>>
>>
>>
>> Created a process flow with ListFile > FetchFile > PutHDFS. As soon as I
>> start this process group, nifi-bootstrap.log fills with:
>>
>> 2018-03-21 22:56:26,806 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr 2018-03-21 22:56:26,8003 select failed(-1) error
>> Invalid argument
>>
>> 2018-03-21 22:56:26,806 ERROR [NiFi logging handler]
>> org.apache.nifi.StdErr 2018-03-21 22:56:26,8003 select failed(-1) error
>> Invalid argument
>>
>>
>>
>> This log grows into GBs in minutes. I had to stop nifi to stop the
>> flooding.
>>
>>
>>
>> I found a similar issue in the Pentaho forum: https://jira.pentaho.com/browse/PDI-16270
>>
>>
>>
>> Does anyone have any thoughts on why this error might be occurring?
>>
>>
>>
>> Appreciate any help.
>>
>>
>>
>> Thanks,
>>
>> Ravi Papisetti
>>
>
>
