Hi Ravi,

Could you provide some additional details about both your NiFi
environment and the MapR destination?

Is your NiFi a single instance or clustered?  In the case of the latter, is
security established for your ZooKeeper ensemble?

Is your target cluster Kerberized?  What version are you running?  Have you
attempted to use the List/GetHDFS processors?  Do they also fail when
reading?
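
In the meantime, you could sanity-check credentials and connectivity from
the command line on the NiFi host (a sketch; assumes the hadoop client is
installed there and that the NiFi service runs as the `nifi` user):

```shell
# If the cluster is Kerberized, confirm the service user holds a valid ticket
sudo -u nifi klist

# Confirm the same user can reach the cluster and read the target path
sudo -u nifi hadoop fs -ls /app/DataAnalyticsFramework/catalog
```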

Thanks!
--aldrin

On Mon, Jun 13, 2016 at 5:19 PM, Ravi Papisetti (rpapiset) <
rpapi...@cisco.com> wrote:

> Thanks Conrad for your reply.
>
> Yes, I have configured PutHDFS with "Remote Owner" and "Remote Group" set
> to the same values as on HDFS. Also, the NiFi service is started under the
> same user.
>
>
>
> Thanks,
>
> Ravi Papisetti
>
> Technical Leader
>
> Services Technology Incubation Center
> <http://wwwin.cisco.com/CustAdv/ts/cstg/stic/>
>
> rpapi...@cisco.com
>
> Phone: +1 512 340 3377
>
>
>
> From: Conrad Crampton <conrad.cramp...@secdata.com>
> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
> Date: Monday, June 13, 2016 at 4:01 PM
> To: "users@nifi.apache.org" <users@nifi.apache.org>
> Subject: Re: Writing files to MapR File system using putHDFS
>
> Hi,
>
> Sounds like a permissions problem. Have you set the Remote Owner and
> Remote Group settings in the processor appropriately for the HDFS
> permissions on the target directory?
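>
> For example (a sketch, assuming the hadoop client is installed on the
> NiFi host; adjust the path to your target), you can compare the
> directory's owner and group with what the processor is setting:
>
> ```shell
> # Show owner, group, and permissions of the target directory itself (-d)
> hadoop fs -ls -d /app/DataAnalyticsFramework/catalog
> ```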
>
> Conrad
>
>
>
> *From: *"Ravi Papisetti (rpapiset)" <rpapi...@cisco.com>
> *Reply-To: *"users@nifi.apache.org" <users@nifi.apache.org>
> *Date: *Monday, 13 June 2016 at 21:25
> *To: *"users@nifi.apache.org" <users@nifi.apache.org>, "
> d...@nifi.apache.org" <d...@nifi.apache.org>
> *Subject: *Writing files to MapR File system using putHDFS
>
>
>
> Hi,
>
>
>
> We just started exploring Apache NiFi for data onboarding into a MapR
> distribution. We have configured PutHDFS with the yarn-site.xml from the
> local MapR client (which provides the cluster information), configured the
> "Directory" property with the MapR FS directory to write the files to, and
> configured NiFi to run as a user that has permission to write to MapR FS.
> In spite of that, we are getting the error below while writing files into
> the given file system path. I suspect that NiFi is either not talking to
> the cluster or is talking as the wrong user. I would appreciate it if
> someone could guide me in troubleshooting this issue, or suggest solutions
> if we are doing something wrong:
>
> The NiFi workflow is very simple: GetFile is configured to read from the
> local file system and is connected to PutHDFS with yarn-site.xml and the
> directory information configured.
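>
> For comparison (a sketch; run on the NiFi host as the same user the NiFi
> service runs as), the equivalent write can be attempted with the hadoop
> client directly:
>
> ```shell
> # Try to create the directory that PutHDFS is failing to create
> hadoop fs -mkdir -p /app/DataAnalyticsFramework/catalog/nifi
>
> # Copy a test file into it, mirroring what PutHDFS does
> hadoop fs -put /tmp/test.txt /app/DataAnalyticsFramework/catalog/nifi/
> ```
>
> If these commands fail with a similar error, the problem lies in the
> client configuration or file system permissions rather than in NiFi
> itself.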
>
> 2016-06-13 15:14:36,305 INFO [Timer-Driven Process Thread-2]
> o.apache.nifi.processors.hadoop.PutHDFS
> PutHDFS[id=07abcfaa-fa8d-496b-81f0-b1b770672719] Kerberos relogin
> successful or ticket still valid
> 2016-06-13 15:14:36,324 ERROR [Timer-Driven Process Thread-2]
> o.apache.nifi.processors.hadoop.PutHDFS
> PutHDFS[id=07abcfaa-fa8d-496b-81f0-b1b770672719] Failed to write to HDFS
> due to java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could
> not be created: java.io.IOException:
> /app/DataAnalyticsFramework/catalog/nifi could not be created
> 2016-06-13 15:14:36,330 ERROR [Timer-Driven Process Thread-2]
> o.apache.nifi.processors.hadoop.PutHDFS
> java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could not be
> created
>         at
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:238)
> ~[nifi-hdfs-processors-0.6.1.jar:0.6.1]
>         at
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> [nifi-api-0.6.1.jar:0.6.1]
>         at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
> [nifi-framework-core-0.6.1.jar:0.6.1]
>         at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
> [nifi-framework-core-0.6.1.jar:0.6.1]
>         at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> [nifi-framework-core-0.6.1.jar:0.6.1]
>         at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
> [nifi-framework-core-0.6.1.jar:0.6.1]
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> [na:1.7.0_101]
>         at
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> [na:1.7.0_101]
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> [na:1.7.0_101]
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> [na:1.7.0_101]
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> [na:1.7.0_101]
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> [na:1.7.0_101]
>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101]
>
> Appreciate any help.
>
>
>
> Thanks,
>
> Ravi Papisetti
>
> Technical Leader
>
> Services Technology Incubation Center
> <http://wwwin.cisco.com/CustAdv/ts/cstg/stic/>
>
> rpapi...@cisco.com
>
> Phone: +1 512 340 3377
>
>
>
