Bryan,

It's for uploading data to an HDInsight cluster, as per the patch in
NIFI-1922.

Regards,
Manish
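
[Editor's note: a minimal sketch of the "Additional Classpath Resources"
approach Bryan describes below, assuming NiFi 1.1+. Point the HDFS
processors' Hadoop Configuration Resources at a core-site.xml that declares
the WASB filesystem, and set Additional Classpath Resources to a directory
holding the hadoop-azure and azure-storage JARs. The account and container
names here are placeholders, not values from this thread.]

```xml
<!-- Hypothetical core-site.xml for a WASB-backed HDInsight storage account.
     Replace mycontainer/myaccount and the key value with your own. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>wasb://mycontainer@myaccount.blob.core.windows.net</value>
  </property>
  <property>
    <name>fs.azure.account.key.myaccount.blob.core.windows.net</name>
    <value>YOUR_STORAGE_ACCOUNT_KEY</value>
  </property>
</configuration>
```

A "Forbidden" (HTTP 403) StorageException like the one later in this thread
typically indicates a missing or incorrect storage account key.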


On Thu, Dec 1, 2016 at 12:16 PM, Bryan Bende <bbe...@gmail.com> wrote:

> Manish,
>
> Was the reason for your custom version of PutHDFS just to include the
> Azure libraries? or was there other functionality you needed to introduce?
>
> The reason I'm asking is because in the Apache NiFi 1.1 release (just
> released) we added an "Additional Classpath Resources" property to all HDFS
> processors, and I have been able to use that to connect to an Azure WASB
> filesystem.
>
> I can provide the list of JARs I added to the classpath to make it work if
> that would help.
>
> Thanks,
>
> Bryan
>
>
> On Thu, Dec 1, 2016 at 12:11 PM, Manish G <manish.gg...@gmail.com> wrote:
>
>> Hi Joe,
>>
>> I am not using anything from NiFi 1.0; it's all 0.7. What indicates the
>> use of 1.0?
>>
>> Regards,
>> Manish
>>
>> On Thu, Dec 1, 2016 at 12:07 PM, Joe Witt <joe.w...@gmail.com> wrote:
>>
>>> Manish
>>>
>>> It also appears from the log output you provided so far that this is a
>>> combination of NiFi 0.7 and 1.0 parts.  We do not recommend doing this,
>>> and it is not likely to work.
>>>
>>> Please try the flow from a single release line.  Apache NiFi 1.1.0 is
>>> available for use now.
>>>
>>> Thanks
>>> Joe
>>>
>>> On Thu, Dec 1, 2016 at 9:02 AM, Manish G <manish.gg...@gmail.com> wrote:
>>> > Got it. I was not aware of "bin/nifi.sh dump". I will try that once I
>>> > see the issue again.
>>> >
>>> > Thanks,
>>> > Manish
>>> >
>>> > On Thu, Dec 1, 2016 at 11:59 AM, Joe Witt <joe.w...@gmail.com> wrote:
>>> >>
>>> >> Manish
>>> >>
>>> >> When it is in the stuck state, can you please run bin/nifi.sh dump?
>>> >> If you can then share the nifi-bootstrap.log, that would aid us in
>>> >> narrowing in on a possible cause.
>>> >>
>>> >> Thanks
>>> >> Joe
>>> >>
>>> >> On Thu, Dec 1, 2016 at 8:44 AM, Manish G <manish.gg...@gmail.com> wrote:
>>> >> > Hi Joe,
>>> >> >
>>> >> > Here is what I can see in the App Log:
>>> >> >
>>> >> > 2016-12-01 09:28:52,004 ERROR [Timer-Driven Process Thread-4]
>>> >> > testing.nifi.processor.hdfs.testingPutHDFS ""
>>> >> > org.apache.hadoop.fs.azure.AzureException: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
>>> >> > at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:1930) ~[hadoop-azure-2.7.2.jar:na]
>>> >> > at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1592) ~[hadoop-azure-2.7.2.jar:na]
>>> >> > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424) ~[hadoop-common-2.7.2.jar:na]
>>> >> > at testing.nifi.processor.hdfs.testingPutHDFS.onTrigger(testingPutHDFS.java:260) ~[testing.nifi.processor-1.0.0.nar-unpacked/:na]
>>> >> > at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-0.7.0.jar:0.7.0]
>>> >> > at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1054) [nifi-framework-core-0.7.0.jar:0.7.0]
>>> >> > at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-0.7.0.jar:0.7.0]
>>> >> > at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-0.7.0.jar:0.7.0]
>>> >> > at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:127) [nifi-framework-core-0.7.0.jar:0.7.0]
>>> >> > at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_101]
>>> >> > at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [na:1.8.0_101]
>>> >> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [na:1.8.0_101]
>>> >> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [na:1.8.0_101]
>>> >> > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_101]
>>> >> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_101]
>>> >> > at java.lang.Thread.run(Unknown Source) [na:1.8.0_101]
>>> >> > Caused by: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
>>> >> > at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:113) ~[azure-storage-2.0.0.jar:na]
>>> >> > at org.apache.hadoop.fs.azure.StorageInterfaceImpl$WrappingIterator.hasNext(StorageInterfaceImpl.java:128) ~[hadoop-azure-2.7.2.jar:na]
>>> >> > at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:1909) ~[hadoop-azure-2.7.2.jar:na]
>>> >> > ... 15 common frames omitted
>>> >> > Caused by: com.microsoft.azure.storage.StorageException: Forbidden
>>> >> > at com.microsoft.azure.storage.StorageException.translateFromHttpStatus(StorageException.java:202) ~[azure-storage-2.0.0.jar:na]
>>> >> > at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:172) ~[azure-storage-2.0.0.jar:na]
>>> >> > at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:273) ~[azure-storage-2.0.0.jar:na]
>>> >> > at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:109) ~[azure-storage-2.0.0.jar:na]
>>> >> > ... 17 common frames omitted
>>> >> > Caused by: java.lang.NullPointerException: null
>>> >> > at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:181) ~[azure-storage-2.0.0.jar:na]
>>> >> > ... 18 common frames omitted
>>> >> >
>>> >> > Regards,
>>> >> > Manish
>>> >> >
>>> >> >> On Thu, Dec 1, 2016 at 10:36 AM, Joe Witt <joe.w...@gmail.com> wrote:
>>> >> >>
>>> >> >> Manish
>>> >> >>
>>> >> >> Please produce and share the thread dump I mentioned.
>>> >> >>
>>> >> >> Thanks
>>> >> >> Joe
>>> >> >>
>>> >> >> On Dec 1, 2016 7:23 AM, "Manish G" <manish.gg...@gmail.com> wrote:
>>> >> >>>
>>> >> >>> Hi,
>>> >> >>>
>>> >> >>> I don't know why, but this is happening more frequently now. Where
>>> >> >>> should I look to find the root cause?
>>> >> >>>
>>> >> >>> Thanks,
>>> >> >>> Manish
>>> >> >>>
>>> >> >>> On Wed, Nov 30, 2016 at 9:20 PM, Manish G <manish.gg...@gmail.com>
>>> >> >>> wrote:
>>> >> >>>>
>>> >> >>>> Hi Joe,
>>> >> >>>>
>>> >> >>>> Thanks for the quick reply. Yes, the processor keeps running on a
>>> >> >>>> single thread, and the number remains there even after stopping.
>>> >> >>>> Today, it happened on my customized PutHDFS processor. The only
>>> >> >>>> thing different in this processor is that I have added an
>>> >> >>>> additional attribute that indicates whether the processor created
>>> >> >>>> the directory while loading the file onto HDFS. I don't think this
>>> >> >>>> should be the issue, though.
>>> >> >>>>
>>> >> >>>> Regards,
>>> >> >>>> Manish
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> On Wed, Nov 30, 2016 at 7:05 PM, Joe Witt <joe.w...@gmail.com>
>>> wrote:
>>> >> >>>>>
>>> >> >>>>> Manish
>>> >> >>>>>
>>> >> >>>>> When it is stuck do you see a number in the top right corner of
>>> >> >>>>> the processor?  When you stop it does the number remain?  That
>>> >> >>>>> number is telling you how many threads are still executing.  Which
>>> >> >>>>> processor are we talking about?  When it is in the stuck state can
>>> >> >>>>> you please run bin/nifi.sh dump.  If you can then share the
>>> >> >>>>> nifi-bootstrap.log that would aid us in narrowing in on a possible
>>> >> >>>>> cause.
>>> >> >>>>>
>>> >> >>>>> Thanks
>>> >> >>>>> Joe
>>> >> >>>>>
>>> >> >>>>> On Wed, Nov 30, 2016 at 7:02 PM, Manish G <manish.gg...@gmail.com>
>>> >> >>>>> wrote:
>>> >> >>>>> >
>>> >> >>>>> > Hi,
>>> >> >>>>> >
>>> >> >>>>> > I have noticed that sometimes a flow file gets stuck on a
>>> >> >>>>> > processor for a very long time for no apparent reason, and then
>>> >> >>>>> > I cannot even stop the processor to look at the flow file in
>>> >> >>>>> > the queue. If I click stop, the processor goes into a state
>>> >> >>>>> > where I cannot start or stop it.
>>> >> >>>>> >
>>> >> >>>>> > On restarting NiFi, the file gets processed successfully and
>>> >> >>>>> > routed to the success queue. I checked the app log, but
>>> >> >>>>> > everything seems normal for the flow file. I don't see anything
>>> >> >>>>> > mysterious in provenance either (except that the queue time is
>>> >> >>>>> > in hours).
>>> >> >>>>> >
>>> >> >>>>> > Has anyone else faced a similar issue? What else should I check
>>> >> >>>>> > to identify the root cause?
>>> >> >>>>> >
>>> >> >>>>> > Thanks,
>>> >> >>>>> > Manish
>>> >> >>>>
>>> >> >>>>
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> --
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> With Warm Regards,
>>> >> >>>> Manish
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>> --
>>> >> >>>
>>> >> >>>
>>> >> >>> With Warm Regards,
>>> >> >>> Manish
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> >
>>> >> >
>>> >> > With Warm Regards,
>>> >> > Manish
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> >
>>> > With Warm Regards,
>>> > Manish
>>>
>>
>>
>>
>> --
>>
>>
>> *With Warm Regards,*
>> *Manish*
>>
>
>


-- 


*With Warm Regards,*
*Manish*
