[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3472:
------------------------------
    Description: 
PutHDFS is not able to relogin if the ticket expires.

NiFi, running locally as standalone, was sending files to HDFS.  After 
suspending the system for the weekend, when the flow attempted to continue to 
process flowfiles, the following exception occurred:
{code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
attempts. Trying to fail over immediately.
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "[host:port]"; destination host is: 
[host:port];
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
~[hadoop-common-2.7.3.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
~[hadoop-common-2.7.3.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
~[hadoop-common-2.7.3.jar:na]
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
 ~[hadoop-common-2.7.3.jar:na]
        at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
 ~[hadoop-hdfs-2.7.3.jar:na]
        at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
~[na:na]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_102]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
 ~[hadoop-common-2.7.3.jar:na]
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 ~[hadoop-common-2.7.3.jar:na]
        at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
[hadoop-hdfs-2.7.3.jar:na]
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
 [hadoop-hdfs-2.7.3.jar:na]
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
 [hadoop-hdfs-2.7.3.jar:na]
        at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 [hadoop-common-2.7.3.jar:na]
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
 [hadoop-hdfs-2.7.3.jar:na]
        at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
[nifi-hdfs-processors-1.1.1.jar:1.1.1]
        at java.security.AccessController.doPrivileged(Native Method) 
[na:1.8.0_102]
        at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
 [hadoop-common-2.7.3.jar:na]
        at 
org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
[nifi-hdfs-processors-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 [nifi-api-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_102]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_102]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_102]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_102]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]{code}

After stopping and restarting the PutHDFS processor, flowfiles were 
successfully transferred to HDFS.
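Since restarting the processor (which forces a fresh login) works around the problem, a fix would typically re-check the ticket before each HDFS operation and relogin once it is near expiry. Below is a minimal, self-contained sketch of that pattern; the class name, threshold, and stub are hypothetical, and in the real processor the marked line would instead call a Hadoop relogin method such as UserGroupInformation.checkTGTAndReloginFromKeytab():

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of the periodic-relogin pattern. The real fix would
// delegate to Hadoop's UserGroupInformation instead of tracking time itself.
public class KerberosReloginSketch {

    private Instant lastLogin = Instant.now();
    private final Duration reloginPeriod;

    public KerberosReloginSketch(Duration reloginPeriod) {
        this.reloginPeriod = reloginPeriod;
    }

    /** Called before each HDFS operation; returns true if a relogin was performed. */
    public boolean ensureLoggedIn(Instant now) {
        if (Duration.between(lastLogin, now).compareTo(reloginPeriod) >= 0) {
            // Real fix would be: ugi.checkTGTAndReloginFromKeytab();
            lastLogin = now;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        KerberosReloginSketch sketch = new KerberosReloginSketch(Duration.ofHours(4));
        Instant start = Instant.now();
        // Within the relogin period: no relogin needed.
        System.out.println(sketch.ensureLoggedIn(start.plus(Duration.ofHours(1)))); // false
        // Past the relogin period (e.g. after a weekend suspend): relogin triggered.
        System.out.println(sketch.ensureLoggedIn(start.plus(Duration.ofHours(5)))); // true
    }
}
```

Calling this check on every onTrigger invocation would recover the ticket transparently instead of failing until the processor is restarted.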

  was:
PutHDFS is not able to relogin if the ticket expires.

NiFi, running locally as standalone, was sending files to HDFS.  After 
suspending the system for the weekend, when the flow attempted to continue to 
process flowfiles, the following exception occurred:
{code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
class ClientNamenodeProtocolTranslatorPB over 
hdp-cluster-2-1.novalocal/172.22.105.152:8020 after 3 fail over attempts. 
Trying to fail over immediately.
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "hw13184.local/10.0.0.153"; destination 
host is: "hdp-cluster-2-1.novalocal":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
~[hadoop-common-2.7.3.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
~[hadoop-common-2.7.3.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
~[hadoop-common-2.7.3.jar:na]
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
 ~[hadoop-common-2.7.3.jar:na]
        at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
 ~[hadoop-hdfs-2.7.3.jar:na]
        at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
~[na:na]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_102]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
 ~[hadoop-common-2.7.3.jar:na]
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 ~[hadoop-common-2.7.3.jar:na]
        at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
[hadoop-hdfs-2.7.3.jar:na]
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
 [hadoop-hdfs-2.7.3.jar:na]
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
 [hadoop-hdfs-2.7.3.jar:na]
        at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 [hadoop-common-2.7.3.jar:na]
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
 [hadoop-hdfs-2.7.3.jar:na]
        at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
[nifi-hdfs-processors-1.1.1.jar:1.1.1]
        at java.security.AccessController.doPrivileged(Native Method) 
[na:1.8.0_102]
        at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
 [hadoop-common-2.7.3.jar:na]
        at 
org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
[nifi-hdfs-processors-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 [nifi-api-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.1.1.jar:1.1.1]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_102]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_102]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_102]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_102]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]{code}
After stopping and restarting the PutHDFS processor, flowfiles were 
successfully transferred to HDFS.


> PutHDFS Kerberos relogin not working (tgt) after ticket expires
> ---------------------------------------------------------------
>
>                 Key: NIFI-3472
>                 URL: https://issues.apache.org/jira/browse/NIFI-3472
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.1.1
>            Reporter: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>       at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>       at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>       at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>       at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>       at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>       at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>       at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>       at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>       at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>       at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>       at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>       at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>       at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>       at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.1.jar:1.1.1]
>       at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>       at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>       at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>       at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_102]
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_102]
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_102]
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_102]
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_102]
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_102]
>       at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]{code}
> After stopping and restarting the PutHDFS processor, flowfiles were 
> successfully transferred to HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
