[ https://issues.apache.org/jira/browse/IGNITE-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137193#comment-15137193 ]

Ivan Veselovsky edited comment on IGNITE-2195 at 2/8/16 5:16 PM:
-----------------------------------------------------------------

Pull Request: https://github.com/apache/ignite/pull/464

I suggest a fix without a dedicated re-login thread: each file system operation checks whether re-login is necessary and, if so, performs it.
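
Conceptually, the per-operation check could look like the sketch below. This is a minimal illustration only, assuming the factory logs in from the keytab via Hadoop's UserGroupInformation and remembers when it last re-logged in; the class and member names (ReloginHelper, reloginIfNeeded, lastReloginTime) are hypothetical, not taken from the patch.
{code}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

/** Sketch only: names are hypothetical, not the actual patch. */
public class ReloginHelper {
    /** Re-login interval, in milliseconds (taken from configuration). */
    private final long reloginInterval;

    /** Timestamp of the last re-login attempt. */
    private volatile long lastReloginTime;

    public ReloginHelper(long reloginInterval) {
        this.reloginInterval = reloginInterval;
    }

    /** Invoked at the start of each secondary file system operation. */
    public void reloginIfNeeded() throws IOException {
        long now = System.currentTimeMillis();

        if (now - lastReloginTime > reloginInterval) {
            synchronized (this) {
                if (now - lastReloginTime > reloginInterval) {
                    // Renews the Kerberos TGT from the keytab if it is close to expiring.
                    UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();

                    lastReloginTime = now;
                }
            }
        }
    }
}
{code}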

Configuration example:
{code}
<bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
    .....
    <property name="secondaryFileSystem">
        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
            <property name="fileSystemFactory">
                <bean class="org.apache.ignite.hadoop.fs.SecureHadoopFileSystemFactory">
                    <property name="configPaths" value="/etc/hadoop/conf/core-site.xml"/>
                    <property name="keyTab" value="/etc/krb5.keytab"/>
                    <property name="keyTabPrincipal" value="foo"/>
                    <property name="reloginInterval" value="#{30 * 60 * 1000}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
{code}
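
For reference: configPaths points at the Hadoop configuration files to load, and keyTab / keyTabPrincipal identify the Kerberos keytab credentials. The reloginInterval value appears to be in milliseconds, as the Spring expression suggests: #{30 * 60 * 1000} evaluates to 1800000, i.e. a 30-minute interval.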



> Accessing from IGFS to HDFS that is in kerberised environment
> -------------------------------------------------------------
>
>                 Key: IGNITE-2195
>                 URL: https://issues.apache.org/jira/browse/IGNITE-2195
>             Project: Ignite
>          Issue Type: Bug
>          Components: hadoop, IGFS
>    Affects Versions: ignite-1.4
>            Reporter: Denis Magda
>            Assignee: Vladimir Ozerov
>            Priority: Critical
>              Labels: important
>             Fix For: 1.6
>
>         Attachments: kerbersized_hadoop_fs_factory.zip
>
>
> The current IGFS implementation does not take into account some Kerberos-related user settings, which leads to the exception below when attempting to work with a Kerberized cluster:
> {noformat}
> Connecting to HDFS with the following settings [uri=null, cfg=all-site.xml, userName=null]
> log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2096)
> at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:944)
> at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:927)
> at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:872)
> at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:868)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:868)
> at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1694)
> at org.apache.hadoop.fs.FileSystem$6.<init>(FileSystem.java:1786)
> at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:1783)
> at com.ig.HadoopFsIssue.main(HadoopFsIssue.java:35)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client.call(Client.java:1427)
> at org.apache.hadoop.ipc.Client.call(Client.java:1358)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy7.getListing(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy8.getListing(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094)
> {noformat}
> The issue was fixed in the following way; the fix needs to be revisited to check whether it can lead to any other consequences.
> {noformat}
> /**
>  * @param userName the user name to create the file system for.
>  * @return {@link org.apache.hadoop.fs.FileSystem} instance for this secondary file system.
>  * @throws IOException if the file system could not be created.
>  */
> public FileSystem createFileSystem(String userName) throws IOException {
>     userName = IgfsUtils.fixUserName(userName);
>
>     UserGroupInformation.setConfiguration(cfg);
>
>     UserGroupInformation ugi = UserGroupInformation.createProxyUser(userName, UserGroupInformation.getCurrentUser());
>
>     try {
>         return ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
>             @Override public FileSystem run() throws Exception {
>                 return FileSystem.get(uri, cfg);
>             }
>         });
>     }
>     catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>
>         throw new IOException("Failed to create file system due to interrupt.", e);
>     }
> }
> {noformat}
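> For completeness, a minimal sketch of how this snippet might be driven in a Kerberized setup: log in from the keytab once, then create per-user file systems via the proxy-user path above. The principal and keytab values are placeholders, and loginUserFromKeytab is the standard Hadoop API, not necessarily what the final patch uses.
> {noformat}
> // Hypothetical usage sketch (placeholder principal/keytab values).
> UserGroupInformation.setConfiguration(cfg);
>
> // Log in the process-wide user from the keytab (standard Hadoop API).
> UserGroupInformation.loginUserFromKeytab("foo@EXAMPLE.COM", "/etc/krb5.keytab");
>
> // Now createFileSystem(...) can build proxy users on top of the keytab login.
> FileSystem fs = createFileSystem("someUser");
> {noformat}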



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
