[
https://issues.apache.org/jira/browse/HADOOP-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15517557#comment-15517557
]
Anu Engineer commented on HADOOP-13548:
---------------------------------------
[~jojochuang] Thanks for leaving pointers to the older JIRAs. They were very
helpful in understanding how we got here.
Frankly, reading this patch gave me a headache :) Thank you for the effort you
have put into it.
Some high-level comments first, then the specifics.
A couple of suggestions:
* If it is not too complicated, can you please add a test case that reproduces
what you think Sqoop is doing? Being able to debug the issue with and without
this patch would be helpful. If it is too much work, leave me a set of
instructions on how to reproduce the issue and I will try to do that.
* Would it make sense to write a wrapper function that invokes
excludeIncompatibleCredentialProviders and performs the FileSystem.class
isolation, so that your code changes stay isolated? Alternatively, could
excludeIncompatibleCredentialProviders be factored into reusable functions,
for example, one that returns the class to be checked against
CredentialProvider or FileSystem?
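To make the factoring idea concrete, here is a minimal stand-alone sketch. All names are illustrative (the real code would work against Hadoop's Configuration and provider URIs, not plain strings), and java.lang classes stand in for FileSystem and CredentialProvider:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: one reusable filter parameterized by the class to
// exclude, so the FileSystem.class isolation and the CredentialProvider
// check can share the same code path.
public class ProviderFilter {

    // Keep only providers whose resolved class is NOT assignable to `excluded`.
    static List<String> excludeIncompatible(List<String> providers,
                                            Function<String, Class<?>> resolve,
                                            Class<?> excluded) {
        List<String> kept = new ArrayList<>();
        for (String p : providers) {
            Class<?> c = resolve.apply(p);
            if (c == null || !excluded.isAssignableFrom(c)) {
                kept.add(p);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Toy resolver: Number.class stands in for a FileSystem-based
        // provider class, String.class for a non-FileSystem one.
        Map<String, Class<?>> schemes = Map.of(
            "hdfs://nn/", Number.class,
            "user:///", String.class);
        Function<String, Class<?>> resolve = schemes::get;

        List<String> result = excludeIncompatible(
            List.of("hdfs://nn/", "user:///"), resolve, Number.class);
        System.out.println(result);  // [user:///]
    }
}
```

The same helper would then be called once with FileSystem.class and once with CredentialProvider.class, keeping the two exclusion passes out of each other's way.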
* It looks like we make an implied assumption that no class can inherit from
both CredentialProvider and FileSystem, which is true in today's code base. Do
you want to make that explicit via a Precondition check? It does not matter
much for now, as the code will simply treat such a class as a
CredentialProvider.
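Something along these lines is what I had in mind, as a stand-alone sketch. Comparable and Runnable are stand-ins for CredentialProvider and FileSystem, and the method name is made up; the real code could equally use Guava's Preconditions:

```java
// Hypothetical sketch of making the "no class is both" assumption explicit,
// so it fails loudly if the code base ever changes.
public class BothCheck {

    static void checkNotBoth(Class<?> clazz) {
        // Comparable stands in for CredentialProvider, Runnable for FileSystem.
        if (Comparable.class.isAssignableFrom(clazz)
                && Runnable.class.isAssignableFrom(clazz)) {
            throw new IllegalStateException(clazz.getName()
                + " is both a credential provider and a file system");
        }
    }

    public static void main(String[] args) {
        checkNotBoth(String.class);   // Comparable only: passes
        checkNotBoth(Thread.class);   // Runnable only: passes
        System.out.println("ok");
    }
}
```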
Specific Comments:
* Both LocalJavaKeyStoreProvider and JavaKeyStoreProvider have a function
called {{getSchemeName}}, inherited from AbstractJavaKeyStoreProvider, and now
we also have {{getScheme}} from CredentialProvider. That looks redundant; you
might want to rename the base function to {{getSchemeName}} and use that for
all the classes.
* {{SERVICE_CREDENTIAL_PROVIDER.put(cpf.getScheme(), cpf.getCredentialProviderClass());}}
It looks like this could overwrite a value in the map. Should we not throw if
we find a duplicate scheme?
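For instance, something like the following stand-alone sketch (the field name matches the patch, but the register method and its behavior are my suggestion, not existing code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: fail fast on a duplicate scheme instead of silently
// overwriting an earlier registration.
public class SchemeRegistry {
    private static final Map<String, Class<?>> SERVICE_CREDENTIAL_PROVIDER =
        new HashMap<>();

    static void register(String scheme, Class<?> providerClass) {
        // putIfAbsent returns the previous mapping, if any, without replacing it.
        Class<?> previous =
            SERVICE_CREDENTIAL_PROVIDER.putIfAbsent(scheme, providerClass);
        if (previous != null) {
            throw new IllegalStateException("Duplicate credential provider scheme '"
                + scheme + "', already registered by " + previous.getName());
        }
    }

    public static void main(String[] args) {
        register("jceks", Object.class);
        try {
            register("jceks", String.class);   // second registration is rejected
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```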
* Since we added the check for CredentialProvider.class, are these comments and
the subsequent handling still valid?
{code}
} catch (IOException ioe) {
  // not all providers are filesystem based
  // for instance user:/// will not be able to
  // have a filesystem class associated with it.
  if (newProviderPath.length() > 0) {
    newProviderPath.append(",");
  }
  newProviderPath.append(provider);
}
{code}
* In the following line --
{{if (candidateClass.isAssignableFrom(clazz))}} -- I understand how we need to
prevent recursive file path access, but do those semantics hold true for
credential providers? Can I not inherit a class from jkcsProvider and still be
compatible with jkcsProvider? You don't need to solve this; I just want to make
sure it was intentional and not accidental.
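To spell out the semantics I am asking about, a quick stand-alone illustration (Base and Derived are made-up stand-ins, not classes from the patch):

```java
// Class.isAssignableFrom: a subclass IS assignable to its parent, so a
// provider that inherits from an excluded candidate class would be
// excluded as well, even if it is otherwise compatible.
public class AssignableDemo {
    static class Base {}
    static class Derived extends Base {}

    public static void main(String[] args) {
        System.out.println(Base.class.isAssignableFrom(Derived.class)); // true
        System.out.println(Derived.class.isAssignableFrom(Base.class)); // false
    }
}
```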
> Remove recursive dependencies of credential providers in LdapGroupsMapping
> --------------------------------------------------------------------------
>
> Key: HADOOP-13548
> URL: https://issues.apache.org/jira/browse/HADOOP-13548
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 2.6.0
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13548.001.patch, HADOOP-13548.002.patch
>
>
> HADOOP-11934 discovered an infinite loop of dependencies in the use of
> credential provider in LdapGroupsMapping. It added a new localjceks:// URI to
> workaround the problem. The assumption is that the groups mapping is used
> only in NameNode and that using a local credential file is not a problem.
> However, there are cases where Hadoop clients, such as Sqoop, may use hdfs://
> based credential provider and use LdapGroupsMapping at the same time. We
> should use HADOOP-12846 to exclude hdfs:// URI credential providers.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)