[ https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828247#comment-13828247 ]

Bikas Saha commented on HADOOP-9478:
------------------------------------

We noticed that the changes in this jira caused errors in client-side 
deployments of Tez.
Tez is designed to have a client-side install. So we package Tez and its 
dependencies, upload them onto HDFS, and those jars are used to run Tez jobs. 
Tez brings in mapreduce-client-core.jar as a dependency for InputFormats etc.
When we build Tez against trunk, the mapreduce-client-core.jar that we bring 
in uses the DeprecationDelta class added by this jira. However, the 
Configuration in the cluster comes from the cluster-deployed hadoop-common 
jars, which do not have DeprecationDelta. So the execution fails.
This basically means that if someone compiles MR from trunk and runs it 
against a cluster deployed with 2.2, MR will not work.
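For reference, here is a minimal sketch of the kind of client-side call that 
breaks (the class name and keys are illustrative, not Tez's actual code; it 
assumes trunk's Configuration API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configuration.DeprecationDelta;

public class TrunkOnlyDeprecation {
    // Compiles against trunk hadoop-common, which defines DeprecationDelta
    // and Configuration.addDeprecations(). The hadoop-common deployed on a
    // 2.2 cluster has neither, so loading this class at runtime fails with
    // NoClassDefFoundError / NoSuchMethodError.
    public static void registerDeprecations() {
        Configuration.addDeprecations(new DeprecationDelta[] {
            new DeprecationDelta("mapred.task.id", "mapreduce.task.attempt.id")
        });
    }
}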

> Fix race conditions during the initialization of Configuration related to 
> deprecatedKeyMap
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.0.0-alpha
>         Environment: OS:
> CentOS release 6.3 (Final)
> JDK:
> java version "1.6.0_27"
> Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
> Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
> Hadoop:
> hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
> Security:
> Kerberos
>            Reporter: Dongyong Wang
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.2.1
>
>         Attachments: HADOOP-9478.001.patch, HADOOP-9478.002.patch, 
> HADOOP-9478.003.patch, HADOOP-9478.004.patch, HADOOP-9478.005.patch, 
> hadoop-9478-1.patch, hadoop-9478-2.patch
>
>
> When we launch a client application that uses Kerberos security, the 
> FileSystem can't be created because of the exception 
> 'java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.security.SecurityUtil'.
> Checking the exception stack trace, it appears to be caused by an unsafe 
> (unsynchronized) get operation on the deprecatedKeyMap used by 
> org.apache.hadoop.conf.Configuration.
> So I wrote a simple test case:
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
> public class HTest {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.addResource("core-site.xml");
>         conf.addResource("hdfs-site.xml");
>         // FileSystem.get() kicks off the Kerberos TGT renewer thread,
>         // whose SecurityUtil static initializer reads a Configuration
>         // concurrently with this thread's lazy resource loading.
>         FileSystem fileSystem = FileSystem.get(conf);
>         System.out.println(fileSystem);
>         System.exit(0);
>     }
> }
> Then I launched this test case many times, and the following exception was 
> thrown:
> Exception in thread "TGT Renewer for XXX" 
> java.lang.ExceptionInInitializerError
>      at 
> org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
>      at 
> org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
>      at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
>      at java.util.HashMap.getEntry(HashMap.java:345)
>      at java.util.HashMap.containsKey(HashMap.java:335)
>      at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
>      at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
>      at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
>      at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
>      at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
>      at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
>      at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
>      ... 4 more
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>      at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
>      at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
>      at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
>      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
>      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
>      at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
>      at HTest.main(HTest.java:11)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>      at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>      at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>      at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:442)
>      ... 11 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.security.SecurityUtil
>      at 
> org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:231)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:159)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:148)
>      at 
> org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:452)
>      at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:434)
>      at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:496)
>      at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
>      ... 16 more
> If a HashMap is used in a multi-threaded environment, not only the put 
> operations but also the get operations (e.g. containsKey) must be 
> synchronized.
> A simple workaround is to trigger the initialization of SecurityUtil before 
> creating the FileSystem, but I think the gets on deprecatedKeyMap should be 
> synchronized as well, roughly as sketched below.
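> A minimal sketch of that guarding (the names are illustrative, not the 
> actual Configuration internals, and the committed fix may differ):
> import java.util.HashMap;
> import java.util.Map;
> public class DeprecatedKeyRegistry {
>     private static final Map<String, String> deprecatedKeyMap =
>         new HashMap<String, String>();
>     // Readers and writers take the same lock: an unsynchronized
>     // containsKey() can observe the HashMap mid-resize and fail with
>     // ArrayIndexOutOfBoundsException, exactly as in the stack trace above.
>     public static synchronized void addDeprecation(String oldKey, String newKey) {
>         deprecatedKeyMap.put(oldKey, newKey);
>     }
>     public static synchronized boolean isDeprecated(String key) {
>         return deprecatedKeyMap.containsKey(key);
>     }
> }
> (A ConcurrentHashMap, or an immutable map swapped in via an AtomicReference, 
> would avoid taking a lock on the read path.)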
> Thanks. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)
