[ https://issues.apache.org/jira/browse/MAPREDUCE-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13926084#comment-13926084 ]

Chris Nauroth commented on MAPREDUCE-4052:
------------------------------------------

bq. btw, Chris Nauroth, is the use case that upgraded-client with non-upgraded 
NM important ?

I brought this up because I've been in situations where someone wanted to pick 
up a client-side bug fix ahead of the cluster's upgrade schedule.  It looks to 
me like this is a gray area in our policies, though.

http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/Compatibility.html#Wire_compatibility

From the content on that page, we've made a specific commitment that old 
clients continue to work with new servers.  As Jian said, that part is fine 
with this patch.  What is less clear is whether or not we've made a commitment 
for new clients to work with old servers.  Of course, it's best to strive for 
it, and forward compatibility is one of our motivations in the protobuf 
messages, but I can't tell from that policy statement whether we've made a 
commitment to it.  This is probably worth some wider discussion before 
changing the patch.

If we do need to achieve that kind of compatibility, then it's going to be a 
more challenging patch.  I think we'd end up needing to add an optional 
version number, or at least a flag, on the {{Container}} returned in the 
{{AllocateResponse}}.  This would tell the client whether the container can 
accept the new syntax, and the client could then fall back to the old code 
path for compatibility with old servers that don't set this version number or 
flag.  That would work for containers submitted by an AM.  I can't think of a 
similar solution for the initial AM container though, because the RPC sequence 
there doesn't offer as clear a way to learn, before submission, the 
capabilities of the container that's going to run the AM.
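
A minimal sketch of the client-side fallback I have in mind.  Everything here 
is hypothetical: {{supportsCrossPlatform}} would be the new optional field on 
{{Container}}, and the "<CPS>" token just stands in for whatever 
platform-neutral syntax the patch settles on:

{code:java}
// Hypothetical fallback: supportsCrossPlatform would be a new optional
// field on Container.  Old servers never set it, so protobuf defaults it
// to false and the client transparently takes the old code path.
static String classpathSeparator(boolean supportsCrossPlatform) {
  if (supportsCrossPlatform) {
    // New server: emit a platform-neutral token for the NM to expand.
    return "<CPS>";  // illustrative token, not a committed constant
  }
  // Old server: keep today's behavior and build the classpath with the
  // submitting client's local separator (";" on Windows, ":" on Linux).
  return System.getProperty("path.separator");
}
{code}

An optional field keeps the wire format backward compatible in both 
directions, since neither side has to understand it to parse the message.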

Like I said, please do discuss this more widely before pursuing it.  I'd hate 
to send you down an unnecessary rathole if the current patch is fine.  :-)  
Thanks, Jian.

> Windows eclipse cannot submit job from Windows client to Linux/Unix Hadoop 
> cluster.
> -----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4052
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4052
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: job submission
>    Affects Versions: 0.23.1, 2.2.0
>         Environment: client on Windows, the cluster on SUSE
>            Reporter: xieguiming
>            Assignee: Jian He
>         Attachments: MAPREDUCE-4052-0.patch, MAPREDUCE-4052.1.patch, 
> MAPREDUCE-4052.2.patch, MAPREDUCE-4052.patch
>
>
> When I use Eclipse on Windows to submit the job, the ApplicationMaster 
> throws the exception:
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/mapreduce/v2/app/MRAppMaster
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> Could not find the main class: 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.  Program will exit.
> The reason is that the Apps class's addToEnvironment function uses
> private static final String SYSTEM_PATH_SEPARATOR =
>       System.getProperty("path.separator");
> so the MRAppMaster classpath ends up using the ";" separator.  I suggest 
> that the NodeManager do the replacement; see the sketch below.
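>
> A minimal sketch of the failure mode (this is not the actual 
> Apps.addToEnvironment source; the paths are placeholders):
> {code:java}
> // On a Windows client, path.separator is ";", so the classpath baked
> // into the container launch context is joined with ";" even though the
> // cluster runs on Linux, where the NodeManager needs ":".  The AM
> // container then cannot find MRAppMaster on its classpath.
> String sep = System.getProperty("path.separator"); // ";" on Windows
> String classpath = "$HADOOP_CONF_DIR" + sep
>     + "$HADOOP_COMMON_HOME/share/hadoop/common/*";
> System.out.println(classpath); // joined with ";" for a Linux cluster
> {code}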



--
This message was sent by Atlassian JIRA
(v6.2#6252)
