I'd like to discuss clarification of part of our compatibility policy.
Here is a link to the compatibility documentation for release 2.3.0:
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/Compatibility.html#Wire_compatibility
For convenience, here are the specific lines
+1 on supporting new clients with old servers of the same major version,
and updating the policy to capture that clearly.
On Wed, Mar 19, 2014 at 1:59 PM, Chris Nauroth cnaur...@hortonworks.com wrote:
I'd like to discuss clarification of part of our compatibility policy.
It makes sense only for YARN today where we separated out the clients. HDFS is
still a monolithic jar so this compatibility issue is kind of invalid there.
+vinod
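Vinod's point about the separated clients is visible in the Maven artifacts of the Hadoop 2.x line: YARN publishes a dedicated client module, while HDFS client and server classes ship together in one jar. A sketch of the corresponding dependencies (versions are placeholders, not from the thread):

```xml
<!-- YARN ships a dedicated client artifact, so an application can
     depend on client classes alone. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-client</artifactId>
  <version>2.3.0</version>
</dependency>

<!-- HDFS at this point had no separate client jar; applications pulled
     in the full hadoop-hdfs module, client and server classes together. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.3.0</version>
</dependency>
```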
On Mar 19, 2014, at 1:59 PM, Chris Nauroth cnaur...@hortonworks.com wrote:
I'd like to discuss clarification of part of our compatibility policy.
...@hortonworks.com]
Sent: 20 March 2014 05:36
To: common-dev@hadoop.apache.org
Cc: mapreduce-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [DISCUSS] Clarification on Compatibility Policy: Upgraded Client + Old Server

I think this kind of compatibility issue still could surface for HDFS,
particularly for custom applications (i.e. something not executed via
hadoop jar on a cluster node, where the client classes ought to be
injected into the classpath automatically). Running DistCP between 2
clusters of
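The scenario described above — an application that bundles its own HDFS client classes rather than inheriting them from a cluster node's classpath via hadoop jar — might look like this minimal sketch. The class name and NameNode URI are hypothetical, and this is illustrative code, not code from the thread:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// A standalone application that ships its own hadoop-hdfs jars.
// If the bundled client is newer than the cluster's NameNode, every
// RPC below exercises exactly the upgraded-client/old-server path
// under discussion.
public class StandaloneHdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI; nothing here comes from a cluster
        // node's injected classpath or configuration.
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode.example.com:8020"), conf);
        try {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        } finally {
            fs.close();
        }
    }
}
```

Because nothing forces such an application's bundled client jars to match the cluster's version, the wire protocol is the only compatibility boundary it can rely on.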