Dhruba Borthakur wrote:
It is really nice to have wire-compatibility between clients and servers
running different versions of Hadoop. The reason we would like this is
so that the same client (Hive, etc.) can submit jobs to two different
clusters running different versions of Hadoop. But I am not stuck on the
name of the release that supports wire-compatibility; it can be either
1.0 or something later than that.
API compatibility  +1
Data compatibility +1
Job Q compatibility -1
Wire compatibility  +0


That's stability of the job submission network protocol you are looking for there.

* We need a job submission API that is designed to work over long-haul links and across versions.
* It does not have to be the same as anything used in-cluster.
* It does not actually need to run in the JobTracker. An independent service bridging the stable long-haul API to an unstable datacentre protocol does work, though authentication and user rights are a trouble spot; a rough sketch of such an interface follows this list.
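To make that concrete, here is a minimal sketch of what a version-tolerant long-haul submission contract might look like. The names are hypothetical, not any existing Hadoop API; the idea is that a gateway service implements this on the outside and speaks whatever the in-cluster JobTracker protocol of the day is on the inside:

    import java.io.IOException;

    // Hypothetical interface, not an existing Hadoop API: a stable, versioned
    // job submission contract that a front-end gateway could expose over the
    // long haul, while it talks the in-cluster protocol internally.
    public interface LongHaulJobSubmission {

        // Submit a packaged job (job JAR plus serialised configuration) and
        // get back an opaque handle. The package carries its own format
        // version so it can evolve independently of the cluster.
        String submitJob(byte[] jobPackage, String formatVersion) throws IOException;

        // Poll for status. The states are deliberately coarse so that new
        // in-cluster states never break an old client.
        JobState getState(String jobHandle) throws IOException;

        // Ask the gateway to kill the job on the caller's behalf.
        void killJob(String jobHandle) throws IOException;

        enum JobState { QUEUED, RUNNING, SUCCEEDED, FAILED, KILLED }
    }

Keeping the wire format versioned and the state model coarse is what would let an old client keep working against a newer cluster.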

Similarly, it would be good to have a stable long-haul HDFS protocol, such as FTP or WebDAV. Again, there is no need to build it into the NameNode.
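For illustration, a client can already read through an FTP front end via Hadoop's generic FileSystem API; the host name, credentials and path below are made up, and assume an FTP gateway exposing the cluster's files:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class LongHaulCat {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The client only speaks FTP to the gateway; it never touches the
            // NameNode RPC protocol, so its Hadoop version does not matter.
            FileSystem fs = FileSystem.get(
                    URI.create("ftp://reader:secret@gateway.example.com/"), conf);
            // Stream a file back to stdout and close the stream when done.
            IOUtils.copyBytes(fs.open(new Path("/data/part-00000")),
                    System.out, 4096, true);
        }
    }

The point is that the long-haul client never has to speak the NameNode's own RPC protocol, so it does not need to track the cluster's Hadoop version.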

see http://www.slideshare.net/steve_l/long-haul-hadoop
and commentary under http://wiki.apache.org/hadoop/BristolHadoopWorkshop
