It's a custom protocol written directly over TCP/IP; see DataXceiver.java. My sense is that it's a custom protocol (rather than Hadoop's IPC) because the IPC mechanism isn't optimized for large messages.
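To give a concrete sense of what "custom protocol over plain TCP/IP" means here, below is a minimal Java sketch of the shape of that exchange: a client opens a raw socket to the DataNode's data-transfer port and streams block bytes back. The host name, port, and framing are placeholders, not the real wire format; the actual op codes, block IDs, and checksum framing live in DataXceiver.java and the surrounding data-transfer code, which this does not reproduce.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;

    // Illustrative sketch only: the real framing (protocol version, op code,
    // block fields, checksums) is defined in DataXceiver.java and is NOT
    // reproduced here. Host and port below are placeholders.
    public class RawBlockStreamSketch {
        public static void main(String[] args) throws Exception {
            String datanodeHost = "datanode.example.com"; // hypothetical host
            int dataXferPort = 50010; // default data-transfer port in older releases

            try (Socket s = new Socket(datanodeHost, dataXferPort);
                 DataOutputStream out = new DataOutputStream(s.getOutputStream());
                 DataInputStream in = new DataInputStream(s.getInputStream())) {

                // A real client would write the protocol version, a "read block"
                // op code, and the block identifiers on 'out', then read the
                // block bytes (with checksums interleaved) back on 'in'.
                // This sketch only shows the plain-TCP shape of the exchange.
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    // consume streamed block data
                }
            }
        }
    }

The point of streaming over a plain socket like this, instead of going through Hadoop's RPC layer, is that block transfers are large sequential reads and writes, which don't fit a request/response message-passing model well.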
-- Philip

On Thu, May 7, 2009 at 9:11 AM, Foss User <foss...@gmail.com> wrote:
> I understand that the blocks are transferred between various nodes
> using the HDFS protocol. I believe even the job classes are distributed
> as files using the same HDFS protocol.
>
> Is this protocol written over TCP/IP from scratch, or is it a
> protocol that works on top of some other protocol like HTTP, etc.?