Thank you very much, Andrew.
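
For anyone who lands on this thread later: the Writable contract Andrew
describes below boils down to a write/readFields pair. Here is a minimal
sketch of what a DatanodeID-style Writable looks like; the field names
(hostName, port, storageID) are simplified placeholders rather than the
exact Hadoop 1.2.1 fields, so check the linked source for the real wire
layout.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

// Minimal sketch of a DatanodeID-style Writable. The fields here
// (hostName, port, storageID) are simplified placeholders, not the
// actual Hadoop 1.2.1 layout -- see DatanodeID.write()/readFields()
// in the links below for the real format.
public class DatanodeInfoSketch implements Writable {
  private String hostName;   // where the DataNode is "located"
  private int port;          // data transfer port
  private String storageID;  // identifier the NameNode tracks

  @Override
  public void write(DataOutput out) throws IOException {
    // Fields are written in a fixed order; the reader must match it.
    Text.writeString(out, hostName);
    out.writeInt(port);
    Text.writeString(out, storageID);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // Deserialization mirrors write() exactly, field by field.
    hostName = Text.readString(in);
    port = in.readInt();
    storageID = Text.readString(in);
  }
}

In other words, the registration "packet" I was seeing in Wireshark is just
those fields written back-to-back in a fixed order by the RPC layer, with no
self-describing schema the way protobuf has in Hadoop 2.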

On Sat, Mar 22, 2014 at 7:02 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:

> Hi Dhaivat,
>
> Take a look at DatanodeRegistration and its parent class DatanodeID. DR is
> a Writable, meaning it uses Hadoop's custom Writable serialization format.
> Hadoop 2 uses protobuf for the RPC serialization rather than Writables
> (with the exception of DN data transfer).
>
>
> https://github.com/apache/hadoop-common/blob/branch-1.2/src/hdfs/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
>
> https://github.com/apache/hadoop-common/blob/branch-1.2/src/hdfs/org/apache/hadoop/hdfs/protocol/DatanodeID.java
>
> Best,
> Andrew
>
>
> On Sat, Mar 22, 2014 at 1:16 PM, Dhaivat Pandya <dhaivatpan...@gmail.com>
> wrote:
>
> > Hi everyone,
> >
> > I'm currently working on an application that requires some important
> > details about the DataNode registration (w/ NameNode) procedure.
> >
> > Specifically, I have understood (after Wireshark-ing and looking through
> > the Hadoop code) that the DataNode is registered with the NameNode using
> > a single packet which tells the NameNode where the DataNode is "located"
> > (i.e., host and port).
> >
> > However, I'm not clear on which scheme is used to serialize the DataNode
> > information in this packet. I am running Hadoop 1.2.1. Any information
> > will be appreciated.
> >
> > Thank you,
> >
> > Dhaivat Pandya
> >
>
