I think you are misunderstanding the architecture.
The "master node" you are talking about is actually the NameNode, and the
client must talk to the NameNode first. The NameNode checks whether the file
you are trying to write already exists. If it doesn't, the NameNode creates a
record for it in its namespace and the client gets back a DFSOutputStream.
The data is written into this output stream, and the DFSOutputStream itself
decides which DataNodes the data will be written to.

So the only entry point is the NameNode; you cannot connect to another node
(a DataNode) to create the file.
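
For reference, here is a minimal sketch of what that write path looks like
from the client side. The address "namenode-host:54310" and the file path are
just placeholders, not from your setup; substitute your master node's address.
It only uses the standard FileSystem API:

    // Minimal sketch: "namenode-host" is a placeholder for the NameNode/master address.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the client at the NameNode, not at a DataNode/slave.
            conf.set("fs.default.name", "hdfs://namenode-host:54310/");

            FileSystem fs = FileSystem.get(conf);

            // create() asks the NameNode to record the new file and returns an
            // output stream (an FSDataOutputStream wrapping a DFSOutputStream).
            FSDataOutputStream out = fs.create(new Path("/user/test/hello.txt"));

            // The stream decides which DataNodes receive the blocks;
            // the client never picks a DataNode itself.
            out.writeUTF("hello hdfs");
            out.close();
            fs.close();
        }
    }

The only address in the client configuration should be the NameNode's; the
DataNodes are reached automatically once the stream starts writing blocks.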

On Wed, Jan 26, 2011 at 7:24 PM, Alessandro Binhara <[email protected]> wrote:

> Hello..
>
> I'm confused about the architecture of HDFS.
> I have a Java client that writes files to the master node. It works perfectly!
> But then I looked at the HDFS architecture diagram,
> http://hadoop.apache.org/common/docs/r0.20.0/images/hdfsarchitecture.gif
> which shows clients accessing the DataNodes directly.
>
> I tried to make my Java client write directly to a slave node by changing the IP in the client
> configuration:
> Configuration conf = new Configuration();
> conf.set("fs.default.name", "hdfs://192.168.254.25:54310/"); // slave ip
>
> I got this error:
> 26/01/2011 09:19:52 org.apache.hadoop.ipc.Client$Connection
> handleConnectionFailure
> It doesn't work.
>
> The architecture picture shows that every DataNode is connectable. Is that
> correct?
>
> How does it work?
> Can I only connect to the master node to write to HDFS?
>
> thanks
>



-- 
-----李平
