Hi,

The only way to do this is to set the replication factor to 1.
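If you only need this for particular files, you don't have to change the cluster-wide default: FsShell accepts the generic -D option, so you can set the replication factor per upload. A minimal sketch (the local and HDFS paths here are just placeholders), run from a shell on the DataNode that should hold the data:

  # Upload with a single replica; when the client runs on a DataNode,
  # HDFS normally places the first (here: only) replica on that node.
  hadoop fs -D dfs.replication=1 -put /local/data/file.txt /user/shaik/file.txt

  # Check where the block(s) actually landed:
  hadoop fsck /user/shaik/file.txt -files -blocks -locations

For a file already in HDFS, 'hadoop fs -setrep 1 /user/shaik/file.txt' reduces its replication, but that only drops extra replicas; it does not move the remaining one to a node of your choice.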
For the cluster-wide default, set this in hdfs-site.xml:

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

Then upload the file to HDFS from the DataNode (DN) where you want it to be stored. Even then there is no guarantee that it will end up there. But why would you want to do this? Worrying about block placement goes completely against the Hadoop paradigm.

Jasper

2012/8/14 prabhu k <prabhu.h...@gmail.com>

> Hi Bejoy,
>
> I have a related query: with 1 master and 2 slave nodes, is it possible
> to send data to only one DataNode (one slave node)?
>
> Thanks,
> Prabhu.
>
> On Tue, Aug 14, 2012 at 2:17 PM, Bejoy Ks <bejoy...@yahoo.com> wrote:
>
>> Hi Shaik
>>
>> I didn't get your query correctly, but I assume that by Master Node you
>> mean the NameNode (NN) and JobTracker (JT), and by Slave Nodes the
>> DataNodes (DN) and TaskTrackers (TT). In HDFS the NN holds only the
>> metadata; the actual blocks are stored on the DNs. So your question
>> seems a little off track. Please share more details so that we can
>> understand your requirement better and help you.
>>
>> Regards,
>> Bejoy KS
>>
>> ------------------------------
>> *From:* shaik ahamed <shaik5...@gmail.com>
>> *To:* user@hive.apache.org
>> *Sent:* Tuesday, August 14, 2012 1:40 PM
>> *Subject:* Loading data only into one node
>>
>> Hi Users,
>>
>> I have an HDFS setup with 1 master node and 2 slave nodes. Is it
>> possible to load data only into the master node, without disturbing
>> the configuration of the other 2 slave nodes?
>>
>> Thanks in advance.
>>
>> Regards,
>> shaik.