Hi Dhiraj,

Unfortunately, there is no such feature available in HDFS. HDFS is designed to utilize all the datanodes in the cluster and to serve the maximum number of clients by distributing load across the datanodes.

That said, there is one workaround: place your client (both reader and writer) on that particular node itself. In that case most (though not necessarily all) of your requests will be served by the local node, because the default block placement policy writes the first replica to the local datanode when the writer runs on a datanode, and reads prefer the closest replica. Note that this will put extra load on that one datanode (which is, after all, what you are asking for).
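For illustration, here is a minimal client sketch (assuming the Hadoop 2.x client libraries on the classpath; the namenode address and file path below are placeholders). Run it on the datanode you want to favor: the write places its first replica locally, and the read is served from that local replica:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalNodeClient {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder: point this at your namenode.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            Path path = new Path("/tmp/example.dat");

            // Write: since this client runs on a datanode, the default
            // block placement policy puts the first replica on this node.
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.writeUTF("hello");
            }

            // Read: the client picks the closest replica, i.e. the local one.
            try (FSDataInputStream in = fs.open(path)) {
                System.out.println(in.readUTF());
            }
        }
    }

If you want to squeeze more out of those local reads, you can additionally enable short-circuit local reads (dfs.client.read.shortcircuit = true, with dfs.domain.socket.path configured) so the client reads block files directly instead of going through the datanode.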
Regards,
Vinay

On Tue, Oct 14, 2014 at 10:37 AM, Dhiraj Kamble <dhiraj.kam...@sandisk.com> wrote:

> Hi,
>
> Is it possible to redirect writes to one particular node, i.e. always
> store the primary replica on the same node, and have reads served from
> this primary node? If the primary node goes down, Hadoop replication
> works as per its policy; but when this node comes back up it should
> again become the primary node. I don't see any config parameter
> available in core-site.xml or hdfs-site.xml that serves this purpose.
>
> Is there any way I can do this?
>
> Regards,
> Dhiraj