Clients can't currently handle NN restarts gracefully. In my tests on 1.x/0.20.x, the client did not fail if it had no new block to allocate while the NN was restarting (it never made a connection to the NN at all), but it did fail when it was writing small blocks very quickly and the NN closed the connection while the client was trying to allocate the next block to write to. So don't bet on it; instead use HA NameNodes with failover, available in 2.x onwards.
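For reference, client-side failover in 2.x is driven by hdfs-site.xml roughly like the sketch below; the nameservice name "mycluster", the NN IDs "nn1"/"nn2", and the host names are placeholders for your own deployment:

```xml
<!-- Sketch only: "mycluster", "nn1"/"nn2" and hosts are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn-host1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn-host2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With that proxy provider configured, the client retries against the other NN when one goes away, rather than failing the write outright.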
On Thu, Jun 7, 2012 at 3:59 PM, Rita <rmorgan...@gmail.com> wrote:
> Running Hadoop 0.22 and I need to restart the namenode so my new rack
> configuration will be set into place. I am thinking of doing a quick stop
> and start of the namenode but what will happen to the current clients? Can
> they tolerate a 30 second hiccup by retrying?
>
> --
> --- Get your facts first, then you can distort them as you please.--

--
Harsh J