Michael,

I think I already recommended that ;)

Here, both of our clusters run with that config.

J-D
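
For reference, the workaround being discussed is a single property in hadoop-site.xml on each node. The property name and value below are taken from the quoted thread; the surrounding <configuration> wrapper is the standard Hadoop config file structure, shown here only as an illustrative sketch:

```xml
<?xml version="1.0"?>
<!-- conf/hadoop-site.xml (sketch; property and value are from the
     quoted thread). A value of 0 disables the DataNode's socket
     write timeout entirely, so slow readers no longer kill the
     connection. -->
<configuration>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>0</value>
  </property>
</configuration>
```

Note that because this is not set in hadoop-default.xml on the versions discussed here, it has to be added explicitly, and the DataNodes restarted for it to take effect.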

On Mon, Jan 26, 2009 at 9:15 AM, Michael Dagaev <michael.dag...@gmail.com> wrote:

> Thanks, Jean-Daniel.
> Would you recommend setting dfs.datanode.socket.write.timeout=0 in
> hadoop-site.xml ?
>
> On Mon, Jan 26, 2009 at 4:08 PM, Jean-Daniel Cryans <jdcry...@apache.org>
> wrote:
> > Michael,
> >
> > It seems there are many reasons it can time out; the example given in
> > HADOOP-3831 is a slow reading client.
> >
> > J-D
> >
> > On Mon, Jan 26, 2009 at 8:46 AM, Michael Dagaev <
> michael.dag...@gmail.com> wrote:
> >
> >> Jean-Daniel,
> >>
> >>               Property dfs.datanode.socket.write.timeout is not set
> >> in hadoop-site.xml.
> >> It does not appear in hadoop-default.xml either.
> >>
> >> Do you know why the data node sockets timed out? The host does not
> >> look overloaded.
> >>
> >> Thank you for your cooperation,
> >> M.
> >>
> >> On Mon, Jan 26, 2009 at 3:33 PM, Jean-Daniel Cryans <
> jdcry...@apache.org>
> >> wrote:
> >> > Michael,
> >> >
> >> > You don't see anything in your region server logs? Hmm, we usually
> get
> >> > those if we don't set the following in the hadoop-site.xml file:
> >> >
> >> > <property>
> >> >  <name>dfs.datanode.socket.write.timeout</name>
> >> >  <value>0</value>
> >> > </property>
> >> >
> >> > See if it stops the exception. In any case, until Hadoop 0.18.3 and
> >> Hadoop
> >> > 0.19.1 are out, you should probably still use that config to be safe.
> >> >
> >> > J-D
>