On Mon, 07 Dec 2009 09:51:42 PST
Mario Kadastik <mario.kadastik at cern.ch> wrote:

> I'm not 100% sure if this is the right forum, but admins feel free to move it 
> if it's not.

> The issue I have, in short, is that whenever I fire up the Hadoop
> datanode on the new osol system and start writing files to the HDFS on
> other nodes, the system will work for a few minutes writing files as

Other nodes? Is that systems, or something else?

> Namely I have three other nodes that are similar, but not
> exact. They are somewhat older nodes, so they feature E5420 CPUs and a
> previous model of the motherboard; they also have a bit less memory
> and use 750GB disks. Those systems run Solaris 10 and no matter how
> much I transfer to them through hadoop nothing moves them. So this
> makes me believe that there are a few options that could be bad
> here:
>
> 1) a bug in OpenSolaris kernel (driver or not)
> 2) bad motherboard (hmm.... doubt a bit)
> 3) bad Areca controller
> 4) bad disks
> 5) bad memory

If you're writing to other machines, then shouldn't you add:

6) bad NIC

I've seen problems with the e1000g on one of my systems also; I just
haven't had time to do any diagnosis on it.
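If you want a quick look before swapping hardware, something like the
following should surface driver-level error counters. This is only a
sketch: it assumes the e1000g driver and that kstat(1M) is available
(i.e. a Solaris/OpenSolaris box); on anything else it's a no-op.

```shell
# Dump error-related counters for the e1000g driver via kstat.
# Nonzero ierrors/oerrors/norcvbuf here would point at the NIC or
# its driver rather than the disk stack. Guarded so the snippet
# does nothing harmful on a non-Solaris system.
if command -v kstat >/dev/null 2>&1; then
    kstat -p e1000g | grep -E 'ierrors|oerrors|norcvbuf'
else
    echo "kstat not available (not a Solaris system)"
fi
```

Watching those counters while a transfer is running (e.g. re-running
the command every few seconds) would tell you whether errors climb
along with the hangs.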

        <mike
-- 
Mike Meyer <mwm at mired.org>           http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
