The link to the mail discussion is broken, but here it is:

http://wiki.apache.org/hadoop/MachineScaling

"Hadoop benefits greatly from ECC memory, which is not low-end,
however using ECC memory is RECOMMENDED."

Generally speaking, you can re-run a MapReduce task that fails due to
RAM issues, but you can't always redo an HBase operation.
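
For what it's worth, the retry side of that is just configuration. A
minimal sketch using the 0.20-era property names (the values here are
illustrative, not recommendations):

import org.apache.hadoop.conf.Configuration;

public class RetryConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // A map or reduce attempt that dies (OOM, segfault from bad
        // RAM, ...) is simply re-run elsewhere, up to this many
        // attempts per task.
        conf.setInt("mapred.map.max.attempts", 4);
        conf.setInt("mapred.reduce.max.attempts", 4);
        // The HBase client also retries failed RPCs, but a write the
        // RegionServer already acknowledged out of corrupted memory is
        // persisted as-is; there is nothing left to re-run.
        conf.setInt("hbase.client.retries.number", 10);
        System.out.println("max map attempts: "
                + conf.getInt("mapred.map.max.attempts", 1));
    }
}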

-ryan

On Mon, Nov 29, 2010 at 11:58 PM, Christian van der Leeden
<[email protected]> wrote:
> Hi,
>
> I guess this is only about reliability: the read failures will be
> corrected but also reported, so you have a chance to replace the
> memory that has gone bad. Without ECC you get garbage or read
> failures (e.g. signal 11 / SIGSEGV), and your node will be corrupted
> or down.
>
> Christian
>
> On Nov 30, 2010, at 5:31 AM, Hari Sreekumar wrote:
>
>> Hi,
>>
>>   I read recently that ECC memory is better for HBase/Hadoop. But isn't
>> ECC memory slightly slower than normal memory? How much extra reliability
>> does ECC provide? Has anyone done a stress-test comparison of ECC vs.
>> normal memory for HBase, or Hadoop in general? ECC also costs more. Is it
>> worth it?
>>
>> Thanks,
>> Hari
>
>
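
On Christian's point above that ECC errors are corrected and also
reported: on Linux the kernel's EDAC subsystem exposes per-memory-
controller error counters under /sys/devices/system/edac/mc. A minimal
monitoring sketch (a hypothetical snippet, assuming a Linux host with
EDAC loaded; the sysfs paths are the standard ones):

import java.io.IOException;
import java.nio.file.*;

public class EdacCheck {
    public static void main(String[] args) throws IOException {
        Path edac = Paths.get("/sys/devices/system/edac/mc");
        // Each mc* directory is one memory controller. A rising
        // ce_count (corrected errors) is the cue to swap the bad DIMM
        // before it turns into a ue_count (uncorrectable) event.
        try (DirectoryStream<Path> mcs =
                Files.newDirectoryStream(edac, "mc*")) {
            for (Path mc : mcs) {
                String ce = Files.readAllLines(mc.resolve("ce_count"))
                        .get(0).trim();
                String ue = Files.readAllLines(mc.resolve("ue_count"))
                        .get(0).trim();
                System.out.println(mc.getFileName()
                        + " corrected=" + ce + " uncorrected=" + ue);
            }
        }
    }
}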
