Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by johanoskarsson:
http://wiki.apache.org/hadoop/HardwareBenchmarks

------------------------------------------------------------------------------
  It generates 10GB of random data per node and then sorts it.
  
  == Hardware ==
- ||Cluster name||CPU model||CPU freq||No cores||RAM||Disk size||Disk interface||Disk rpm||No disks||Network type||Number of machines||Number of racks||
+ ||Cluster name||CPU model||CPU freq||Cores||RAM||Disk size||Disk interface||Disk rpm||Disks||Network type||Number of machines||Number of racks||
- ||Herd1||Intel Xeon LV||2.0GHz||4||4GB||250GB||SATA||7200rpm||4||GigE||35||2||
+ ||Herd1||Intel Xeon LV||2.0GHz||4||4GB||0.25TB||SATA||7200rpm||4||GigE||35||2||
- ||Herd2||Intel Xeon 5320||1.86GHz||8||8GB||750GB||SATA2||7200rpm||4||GigE||10||1||
+ ||Herd2||Intel Xeon 5320||1.86GHz||8||8GB||0.75TB||SATA2||7200rpm||4||GigE||20||2||
  
  == Benchmark ==
  All benchmarks run with the default randomwriter and sort parameters.
  
- I ran into some odd behavior on Herd2: if I set the max tasks per node to 10 instead of 5, the reducers don't start until the mappers finish, slowing the job significantly.
+ ||Cluster name||Version||Sort time (s)||Mappers||Reducers||Max map tasks / node||Max reduce tasks / node||Map speculative ex||Reduce speculative ex||Parallel copies||Sort MB||Sort factor||
+ ||Herd1||0.14.3||3977||5600||175||?||?||Yes||Yes||20||200||10||
+ ||Herd2||0.14.3||2377||1600||50||?||?||Yes||Yes||20||200||10||
+ ||Herd2||0.18.3||1715||1600||50||7||8||No||Yes||20||100||50||
  
- ||Cluster name||Version||Sort time (s)||Mappers||Reducers||Max tasks / node||Speculative ex||Parallel copies||Sort MB||Sort factor||
- ||Herd1||0.14.3||3977||5600||175||5||Yes||20||200||10||
- ||Herd2||0.14.3||2377||1600||50||5||Yes||20||200||10||
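The randomwriter and sort jobs benchmarked above are normally launched from the Hadoop examples jar with their default parameters. A minimal sketch, assuming a 0.14–0.18-era examples jar; the jar wildcard and HDFS paths below are illustrative, not taken from the wiki page:

```shell
# Step 1: write random data into HDFS with the RandomWriter example
# (by default roughly 10GB per node, matching the description above).
bin/hadoop jar hadoop-*-examples.jar randomwriter /bench/unsorted

# Step 2: sort that data with the Sort example, default parameters.
bin/hadoop jar hadoop-*-examples.jar sort /bench/unsorted /bench/sorted
```

The sort times in the tables would correspond to the elapsed time of the second job.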
- 
