Better to provide a summary as well as the link.
On Friday, February 18, 2011, Shrinivas Joshi <[email protected]> wrote:
> There seems to be a wiki page already intended for capturing information
> on disks in a Hadoop environment. http://wiki.apache.org/hadoop/DiskSetup
>
> Do we just want to link the thread on HDD recommendations from this wiki
> page?
>
> -Shrinivas
>
> On Tue, Feb 15, 2011 at 11:48 AM, zGreenfelder <[email protected]> wrote:
>
>> un-top-posting everything.
>>
>> On Sat, Feb 12, 2011 at 8:26 AM, Michael Segel <[email protected]> wrote:
>>
>>> Since the OP believes that their requirement is 1TB per node... a
>>> single 2TB would be the best choice. It allows for additional space
>>> and you really shouldn't be too worried about disk i/o being your
>>> bottleneck.
>>
>> The original poster also seemed somewhat interested in disk bandwidth.
>> That is facilitated by having more than one disk in the box.
>>
>> On Tue, Feb 15, 2011 at 11:49 AM, Shrinivas Joshi <[email protected]> wrote:
>>
>>> Thanks much to all who shared their inputs. This really helps. It
>>> would be nice to have a wiki page collecting all this good
>>> information. I will check with that. We are definitely going with
>>> large capacity disks (>= 1TB).
>>>
>>> -Shrinivas
>>
>> I think the guidelines would be good to capture, but that seems like
>> it'd be more of a footnote or subsection to a larger hardware
>> notes/specs/suggestions page, with some guides for picking processors,
>> memory, et al. (maybe also noting what flavors of OSes are known to
>> have particular upsides/downsides). It was noted no less than 3 times
>> in the thread that this is a very fluid target, and completely
>> reasonable choices today (e.g. X TB SATA drives) might be viewed as
>> silly in a year or 6 months.
>>
>> That's my personal opinion, anyway.
>>
>> --
>> Even the Magic 8 ball has an opinion on email clients: Outlook not so good.
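One thing worth capturing in that summary, since the disk-bandwidth point
keeps coming up: in HDFS you get the extra spindles' bandwidth by listing
one data directory per physical disk, and the DataNode rotates new blocks
across them. A minimal sketch for hdfs-site.xml (assuming the 0.20-era
property name dfs.data.dir, later renamed dfs.datanode.data.dir; the mount
paths are purely illustrative):

  <!-- One directory per physical disk; the DataNode rotates new block
       writes across the listed directories, so I/O spreads over all
       spindles instead of queueing on one. The paths below are
       hypothetical examples, not recommendations. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
  </property>

The same idea applies to mapred.local.dir, so shuffle spill traffic also
spreads across the disks rather than hammering a single one.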
