Hello list,

I was unable to access the archives for this list as
http://hadoop.apache.org/mail/core-user/ returns 403.

I am interested in using HDFS for storage, and only tangentially for
map/reduce.  I see clusters mentioned in the docs with many, many nodes and
9TB of disk.

Is HDFS expected to scale to > 100TB?

Does it require massive parallelism to scale to many files?  For instance, do
you think it would slow down drastically in a 2-node, 32TB config?

The workload is file serving at about 100 Mbit/s and 20 req/sec (so roughly
0.6 MB per request on average).
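
In case it helps, here is a rough sketch of the kind of read path I have in
mind, using the standard org.apache.hadoop.fs.FileSystem API -- the namenode
URI and file path below are just placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsRead {
    public static void main(String[] args) throws Exception {
        // Point the client at the namenode (placeholder URI).
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);

        // Open one file and copy its bytes to stdout -- a stand-in for
        // handing the data to whatever actually serves the request.
        FSDataInputStream in = fs.open(new Path("/data/example.bin"));
        try {
            IOUtils.copyBytes(in, System.out, conf, false);
        } finally {
            in.close();
        }
    }
}

Each request would basically be one open() plus a sequential read like this,
rather than anything map/reduce-shaped.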

Any input is appreciated :)

-- 
Todd Troxell
http://rapidpacket.com/~xtat
