For the Hadoop filesystem, I know that it is basically unlimited in terms of
total storage because one can always add new hardware, but is it also
unlimited in terms of a single file?

What I mean is: if I store a file /user/dir/a.index that has, say, 100 blocks,
but no single server has space for more than 10 of those blocks, will the
Hadoop filesystem store and replicate different blocks on different servers
and still present the client with a single-file view, or does the whole file
have to be stored and replicated on each machine?
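For what it's worth, here is a small sketch of how I would expect to inspect
where the blocks of such a file end up, using the FileSystem/BlockLocation
API of the Java client. The class name and the path are just taken from my
example above, so treat it as an illustration rather than anything official:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.util.Arrays;

public class BlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // The example file from my question above.
        Path file = new Path("/user/dir/a.index");
        FileStatus status = fs.getFileStatus(file);

        // Ask the namenode where each block of the file is stored.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                + " length=" + block.getLength()
                + " hosts=" + Arrays.toString(block.getHosts()));
        }
    }
}

I assume something like "hadoop fsck /user/dir/a.index -files -blocks
-locations" would show similar per-block placement from the command line,
but I have not verified that against my version.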

Dennis
