Yours sincerely,
SreeDeepya
--
View this message in context:
http://www.nabble.com/hdfs-doubt-tp22764502p22765332.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
--
M. Raşit ÖZDAŞ
Your client doesn't have to be on the namenode; it can run on any system that
can reach both the namenode and the datanodes.
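For example, a minimal sketch of the client-side config — the hostname and port below are placeholders for your actual namenode address, and `fs.default.name` is the property name used by Hadoop releases of this era:

```xml
<!-- core-site.xml on the client machine (not the namenode).
     namenode.example.com:9000 is a placeholder; point it at
     the host and port where your namenode is listening. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```

With that in place, `hadoop fs -ls /` on the client talks to the remote cluster; the only requirement is network access to the namenode and datanode ports.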
Hadoop stores files in 64 MB blocks by default, so files of 64 MB or more
should be stored about as efficiently as 128 MB or 1 GB files.
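The reason size matters is that every file occupies at least one block entry of namenode metadata, regardless of how small it is. A rough sketch of the arithmetic, assuming the 64 MB default block size:

```python
# Default HDFS block size (64 MB in Hadoop releases of this era;
# configurable via dfs.block.size).
BLOCK_SIZE = 64 * 1024 * 1024

def num_blocks(file_size_bytes):
    """Number of HDFS blocks a file of the given size occupies.

    Even an empty or tiny file costs one block entry of namenode
    metadata, which is why many small files are inefficient while
    one large file of the same total size is not.
    """
    # Integer ceiling division, with a floor of one block per file.
    return max(1, (file_size_bytes + BLOCK_SIZE - 1) // BLOCK_SIZE)

print(num_blocks(1024 ** 3))  # a 1 GB file spans 16 blocks
print(num_blocks(1024 ** 2))  # a 1 MB file still costs 1 block entry
```

So a single 1 GB file costs 16 metadata entries, while the same gigabyte split into a thousand 1 MB files costs a thousand — the per-file overhead, not the block size itself, is what hurts.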
More reading and information here:
data up to petabytes, but most files will be in gigabytes. Is
Hadoop still efficient at storing megabytes and gigabytes of data?
Thanking you,
Yours sincerely,
SreeDeepya