Re: hdfs-doubt

2009-04-02 Thread Rasit OZDAS
-- M. Raşit ÖZDAŞ

Re: hdfs-doubt

2009-03-29 Thread Billy Pearson
Your client doesn't have to be on the namenode; it can run on any system that can reach the namenode and the datanodes. Hadoop stores files in 64MB blocks by default, so files of 64MB or more should be stored about as efficiently as 128MB or 1GB files. More reading and information here:
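To see why file size barely matters once you pass the block size, note that a file's block count is just a ceiling division of its size by the block size. A minimal sketch (the 64MB figure is the default mentioned above; the file sizes are illustrative):

```java
// Sketch: how many HDFS blocks a file of a given size occupies,
// assuming the default 64MB block size. Blocks are a logical unit:
// the last (partial) block only consumes its actual bytes on disk.
public class BlockCount {
    static long blocks(long fileSize, long blockSize) {
        // Ceiling division: any trailing partial block still counts as one block.
        return (fileSize + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long blockSize = 64 * mb;
        System.out.println(blocks(64 * mb, blockSize));   // a 64MB file -> 1 block
        System.out.println(blocks(128 * mb, blockSize));  // a 128MB file -> 2 blocks
        System.out.println(blocks(1024 * mb, blockSize)); // a 1GB file -> 16 blocks
    }
}
```

So a 1GB file is simply more blocks, not a different storage mechanism; the per-file overhead that does matter is the namenode metadata, which is why many tiny files are costlier than a few large ones.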

Re: hdfs-doubt

2009-03-29 Thread deepya
data up to petabytes, but ours will be in gigabytes. Is Hadoop still efficient at storing megabytes and gigabytes of data? Thanking you, Yours sincerely, SreeDeepya -- View this message in context: http://www.nabble.com/hdfs-doubt-tp22764502p22765332.html Sent from the Hadoop core-user mailing list archive at Nabble.com.