Your client doesn't have to be on the namenode; it can run on any system that
can reach both the namenode and the datanodes.
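For example, a remote client only needs the Hadoop jars on its classpath and a core-site.xml that points at the namenode. A minimal sketch (the hostname and port here are placeholders, not values from your cluster):

```xml
<?xml version="1.0"?>
<!-- core-site.xml on the client machine: tells the DFS client
     where the namenode is. Replace host/port with your own. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```

With that in place, the standard `hadoop fs` shell commands (or the FileSystem Java API) on the client machine will talk to the remote cluster.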
Hadoop stores files in 64 MB blocks by default, so files of 64 MB or larger
should be stored about as efficiently as 128 MB or 1 GB files.
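If you ever want a different block size, it is just a config setting. A sketch of the relevant hdfs-site.xml property (the 128 MB value is only an example):

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: default block size for newly written files.
     134217728 bytes = 128 MB; the stock default is 64 MB. -->
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
</configuration>
```

This only affects files written after the change; existing files keep the block size they were written with.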
More reading and information here:
http://wiki.apache.org/hadoop/HadoopPresentations
http://wiki.apache.org/hadoop/HadoopArticles
Billy
"sree deepya" <[email protected]> wrote in
message news:[email protected]...
Hi sir/madam,
I am SreeDeepya, doing an M.Tech at IIIT. I am working on a project named
"cost-effective and scalable storage server." The main goal of the project is
to be able to store images on a server, and the data can be up to petabytes.
For that we are using HDFS. I am new to Hadoop and am just learning about it.
Can you please clarify some of the doubts I have?
At present we have configured one datanode and one namenode. The jobtracker is
running on the namenode and the tasktracker on the datanode. Right now the
namenode also acts as the client; that is, we are writing programs on the
namenode to store or retrieve images. My doubts are:
1. Can we put the client and the namenode on two separate systems?
2. Can we access the images on the datanodes of the Hadoop cluster from a
machine that does not have HDFS installed?
3. At present we may not have data up to petabytes; it will be in gigabytes.
Is Hadoop still efficient for storing megabytes and gigabytes of data?
Thanking you,
Yours sincerely,
SreeDeepya