Hi,

I just wanted to share a test we conducted on our small cluster of 3 datanodes and one namenode. Basically, we have lots of data to process, and we run a parsing script outside Hadoop that creates the key,value pairs. This output, which is plain text files, is then imported into Hadoop using the put/get etc. commands.
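To give an idea, each client machine essentially does the equivalent of the sketch below, written here against the Hadoop FileSystem API rather than the shell put; the namenode URI, class name and paths are placeholders, not our real configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Copies one locally parsed key/value text file into HDFS,
// the programmatic equivalent of "hadoop fs -put <local> <hdfs dir>".
public class ParsedOutputUploader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the namenode (placeholder host/port).
        conf.set("fs.default.name", "hdfs://namenode:9000");

        FileSystem fs = FileSystem.get(conf);
        Path local = new Path(args[0]);   // e.g. a parsed .txt file on the client machine
        Path dest  = new Path(args[1]);   // e.g. a destination directory in HDFS
        // delSrc = false, overwrite = true
        fs.copyFromLocalFile(false, true, local, dest);
        fs.close();
    }
}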

To speed things up, we run the parsing jobs in parallel on multiple machines which are not part of our cluster (3 datanodes + namenode) but do have the same version of Hadoop installed as the cluster, and we perform the puts from them. This workflow has significantly improved our time to import the data into Hadoop, after which we run the reduce-only step to aggregate.
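That reduce-only step is roughly along the lines of the sketch below. To be clear, this is not our actual job: the identity mapper plus a made-up summing reducer stand in for our real aggregation logic, and it assumes the parsed lines are tab-separated key/value text.

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class AggregateParsedOutput {

    // Illustrative reducer: sums the numeric values seen for each key.
    // Our real aggregation logic is different.
    public static class SumReducer extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> values,
                           OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            long sum = 0;
            while (values.hasNext()) {
                sum += Long.parseLong(values.next().toString());
            }
            output.collect(key, new Text(Long.toString(sum)));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(AggregateParsedOutput.class);
        conf.setJobName("aggregate-parsed-output");

        // Input lines are already "key<TAB>value", so the mapper is the
        // identity mapper and the aggregation happens entirely in the reducer.
        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(SumReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));   // parsed files in HDFS
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // aggregated output
        JobClient.runJob(conf);
    }
}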

Currently, the way we insert data is through our namenode, which all the machines outside the cluster (call them HDFS clients) connect to; they are not part of the master/slave setup. I haven't tried it, but maybe we can perform these puts via the datanodes themselves and not just through the namenode? Right now the namenode is the single point through which the HDFS client machines insert the parsed data.

Secondly, I would assume that this is a safe way to import parsed data into Hadoop before we aggregate, and that it will most likely not cause any data corruption in HDFS. Granted, anything can happen :).

It would also be interesting to import our logs and perform the mapping step inside Hadoop versus doing it outside; I wonder if the performance would be better, worse or the same. Yes, this depends on many factors, among them the number of datanodes, the amount of data to process, the hardware, etc., and we are limited there. We are trying to utilize machines outside the cluster which are idle and can process the data, and then insert the output into HDFS via puts.

Your thoughts, comments, suggestions are welcome.

Thanks,
Usman




