On Thu, Apr 8, 2010 at 10:51 AM, Patrick Angeles <patr...@cloudera.com> wrote:

> Packaging the job and config and sending it to the JobTracker and various
> nodes also adds a few seconds overhead.
>
> On Thu, Apr 8, 2010 at 10:37 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> > By default, for each task hadoop will create a new jvm process, which will
> > be the major cost in my opinion. You can customize the configuration to let
> > the tasktracker reuse the jvm to eliminate the overhead to some extent.
> >
> > On Thu, Apr 8, 2010 at 8:55 PM, Aleksandar Stupar <
> > stupar.aleksan...@yahoo.com> wrote:
> >
> > > Hi all,
> > >
> > > As I understand it, hadoop is mainly used for tasks that take a long
> > > time to execute. I'm considering using hadoop for a task
> > > whose lower bound in distributed execution is around 5 to 10
> > > seconds. I am wondering what the overhead of using hadoop
> > > would be.
> > >
> > > Does anyone have an idea? Any link where I can find this out?
> > >
> > > Thanks,
> > > Aleksandar.
> >
> > --
> > Best Regards
> >
> > Jeff Zhang
>

All jobs make entries in a jobhistory directory on the task tracker. As of
now the jobhistory directory has some limitations: with ext3 you hit the
maximum number of files in a directory at 32K; with xfs or ext4 there is no
such hard limit, but hadoop itself will bog down if the directory gets too
large.

If you want to do this, enable JVM re-use as mentioned above to shorten job
start times. Also be prepared to write some shell scripts to handle cleanup
tasks.

Edward
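For reference, JVM reuse is a per-job setting. Below is a minimal sketch using
the old org.apache.hadoop.mapred API (0.20-era); the class and job names are
just placeholders, not anything from this thread, and the underlying property
is mapred.job.reuse.jvm.num.tasks (-1 = reuse the JVM for any number of tasks
of the same job):

    import org.apache.hadoop.mapred.JobConf;

    public class ShortJob {
        public static JobConf configure() {
            JobConf conf = new JobConf(ShortJob.class);
            conf.setJobName("short-job"); // placeholder name

            // Reuse each task JVM for an unlimited number of this job's
            // tasks; the default of 1 forks a fresh JVM for every task.
            conf.setNumTasksToExecutePerJvm(-1);

            // Equivalent property form, e.g. in mapred-site.xml or via -D:
            //   mapred.job.reuse.jvm.num.tasks = -1
            return conf;
        }
    }

The same property can also be set cluster-wide in mapred-site.xml, but JVMs
are only ever reused by tasks of the same job, so setting it per job keeps the
change limited to the jobs that actually need the faster start-up.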