Thanks Andrzej and Doug!

I will try both approaches in my later work and evaluate them.

On 1/17/07, Doug Cutting <[EMAIL PROTECTED]> wrote:
> Andrzej Bialecki wrote:
> > The reason is that if you pack this file into your job JAR, the job jar
> > would become very large (presumably this 40MB is already compressed?).
> > The job jar needs to be copied to each tasktracker for each task, so you
> > will take a performance hit just because of the size of the job jar
> > ... whereas if this file sits on DFS and is highly replicated, its
> > content will always be available locally.
>
> Note that the job jar is copied into HDFS with a highish replication
> (10?), and that it is only copied to each tasktracker node once per
> *job*, not per task.  So it's only faster to manage this yourself if you
> have a sequence of jobs that share this data, and if the time to
> re-replicate it per job is significant.
>
> Doug
>
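For anyone following along: the approach Doug and Andrzej describe (keeping the large file on DFS with a high replication factor rather than inside the job jar) can be sketched with the DFS shell. The file name and path below are illustrative, not from this thread, and the commands match the Hadoop CLI of this era:

```shell
# Copy the large data file to DFS once, outside any job jar.
hadoop dfs -put bigdata.dat /shared/bigdata.dat

# Raise its replication so most tasktracker nodes end up with a
# nearby copy; -w waits until the target replication is reached.
# (The job jar itself is written with the replication set by
# mapred.submit.replication, which defaults to 10.)
hadoop dfs -setrep -w 10 /shared/bigdata.dat
```

Jobs can then open /shared/bigdata.dat directly (or register it via DistributedCache) instead of re-shipping it inside every job jar, which is exactly the case Doug notes: it pays off when several jobs share the same data.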

_______________________________________________
Nutch-developers mailing list
Nutch-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nutch-developers
