[ https://issues.apache.org/jira/browse/PIG-872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12727660#action_12727660 ]

Milind Bhandarkar commented on PIG-872:
---------------------------------------

A couple of things:

First, as Pradeep says, only the Hadoop job that performs the FR join needs to 
add the replicated dataset to the distributed cache.
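
A minimal sketch of this in terms of the Hadoop DistributedCache API (the 
isFRJoinJob flag and the input path are hypothetical placeholders, not Pig's 
actual compilation code):

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class FRJoinJobSetup {
        // Only the job that executes the FR join registers the replicated
        // input in the distributed cache; other jobs in the plan do not.
        static void setUp(JobConf conf, boolean isFRJoinJob) throws Exception {
            if (isFRJoinJob) {
                // hypothetical HDFS location of the replicated (small) input
                URI replicated = new URI("hdfs:///user/pig/replicated-input");
                DistributedCache.addCacheFile(replicated, conf);
            }
        }
    }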

Second, make sure that the replicated dataset has a high replication factor, 
such as 10 (or the same replication as job.jar). For an already materialized 
dataset, Pig need not do anything except warn if the replication factor is 
small (e.g. 3). But if the replicated dataset is produced as an intermediate 
output by Pig, it should be generated with a high replication factor.
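
A sketch of that policy, assuming a hypothetical helper name and using 
mapred.submit.replication (the job.jar replication, default 10) as the target:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationPolicy {
        // For an existing dataset: only warn when replication is low.
        // For a Pig-generated intermediate: raise replication to match job.jar.
        static void apply(Configuration conf, Path replicated,
                          boolean pigGenerated) throws Exception {
            FileSystem fs = replicated.getFileSystem(conf);
            short current = fs.getFileStatus(replicated).getReplication();
            short target = (short) conf.getInt("mapred.submit.replication", 10);
            if (pigGenerated) {
                fs.setReplication(replicated, target);
            } else if (current < target) {
                System.err.println("Warning: replication factor of " + replicated
                    + " is only " + current + "; many concurrent maps reading"
                    + " it directly may overload HDFS");
            }
        }
    }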

> use distributed cache for the replicated data set in FR join
> ------------------------------------------------------------
>
>                 Key: PIG-872
>                 URL: https://issues.apache.org/jira/browse/PIG-872
>             Project: Pig
>          Issue Type: Improvement
>            Reporter: Olga Natkovich
>
> Currently, the replicated file is read directly from DFS by all maps. If the 
> number of concurrent maps is huge, we can overwhelm the NameNode with open 
> calls.
> Using the distributed cache will address the issue and might also give a 
> performance boost, since the file will be copied locally once and then 
> reused by all tasks running on the same machine.
> The basic approach would be to use cacheArchive to place the file into the 
> cache on the frontend; on the backend, the tasks would refer to the data 
> using the path from the cache.
> Note that cacheArchive does not work in Hadoop local mode. (Not a problem for 
> us right now as we don't use it.)
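
For the backend half of that approach, a task would resolve the local copy 
roughly like this (a sketch against the Hadoop DistributedCache API, not Pig's 
actual loader code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;

    public class CacheLookup {
        // On the backend, the task reads the replicated data from the local
        // copy the TaskTracker already fetched, so there is no HDFS open()
        // call per map.
        static Path localReplicatedInput(Configuration conf) throws Exception {
            Path[] cached = DistributedCache.getLocalCacheFiles(conf);
            return cached[0];  // assumes the replicated input is the only cache file
        }
    }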

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
