[ https://issues.apache.org/jira/browse/PIG-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12852485#action_12852485 ]
Pradeep Kamath commented on PIG-1337:
-------------------------------------

We may need to add a new method, "addToDistributedCache()", on LoadFunc. Note that this is an adder, not a setter, since there is only one key for the distributed cache in Hadoop's Job (the Configuration in the Job). Implementations of LoadFunc will therefore have to use the DistributedCache.add*() methods.

> Need a way to pass distributed cache configuration information to hadoop backend in Pig's LoadFunc
> --------------------------------------------------------------------------------------------------
>
>                 Key: PIG-1337
>                 URL: https://issues.apache.org/jira/browse/PIG-1337
>             Project: Pig
>          Issue Type: Improvement
>    Affects Versions: 0.6.0
>            Reporter: Chao Wang
>             Fix For: 0.8.0
>
>
> The Zebra storage layer needs to use the distributed cache to reduce name node load during job runs.
> To do this, Zebra needs to set up distributed-cache-related configuration information in TableLoader (which extends Pig's LoadFunc).
> It is currently doing this within getSchema(conf). The problem is that the conf object there is not the one that is serialized to the map/reduce backend, so the distributed cache is not set up properly.
> To work around this problem, Pig needs to provide, in its LoadFunc, a way to set up distributed cache information in a conf object that is the one actually used by the map/reduce backend.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
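A minimal sketch of the proposal above, assuming the suggested hook: "addToDistributedCache(Configuration)" is the method Pradeep proposes, not an existing LoadFunc API, and the class name and file paths here are hypothetical. Only DistributedCache.addCacheFile() is an actual Hadoop API (org.apache.hadoop.filecache.DistributedCache in the Hadoop versions Pig 0.6/0.8 target).

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;

// Hypothetical sketch only: addToDistributedCache() is the method proposed
// in the comment above; it does not exist on LoadFunc today.
public class TableLoaderSketch /* extends LoadFunc */ {

    // Pig would call this with the Configuration that is actually
    // serialized to the map/reduce backend, so cache entries registered
    // here survive to the task side.
    public void addToDistributedCache(Configuration conf) throws Exception {
        // Use the DistributedCache.add*() methods, which append to the
        // single comma-separated cache key in the Configuration rather
        // than overwriting entries registered by other loaders.
        DistributedCache.addCacheFile(new URI("/path/to/table.meta"), conf);   // hypothetical path
        DistributedCache.addCacheFile(new URI("/path/to/table.index"), conf);  // hypothetical path
    }
}
```

Because the key is shared across the whole Job, an adder keeps multiple LoadFunc instances (or a LoadFunc plus user code) from clobbering each other's cache entries, whereas a setter would.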