Todd Lipcon commented on PIG-924:

bq. I think this is a bad idea and is totally unmaintainable. In particular, 
the HadoopShim interface is very specific to the changes in those particular 
versions. We are trying to stabilize the FileSystem and Map/Reduce interfaces 
to avoid these problems and that is a much better solution.

Agreed that this is not a long-term solution. As you said, the long-term 
solution is stabilized cross-version APIs, which would make this unnecessary. 
The fact is, though, that a significant number of people are running 0.18.x 
and would like to use Pig 0.4.0, and supporting them out of the box seems 
worth it. This patch is small and easily verifiable, both by eye and by tests. 
Given that the API is still changing for 0.21, and Pig hasn't adopted the 
"new" MR APIs yet, it seems premature to leave 0.18 out in the cold.

Do you have an objection to committing this only on the 0.4.0 branch and *not* 
planning to maintain it in trunk/0.5?
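For context, the shim approach under discussion isolates version-specific Hadoop calls behind a small interface and selects an implementation at runtime. The sketch below is illustrative only; the class and method names are hypothetical and are not taken from the attached patch:

```java
// Minimal sketch of a version-shim pattern (hypothetical names, not the
// actual PIG-924 patch): version-specific behavior lives behind one
// interface, and a factory picks an implementation from the Hadoop
// version string at runtime.
public class ShimDemo {

    interface HadoopShim {
        // Each implementation would wrap the API calls that differ
        // between Hadoop releases; a label stands in for them here.
        String describe();
    }

    static class Hadoop18Shim implements HadoopShim {
        public String describe() { return "using 0.18 APIs"; }
    }

    static class Hadoop20Shim implements HadoopShim {
        public String describe() { return "using 0.20 APIs"; }
    }

    // Select a shim from a Hadoop version string such as "0.18.3".
    static HadoopShim shimFor(String version) {
        if (version.startsWith("0.18") || version.startsWith("0.19")) {
            return new Hadoop18Shim();
        }
        return new Hadoop20Shim();
    }

    public static void main(String[] args) {
        System.out.println(shimFor("0.18.3").describe());
    }
}
```

The point of the pattern is that only the small shim classes touch unstable interfaces, so the rest of the codebase compiles once against the shim interface.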

> Make Pig work with multiple versions of Hadoop
> ----------------------------------------------
>                 Key: PIG-924
>                 URL: https://issues.apache.org/jira/browse/PIG-924
>             Project: Pig
>          Issue Type: Bug
>            Reporter: Dmitriy V. Ryaboy
>         Attachments: pig_924.2.patch, pig_924.3.patch, pig_924.patch
> The current Pig build scripts package hadoop and other dependencies into the 
> pig.jar file.
> This means that if users upgrade Hadoop, they also need to upgrade Pig.
> Pig has relatively few dependencies on Hadoop interfaces that changed between 
> 18, 19, and 20.  It is possible to write a dynamic shim that allows Pig to 
> use the correct calls for any of the above versions of Hadoop. Unfortunately, 
> the build process precludes doing this at runtime, and forces an unnecessary 
> Pig rebuild even if dynamic shims are created.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
