[ https://issues.apache.org/jira/browse/PIG-924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12745540#action_12745540 ]
Olga Natkovich commented on PIG-924:
------------------------------------

Todd and Dmitry, I understand your intention. I am wondering whether, in the current situation, the following might be the best course of action:

(1) Release Pig 0.4.0. I think we have resolved all the blockers and can start the process.
(2) Wait until Hadoop 20.1 is released and then release Pig 0.5.0. Owen promised that Hadoop 20.1 will go out for a vote next week.

This means that Pig 0.4.0 and 0.5.0 will be just a couple of weeks apart, which should not be a big issue for users. Meanwhile, they can apply PIG-660 to the code bundled with Pig 0.4.0 or to the trunk. I am currently working with release engineering to get an official hadoop20.jar that Pig can be built with; I expect to have it in the next couple of days.

The concern with applying the patch is the code complexity it introduces. Also, patches that are version specific will not be easy to apply. Multiple branches are something we understand and know how to work with better. We also don't want to set a precedent of supporting Pig releases on multiple versions of Hadoop, because it is not clear that this is something we will be able to maintain going forward.

> Make Pig work with multiple versions of Hadoop
> ----------------------------------------------
>
>          Key: PIG-924
>          URL: https://issues.apache.org/jira/browse/PIG-924
>      Project: Pig
>   Issue Type: Bug
>     Reporter: Dmitriy V. Ryaboy
>  Attachments: pig_924.2.patch, pig_924.3.patch, pig_924.patch
>
> The current Pig build scripts package Hadoop and other dependencies into the
> pig.jar file. This means that if users upgrade Hadoop, they also need to
> upgrade Pig. Pig has relatively few dependencies on Hadoop interfaces that
> changed between 18, 19, and 20. It is possible to write a dynamic shim that
> allows Pig to use the correct calls for any of the above versions of Hadoop.
> Unfortunately, the build process precludes doing this at runtime and forces
> an unnecessary Pig rebuild even if dynamic shims are created.
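The "dynamic shim" idea in the issue description can be sketched roughly as follows. This is an illustrative sketch only, not the actual PIG-924 patch: the `ShimResolver` class, the `org.apache.pig.shims.*` package, and the shim class names are all hypothetical, and a real implementation would read the version from Hadoop's `VersionInfo` on the classpath rather than take it as a parameter.

```java
// Hypothetical sketch of a dynamic shim loader: pick a version-specific
// shim class at runtime instead of baking one Hadoop version into pig.jar.
// All class and package names here are illustrative, not from the real patch.
public final class ShimResolver {

    // Map a Hadoop version string (as org.apache.hadoop.util.VersionInfo
    // .getVersion() would return, e.g. "0.20.1") to the fully qualified name
    // of the shim compiled against that Hadoop release line.
    public static String shimClassFor(String hadoopVersion) {
        String major;
        if (hadoopVersion.startsWith("0.18")) {
            major = "18";
        } else if (hadoopVersion.startsWith("0.19")) {
            major = "19";
        } else if (hadoopVersion.startsWith("0.20")) {
            major = "20";
        } else {
            throw new IllegalArgumentException(
                "Unsupported Hadoop version: " + hadoopVersion);
        }
        return "org.apache.pig.shims.Hadoop" + major + "Shim";
    }

    // At runtime the chosen shim would be instantiated reflectively, e.g.:
    //   Object shim = Class.forName(shimClassFor(version)).newInstance();
    // so only the shim matching the Hadoop jars actually on the classpath
    // is ever loaded; the others can fail to link without breaking Pig.
}
```

The point of resolving the class name at runtime, rather than at build time, is exactly what the issue asks for: one pig.jar that links against whichever of Hadoop 18, 19, or 20 the user has deployed.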