[
https://issues.apache.org/jira/browse/PIG-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13158136#comment-13158136
]
Daniel Dai commented on PIG-1270:
---------------------------------
Thanks Min, that's encouraging. Which version of Hadoop are you using?
Also, as I discussed with Min over IM, it would be better to have a global flag
that signals when enough records have been produced, so that this optimization
can cover more cases.
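
For illustration, here is a minimal sketch of what such a global flag could look
like. This assumes a process-wide holder class; the names (LimitReachedSignal,
markLimitReached, etc.) are hypothetical and not part of the attached patches.

{code:java}
// Hypothetical sketch only: a process-wide flag the execution pipeline could
// set once the limit has been satisfied, and the record reader could poll.
public final class LimitReachedSignal {
    private static volatile boolean limitReached = false;

    private LimitReachedSignal() {}

    // Called by the limit operator once it has collected enough records.
    public static void markLimitReached() {
        limitReached = true;
    }

    // Polled by the record reader before fetching the next input record.
    public static boolean isLimitReached() {
        return limitReached;
    }

    // Reset between tasks so a reused JVM does not carry over stale state.
    public static void reset() {
        limitReached = false;
    }
}
{code}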
> Push limit into loader
> ----------------------
>
> Key: PIG-1270
> URL: https://issues.apache.org/jira/browse/PIG-1270
> Project: Pig
> Issue Type: Bug
> Components: impl
> Affects Versions: 0.7.0
> Reporter: Daniel Dai
> Assignee: Daniel Dai
> Attachments: PIG-1270-1.patch, PIG-1270-2.patch, PIG-1270-3.patch
>
>
> We can optimize the limit operation by stopping early in PigRecordReader. In
> general, we need a way to communicate between PigRecordReader and the
> execution pipeline: POLimit could tell PigRecordReader that enough records
> have already been produced and that it should stop feeding more data.
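
As a rough illustration of the stop-early idea, here is a sketch of a reader
wrapper built on Hadoop's org.apache.hadoop.mapreduce.RecordReader API that
consults the hypothetical LimitReachedSignal flag sketched above. The actual
change would wire this check into PigRecordReader itself; the wrapper name is
illustrative only.

{code:java}
import java.io.IOException;

import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical sketch: delegates to an underlying RecordReader but reports
// end-of-input as soon as the pipeline signals the limit has been reached.
public class EarlyStopRecordReader<K, V> extends RecordReader<K, V> {
    private final RecordReader<K, V> wrapped;

    public EarlyStopRecordReader(RecordReader<K, V> wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        wrapped.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        // Stop feeding data once enough records have been produced.
        if (LimitReachedSignal.isLimitReached()) {
            return false;
        }
        return wrapped.nextKeyValue();
    }

    @Override
    public K getCurrentKey() throws IOException, InterruptedException {
        return wrapped.getCurrentKey();
    }

    @Override
    public V getCurrentValue() throws IOException, InterruptedException {
        return wrapped.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return wrapped.getProgress();
    }

    @Override
    public void close() throws IOException {
        wrapped.close();
    }
}
{code}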