AFAIK, the only limits are those imposed by your available memory (something has
to hold all that metadata), and potentially by your version of Hadoop (it might
refuse to schedule a job with 100K input splits).
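
For illustration only (a hedged sketch, not from your script): if the
comma-separated $FILES string itself is what worries you, a glob pattern keeps
the parameter short while still expanding to many input paths on the Hadoop
side. The paths below are hypothetical.

    -- invoke with a short parameter; the glob expands server-side
    -- pig -param FILES='/logs/2011/10/*/part-*' myscript.pig
    raw = LOAD '$FILES' AS (xyz:chararray, abc:int);

Either way, each matched file still becomes at least one input split, so the
memory and split-count considerations above still apply.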

On Tue, Oct 18, 2011 at 12:20 AM, Something Something <
[email protected]> wrote:

> Is there a limit on:
>
> 1)  How long the $FILES string can be?
> 2)  Total # of input paths to process?
>
> when I do this in my Pig script...
>
> LOAD '$FILES'
>                 AS (xyz:chararray, abc:int);
>
>
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 4430
>
>
> Thanks for the help.
>
