If it is Pig 0.3 or higher, you can just use the STORE command multiple times in the Pig script to store results directly into HDFS:

A = LOAD ...
...
B = GROUP A ...
C = GROUP A ...
...
STORE B ...
STORE C ...
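As a minimal sketch of what the multi-query version could look like (field names and output paths below are placeholders, not from your setup):

A = LOAD 'input/data' AS (user:chararray, query:chararray);
B = GROUP A BY user;
C = GROUP A BY query;
-- With multi-query execution (available in 0.3+), Pig can share the scan of A across both STOREs
STORE B INTO 'output/by_user';
STORE C INTO 'output/by_query';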
Also look into http://hadoop.apache.org/pig/docs/r0.3.0/piglatin.html#Multi-Query+Execution

I do not know how big your data set is, but you might be able to increase the memory parameters to be able to do it in a single script.

Cheers,
/R

On 1/30/10 7:36 AM, "Jennie Cochran-Chinn" <[email protected]> wrote:

I had a question about storing data to different files. The basic gist of what we are doing is taking a large set of data, performing a group by, and then storing each group's DataBag into a distinct file (on S3). Currently we are using a UDF inside a FOREACH loop that writes the DataBag to a local tmp file and then pushes it to S3. This does not seem to be the ideal way to do this, and we were wondering if anyone had any suggestions.

I know there is the MultiStorage function in the piggybank, but given that we have many different groups, it does not appear that it would scale very well. For instance, in some experiments the cluster I was using could not open new streams and thus failed.

Thanks,
Jennie
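For reference, a rough sketch of the piggybank route mentioned above, splitting one relation into per-key output directories; this assumes the class is org.apache.pig.piggybank.storage.MultiStorage taking the output directory and the index of the field to split on, and the jar path, bucket, and schema are placeholders:

REGISTER /path/to/piggybank.jar;
A = LOAD 'input/data' AS (group_key:chararray, value:chararray);
-- MultiStorage writes records into one sub-directory per distinct value of field 0 (group_key)
STORE A INTO 's3://my-bucket/grouped' USING org.apache.pig.piggybank.storage.MultiStorage('s3://my-bucket/grouped', '0');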
