Ashutosh,

The problem is I don't want to use that location at all, since I am
constructing the output location from the tuple input. The location is just
a dummy placeholder in which I substitute the right parameters.
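For anyone hitting the same thing, here is a minimal sketch of what stripping the dummy portion could look like, assuming the placeholder part of the location can simply be cut off before it reaches FileOutputFormat.setOutputPath(job, new Path(base)). The basePath helper below is hypothetical, not part of the Pig or Hadoop APIs:

```java
// Hypothetical helper: given a templated store location such as
// ".../ns_{0}/site_{1}", return the longest prefix that contains no
// placeholders. setStoreLocation could then pass this prefix to
// FileOutputFormat.setOutputPath, so the literal ns_{0}/site_{1}
// directories are never created.
class StoreLocationUtil {
    static String basePath(String location) {
        int brace = location.indexOf('{');
        if (brace < 0) {
            return location; // no placeholders, use the location as-is
        }
        // back up to the directory separator before the first placeholder
        int slash = location.lastIndexOf('/', brace);
        return slash > 0 ? location.substring(0, slash) : "/";
    }

    public static void main(String[] args) {
        System.out.println(basePath(
            "/Users/felix/Documents/pig/multi_store_output/ns_{0}/site_{1}"));
        // prints /Users/felix/Documents/pig/multi_store_output
    }
}
```

The full templated location would still need to be kept around (e.g. in a field or the job configuration) for the per-tuple substitution at write time.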

Felix

On Wed, Nov 2, 2011 at 10:47 AM, Ashutosh Chauhan <[email protected]> wrote:

> Hey Felix,
>
> >> The only problem is that in the setStoreLocation function we have to
> call
> >> FileOutputFormat.setOutputPath(job, new Path(location));
>
> Can't you massage the location into the string you want?
>
> Ashutosh
>
> On Tue, Nov 1, 2011 at 18:07, felix gao <[email protected]> wrote:
>
> > I have written a custom store function based primarily on the
> > multi-storage store function.  The way I use it is
> >
> >
> > store load_log INTO
> > '/Users/felix/Documents/pig/multi_store_output/ns_{0}/site_{1}' using
> > MyMultiStorage('2,1', '1,2');
> > where {0} and {1} will be substituted with the tuple's fields at index 0
> > and index 1.  Everything is fine and all the data is written to the
> > correct place.  The only problem is that in the setStoreLocation function
> > we have to call FileOutputFormat.setOutputPath(job, new Path(location));
> > I have '/Users/felix/Documents/pig/multi_store_output/ns_{0}/site_{1}'
> > as my output location, so a folder is actually created in my fs with
> > ns_{0} and site_{1}.  Is there a way to tell Hadoop not to create those
> > output directories?
> > Thanks,
> >
> > Felix
> >
>
