Hi Dipesh,

during workflow / job submission you can define variables inside
job.properties (populated e.g. from env vars) that are then referenced in
workflow.xml - see the sketch below. So much for the flexibility part.
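
A minimal sketch of that parameterization (hostnames, paths and variable
names are just examples): job.properties can be generated from env vars by
the submitting script,

    # job.properties - generated from env vars at submission time,
    # e.g. echo "nameNode=$NAME_NODE_URI" >> job.properties
    nameNode=hdfs://nn1.example.com:8020
    jobTracker=jt1.example.com:8032
    oozie.wf.application.path=${nameNode}/user/dipesh/apps/demo

and workflow.xml just references those properties:

    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            ...
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>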

Can you tell me a use case where runtime routing to different JT / NN
instances via Oozie is better than, e.g., routing decided at runtime by a
load balancer?

Thanks,

Andras

--
Andras PIROS
Software Engineer
<http://www.cloudera.com/>

On Mon, Dec 5, 2016 at 7:45 PM, mdk-swandha <[email protected]>
wrote:

> Hi Alex,
>
> The idea is to call an external service which will find the best cluster
> and inform the caller. Today this caller is Oozie; tomorrow it could be
> Zeppelin or any other application.
>
> How can I provide multiple JT and NN addresses in job.properties? You mean
> during job/workflow creation? Will I still need to overwrite job.properties
> or provide these values dynamically somewhere?
>
> Thanks.
> -Dipesh
>
> On Mon, Dec 5, 2016 at 5:24 AM, Andras Piros <[email protected]>
> wrote:
>
> > Hi Dipesh,
> >
> > seems like a bad idea to programmatically change the job-tracker or
> > name-node properties - it's just not Oozie's task to determine the exact
> > JT or NN instances it should use.
> >
> > Instead, I'd rather set up a load balancer for JT and another one for NN,
> > and provide those addresses in Oozie's job.properties. That way we
> > separate concerns - the load balancer can choose the JT or NN node at
> > runtime, e.g. on a round-robin basis.
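> >
> > For example (placeholder hostnames; the ports depend on your Hadoop
> > setup), job.properties would then simply point at the balanced
> > endpoints:
> >
> >     nameNode=hdfs://nn-lb.example.com:8020
> >     jobTracker=jt-lb.example.com:8032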
> >
> > Regards,
> >
> > Andras
> >
> > --
> > Andras PIROS
> > Software Engineer
> > <http://www.cloudera.com/>
> >
> > On Thu, Dec 1, 2016 at 9:29 PM, mdk-swandha <[email protected]>
> > wrote:
> >
> > > Hi,
> > >
> > > I have a use case like this - in a multi-cluster (Hadoop) environment,
> > > if I would like to send a job / Oozie workflow to a desired cluster at
> > > runtime, how can this be done?
> > >
> > > I see that there is a JavaActionExecutor class which reads the NN and
> > > JobTracker in its createBaseHadoopConf method.
> > >
> > > All Hadoop ActionExecutors are derived from JavaActionExecutor, so this
> > > seems to be a place where I can insert my code. How can I add my hook
> > > without disrupting the original flow?
> > >
> > > One option is to derive my own JavaActionExecutor, override the
> > > createBaseHadoopConf method, and then derive all ActionExecutors from my
> > > new JavaActionExecutor - roughly the sketch below. It doesn't seem
> > > elegant to me, so I thought I'd ask here.
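> > >
> > > A minimal sketch of that option (ClusterRouter / ClusterInfo are
> > > hypothetical helpers that decide the target cluster, and the exact
> > > createBaseHadoopConf signature may differ across Oozie versions):
> > >
> > >     import org.apache.hadoop.conf.Configuration;
> > >     import org.jdom.Element;
> > >
> > >     public class RoutingJavaActionExecutor extends JavaActionExecutor {
> > >
> > >         @Override
> > >         public Configuration createBaseHadoopConf(Context context,
> > >                                                   Element actionXml) {
> > >             // Let the base class build the configuration as usual.
> > >             Configuration conf = super.createBaseHadoopConf(context, actionXml);
> > >             // Ask the (hypothetical) routing helper for the target cluster.
> > >             ClusterInfo target = ClusterRouter.pickBestCluster();
> > >             // Property keys depend on the Hadoop version, e.g.
> > >             // mapred.job.tracker vs. yarn.resourcemanager.address.
> > >             conf.set("fs.default.name", target.getNameNodeUri());
> > >             conf.set("mapred.job.tracker", target.getJobTrackerUri());
> > >             return conf;
> > >         }
> > >     }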
> > >
> > > Any input will be useful.
> > >
> > > Thanks.
> > > -Dipesh
> > >
> >
>
