Hi Deniz,

StateFun looks for module.yaml(s) on the classpath.
If you are submitting the job to an existing Flink cluster, this really
means that the file needs to be either:
1. packaged with the jar (as you are already doing), or
2. present on the classpath; for example, you can place your
module.yaml in the /lib directory of your Flink installation. I
suppose that you have different installations in different environments.
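
If it helps, option 2 could look roughly like this (just a sketch:
the /opt/flink path, the module_prd.yaml name, and the jar name are
illustrative, not from your setup):

```shell
# Sketch of option 2: keep one environment-specific module
# configuration per Flink installation instead of baking it into
# the jar. Assumes a standalone Flink at /opt/flink; adjust paths.

# Copy this environment's module configuration onto the classpath:
cp module_prd.yaml /opt/flink/lib/module.yaml

# Submit the same jar in every environment; StateFun picks up
# whatever module.yaml it finds on the classpath at startup:
/opt/flink/bin/flink run \
  -c org.apache.flink.statefun.flink.core.StatefulFunctionsJob \
  my-statefun-job.jar
```

One caveat: whether a bare .yaml file in /lib actually ends up on the
classpath can depend on how your Flink startup scripts assemble it, so
it is worth verifying in your installation (if not, wrapping the yaml
in a small jar placed in /lib is another option).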

I'm not aware of a way to submit additional files along with the jar via
the Flink CLI, but perhaps someone else can chime in :-)

Cheers,
Igal.


On Thu, Dec 2, 2021 at 3:29 PM Deniz Koçak <lend...@gmail.com> wrote:

> Hi,
>
> We have a simple stateful-function job, consuming from Kafka, calling
> an HTTP endpoint (on AWS via an Elastic Load Balancer) and publishing
> the result back via Kafka again.
>
> * We created a jar file to be deployed on a standalone cluster (it's
> not a Docker image), so we added `statefun-flink-distribution`
> version 3.0.0 as a dependency of that jar.
> * The entry class in our job configuration is
> `org.apache.flink.statefun.flink.core.StatefulFunctionsJob`, and we
> simply keep a single module.yaml file in the resources folder for the
> module configuration.
>
> My question is: we would like to deploy that jar to different
> environments (dev and prod), and we are not sure how to pass different
> module configurations (module.yaml, or module_nxt.yaml/module_prd.yaml)
> to the job during startup without creating separate jar files for
> the different environments.
>
> Thanks,
> Deniz
>
