The execution within the IDE is most likely not loading the flink-conf.yaml
file to read the configuration.  When run from the IDE you get a
LocalStreamEnvironment, which starts a LocalFlinkMiniCluster.  The
LocalStreamEnvironment is created by
StreamExecutionEnvironment.createLocalEnvironment without passing it any
configuration, so none of StreamExecutionEnvironment, LocalStreamEnvironment,
and LocalFlinkMiniCluster tries to read the config file.
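A partial workaround is to build a Configuration in code and hand it to
createLocalEnvironment yourself; that also covers the task slot setting asked
about below.  A minimal sketch, assuming the 1.x APIs (the class name is just
for illustration):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalWithConfig {
  public static void main(String[] args) throws Exception {
    // Build the configuration in code, since flink-conf.yaml is not read
    // when running from the IDE.
    Configuration config = new Configuration();
    config.setInteger("taskmanager.numberOfTaskSlots", 2);

    // createLocalEnvironment(parallelism, configuration) passes the
    // configuration on to the local mini cluster it starts.
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironment(1, config);

    env.fromElements(1, 2, 3).print();
    env.execute("local job with explicit configuration");
  }
}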

This makes it difficult to test certain Flink features from within the IDE,
as some configuration properties can't be set programmatically.  For
instance, you can't configure the external checkpoint URL in code; it can
only be set in the config file.  That means you can't run a job that turns
on external checkpoints from within the IDE.
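
To make the mismatch concrete, here is a sketch of what you can and cannot do
in code (assuming the CheckpointConfig API; the directory value is only an
example):

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExternalizedCheckpointsExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    // What you *can* do in code: turn checkpointing on and keep
    // checkpoints when the job is cancelled.
    env.enableCheckpointing(10_000L);
    env.getCheckpointConfig().enableExternalizedCheckpoints(
        CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

    // What you *cannot* do in code: choose where they are written.
    // That only comes from flink-conf.yaml, e.g.
    //   state.checkpoints.dir: hdfs:///flink/external-checkpoints
    // which a LocalStreamEnvironment never reads, so running this from
    // the IDE fails for exactly the reason described above.

    env.fromElements(1, 2, 3).print();
    env.execute("externalized checkpoints example");
  }
}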

Ideally one of these components would try to load the config file when
executing locally.  You could then point it to the config file via
the FLINK_CONF_DIR environment variable.
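
Until that happens, you could load the file yourself and pass the result in.
A sketch, assuming GlobalConfiguration.loadConfiguration(configDir) and that
FLINK_CONF_DIR is set (class name again just for illustration):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalEnvFromConfDir {
  public static void main(String[] args) throws Exception {
    // Read flink-conf.yaml from the directory given by FLINK_CONF_DIR
    // ourselves, since the local environment will not do it for us.
    // FLINK_CONF_DIR must be set, otherwise loadConfiguration fails.
    String confDir = System.getenv("FLINK_CONF_DIR");
    Configuration config = GlobalConfiguration.loadConfiguration(confDir);

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironment(1, config);

    env.fromElements("a", "b", "c").print();
    env.execute("local job configured from FLINK_CONF_DIR");
  }
}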


On Fri, Sep 8, 2017 at 8:47 AM, AndreaKinn <kinn6...@hotmail.it> wrote:

> UPDATE:
>
> I'm trying to implement the version with one node and two task slots on my
> laptop. I have also configured the following key in flink-conf.yaml:
>
> taskmanager.numberOfTaskSlots: 2
>
> but when I execute my program in the IDE I get:
>
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Not enough free slots available to run the job. You can decrease the
> operator parallelism or increase the number of slots per TaskManager in the
> configuration.
>
> Parallelism is set to 1.
>
> What could be the problem?
>