[ 
https://issues.apache.org/jira/browse/FLINK-2954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Jiang updated FLINK-2954:
------------------------------
    Description: 
There are programs that rely on custom environment variables. In a Hadoop 
MapReduce job we can use -Dmapreduce.map.env and -Dmapreduce.reduce.env to 
pass them. Similarly, in Spark we can use --conf 'spark.executorEnv.XXX=value'. 
There is no such feature yet in Flink.
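
For reference, a minimal sketch of the equivalent invocations on each platform 
(MY_VAR, foo, and the jar/class names are illustrative; the flink run line is 
purely hypothetical syntax for the requested feature, not an existing option):

    # Hadoop MapReduce: comma-separated NAME=value pairs, one property per task type
    hadoop jar my-app.jar MyJob \
        -Dmapreduce.map.env="MY_VAR=foo" \
        -Dmapreduce.reduce.env="MY_VAR=foo" \
        /input /output

    # Spark: one spark.executorEnv.* property per variable on the executors
    spark-submit --conf "spark.executorEnv.MY_VAR=foo" my-app.jar

    # Flink (hypothetical -- no such option exists today)
    flink run -Dtaskmanager.env.MY_VAR=foo my-app.jar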

This puts Flink at a serious disadvantage when customers need such a feature.




> Unable to pass custom environment variables in the cluster to processes that 
> spawn the TaskManager
> -----------------------------------------------------------------------------------------------
>
>                 Key: FLINK-2954
>                 URL: https://issues.apache.org/jira/browse/FLINK-2954
>             Project: Flink
>          Issue Type: Bug
>          Components: Command-line client, Distributed Runtime
>            Reporter: Jian Jiang
>            Priority: Critical
>
> There are programs that rely on custom environment variables. In a Hadoop 
> MapReduce job we can use -Dmapreduce.map.env and -Dmapreduce.reduce.env to 
> pass them. Similarly, in Spark we can use --conf 'spark.executorEnv.XXX=value'. 
> There is no such feature yet in Flink.
> This puts Flink at a serious disadvantage when customers need such a feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
