[ https://issues.apache.org/jira/browse/FLINK-23697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396699#comment-17396699 ]

Till Rohrmann commented on FLINK-23697:
---------------------------------------

Hi [~jackylau], I am not sure whether I fully understand how Spark achieves 
isolation.

In the case of Flink you have different deployment modes that give you 
different guarantees of isolation:
h2. Session mode
The session mode gives the weakest isolation since it allows running several 
jobs on the same set of resources. Hence, it can happen that the user code of 
two jobs runs in the same {{TaskExecutor}} process.
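
For example, with a standalone session cluster, every job submitted via 
{{bin/flink run}} is executed on the TaskExecutors that were started with the 
cluster, so two unrelated jobs can share the same JVMs. A minimal sketch (the 
example jar paths come from the Flink distribution and are only illustrative):

{code:bash}
# Start a standalone session cluster (one JobManager plus TaskExecutors).
./bin/start-cluster.sh

# Both jobs go to the same session cluster, so their user code may end up
# running inside the same TaskExecutor process.
./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
./bin/flink run ./examples/streaming/WordCount.jar
{code}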

h2. Application/Per-job mode
The application mode only executes a single Flink job. This guarantees that a 
Flink job has dedicated resources and there won't be any interference from 
other jobs.

If you want to have better job isolation, then I recommend using Flink's 
[application/per-job 
mode|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/overview/#deployment-modes].
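
As a rough sketch of how these modes can be launched, e.g. on YARN (the 
targets and example jar are only illustrative, see the linked documentation 
for the details):

{code:bash}
# Application mode: a dedicated cluster is started for this one application
# and torn down when it finishes, so no other job shares its TaskExecutors.
./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar

# Per-job mode behaves similarly: one dedicated cluster per submitted job.
./bin/flink run -t yarn-per-job --detached ./examples/streaming/TopSpeedWindowing.jar
{code}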

> flink standalone Isolation is poor, why not do as spark does
> ------------------------------------------------------------
>
>                 Key: FLINK-23697
>                 URL: https://issues.apache.org/jira/browse/FLINK-23697
>             Project: Flink
>          Issue Type: Bug
>            Reporter: jackylau
>            Priority: Major
>
> Flink standalone isolation is poor; why not do as Spark does?
> Spark abstracts the cluster manager, and its executors are just like Flink's 
> TaskManagers. A Spark worker (standalone) is a process, and the executors run 
> as child processes of the worker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
