[GitHub] flink issue #3599: [FLINK-6174][HA]introduce a new election service to make ...

2017-03-24 Thread WangTaoTheTonic
Github user WangTaoTheTonic commented on the issue:

https://github.com/apache/flink/pull/3599
  
I don't think that's a good idea, as it cannot solve the "split brain" issue 
either.

The key problem is that `LeaderLatch` in Curator is too sensitive to the 
connection state to ZooKeeper (it revokes leadership when the connection to 
ZooKeeper is temporarily broken). Probably the best way is offering a 
"duller" LeaderLatch, which could also be used in a standalone cluster.

I did the same work in our own private Spark release; let me see if it can be 
reused.
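The "duller" latch described above could be modeled roughly as follows. This is a hedged sketch: `TolerantLeaderLatch`, `ConnState`, and the callback names are illustrative stand-ins, not Curator's actual API. The idea is that a transient `SUSPENDED` state keeps leadership, and only a definitive `LOST` state revokes it:

```java
// Illustrative model of a "duller" leader latch: leadership survives a
// transient SUSPENDED connection state and is revoked only on LOST.
// All names here are hypothetical, not Curator's real classes.
enum ConnState { CONNECTED, SUSPENDED, LOST }

class TolerantLeaderLatch {
    private volatile boolean leader;

    void grantLeadership() { leader = true; }

    // React to connection-state changes from the coordination service.
    void onStateChange(ConnState state) {
        if (state == ConnState.LOST) {
            leader = false;  // definitive session loss: revoke leadership
        }
        // SUSPENDED is deliberately ignored: a brief ZooKeeper hiccup
        // does not cost this node its leadership.
    }

    boolean hasLeadership() { return leader; }
}
```

A stock Curator `LeaderLatch`, by contrast, reacts to `SUSPENDED` as well, which is exactly the sensitivity being complained about here.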


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3599: [FLINK-6174][HA]introduce a new election service to make ...

2017-03-24 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3599
  
I would suggest fixing this the following way:

  - There is an upcoming patch that makes the Flink codebase use the 
`HighAvailabilityServices` properly in all places.
  - We introduce a new HA mode called `yarnsimple` or so (next to `none` 
and `zookeeper`) and instantiate a new implementation of 
`HighAvailabilityServices` that is ZooKeeper-independent.
  - The new implementation of the High Availability Services does not use 
ZooKeeper. It uses a leader service that always grants the JobManager 
leadership, but also implements a way for TaskManagers to find the JobManager 
(to be seen how; possibly a file in HDFS or so). It also implements a 
ZooKeeper-independent CompletedCheckpointStore that finds checkpoints by 
maintaining a file with completed checkpoints.

That is all not a "proper" HA setup - it only works as long as there is 
strictly only one master. But it comes close and is ZooKeeper-independent.

Is that what you are looking for?
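The "always grants leadership" service proposed above could be sketched like this. This is a hedged illustration under assumed names (`AlwaysLeaderElectionService`, `Contender`); it is not Flink's actual `HighAvailabilityServices` interface, and a local file stands in for the HDFS file mentioned:

```java
// Sketch of a ZooKeeper-independent "always leader" election service:
// the single JobManager is granted leadership unconditionally, and its
// address is published to a shared file so TaskManagers can find it.
// Interface and class names are illustrative, not Flink's real ones.
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

interface Contender {
    void grantLeadership(UUID sessionId);
}

class AlwaysLeaderElectionService {
    private final Path leaderFile;  // local stand-in for a file in HDFS

    AlwaysLeaderElectionService(Path leaderFile) {
        this.leaderFile = leaderFile;
    }

    // JobManager side: leadership is granted immediately and the leader
    // address is published. Only works if strictly one master runs.
    void start(Contender contender, String address) throws Exception {
        UUID sessionId = UUID.randomUUID();
        Files.write(leaderFile, (address + "," + sessionId).getBytes());
        contender.grantLeadership(sessionId);
    }

    // TaskManager side: retrieve the current leader's address.
    static String lookupLeader(Path leaderFile) throws Exception {
        return new String(Files.readAllBytes(leaderFile)).split(",")[0];
    }
}
```

Note that nothing here "locks" the leader, which is precisely why this only approximates HA while exactly one master exists.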




[GitHub] flink issue #3599: [FLINK-6174][HA]introduce a new election service to make ...

2017-03-23 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3599
  
-1 sorry.

This needs to go to the drawing board (FLIP or detailed JIRA discussion) 
before we consider a change that impacts the guarantees and failure mode 
so heavily.

Some initial comments:

  - In proper HA, you need some service that "locks" the leader; otherwise 
you are vulnerable to the "split brain" problem, where a network partition makes 
multiple JobManagers work as leaders, each with some TaskManagers.

  - In FLIP-6, we are introducing the `HighAvailabilityServices` to allow 
for multiple levels of guarantees with different implementations. I can see 
that introducing a highly available but not split-brain-protected mode is 
interesting, but it should not replace any existing mode; it should be a new mode.





[GitHub] flink issue #3599: [FLINK-6174][HA]introduce a new election service to make ...

2017-03-23 Thread wenlong88
Github user wenlong88 commented on the issue:

https://github.com/apache/flink/pull/3599
  
Hi, I may have described my concern wrongly in the last comment. My concern 
is that on YARN it is possible for two application masters (AMs) to run at the 
same time: 
e.g. the RM launches an AM and then the machine loses its connection to the RM 
for some reason, so the RM launches another AM. The first AM may still be 
running when the second one is launched, in scenarios such as an NM heartbeat 
timeout while the NM keeps running.
When two AMs can run at the same time, we may get into a deadlock using the 
AlwaysLeaderService as follows: 
1. the first AM is granted leadership
2. the second AM is granted leadership
3. the second AM writes its leader info
4. the first AM writes its leader info
5. the first AM is killed by the NM or some cluster monitoring tool, since 
the RM marked the NM as unavailable.
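The interleaving in those five steps can be simulated with a plain last-write-wins leader store (names here are illustrative, not actual Flink code). After step 5, the published leader info points at the AM that no longer exists:

```java
// Minimal simulation of the race above with a last-write-wins leader
// store and no fencing: the stale first AM overwrites the second AM's
// leader info just before being killed. Names are hypothetical.
import java.util.concurrent.atomic.AtomicReference;

class LeaderInfoStore {
    private final AtomicReference<String> leaderAddress =
            new AtomicReference<>();

    void write(String address) {
        leaderAddress.set(address);  // no fencing token, no epoch check
    }

    String read() {
        return leaderAddress.get();
    }
}
```

Running the five steps against this store leaves `read()` returning the dead first AM's address, so TaskManagers would keep contacting a killed process.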





[GitHub] flink issue #3599: [FLINK-6174][HA]introduce a new election service to make ...

2017-03-23 Thread WangTaoTheTonic
Github user WangTaoTheTonic commented on the issue:

https://github.com/apache/flink/pull/3599
  
Thanks for your comments, @wenlong88 .

I also thought about adding retry logic for ZooKeeper failover, but that 
would require modifying `LeaderLatch` in Curator, which is a third-party 
library; otherwise we could only add our own private LeaderLatch by copying 
most of the implementation from Curator.

Even with this AlwaysLeaderService added, JM failover can still go well, 
as the RM will start a new instance.

About FLIP-6: I'll check the solution and see if anything can help with 
this :)




[GitHub] flink issue #3599: [FLINK-6174][HA]introduce a new election service to make ...

2017-03-22 Thread wenlong88
Github user wenlong88 commented on the issue:

https://github.com/apache/flink/pull/3599
  
Hi @WangTaoTheTonic, I think we can improve the reaction of 
ZookeeperLeaderElectionService to a ZooKeeper connection expiry or other errors, 
such as adding a retry before revoking leadership, instead of introducing the 
AlwaysLeaderService. When the problem is caused by errors on the machine the JM 
is running on, we need to trigger a failover so that the JM moves to another 
machine. 
On the other hand, in the coming FLIP-6 implementation, JM failover will 
not trigger cancelling all running tasks.
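The "retry before revoking" idea could look roughly like the following. This is a hedged sketch under invented names (`GracefulRevoker` and its callbacks); it is not the actual ZooKeeperLeaderElectionService code. On a suspended connection it waits a grace period for reconnection before revoking; on a definitively lost session it revokes immediately:

```java
// Sketch of "retry before revoking": on SUSPENDED, schedule revocation
// after a grace period and cancel it implicitly if the connection comes
// back; on LOST, revoke at once. Names are hypothetical.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class GracefulRevoker {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private volatile boolean connected = true;
    private volatile boolean leader = true;

    // Connection suspended: revoke only if still disconnected after the
    // grace period, giving a transient glitch time to heal.
    void onSuspended(long graceMillis) {
        connected = false;
        timer.schedule(() -> { if (!connected) leader = false; },
                       graceMillis, TimeUnit.MILLISECONDS);
    }

    void onReconnected() { connected = true; }

    // Session definitively lost: revoke immediately.
    void onLost() { connected = false; leader = false; }

    boolean hasLeadership() { return leader; }

    void shutdown() { timer.shutdown(); }
}
```

This keeps the failover path intact: a JM on a genuinely broken machine never reconnects, the grace period elapses, and leadership is revoked so a new JM can take over elsewhere.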


