[ https://issues.apache.org/jira/browse/ZOOKEEPER-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720080#comment-16720080 ]

maoling commented on ZOOKEEPER-3213:
------------------------------------

[~miaojl]
1. Was the node 10.35.104.124 the leader at that moment?
2. Has the data inconsistency existed for a long time?
3. Can you show us the transaction log entries for the delete of node 48, 
which all three nodes have?
4. It would be even better if you could provide the server logs from all 
three nodes around that moment.
5. Some similar issues can be found 
[here|https://issues.apache.org/jira/browse/ZOOKEEPER-1809?jql=project%20%3D%20ZOOKEEPER%20AND%20text%20~%20%22ephemeral%20delete%22]
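For item 3, the transaction logs are in a binary format, but ZooKeeper 3.4.x ships a LogFormatter class that dumps them in readable form. A minimal sketch of the invocation; the jar versions and the log path under dataLogDir are assumptions, so adjust them to your installation:

```shell
# Dump a ZooKeeper 3.4.x transaction log in human-readable form.
# The classpath entries and the log file name below are examples only;
# point them at your actual install directory and dataLogDir.
java -cp zookeeper-3.4.10.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:lib/log4j-1.2.16.jar \
  org.apache.zookeeper.server.LogFormatter /data/zookeeper/version-2/log.100000001 \
  | grep leader_election
```

Running this on each of the three servers and comparing the delete records for node 48 would show whether all replicas logged the same transaction.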
 

> Transaction log records a delete, but the node is actually not deleted
> ----------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-3213
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3213
>             Project: ZooKeeper
>          Issue Type: Bug
>          Components: leaderElection, server
>    Affects Versions: 3.4.10
>         Environment: Linux
> Java 1.8
> ZK 3.4.10
> server1: 10.35.104.123
> server2: 10.35.104.124
> server3: 10.35.104.125
>            Reporter: miaojianlong
>            Priority: Blocker
>
> # First I found that my Spark (2.2.0) master turned to standby (HA mode 
> with ZK) and I could not restart the service to recover from the problem.
>  # Then I found that there were three nodes under the /spark/leader_election/ 
> directory: 48, 93, and 94. These are ephemeral sequential nodes, and 48 
> should have been removed when its session timed out. I looked at the 
> transaction log and it does contain a record of the delete of 48, but the 
> node still exists in the data tree.
> The above phenomenon appears on the two nodes 10.35.104.123 and 
> 10.35.104.125; 10.35.104.124 has only 93 and 94.
> I am unable to export the full logs because the environment is on the 
> company intranet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
