[
https://issues.apache.org/jira/browse/ZOOKEEPER-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722506#comment-16722506
]
miaojianlong commented on ZOOKEEPER-3213:
-----------------------------------------
# The server was restarted and the out log was lost, so I am not sure.
# Yes, it still existed when I created this issue.
# I have updated the log zip; the name of node 48 is
_c_59675fec-2173-480c-acea-9a2f9a337f7b-latch-0000000048
[~maoling]
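A minimal sketch for double-checking that stale latch node with the plain ZooKeeper 3.4 client (the ensemble addresses are taken from the issue environment, client port 2181 and the node path are assumptions); for an ephemeral node the printed ephemeralOwner is the creating session id, which can be compared against the sessions shown by the `cons`/`dump` four-letter-word commands:
{code:java}
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class CheckLatchNode {
    public static void main(String[] args) throws Exception {
        // Ensemble addresses from the issue environment; client port 2181 is assumed.
        ZooKeeper zk = new ZooKeeper(
                "10.35.104.123:2181,10.35.104.124:2181,10.35.104.125:2181",
                30000, event -> { });
        // Full name of node 48 as given above.
        String path = "/spark/leader_election/"
                + "_c_59675fec-2173-480c-acea-9a2f9a337f7b-latch-0000000048";
        Stat stat = zk.exists(path, false);
        if (stat == null) {
            System.out.println("node 48 no longer exists");
        } else {
            // Non-zero for an ephemeral node: the id of the session that created it.
            // If that session is not listed by the `dump` command on the leader,
            // the node is stale and should already have been removed.
            System.out.println("ephemeralOwner=0x"
                    + Long.toHexString(stat.getEphemeralOwner()));
        }
        zk.close();
    }
}
{code}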
> Transaction log has a delete record, but the node is actually not deleted
> -------------------------------------------------------------------------
>
> Key: ZOOKEEPER-3213
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3213
> Project: ZooKeeper
> Issue Type: Bug
> Components: leaderElection, server
> Affects Versions: 3.4.10
> Environment: Linux
> Java 1.8
> ZK 3.4.10
> server1: 10.35.104.123
> server2: 10.35.104.124
> server3: 10.35.104.125
> Reporter: miaojianlong
> Priority: Blocker
> Attachments: transactionlog.zip
>
>
> # First, I found that my Spark (2.2.0) masters had turned to standby (HA mode
> with ZK), and restarting the service did not fix the problem.
> # Then I found that there are three nodes in the /spark/leader_election/
> directory: 48, 93, and 94. These are ephemeral sequential nodes, and 48
> should have timed out long ago. I looked at the transaction log, and it does
> contain a delete record for node 48, but the node itself still exists (see
> the sketch after this description).
> The above phenomenon appears on the two servers 10.35.104.123 and
> 10.35.104.125; on 10.35.104.124 only 93 and 94 are present.
> The logs cannot be exported because the environment is on the company
> intranet.
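> For reference, the delete record for node 48 can be confirmed by dumping the
> transaction log with the LogFormatter class that ships with ZooKeeper 3.4. A
> rough sketch of the invocation (the jar, classpath and log file names are
> illustrative and depend on the local installation):
> {noformat}
> java -cp "zookeeper-3.4.10.jar:lib/*" \
>     org.apache.zookeeper.server.LogFormatter /data/zookeeper/version-2/log.100000001
> {noformat}
> The output lists one transaction per line (session id, zxid, operation and
> path), so the delete of node 48 should show up there if the transaction was
> really logged.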
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)