Lukasz Osipiuk commented on ZOOKEEPER-710:
This is application stuff.
Let's take the exists function from our code, which wraps the ZooKeeper library, as an example.
Its declaration is:
ZooKeeper::ErrorCode::Type ZooKeeper::exists(const std::string& path,
const WatchFunction& watchFunction,
As you can see, the caller passes a high-level watchFunction to be called.
Our wrapper maintains a map (node, operation type) -> watchFunction. An internal
watch function, which is used in the zoo_* calls, is responsible for calling the
user-provided watchFunctions stored in the map whenever a change concerning one
of the nodes occurs.
When the client disconnects from ZooKeeper, all watches are invalidated (at
least AFAIK), so in that case we call all of the user-provided functions.
We do this as soon as the client is reconnected to ZooKeeper, simulating a
situation in which a change occurred on every node the application was
interested in. That is what the fireAllWatchFunctions function does.
Is it somewhat clear?
> permanent ZSESSIONMOVED error after client app reconnects to zookeeper cluster
> Key: ZOOKEEPER-710
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-710
> Project: Zookeeper
> Issue Type: Bug
> Affects Versions: 3.2.2
> Environment: debian lenny; ia64; xen virtualization
> Reporter: Lukasz Osipiuk
> Attachments: app1.log.2010-03-16.gz, app2.log.2010-03-16.gz,
> zookeeper-node1.log.2010-03-16.gz, zookeeper-node2.log.2010-03-16.gz,
> zookeeper-node3.log.2010-03-16.gz
> Originally the problem was described on the Users mailing list; below I
> restate it in a more organized form.
> We occasionally (a few times a day) observe that our client application
> disconnects from the ZooKeeper cluster.
> The application is written in C++ and uses the libzookeeper_mt library, in
> version 3.2.2.
> The disconnects we are observing are probably related to problems with
> our network infrastructure - we see periods of heavy packet loss
> between machines in our DC.
> Sometimes, after the client application (i.e. the ZooKeeper library)
> reconnects to the cluster, we observe that all subsequent requests return the
> ZSESSIONMOVED error. Restarting the client app helps - we always pass 0 as
> the clientid to the zookeeper_init function, so the old session is not reused.
> On 16-03-2010 we observed a few occurrences of the problem. Examples:
> - 22:08; client IP 10.1.112.60 (app1); sessionID 0x22767e1c9630000
> - 14:21; client IP 10.1.112.61 (app2); sessionID 0x324dcc1ba580085
> I attach logs of the cluster and application nodes (only entries concerning the above sessions):
> - [^zookeeper-node1.log.2010-03-16.gz] - logs of ZooKeeper cluster node 1
> - [^zookeeper-node2.log.2010-03-16.gz] - logs of ZooKeeper cluster node 2
> - [^zookeeper-node3.log.2010-03-16.gz] - logs of ZooKeeper cluster node 3
> - [^app1.log.2010-03-16.gz] - application logs of app1 10.1.112.60
> - [^app2.log.2010-03-16.gz] - application logs of app2 10.1.112.61
> I also did some analysis of the case at 22:08:
> - Network glitch which resulted in problem occurred at about 22:08.
> - From what I can see, node2 had been the leader since 17:48 and this did not
> change later that day.
> - The client had been connected to node2 since 17:50.
> - At around 22:09 the client tried to connect to every node (1,2,3).
> The connections to node1 and node3 were closed
> with the exception "Exception causing close of session 0x22767e1c9630000
> due to java.io.IOException: Read error".
> The connection to node2 stayed alive.
> - All subsequent operations were refused with the ZSESSIONMOVED error,
> visible both on the client and on the server side.