[ https://issues.apache.org/jira/browse/OAK-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomek Rękawek reassigned OAK-6547:
----------------------------------

    Assignee: Tomek Rękawek

> The machine id conflicts when running Oak in Docker containers
> --------------------------------------------------------------
>
>                 Key: OAK-6547
>                 URL: https://issues.apache.org/jira/browse/OAK-6547
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: documentmk
>            Reporter: Tomek Rękawek
>            Assignee: Tomek Rękawek
>            Priority: Minor
>             Fix For: 1.8
>
>
> When running multiple Oak cluster instances in Docker containers on the
> same host, the following messages can be spotted:
> {noformat}
> 10.08.2017 21:10:34.047 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo Found an existing 
> possibly active cluster node info (1) for this instance: mac:0242ac120003//, 
> will try use it.
> 10.08.2017 21:10:34.047 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo Waiting for 
> cluster node 1's lease to expire: 96s left
> 10.08.2017 21:10:39.049 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.document.ClusterNodeInfo Waiting for 
> cluster node 1's lease to expire: 91s left
> {noformat}
> This is caused by the fact that all the instances have exactly the same
> machine id and process id when running in containers.
> My proposal is to use the Docker container id as the machine id. It can
> be read from {{/proc/1/cgroup}}:
> {noformat}
> 14:name=systemd:/docker/9a98ded1e2a49d87d59462b22e7eba647a7a2d4a46756a0839fe002db58a98f8
> 13:pids:/docker/9a98ded1e2a49d87d59462b22e7eba647a7a2d4a46756a0839fe002db58a98f8
> ...
> 1:name=openrc:/docker
> {noformat}
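> A minimal sketch of how the container id could be extracted (the class
> and method names below are hypothetical, not existing Oak API; it
> assumes the id is a 64-character hex string following a {{/docker/}}
> path segment, as in the output above):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Paths;
> import java.util.Optional;
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
> 
> public class DockerMachineId {
> 
>     // matches e.g. ".../docker/9a98ded1e2a4..." in /proc/1/cgroup
>     private static final Pattern CONTAINER_ID =
>             Pattern.compile("/docker/([0-9a-f]{64})");
> 
>     /** Returns the container id, or empty when not running in Docker. */
>     public static Optional<String> containerId() {
>         try {
>             for (String line : Files.readAllLines(Paths.get("/proc/1/cgroup"))) {
>                 Matcher m = CONTAINER_ID.matcher(line);
>                 if (m.find()) {
>                     return Optional.of(m.group(1));
>                 }
>             }
>         } catch (IOException e) {
>             // file missing or unreadable - treat as "not in Docker"
>         }
>         return Optional.empty();
>     }
> }
> {code}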
> The same file may be used to determine whether Oak is running inside
> Docker or not.
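> The detection check could look like this (again just a sketch with a
> hypothetical class name, to illustrate the idea):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Paths;
> 
> public class DockerDetector {
> 
>     /** Heuristic: any cgroup path containing "/docker" implies a container. */
>     public static boolean insideDocker() {
>         try {
>             return Files.readAllLines(Paths.get("/proc/1/cgroup")).stream()
>                     .anyMatch(line -> line.contains("/docker"));
>         } catch (IOException e) {
>             return false; // file absent, e.g. not on Linux
>         }
>     }
> }
> {code}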



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
