[
https://issues.apache.org/jira/browse/MESOS-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14877168#comment-14877168
]
haosdent edited comment on MESOS-3430 at 9/19/15 3:13 PM:
----------------------------------------------------------
Thanks for [~jieyu]'s patch. Today I read the original documents that describe the shared
subtrees implementation:
https://www.kernel.org/doc/ols/2006/ols2006v2-pages-209-222.pdf and
https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt
Let me record my understanding of shared/slave mount points here and try to
explain the behaviours above. Suppose the filesystem we use is a shared mount
point, and that we have folders A and B under it. After we
execute
{code}
$ mount --bind A B
{code}
, we have two mount subtrees, and these subtrees belong to the same "peer group".
Mount/unmount events under one peer of this "peer group" propagate to all
other peers in the "peer group". For example, if we execute
{code}
$ mount --bind C B/c
{code}
, we find two new records in /proc/self/mountinfo (alongside the earlier bind mount):
{code}
104 38 8:3 /tmp/A /tmp/B rw,relatime shared:1 - xfs /dev/sda3
rw,seclabel,attr2,inode64,noquota
105 104 8:3 /tmp/C /tmp/B/c rw,relatime shared:1 - xfs /dev/sda3
rw,seclabel,attr2,inode64,noquota
106 38 8:3 /tmp/C /tmp/A/c rw,relatime shared:1 - xfs /dev/sda3
rw,seclabel,attr2,inode64,noquota
{code}
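The optional fields in these records (here {{shared:1}}) carry the propagation information. As a side note, here is a minimal Python sketch (not part of the patch) of how those tags can be pulled out of a mountinfo record, following the field layout documented in proc(5) and sharedsubtree.txt:
{code}
# Sketch: extract the optional propagation fields ("shared:N", "master:N", ...)
# from one /proc/self/mountinfo record. Field layout per proc(5):
#   mount-id parent-id major:minor root mount-point mount-options \
#       [optional fields ...] - fstype source super-options

def propagation_tags(mountinfo_line):
    fields = mountinfo_line.split()
    # Optional fields sit between the sixth field and the "-" separator.
    separator = fields.index('-')
    return fields[6:separator]

line = ("105 104 8:3 /tmp/C /tmp/B/c rw,relatime shared:1 "
        "- xfs /dev/sda3 rw,seclabel,attr2,inode64,noquota")
print(propagation_tags(line))  # ['shared:1'] -> a shared mount
{code}
Because the number of optional fields varies, scanning for the {{-}} separator is the reliable way to slice them out.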
Mounting C at B/c produces a mount event that also mounts C at A/c, so the
mount points under the shared filesystem stay in sync.
When we make a mount point a slave, mount and umount events only propagate
towards it, never from it. So when we execute
{code}
$ mount --make-slave B
$ mount --bind C B/c
{code}
, we find that only B/c becomes a mount point while A/c does not, and only one
new record appears in /proc/self/mountinfo.
If we make a mount point a slave and then mark it shared again, a new "peer
group" is created. This new "peer group" is distinct from that of the outside
shared mount, and mount/unmount events in the new "peer group" sync between
all of its peers.
According to the documents mentioned above, B then becomes a slave-and-shared mount:
it still receives mount events from its master (the outside shared mount) and
shares them with its own peers (the new peer group).
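Per sharedsubtree.txt, the cases above can be read straight off the mountinfo tags: only a {{shared:N}} tag means shared, only a {{master:N}} tag means slave, both together mean slave-and-shared, and neither means private. A small hypothetical Python helper (illustration only, not from the patch) that classifies them:
{code}
# Sketch: classify a mount's propagation type from its mountinfo optional
# fields, following the tag semantics in sharedsubtree.txt.

def propagation_type(tags):
    shared = any(t.startswith('shared:') for t in tags)
    slave = any(t.startswith('master:') for t in tags)
    if shared and slave:
        return 'slave-and-shared'
    if shared:
        return 'shared'
    if slave:
        return 'slave'
    return 'private'

print(propagation_type(['shared:1']))              # shared
print(propagation_type(['master:1']))              # slave
# B after "--make-slave" followed by "--make-shared":
print(propagation_type(['shared:2', 'master:1']))  # slave-and-shared
{code}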
> LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithoutRootFilesystem fails
> on CentOS 7.1
> ------------------------------------------------------------------------------------------
>
> Key: MESOS-3430
> URL: https://issues.apache.org/jira/browse/MESOS-3430
> Project: Mesos
> Issue Type: Bug
> Affects Versions: 0.25.0
> Reporter: Marco Massenzio
> Assignee: Jie Yu
> Labels: ROOT_Tests, flaky-test
> Attachments: verbose.log
>
>
> Just ran ROOT tests on CentOS 7.1 and had the following failure (clean build,
> just pulled from {{master}}):
> {noformat}
> [ RUN ]
> LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithoutRootFilesystem
> ../../src/tests/containerizer/filesystem_isolator_tests.cpp:498: Failure
> (wait).failure(): Failed to clean up an isolator when destroying container
> '366b6d37-b326-4ed1-8a5f-43d483dbbace' :Failed to unmount volume
> '/tmp/LinuxFilesystemIsolatorTest_ROOT_PersistentVolumeWithoutRootFilesystem_KXgvoH/sandbox/volume':
> Failed to unmount
> '/tmp/LinuxFilesystemIsolatorTest_ROOT_PersistentVolumeWithoutRootFilesystem_KXgvoH/sandbox/volume':
> Invalid argument
> ../../src/tests/utils.cpp:75: Failure
> os::rmdir(sandbox.get()): Device or resource busy
> [ FAILED ]
> LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithoutRootFilesystem (1943
> ms)
> [----------] 1 test from LinuxFilesystemIsolatorTest (1943 ms total)
> [----------] Global test environment tear-down
> [==========] 1 test from 1 test case ran. (1951 ms total)
> [ PASSED ] 0 tests.
> [ FAILED ] 1 test, listed below:
> [ FAILED ]
> LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithoutRootFilesystem
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)