Thanks! I'll check it out.
On 24 Nov 2017 at 17:58, "Yan, Zheng" wrote:
> On Fri, Nov 24, 2017 at 4:59 PM, Zhang Qiang wrote:
> > Hi all,
> >
> > To observe what will happen to the ceph-fuse mount if the network is
> > down, we blocked network connections to
Hi all,
with our Ceph Luminous CephFS we're plagued with "failed to open ino"
messages. These don't seem to affect daily business (in terms of file
access). (There's a backup performance issue that may be related, but
I'll report on that in a different thread.)
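In case anyone wants to check for the same symptom: the messages turn up
in the MDS log, so something like the following should find them (the log
path may differ on your installation):

    grep "failed to open ino" /var/log/ceph/ceph-mds.*.log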
Our Ceph
On 23/11/17 17:19, meike.talb...@women-at-work.org wrote:
> Hello,
>
> in our present Ceph cluster we used to have 12 HDD OSDs per host.
> All OSDs shared a common SSD for journaling.
> The SSD was used as root device and the 12 journals were files in the
> /usr/share directory, like this:
>
>
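A journal layout like that (journals kept as plain files on a shared SSD)
would usually be expressed in ceph.conf roughly like this; the path and
size below are made up for illustration:

    [osd]
    osd journal size = 5120                  # journal size in MB

    [osd.0]
    osd journal = /usr/share/osd.0/journal   # journal file on the shared SSD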
On Fri, Nov 24, 2017 at 4:59 PM, Zhang Qiang wrote:
> Hi all,
>
> To observe what will happen to the ceph-fuse mount if the network is down,
> we blocked network connections to all three monitors with iptables. If we
> restore the network immediately (within minutes), the
Hi all,
To observe what will happen to the ceph-fuse mount if the network is down,
we blocked network connections to all three monitors with iptables. If we
restore the network immediately (within minutes), the blocked I/O requests
will resume and everything will be back to normal.
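The blocking was done with iptables rules roughly like the following (the
monitor addresses are placeholders and 6789 is the default monitor port;
adjust both for your cluster):

    # drop all client traffic to the three monitors
    iptables -A OUTPUT -p tcp -d 192.168.1.11 --dport 6789 -j DROP
    iptables -A OUTPUT -p tcp -d 192.168.1.12 --dport 6789 -j DROP
    iptables -A OUTPUT -p tcp -d 192.168.1.13 --dport 6789 -j DROP

    # "restore the network" = delete the rules again (repeat per monitor)
    iptables -D OUTPUT -p tcp -d 192.168.1.11 --dport 6789 -j DROP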
But if we continue