Xen or KVM? [Either] would.

So if you don't hit the low memory situation, you will not see the
deadlock, and you can run like this for years without a problem. I have.
But you are most likely to run out of memory during [...], which
could compound your problems.
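
(The risk window described above is memory pressure combined with dirty
cephfs pages. A rough way to watch for it, as a sketch using only the
standard /proc interface, nothing ceph-specific:

  # watch free/dirty memory on an osd node that also mounts cephfs
  grep -E 'MemFree|MemAvailable|Dirty' /proc/meminfo

MemAvailable shrinking while Dirty grows is the pattern to worry about.)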
On 3/7/19 3:56 AM, Marc Roos wrote:
> Container = same kernel, problem is with processes using the same
> kernel.

Container = same kernel, problem is with processes using the same
kernel.
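
(Easy to verify: a container reports the host's kernel. An illustrative
check, assuming Docker and the busybox image are available:

  uname -r                           # on the host
  docker run --rm busybox uname -r   # prints the same version string

So a cephfs kernel mount next to containerized OSDs carries the same
deadlock risk as on a bare-metal osd node.)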
-----Original Message-----
From: Daniele Riccucci [mailto:devs...@posteo.net]
Sent: 07 March 2019 00:18
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mount cephfs on ceph servers

Hello,
is the deadlock risk still an issue in containerized deployments? For
example with OSD daemons in containers and mounting the filesystem on
the host machine?
Thank you.
Daniele
On 06/03/19 16:40, Jake Grimmett wrote:
> Just to add "+1" on this datapoint, based on one month usage on [...]

Just to add "+1" on this datapoint, based on one month usage on Mimic
13.2.4: essentially "it works great for us".

Prior to this, we had issues with the kernel driver on 12.2.2. This
could have been due to limited RAM on the osd nodes (128GB / 45 OSD),
and an older kernel. Upgrading the RAM to [...]
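
(For scale: 128 GB across 45 OSDs is roughly 2.8 GB per OSD daemon,
noticeably under the ~4 GB per OSD commonly suggested for BlueStore
clusters, consistent with the limited-RAM explanation.)
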
On 06/03/2019 12:07, Zhenshi Zhou wrote:
> Hi,
> I'm gonna mount cephfs from my ceph servers for some reason,
> including monitors, metadata servers and osd servers. I know it's
> not a best practice. But what is the exact potential danger if I mount
> cephfs from its own server?

As a datapoint, I have [...]

The general advice has been to not use the kernel client on an osd node,
as you may see a deadlock under certain conditions. Using the fuse
client should be fine, or use the kernel client inside a VM.
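
(For reference, the two mount styles look roughly like this; the monitor
address, client name, and paths below are placeholders, not from this
thread:

  # FUSE client (userspace, the safe choice on an osd node)
  ceph-fuse -n client.admin -m mon1:6789 /mnt/cephfs

  # kernel client (avoid on osd nodes; fine inside a VM)
  mount -t ceph mon1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

The fuse client keeps cephfs out of the kernel's memory-reclaim path,
which is why it is usually considered the safer option here.)
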
On Wed, 6 Mar 2019, 03:07 Zhenshi Zhou wrote:
> Hi,
>
> I'm gonna mount cephfs from my ceph [...]

Hi,
I'm gonna mount cephfs from my ceph servers for some reason,
including monitors, metadata servers and osd servers. I know it's
not a best practice. But what is the exact potential danger if I mount
cephfs from its own server?
Thanks