From: David Turner
Sent: Tuesday, 19 February 2019 19:32
To: Hennen, Christian
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CephFS: client hangs
You're attempting to use a mismatching client name and keyring; the name and
the keyring need to match. For your example, you […]

> […k]eyring and connecting to
> 127.0.0.1:6789
>
> ceph-fuse --keyring /etc/ceph/ceph.client.admin.keyring --name
> client.cephfs -m 192.168.1.17:6789 /mnt/cephfs
>
> -----Original Message-----
> From: Yan, Zheng
> Sent: Tuesday, 19 February 2019 11:31
> To: Hennen, Christian
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] CephFS: client hangs
> Christian Hennen
>
> Project Manager Infrastructural Services ZIMK University of Trier
> Germany
>
> From: Ashley Merrick
> Sent: Monday, 18 February 2019 16:53
> To: Hennen, Christian
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] CephFS: client hangs
Correct, yes; from my experience the OSDs as well.

On Mon, 18 Feb 2019 at 11:51 PM, Hennen, Christian <
christian.hen...@uni-trier.de> wrote:
> Hi!
>
> >mon_max_pg_per_osd = 400
> >
> >In the ceph.conf, and then restart all the services / or inject the config
> >into the running admin
>
> I restarted all MONs, but I assume the OSDs need to be restarted as well?
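
(For reference, on a systemd-based deployment restarting the OSDs after a
ceph.conf change would look something like the sketch below; the unit names
assume the standard packaged systemd units, run per OSD host.)

# restart every OSD daemon on this host
systemctl restart ceph-osd.target

# or restart a single OSD, e.g. osd.0
systemctl restart ceph-osd@0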

The MDS shows a client got evicted. Nothing else looks abnormal. Do new cephfs
clients […]
On Mon, Feb 18, 2019 at 10:55 PM Hennen, Christian wrote:
>
> Dear Community,
>
> we are running a Ceph Luminous Cluster with CephFS (Bluestore OSDs). During
> setup, we made the mistake of configuring the OSDs on RAID Volumes. Initially
> our cluster consisted of 3 nodes, each housing 1 OSD. […]
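
(A sketch of how the eviction could be inspected from the cluster side; the
MDS daemon name is a placeholder, and the first command assumes access to the
admin socket on the MDS host:)

# list current client sessions on the MDS
ceph daemon mds.<name> session ls

# evicted CephFS clients are usually blacklisted; list blacklist entries
ceph osd blacklist ls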
I know this may sound simple, but have you tried raising the per-OSD PG
limit? I'm sure I have seen people in the past with the same kind of issue
as you, where it was just I/O being blocked due to a limit that was not
actively logged.

mon_max_pg_per_osd = 400

In the ceph.conf, and then restart all the […]
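
(For illustration, the change would go into ceph.conf, or it can be injected
into the running monitors without a restart. The mon ID below is a
placeholder; repeat per monitor, and per the thread the OSDs may need the
same treatment.)

# /etc/ceph/ceph.conf
[global]
mon_max_pg_per_osd = 400

# or inject into a running daemon
ceph tell mon.<id> injectargs '--mon_max_pg_per_osd 400'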
Dear Community,

we are running a Ceph Luminous Cluster with CephFS (Bluestore OSDs). During
setup, we made the mistake of configuring the OSDs on RAID Volumes.
Initially our cluster consisted of 3 nodes, each housing 1 OSD. Currently,
we are in the process of remediating this. After a loss of […]
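
(Related sketch: with only 3 OSDs, the per-OSD PG count is easy to check.
These are standard status commands, not taken from the thread:)

# per-OSD utilization, including the PGS column
ceph osd df

# overall cluster health and PG state
ceph -s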