From: David Turner
Sent: Tuesday, 19 February 2019 19:32
To: Hennen, Christian
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CephFS: client hangs
You're attempting to use a mismatching client name and keyring. You want to use
a matching name and keyring. For your example:
ceph-fuse --keyring /etc/ceph/ceph.client.admin.keyring --name client.cephfs -m
192.168.1.17:6789 /mnt/cephfs
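A minimal sketch of what a matching pair could look like, assuming a dedicated key has been exported to /etc/ceph/ceph.client.cephfs.keyring (that path is an assumption):

# match the client.cephfs name with a client.cephfs keyring (keyring path assumed)
ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m 192.168.1.17:6789 /mnt/cephfs
# or keep the admin keyring and mount as client.admin instead
ceph-fuse --keyring /etc/ceph/ceph.client.admin.keyring --name client.admin -m 192.168.1.17:6789 /mnt/cephfs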
-----Original Message-----
From: Yan, Zheng
Sent: Tuesday, 19 February 2019 11:31
To: Hennen, Christian
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CephFS: client hangs
> https://gitlab.uni-trier.de/snippets/77
> MDS log: https://gitlab.uni-trier.de/snippets/79?expanded=true&viewer=simple)
Kind regards
Christian Hennen
Project Manager Infrastructural Services, ZIMK, University of Trier
Germany
From: Ashley Merrick
Sent: Monday, 18 February 2019 16:53
To: Hennen, Christian
Cc: ceph-users@lists.ceph.com
Hi!
>mon_max_pg_per_osd = 400
>
>In the ceph.conf and then restart all the services / or inject the config
>into the running admin
I restarted all MONs, but I assume the OSDs need to be restarted as well?
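For reference, a sketch of how the setting could be applied to the running daemons without a full restart (Luminous-era injectargs; whether the OSDs also need it is the open question above):

# persist the value in /etc/ceph/ceph.conf so it survives restarts
[global]
mon_max_pg_per_osd = 400

# inject it into the running mons without a restart (Luminous syntax)
ceph tell mon.* injectargs '--mon_max_pg_per_osd 400'
# check what a running daemon actually sees (run on the mon host)
ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd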
> MDS show a client got evicted. Nothing else looks abnormal. Do new cephfs
> clients also hang?
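On the eviction itself: a quick sketch of how the evicted session and a possible leftover blacklist entry could be checked (Luminous commands; mds rank 0 and the address are placeholders):

# list client sessions known to the active MDS (rank 0 assumed)
ceph tell mds.0 session ls
# evicted clients are usually blacklisted on the OSDs; list and, if appropriate, remove the entry
ceph osd blacklist ls
ceph osd blacklist rm <client-addr:port/nonce>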
Dear Community,
we are running a Ceph Luminous cluster with CephFS (Bluestore OSDs). During
setup, we made the mistake of configuring the OSDs on RAID volumes.
Initially our cluster consisted of 3 nodes, each housing 1 OSD. Currently,
we are in the process of remediating this. After a loss of m
Dear Community,
here at ZIMK at the University of Trier we operate a Ceph Luminous cluster
as a filer for an HPC environment via CephFS (Bluestore backend). During setup
last year we made the mistake of not configuring the RAID as JBOD, so
initially the 3 nodes only housed 1 OSD each. Currently, we are in the process
of remediating this.