[ceph-users] Re: cephfs file layouts, empty objects in first data pool

2020-02-10 Thread Håkan T Johansson
On Mon, 10 Feb 2020, Gregory Farnum wrote: > On Mon, Feb 10, 2020 at 12:29 AM Håkan T Johansson wrote: >> On Mon, 10 Feb 2020, Gregory Farnum wrote: >>> On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote: >>>> Hi, running 14.2.6, Debian Buster (backports). Have set up a

[ceph-users] Re: Running cephadm as a nonroot user

2020-02-10 Thread Jason Borden
I missed a line while pasting the previous message: # ceph orchestrator set backend cephadm

[ceph-users] Re: Running cephadm as a nonroot user

2020-02-10 Thread Jason Borden
Ok, I've been digging around a bit in the code and made progress, but haven't got it all working yet. Here's what I've done:
# yum install cephadm
# ln -s ../sbin/cephadm /usr/bin/cephadm    # needed to reference the correct path
# cephadm bootstrap --output-config /etc/ceph/ceph.conf
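
After a bootstrap along those lines, the orchestrator backend can be checked and, if needed, pointed at cephadm (these are the pre-release Octopus commands quoted elsewhere in this thread; later releases renamed them to 'ceph orch'):

# ceph mgr module enable cephadm
# ceph orchestrator set backend cephadm
# ceph orchestrator status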

[ceph-users] How to monitor Ceph MDS operation latencies when slow cephfs performance

2020-02-10 Thread jalagam . ceph
Hello, CephFS operations are slow in our cluster; I see low operation counts and low throughput in the pools, and low usage of all other resources as well. I think it is MDS operations that are causing the issue. I increased mds_cache_memory_limit from 1 GB to 3 GB but am not seeing any improvements in the
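
For anyone digging into the same question, a few standard commands that expose MDS operation latencies and in-flight requests (run on the host of the active MDS; mds.<name> is a placeholder):

# ceph fs status
# ceph daemon mds.<name> dump_ops_in_flight
# ceph daemon mds.<name> perf dump
# ceph daemonperf mds.<name>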

[ceph-users] Re: cephfs file layouts, empty objects in first data pool

2020-02-10 Thread Dave Hall
I was also confused by this topic and had intended to post a question this week. The documentation I recall reading said something like 'if you want to use erasure coding on a CephFS, you should use a small replicated data pool as the first pool, and your erasure-coded pool as the second.'
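
A minimal sketch of that arrangement (the filesystem, pool, and mount-point names are made up for illustration; the EC pool also needs overwrites enabled):

# ceph osd pool set ec_pool allow_ec_overwrites true
# ceph fs new myfs meta_pool rep_pool
# ceph fs add_data_pool myfs ec_pool
# setfattr -n ceph.dir.layout.pool -v ec_pool /mnt/myfs/data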

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.6

2020-02-10 Thread Joe Bardgett
Has anyone attempted to use gdbpmp since 14.2.6 to grab data? I have not been able to successfully do it on my clusters. It has just been hanging at attaching to process. If you have been able to, would you be available for a discussion regarding your configuration? Thanks, Joe Bardgett
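
For comparison, a typical gdbpmp invocation looks roughly like this (flags per the upstream README, quoted from memory, so treat them as assumptions):

# gdbpmp.py -p $(pidof ceph-mgr) -n 1000 -o mgr.gdbpmp
# gdbpmp.py -i mgr.gdbpmp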

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-10 Thread Marco Mühlenbeck
Hi all, I am new here. I am a little bit confused by the discussion about the amount of RAM for the metadata server. In the SUSE Deployment Guide for SUSE Enterprise Storage 6 (release 2020-01-27), chapter "2.2 Minimum Cluster Configuration", there is a sentence: "... Metadata
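
For reference, the setting being discussed is runtime-adjustable; raising it looks like this (the 8 GiB value is only an example):

# ceph config set mds mds_cache_memory_limit 8589934592

Note that the MDS process RSS usually runs noticeably above mds_cache_memory_limit, which is one reason sizing guides add headroom on top of the cache limit.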

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-10 Thread Joe Comeau
Try from the admin node: ceph osd df and ceph osd status. Thanks, Joe

[ceph-users] Re: Running cephadm as a nonroot user

2020-02-10 Thread Jason Borden
Thanks for the quick reply! I am using the cephadm package. I just wasn't aware of the user that was created as part of the package install. My /etc/sudoers.d/cephadm seems to be incorrect: it gives root permission to /usr/bin/cephadm, but cephadm is installed in /usr/sbin. That is easily
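
If anyone else hits this, the fix is presumably a one-line change to the drop-in; a sketch of what the corrected /etc/sudoers.d/cephadm might contain (the exact packaged wording is an assumption):

cephadm ALL=(ALL) NOPASSWD: /usr/sbin/cephadm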

[ceph-users] ERROR: osd init failed: (1) Operation not permitted

2020-02-10 Thread Ml Ml
Hello List, first of all: yes, I made mistakes. Now I am trying to recover :-/ I had a healthy 3-node cluster which I wanted to convert to a single one. My goal was to reinstall a fresh 3-node cluster and start with 2 nodes. I was able to turn it, still healthy, from a 3-node cluster into a 2-node
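
Since "Operation not permitted" at OSD init is very often a cephx failure, a cheap first check is whether the OSD's on-disk key still matches what the monitors hold (osd.0 and the path are placeholders):

# ceph auth get osd.0
# cat /var/lib/ceph/osd/ceph-0/keyring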

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-10 Thread ceph
Hello MJ, perhaps your PGs are unbalanced? Try: ceph osd df tree. Greetz, Mehmet. On 10 February 2020 14:58:25 CET, lists wrote: >Hi, > >We would like to replace the current Seagate ST4000NM0034 HDDs in our >ceph cluster with SSDs, and before doing that, we would like to >check out >the typical

[ceph-users] Re: cephfs file layouts, empty objects in first data pool

2020-02-10 Thread Gregory Farnum
On Mon, Feb 10, 2020 at 12:29 AM Håkan T Johansson wrote: > On Mon, 10 Feb 2020, Gregory Farnum wrote: >> On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote: >>> Hi, running 14.2.6, Debian Buster (backports). Have set up a cephfs with 3 data
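
For anyone following the thread: the per-file objects left in the first data pool are zero-length but still carry the file's backtrace in an xattr, which is why they exist even when the file data lives in another pool. They can be inspected directly (pool name from the original post; the object name is a placeholder):

# rados -p myfs_data ls | head
# rados -p myfs_data listxattr <object-name>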

[ceph-users] Re: Running cephadm as a nonroot user

2020-02-10 Thread Sage Weil
There is a 'packaged' mode that does this, but it's a bit different:
- you have to install the cephadm package on each host
- the package sets up a cephadm user and sudoers.d file
- mgr/cephadm will ssh in as that user and sudo as needed
The net is that you have to make sure cephadm is installed
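
Under those assumptions, the per-host setup plus pointing mgr/cephadm at the packaged user looks roughly like this (commands from memory of the Octopus-era cephadm mgr module, so treat this as a sketch; <host> is a placeholder):

# yum install cephadm    # on every host
# ceph cephadm set-user cephadm
# ceph cephadm get-pub-key > ceph.pub
# ssh-copy-id -f -i ceph.pub cephadm@<host>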

[ceph-users] Running cephadm as a nonroot user

2020-02-10 Thread Jason Borden
We have been using ceph-deploy in our existing cluster, running as a non-root user with sudo permissions. I've been working on getting an Octopus cluster working using cephadm. During bootstrap I ran into an "execnet.gateway_bootstrap.HostNotFound" issue. It turns out that the problem was caused

[ceph-users] Fwd: PrimaryLogPG.cc: 11550: FAILED ceph_assert(head_obc)

2020-02-10 Thread Jake Grimmett
Dear All, Following a clunky* cluster restart, we had 23 "objects unfound" and 14 PGs in recovery_unfound. Seeing no way to recover the unfound objects, we decided to mark the unfound objects in one pg as lost... [root@ceph1 bad_oid]# ceph pg 5.f2f mark_unfound_lost delete pg has 2 objects unfound and
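
For reference, the commands involved (pg id taken from the message; 'revert' is the alternative to 'delete' that rolls unfound objects back to a previous version instead of discarding them):

# ceph pg 5.f2f list_unfound
# ceph pg 5.f2f mark_unfound_lost delete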

[ceph-users] extract disk usage stats from running ceph cluster

2020-02-10 Thread lists
Hi, We would like to replace the current Seagate ST4000NM0034 HDDs in our ceph cluster with SSDs, and before doing that, we would like to check out the typical usage of our current drives over the last few years, so we can select the best (price/performance/endurance) SSD to replace them with.
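
One way to approximate lifetime workload without historical monitoring data is the drives' own SMART counters (attribute names differ between vendors and between SAS and SATA drives, so the grep pattern below is only a guess):

# smartctl -a /dev/sdX | grep -i -e written -e workload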

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-10 Thread Samy Ascha
> On 6 Feb 2020, at 11:23, Stefan Kooman wrote: > >> Hi! >> >> I've confirmed that the write IO to the metadata pool is coming from active >> MDSes. >> >> I'm experiencing very poor write performance on clients and I would like to >> see if there's anything I can do to optimise the
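
One cluster-side way to watch that write traffic is the per-pool rate counters (the metadata pool name is assumed):

# ceph osd pool stats cephfs_metadata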

[ceph-users] Re: cephfs file layouts, empty objects in first data pool

2020-02-10 Thread Håkan T Johansson
On Mon, 10 Feb 2020, Gregory Farnum wrote: On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote: Hi, running 14.2.6, Debian Buster (backports). Have set up a cephfs with 3 data pools and one metadata pool: myfs_data, myfs_data_hdd, myfs_data_ssd, and