Re: [ceph-users] Ceph for "home lab" / hobbyist use?

2019-09-10 Thread Hector Martin
I run Ceph on both a home server and a personal offsite backup server (both single-host setups). It's definitely feasible and comes with a lot of advantages over traditional RAID, ZFS, and the like. The main disadvantages are performance overhead and resource consumption. On 07/09/2019
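For readers trying the same thing: a single-host setup usually needs the CRUSH failure domain lowered from host to OSD so that replicas can land on one machine. A minimal ceph.conf sketch under that assumption (the address and values are illustrative, not taken from the poster's setup, and these options matter at deployment time):

  # /etc/ceph/ceph.conf -- minimal single-host sketch (illustrative only)
  [global]
  fsid = <your-fsid>
  mon host = 192.168.1.10            # assumption: the single server's address
  osd pool default size = 2          # replicate across OSDs, not hosts
  osd pool default min size = 1
  osd crush chooseleaf type = 0      # 0 = osd, so replicas may share one host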

[ceph-users] ceph fs with backtrace damage

2019-09-10 Thread Fyodor Ustinov
Hi! After an MDS scrub I got the error: 1 MDSs report damaged metadata
# ceph tell mds.0 damage ls
[
    {
        "damage_type": "backtrace",
        "id": 712325338,
        "ino": 1099526730308,
        "path": "/erant/smb/public/docs/3. Zvity/1. Prodazhi/~$Data-center 2019.08.xlsx"
    },
    {
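For readers hitting the same warning, a hedged sketch of the usual follow-up (the rank and id are taken from the output above; the scrub option syntax varies between releases, so treat this as a starting point rather than a guaranteed fix):

  # list the damage table again and note the ids
  ceph tell mds.0 damage ls
  # re-scrub the affected subtree and let the MDS rewrite backtraces
  ceph tell mds.0 scrub start /erant/smb/public/docs recursive,repair
  # once the path is verified healthy, drop the stale damage entry by id
  ceph tell mds.0 damage rm 712325338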

Re: [ceph-users] Ceph RBD Mirroring

2019-09-10 Thread Jason Dillaman
On Tue, Sep 10, 2019 at 2:08 PM Oliver Freyermuth wrote: > > Dear Jason, > > On 2019-09-10 18:50, Jason Dillaman wrote: > > On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth > > wrote: > >> > >> Dear Cephalopodians, > >> > >> I have two questions about RBD mirroring. > >> > >> 1) I can not get

Re: [ceph-users] reproducible rbd-nbd crashes

2019-09-10 Thread Marc Schöchlin
Hello Mike, on 03.09.19 at 04:41, Mike Christie wrote: > On 09/02/2019 06:20 AM, Marc Schöchlin wrote: >> Hello Mike, >> >> I am having a quick look at this while on vacation because my coworker >> reports daily and continuous crashes ;-) >> Any updates here (I am aware that this is not very easy to

Re: [ceph-users] reproducible rbd-nbd crashes

2019-09-10 Thread Marc Schöchlin
Hello Mike, as described I set all the settings. Unfortunately it also crashed with these settings :-( Regards Marc
[Tue Sep 10 12:25:56 2019] Btrfs loaded, crc32c=crc32c-intel
[Tue Sep 10 12:25:57 2019] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[Tue Sep 10
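One knob that commonly comes up with rbd-nbd hangs is the nbd device timeout. A hedged sketch of such a mapping (the pool/image name and value are assumptions, and the flag has been renamed across releases, e.g. --timeout vs. --io-timeout):

  # map with an explicit nbd timeout so slow I/O does not trip the kernel's nbd watchdog
  rbd-nbd map rbd/vm-disk-01 --timeout 120    # assumption: image name; newer releases use --io-timeout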

Re: [ceph-users] reproducible rbd-nbd crashes

2019-09-10 Thread Jason Dillaman
On Tue, Sep 10, 2019 at 9:46 AM Marc Schöchlin wrote: > > Hello Mike, > > as described i set all the settings. > > Unfortunately it crashed also with these settings :-( > > Regards > Marc > > [Tue Sep 10 12:25:56 2019] Btrfs loaded, crc32c=crc32c-intel > [Tue Sep 10 12:25:57 2019] EXT4-fs (dm-0):

Re: [ceph-users] regularly 'no space left on device' when deleting on cephfs

2019-09-10 Thread Kenneth Waegeman
Hi Paul, all, thanks! But I can't seem to find out how to debug the purge queue. When I check the purge queue, I get these numbers:
[root@mds02 ~]# ceph daemon mds.mds02 perf dump | grep -E 'purge|pq'
    "purge_queue": {
        "pq_executing_ops": 0,
        "pq_executing": 0,
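A hedged sketch of further purge-queue inspection, in case it helps others chasing the same counters (the daemon name mds.mds02 is taken from the output above; <fsname> is a placeholder, and while "journal inspect" is read-only, check the docs before running other journal-tool commands against a live MDS):

  # dump the full purge_queue section instead of grepping everything
  ceph daemon mds.mds02 perf dump purge_queue
  # inspect the on-disk purge queue journal for rank 0 of the filesystem
  cephfs-journal-tool --rank=<fsname>:0 --journal=purge_queue journal inspect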

Re: [ceph-users] Ceph RBD Mirroring

2019-09-10 Thread Oliver Freyermuth
Dear Jason, On 2019-09-10 23:04, Jason Dillaman wrote: > On Tue, Sep 10, 2019 at 2:08 PM Oliver Freyermuth > wrote: >> >> Dear Jason, >> >> On 2019-09-10 18:50, Jason Dillaman wrote: >>> On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth >>> wrote: Dear Cephalopodians, I

[ceph-users] Using same name for rgw / beast web front end

2019-09-10 Thread Eric Choi
Hi there, we have been using Ceph for a few years now, and it's only now that I've noticed we have been using the same name for all RGW hosts. As a result, ceph -s shows: rgw: 1 daemon active (..) despite having more than 10 RGW hosts. * What are the side effects of doing this? Is this a no-no?
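For anyone wanting to give each gateway its own identity, a minimal sketch of per-host RGW sections (the host-derived section names and the port are illustrative assumptions):

  # /etc/ceph/ceph.conf on the gateway hosts -- one uniquely named client section per host
  [client.rgw.host01]                # assumption: section named after the host
  rgw_frontends = beast port=7480
  [client.rgw.host02]
  rgw_frontends = beast port=7480

With distinct instance names, ceph -s should report one active daemon per gateway instead of collapsing them all into a single entry.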

Re: [ceph-users] Ceph RBD Mirroring

2019-09-10 Thread Jason Dillaman
On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth wrote: > > Dear Cephalopodians, > > I have two questions about RBD mirroring. > > 1) I can not get it to work - my setup is: > > - One cluster holding the live RBD volumes and snapshots, in pool "rbd", > cluster name "ceph", > running

[ceph-users] Ceph RBD Mirroring

2019-09-10 Thread Oliver Freyermuth
Dear Cephalopodians, I have two questions about RBD mirroring. 1) I can not get it to work - my setup is: - One cluster holding the live RBD volumes and snapshots, in pool "rbd", cluster name "ceph", running latest Mimic. I ran "rbd mirror pool enable rbd pool" on that cluster
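For reference, a hedged sketch of the usual one-way pool-mode mirroring workflow between two clusters (the cluster/config names "ceph" and "backup" and the peer client id are assumptions; this follows the general rbd-mirror procedure, not the poster's exact configuration, and assumes the images have the journaling feature enabled):

  # on both clusters: enable pool-mode mirroring on the pool
  rbd --cluster ceph mirror pool enable rbd pool
  rbd --cluster backup mirror pool enable rbd pool
  # on the backup cluster: register the primary cluster as a peer
  rbd --cluster backup mirror pool peer add rbd client.rbd-mirror-peer@ceph
  # run the rbd-mirror daemon on the backup side, then check replication state
  rbd --cluster backup mirror pool status rbd --verbose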

Re: [ceph-users] Ceph RBD Mirroring

2019-09-10 Thread Oliver Freyermuth
Dear Jason, On 2019-09-10 18:50, Jason Dillaman wrote: > On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth > wrote: >> >> Dear Cephalopodians, >> >> I have two questions about RBD mirroring. >> >> 1) I can not get it to work - my setup is: >> >> - One cluster holding the live RBD volumes

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-10 Thread Konstantin Shalygin
On 9/10/19 1:17 PM, Ashley Merrick wrote: So I am correct in 2048 being a very high number and should go for either 256 or 512 like you said for a cluster of my size with the EC pool of 8+2? Indeed. I suggest staying at 256. k

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-10 Thread Konstantin Shalygin
I have an EC pool (8+2) with 30 OSDs (3 nodes), grown from the original 10 OSDs (1 node). I originally set the pool with a pg_num of 300; however, the PG autoscaler is showing a warning saying I should set this to 2048. I am not sure if this is a good suggestion or if the autoscaler currently is
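For context, the usual rule of thumb behind the 256 suggestion, as a hedged calculation (assuming a target of roughly 100 PGs per OSD and no other large pools; <ec-pool> is a placeholder):

  # 30 OSDs * ~100 target PGs per OSD / 10 chunks per object (k=8, m=2) = ~300, rounded to a power of two
  ceph osd pool set <ec-pool> pg_num 256
  # optionally keep the autoscaler in warn mode while evaluating its suggestions
  ceph osd pool set <ec-pool> pg_autoscale_mode warn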

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-10 Thread Ashley Merrick
So am I correct that 2048 is a very high number, and that I should go for either 256 or 512 as you said, for a cluster of my size with an EC pool of 8+2? Thanks. On Tue, 10 Sep 2019 14:12:58 +0800, Konstantin Shalygin wrote: You should not use 300 PG

Re: [ceph-users] Will perf dump and osd perf affect the performance of Ceph if I run them for each service?

2019-09-10 Thread Romit Misra
Hi Lin, my 2 cents: 1. The perf dump stats for the Ceph daemons can be collected via the admin socket itself. 2. Since all the daemons run in a distributed fashion, you would need to collect them at a per-host/daemon level. 3. There is no performance impact collecting
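A hedged sketch of point 1, collecting counters through the local admin socket on each host (the osd id is an assumption and the socket path follows the default layout):

  # per-daemon counters over the local admin socket (run on the host that owns the daemon)
  ceph daemon osd.0 perf dump
  # equivalent form addressing the socket file directly
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump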