Re: [ceph-users] Using same name for rgw / beast web front end

2019-09-11 Thread Eric Choi
That clarified it, thank you so much! On Wed, Sep 11, 2019 at 11:58 AM Casey Bodley wrote: > Hi Eric, > > boost::beast is a low-level C++ HTTP protocol library that's hosted at > >

Re: [ceph-users] Ceph RBD Mirroring

2019-09-11 Thread Jason Dillaman
On Wed, Sep 11, 2019 at 12:57 PM Oliver Freyermuth wrote: > > Dear Jason, > > I played a bit more with rbd mirroring and learned that deleting an image at > the source (or disabling journaling on it) immediately moves the image to > trash at the target - > but setting rbd_mirroring_delete_delay

Re: [ceph-users] increase pg_num error

2019-09-11 Thread solarflow99
You don't have to increase pgp_num first? On Wed, Sep 11, 2019 at 6:23 AM Kyriazis, George wrote: > I have the same problem (nautilus installed), but the proposed command > gave me an error: > > # ceph osd require-osd-release nautilus > Error EPERM: not all up OSDs have

Re: [ceph-users] Using same name for rgw / beast web front end

2019-09-11 Thread Casey Bodley
Hi Eric, boost::beast is a low-level C++ HTTP protocol library that's hosted at https://github.com/boostorg/beast. Radosgw uses this library, along with boost::asio, as the basis for its 'beast frontend'. The motivation behind this frontend is its flexible threading model and support for
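A minimal configuration sketch, assuming a gateway instance named client.rgw.gateway1 (the instance name, port and certificate path are placeholders, not values from this thread):

  # ceph.conf -- enable the beast frontend for this radosgw instance
  [client.rgw.gateway1]
      rgw frontends = beast port=8080
      # with TLS instead: rgw frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem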

Re: [ceph-users] increase pg_num error

2019-09-11 Thread Kyriazis, George
No, it’s pg_num first, then pgp_num. Found the problem, and still slowly working on fixing it. I upgraded from mimic to nautilus, but forgot to restart the OSD daemons for 2 of the OSDs. “ceph tell osd.* version” told me which OSDs had a stale version. Then it was just a matter of
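A minimal sketch of that check sequence (standard nautilus CLI, no site-specific values):

  # report the running version of every OSD daemon
  ceph tell osd.* version
  # summary of versions across all daemon types, handy for spotting stragglers
  ceph versions
  # once every OSD reports nautilus, the release gate can be set
  ceph osd require-osd-release nautilus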

Re: [ceph-users] ZeroDivisionError when running ceph osd status

2019-09-11 Thread Brad Hubbard
On Thu, Sep 12, 2019 at 1:52 AM Benjamin Tayehanpour wrote: > > Greetings! > > I had an OSD down, so I ran ceph osd status and got this: > > [root@ceph1 ~]# ceph osd status > Error EINVAL: Traceback (most recent call last): > File "/usr/lib64/ceph/mgr/status/module.py", line 313, in

Re: [ceph-users] How to create multiple Ceph pools, based on drive type/size/model etc?

2019-09-11 Thread Konstantin Shalygin
I have a 3-node Ceph cluster, with a mixture of Intel Optane 905P PCIe disks, and normal SATA SSD drives. I want to create two Ceph pools, one with only the Optane disks, and the other with only the SATA SSDs. When I checked "ceph osd tree", all the drives had device class "ssd". As a hack - I

Re: [ceph-users] increase pg_num error

2019-09-11 Thread Kyriazis, George
Ok, after all is settled, I tried changing pg_num again on my pool and it still didn’t work: # ceph osd pool get rbd1 pg_num pg_num: 100 # ceph osd pool set rbd1 pg_num 128 # ceph osd pool get rbd1 pg_num pg_num: 100 # ceph osd require-osd-release nautilus # ceph osd pool set rbd1 pg_num 128 #

Re: [ceph-users] Ceph Balancer Limitations

2019-09-11 Thread Konstantin Shalygin
We're using Nautilus 14.2.2 (upgrading soon to 14.2.3) on 29 CentOS osd servers. We've got a large variation of disk sizes and host densities, such that the default crush mappings lead to an unbalanced data and pg distribution. We enabled the balancer manager module in pg upmap mode. The

Re: [ceph-users] How to create multiple Ceph pools, based on drive type/size/model etc?

2019-09-11 Thread Victor Hooi
Hi, Right - but what if you have two types of NVMe drives? I thought that there's only a fixed enum of device classes - hdd, ssd, or nvme. You can't add your own ones, right? Thanks, Victor On Thu, Sep 12, 2019 at 12:54 PM Konstantin Shalygin wrote: > I have a 3-node Ceph cluster, with a

Re: [ceph-users] How to create multiple Ceph pools, based on drive type/size/model etc?

2019-09-11 Thread Konstantin Shalygin
Right - but what if you have two types of NVMe drives? I thought that there's only a fixed enum of device classes - hdd, ssd, or nvme. You can't add your own ones, right? Indeed you can: `ceph osd crush set-device-class nvme2 osd.0`. k
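A minimal sketch of the full workflow; the OSD ids, class name, rule name and pool name are placeholders:

  # clear the auto-detected class, then assign a custom one
  ceph osd crush rm-device-class osd.0 osd.1
  ceph osd crush set-device-class nvme2 osd.0 osd.1
  # create a replicated CRUSH rule restricted to that class
  ceph osd crush rule create-replicated nvme2-rule default host nvme2
  # create a pool that uses the new rule
  ceph osd pool create nvme2-pool 64 64 replicated nvme2-rule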

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-11 Thread Ashley Merrick
Done: https://tracker.ceph.com/issues/41756 On Wed, 11 Sep 2019 14:48:42 +0800 Lars Marowsky-Bree wrote On 2019-09-10T13:36:53, Konstantin Shalygin wrote: > > So I am correct in 2048 being a very high number and should go for > > either 256 or 512 like

Re: [ceph-users] AutoScale PG Questions - EC Pool

2019-09-11 Thread Lars Marowsky-Bree
On 2019-09-10T13:36:53, Konstantin Shalygin wrote: > > So I am correct in 2048 being a very high number and should go for > > either 256 or 512 like you said for a cluster of my size with the EC > > Pool of 8+2? > Indeed. I suggest staying at 256. Might as well go to 512, but the 2^n is really
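As a rough worked example (assuming, purely for illustration, that about 20 OSDs back this 8+2 pool -- the thread does not say how many): with the usual target of roughly 100 PGs per OSD, 20 x 100 / (8 + 2) = 200 placement groups, which rounds to the nearest power of two as 256; 512 would only be reasonable with roughly twice that many OSDs.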

Re: [ceph-users] How to add 100 new OSDs...

2019-09-11 Thread Massimo Sgaravatto
Just for my education, why is letting the balancer move the PGs to the new OSDs (the CERN approach) better than throttled backfilling? Thanks, Massimo On Sat, Jul 27, 2019 at 12:31 AM Stefan Kooman wrote: > Quoting Peter Sabaini (pe...@sabaini.at): > > What kind of commit/apply latency

[ceph-users] ceph-volume lvm create leaves half-built OSDs lying around

2019-09-11 Thread Matthew Vernon
Hi, We keep finding part-made OSDs (they appear not attached to any host, and down and out; but still counting towards the number of OSDs); we never saw this with ceph-disk. On investigation, this is because ceph-volume lvm create makes the OSD (ID and auth at least) too early in the process and
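A minimal cleanup sketch for one of these leftovers; the OSD id and device path are placeholders:

  # remove the stray OSD id, its CRUSH entry and its auth key in one step
  ceph osd purge 42 --yes-i-really-mean-it
  # wipe the partially prepared LVM volume so the device can be reused
  ceph-volume lvm zap --destroy /dev/sdx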

Re: [ceph-users] How to add 100 new OSDs...

2019-09-11 Thread Stefan Kooman
Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com): > Just for my education, why is letting the balancer move the PGs to the new > OSDs (the CERN approach) better than throttled backfilling? 1) Because you can pause the process at any given moment and obtain HEALTH_OK again. 2) The
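For contrast, a minimal sketch of the usual throttled-backfill knobs (the values are illustrative, not recommendations from this thread):

  # limit concurrent backfills per OSD while the new OSDs fill up
  ceph config set osd osd_max_backfills 1
  # add a small sleep between recovery ops on spinning disks
  ceph config set osd osd_recovery_sleep_hdd 0.1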

[ceph-users] KVM userspace-rbd hung_task_timeout on 3rd disk

2019-09-11 Thread Ansgar Jazdzewski
Hi, we are running ceph version 13.2.4 and qemu 2.10. We figured out that on VMs with more than three disks, IO fails with a hung task timeout whenever we do IO on disks after the 2nd one. - Is this a known issue for this qemu / ceph version? I could not find anything in the changelogs! - Do you have an

Re: [ceph-users] ceph-volume lvm create leaves half-built OSDs lying around

2019-09-11 Thread Alfredo Deza
On Wed, Sep 11, 2019 at 6:18 AM Matthew Vernon wrote: > > Hi, > > We keep finding part-made OSDs (they appear not attached to any host, > and down and out; but still counting towards the number of OSDs); we > never saw this with ceph-disk. On investigation, this is because > ceph-volume lvm

[ceph-users] RBD error when run under cron

2019-09-11 Thread Mike O'Connor
Hi All, I'm having a problem running rbd export from cron; rbd expects a tty, which cron does not provide. I tried --no-progress but this did not help. Any ideas? --- rbd export-diff --from-snap 1909091751 rbd/vm-100-disk-1@1909091817 - | seccure-encrypt | aws s3 cp -
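A minimal wrapper sketch for isolating which stage of that pipeline complains about the tty under cron; the S3 destination is a placeholder and seccure-encrypt is taken from the command above:

  #!/bin/sh
  # run from cron; capture each stage's stderr separately to see where the error comes from
  rbd export-diff --no-progress --from-snap 1909091751 rbd/vm-100-disk-1@1909091817 - 2>/tmp/rbd.err \
    | seccure-encrypt 2>/tmp/encrypt.err \
    | aws s3 cp - s3://backup-bucket/vm-100-disk-1.diff 2>/tmp/s3.err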

[ceph-users] How to create multiple Ceph pools, based on drive type/size/model etc?

2019-09-11 Thread Victor Hooi
Hi, I have a 3-node Ceph cluster, with a mixture of Intel Optane 905P PCIe disks, and normal SATA SSD drives. I want to create two Ceph pools, one with only the Optane disks, and the other with only the SATA SSDs. When I checked "ceph osd tree", all the drives had device class "ssd". As a hack
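A minimal sketch of how to inspect the current classification before changing anything:

  # list the device classes ceph currently knows about
  ceph osd crush class ls
  # show which OSDs carry a given class
  ceph osd crush class ls-osd ssd
  # the shadow tree shows the per-class CRUSH hierarchy
  ceph osd crush tree --show-shadow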

Re: [ceph-users] increase pg_num error

2019-09-11 Thread Kyriazis, George
I have the same problem (nautilus installed), but the proposed command gave me an error: # ceph osd require-osd-release nautilus Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUTILUS feature # I created my cluster with mimic and then upgraded to nautilus. What would be my next step?

Re: [ceph-users] RBD error when run under cron

2019-09-11 Thread Jason Dillaman
On Wed, Sep 11, 2019 at 7:48 AM Mike O'Connor wrote: > > Hi All > > I'm having a problem running rbd export from cron, rbd expects a tty which > cron does not provide. > I tried the --no-progress but this did not help. > > Any ideas ? I don't think that error is coming from the 'rbd' CLI: $

Re: [ceph-users] ceph-volume lvm create leaves half-built OSDs lying around

2019-09-11 Thread Janne Johansson
Den ons 11 sep. 2019 kl 12:18 skrev Matthew Vernon : > We keep finding part-made OSDs (they appear not attached to any host, > and down and out; but still counting towards the number of OSDs); we > never saw this with ceph-disk. On investigation, this is because > ceph-volume lvm create makes the

Re: [ceph-users] ceph-volume lvm create leaves half-built OSDs lying around [EXT]

2019-09-11 Thread Matthew Vernon
On 11/09/2019 12:18, Alfredo Deza wrote: > On Wed, Sep 11, 2019 at 6:18 AM Matthew Vernon wrote: >> or >> ii) allow the bootstrap-osd credential to purge OSDs > > I wasn't aware that the bootstrap-osd credentials allowed to > purge/destroy OSDs, are you sure this is possible? If it is I think >
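A minimal sketch of checking what that key can actually do:

  # print the capabilities of the bootstrap-osd key
  ceph auth get client.bootstrap-osd
  # the default mon cap is "allow profile bootstrap-osd"; compare that against what a purge would need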

Re: [ceph-users] How to add 100 new OSDs...

2019-09-11 Thread Massimo Sgaravatto
Thank you. But are the algorithms used during backfilling and during rebalancing (to decide where data has to be placed) different? I.e. assuming that no new data are written and no data are deleted, if you rely on the standard way (i.e. backfilling), when the data movement process finishes

Re: [ceph-users] Dashboard setting config values to 'false'

2019-09-11 Thread Tatjana Dehler
On 11.09.19 16:09, Tatjana Dehler wrote: > Hi Matt, > > On 11.09.19 15:56, Matt Dunavant wrote: >> Hi, >> >> >> When changing config options in Nautilus dashboard, is there a way to >> set values to 'false' when their default is 'true'? Currently it seems >> to be set as a checked box = 'true'

[ceph-users] Ceph Balancer Limitations

2019-09-11 Thread Adam Tygart
Hello all, We're using Nautilus 14.2.2 (upgrading soon to 14.2.3) on 29 CentOS osd servers. We've got a large variation of disk sizes and host densities, such that the default crush mappings lead to an unbalanced data and pg distribution. We enabled the balancer manager module in pg upmap mode.
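For reference, a minimal sketch of the upmap balancer setup being described (standard commands, no site-specific values):

  # upmap requires all clients to speak at least luminous
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status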

[ceph-users] Dashboard setting config values to 'false'

2019-09-11 Thread Matt Dunavant
Hi, When changing config options in the Nautilus dashboard, is there a way to set values to 'false' when their default is 'true'? Currently a checked box is treated as 'true' and an unchecked box as 'default', so there is no way to set a value to 'false'. Thanks, Matt
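As a CLI workaround sketch, a boolean whose default is 'true' can be forced to 'false' explicitly; rbd_cache is used here only as an example of such an option:

  # store an explicit 'false' in the config database
  ceph config set client rbd_cache false
  # confirm what is stored
  ceph config get client rbd_cache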

Re: [ceph-users] Dashboard setting config values to 'false'

2019-09-11 Thread Tatjana Dehler
Hi Matt, On 11.09.19 15:56, Matt Dunavant wrote: > Hi, > > > When changing config options in Nautilus dashboard, is there a way to > set values to 'false' when their default is 'true'? Currently it seems > to be set as a checked box = 'true' but an unchecked box = default in > which there is no

Re: [ceph-users] Ceph RBD Mirroring

2019-09-11 Thread Oliver Freyermuth
Dear Jason, I played a bit more with rbd mirroring and learned that deleting an image at the source (or disabling journaling on it) immediately moves the image to trash at the target - but setting rbd_mirroring_delete_delay helps to have some more grace time to catch human mistakes. However,
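A minimal sketch of that delay plus the trash handling; the delay value, pool name and image id are placeholders:

  # on the cluster running rbd-mirror: keep deleted mirrored images in the trash for a day
  ceph config set client rbd_mirroring_delete_delay 86400
  # inspect and, if needed, recover an image from the trash on the target
  rbd trash ls --pool mypool
  rbd trash restore --pool mypool <image-id>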

Re: [ceph-users] Using same name for rgw / beast web front end

2019-09-11 Thread Eric Choi
Replying to my own question: 2. Beast is not a web front end, so it would be an apples-to-oranges comparison. I just couldn't find any blogs / docs about it at first (found it here: https://github.com/ceph/Beast) Still unsure about the first question.. On Tue, Sep 10, 2019 at 4:45 PM Eric Choi

Re: [ceph-users] How to add 100 new OSDs...

2019-09-11 Thread Stefan Kooman
Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com): > Thank you > > But are the algorithms used during backfilling and during rebalancing (to > decide where data has to be placed) different? Yes, the balancer takes more factors into consideration. It also takes into consideration all of
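A minimal sketch of watching and pausing the balancer; the pool name is a placeholder:

  # score the current distribution (lower is better), overall and per pool
  ceph balancer eval
  ceph balancer eval <pool>
  # pausing stops new remaps from being queued; moves already in flight still finish
  ceph balancer off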

[ceph-users] ZeroDivisionError when running ceph osd status

2019-09-11 Thread Benjamin Tayehanpour
Greetings! I had an OSD down, so I ran ceph osd status and got this: [root@ceph1 ~]# ceph osd status Error EINVAL: Traceback (most recent call last): File "/usr/lib64/ceph/mgr/status/module.py", line 313, in handle_command return self.handle_osd_status(cmd) File

[ceph-users] Multisite RGW - stuck metadata shards (metadata is behind on X shards)

2019-09-11 Thread P. O.
Hi all, In my environment with two replicated (mimic 13.2.6) clusters I have a problem with stuck metadata shards. [Master root@rgw-1]$ radosgw-admin sync status realm b144111d-8176-47e5-aa3a-85c65032e8a9 (realm) zonegroup 2ead77cb-f5c2-4d62-9959-12912828fb4b (1_zonegroup)
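A minimal sketch of the usual inspection steps for shards that stay behind, run on the zone reporting the lag (no environment-specific values):

  # overall view, as shown above
  radosgw-admin sync status
  # per-shard detail for metadata sync
  radosgw-admin metadata sync status
  # list recorded sync errors
  radosgw-admin sync error list
  # re-initializing metadata sync (radosgw-admin metadata sync init) is a heavier last resort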