[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Anthony D'Atri
> For similar reasons, CentOS 8 stream, as opposed to every other CentOS released before, is very experimental. I would never go in production with CentOS 8 stream. Is it, though? Was the experience really any different before “Stream” was appended to the name? We still saw dot

[ceph-users] Re: ceph df (octopus) shows USED is 7 times higher than STORED in erasure coded pool

2021-06-29 Thread Josh Baergen
Hey Arkadiy, If the OSDs are on HDDs and were created with the default bluestore_min_alloc_size_hdd, which is still 64KiB in Octopus, then in effect data will be allocated from the pool in 640KiB chunks (64KiB * (k+m)). 5.36M objects taking up 501GiB is an average object size of 98KiB which
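A worked version of that arithmetic (my own sketch, not part of the original message; the k+m = 10 value is only inferred from the 640 KiB figure quoted above):

```
# Worst-case allocation if every object is small enough that each of its k+m EC chunks
# is rounded up to one 64 KiB bluestore_min_alloc_size_hdd allocation unit.
objects=5360000      # ~5.36M objects, from the ceph df output in this thread
chunks=10            # assumed k+m, since 64 KiB * (k+m) = 640 KiB above
min_alloc_kib=64
echo "$objects * $chunks * $min_alloc_kib / 1024^3" | bc -l   # ≈ 3.2 TiB, in the ballpark of the 3.5 TiB USED
```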

[ceph-users] Pacific: RadosGW crashing on multipart uploads.

2021-06-29 Thread Chu, Vincent
Hi, I'm running into an issue with RadosGW where multipart uploads crash, but only on buckets with a hyphen, period or underscore in the bucket name and with a bucket policy applied. We've tested this in pacific 16.2.3 and pacific 16.2.4. Anyone run into this before? ubuntu@ubuntu:~/ubuntu$
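For reference, a hypothetical reproduction sketch (bucket name, endpoint and policy file are placeholders I made up, not details from the report): a bucket whose name contains a hyphen, a policy attached to it, then an upload large enough that the client switches to multipart.

```
export EP=http://rgw.example.com:8080                        # placeholder RGW endpoint
aws --endpoint-url "$EP" s3api create-bucket --bucket test-bucket
aws --endpoint-url "$EP" s3api put-bucket-policy --bucket test-bucket --policy file://policy.json
dd if=/dev/urandom of=big.bin bs=1M count=64                 # well above the CLI multipart threshold
aws --endpoint-url "$EP" s3 cp big.bin s3://test-bucket/     # this is where the multipart PUTs happen
```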

[ceph-users] Re: ceph df (octopus) shows USED is 7 times higher than STORED in erasure coded pool

2021-06-29 Thread Arkadiy Kulev
Dear Josh, Thank you! I will be upgrading to Pacific and lowering bluestore_min_alloc_size_hdd down to 4K. Will report back with the results.
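For anyone landing here later, a sketch of the knob involved (my wording, not a quote from the thread); the value is only read when an OSD is created, so existing OSDs have to be redeployed before it takes effect:

```
ceph config set osd bluestore_min_alloc_size_hdd 4096   # applies to newly created OSDs only
ceph config get osd bluestore_min_alloc_size_hdd        # verify the stored value
```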

[ceph-users] ceph df (octopus) shows USED is 7 times higher than STORED in erasure coded pool

2021-06-29 Thread Arkadiy Kulev
The pool *default.rgw.buckets.data* has *501 GiB* stored, but USED shows *3.5 TiB* (7 times higher!): root@ceph-01:~# ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 196 TiB 193 TiB 3.5 TiB 3.6 TiB 1.85 TOTAL 196 TiB 193 TiB 3.5 TiB 3.6
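A few commands that help confirm whether allocation rounding explains the gap (the pool name comes from the output above; the profile name is a placeholder for whatever the first command returns):

```
ceph osd pool get default.rgw.buckets.data erasure_code_profile   # which EC profile the pool uses
ceph osd erasure-code-profile get <profile-name>                  # shows k and m
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd         # run on the host of osd.0; 65536 = 64 KiB
```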

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Jean-Philippe Méthot
I had followed the steps in the documentation, but it’s likely that something went wrong during those steps. I copied over the mon store instead, as you recommended, and it fixed my problem. Thanks for your help, now I can finally progress with this. Jean-Philippe Méthot Senior Openstack

[ceph-users] Semantics of cephfs-mirror

2021-06-29 Thread Manuel Holtgrewe
Dear all, I'm sorry if I'm asking for the obvious or missing a previous discussion of this but I could not find the answer to my question online. I'd be happy to be pointed to the right direction only. The cephfs-mirror tool in pacific looks extremely promising. How does it work exactly? Is it
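As a rough sketch of the moving parts (command names as I recall them from the Pacific docs; cluster, filesystem and path names are placeholders): mirroring is snapshot-based, and a cephfs-mirror daemon on the source cluster pushes snapshots of explicitly added directories to a peer filesystem.

```
# On the target (backup) cluster: generate a bootstrap token for the peer.
ceph mgr module enable mirroring
ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b

# On the source cluster: enable mirroring, import the token, pick directories to mirror.
# A cephfs-mirror daemon must also be running on the source side.
ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs
ceph fs snapshot mirror peer_bootstrap import cephfs <token-from-target>
ceph fs snapshot mirror add cephfs /volumes/archive
```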

[ceph-users] Multi-site failed to retrieve sync info: (13) Permission denied

2021-06-29 Thread Владимир Клеусов
Hi, I have set up a multisite. Data (pools, buckets, users) from the master zone is synchronized to the secondary zone. On the master: radosgw-admin sync status realm 2194d6d2-0df4-400c-be8b-71dc74405ec2 (multisite-realm) zonegroup b41c6159-16e5-456f-a4e1-fb3dd280158f (multisite-zg) zone

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Jean-Philippe Méthot
Rocky Linux is very new and only released its first stable version last week. If I want my staging environment to mirror my new production environment, I’m going to choose an OS that has a bit more history. For similar reasons, CentOS 8 stream, as opposed to every other CentOS released before, is very

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Milan Kupcevic
On 6/29/21 2:14 AM, Marc wrote: I’ve been running a staging Ceph environment on CentOS 7/Nautilus for quite a while now. Because of many good reasons that you can probably guess, I am currently trying to move this staging environment to Octopus on Ubuntu 20.04.2. What made you decide to

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-06-29 Thread Lucian Petrut
Hi, It’s a compatibility issue, we’ll have to update the Windows Pacific build. Sorry for the delayed reply, hundreds of Ceph ML mails ended up in my spam box. Ironically, I’ll have to thank Office 365 for that :). Regards, Lucian Petrut From: Robert W. Eckert

[ceph-users] Re: docs dangers large raid

2021-06-29 Thread Marc
> > http://www.snia.org/sites/default/orig/sdc_archives/2010_presentations/tuesday/JasonResch_%20Solving-Data-Loss.pdf > > While the presentation is definitely for very techy people, slides 26/27 might be understood also by less techy people. > > But tell them that a high reading on the

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Stefan Kooman
On 6/28/21 10:46 PM, Jean-Philippe Méthot wrote: Hi, I’ve been running a staging Ceph environment on CentOS 7/Nautilus for quite a while now. Because of many good reasons that you can probably guess, I am currently trying to move this staging environment to Octopus on Ubuntu 20.04.2. Since

[ceph-users] Re: docs dangers large raid

2021-06-29 Thread Matthias Ferdinand
On Tue, Jun 29, 2021 at 08:37:36AM, Marc wrote: > > Can someone point me to some good docs describing the dangers of using a large number of disks in a raid5/raid6? (Understandable for less techy people) Hi, there are some slides at
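To make the danger concrete, a back-of-the-envelope illustration (my own numbers, not from the linked slides): the chance of finishing a RAID5 rebuild without hitting an unrecoverable read error (URE), assuming 11 surviving 10 TB disks must be read in full and a URE rate of one per 10^15 bits.

```
disks=11; tb_per_disk=10
bits_read=$(echo "$disks * $tb_per_disk * 10^12 * 8" | bc -l)   # ≈ 8.8e14 bits to read back
expected_ures=$(echo "$bits_read / 10^15" | bc -l)              # ≈ 0.88 expected UREs during the rebuild
echo "P(clean rebuild) ≈ $(echo "e(-$expected_ures)" | bc -l)"  # ≈ 0.41, i.e. a ~59% chance of hitting a URE
```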

[ceph-users] docs dangers large raid

2021-06-29 Thread Marc
Can someone point me to some good docs describing the dangers of using a large number of disks in a raid5/raid6? (Understandable for less techy people)

[ceph-users] [cephadm] Unable to create multiple unmanaged OSDs per device

2021-06-29 Thread Aggelos Avgerinos
Hi everyone, I'm currently trying to set up the OSDs in a fresh cluster using cephadm. The underlying devices are NVMe and I'm trying to provision 2 OSDs per device with the following spec: ``` service_type: osd service_id: all-nvmes unmanaged: true placement: label: osd data_devices: all:
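In case it helps, a sketch of the usual way to ask for two OSDs per data device (an assumption on my part, not a confirmed fix for the problem above): the osds_per_device drive group field, applied with `ceph orch apply`.

```
# Mirrors the spec quoted above; whether this plays nicely with `unmanaged: true`
# is exactly what the original question is about.
cat > osd-spec.yml <<'EOF'
service_type: osd
service_id: all-nvmes
unmanaged: true
placement:
  label: osd
data_devices:
  all: true
osds_per_device: 2
EOF
ceph orch apply -i osd-spec.yml
```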

[ceph-users] Re: Can we deprecate FileStore in Quincy?

2021-06-29 Thread Eric Petit
>> At a Ceph Day in Hillsboro someone, forgive me for not remembering who, spoke of running production on servers with 2GB RAM per OSD. He said that it was painful, required a lot of work, and would not recommend it. ymmv. > Yeah, I wouldn't want to go below 4GB RAM. FWIW, I have
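For context (my addition, not a claim from the thread): the figure being discussed maps roughly onto BlueStore's per-OSD memory target, which defaults to 4 GiB and can be lowered on memory-starved hosts at the cost of cache effectiveness.

```
ceph config set osd osd_memory_target 4294967296   # 4 GiB, the default; lower values trade cache for RAM
ceph config get osd osd_memory_target
```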

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Marc
> I’ve been running a staging Ceph environment on CentOS 7/Nautilus for quite a while now. Because of many good reasons that you can probably guess, I am currently trying to move this staging environment to Octopus on Ubuntu 20.04.2. What made you decide to choose Ubuntu and not this