[ceph-users] Problems understanding 'ceph features' output

2017-12-15 Thread Massimo Sgaravatto
Hi, I tried the jewel --> luminous update on a small testbed composed of: - 3 mon + mgr nodes - 3 osd nodes (4 OSDs on each of these nodes) - 3 clients (each client maps a single volume) In short: - I updated the 3 mons - I deployed mgr on the 3 mon hosts - I updated the 3 osd nodes - I updated
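
For anyone following the same path, a minimal set of post-upgrade checks (a sketch assuming the standard jewel --> luminous order of mons, then mgrs, then OSDs; the mon hostname is a placeholder):

  [root@ceph-mon-01 ~]# ceph versions        # every daemon should now report 12.2.x
  [root@ceph-mon-01 ~]# ceph features        # shows which release each connected client advertises
  [root@ceph-mon-01 ~]# ceph osd require-osd-release luminous   # only once all OSDs run luminous
  [root@ceph-mon-01 ~]# ceph -s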

Re: [ceph-users] Problems understanding 'ceph features' output

2017-12-15 Thread Massimo Sgaravatto
> > On 12/15/2017 10:56 AM, Massimo Sgaravatto wrote: > >> Hi >> >> I tried the jewel --> luminous update on a small testbed composed by: >> >> - 3 mon + mgr nodes >> - 3 osd nodes (4 OSDs per each of this node) >> - 3 clients (each

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
ents, so it > should be > on your "public" network. Changing the IP of a mon is possible but > annoying, it is > often easier to remove and then re-add with a new IP (if possible): > > http://docs.ceph.com/docs/master/rados/operations/add- > or-rm-mons/#changing-a-

[ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
I have a ceph cluster that I manually deployed, and now I am trying to see if I can use ceph-deploy to deploy new nodes (in particular the object gw). The network configuration is the following: each MON node has two network IPs: one on a "management network" (not used for ceph-related stuff) and
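
As a hedged illustration of the setup being described (all addresses and subnets below are made up), the mons are normally pinned to the ceph "public" network in ceph.conf, so what a node's name resolves to matters less than what these options say:

  [global]
  public network = 192.168.61.0/24      # network used by mons and clients
  cluster network = 192.168.222.0/24    # optional, replication/backfill traffic only
  mon host = 192.168.61.11,192.168.61.12,192.168.61.13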

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
,logbsize=256k On Thu, May 10, 2018 at 1:12 PM, Paul Emmerich <paul.emmer...@croit.io> wrote: > check ceph.conf, it controls to which mon IP the client tries to connect. > > 2018-05-10 12:57 GMT+02:00 Massimo Sgaravatto < > massimo.sgarava...@gmail.com>: > >>

[ceph-users] Single ceph cluster for the object storage service of 2 OpenStack clouds

2018-05-15 Thread Massimo Sgaravatto
Hi, I have been using for a while a single ceph cluster for the image and block storage services of two OpenStack clouds. Now I want to use this ceph cluster also for the object storage services of the two OpenStack clouds, and I want to implement that with a clear separation between the two

Re: [ceph-users] Single ceph cluster for the object storage service of 2 OpenStack clouds

2018-05-16 Thread Massimo Sgaravatto
single cluster. > > On Tue, May 15, 2018 at 9:29 AM Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi >> >> I have been using for a while a single ceph cluster for the image and >> block storage services of two Openstack clouds. >

[ceph-users] rgw default user quota for OpenStack users

2018-05-21 Thread Massimo Sgaravatto
I set: rgw user default quota max size = 2G in the ceph configuration file, and I see that this works for users created using the "radosgw-admin user create" command [**]. I see that, instead, the quota is not set for users created through keystone. This [*] is the relevant part of my ceph
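
For users that already exist (including the ones created through keystone), the same limit can be applied per user with radosgw-admin; a minimal sketch, with <uid> as a placeholder:

  radosgw-admin quota set --quota-scope=user --uid=<uid> --max-size=2147483648   # 2 GiB, in bytes
  radosgw-admin quota enable --quota-scope=user --uid=<uid>
  radosgw-admin user info --uid=<uid>    # the "user_quota" section should now show the limit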

Re: [ceph-users] rgw default user quota for OpenStack users

2018-05-22 Thread Massimo Sgaravatto
like Proxmox. That might be a > good place to start. You could also look at the keystone code to see if > it's manually specifying things based on an application config file. > > On Mon, May 21, 2018 at 9:21 AM Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >

Re: [ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-22 Thread Massimo Sgaravatto
> See what is used there > > > -----Original Message- > From: Massimo Sgaravatto [mailto:massimo.sgarava...@gmail.com] > Sent: Tuesday, 22 May 2018 11:46 > To: Ceph Users > Subject: [ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] > ? > > I am

[ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-22 Thread Massimo Sgaravatto
I am really confused about the use of [client.rgw.hostname] or [client.radosgw.hostname] in the configuration file. I don't understand if they have different purposes or if there is just a problem with documentation. E.g.: http://docs.ceph.com/docs/luminous/start/quick-rgw/ says that
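
What seems to matter in practice is that the suffix after "client." matches the rgw instance name used by the data directory and the systemd unit. A hedged example for the gateway host mentioned later in this thread (the frontend and port are just the Luminous defaults):

  [client.rgw.ceph-test-rgw-01]
  host = ceph-test-rgw-01
  rgw frontends = civetweb port=7480
  keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-test-rgw-01/keyring

The matching service would then be ceph-radosgw@rgw.ceph-test-rgw-01.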

[ceph-users] Several questions on the radosgw-openstack integration

2018-05-22 Thread Massimo Sgaravatto
I have several questions on the radosgw - OpenStack integration. I was more or less able to set it up (using a Luminous ceph cluster and an Ocata OpenStack cloud), but I don't know if it is working as expected. So, the questions: 1. I miss the meaning of the attribute "rgw keystone implicit
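
For context, a sketch of the keystone-related rgw options involved (Luminous option names; the endpoint, credentials and roles below are placeholders, not the values actually used):

  [client.rgw.ceph-test-rgw-01]
  rgw keystone url = http://keystone.example.com:5000
  rgw keystone api version = 3
  rgw keystone admin user = rgw
  rgw keystone admin password = secret
  rgw keystone admin project = service
  rgw keystone admin domain = default
  rgw keystone accepted roles = member,_member_,admin
  rgw keystone implicit tenants = true   # each keystone project gets its own rgw tenant namespace
  rgw s3 auth use keystone = true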

Re: [ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-23 Thread Massimo Sgaravatto
tthing.hostname and it would work fine. > > On Tue, May 22, 2018, 5:54 AM Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> # ls /var/lib/ceph/radosgw/ >> ceph-rgw.ceph-test-rgw-01 >> >> >> So [client.rgw.ceph-test-rgw-01] >> >> Tha

Re: [ceph-users] Several questions on the radosgw-openstack integration

2018-05-23 Thread Massimo Sgaravatto
For #1 I guess this is a known issue (http://tracker.ceph.com/issues/20570) On Tue, May 22, 2018 at 1:03 PM, Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > I have several questions on the radosgw - OpenStack integration. > > I was more or less able to set it (using

Re: [ceph-users] Several questions on the radosgw-openstack integration

2018-05-23 Thread Massimo Sgaravatto
The user can access her data also using S3, besides Swift. Cheers, Massimo On Wed, May 23, 2018 at 12:49 PM, Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > For #1 I guess this is a known issue (http://tracker.ceph.com/issues/20570 > ) > > On Tue, May 22, 2018 at

Re: [ceph-users] Luminous cluster - how to find out which clients are still jewel?

2018-05-29 Thread Massimo Sgaravatto
As far as I know the status wrt this issue is still the one reported in this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020585.html See also: http://tracker.ceph.com/issues/21315 Cheers, Massimo On Tue, May 29, 2018 at 8:39 AM, Linh Vu wrote: > Hi all, > > >
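
In short, the cluster-wide summary comes from 'ceph features', while mapping the jewel entries back to actual client addresses still means going through the mon sessions; a hedged example (the mon name is a placeholder):

  ceph features                          # groups connected clients by release and feature bits
  ceph daemon mon.ceph-mon-01 sessions   # run on the mon host: lists each session with its address and features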

[ceph-users] Instructions for manually adding a object gateway node ?

2018-03-27 Thread Massimo Sgaravatto
Hi, are there somewhere some instructions on how to *MANUALLY* add an object gateway node to a Luminous cluster that was manually installed (i.e. not using ceph-deploy)? In the official doc I can find instructions only referring to ceph-deploy ... Thanks, Massimo
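
For what it is worth, a minimal hand-deployment sketch that mirrors what ceph-deploy rgw does (the hostname gw-01 is a placeholder and the caps are the ones suggested in the upstream docs; this is not a verified procedure):

  mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gw-01
  ceph auth get-or-create client.rgw.gw-01 mon 'allow rw' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-rgw.gw-01/keyring
  chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-rgw.gw-01
  # add a [client.rgw.gw-01] section to ceph.conf on the gateway host, then:
  systemctl enable --now ceph-radosgw@rgw.gw-01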

Re: [ceph-users] Ceph osd logs

2018-10-18 Thread Massimo Sgaravatto
I had the same problem (or a problem with the same symptoms) In my case the problem was with wrong ownership of the log file You might want to check if you are having the same issue Cheers, Massimo On Mon, Oct 15, 2018 at 6:00 AM Zhenshi Zhou wrote: > Hi, > > I added some OSDs into
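
In case it helps anyone hitting the same symptom, the fix on my side was simply restoring ownership of the log files and restarting the affected OSDs; a hedged example for one OSD id:

  chown ceph:ceph /var/log/ceph/ceph-osd.*.log
  systemctl restart ceph-osd@12    # repeat for each affected OSD id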

[ceph-users] Some questions concerning filestore --> bluestore migration

2018-10-03 Thread Massimo Sgaravatto
Hi, I have a ceph cluster running luminous, composed of 5 OSD nodes, which is using filestore. Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA disks + 2x200GB SSD disks (plus 2 other disks in RAID for the OS), and 10 Gbps networking. So each SSD disk is used for the journal of 5 OSDs.
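
The per-OSD replacement cycle I had in mind roughly follows the Luminous documentation (a sketch: the device paths and the OSD id are placeholders, and --osd-id assumes ceph-volume from 12.2.2 or later):

  ceph osd out 7
  # wait for rebalancing, then check:
  ceph osd safe-to-destroy osd.7
  systemctl stop ceph-osd@7
  ceph osd destroy 7 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdc
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdj1 --osd-id 7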

Re: [ceph-users] Some questions concerning filestore --> bluestore migration

2018-10-05 Thread Massimo Sgaravatto
> solarflo...@gmail.com>: > >I use the same configuration you have, and I plan on using bluestore. > >My > >SSDs are only 240GB and it worked with filestore all this time, I > >suspect > >bluestore should be fine too. > > > > > >On Wed, Oct 3, 2018 at 4

Re: [ceph-users] Problems after migrating to straw2 (to enable the balancer)

2019-01-14 Thread Massimo Sgaravatto
osd.7 up 1.0 1.0 8 hdd 5.45609 osd.8 up 1.0 1.0 9 hdd 5.45609 osd.9 up 1.0 1.0 [root@ceph-mon-01 ~]# On Mon, Jan 14, 2019 at 3:13 PM Dan van der Ster wrote: > On Mon, Jan 14, 2019 at 3:06 PM Massimo Sg

Re: [ceph-users] Problems after migrating to straw2 (to enable the balancer)

2019-01-14 Thread Massimo Sgaravatto
019 at 3:18 PM Massimo Sgaravatto > wrote: > > > > Thanks for the prompt reply > > > > Indeed I have different racks with different weights. > > Are you sure you're replicating across racks? You have only 3 racks, > one of which is half the size of the o

Re: [ceph-users] Problems after migrating to straw2 (to enable the balancer)

2019-01-14 Thread Massimo Sgaravatto
sd.3 up 1.0 1.0 > > 4 hdd 5.45609 osd.4 up 1.0 1.0 > > 5 hdd 5.45609 osd.5 up 1.0 1.0 > > 6 hdd 5.45609 osd.6 up 1.0 1.0 > > 7 h

Re: [ceph-users] Problems enabling automatic balancer

2019-01-11 Thread Massimo Sgaravatto
-01 ceph]# ceph osd crush set-all-straw-buckets-to-straw2 Indeed. And when I start the balancer: [root@c-mon-01 ceph]# ceph balancer on I can't see anymore the problem On Fri, Jan 11, 2019 at 3:58 PM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > I am trying

[ceph-users] Problems enabling automatic balancer

2019-01-11 Thread Massimo Sgaravatto
I am trying to enable the automatic balancer in my Luminous ceph cluster, following the documentation at: http://docs.ceph.com/docs/luminous/mgr/balancer/ [root@ceph-mon-01 ~]# ceph balancer status { "active": true, "plans": [], "mode": "crush-compat" } After having issued the

[ceph-users] Problems after migrating to straw2 (to enable the balancer)

2019-01-14 Thread Massimo Sgaravatto
I have a ceph luminous cluster running on CentOS7 nodes. This cluster has 50 OSDs, all with the same size and all with the same weight. Since I noticed quite an "unfair" usage of the OSD nodes (some used at 30%, some used at 70%), I tried to activate the balancer. But the balancer

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-26 Thread Massimo Sgaravatto
On Mon, Feb 25, 2019 at 9:26 PM mart.v wrote: > - As far as I understand the reported 'implicated osds' are only the > primary ones. In the log of the osds you should find also the relevant pg > number, and with this information you can get all the involved OSDs. This > might be useful e.g. to

Re: [ceph-users] How to use straw2 for new buckets

2019-02-25 Thread Massimo Sgaravatto
Thanks a lot ! On Mon, Feb 25, 2019 at 9:18 AM Konstantin Shalygin wrote: > A few weeks ago I converted everything from straw to straw2 (to be able to > use the balancer) using the command: > > ceph osd crush set-all-straw-buckets-to-straw2 > > I have now just added a new rack bucket, and moved

[ceph-users] Problems creating a balancer plan

2019-03-01 Thread Massimo Sgaravatto
Hi I already used the balancer in my ceph luminous cluster a while ago when all the OSDs were using filestore. Now, after having added some bluestore OSDs, if I try to create a plan: [root@ceph-mon-01 ~]# ceph balancer status { "active": false, "plans": [], "mode": "crush-compat" }
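
For reference, the supervised sequence I was trying (the plan name is arbitrary):

  ceph balancer eval              # score of the current distribution
  ceph balancer optimize myplan
  ceph balancer show myplan
  ceph balancer eval myplan       # expected score if the plan were executed
  ceph balancer execute myplan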

Re: [ceph-users] Problems creating a balancer plan

2019-03-02 Thread Massimo Sgaravatto
Hi This is a luminous (v. 12.2.11) cluster Thanks, Massimo On Sat, Mar 2, 2019 at 2:49 PM Matthew H wrote: > Hi Massimo! > > What version of Ceph is in use? > > Thanks, > > -- > *From:* ceph-users on behalf of > Massimo Sgaravatto > *

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-22 Thread Massimo Sgaravatto
A couple of hints to debug the issue (since I recently had to debug a problem with the same symptoms): - As far as I understand, the reported 'implicated osds' are only the primary ones. In the logs of the OSDs you should also find the relevant PG number, and with this information you can get all
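
A hedged example of going from a slow-request line in an OSD log to the full set of involved OSDs (the OSD id and PG id are placeholders):

  grep 'slow request' /var/log/ceph/ceph-osd.35.log   # the line contains the pg id, e.g. 8.2bc
  ceph pg map 8.2bc                                   # prints the acting set, i.e. all OSDs serving that PG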

[ceph-users] How to use straw2 for new buckets

2019-02-25 Thread Massimo Sgaravatto
A few weeks ago I converted everything from straw to straw2 (to be able to use the balancer) using the command: ceph osd crush set-all-straw-buckets-to-straw2 I have now just added a new rack bucket, and moved a couple of new osd nodes in this rack, using the commands: ceph osd crush add-bucket

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
be in the ceph documentation when the balancer is discussed Thanks again for your help ! On Thu, Mar 14, 2019 at 7:56 AM Konstantin Shalygin wrote: > On 3/14/19 1:53 PM, Massimo Sgaravatto wrote: > > > > So if I try to run the balancer in the current compat mode, should > > this

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I am using Luminous everywhere On Thu, Mar 14, 2019 at 8:09 AM Konstantin Shalygin wrote: > On 3/14/19 2:09 PM, Massimo Sgaravatto wrote: > > I plan to use upmap after having migrated all my clients to CentOS 7.6 > > What is your current rel

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I plan to use upmap after having migrated all my clients to CentOS 7.6 On Thu, Mar 14, 2019 at 8:03 AM Konstantin Shalygin wrote: > On 3/14/19 2:02 PM, Massimo Sgaravatto wrote: > > Oh, I missed this information. > > > > So this means that, after having run once the ba

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-13 Thread Massimo Sgaravatto
[root@c-mon-01 /]# ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZEUSE AVAIL %USE VAR PGS TYPE NAME -1 1.95190- 1.95TiB 88.4GiB 1.87TiB00 - root default -2 0- 0B 0B 0B00 - rack Rack15-PianoAlto -3 0.39038

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
[root@c-mon-01 balancer]# On Thu, Mar 14, 2019 at 7:34 AM Konstantin Shalygin wrote: > On 3/14/19 1:11 PM, Massimo Sgaravatto wrote: > > Thanks > > > > I will try to set the weight-set for the new OSDs > > > > But I am wondering what I did wrong to be in such sc

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I have some clients running CentOS 7.4 with kernel 3.10. I was told that the minimum requirements are kernel >= 4.13 or CentOS >= 7.5. On Thu, Mar 14, 2019 at 8:11 AM Konstantin Shalygin wrote: > On 3/14/19 2:10 PM, Massimo Sgaravatto wrote: > > I am using Luminous everywhere >

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
AM Konstantin Shalygin wrote: > On 3/14/19 12:42 PM, Massimo Sgaravatto wrote: > > [root@c-mon-01 /]# ceph osd df tree > > ID CLASS WEIGHT REWEIGHT SIZEUSE AVAIL %USE VAR PGS TYPE > > NAME > > -1 1.95190- 1.95TiB 88.4GiB 1.87TiB0 0

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
hdd 0.09760 0.09760 osd.16 17 hdd 0.09760 0.09760 osd.17 18 hdd 0.09760 0.09760 osd.18 19 hdd 0.09760 0.09760 osd.19 [root@c-mon-01 /]# On Thu, Mar 14, 2019 at 8:16 AM Konstantin Shalygin wrote: > On 3/14/19 2:15 PM, Massimo Sgaravatto wrote:

Re: [ceph-users] Problems creating a balancer plan

2019-03-14 Thread Massimo Sgaravatto
On Sat, Mar 2, 2019 at 4:26 PM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > Hi > This is a luminous (v. 12.2.11) cluster > > Thanks, Massimo > > On Sat, Mar 2, 2019 at 2:49 PM Matthew H > wrote: > >> Hi Massimo! >> >&

[ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-13 Thread Massimo Sgaravatto
I have a cluster where for some OSDs the weight-set is defined, while for other OSDs it is not [*]. The OSDs with the weight-set defined are Filestore OSDs created years ago using "ceph-disk prepare". The OSDs where the weight-set is not defined are Bluestore OSDs installed recently using ceph-volume
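
For reference, the compat weight-set used by the crush-compat balancer can be inspected and extended by hand; a hedged sketch (the OSD id and weight are placeholders):

  ceph osd crush weight-set ls                    # shows whether a (compat) weight-set exists
  ceph osd crush weight-set dump
  ceph osd crush weight-set create-compat         # only if none exists yet
  ceph osd crush weight-set reweight-compat osd.50 5.45609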

[ceph-users] Debugging 'slow requests' ...

2019-02-08 Thread Massimo Sgaravatto
Our Luminous ceph cluster has been working without problems for a while, but in the last few days we have been suffering from continuous slow requests. We have indeed made some changes in the infrastructure recently: - Moved OSD nodes to a new switch - Increased pg nums for a pool, to have about ~
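
The commands I found most useful while chasing these (run on the OSD host through the admin socket; the OSD id is a placeholder):

  ceph health detail                     # lists the implicated OSDs
  ceph daemon osd.35 ops                 # ops currently in flight, with their event timeline
  ceph daemon osd.35 dump_historic_ops   # the slowest recently completed ops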

Re: [ceph-users] Debugging 'slow requests' ...

2019-02-11 Thread Massimo Sgaravatto
-> 192.168.222.204:6804/4159520 -- > replica_scrub(pg: > > 8.2bc,from:0'0,to:0'0,epoch:1205833/1205735,start:8:3d4e6916:::rbd_data.a6dc2425de9600.0006249c:0,end:8:3d4e7434:::rbd_data.47c1b437840214.0003c594:0,chunky:1,deep:0,version:9,allow_preemption:1,priority=5) >

Re: [ceph-users] Debugging 'slow requests' ...

2019-02-09 Thread Massimo Sgaravatto
or the 3 relevant OSDs (just for that time period) is at: https://drive.google.com/drive/folders/1TG5MomMJsqVbsuFosvYokNptLufxOnPY?usp=sharing Thanks again ! Cheers, Massimo On Fri, Feb 8, 2019 at 11:50 PM Brad Hubbard wrote: > Try capturing another log with debug_ms turned up. 1 or 5 should

Re: [ceph-users] Self serve / automated S3 key creation?

2019-02-02 Thread Massimo Sgaravatto
Since EC2 access is needed for our OpenStack users, we enabled the nova-ec2 service in OpenStack. In this way every user already has EC2 credentials that can be used also for S3. PS: If you are using Ocata there is unfortunately a problem:

Re: [ceph-users] Kernel requirements for balancer in upmap mode

2019-02-04 Thread Massimo Sgaravatto
Thanks a lot. So, if I am using ceph just to provide block storage to an OpenStack cluster (so using libvirt), the kernel version on the client nodes shouldn't matter, right ? Thanks again, Massimo On Mon, Feb 4, 2019 at 10:02 AM Ilya Dryomov wrote: > On Mon, Feb 4, 2019 at 9:25 AM Mass

Re: [ceph-users] Kernel requirements for balancer in upmap mode

2019-02-04 Thread Massimo Sgaravatto
Thanks a lot ! On Mon, Feb 4, 2019 at 12:35 PM Konstantin Shalygin wrote: > So, if I am using ceph just to provide block storage to an OpenStack > cluster (so using libvirt), the kernel version on the client nodes > shouldn't matter, right ? > > Yep, just make sure your librbd on compute hosts

[ceph-users] Kernel requirements for balancer in upmap mode

2019-02-04 Thread Massimo Sgaravatto
The official documentation [*] says that the only requirement to use the balancer in upmap mode is that all clients must run at least luminous. But I read somewhere (also in this mailing list) that there are also requirements wrt the kernel. If so: 1) Could you please specify what is the minimum
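
For completeness, the cluster-side gate is the min-compat-client setting; a hedged sketch of the two checks usually suggested before enabling upmap:

  ceph features                                     # all clients should show up as luminous
  ceph osd set-require-min-compat-client luminous   # refuses to proceed if pre-luminous clients are connected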

[ceph-users] Automatic balancing vs supervised optimization

2019-09-06 Thread Massimo Sgaravatto
Hi, I have a question regarding supervised/automatic balancing using upmap. I created a plan in supervised mode, but according to its score it was not expected to improve the data distribution. However, the automatic balancer triggered a considerable rebalance. Is this normal ? I thought that automatic balancing

Re: [ceph-users] How to add 100 new OSDs...

2019-09-11 Thread Massimo Sgaravatto
Just for my education, why is letting the balancer move the PGs to the new OSDs (the CERN approach) better than a throttled backfill ? Thanks, Massimo On Sat, Jul 27, 2019 at 12:31 AM Stefan Kooman wrote: > Quoting Peter Sabaini (pe...@sabaini.at): > > What kind of commit/apply latency

Re: [ceph-users] How to add 100 new OSDs...

2019-09-11 Thread Massimo Sgaravatto
(and therefore the status is HEALTH_OK), can the automatic balancer (in upmap mode) decide that some data have to be re-moved ? Thanks, Massimo On Wed, Sep 11, 2019 at 12:30 PM Stefan Kooman wrote: > Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com): > > Just for my education, wh

Re: [ceph-users] Problems understanding 'ceph-features' output

2019-07-30 Thread Massimo Sgaravatto
"feature" version supported by upmap ? E.g. right now I am interested about 0x1ffddff8eea4fffb. Is this also good enough for upmap ? Thanks, Massimo On Mon, Jul 29, 2019 at 6:02 PM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > Thanks ! > > On Mon, Jul

[ceph-users] Problems understanding 'ceph-features' output

2019-07-29 Thread Massimo Sgaravatto
I have a ceph cluster where mon, osd and mgr are running ceph luminous. If I run ceph features [*], I see that clients are grouped into 2 sets: - the first one appears to be using luminous with features 0x3ffddff8eea4fffb - the second one appears to be using luminous too, but with features

Re: [ceph-users] Problems understanding 'ceph-features' output

2019-07-29 Thread Massimo Sgaravatto
hen > www.croit.io > Tel: +49 89 1896585 90 > > > On Mon, Jul 29, 2019 at 2:23 PM Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> I have a ceph cluster where mon, osd and mgr are running ceph luminous >> >> If I try running ceph features

Re: [ceph-users] Problems understanding 'ceph-features' output

2019-08-05 Thread Massimo Sgaravatto
On Mon, Aug 5, 2019 at 11:43 AM Ilya Dryomov wrote: > On Tue, Jul 30, 2019 at 10:33 AM Massimo Sgaravatto > wrote: > > > > The documentation that I have seen says that the minimum requirements > for clients to use upmap are: > > > > - CentOs 7.5 or kernel 4.5

Re: [ceph-users] how to debug slow requests

2019-07-24 Thread Massimo Sgaravatto
Just so I understand: the duration for this operation is 329 seconds (a lot !), but all the reported events happened at ~ the same time (2019-07-20 23:13:18). Were all the events of this op reported ? Why do you see a problem with the "waiting for subops from 4" event ? Thanks, Massimo On Wed,

[ceph-users] 3,30,300 GB constraint of block.db size on SSD

2019-09-29 Thread Massimo Sgaravatto
In my ceph cluster I use spinning disks for bluestore OSDs and SSDs just for the block.db. If I have got it right, right now: a) only 3, 30, or 300 GB can effectively be used on the SSD before rocksdb spills over to the slow device, so you don't have any benefit with e.g. 250 GB reserved on the SSD for block.db wrt a
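
Whether a given block.db is actually spilling over can be checked per OSD from the bluefs counters; a hedged example (the OSD id is a placeholder, counter names as in Luminous/Nautilus):

  ceph daemon osd.12 perf dump bluefs
  # compare "db_used_bytes" with "db_total_bytes";
  # a non-zero "slow_used_bytes" means rocksdb has spilled onto the slow device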

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-14 Thread Massimo Sgaravatto
As I wrote here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/037909.html I saw the same after an update from Luminous to Nautilus 14.2.6 Cheers, Massimo On Tue, Jan 14, 2020 at 7:45 PM Liam Monahan wrote: > Hi, > > I am getting one inconsistent object on our cluster with

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-15 Thread Massimo Sgaravatto
I guess this is coming from: https://github.com/ceph/ceph/pull/30783 introduced in Nautilus 14.2.5 On Wed, Jan 15, 2020 at 8:10 AM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > As I wrote here: > > > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-J

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-15 Thread Massimo Sgaravatto
e of 128M? I’m thinking about > doubling that limit to 256MB on our cluster. Our largest object is only > about 10% over that limit. > > On Jan 15, 2020, at 3:51 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > I guess this is coming from:

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-16 Thread Massimo Sgaravatto
nables are > adjusted. > > On Jan 15, 2020, at 10:56 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > > I never changed the default value for that attribute > > I am missing why I have such big objects around > > I am also wondering what a pg repair would d

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-16 Thread Massimo Sgaravatto
And I confirm that a repair is not useful. As far as I can see it simply "cleans" the error (without modifying the big object), but the error of course reappears when the deep scrub runs again on that PG Cheers, Massimo On Thu, Jan 16, 2020 at 9:35 AM Massimo Sgaravatto < ma

[ceph-users] PGs inconsistents because of "size_too_large"

2020-01-14 Thread Massimo Sgaravatto
I have just finished the update of a ceph cluster from luminous to nautilus. Everything seems to be running, but I keep receiving notifications (about ~10 so far, involving different PGs and different OSDs) of PGs in an inconsistent state. rados list-inconsistent-obj pg-id --format=json-pretty (an
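
For context, the warning compares the object size against osd_max_object_size, whose default was lowered to 128 MiB in Nautilus; a hedged way to inspect and, if really appropriate, relax it (the pg id and OSD id are placeholders):

  rados list-inconsistent-obj 13.4 --format=json-pretty
  ceph daemon osd.12 config get osd_max_object_size    # run on the OSD host
  ceph config set osd osd_max_object_size 268435456    # 256 MiB; only if such large objects are legitimate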

Re: [ceph-users] PGs inconsistents because of "size_too_large"

2020-01-14 Thread Massimo Sgaravatto
nel(cluster) log [ERR] : 13.4 soid 13:25e2d1bd:::%2fhbWPh36KajAKcJUlCjG9XdqLGQMzkwn3NDrrLDi_mTM%2ffile8:head : size 385888256 > 134217728 is too large On Tue, Jan 14, 2020 at 11:02 AM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > I have just finished the update of a ce