Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Fulvio Galeazzi
Hallo, as I spent the whole afternoon on a similar issue... :-) Run purge (it will also remove the ceph packages; I am assuming you don't care much about the existing stuff), then on all nodes (mon/osd/admin) remove /var/lib/ceph/ with rm -rf; on the OSD nodes make sure you mount all partitions, then
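For reference, a minimal sketch of that sequence, assuming ceph-deploy is the deployment tool (the thread does not say which one was used) and with hypothetical host and device names:

    # from the admin node
    ceph-deploy purge node1 node2 node3        # removes the ceph packages
    ceph-deploy purgedata node1 node2 node3    # wipes /var/lib/ceph and /etc/ceph
    # on each mon/osd/admin node, make sure nothing is left behind
    rm -rf /var/lib/ceph/
    # on the OSD nodes, mount the data partitions before cleaning/re-deploying
    mount /dev/sdb1 /mnt/osd0                  # device and mount point are placeholders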

[ceph-users] Questions on rbd-mirror

2017-03-24 Thread Fulvio Galeazzi
Hallo, apologies for my (silly) questions, I did try to find some documentation on rbd-mirror but was unable to, apart from a number of pages explaining how to install it. My environment is CentOS 7 and Ceph 10.2.5. Can anyone help me understand a few minor things: - is there a cleaner way to configure
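For anyone landing here later, the basic pool-mode setup on 10.2.x looks roughly like the sketch below; pool and cluster names are placeholders and depend on how the peer clusters are named locally:

    # on both clusters: enable pool-level mirroring for the pool "volumes"
    rbd mirror pool enable volumes pool
    # on the backup cluster: register the primary cluster as a peer
    rbd mirror pool peer add volumes client.admin@primary
    # run the mirroring daemon on the backup cluster
    systemctl enable ceph-rbd-mirror@admin
    systemctl start ceph-rbd-mirror@admin
    # verify
    rbd mirror pool status volumes --verbose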

[ceph-users] Disk activation issue on 10.2.9, too (Re: v11.2.0 Disk activation issue while booting)

2017-07-21 Thread Fulvio Galeazzi
Hallo David, all, sorry for hijacking the thread, but I am seeing the same issue, although on 10.2.7/10.2.9... Note that I am using disks taken from a SAN, so the GUIDs in my case are those relevant to MPATH. As per other messages in this thread, I modified: -
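As a quick way to check what udev actually sees on such devices and to retry activation by hand, something along these lines can help (device and partition names are hypothetical, and multipath partition naming varies between setups):

    # show the partition-type GUID that the ceph udev rules match on
    sgdisk -i 1 /dev/mapper/mpatha
    udevadm info --query=property --name=/dev/mapper/mpatha1 | grep ID_PART_ENTRY_TYPE
    # retry activation once the rules match
    ceph-disk trigger /dev/mapper/mpatha1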

Re: [ceph-users] Disk activation issue on 10.2.9, too (Re: v11.2.0 Disk activation issue while booting)

2017-07-21 Thread Fulvio Galeazzi
Hallo again, replying to my own message to provide some more info, and to ask one more question. Not sure I mentioned it, but I am on CentOS 7.3. I tried to insert a sleep in ExecStartPre in /usr/lib/systemd/system/ceph-osd@.service but apparently all the ceph-osd instances are started (and retried) at the
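Rather than editing the packaged unit file, the same delay experiment can be done with a systemd drop-in, which survives package updates (the 30-second value is arbitrary):

    # /etc/systemd/system/ceph-osd@.service.d/10-delay.conf
    [Service]
    # ExecStartPre lines are cumulative, so this adds to (does not replace) the packaged one
    ExecStartPre=/bin/sleep 30

    # then reload systemd before restarting the OSDs
    systemctl daemon-reload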

Re: [ceph-users] Blocked requests

2017-12-13 Thread Fulvio Galeazzi
Hallo Matthew, I am now facing the same issue and found this message of yours. Were you eventually able to figure out what the problem was with erasure-coded pools? At first sight, the bugzilla page linked by Brian does not seem to specifically mention erasure-coded pools... Thanks for
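For the record, the usual first steps when chasing blocked requests are along these lines (the OSD id is a placeholder; the daemon commands run on the host carrying that OSD):

    ceph health detail                         # lists the OSDs reporting blocked/slow requests
    ceph daemon osd.12 dump_ops_in_flight      # what the OSD is currently stuck on
    ceph daemon osd.12 dump_historic_ops       # recently completed slow ops, with per-step timings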

Re: [ceph-users] Blocked requests

2017-12-14 Thread Fulvio Galeazzi
the instability you mention, experimenting with BlueStore looks like a better alternative. Thanks again Fulvio Original Message Subject: Re: [ceph-users] Blocked requests From: Matthew Stroud <mattstr...@overstock.com> To: Fulvio Galeazzi <ful

[ceph-users] About "ceph balancer": typo in doc, restrict by class

2018-05-28 Thread Fulvio Galeazzi
Hallo, I am using 12.2.4 and started using "ceph balancer". Indeed it does a great job, thanks! I have a few comments: - in the documentation http://docs.ceph.com/docs/master/mgr/balancer/ I think there is an error, since ceph config set mgr mgr/balancer/max_misplaced .07
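For context, a typical balancer session on 12.2.x looks roughly like this; the last line shows the config-key form of the misplaced-ratio throttle, which (if memory serves) is what Luminous expects rather than the "ceph config set" form quoted above:

    ceph mgr module enable balancer
    ceph balancer mode crush-compat       # or "upmap" if all clients are Luminous or newer
    ceph balancer eval                    # score of the current data distribution
    ceph balancer on
    ceph balancer status
    ceph config-key set mgr/balancer/max_misplaced .07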

Re: [ceph-users] SSD recommendation

2018-05-31 Thread Fulvio Galeazzi
Hallo Simon, I am also about to buy some new hardware, and for ~400 GB SATA drives I was considering the Micron 5200 MAX, rated at 5 DWPD, for journaling/FS metadata. Is anyone using such drives, and to what degree of satisfaction? Thanks Fulvio Original Message

[ceph-users] Missing udev rule for FC disks (Re: mkjournal error creating journal ... : (13) Permission denied)

2018-01-19 Thread Fulvio Galeazzi
Hallo, apologies for reviving an old thread, but I just wasted another full day because I had forgotten about this issue... To recap, udev rules nowadays do not (at least in my case, I am using disks served via Fibre Channel) create the links under /dev/disk/by-partuuid that ceph-disk expects.
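A minimal sketch of the kind of local rule that restores those links, assuming the blkid builtin populates ID_PART_ENTRY_UUID for these partitions (the file name is hypothetical; multipath/dm partitions report DEVTYPE="disk" and would need a different match):

    # /etc/udev/rules.d/61-by-partuuid-local.rules
    ACTION=="add|change", ENV{DEVTYPE}=="partition", IMPORT{builtin}="blkid"
    ACTION=="add|change", ENV{DEVTYPE}=="partition", ENV{ID_PART_ENTRY_UUID}=="?*", SYMLINK+="disk/by-partuuid/$env{ID_PART_ENTRY_UUID}"

    # reload the rules and re-trigger to (re)create the symlinks
    udevadm control --reload
    udevadm trigger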

Re: [ceph-users] Missing udev rule for FC disks (Re: mkjournal error creating journal ... : (13) Permission denied)

2018-01-23 Thread Fulvio Galeazzi
ected. Cheers Tom -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Fulvio Galeazzi Sent: 19 January 2018 15:46 To: ceph-users@lists.ceph.com Subject: [ceph-users] Missing udev rule for FC disks (Re: mkjournal error creating journal ... : (13) Permiss

[ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-03-12 Thread Fulvio Galeazzi
Hallo all, I am not sure RBD discard is working in my setup, and I am asking for your help. (I searched this mailing list for related messages and found one by Nathan Harper from 29th Jan 2018, "Debugging fstrim issues", which however mentions trimming was masked by logging... so
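For reference, the pieces that have to line up for discard/fstrim to reach RBD in an OpenStack setup are roughly these (the image id is a placeholder):

    # nova.conf on the compute nodes
    [libvirt]
    hw_disk_discard = unmap

    # the guest disk must sit on a bus that supports discard, e.g. virtio-scsi
    openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image-id>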

Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-03-13 Thread Fulvio Galeazzi
Hallo Jason, thanks for your feedback! Original Message >> * decorated a CentOS image with hw_scsi_model=virtio--scsi,hw_disk_bus=scsi> > Is that just a typo for "hw_scsi_model"? Yes, it was a typo when I wrote my message. The image has virtio-scsi as it should. I

Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-03-13 Thread Fulvio Galeazzi
Hallo! Discards appear like they are being sent to the device. How big of a temporary file did you create and then delete? Did you sync the file to disk before deleting it? What version of qemu-kvm are you running? I made several tests with commands like (issuing sync after each operation):
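A test sequence of that kind might look as follows (file size and mount point are arbitrary; the pool name is elided in the thread):

    # inside the guest: create, sync, delete, sync, then trim
    dd if=/dev/zero of=/mnt/testfile bs=1M count=4096 conv=fsync
    sync
    rm /mnt/testfile
    sync
    fstrim -v /mnt

    # from a Ceph client: compare the real usage of the backing image before and after
    rbd du <pool>/volume-80838a69-e544-47eb-b981-a4786be89736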

Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-03-14 Thread Fulvio Galeazzi
ill...@redhat.com> To: Fulvio Galeazzi <fulvio.galea...@garr.it> CC: Ceph Users <ceph-users@lists.ceph.com> Date: 03/13/2018 06:33 PM Can you provide the output from "rbd info /volume-80838a69-e544-47eb-b981-a4786be89736"? On Tue, Mar 13, 2018 at 12:30 PM, Fulvio Galeazzi <

Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-03-15 Thread Fulvio Galeazzi
so I present single disks as RAID0) and XFS formatted. Thanks! Fulvio Original Message Subject: Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap From: Jason Dillaman <jdill...@redhat.com> To: Fulvio Galeazzi <fulvio.galea...@

Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-04-09 Thread Fulvio Galeazzi
ard=unmap From: Jason Dillaman <jdill...@redhat.com> To: Fulvio Galeazzi <fulvio.galea...@garr.it> CC: Ceph Users <ceph-users@lists.ceph.com> Date: 03/15/2018 01:35 PM OK, last suggestion just to narrow the issue down: ensure you have a functional admin socket and librbd log file

[ceph-users] Admin socket on a pure client: is it possible?

2018-04-09 Thread Fulvio Galeazzi
Hallo, I am wondering whether I could have the admin socket functionality enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever running on such server). Is this at all possible? How should ceph.conf be configured? Documentation pages led me to write something like this:
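For reference, the client-side admin socket is normally enabled with a [client] section of this shape; the paths are examples, and the directory must be writable by the client process (e.g. the qemu/libvirt user):

    # /etc/ceph/ceph.conf on the client host
    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/ceph/client.$pid.log

    # then, once a client (e.g. librbd inside qemu) is running:
    ceph --admin-daemon /var/run/ceph/<socket>.asok perf dump    # socket name will include the pid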

[ceph-users] SOLVED Re: Luminous "ceph-disk activate" issue

2018-03-16 Thread Fulvio Galeazzi
Message Subject: Re: [ceph-users] Luminous "ceph-disk activate" issue From: Fulvio Galeazzi <fulvio.galea...@garr.it> To: Paul Emmerich <paul.emmer...@croit.io> CC: Ceph Users <ceph-users@lists.ceph.com> Date: 03/16/2018 04:58 PM Hallo Paul, You're correct of

[ceph-users] Luminous "ceph-disk activate" issue

2018-03-16 Thread Fulvio Galeazzi
Hallo, I am on Jewel 10.2.10 and willing to upgrade to Luminous. I thought I'd proceed the same way as for the upgrade to Jewel, by running ceph-ansible on the OSD nodes one by one, then on the MON nodes one by one. ---> Is this a sensible way to upgrade to Luminous? Problem: on the first OSD node
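For comparison, the order suggested by the Luminous release notes is monitors first, then OSDs (ceph-ansible's rolling_update playbook should follow the same order); condensed, a manual rolling upgrade looks roughly like this:

    ceph osd set noout                       # avoid rebalancing while daemons restart
    # upgrade packages and restart the monitors, one node at a time
    systemctl restart ceph-mon.target
    # then upgrade and restart the OSD nodes, one at a time, checking "ceph -s" in between
    systemctl restart ceph-osd.target
    # once every OSD runs Luminous
    ceph osd require-osd-release luminous
    ceph osd unset noout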

Re: [ceph-users] Luminous "ceph-disk activate" issue

2018-03-16 Thread Fulvio Galeazzi
using ceph-ansible? Thanks for your time and help! Fulvio Original Message Subject: Re: [ceph-users] Luminous "ceph-disk activate" issue From: Paul Emmerich <paul.emmer...@croit.io> To: Fulvio Galeazzi <fulvio.galea...@garr.it> CC

[ceph-users] Luminous (12.2.8 on CentOS), recover or recreate incomplete PG

2018-12-18 Thread Fulvio Galeazzi
Hallo Cephers, I am stuck with an incomplete PG and am seeking help. At some point I had a bad configuration for Gnocchi which caused a flood of tiny objects in the backend Ceph RADOS pool. While cleaning things up, the load on the OSD disks was such that 3 of them "committed suicide"

Re: [ceph-users] Migrate/convert replicated pool to EC?

2019-01-10 Thread Fulvio Galeazzi
Hallo, I have the same issue as mentioned here, namely converting/migrating a replicated pool to an EC-based one. I have ~20 TB so my problem is far easier, but I'd like to perform this operation without introducing any downtime (or possibly just a minimal one, to rename pools). I am
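On Luminous the target EC pool can at least be prepared as follows; what this sketch does not solve is moving the existing data, which still needs an image-by-image copy (pool names, k/m values and PG counts are placeholders):

    ceph osd erasure-code-profile set myprofile k=4 m=2
    ceph osd pool create mypool-ec 128 128 erasure myprofile
    ceph osd pool set mypool-ec allow_ec_overwrites true     # requires BlueStore OSDs
    ceph osd pool application enable mypool-ec rbd
    # new RBD images keep their metadata on a replicated pool and their data on the EC pool
    rbd create --size 100G --data-pool mypool-ec rbd/myimage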

Re: [ceph-users] Luminous (12.2.8 on CentOS), recover or recreate incomplete PG

2018-12-19 Thread Fulvio Galeazzi
might have success setting osd_find_best_info_ignore_history_les=true on the relevant osds (set it in the conf, restart those osds). -- dan On Tue, Dec 18, 2018 at 11:31 AM Fulvio Galeazzi wrote: Hallo Cephers, I am stuck with an incomplete PG and am seeking help. At some point I
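In practice that advice translates into something like the snippet below; the OSD id is a placeholder, the option is a last-resort one that can discard recent writes, and it should be removed again once the PG recovers:

    # ceph.conf on the host carrying the relevant OSD
    [osd.12]
        osd find best info ignore history les = true

    # restart that OSD, watch the PG, then remove the option and restart again
    systemctl restart ceph-osd@12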

[ceph-users] What is the best way to "move" rgw.buckets.data pool to another cluster?

2019-06-28 Thread Fulvio Galeazzi
Hallo! Due to severe maintenance which is going to cause a prolonged shutdown, I need to move my RGW pools to a different cluster (and geographical site): my problem is with default.rgw.buckets.data pool, which is now 100 TB. Moreover, I'd also like to take advantage of the move to convert

Re: [ceph-users] What is the best way to "move" rgw.buckets.data pool to another cluster?

2019-06-28 Thread Fulvio Galeazzi
Hallo again, to reply to my own message... I guess the easiest will be to set up multisite replication. So now I will fight a bit with this and get back to the list in case of trouble. Sorry for the noise... Fulvio On 06/28/2019 10:36 AM, Fulvio Galeazzi wrote
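The skeleton of that setup, for anyone searching later, is roughly the following; realm/zonegroup/zone names, endpoints and the system user's credentials are all placeholders:

    # on the existing cluster (future master zone)
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=mygroup --endpoints=http://rgw1:8080 --master --default
    radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=site-a --endpoints=http://rgw1:8080 --master --default
    radosgw-admin user create --uid=sync --display-name="sync user" --system
    radosgw-admin period update --commit

    # on the new cluster (secondary zone), using the system user's key/secret
    radosgw-admin realm pull --url=http://rgw1:8080 --access-key=<key> --secret=<secret>
    radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=site-b --endpoints=http://rgw2:8080 --access-key=<key> --secret=<secret>
    radosgw-admin period update --commit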