Hallo,
as I spent the whole afternoon on a similar issue... :-)
Run purge (it will also remove the Ceph packages; I am assuming you
don't care much about the existing stuff), then on all nodes
(mon/osd/admin) remove the Ceph state directory:
rm -rf /var/lib/ceph/
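By "purge" I mean something along these lines, assuming ceph-deploy is
what you use (host names are placeholders, and the commands are
destructive, so double-check first):

   ceph-deploy purge     node1 node2 node3    # removes the ceph packages
   ceph-deploy purgedata node1 node2 node3    # removes leftovers in /var/lib/ceph and /etc/ceph

The purgedata step overlaps with the rm -rf above, but it does no harm.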
on OSD nodes make sure you mount all partitions, then
Hallo, apologies for my (silly) questions, I did try to find some doc on
rbd-mirror but was unable to, apart from a number of pages explaining
how to install it.
My environment is CentOS 7 and Ceph 10.2.5.
Can anyone help me understand a few minor things:
- is there a cleaner way to configure
Hallo David, all,
sorry for hijacking the thread, but I am seeing the same issue,
although on 10.2.7/10.2.9...
Note that I am using disks taken from a SAN, so the GUIDs in my case are
those relevant to MPATH.
As per other messages in this thread, I modified:
-
Hallo again, replying to my own message to provide some more info, and
ask one more question.
Not sure I mentioned, but I am on CentOS 7.3.
I tried to insert a sleep in ExecStartPre in
/usr/lib/systemd/system/ceph-osd@.service but apparently all ceph-osd
are started (and retried) at the
Hallo Matthew,
I am now facing the same issue and found this message of yours.
Were you eventually able to figure out what the problem was with
erasure-coded pools?
At first sight, the bugzilla page linked by Brian does not seem to
specifically mention erasure-coded pools...
Thanks for
the instability you mention, experimenting with
BlueStore looks like a better alternative.
Thanks again
Fulvio
Original Message
Subject: Re: [ceph-users] Blocked requests
From: Matthew Stroud <mattstr...@overstock.com>
To: Fulvio Galeazzi <ful
Hallo,
I am using 12.2.4 and started using "ceph balancer". Indeed it does
a great job, thanks!
I have a few comments:
- in the documentation http://docs.ceph.com/docs/master/mgr/balancer/
I think there is an error, since
ceph config set mgr mgr/balancer/max_misplaced .07
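For reference, a minimal sequence to drive the balancer looks roughly
like this (on 12.2.x I believe the max_misplaced knob is stored as a
config-key; the value and mode below are only examples):

   ceph config-key set mgr/balancer/max_misplaced .07
   ceph balancer mode crush-compat
   ceph balancer on
   ceph balancer status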
Hallo Simon,
I am also about to buy some new hardware, and for SATA ~400 GB I was
considering the Micron 5200 MAX, rated at 5 DWPD, for journaling/FS metadata.
Is anyone using such drives, and to what degree of satisfaction?
Thanks
Fulvio
Original Message
Hallo,
apologies for reviving an old thread, but I have just wasted another
full day because I had forgotten about this issue...
To recap, the udev rules nowadays do not (at least in my case, I am
using disks served via Fibre Channel) create the /dev/disk/by-partuuid
links that ceph-disk expects.
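As a stop-gap, the missing links can be recreated by hand from the
partition GUIDs; a rough sketch (the multipath device name and the
partition numbers are placeholders):

   mkdir -p /dev/disk/by-partuuid
   disk=/dev/mapper/mpatha
   for n in 1 2; do
       uuid=$(sgdisk -i "$n" "$disk" | awk -F': ' '/Partition unique GUID/ {print tolower($2)}')
       ln -sf "${disk}${n}" "/dev/disk/by-partuuid/${uuid}"
   done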
Cheers
Tom
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Fulvio
Galeazzi
Sent: 19 January 2018 15:46
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Missing udev rule for FC disks (Re: mkjournal error
creating journal ... : (13) Permission denied)
Hallo all,
I am not sure RBD discard is working in my setup, and I am asking
for your help.
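As a first sanity check inside the guest I look at whether the virtual
disk advertises discard support at all, for instance (the device name
is just an example):

   lsblk -D /dev/sdb    # non-zero DISC-GRAN / DISC-MAX means discards can be issued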
(I searched this mailing list for related messages and found one by
Nathan Harper from 29 Jan 2018, "Debugging fstrim issues", which however
mentions that trimming was masked by logging... so
Hallo Jason,
thanks for your feedback!
Original Message
>> * decorated a CentOS image with hw_scsi_model=virtio--scsi,hw_disk_bus=scsi
> Is that just a typo for "hw_scsi_model"?
Yes, it was a typo when I wrote my message. The image has virtio-scsi as
it should.
I
Hallo!
Discards appear like they are being sent to the device. How big of a
temporary file did you create and then delete? Did you sync the file
to disk before deleting it? What version of qemu-kvm are you running?
I made several tests with commands like the following (issuing sync after each operation):
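A representative sequence, with sizes and paths only as examples:

   dd if=/dev/zero of=/mnt/vol/bigfile bs=1M count=4096   # write a ~4 GB file
   sync
   rm /mnt/vol/bigfile
   sync
   fstrim -v /mnt/vol
   sync
   # then compare "rbd du" / pool usage on the Ceph side before and after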
From: Jason Dillaman <jdill...@redhat.com>
To: Fulvio Galeazzi <fulvio.galea...@garr.it>
CC: Ceph Users <ceph-users@lists.ceph.com>
Date: 03/13/2018 06:33 PM
Can you provide the output from "rbd info /volume-80838a69-e544-47eb-b981-a4786be89736"?
On Tue, Mar 13, 2018 at 12:30 PM, Fulvio Galeazzi
<
so I present single disks as RAID0) and XFS formatted.
Thanks!
Fulvio
Original Message
Subject: Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap
From: Jason Dillaman <jdill...@redhat.com>
To: Fulvio Galeazzi <fulvio.galea...@
Subject: Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap
From: Jason Dillaman <jdill...@redhat.com>
To: Fulvio Galeazzi <fulvio.galea...@garr.it>
CC: Ceph Users <ceph-users@lists.ceph.com>
Date: 03/15/2018 01:35 PM
OK, last suggestion just to narrow the issue down: ensure you have a
functional admin socket and librbd log file
Hallo,
I am wondering whether I could have the admin socket functionality
enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever
running on such server). Is this at all possible? How should ceph.conf
be configured? Documentation pages led me to write something like this:
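A minimal sketch of what I mean (paths are examples, and the socket/log
directories must be writable by the client process, e.g. qemu):

   [client]
       admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
       log file = /var/log/ceph/client.$pid.log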
Original Message
Subject: Re: [ceph-users] Luminous "ceph-disk activate" issue
From: Fulvio Galeazzi <fulvio.galea...@garr.it>
To: Paul Emmerich <paul.emmer...@croit.io>
CC: Ceph Users <ceph-users@lists.ceph.com>
Date: 03/16/2018 04:58 PM
Hallo Paul,
You're correct of
Hallo,
I am on Jewel 10.2.10 and planning to upgrade to Luminous. I thought
I'd proceed the same way as for the upgrade to Jewel, by running
ceph-ansible on the OSD nodes one by one, then on the MON nodes one by one.
---> Is this a sensible way to upgrade to Luminous?
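(A possibly cleaner alternative I am wondering about is ceph-ansible's
own rolling-upgrade playbook, which I believe upgrades the MONs before
the OSDs, roughly:

   ansible-playbook -i <inventory> infrastructure-playbooks/rolling_update.yml -e ireallymeanit=yes

with ceph_stable_release set to luminous in group_vars. Would that be
the preferred route?)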
Problem: on first OSD node
using ceph-ansible?
Thanks for your time and help!
Fulvio
Original Message
Subject: Re: [ceph-users] Luminous "ceph-disk activate" issue
From: Paul Emmerich <paul.emmer...@croit.io>
To: Fulvio Galeazzi <fulvio.galea...@garr.it>
CC
Hallo Cephers,
I am stuck with an incomplete PG and am seeking help.
At some point I had a bad configuration for gnocchi which flooded the
backend Ceph RADOS pool with tiny objects. While cleaning things up, the
load on the OSD disks was such that 3 of them "committed suicide"
Hallo,
I have the same issue as mentioned here, namely
converting/migrating a replicated pool to an EC-based one. I have ~20 TB
so my problem is far easier, but I'd like to perform this operation
without introducing any downtime (or possibly just a minimal one, to
rename pools).
I am
might have
success setting osd_find_best_info_ignore_history_les=true on the
relevant osds (set it in the conf, then restart those osds).
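Concretely, something like this (the osd id is a placeholder, and the
option should be removed again once the PG has recovered):

   # ceph.conf on the host of each relevant osd
   [osd.12]
       osd find best info ignore history les = true

   systemctl restart ceph-osd@12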
-- dan
On Tue, Dec 18, 2018 at 11:31 AM Fulvio Galeazzi
wrote:
Hallo Cephers,
I am stuck with an incomplete PG and am seeking help.
At some point I
Hallo!
Due to severe maintenance which is going to cause a prolonged
shutdown, I need to move my RGW pools to a different cluster (and
geographical site): my problem is with default.rgw.buckets.data pool,
which is now 100 TB.
Moreover, I'd also like to take advantage of the move to convert
Hallo again, replying to my own message... I guess the easiest will be
to set up multisite replication.
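The rough plan, with endpoints and keys as placeholders, is to pull the
existing realm into the new cluster and add it as a secondary zone:

   radosgw-admin realm pull  --url=http://old-rgw:8080 --access-key=SYSKEY --secret=SYSSECRET
   radosgw-admin period pull --url=http://old-rgw:8080 --access-key=SYSKEY --secret=SYSSECRET
   radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=newsite \
        --endpoints=http://new-rgw:8080 --access-key=SYSKEY --secret=SYSSECRET
   radosgw-admin period update --commit
   systemctl restart ceph-radosgw.target   # on the new cluster's RGW hosts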
So now I will fight a bit with this and get back to the list in case of
troubles.
Sorry for the noise...
Fulvio
On 06/28/2019 10:36 AM, Fulvio Galeazzi wrote