Re: [ceph-users] ceph-volume lvm create leaves half-built OSDs lying around

2019-09-11 Thread Alfredo Deza
On Wed, Sep 11, 2019 at 6:18 AM Matthew Vernon wrote: > > Hi, > > We keep finding part-made OSDs (they appear not attached to any host, > and down and out; but still counting towards the number of OSDs); we > never saw this with ceph-disk. On investigation, this is because > ceph-volume lvm
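
A minimal cleanup sketch for one of those stray, half-created OSDs, assuming a hypothetical osd.12 left behind on /dev/sdq (verify the real id and device with `ceph osd tree` and `ceph-volume lvm list` first):

    ceph osd purge 12 --yes-i-really-mean-it    # removes the half-made OSD from the CRUSH map, auth, and the OSD map
    ceph-volume lvm zap /dev/sdq --destroy      # wipes the LVs ceph-volume left behind so the disk can be reused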

Re: [ceph-users] Upgrading and lost OSDs

2019-07-26 Thread Alfredo Deza
ob > > On Wed, Jul 24, 2019 at 1:24 PM Alfredo Deza wrote: > >> >> >> On Wed, Jul 24, 2019 at 4:15 PM Peter Eisch >> wrote: >> >>> Hi, >>> >>> >>> >>> I appreciate the insistency that the directions be followed. I who

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Alfredo Deza

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Alfredo Deza

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Alfredo Deza
On Wed, Jul 24, 2019 at 2:56 PM Peter Eisch wrote: > Hi Paul, > > To do better to answer you question, I'm following: > http://docs.ceph.com/docs/nautilus/releases/nautilus/ > > At step 6, upgrade OSDs, I jumped on an OSD host and did a full 'yum > update' for patching the host and rebooted to

Re: [ceph-users] ceph-volume failed after replacing disk

2019-07-05 Thread Alfredo Deza
On Fri, Jul 5, 2019 at 6:23 AM ST Wong (ITSC) wrote: > > Hi, > > > > I intended to run just destroy and re-use the ID as stated in the manual, but it seems > not to be working. > > It seems I’m unable to re-use the ID? The OSD replacement guide does not mention anything about crush and auth commands. I believe
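
A minimal sketch of the destroy-and-reuse flow being discussed, assuming a hypothetical osd.7 backed by /dev/sdd (check the real id and device with `ceph osd tree` and `ceph-volume lvm list` first):

    ceph osd destroy 7 --yes-i-really-mean-it          # keeps the id allocated; no 'crush remove' or 'auth del' needed
    ceph-volume lvm zap /dev/sdd --destroy             # wipe the old/replacement device
    ceph-volume lvm create --osd-id 7 --data /dev/sdd  # recreate the OSD reusing the same id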

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Alfredo Deza
Prof. Dr.-Ing. Harald Bolt, > Prof. Dr. Sebastian M. Schmidt > - > --------- > > > Am 27.06.19, 15:09 schrieb "Alfredo Deza" : > > Although ceph-volume does a best-effort to support custom cluster > names,

Re: [ceph-users] pgs incomplete

2019-06-27 Thread Alfredo Deza
On Thu, Jun 27, 2019 at 10:36 AM ☣Adam wrote: > Well that caused some excitement (either that or the small power > disruption did)! One of my OSDs is now down because it keeps crashing > due to a failed assert (stacktraces attached, also I'm apparently > running mimic, not luminous). > > In the

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-27 Thread Alfredo Deza
Although ceph-volume makes a best effort to support custom cluster names, the Ceph project no longer supports custom cluster names, even though you can still see settings/options that will allow you to set one. For reference see: https://bugzilla.redhat.com/show_bug.cgi?id=1459861 On Thu, Jun

Re: [ceph-users] Changing the release cadence

2019-06-25 Thread Alfredo Deza
On Mon, Jun 17, 2019 at 4:09 PM David Turner wrote: > > This was a little long to respond with on Twitter, so I thought I'd share my > thoughts here. I love the idea of a 12 month cadence. I like October because > admins aren't upgrading production within the first few months of a new >

Re: [ceph-users] Lost OSD from PCIe error, recovered, HOW to restore OSD process

2019-05-19 Thread Alfredo Deza
; >> >> "Tarek Zegar" ---05/15/2019 10:32:27 AM--- TLDR; I activated the drive successfully >> but the daemon wo

Re: [ceph-users] Lost OSD from PCIe error, recovered, to restore OSD process

2019-05-15 Thread Alfredo Deza
On Tue, May 14, 2019 at 7:24 PM Bob R wrote: > > Does 'ceph-volume lvm list' show it? If so you can try to activate it with > 'ceph-volume lvm activate 122 74b01ec2--124d--427d--9812--e437f90261d4' Good suggestion. If `ceph-volume lvm list` can see it, it can probably activate it again. You can
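
A sketch of the list-then-activate flow suggested above, with placeholder id and fsid (read the real values from the `ceph-volume lvm list` output):

    ceph-volume lvm list                          # shows each OSD's id, fsid, and backing LV from the LVM tags
    ceph-volume lvm activate <osd-id> <osd-fsid>  # re-enables the systemd unit and mounts the tmpfs work dir
    ceph-volume lvm activate --all                # or bring up every OSD ceph-volume can discover on the host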

Re: [ceph-users] ceph-volume ignores cluster name?

2019-05-13 Thread Alfredo Deza
On Mon, May 13, 2019 at 6:56 PM wrote: > > All; > > I'm working on spinning up a demonstration cluster using ceph, and yes, I'm > installing it manually, for the purpose of learning. > > I can't seem to correctly create an OSD, as ceph-volume seems to only work if > the cluster name is the

Re: [ceph-users] Custom Ceph-Volume Batch with Mixed Devices

2019-05-10 Thread Alfredo Deza
n configuration file? This might be a good example of why I am recommending against it: tools will probably not support it. I don't think you can make ceph-ansible do this, unless you are pre-creating the LVs, which, if using Ansible, shouldn't be too hard anyway > > Best regards, > > On

Re: [ceph-users] Custom Ceph-Volume Batch with Mixed Devices

2019-05-10 Thread Alfredo Deza
On Fri, May 10, 2019 at 2:43 PM Lazuardi Nasution wrote: > > Hi, > > Let's say I have following devices on a host. > > /dev/sda > /dev/sdb > /dev/nvme0n1 > > How can I do ceph-volume batch which create bluestore OSD on HDDs and NVME > (devided to be 4 OSDs) and put block.db of HDDs on the NVME
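
Since ceph-volume will not carve the NVMe into four data LVs plus block.db LVs on its own, one approach (a sketch only; VG/LV names and sizes are made up for illustration) is to pre-create the LVs and run `ceph-volume lvm create` per OSD:

    vgcreate nvme_vg /dev/nvme0n1
    lvcreate -n db-sda -L 60G nvme_vg                 # one block.db LV per HDD-backed OSD
    lvcreate -n db-sdb -L 60G nvme_vg
    lvcreate -n osd-nvme-0 -L 300G nvme_vg            # data LVs if the NVMe is also split into its own OSDs
    ceph-volume lvm create --bluestore --data /dev/sda --block.db nvme_vg/db-sda
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db nvme_vg/db-sdb
    ceph-volume lvm create --bluestore --data nvme_vg/osd-nvme-0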

Re: [ceph-users] ceph-volume activate runs infinitely

2019-05-02 Thread Alfredo Deza
On Thu, May 2, 2019 at 8:28 AM Robert Sander wrote: > > Hi, > > On 02.05.19 13:40, Alfredo Deza wrote: > > > Can you give a bit more details on the environment? How dense is the > > server? If the unit retries is fine and I was hoping at some point it > >

Re: [ceph-users] ceph-volume activate runs infinitely

2019-05-02 Thread Alfredo Deza
On Thu, May 2, 2019 at 5:27 AM Robert Sander wrote: > > Hi, > > The ceph-volume@.service units on an Ubuntu 18.04.2 system > run indefinitely and do not finish. > > Only after we create this override config does the system boot again: > > # /etc/systemd/system/ceph-volume@.service.d/override.conf >

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
nger exists. Do you have output on how it failed before? > > > On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza wrote: >> >> On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote: >> > >> > Hello, >> > I have a server with 18 disks, and 17 OSD daem

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote: > > Hello, > I have a server with 18 disks, and 17 OSD daemons configured. One of the OSD > daemons failed to deploy with ceph-deploy. The reason for failing is > unimportant at this point, I believe it was race condition, as I was running

Re: [ceph-users] bluefs-bdev-expand experience

2019-04-12 Thread Alfredo Deza
On Thu, Apr 11, 2019 at 4:23 PM Yury Shevchuk wrote: > > Hi Igor! > > I have upgraded from Luminous to Nautilus and now slow device > expansion works indeed. The steps are shown below to round up the > topic. > > node2# ceph osd df > ID CLASS WEIGHT REWEIGHT SIZERAW USE DATAOMAPMETA
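
A rough sketch of the slow-device expansion being rounded up here, with hypothetical names (osd.2 and an LV called vg_slow/osd2-block); stop the OSD before touching its devices:

    systemctl stop ceph-osd@2
    lvextend -L +50G /dev/vg_slow/osd2-block            # grow the LV backing the bluestore device
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2
    systemctl start ceph-osd@2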

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-20 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 2:53 PM Benjamin Cherian wrote: > > Hi, > > I'm getting an error when trying to use the APT repo for Ubuntu bionic. Does > anyone else have this issue? Is the mirror sync actually still in progress? > Or was something setup incorrectly? > > E: Failed to fetch >

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-20 Thread Alfredo Deza
There aren't any Debian packages built for this release because we haven't updated the infrastructure to build (and test) Debian packages yet. On Tue, Mar 19, 2019 at 10:24 AM Sean Purdy wrote: > > Hi, > > > Will debian packages be released? I don't see them in the nautilus repo. I > thought

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote: > > On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote: > > > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote: > > > > > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster > > > wrote:

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote: > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote: > > > > Hi all, > > > > We've just hit our first OSD replacement on a host created with > > `ceph-volume lvm batch` with mixed hdds+ssds. > &

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote: > > Hi all, > > We've just hit our first OSD replacement on a host created with > `ceph-volume lvm batch` with mixed hdds+ssds. > > The hdd /dev/sdq was prepared like this: ># ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes > > Then

Re: [ceph-users] OSD after OS reinstallation.

2019-02-22 Thread Alfredo Deza
On Fri, Feb 22, 2019 at 9:38 AM Marco Gaiarin wrote: > > Mandi! Alfredo Deza > In chel di` si favelave... > > > The problem is that if there is no PARTUUID ceph-volume can't ensure > > what device is the one actually pointing to data/journal. Being 'GPT' > >

Re: [ceph-users] OSD after OS reinstallation.

2019-02-21 Thread Alfredo Deza
hat device is the one actually pointing to data/journal. Being 'GPT' alone will not be enough here :( > Wed, 20 Feb 2019 at 17:11, Alfredo Deza : >> >> On Wed, Feb 20, 2019 at 8:40 AM Анатолий Фуников >> wrote: >> > >> > Thanks for the reply. >> > bl

Re: [ceph-users] OSD after OS reinstallation.

2019-02-20 Thread Alfredo Deza
On Wed, Feb 20, 2019 at 10:21 AM Marco Gaiarin wrote: > > Mandi! Alfredo Deza > In chel di` si favelave... > > > I think this is what happens with a non-gpt partition. GPT labels will > > use a PARTUUID to identify the partition, and I just confirmed that > > ce

Re: [ceph-users] OSD after OS reinstallation.

2019-02-20 Thread Alfredo Deza
partition without losing data. My suggestion (if you confirm it is not possible to add the GPT label) is to start the migration towards the new way of creating OSDs > > Wed, 20 Feb 2019 at 16:27, Alfredo Deza : >> >> On Wed, Feb 20, 2019 at 8:16 AM Анатолий Фуников >>

Re: [ceph-users] OSD after OS reinstallation.

2019-02-20 Thread Alfredo Deza
On Wed, Feb 20, 2019 at 8:16 AM Анатолий Фуников wrote: > > Hello. I need to bring the OSDs on this node back up after reinstalling the OS; some > OSDs were made a long time ago, not even with ceph-disk, but with a set of scripts. > There was an idea to get their configuration in JSON via ceph-volume simple >
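
For OSDs that predate ceph-volume, the `simple` subcommand can capture what is needed to restart them; a sketch assuming a hypothetical data partition /dev/sdb1 belonging to a running, mounted OSD:

    ceph-volume simple scan /dev/sdb1      # writes the OSD's metadata as JSON under /etc/ceph/osd/
    ceph-volume simple activate --all      # re-enables the OSDs from those JSON files after the reinstall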

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-18 Thread Alfredo Deza
On Mon, Feb 18, 2019 at 2:46 AM Rainer Krienke wrote: > > Hello, > > thanks for your answer, but zapping the disk did not make any > difference. I still get the same error. Looking at the debug output I > found this error message that is probably the root of all trouble: > > # ceph-volume lvm

Re: [ceph-users] Bluestore deploys to tmpfs?

2019-02-04 Thread Alfredo Deza
On Mon, Feb 4, 2019 at 4:43 AM Hector Martin wrote: > > On 02/02/2019 05:07, Stuart Longland wrote: > > On 1/2/19 10:43 pm, Alfredo Deza wrote: > >>> The tmpfs setup is expected. All persistent data for bluestore OSDs > >>> setup with LVM are stored i
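
The persistent bits live in LVM tags rather than in a mounted data partition, which is why the tmpfs mount can be rebuilt at every activation; a quick way to inspect them (no specific names assumed):

    lvs -o lv_name,vg_name,lv_tags | grep ceph    # the ceph.* tags carry the OSD id, fsid, and device roles
    ceph-volume lvm list                          # the same information, grouped per OSD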

Re: [ceph-users] Problem replacing osd with ceph-deploy

2019-02-04 Thread Alfredo Deza
On Fri, Feb 1, 2019 at 6:07 PM Shain Miley wrote: > > Hi, > > I went to replace a disk today (which I had not had to do in a while) > and after I added it the results looked rather odd compared to times past: > > I was attempting to replace /dev/sdk on one of our osd nodes: > > #ceph-deploy disk

Re: [ceph-users] Problem replacing osd with ceph-deploy

2019-02-04 Thread Alfredo Deza
On Fri, Feb 1, 2019 at 6:35 PM Vladimir Prokofev wrote: > > Your output looks a bit weird, but still, this is normal for bluestore. It > creates small separate data partition that is presented as XFS mounted in > /var/lib/ceph/osd, while real data partition is hidden as raw(bluestore) > block

Re: [ceph-users] Bluestore deploys to tmpfs?

2019-02-01 Thread Alfredo Deza
On Fri, Feb 1, 2019 at 3:08 PM Stuart Longland wrote: > > On 1/2/19 10:43 pm, Alfredo Deza wrote: > >>> I think mounting tmpfs for something that should be persistent is highly > >>> dangerous. Is there some flag I should be using when creating the > >

Re: [ceph-users] Bluestore deploys to tmpfs?

2019-02-01 Thread Alfredo Deza
On Fri, Feb 1, 2019 at 6:28 AM Burkhard Linke wrote: > > Hi, > > On 2/1/19 11:40 AM, Stuart Longland wrote: > > Hi all, > > > > I'm just in the process of migrating my 3-node Ceph cluster from > > BTRFS-backed Filestore over to Bluestore. > > > > Last weekend I did this with my first node, and

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-24 Thread Alfredo Deza
On Thu, Jan 24, 2019 at 4:13 PM mlausch wrote: > > > > Am 24.01.19 um 22:02 schrieb Alfredo Deza: > >> > >> Ok with a new empty journal the OSD will not start. I have now rescued > >> the data with dd and the recrypt it with a other key and copied

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-24 Thread Alfredo Deza
On Thu, Jan 24, 2019 at 3:17 PM Manuel Lausch wrote: > > > > On Wed, 23 Jan 2019 16:32:08 +0100 > Manuel Lausch wrote: > > > > > > > > The key api for encryption is *very* odd and a lot of its quirks are > > > undocumented. For example, ceph-volume is stuck supporting naming > > > files and keys

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Alfredo Deza
On Wed, Jan 23, 2019 at 11:03 AM Dietmar Rieder wrote: > > On 1/23/19 3:05 PM, Alfredo Deza wrote: > > On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote: > >> > >> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote: > >>> Hi, > >>>

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Alfredo Deza
ore of a way to keep existing ceph-disk OSDs and create new ceph-volume OSDs, which you can, as long as this is not Nautilus or newer where ceph-disk doesn't exist > I'm sure there's a way to get them running again, but I imagine you'd rather > not > manually deal with that. > > >

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Alfredo Deza
deployed with ceph-disk. > > Regards > Manuel > > > On Tue, 22 Jan 2019 07:44:02 -0500 > Alfredo Deza wrote: > > > > This is one case we didn't anticipate :/ We supported the wonky > > lockbox setup and thought we wouldn't need to go further back, > > al

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-22 Thread Alfredo Deza
On Tue, Jan 22, 2019 at 6:45 AM Manuel Lausch wrote: > > Hi, > > we want upgrade our ceph clusters from jewel to luminous. And also want > to migrate the osds to ceph-volume described in > http://docs.ceph.com/docs/luminous/ceph-volume/simple/scan/#ceph-volume-simple-scan > > The clusters are

Re: [ceph-users] Problem with OSDs

2019-01-21 Thread Alfredo Deza
On Sun, Jan 20, 2019 at 11:30 PM Brian Topping wrote: > > Hi all, looks like I might have pooched something. Between the two nodes I > have, I moved all the PGs to one machine, reformatted the other machine, > rebuilt that machine, and moved the PGs back. In both cases, I did this by > taking

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Alfredo Deza
On Fri, Jan 18, 2019 at 10:07 AM Jan Kasprzak wrote: > > Alfredo, > > Alfredo Deza wrote: > : On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote: > : > Eugen Block wrote: > : > : > : > : I think you're running into an issue reported a couple of times. &g

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-18 Thread Alfredo Deza
On Fri, Jan 18, 2019 at 7:07 AM Hector Martin wrote: > > On 17/01/2019 00:45, Sage Weil wrote: > > Hi everyone, > > > > This has come up several times before, but we need to make a final > > decision. Alfredo has a PR prepared that drops Python 2 support entirely > > in master, which will mean

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-01-18 Thread Alfredo Deza
On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote: > > Eugen Block wrote: > : Hi Jan, > : > : I think you're running into an issue reported a couple of times. > : For the use of LVM you have to specify the name of the Volume Group > : and the respective Logical Volume instead of the path, e.g. >
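
The fix discussed here is to pass the LV as vg_name/lv_name rather than a filesystem path; a hedged example with made-up names (ssd_vg/db_sdb and /dev/sdb):

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ssd_vg/db_sdb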

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-12 Thread Alfredo Deza
On Tue, Dec 11, 2018 at 7:28 PM Tyler Bishop wrote: > > Now I'm just trying to figure out how to create filestore in Luminous. > I've read every doc and tried every flag but I keep ending up with > either a data LV of 100% on the VG or a bunch of random errors for > unsupported flags... An LV
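
A sketch of one way to lay out a filestore OSD with ceph-volume on Luminous, assuming an invented VG on /dev/sdc with separate data and journal LVs (sizes are illustrative); creating the journal LV first avoids the data LV consuming the whole VG:

    vgcreate osd_vg /dev/sdc
    lvcreate -n journal -L 10G osd_vg
    lvcreate -n data -l 100%FREE osd_vg
    ceph-volume lvm create --filestore --data osd_vg/data --journal osd_vg/journal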

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-12 Thread Alfredo Deza
On Tue, Dec 11, 2018 at 8:16 PM Mark Kirkwood wrote: > > Looks like the 'delaylog' option for xfs is the problem - no longer supported > in later kernels. See > https://github.com/torvalds/linux/commit/444a702231412e82fb1c09679adc159301e9242c > > Offhand I'm not sure where that option is being

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-05 Thread Alfredo Deza
On Tue, Dec 4, 2018 at 6:44 PM Matthew Pounsett wrote: > > > > On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote: >>> >>> >>> Is there a way we can easily set that up without trying to use outdated >>> tools? Presumably if ceph still supports this as the docs claim, there's a >>> way to get it

Re: [ceph-users] Fastest way to find raw device from OSD-ID? (osd -> lvm lv -> lvm pv -> disk)

2018-11-30 Thread Alfredo Deza
On Fri, Nov 30, 2018 at 3:10 PM Paul Emmerich wrote: > > Am Mo., 8. Okt. 2018 um 23:34 Uhr schrieb Alfredo Deza : > > > > On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote: > > > > > > ceph-volume unfortunately doesn't handle completely hanging IOs

Re: [ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty

2018-11-15 Thread Alfredo Deza
On Thu, Nov 15, 2018 at 8:57 AM Klimenko, Roman wrote: > > Hi everyone! > > As I noticed, ceph-volume lacks Ubuntu Trusty compatibility > https://tracker.ceph.com/issues/23496 > > So, I can't follow this instruction > http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/ > > Do

Re: [ceph-users] Unhelpful behaviour of ceph-volume lvm batch with >1 NVME card for block.db

2018-11-14 Thread Alfredo Deza
On Wed, Nov 14, 2018 at 9:10 AM Matthew Vernon wrote: > > Hi, > > We currently deploy our filestore OSDs with ceph-disk (via > ceph-ansible), and I was looking at using ceph-volume as we migrate to > bluestore. > > Our servers have 60 OSDs and 2 NVME cards; each OSD is made up of a > single hdd,
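
Before committing to a layout, batch can print what it would do; a sketch with placeholder device names (the real host would list all 60 HDDs plus the two NVMe cards):

    ceph-volume lvm batch --report /dev/sd[a-f] /dev/nvme0n1 /dev/nvme1n1   # shows the planned OSDs and block.db placement; nothing is created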

Re: [ceph-users] ceph 12.2.9 release

2018-11-08 Thread Alfredo Deza
On Thu, Nov 8, 2018 at 3:02 AM Janne Johansson wrote: > > Den ons 7 nov. 2018 kl 18:43 skrev David Turner : > > > > My big question is that we've had a few of these releases this year that > > are bugged and shouldn't be upgraded to... They don't have any release > > notes or announcement and

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Alfredo Deza
It is pretty difficult to know what step you are missing if all we are getting is the `activate --all` command. Maybe try them one by one, capturing each command throughout the process, along with its output. In the filestore-to-bluestore guides we never advertise `activate --all`, for example. Something is

Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Alfredo Deza
On Tue, Nov 6, 2018 at 8:41 AM Pavan, Krish wrote: > > Trying to create an OSD with multipath and dmcrypt and it failed. Any > suggestions please? ceph-disk is known to have issues like this. It is already deprecated in the Mimic release and will no longer be available for the upcoming release

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
r /dev/dm-* is expected, as that is created every time the system boots. > > > On Mon, Nov 5, 2018 at 4:14 PM, Alfredo Deza wrote: >> >> On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami >> wrote: >> > >> > WOW. With you two guiding me through every

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami wrote: > > WOW. With you two guiding me through every step, the 10 OSDs in question are > now added back to the cluster as Bluestore disks!!! Here are my responses to > the last email from Hector: > > 1. I first checked the permissions and they

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
On Mon, Nov 5, 2018 at 12:54 PM Hayashida, Mami wrote: > > I commented out those lines and, yes, I was able to restart the system and > all the Filestore OSDs are now running. But I still cannot start the converted > Bluestore OSD services. When I look up the log for osd.60, this is what I >

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
h1.device is trying to start, I suspect you have them in > /etc/fstab. You should have a look around /etc to see if you have any > stray references to those devices or old ceph-disk OSDs. > > On 11/6/18 1:37 AM, Hayashida, Mami wrote: > > Alright. Thanks -- I will try this now. >

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
said, if you want to do them one by one, then your initial command is fine. > > On Mon, Nov 5, 2018 at 11:31 AM, Alfredo Deza wrote: >> >> On Mon, Nov 5, 2018 at 11:24 AM Hayashida, Mami >> wrote: >> > >> > Thank you for all of your replies. Just to c

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
ce That will take care of OSD 60. This is fine if you want to do them one by one. To affect everything from ceph-disk, you would need to: ln -sf /dev/null /etc/systemd/system/ceph-disk@.service > >Then reboot? > > > On Mon, Nov 5, 2018 at 11:17 AM, Alfredo Deza wrote: >

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
gle one of the newly-converted Bluestore OSD disks >> (/dev/sd{h..q}1). This will happen with stale ceph-disk systemd units. You can disable those with: ln -sf /dev/null /etc/systemd/system/ceph-disk@.service >> >> >> -- >> >> On Mon, Nov 5, 2018 at 9:57 AM, Alfr

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Alfredo Deza
3.7T 0 disk > └─hdd65-data65 252:15 0 3.7T 0 lvm > sdn 8:208 0 3.7T 0 disk > └─hdd66-data66 252:16 0 3.7T 0 lvm > sdo 8:224 0 3.7T 0 disk > └─hdd67-data67 252:17 0 3.7T 0 lvm > sdp 8:240 0 3.7T 0 disk > └─hdd6

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Alfredo Deza
estion. Are there any changes I need to make to the ceph.conf > file? I did comment out this line that was probably used for creating > Filestore (using ceph-deploy): osd journal size = 40960 Since you've pre-created the LVs the commented out line will not affect anything. > > > > On

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Alfredo Deza
On Wed, Oct 31, 2018 at 5:22 AM Hector Martin wrote: > > On 31/10/2018 05:55, Hayashida, Mami wrote: > > I am relatively new to Ceph and need some advice on Bluestore migration. > > I tried migrating a few of our test cluster nodes from Filestore to > > Bluestore by following this > >

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Alfredo Deza
On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick wrote: > > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick > wrote: > > > > Hello, > > > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and > > running into difficulties getting the current stable release running. > > The versions in

Re: [ceph-users] Fastest way to find raw device from OSD-ID? (osd -> lvm lv -> lvm pv -> disk)

2018-10-08 Thread Alfredo Deza
On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote: > > ceph-volume unfortunately doesn't handle completely hanging IOs too > well compared to ceph-disk. Not sure I follow, would you mind expanding on what you mean by "ceph-volume unfortunately doesn't handle completely hanging IOs" ?
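
A sketch of walking the chain osd -> LV -> PV -> disk with standard tooling (no specific names assumed):

    ceph-volume lvm list                  # per OSD: the block LV, its VG, and the underlying device(s)
    lvs -o lv_name,vg_name,devices        # or from the LVM side: which PV backs each LV
    pvs -o pv_name,vg_name                # and which raw disk each PV sits on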

Re: [ceph-users] Fastest way to find raw device from OSD-ID? (osd -> lvm lv -> lvm pv -> disk)

2018-10-08 Thread Alfredo Deza
On Mon, Oct 8, 2018 at 6:09 AM Kevin Olbrich wrote: > > Hi! > > Yes, thank you. At least on one node this works, the other node just freezes > but this might by caused by a bad disk that I try to find. If it is freezing, you could maybe try running the command where it freezes? (ceph-volume

Re: [ceph-users] ceph-volume: recreate OSD with same ID after drive replacement

2018-10-03 Thread Alfredo Deza
to accommodate for that behavior: http://tracker.ceph.com/issues/36307 > > Andras > > On 10/3/18 11:41 AM, Alfredo Deza wrote: > > On Wed, Oct 3, 2018 at 11:23 AM Andras Pataki > > wrote: > >> Thanks - I didn't realize that was such a recent fix. > >

Re: [ceph-users] ceph-volume: recreate OSD with same ID after drive replacement

2018-10-03 Thread Alfredo Deza
t > do (compared to my removal procedure, osd crush remove, auth del, osd rm)? > > Thanks, > > Andras > > > On 10/3/18 10:36 AM, Alfredo Deza wrote: > > On Wed, Oct 3, 2018 at 9:57 AM Andras Pataki > > wrote: > >> After replacing failing drive I'd like

Re: [ceph-users] ceph-volume: recreate OSD with same ID after drive replacement

2018-10-03 Thread Alfredo Deza
On Wed, Oct 3, 2018 at 9:57 AM Andras Pataki wrote: > > After replacing failing drive I'd like to recreate the OSD with the same > osd-id using ceph-volume (now that we've moved to ceph-volume from > ceph-disk). However, I seem to not be successful. The command I'm using: > > ceph-volume lvm

Re: [ceph-users] mimic: 3/4 OSDs crashed on "bluefs enospc"

2018-10-02 Thread Alfredo Deza
On Tue, Oct 2, 2018 at 10:23 AM Alex Litvak wrote: > > Igor, > > Thank you for your reply. So what you are saying is that there are really no > sensible space requirements for a collocated device? Even if I set up 30 > GB for the DB (which I really wouldn't like to do due to space-waste > considerations)

Re: [ceph-users] ceph-ansible

2018-09-21 Thread Alfredo Deza
ersions were being packaged, is there something I've missed? The tags have changed format it seems, from 0.0.11 > > > > > On Thu, Sep 20, 2018 at 3:57 PM Alfredo Deza wrote: >> >> Not sure how you installed ceph-ansible, the requirements mention a >> version of

Re: [ceph-users] ceph-ansible

2018-09-20 Thread Alfredo Deza
Not sure how you installed ceph-ansible; the requirements mention a version of a dependency (the notario module) which needs to be 0.0.13 or newer, and you seem to be using an older one. On Thu, Sep 20, 2018 at 6:53 PM solarflow99 wrote: > > Hi, trying to get this to do a simple deployment, and

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Alfredo Deza
On Fri, Sep 7, 2018 at 3:31 PM, Maged Mokhtar wrote: > On 2018-09-07 14:36, Alfredo Deza wrote: > > On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid > wrote: > > Hi there > > Asking the questions as a newbie. May be asked a number of times before by > many but sorry

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Alfredo Deza
ld only benefit from WAL if you had another device, like an NVMe, where 2GB partitions (or LVs) could be created for block.wal > > On Fri, Sep 7, 2018 at 5:36 PM Alfredo Deza wrote: >> >> On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid >> wrote: >> > Hi there

Re: [ceph-users] WAL/DB size

2018-09-07 Thread Alfredo Deza
On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid wrote: > Hi there > > Asking the questions as a newbie. May be asked a number of times before by > many but sorry, it is not clear yet to me. > > 1. The WAL device is just like journaling device used before bluestore. And > CEPH confirms Write to

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-04 Thread Alfredo Deza
On Sun, Sep 2, 2018 at 3:01 PM, David Wahler wrote: > On Sun, Sep 2, 2018 at 1:31 PM Alfredo Deza wrote: >> >> On Sun, Sep 2, 2018 at 12:00 PM, David Wahler wrote: >> > Ah, ceph-volume.log pointed out the actual problem: >> > >> > RuntimeError: Cannot

Re: [ceph-users] SSD OSDs crashing after upgrade to 12.2.7

2018-09-04 Thread Alfredo Deza
have some time to check those logs now > > br > wolfgang > > On 2018-08-30 19:18, Alfredo Deza wrote: >> On Thu, Aug 30, 2018 at 5:24 AM, Wolfgang Lendl >> wrote: >>> Hi Alfredo, >>> >>> >>> caught some logs: >>> https:/

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-02 Thread Alfredo Deza
it confusing. I submitted a > PR to add a brief note to the quick-start guide, in case anyone else > makes the same mistake: https://github.com/ceph/ceph/pull/23879 > Thanks for the PR! > Thanks for the assistance! > > -- David > > On Sun, Sep 2, 2018 at 7:44 AM Alfredo Dez

Re: [ceph-users] Slow requests from bluestore osds

2018-09-02 Thread Alfredo Deza
On Sat, Sep 1, 2018 at 12:45 PM, Brett Chancellor wrote: > Hi Cephers, > I am in the process of upgrading a cluster from Filestore to bluestore, > but I'm concerned about frequent warnings popping up against the new > bluestore devices. I'm frequently seeing messages like this, although the >

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-02 Thread Alfredo Deza
There should be useful logs from ceph-volume in /var/log/ceph/ceph-volume.log that might show a bit more here. I would also try the command that fails directly on the server (sans ceph-deploy) to see what it is that is actually failing. Seems like the ceph-deploy log output is a bit out of order

Re: [ceph-users] SSD OSDs crashing after upgrade to 12.2.7

2018-08-30 Thread Alfredo Deza
On Thu, Aug 30, 2018 at 5:24 AM, Wolfgang Lendl wrote: > Hi Alfredo, > > > caught some logs: > https://pastebin.com/b3URiA7p That looks like there is an issue with bluestore. Maybe Radoslaw or Adam might know a bit more. > > br > wolfgang > > On 2018-08-29 15:51,

Re: [ceph-users] Error EINVAL: (22) Invalid argument While using ceph osd safe-to-destroy

2018-08-29 Thread Alfredo Deza
I am addressing the doc bug at https://github.com/ceph/ceph/pull/23801 On Mon, Aug 27, 2018 at 2:08 AM, Eugen Block wrote: > Hi, > > could you please paste your osd tree and the exact command you try to > execute? > >> Extra note, the while loop in the instructions look like it's bad. I had >>

Re: [ceph-users] SSD OSDs crashing after upgrade to 12.2.7

2018-08-29 Thread Alfredo Deza
On Wed, Aug 29, 2018 at 2:06 AM, Wolfgang Lendl wrote: > Hi, > > after upgrading my ceph clusters from 12.2.5 to 12.2.7 I'm experiencing > random crashes from SSD OSDs (bluestore) - it seems that HDD OSDs are not > affected. > I destroyed and recreated some of the SSD OSDs which seemed to

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 11:32 AM, Hervé Ballans wrote: > Le 23/08/2018 à 16:13, Alfredo Deza a écrit : > > What you mean is that, at this stage, I must directly declare the UUID paths > in value of --block.db (i.e. replace /dev/nvme0n1p1 with its PARTUUID), that > is ? > &g

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 9:56 AM, Hervé Ballans wrote: > Le 23/08/2018 à 15:20, Alfredo Deza a écrit : > > Thanks Alfredo for your reply. I'm using the very last version of Luminous > (12.2.7) and ceph-deploy (2.0.1). > I have no problem in creating my OSD, that's work perfectly.

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 9:12 AM, Hervé Ballans wrote: > Le 23/08/2018 à 12:51, Alfredo Deza a écrit : >> >> On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans >> wrote: >>> >>> Hello all, >>> >>> I would like to continue a thread that da

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans wrote: > Hello all, > > I would like to continue a thread that dates back to last May (sorry if this > is not a good practice ?..) > > Thanks David for your usefil tips on this thread. > In my side, I created my OSDs with ceph-deploy (in place of

Re: [ceph-users] BlueStore options in ceph.conf not being used

2018-08-22 Thread Alfredo Deza
On Wed, Aug 22, 2018 at 2:48 PM, David Turner wrote: > The config settings for DB and WAL size don't do anything. For journal > sizes they would be used for creating your journal partition with ceph-disk, > but ceph-volume does not use them for creating bluestore OSDs. You need to > create the

Re: [ceph-users] Mimic osd fails to start.

2018-08-20 Thread Alfredo Deza
t-device-class", >> "class": "ssd", "ids": ["48"]}]=-22 (22) Invalid argument v46327) v1 >> to unknown.0 - >> 2018-08-20 08:57:58.785 7f9d85934700 10 mon.mon02@1(peon) e4 >> ms_handle_reset 0x55b4ecf4b200 10.24.52.17:6800/153683 >&g

Re: [ceph-users] Mimic osd fails to start.

2018-08-18 Thread Alfredo Deza
On Fri, Aug 17, 2018 at 7:05 PM, Daznis wrote: > Hello, > > > I have replaced one of our failed OSD drives and recreated a new osd > with ceph-deploy and it fails to start. Is it possible you haven't zapped the journal on nvme0n1p13? > > Command: ceph-deploy --overwrite-conf osd create

Re: [ceph-users] BlueStore upgrade steps broken

2018-08-17 Thread Alfredo Deza
y "raw partition" you mean an actual partition or a raw device > > On Fri, Aug 17, 2018 at 2:54 PM Alfredo Deza wrote: >> >> On Fri, Aug 17, 2018 at 10:24 AM, Robert Stanford >> wrote: >> > >> > I was using the ceph-volume create command, whic

Re: [ceph-users] BlueStore upgrade steps broken

2018-08-17 Thread Alfredo Deza
the prepare and activate functions. >>> >>> ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db >>> /dev/sdb --block.wal /dev/sdb >>> >>> That is the command context I've found on the web. Is it wrong? >>> >>>

Re: [ceph-users] BlueStore upgrade steps broken

2018-08-17 Thread Alfredo Deza
ceph-volume will not do this for you. And then you can pass those newly created LVs like: ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db sdb-vg/block-lv --block.wal sdb-vg/wal-lv > > Thanks > R > > On Fri, Aug 17, 2018 at 5:55 AM Alfredo Deza wrote

Re: [ceph-users] A few questions about using SSD for bluestore journal

2018-08-17 Thread Alfredo Deza
On Thu, Aug 16, 2018 at 4:44 PM, Cody wrote: > Hi everyone, > > As a newbie, I have some questions about using SSD as the Bluestore > journal device. > > 1. Is there a formula to calculate the optimal size of partitions on > the SSD for each OSD, given their capacity and IO performance? Or is >

Re: [ceph-users] BlueStore upgrade steps broken

2018-08-17 Thread Alfredo Deza
On Thu, Aug 16, 2018 at 9:00 PM, Robert Stanford wrote: > > I am following the steps to my filestore journal with a bluestore journal > (http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/). It > is broken at ceph-volume lvm create. Here is my error: > > --> Zapping

Re: [ceph-users] Optane 900P device class automatically set to SSD not NVME

2018-08-13 Thread Alfredo Deza
On Wed, Aug 1, 2018 at 4:33 AM, Jake Grimmett wrote: > Dear All, > > Not sure if this is a bug, but when I add Intel Optane 900P drives, > their device class is automatically set to SSD rather than NVME. Not sure that we can really tell SSDs apart from NVMe devices, but you can use the
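
If the auto-detected class is wrong for your purposes, it can be overridden per OSD; a sketch with a made-up osd.48:

    ceph osd crush rm-device-class osd.48            # clear the existing class first; set-device-class refuses to overwrite one
    ceph osd crush set-device-class nvme osd.48      # then assign the class you want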

Re: [ceph-users] ceph lvm question

2018-07-30 Thread Alfredo Deza
could be wrong, but this one looks like it might be an issue > with\nmissing quotes. Always quote template expression brackets when > they\nstart a value. For instance:\n\nwith_items:\n - {{ foo > }}\n\nShould be written as:\n\nwith_items:\n - \"{{ foo > }}\"

Re: [ceph-users] ceph lvm question

2018-07-30 Thread Alfredo Deza
On Sat, Jul 28, 2018 at 12:44 AM, Satish Patel wrote: > I have a simple question: I want to use LVM with bluestore (it's the > recommended method). If I have only a single SSD disk for an OSD and want > to keep the journal + data on the same disk, how should I create > the LVM to accommodate that? bluestore
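
A minimal sketch for the single-disk case, assuming a hypothetical /dev/sdb: bluestore has no separate journal, and unless block.db/block.wal are pointed at another device they simply live on the same disk, so ceph-volume can create the VG/LV by itself:

    ceph-volume lvm create --bluestore --data /dev/sdb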
