Re: [ceph-users] udev rule or script to auto add bcache devices?
Hi Stefan,

Quoting Stefan Priebe - Profihost AG:
> Hello,
> bcache didn't support partitions in the past, so a lot of our OSDs have
> their data directly on /dev/bcache[0-9]. But that means I can't give
> them the needed part type of 4fbd7e29-9d25-41b8-afd0-062c0ceff05d,
> which means that activation with udev and ceph-disk does not work.
> Has anybody already fixed this or hacked something together?

we had this running for Filestore OSDs for quite some time (on Luminous and before), but have recently moved on to BlueStore, omitting bcache and instead putting block.db on partitions of the SSD devices (or rather, partitions on an MD-RAID1 made out of two Toshiba PX02SMF020). We simply mounted the OSD file systems via label at boot time per fstab entries and had the OSDs started via systemd.

In case this matters: for historic reasons, the actual mount point wasn't in /var/lib/ceph/osd, but a different directory, with corresponding symlinks set up under /var/lib/ceph/osd/.

How many OSDs do you run per bcache SSD caching device? Even at just 4:1 we ran into I/O bottlenecks (using the above MD-RAID1 as the caching device), hence the move to BlueStore. The same hardware now provides a much more responsive storage subsystem, which of course may be very specific to our workload and setup.

Regards
Jens
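A minimal sketch of the label-based setup Jens describes (labels, mount points and OSD IDs are made-up examples, not taken from the thread):

    # /etc/fstab -- mount OSD data file systems by label at boot
    LABEL=osd.0  /srv/ceph/osd.0  xfs  defaults,noatime  0 0
    LABEL=osd.1  /srv/ceph/osd.1  xfs  defaults,noatime  0 0

    # symlinks so Ceph finds the OSDs in the expected location
    ln -s /srv/ceph/osd.0 /var/lib/ceph/osd/ceph-0
    ln -s /srv/ceph/osd.1 /var/lib/ceph/osd/ceph-1

The labels themselves would be set once on each XFS file system, e.g. with "xfs_admin -L osd.0 /dev/bcache0".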
Re: [ceph-users] udev rule or script to auto add bcache devices?
On Mon, Jan 22, 2018 at 1:37 AM, Wido den Hollander wrote:
>
> On 01/20/2018 07:56 PM, Stefan Priebe - Profihost AG wrote:
>>
>> Hello,
>>
>> bcache didn't support partitions in the past, so a lot of our OSDs
>> have their data directly on:
>> /dev/bcache[0-9]
>>
>> But that means I can't give them the needed part type of
>> 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, which means that activation
>> with udev and ceph-disk does not work.
>>
>> Has anybody already fixed this or hacked something together?

Like Wido mentioned, if you are using Luminous, you can do this easily with ceph-volume. There is no need to force partitions on anything or set labels to get recognized by udev. Note that ceph-volume doesn't support encryption yet, although that work is almost complete and should be available soon.

>
> Not really. But with ceph-volume around the corner, isn't that something
> that might work? It doesn't use udev anymore.
>
> You need to run Luminous though.
>
> Wido
>
>> Greets,
>> Stefan
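To make the ceph-volume route concrete, a short example (the device path is just an illustration; this is the generic Luminous command, not something posted in the thread):

    # create a BlueStore OSD directly on a whole bcache device;
    # ceph-volume wraps it in LVM and stores its own metadata there,
    # so no GPT partition type GUID is required
    ceph-volume lvm create --bluestore --data /dev/bcache0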
Re: [ceph-users] udev rule or script to auto add bcache devices?
On 01/20/2018 07:56 PM, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> bcache didn't support partitions in the past, so a lot of our OSDs
> have their data directly on:
> /dev/bcache[0-9]
>
> But that means I can't give them the needed part type of
> 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, which means that activation
> with udev and ceph-disk does not work.
>
> Has anybody already fixed this or hacked something together?

Not really. But with ceph-volume around the corner, isn't that something that might work? It doesn't use udev anymore.

You need to run Luminous though.

Wido

> Greets,
> Stefan
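For illustration of what "doesn't use udev anymore" means in practice: ceph-volume activates OSDs through systemd units rather than udev-triggered ceph-disk (the OSD id and fsid below are placeholders):

    # activate a single OSD by id and fsid, or everything at once at boot
    ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
    ceph-volume lvm activate --all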
[ceph-users] udev rule or script to auto add bcache devices?
Hello,

bcache didn't support partitions in the past, so a lot of our OSDs have their data directly on:
/dev/bcache[0-9]

But that means I can't give them the needed part type of 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, which means that activation with udev and ceph-disk does not work.

Has anybody already fixed this or hacked something together?

Greets,
Stefan
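One conceivable workaround along the lines Stefan asks about, shown purely as an untested sketch (this rule is hypothetical, not from the thread): a custom udev rule that skips the GPT part-type check and calls ceph-disk directly on whole bcache devices.

    # /etc/udev/rules.d/99-ceph-bcache.rules -- hypothetical, untested
    # activate OSDs that live directly on whole bcache devices
    ACTION=="add", KERNEL=="bcache[0-9]*", RUN+="/usr/sbin/ceph-disk activate /dev/%k"

Since udev kills long-running RUN programs, a real deployment would likely hand the activation off to a short-lived systemd unit (e.g. via systemd-run) rather than running ceph-disk inside the rule itself.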