On Wed, Sep 11, 2019 at 6:18 AM Matthew Vernon wrote:
>
> Hi,
>
> We keep finding part-made OSDs (they appear not attached to any host,
> and down and out; but still counting towards the number of OSDs); we
> never saw this with ceph-disk. On investigation, this is because
> ceph-volume lvm
>
> On Wed, Jul 24, 2019 at 1:24 PM Alfredo Deza wrote:
>
>>
>>
>> On Wed, Jul 24, 2019 at 4:15 PM Peter Eisch
>> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> I appreciate the insistency that the directions be followed. I who
On Wed, Jul 24, 2019 at 2:56 PM Peter Eisch
wrote:
> Hi Paul,
>
> To better answer your question, I'm following:
> http://docs.ceph.com/docs/nautilus/releases/nautilus/
>
> At step 6, upgrade OSDs, I jumped on an OSD host and did a full 'yum
> update' for patching the host and rebooted to
On Fri, Jul 5, 2019 at 6:23 AM ST Wong (ITSC) wrote:
>
> Hi,
>
>
>
> I tried just destroying and re-using the ID as stated in the manual, but it
> doesn't seem to work.
>
> It seems I'm unable to re-use the ID?
The OSD replacement guide does not mention anything about crush and
auth commands. I believe
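For what it's worth, the documented replacement flow relies on `ceph osd destroy`, which keeps the OSD id and its CRUSH position (while revoking the old cephx key), so the usual `osd crush remove` / `auth del` / `osd rm` trio should not be needed. A rough sketch, with the OSD id and device as placeholders:

```shell
# Sketch of the documented replace-an-OSD flow; id 12 and /dev/sdx are placeholders.
ceph osd destroy 12 --yes-i-really-mean-it           # keeps the id and CRUSH entry
ceph-volume lvm zap /dev/sdx                         # wipe the replacement disk
ceph-volume lvm create --osd-id 12 --data /dev/sdx   # re-create, re-using the id
```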
>
>
> Am 27.06.19, 15:09 schrieb "Alfredo Deza" :
>
> Although ceph-volume does a best-effort to support custom cluster
> names,
On Thu, Jun 27, 2019 at 10:36 AM ☣Adam wrote:
> Well that caused some excitement (either that or the small power
> disruption did)! One of my OSDs is now down because it keeps crashing
> due to a failed assert (stacktraces attached, also I'm apparently
> running mimic, not luminous).
>
> In the
Although ceph-volume does a best-effort to support custom cluster
names, the Ceph project does not support custom cluster names anymore
even though you can still see settings/options that will allow you to
set it.
For reference see: https://bugzilla.redhat.com/show_bug.cgi?id=1459861
On Thu, Jun
On Mon, Jun 17, 2019 at 4:09 PM David Turner wrote:
>
> This was a little long to respond with on Twitter, so I thought I'd share my
> thoughts here. I love the idea of a 12 month cadence. I like October because
> admins aren't upgrading production within the first few months of a new
>
On Tue, May 14, 2019 at 7:24 PM Bob R wrote:
>
> Does 'ceph-volume lvm list' show it? If so you can try to activate it with
> 'ceph-volume lvm activate 122 74b01ec2--124d--427d--9812--e437f90261d4'
Good suggestion. If `ceph-volume lvm list` can see it, it can probably
activate it again. You can
On Mon, May 13, 2019 at 6:56 PM wrote:
>
> All;
>
> I'm working on spinning up a demonstration cluster using ceph, and yes, I'm
> installing it manually, for the purpose of learning.
>
> I can't seem to correctly create an OSD, as ceph-volume seems to only work if
> the cluster name is the
n configuration file?
This might be a good example of why I am recommending against it:
tools will probably not support it. I don't think you can make
ceph-ansible do this unless you are pre-creating the LVs, which, if
you're using Ansible, shouldn't be too hard anyway.
>
> Best regards,
>
> On
On Fri, May 10, 2019 at 2:43 PM Lazuardi Nasution
wrote:
>
> Hi,
>
> Let's say I have following devices on a host.
>
> /dev/sda
> /dev/sdb
> /dev/nvme0n1
>
> How can I do a ceph-volume batch which creates bluestore OSDs on the HDDs and the NVMe
> (divided into 4 OSDs), and puts the block.db of the HDDs on the NVMe
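As a hedged sketch (behavior and flags vary by release, so check `ceph-volume lvm batch --help` on your version): a single mixed-type batch call places data on the HDDs and block.db on the fast device, while splitting a device into several OSDs is a separate invocation; as far as I know, batch does not do both with the same single NVMe in one go.

```shell
# Sketch only; verify flags against your release.
# Mixed HDD+NVMe: data goes on the HDDs, block.db on the NVMe:
ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1
# Splitting a device into 4 standalone OSDs is a separate call:
ceph-volume lvm batch --bluestore --osds-per-device 4 /dev/nvme0n1
```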
On Thu, May 2, 2019 at 8:28 AM Robert Sander
wrote:
>
> Hi,
>
> On 02.05.19 13:40, Alfredo Deza wrote:
>
> > Can you give a bit more details on the environment? How dense is the
> > server? If the unit retries is fine and I was hoping at some point it
> >
On Thu, May 2, 2019 at 5:27 AM Robert Sander
wrote:
>
> Hi,
>
> The ceph-volume@.service units on an Ubuntu 18.04.2 system
> run unlimited and do not finish.
>
> Only after we create this override config the system boots again:
>
> # /etc/systemd/system/ceph-volume@.service.d/override.conf
>
nger exists. Do you have output on how it failed before?
>
>
> On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza wrote:
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote:
>
> Hello,
> I have a server with 18 disks, and 17 OSD daemons configured. One of the OSD
> daemons failed to deploy with ceph-deploy. The reason for failing is
> unimportant at this point, I believe it was race condition, as I was running
On Thu, Apr 11, 2019 at 4:23 PM Yury Shevchuk wrote:
>
> Hi Igor!
>
> I have upgraded from Luminous to Nautilus and now slow device
> expansion works indeed. The steps are shown below to round up the
> topic.
>
> node2# ceph osd df
> ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META
On Tue, Mar 19, 2019 at 2:53 PM Benjamin Cherian
wrote:
>
> Hi,
>
> I'm getting an error when trying to use the APT repo for Ubuntu bionic. Does
> anyone else have this issue? Is the mirror sync actually still in progress?
> Or was something setup incorrectly?
>
> E: Failed to fetch
>
There aren't any Debian packages built for this release because we
haven't updated the infrastructure to build (and test) Debian packages
yet.
On Tue, Mar 19, 2019 at 10:24 AM Sean Purdy wrote:
>
> Hi,
>
>
> Will debian packages be released? I don't see them in the nautilus repo. I
> thought
On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote:
>
> Hi all,
>
> We've just hit our first OSD replacement on a host created with
> `ceph-volume lvm batch` with mixed hdds+ssds.
>
> The hdd /dev/sdq was prepared like this:
># ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes
>
> Then
On Fri, Feb 22, 2019 at 9:38 AM Marco Gaiarin wrote:
>
> Mandi! Alfredo Deza
> In chel di` si favelave...
>
> > The problem is that if there is no PARTUUID ceph-volume can't ensure
> > what device is the one actually pointing to data/journal. Being 'GPT'
> >
what device is the one actually pointing to data/journal. Being 'GPT'
alone will not be enough here :(
> ср, 20 февр. 2019 г. в 17:11, Alfredo Deza :
>>
>> On Wed, Feb 20, 2019 at 8:40 AM Анатолий Фуников
>> wrote:
>> >
>> > Thanks for the reply.
>> > bl
On Wed, Feb 20, 2019 at 10:21 AM Marco Gaiarin wrote:
>
> Mandi! Alfredo Deza
> In chel di` si favelave...
>
> > I think this is what happens with a non-gpt partition. GPT labels will
> > use a PARTUUID to identify the partition, and I just confirmed that
> > ce
partition without losing data.
My suggestion (if you confirm it is not possible to add the GPT label)
is to start the migration towards the new way of creating OSDs
>
> ср, 20 февр. 2019 г. в 16:27, Alfredo Deza :
>>
>> On Wed, Feb 20, 2019 at 8:16 AM Анатолий Фуников
>>
On Wed, Feb 20, 2019 at 8:16 AM Анатолий Фуников
wrote:
>
> Hello. I need to bring the OSDs on a node back up after reinstalling the OS; some
> OSDs were created a long time ago, not even with ceph-disk, but with a set of scripts.
> There was an idea to get their configuration in json via ceph-volume simple
>
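For reference, the `ceph-volume simple` flow from the docs looks like the sketch below (the data path is illustrative); whether `scan` can parse OSDs created by custom scripts rather than ceph-disk is exactly the open question here.

```shell
# Sketch; /var/lib/ceph/osd/ceph-12 is an illustrative path.
ceph-volume simple scan /var/lib/ceph/osd/ceph-12   # writes a JSON description under /etc/ceph/osd/
ceph-volume simple activate --all                   # activates everything previously scanned
```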
On Mon, Feb 18, 2019 at 2:46 AM Rainer Krienke wrote:
>
> Hello,
>
> thanks for your answer, but zapping the disk did not make any
> difference. I still get the same error. Looking at the debug output I
> found this error message that is probably the root of all trouble:
>
> # ceph-volume lvm
On Mon, Feb 4, 2019 at 4:43 AM Hector Martin wrote:
>
> On 02/02/2019 05:07, Stuart Longland wrote:
> > On 1/2/19 10:43 pm, Alfredo Deza wrote:
> >>> The tmpfs setup is expected. All persistent data for bluestore OSDs
> >>> setup with LVM are stored i
On Fri, Feb 1, 2019 at 6:07 PM Shain Miley wrote:
>
> Hi,
>
> I went to replace a disk today (which I had not had to do in a while)
> and after I added it the results looked rather odd compared to times past:
>
> I was attempting to replace /dev/sdk on one of our osd nodes:
>
> #ceph-deploy disk
On Fri, Feb 1, 2019 at 6:35 PM Vladimir Prokofev wrote:
>
> Your output looks a bit weird, but still, this is normal for bluestore. It
> creates a small separate data partition that is presented as XFS mounted in
> /var/lib/ceph/osd, while the real data partition is hidden as a raw (bluestore)
> block
On Fri, Feb 1, 2019 at 3:08 PM Stuart Longland
wrote:
>
> On 1/2/19 10:43 pm, Alfredo Deza wrote:
> >>> I think mounting tmpfs for something that should be persistent is highly
> >>> dangerous. Is there some flag I should be using when creating the
> >
On Fri, Feb 1, 2019 at 6:28 AM Burkhard Linke
wrote:
>
> Hi,
>
> On 2/1/19 11:40 AM, Stuart Longland wrote:
> > Hi all,
> >
> > I'm just in the process of migrating my 3-node Ceph cluster from
> > BTRFS-backed Filestore over to Bluestore.
> >
> > Last weekend I did this with my first node, and
On Thu, Jan 24, 2019 at 4:13 PM mlausch wrote:
>
>
>
> Am 24.01.19 um 22:02 schrieb Alfredo Deza:
> >>
> >> Ok with a new empty journal the OSD will not start. I have now rescued
> >> the data with dd and the recrypt it with a other key and copied
On Thu, Jan 24, 2019 at 3:17 PM Manuel Lausch wrote:
>
>
>
> On Wed, 23 Jan 2019 16:32:08 +0100
> Manuel Lausch wrote:
>
>
> > >
> > > The key api for encryption is *very* odd and a lot of its quirks are
> > > undocumented. For example, ceph-volume is stuck supporting naming
> > > files and keys
On Wed, Jan 23, 2019 at 11:03 AM Dietmar Rieder
wrote:
>
> On 1/23/19 3:05 PM, Alfredo Deza wrote:
> > On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote:
> >>
> >> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote:
> >>> Hi,
> >>>
more of a way to keep existing
ceph-disk OSDs and create new ceph-volume OSDs, which you can, as long
as this is not Nautilus or newer where ceph-disk doesn't exist
> I'm sure there's a way to get them running again, but I imagine you'd rather
> not
> manually deal with that.
> >
>
deployed with ceph-disk.
>
> Regards
> Manuel
>
>
> On Tue, 22 Jan 2019 07:44:02 -0500
> Alfredo Deza wrote:
>
>
> > This is one case we didn't anticipate :/ We supported the wonky
> > lockbox setup and thought we wouldn't need to go further back,
> > al
On Tue, Jan 22, 2019 at 6:45 AM Manuel Lausch wrote:
>
> Hi,
>
> we want upgrade our ceph clusters from jewel to luminous. And also want
> to migrate the osds to ceph-volume described in
> http://docs.ceph.com/docs/luminous/ceph-volume/simple/scan/#ceph-volume-simple-scan
>
> The clusters are
On Sun, Jan 20, 2019 at 11:30 PM Brian Topping wrote:
>
> Hi all, looks like I might have pooched something. Between the two nodes I
> have, I moved all the PGs to one machine, reformatted the other machine,
> rebuilt that machine, and moved the PGs back. In both cases, I did this by
> taking
On Fri, Jan 18, 2019 at 10:07 AM Jan Kasprzak wrote:
>
> Alfredo,
>
> Alfredo Deza wrote:
> : On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote:
> : > Eugen Block wrote:
> : > :
> : > : I think you're running into an issue reported a couple of times.
On Fri, Jan 18, 2019 at 7:07 AM Hector Martin wrote:
>
> On 17/01/2019 00:45, Sage Weil wrote:
> > Hi everyone,
> >
> > This has come up several times before, but we need to make a final
> > decision. Alfredo has a PR prepared that drops Python 2 support entirely
> > in master, which will mean
On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote:
>
> Eugen Block wrote:
> : Hi Jan,
> :
> : I think you're running into an issue reported a couple of times.
> : For the use of LVM you have to specify the name of the Volume Group
> : and the respective Logical Volume instead of the path, e.g.
>
On Tue, Dec 11, 2018 at 7:28 PM Tyler Bishop
wrote:
>
> Now I'm just trying to figure out how to create filestore in Luminous.
> I've read every doc and tried every flag but I keep ending up with
> either a data LV of 100% on the VG or a bunch of random errors for
> unsupported flags...
An LV
On Tue, Dec 11, 2018 at 8:16 PM Mark Kirkwood
wrote:
>
> Looks like the 'delaylog' option for xfs is the problem - no longer supported
> in later kernels. See
> https://github.com/torvalds/linux/commit/444a702231412e82fb1c09679adc159301e9242c
>
> Offhand I'm not sure where that option is being
On Tue, Dec 4, 2018 at 6:44 PM Matthew Pounsett wrote:
>
>
>
> On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote:
>>>
>>>
>>> Is there a way we can easily set that up without trying to use outdated
>>> tools? Presumably if ceph still supports this as the docs claim, there's a
>>> way to get it
On Fri, Nov 30, 2018 at 3:10 PM Paul Emmerich wrote:
>
> Am Mo., 8. Okt. 2018 um 23:34 Uhr schrieb Alfredo Deza :
> >
> > On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote:
> > >
> > > ceph-volume unfortunately doesn't handle completely hanging IOs
On Thu, Nov 15, 2018 at 8:57 AM Klimenko, Roman wrote:
>
> Hi everyone!
>
> As I noticed, ceph-volume lacks Ubuntu Trusty compatibility
> https://tracker.ceph.com/issues/23496
>
> So, I can't follow this instruction
> http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
>
> Do
On Wed, Nov 14, 2018 at 9:10 AM Matthew Vernon wrote:
>
> Hi,
>
> We currently deploy our filestore OSDs with ceph-disk (via
> ceph-ansible), and I was looking at using ceph-volume as we migrate to
> bluestore.
>
> Our servers have 60 OSDs and 2 NVME cards; each OSD is made up of a
> single hdd,
On Thu, Nov 8, 2018 at 3:02 AM Janne Johansson wrote:
>
> Den ons 7 nov. 2018 kl 18:43 skrev David Turner :
> >
> > My big question is that we've had a few of these releases this year that
> > are bugged and shouldn't be upgraded to... They don't have any release
> > notes or announcement and
It is pretty difficult to know what step you are missing if all we are
getting is the `activate --all` command.
Maybe try them one by one, capturing each command throughout the
process, with its output. In the filestore-to-bluestore guides we never
advertise `activate --all`, for example.
Something is
On Tue, Nov 6, 2018 at 8:41 AM Pavan, Krish wrote:
>
> Trying to create an OSD on multipath with dmcrypt, and it failed. Any
> suggestions please?
ceph-disk is known to have issues like this. It is already deprecated
in the Mimic release and will no longer be available for the upcoming
release
r /dev/dm-* is expected, as that is created
every time the system boots.
>
>
> On Mon, Nov 5, 2018 at 4:14 PM, Alfredo Deza wrote:
>>
>> On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami
>> wrote:
>> >
>> > WOW. With you two guiding me through every
On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami wrote:
>
> WOW. With you two guiding me through every step, the 10 OSDs in question are
> now added back to the cluster as Bluestore disks!!! Here are my responses to
> the last email from Hector:
>
> 1. I first checked the permissions and they
On Mon, Nov 5, 2018 at 12:54 PM Hayashida, Mami wrote:
>
> I commented out those lines and, yes, I was able to restart the system and
> all the Filestore OSDs are now running. But I cannot start the converted
> Bluestore OSDs (service). When I look up the log for osd.60, this is what I
>
h1.device is trying to start, I suspect you have them in
> /etc/fstab. You should have a look around /etc to see if you have any
> stray references to those devices or old ceph-disk OSDs.
>
> On 11/6/18 1:37 AM, Hayashida, Mami wrote:
> > Alright. Thanks -- I will try this now.
>
said, if you want to do them one by one, then your
initial command is fine.
>
> On Mon, Nov 5, 2018 at 11:31 AM, Alfredo Deza wrote:
>>
>> On Mon, Nov 5, 2018 at 11:24 AM Hayashida, Mami
>> wrote:
>> >
>> > Thank you for all of your replies. Just to c
That will take care of OSD 60. This is fine if you want to do them one
by one. To affect everything from ceph-disk, you would need to:
ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
>
>Then reboot?
>
>
> On Mon, Nov 5, 2018 at 11:17 AM, Alfredo Deza wrote:
>
gle one of the newly-converted Bluestore OSD disks
>> (/dev/sd{h..q}1).
This will happen with stale ceph-disk systemd units. You can disable those with:
ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
>>
>>
>> --
>>
>> On Mon, Nov 5, 2018 at 9:57 AM, Alfr
3.7T 0 disk
> └─hdd65-data65 252:15 0 3.7T 0 lvm
> sdn 8:208 0 3.7T 0 disk
> └─hdd66-data66 252:16 0 3.7T 0 lvm
> sdo 8:224 0 3.7T 0 disk
> └─hdd67-data67 252:17 0 3.7T 0 lvm
> sdp 8:240 0 3.7T 0 disk
> └─hdd6
question: Are there any changes I need to make to the ceph.conf
> file? I did comment out this line that was probably used for creating
> Filestore (using ceph-deploy): osd journal size = 40960
Since you've pre-created the LVs the commented out line will not
affect anything.
>
>
>
> On
On Wed, Oct 31, 2018 at 5:22 AM Hector Martin wrote:
>
> On 31/10/2018 05:55, Hayashida, Mami wrote:
> > I am relatively new to Ceph and need some advice on Bluestore migration.
> > I tried migrating a few of our test cluster nodes from Filestore to
> > Bluestore by following this
> >
On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
wrote:
>
> On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> wrote:
> >
> > Hello,
> >
> > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> > running into difficulties getting the current stable release running.
> > The versions in
On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote:
>
> ceph-volume unfortunately doesn't handle completely hanging IOs too
> well compared to ceph-disk.
Not sure I follow, would you mind expanding on what you mean by
"ceph-volume unfortunately doesn't handle completely hanging IOs" ?
On Mon, Oct 8, 2018 at 6:09 AM Kevin Olbrich wrote:
>
> Hi!
>
> Yes, thank you. At least on one node this works, the other node just freezes
> but this might by caused by a bad disk that I try to find.
If it is freezing, you could maybe try running the command where it
freezes? (ceph-volume
to accommodate for that behavior:
http://tracker.ceph.com/issues/36307
>
> Andras
>
> On 10/3/18 11:41 AM, Alfredo Deza wrote:
> > On Wed, Oct 3, 2018 at 11:23 AM Andras Pataki
> > wrote:
> >> Thanks - I didn't realize that was such a recent fix.
> >
> do (compared to my removal procedure, osd crush remove, auth del, osd rm)?
>
> Thanks,
>
> Andras
>
>
> On 10/3/18 10:36 AM, Alfredo Deza wrote:
> > On Wed, Oct 3, 2018 at 9:57 AM Andras Pataki
> > wrote:
> >> After replacing failing drive I'd like
On Wed, Oct 3, 2018 at 9:57 AM Andras Pataki
wrote:
>
> After replacing failing drive I'd like to recreate the OSD with the same
> osd-id using ceph-volume (now that we've moved to ceph-volume from
> ceph-disk). However, I seem to not be successful. The command I'm using:
>
> ceph-volume lvm
On Tue, Oct 2, 2018 at 10:23 AM Alex Litvak
wrote:
>
> Igor,
>
> Thank you for your reply. So what you are saying there are really no
> sensible space requirements for a collocated device? Even if I setup 30
> GB for DB (which I really wouldn't like to do due to a space waste
> considerations )
versions were being packaged, is there
something I've missed? The tags have changed format it seems, from
0.0.11
>
>
>
>
> On Thu, Sep 20, 2018 at 3:57 PM Alfredo Deza wrote:
>>
>> Not sure how you installed ceph-ansible, the requirements mention a
>> version of
Not sure how you installed ceph-ansible, the requirements mention a
version of a dependency (the notario module) which needs to be 0.0.13
or newer, and you seem to be using an older one.
On Thu, Sep 20, 2018 at 6:53 PM solarflow99 wrote:
>
> Hi, trying to get this to do a simple deployment, and
On Fri, Sep 7, 2018 at 3:31 PM, Maged Mokhtar wrote:
> On 2018-09-07 14:36, Alfredo Deza wrote:
>
> On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid
> wrote:
>
> Hi there
>
> Asking the questions as a newbie. May be asked a number of times before by
> many but sorry
would only benefit
from WAL if you had another device, like an NVMe, where 2GB partitions
(or LVs) could be created for block.wal
>
> On Fri, Sep 7, 2018 at 5:36 PM Alfredo Deza wrote:
>>
>> On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid
>> wrote:
>> > Hi there
On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid wrote:
> Hi there
>
> Asking the questions as a newbie. May be asked a number of times before by
> many but sorry, it is not clear yet to me.
>
> 1. The WAL device is just like journaling device used before bluestore. And
> CEPH confirms Write to
On Sun, Sep 2, 2018 at 3:01 PM, David Wahler wrote:
> On Sun, Sep 2, 2018 at 1:31 PM Alfredo Deza wrote:
>>
>> On Sun, Sep 2, 2018 at 12:00 PM, David Wahler wrote:
>> > Ah, ceph-volume.log pointed out the actual problem:
>> >
>> > RuntimeError: Cannot
have some
time to check those logs now
>
> br
> wolfgang
>
> On 2018-08-30 19:18, Alfredo Deza wrote:
>> On Thu, Aug 30, 2018 at 5:24 AM, Wolfgang Lendl
>> wrote:
>>> Hi Alfredo,
>>>
>>>
>>> caught some logs:
>>> https:/
it confusing. I submitted a
> PR to add a brief note to the quick-start guide, in case anyone else
> makes the same mistake: https://github.com/ceph/ceph/pull/23879
>
Thanks for the PR!
> Thanks for the assistance!
>
> -- David
>
> On Sun, Sep 2, 2018 at 7:44 AM Alfredo Dez
On Sat, Sep 1, 2018 at 12:45 PM, Brett Chancellor
wrote:
> Hi Cephers,
> I am in the process of upgrading a cluster from Filestore to bluestore,
> but I'm concerned about frequent warnings popping up against the new
> bluestore devices. I'm frequently seeing messages like this, although the
>
There should be useful logs from ceph-volume in
/var/log/ceph/ceph-volume.log that might show a bit more here.
I would also try running the command that fails directly on the server
(sans ceph-deploy) to see what it is that is actually failing. It seems
the ceph-deploy log output is a bit out of order
On Thu, Aug 30, 2018 at 5:24 AM, Wolfgang Lendl
wrote:
> Hi Alfredo,
>
>
> caught some logs:
> https://pastebin.com/b3URiA7p
That looks like there is an issue with bluestore. Maybe Radoslaw or
Adam might know a bit more.
>
> br
> wolfgang
>
> On 2018-08-29 15:51,
I am addressing the doc bug at https://github.com/ceph/ceph/pull/23801
On Mon, Aug 27, 2018 at 2:08 AM, Eugen Block wrote:
> Hi,
>
> could you please paste your osd tree and the exact command you try to
> execute?
>
>> Extra note, the while loop in the instructions looks like it's bad. I had
>>
On Wed, Aug 29, 2018 at 2:06 AM, Wolfgang Lendl
wrote:
> Hi,
>
> after upgrading my ceph clusters from 12.2.5 to 12.2.7 I'm experiencing
> random crashes from SSD OSDs (bluestore) - it seems that HDD OSDs are not
> affected.
> I destroyed and recreated some of the SSD OSDs which seemed to
On Thu, Aug 23, 2018 at 11:32 AM, Hervé Ballans
wrote:
> Le 23/08/2018 à 16:13, Alfredo Deza a écrit :
>
> What you mean is that, at this stage, I must directly declare the UUID paths
> in value of --block.db (i.e. replace /dev/nvme0n1p1 with its PARTUUID), that
> is ?
>
On Thu, Aug 23, 2018 at 9:56 AM, Hervé Ballans
wrote:
> Le 23/08/2018 à 15:20, Alfredo Deza a écrit :
>
> Thanks Alfredo for your reply. I'm using the very last version of Luminous
> (12.2.7) and ceph-deploy (2.0.1).
> I have no problem in creating my OSD, that's work perfectly.
On Thu, Aug 23, 2018 at 9:12 AM, Hervé Ballans
wrote:
> Le 23/08/2018 à 12:51, Alfredo Deza a écrit :
>>
>> On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans
>> wrote:
>>>
>>> Hello all,
>>>
>>> I would like to continue a thread that da
On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans
wrote:
> Hello all,
>
> I would like to continue a thread that dates back to last May (sorry if this
> is not a good practice ?..)
>
> Thanks David for your useful tips on this thread.
> In my side, I created my OSDs with ceph-deploy (in place of
On Wed, Aug 22, 2018 at 2:48 PM, David Turner wrote:
> The config settings for DB and WAL size don't do anything. For journal
> sizes they would be used for creating your journal partition with ceph-disk,
> but ceph-volume does not use them for creating bluestore OSDs. You need to
> create the
t-device-class",
>> "class": "ssd", "ids": ["48"]}]=-22 (22) Invalid argument v46327) v1
>> to unknown.0 -
>> 2018-08-20 08:57:58.785 7f9d85934700 10 mon.mon02@1(peon) e4
>> ms_handle_reset 0x55b4ecf4b200 10.24.52.17:6800/153683
On Fri, Aug 17, 2018 at 7:05 PM, Daznis wrote:
> Hello,
>
>
> I have replaced one of our failed OSD drives and recreated a new osd
> with ceph-deploy, and it fails to start.
Is it possible you haven't zapped the journal on nvme0n1p13 ?
>
> Command: ceph-deploy --overwrite-conf osd create
y "raw partition" you mean an actual partition or a raw device
>
> On Fri, Aug 17, 2018 at 2:54 PM Alfredo Deza wrote:
>>
>> On Fri, Aug 17, 2018 at 10:24 AM, Robert Stanford
>> wrote:
>> >
>> > I was using the ceph-volume create command, whic
the prepare and activate functions.
>>>
>>> ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db
>>> /dev/sdb --block.wal /dev/sdb
>>>
>>> That is the command context I've found on the web. Is it wrong?
>>>
>>>
ceph-volume will not do this
for you.
And then you can pass those newly created LVs like:
ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc
--block.db sdb-vg/block-lv --block.wal sdb-vg/wal-lv
>
> Thanks
> R
>
> On Fri, Aug 17, 2018 at 5:55 AM Alfredo Deza wrote
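Pre-creating those LVs might look like the following sketch (the VG/LV names match the quoted command; the sizes are placeholders, not recommendations):

```shell
# Placeholder sizes; pick block.db / block.wal sizes suited to your workload.
vgcreate sdb-vg /dev/sdb
lvcreate -L 30G -n block-lv sdb-vg   # future block.db
lvcreate -L 2G -n wal-lv sdb-vg      # future block.wal
```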
On Thu, Aug 16, 2018 at 4:44 PM, Cody wrote:
> Hi everyone,
>
> As a newbie, I have some questions about using SSD as the Bluestore
> journal device.
>
> 1. Is there a formula to calculate the optimal size of partitions on
> the SSD for each OSD, given their capacity and IO performance? Or is
>
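There is no exact formula in the docs; the commonly cited BlueStore rule of thumb is a block.db of roughly 1-4% of the data device. A throwaway helper to illustrate the arithmetic (the 4% default is an illustrative assumption, not an official constant):

```python
def suggested_db_size_gb(data_size_gb: float, pct: float = 4.0) -> float:
    """Rule-of-thumb block.db size: pct% of the data device.

    The 1-4% range comes from BlueStore sizing guidance; the 4% default
    here is our assumption, not a Ceph constant.
    """
    return data_size_gb * pct / 100.0

# A 4 TB (4000 GB) HDD with the 4% rule gives a 160 GB block.db.
print(suggested_db_size_gb(4000))
```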
On Thu, Aug 16, 2018 at 9:00 PM, Robert Stanford
wrote:
>
> I am following the steps to replace my filestore journal with a bluestore journal
> (http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/). It
> is broken at ceph-volume lvm create. Here is my error:
>
> --> Zapping
On Wed, Aug 1, 2018 at 4:33 AM, Jake Grimmett wrote:
> Dear All,
>
> Not sure if this is a bug, but when I add Intel Optane 900P drives,
> their device class is automatically set to SSD rather than NVME.
Not sure whether we can really tell apart SSDs from NVMe devices, but you
can use the
could be wrong, but this one looks like it might be an issue
> with\nmissing quotes. Always quote template expression brackets when
> they\nstart a value. For instance:\n\nwith_items:\n - {{ foo
> }}\n\nShould be written as:\n\nwith_items:\n - \"{{ foo
> }}\"
On Sat, Jul 28, 2018 at 12:44 AM, Satish Patel wrote:
> I have simple question i want to use LVM with bluestore (Its
> recommended method), If i have only single SSD disk for osd in that
> case i want to keep journal + data on same disk so how should i create
> lvm to accommodate ?
bluestore