I still wanted to thank you for the nicely detailed arguments regarding
this, it is much appreciated. It really gives me the broader perspective
I was lacking.
-----Original Message-----
From: Warren Wang [mailto:warren.w...@walmart.com]
Sent: Monday, 11 June 2018 17:30
To: Konstantin
I'll chime in as a large-scale operator, and a strong proponent of ceph-volume.
Ceph-disk wasn't accomplishing what was needed for anything other than
vanilla use cases (and even then it was still somewhat broken). I'm not going
to re-hash Sage's valid points too much, but trying to manipulate the old
I'm going to jump in here with a few points.
- ceph-disk was replaced for two reasons: (1) Its design was
centered around udev, and it was terrible. We have been plagued for years
with bugs due to race conditions in the udev-driven activation of OSDs,
mostly variations of "I rebooted and not all of my OSDs started."
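For context on Sage's point: ceph-volume sidesteps the udev races by activating each OSD through an explicit per-OSD systemd unit instead of asynchronous udev events. A rough sketch of what that looks like on a node (unit naming is from a typical Luminous-era deployment; the id/fsid pair below is a hypothetical example, not taken from this thread):

```shell
# List the per-OSD activation units ceph-volume enables; each unit name
# encodes the OSD id and fsid, so activation is explicit and ordered by
# systemd rather than fired asynchronously by udev rules.
systemctl list-units 'ceph-volume@*' --all

# A single OSD can be (re)activated deterministically by id + fsid
# (hypothetical values shown):
ceph-volume lvm activate 0 6cc43680-4f6e-4feb-92ff-9c7ba204120e
```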
http://docs.ceph.com/docs/master/ceph-volume/simple/
?
Only 'scan' & 'activate'. Not 'create'.
k
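For readers following along: the 'simple' subcommands Konstantin mentions take over an already-provisioned ceph-disk OSD rather than creating a new one, which is why there is no 'create'. A sketch of the workflow (device path and id/uuid are illustrative assumptions):

```shell
# Capture the metadata of an existing ceph-disk-prepared OSD into a JSON
# file under /etc/ceph/osd/ -- no data on the device is modified.
ceph-volume simple scan /dev/sdb1

# Start the OSD from the scanned metadata and disable the old udev-based
# ceph-disk activation path (id/uuid below are hypothetical).
ceph-volume simple activate 0 6cc43680-4f6e-4feb-92ff-9c7ba204120e
```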
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On Fri, 8 June 2018 at 12:35, Marc Roos wrote:
>
> I am getting the impression that not everyone understands the subject
> that has been raised here.
>
Or they do, and they do not agree with your vision of how things should be
done.
That is a distinct possibility one has to consider when using
> Answers:
> - unify setup, support for crypto & more
Unify setup by adding a dependency? There is / should be support for
crypto already, shouldn't there?
> - none
The costs of LVM can be argued. Having something to go through is worse
than having nothing to go through.
Beuh ...
I have other questions:
- why use LVM, and not stick with direct disk access?
- what are the costs of LVM (performance, latency, etc.)?
Answers:
- unify setup, support for crypto & more
- none
TL;DR: that technical choice is fine, there is nothing to argue about.
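To make the "unify setup, support for crypto" answer concrete: with ceph-volume the same one-step command covers plain and encrypted OSDs. A sketch, assuming a Luminous-or-later cluster and spare devices /dev/sdb and /dev/sdc (the device names are assumptions):

```shell
# Plain Bluestore OSD on a raw device; ceph-volume wraps it in an LVM
# volume group / logical volume and records metadata as LVM tags.
ceph-volume lvm create --bluestore --data /dev/sdb

# The same command with dm-crypt encryption -- no separate tooling or
# partition-GUID tricks needed.
ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdc
```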
On 06/08/2018 07:15 AM, Marc
I am getting the impression that not everyone understands the subject
that has been raised here.
Why do OSDs need to be via LVM, and why not stick with direct disk
access as it is now?
- Bluestore is created to cut out some fs overhead,
- everywhere 10Gb is recommended because of better latency.
From: ceph-users On Behalf Of Konstantin
Shalygin
Sent: 08 June 2018 11:11
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm?
(and just not stick with direct disk access)
What is the reasoning behind switching to lvm? Does it make sense to go
through (yet) another layer to access the disk? Why creating this
dependency and added complexity? It is fine as it is, or not?
In fact, the question is why one tool is replaced by another without
preserving its functionality.
Yes, it is indeed difficult to find a good balance between asking
multiple things in one email, risking that not all are answered, and
putting them as individual questions.
-----Original Message-----
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Thursday, 31 May 2018 23:50
To: Marc
On Thu, May 31, 2018 at 10:33 PM, Marc Roos wrote:
>
> I actually tried to search the ML before bringing up this topic. Because
> I do not get the logic choosing this direction.
>
> - Bluestore is created to cut out some fs overhead,
> - everywhere 10Gb is recommended because of better latency.
You are also making this entire conversation INCREDIBLY difficult to follow
by creating so many new email threads instead of sticking with one.
Your question assumes that ceph-disk was a good piece of software. It had
a bug list a mile long and nobody working on it. A common example was how
simple it was to mess up any part of the dozens of components that allowed
an OSD to autostart on boot. One of the biggest problems was when
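As an illustration of how many moving parts ceph-disk's boot-time autostart had: it relied on a magic GPT partition type GUID that udev rules matched to fire activation asynchronously. A sketch of how one could inspect that on a ceph-disk-era OSD (the GUID is the well-known Ceph OSD data type code, quoted from memory here, so treat it and the device path as assumptions):

```shell
# ceph-disk tagged the data partition with a dedicated GPT type GUID;
# if any of this chain was off, the OSD silently failed to start.
sgdisk --info=1 /dev/sdb | grep 'Partition GUID code'
# Expected type code for a Ceph OSD data partition (assumed):
#   4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D
# The packaged udev rules (95-ceph-osd.rules) matched that GUID and ran
# 'ceph-disk activate' asynchronously -- the source of the race
# conditions described elsewhere in this thread.
```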
What is the reasoning behind switching to lvm? Does it make sense to go
through (yet) another layer to access the disk? Why creating this
dependency and added complexity? It is fine as it is, or not?
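One concrete thing the LVM layer buys, which bears on the "why this dependency" question: ceph-volume persists everything it needs for activation as LVM tags on the logical volume itself, so no udev magic or partition GUIDs are required. A sketch (tag names are from a typical ceph-volume Bluestore deployment and should be treated as illustrative):

```shell
# Show the metadata ceph-volume stores directly on each OSD's logical
# volume; activation simply reads these tags back.
lvs -o lv_name,lv_tags --noheadings
# Example tags on a Bluestore OSD (illustrative; fsid elided):
#   ceph.osd_id=0,ceph.osd_fsid=...,ceph.type=block,ceph.cluster_name=ceph
```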