I am getting the impression that not everyone understands the subject 
that has been raised here.

Why do OSDs need to go through LVM, and why not stick with direct disk 
access as it is now?

- BlueStore was created to cut out some filesystem overhead.
- 10Gb networking is recommended everywhere because of better latency. (I 
even posted something here to make Ceph perform better on 1Gb ethernet; it 
was disregarded because it would add complexity. Fine, I can understand 
that.)

And then, because of some start-up/automation issues (because that is the 
only thing being mentioned here so far), let's add the LVM tier? 
Introducing a layer that is constantly there and adds some overhead (maybe 
not that much) to every read and write operation? 
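If anyone wants to put a number on that overhead, the same fio job can be 
run against the raw block device and against an LV carved from the same 
disk. The device names below are placeholders only, and the test is 
destructive, so run it on an empty disk:

    # 4k random writes straight to the raw device (placeholder /dev/sdx)
    fio --name=raw --filename=/dev/sdx --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=30 --time_based
    # same job against an LV on that disk (placeholder vg0/lv0)
    fio --name=lvm --filename=/dev/vg0/lv0 --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=30 --time_based

Comparing the completion-latency percentiles of the two runs would show 
whether the device-mapper layer is even measurable on the hardware in 
question.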





-----Original Message-----
From: Nick Fisk [mailto:n...@fisk.me.uk] 
Sent: Friday, 8 June 2018 12:14
To: 'Konstantin Shalygin'; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Why the change from ceph-disk to ceph-volume 
and lvm? (and just not stick with direct disk access)

http://docs.ceph.com/docs/master/ceph-volume/simple/

?
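
That "simple" mode keeps OSDs that were prepared by ceph-disk on plain 
partitions and just records their metadata as JSON so systemd can bring 
them up at boot, with no LVM conversion involved. Roughly (the OSD path is 
an example):

    # capture the metadata of an existing ceph-disk OSD into
    # /etc/ceph/osd/<id>-<fsid>.json
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0
    # enable startup for every scanned OSD, still on plain partitions
    ceph-volume simple activate --all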

 

From: ceph-users <ceph-users-boun...@lists.ceph.com> On Behalf Of 
Konstantin Shalygin
Sent: 08 June 2018 11:11
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Why the change from ceph-disk to ceph-volume 
and lvm? (and just not stick with direct disk access)

 

        What is the reasoning behind switching to LVM? Does it make sense 
        to go through (yet) another layer to access the disk? Why create 
        this dependency and added complexity? It is fine as it is, is it not?

In fact, the question is why one tool is being replaced by another without 
preserving its functionality.
Why LVM, why not bcache?

It seems to me that someone on the dev team has pushed the idea that LVM 
solves all problems.
But it also adds overhead, and since it is a kernel module, an update can 
bring a performance drop, changes in module settings, etc.
I understand that for Red Hat Storage this is a solution, but for a 
community with different distributions and hardware it may be superfluous.
I would like the possibility of preparing OSDs with direct disk access to 
be restored, even if it is not the default.
This would also preserve existing ceph-ansible configurations. Actually, 
before this deprecation I did not even know whether my OSDs had been 
created by ceph-disk, ceph-volume, or something else.
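
For what it is worth, it is possible to check after the fact: OSDs created 
by ceph-volume's lvm mode carry LVM tags, while ceph-disk OSDs sit on plain 
partitions. Something along these lines (output details may differ between 
releases):

    # OSDs provisioned through ceph-volume's lvm mode show up here
    ceph-volume lvm list
    # ...and their logical volumes carry ceph.* tags visible to plain LVM tools
    lvs -o lv_name,vg_name,lv_tags | grep ceph.osd_id
    # anything not listed there but mounted under /var/lib/ceph/osd/ was
    # prepared as a plain partition (ceph-disk style)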





k




