Re: [ceph-users] Even data distribution across OSD - Impossible Achievement?

2016-10-17 Thread info
Hi Wido, thanks for the explanation. Generally speaking, what is the best practice when a couple of OSDs are reaching near-full capacity? I could set their weight to something like 0.9, but this seems like only a temporary solution. Of course I can add more OSDs, but this radically changes my
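For reference, a minimal sketch of the usual reweighting options being discussed here; the OSD id 12 and the values shown are only illustrative:

    # Temporarily lower the reweight value (0.0-1.0) so CRUSH maps fewer PGs to this OSD
    ceph osd reweight 12 0.9
    # Or adjust the CRUSH weight itself (normally set to the device size in TB)
    ceph osd crush reweight osd.12 1.6
    # Or let Ceph reweight OSDs that exceed a utilization threshold (percent of average)
    ceph osd reweight-by-utilization 120

The first form is the "temporary" fix mentioned above; the CRUSH weight change and adding OSDs alter the map itself.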

[ceph-users] Even data distribution across OSD - Impossible Achievement?

2016-10-14 Thread info
Hi all, after encountering a warning about one of my OSDs running out of space, I tried to get a better understanding of how data distribution works. I'm running a Hammer Ceph cluster, v0.94.7. I did some tests with crushtool, trying to figure out how to achieve even data distribution across OSDs. Let's
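As a sketch of the kind of crushtool test run described here, assuming the compiled CRUSH map is in crushmap.bin and the pool uses rule 0 with 3 replicas (file name and rule number are only placeholders):

    # Show how many PGs each OSD would receive for the given rule and replica count
    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-utilization
    # Show the raw input-to-OSD mappings for inspection
    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings

This only simulates placement against the map; it does not change the running cluster.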

Re: [ceph-users] 2x replica with NVMe

2017-06-08 Thread info
I'm thinking of delaying this project until the Luminous release to have BlueStore support. So are you telling me that checksum capability will be present in BlueStore, and that therefore using NVMe with 2x replica for production data would be possible? From: "nick" To:
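For context, a sketch of the BlueStore checksum setting the thread refers to; the option and value below assume a Luminous-style configuration, and crc32c is the default, so it normally does not need to be set explicitly:

    # ceph.conf, [osd] section: per-block data checksums in BlueStore
    bluestore csum type = crc32c

With checksums, a read that hits a corrupted replica can be detected and served from the other copy, which is the main argument for relaxing the 3x-replica rule on reliable flash.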

[ceph-users] 2x replica with NVMe

2017-06-08 Thread info
Hi all, I'm going to build an all-flash Ceph cluster. Looking around the existing documentation, I see lots of guides and use-case scenarios from various vendors testing Ceph with a 2x replica. Now, I'm an old-school Ceph user; I have always considered 2x replica really dangerous for production
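A 2x replica pool of the kind being discussed would be set up along these lines (the pool name nvme-pool is only a placeholder); whether this is safe enough for production is exactly the question of this thread:

    # Keep two copies of each object; min_size 1 still allows I/O with a single copy,
    # which is where the risk of silent data loss without checksums comes from
    ceph osd pool set nvme-pool size 2
    ceph osd pool set nvme-pool min_size 1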