Re: [ceph-users] ceph OSD with 95% full

2016-09-08 Thread Ronny Aasen
ceph-dash is VERY easy to set up and get working: https://github.com/Crapworks/ceph-dash gives you a nice webpage to manually observe from. The page is also easily read by any alerting software you might have, and you should configure it to alert on anything besides HEALTH_OK.

Kind regards
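The same page ceph-dash renders can be fetched as JSON and fed to an alerting job. A minimal polling sketch, assuming the endpoint returns `ceph status`-style JSON when asked for `application/json`; the exact field names (`health`, `overall_status`) vary by release, so verify them against your deployment:

```python
import json
from urllib.request import Request, urlopen

def is_healthy(status):
    """True only when the reported cluster health is HEALTH_OK."""
    health = status.get("health", {})
    # some releases report a flat string, others a dict with "overall_status"
    if isinstance(health, dict):
        return health.get("overall_status") == "HEALTH_OK"
    return health == "HEALTH_OK"

def poll(url):
    """Fetch ceph-dash's JSON view of the cluster and evaluate it."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return is_healthy(json.load(resp))
```

A cron job calling `poll("http://dashhost:5000/")` (URL is an assumption for your setup) and paging when it returns False implements the "alert on anything besides HEALTH_OK" advice above.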

Re: [ceph-users] ceph OSD with 95% full

2016-07-20 Thread M Ranga Swami Reddy
Do we have any tool to monitor the OSDs usage with help of UI?

Thanks
Swami

On Tue, Jul 19, 2016 at 6:44 PM, M Ranga Swami Reddy wrote:
> +1 .. I agree
>
> Thanks
> Swami
>
> On Tue, Jul 19, 2016 at 4:57 PM, Lionel Bouton wrote:
>> Hi,
>>

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
+1 .. I agree

Thanks
Swami

On Tue, Jul 19, 2016 at 4:57 PM, Lionel Bouton wrote:
> Hi,
>
> On 19/07/2016 13:06, Wido den Hollander wrote:
>>> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote:
>>>
>>>
>>> Thanks for the correction...so even

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
> That should be a config option, since reading while writes still block is also
> a danger. Multiple clients could read the same object, perform an in-memory
> change and their write will block.
> Now, which client will 'win' after the full flag has been removed?
> That could lead to data

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Lionel Bouton
Hi,

On 19/07/2016 13:06, Wido den Hollander wrote:
>> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote:
>>
>>
>> Thanks for the correction...so even if one OSD reaches 95% full, the
>> total ceph cluster IO (R/W) will be blocked... Ideally read IO should
>> work...
>

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Wido den Hollander
> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote:
>
>
> Thanks for the correction...so even if one OSD reaches 95% full, the
> total ceph cluster IO (R/W) will be blocked... Ideally read IO should
> work...

That should be a config option, since reading while writes

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
Thanks for the correction...so even if one OSD reaches 95% full, the
total ceph cluster IO (R/W) will be blocked... Ideally read IO should
work...

Thanks
Swami

On Tue, Jul 19, 2016 at 3:41 PM, Wido den Hollander wrote:
>
>> On 19 July 2016 at 11:55, M Ranga Swami Reddy

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Wido den Hollander
> On 19 July 2016 at 11:55, M Ranga Swami Reddy wrote:
>
>
> Thanks for the detail...
> When an OSD is 95% full, then that specific OSD's write IO is blocked.

No, the *whole* cluster will block. In the OSDMap the flag 'full' is set, which causes all I/O to stop (even reads!)
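Since a single OSD crossing the full threshold stops I/O for the whole cluster, it pays to catch OSDs before they reach `mon_osd_full_ratio` (default 0.95; `mon_osd_nearfull_ratio` warns at 0.85). A sketch that classifies OSDs from `ceph osd df --format json`-style output; the `nodes`/`utilization` field names are assumptions about that JSON shape, so check them on your release:

```python
def classify_osds(osd_df, nearfull=85.0, full=95.0):
    """Return (full_osds, nearfull_osds) as lists of OSD names.

    osd_df mirrors `ceph osd df --format json`: a "nodes" list where each
    entry carries a "name" and a "utilization" percentage.
    """
    full_osds, nearfull_osds = [], []
    for node in osd_df.get("nodes", []):
        util = node.get("utilization", 0.0)  # percent of capacity used
        if util >= full:
            full_osds.append(node["name"])
        elif util >= nearfull:
            nearfull_osds.append(node["name"])
    return full_osds, nearfull_osds
```

Piping `ceph osd df --format json` through this in a monitoring check surfaces the skewed OSD (one at 95% while the cluster averages 60%) before the 'full' flag is ever set.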

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
Thanks for the detail...
When an OSD is 95% full, then that specific OSD's write IO is blocked.

Thanks
Swami

On Tue, Jul 19, 2016 at 3:07 PM, Christian Balzer wrote:
>
> Hello,
>
> On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote:
>
>> >> Using a ceph cluster with 100+ OSDs

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Christian Balzer
Hello,

On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote:

> >> Using a ceph cluster with 100+ OSDs; the cluster is filled with 60% data.
> >> One of the OSDs is 95% full.
> >> If an OSD is 95% full, does it impact any storage operation? Does this
> >> impact VMs/instances?
> >
> > Yes,

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
>> Using a ceph cluster with 100+ OSDs; the cluster is filled with 60% data.
>> One of the OSDs is 95% full.
>> If an OSD is 95% full, does it impact any storage operation? Does this
>> impact VMs/instances?

> Yes, one OSD will impact the whole cluster. It will block write operations to the
> cluster

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Henrik Korkuc
On 16-07-19 11:44, M Ranga Swami Reddy wrote:
> Hi,
> Using a ceph cluster with 100+ OSDs; the cluster is filled with 60% data.
> One of the OSDs is 95% full.
> If an OSD is 95% full, does it impact any storage operation? Does this
> impact VMs/instances?

Yes, one OSD will impact the whole cluster. It will

[ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
Hi,

Using a ceph cluster with 100+ OSDs; the cluster is filled with 60% data.
One of the OSDs is 95% full.
If an OSD is 95% full, does it impact any storage operation? Does this
impact VMs/instances?

I immediately reduced the weight of the OSD which was filled with 95% data. After the re-weight, data
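Lowering the override weight (`ceph osd reweight <osd-id> <0.0-1.0>`, distinct from the permanent `ceph osd crush reweight`) is the usual stop-gap here. As a rough first guess, scale the current override by target/current utilization; this linear proportionality is a simplification, since actual data movement depends on CRUSH, so treat the result as a starting point:

```python
def suggest_reweight(current_reweight, current_util_pct, target_util_pct=85.0):
    """Suggest a new override weight that trends the OSD toward the target
    utilization, assuming placement scales roughly linearly with weight."""
    if current_util_pct <= 0:
        return current_reweight
    new_weight = current_reweight * (target_util_pct / current_util_pct)
    # override weights are capped at 1.0
    return round(min(new_weight, 1.0), 2)
```

For the situation above, `suggest_reweight(1.0, 95.0)` gives 0.89, i.e. `ceph osd reweight osd.<id> 0.89`, after which backfill should drain the OSD below the full ratio.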