ceph-dash is very easy to set up and get working:
https://github.com/Crapworks/ceph-dash
It gives you a nice web page to observe the cluster from manually.
The page is also easily read by any alerting software you might have,
and you should configure it to alert on anything besides HEALTH_OK.
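For example, a minimal polling sketch in Python could drive such an alert. This assumes ceph-dash's JSON output exposes an overall health status under "health"/"overall_status" and that the URL below points at your instance; both the URL and the key names are placeholders to adjust for your setup.

#!/usr/bin/env python
# Poll ceph-dash as JSON and exit non-zero unless the cluster reports
# HEALTH_OK. The URL and the JSON key names are assumptions; adjust them
# to match what your ceph-dash instance actually returns.
import json
import sys
import urllib.request

CEPH_DASH_URL = "http://ceph-dash.example.com:5000/"  # hypothetical endpoint

req = urllib.request.Request(CEPH_DASH_URL,
                             headers={"Accept": "application/json"})
with urllib.request.urlopen(req, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))

status = data.get("health", {}).get("overall_status", "UNKNOWN")
if status != "HEALTH_OK":
    print("CRITICAL: cluster status is %s" % status)
    sys.exit(2)   # Nagios-style critical exit code

print("OK: cluster reports HEALTH_OK")
sys.exit(0)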
Kind regards
Do we have any tool to monitor the OSDs' usage with the help of a UI?
Thanks
Swami
On Tue, Jul 19, 2016 at 6:44 PM, M Ranga Swami Reddy wrote:
> +1 .. I agree
>
> Thanks
> Swami
+1 .. I agree
Thanks
Swami
On Tue, Jul 19, 2016 at 4:57 PM, Lionel Bouton wrote:
> Hi,
>
> On 19/07/2016 13:06, Wido den Hollander wrote:
>> That should be a config option, since reading while writes still block
>> is also a danger. Multiple clients could read the same object, perform
>> an in-memory change and their write will block.
>>
>> Now, which client will 'win' after the full flag has been removed? That
>> could lead to data corruption.
Hi,

On 19/07/2016 13:06, Wido den Hollander wrote:
>> Thanks for the correction... so even when one OSD reaches 95% full, the
>> total Ceph cluster IO (R/W) will be blocked... Ideally read IO should
>> work...
>
> That should be a config option, since reading while writes still block
> is also a danger. [...]
> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote:
>
> Thanks for the correction... so even when one OSD reaches 95% full, the
> total Ceph cluster IO (R/W) will be blocked... Ideally read IO should
> work...

That should be a config option, since reading while writes still block is
also a danger. Multiple clients could read the same object, perform an
in-memory change and their write will block.

Now, which client will 'win' after the full flag has been removed? That
could lead to data corruption.
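To illustrate the read-modify-write race described above: if reads kept working while writes stayed blocked, two clients could each read the same object, modify their own in-memory copy, and queue a write. A minimal sketch follows (plain Python, not Ceph client code; the ObjectStore class and the object name are made up for illustration):

# Two clients read the same object, modify it in memory, and write it back.
# Whichever write lands last silently overwrites the other, which is the
# "which client wins?" problem described above.
class ObjectStore:
    def __init__(self):
        self.objects = {"obj": {"counter": 0}}

    def read(self, name):
        # Reads keep working even while writes are blocked.
        return dict(self.objects[name])

    def write(self, name, value):
        # Writes complete only once the 'full' flag is removed.
        self.objects[name] = value

store = ObjectStore()

# Both clients read while writes are blocked.
a_copy = store.read("obj")
b_copy = store.read("obj")

# Each performs an in-memory change based on its own (now stale) copy.
a_copy["counter"] += 1
b_copy["counter"] += 1

# After the full flag is removed, both pending writes go through.
store.write("obj", a_copy)
store.write("obj", b_copy)   # silently overwrites client A's update

print(store.read("obj"))     # {'counter': 1}: one of the two updates is lost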
Thanks for the correction... so even when one OSD reaches 95% full, the
total Ceph cluster IO (R/W) will be blocked... Ideally read IO should
work...
Thanks
Swami
On Tue, Jul 19, 2016 at 3:41 PM, Wido den Hollander wrote:
> No, the *whole* cluster will block. In the OSDMap the flag 'full' is set,
> which causes all I/O to stop (even read!)
> On 19 July 2016 at 11:55, M Ranga Swami Reddy wrote:
>
> Thanks for the detail...
> When an OSD is 95% full, then only that specific OSD's write IO is blocked.

No, the *whole* cluster will block. In the OSDMap the flag 'full' is set,
which causes all I/O to stop (even read!)
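As a practical aside, both the cluster-wide 'full' flag and per-OSD utilization can be checked from a script, so alerting can fire long before any OSD hits 95%. The sketch below assumes the `ceph` CLI is in PATH and that `ceph osd dump --format json` reports cluster flags in a "flags" field while `ceph osd df --format json` lists per-OSD "utilization"; the field names and the 85% warning threshold are assumptions to verify against your release.

# Check for the OSDMap 'full' flag and warn about OSDs nearing the full ratio.
import json
import subprocess

def ceph_json(*args):
    out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
    return json.loads(out.decode("utf-8"))

osdmap = ceph_json("osd", "dump")
if "full" in osdmap.get("flags", "").split(","):
    print("CRITICAL: OSDMap 'full' flag is set, client I/O is blocked")

for node in ceph_json("osd", "df").get("nodes", []):
    util = node.get("utilization", 0.0)   # percent of the OSD's space in use
    if util >= 85.0:                      # warn well before the 95% full ratio
        print("WARNING: %s is %.1f%% full, reweight or add capacity"
              % (node["name"], util))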
Thanks for the detail...
When an OSD is 95% full, then only that specific OSD's write IO is blocked.
Thanks
Swami
On Tue, Jul 19, 2016 at 3:07 PM, Christian Balzer wrote:
> Hello,
>
> On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote:
>
>> Using a ceph cluster with 100+ OSDs [...]
Hello,
On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote:
>> Using a ceph cluster with 100+ OSDs, and the cluster is 60% full.
>> One of the OSDs is 95% full.
>> If an OSD is 95% full, does it impact any storage operation? Does this
>> impact VMs/instances?
>
> Yes, [...]
On 16-07-19 11:44, M Ranga Swami Reddy wrote:
> Hi,
>
> Using a ceph cluster with 100+ OSDs, and the cluster is 60% full.
> One of the OSDs is 95% full.
> If an OSD is 95% full, does it impact any storage operation? Does this
> impact VMs/instances?

Yes, one OSD will impact the whole cluster. It will block write operations
to the cluster.
Hi,
Using a ceph cluster with 100+ OSDs, and the cluster is 60% full.
One of the OSDs is 95% full.
If an OSD is 95% full, does it impact any storage operation? Does this
impact VMs/instances?
Immediately I have reduced the weight of the OSD which was 95% full.
After the re-weight, data rebalancing started.
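For reference, that manual step corresponds to lowering the OSD's override weight so PGs move off it. A minimal sketch, assuming `ceph osd reweight <osd-id> <weight>` is the command used for the re-weight described above (the OSD id and the target weight below are placeholders):

# Temporarily lower the override weight of an over-full OSD so data
# rebalances away from it. `ceph osd reweight` takes a weight between
# 0.0 and 1.0; the values here are examples only.
import subprocess

osd_id = 23          # hypothetical OSD that hit 95% utilization
new_weight = 0.85    # push a share of its PGs elsewhere

subprocess.check_call(["ceph", "osd", "reweight", str(osd_id), str(new_weight)])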