the non-starting OSDs.
kind regards
Ronny Aasen
On 12. sep. 2016 13:16, Ronny Aasen wrote:
after adding more OSDs, and with a big backfill running, 2 of my OSDs
keep on stopping.
We also recently upgraded from 0.94.7 to 0.94.9, but I do not know if
that is related.
The log says:
0> 2016
7p1
where /dev/md127p1 is the XFS partition for the OSD.
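As a sketch, one quick way to rule out filesystem damage on such a
partition, assuming the OSD is stopped and its mount point (the osd id N
and path below are placeholders) is unmounted; xfs_repair -n is a
read-only dry run:

# look for xfs or md complaints in the kernel log first
dmesg | grep -iE 'xfs|md127'
# stop the osd and unmount its filesystem (osd id N is hypothetical)
/etc/init.d/ceph stop osd.N
umount /var/lib/ceph/osd/ceph-N
# read-only check: reports problems, writes nothing
xfs_repair -n /dev/md127p1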
good luck
Ronny Aasen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Anyone have a clue as to what can be wrong?
kind regards
Ronny Aasen
-- log debug_filestore=10 --
-19> 2016-09-12 10:31:08.070947 7f8749125880 10
filestore(/var/lib/ceph/osd/ceph-8) getattr
1.fdd_head/1/1df4bfdd/rb.0.392c.238e1f29.002bd134/head '_' = 266
-18>
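For anyone wanting to capture a similar trace: the filestore debug level
can be raised on a running OSD with injectargs, or set when starting the
daemon in the foreground. A sketch, not necessarily the exact invocation
used here (osd.8 matches the log path above):

# raise filestore debugging on the running osd
ceph tell osd.8 injectargs '--debug-filestore 10'
# or run the daemon in the foreground at the same level
ceph-osd -f -i 8 --debug-filestore 10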
Ronny Aasen
On 20.07.2016 15:52, M Ranga Swami Reddy wrote:
Do we have any tool to monitor OSD usage with the help of a UI?
Thanks
Swami
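Not a UI, but as a minimal sketch of what is available from the plain
CLI in Hammer and later, ceph can report per-OSD utilization directly:

# per-osd size, use and availability
ceph osd df
# the same data arranged along the crush tree
ceph osd df tree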
[snip]
controller on this list?
https://wiki.debian.org/LinuxRaidForAdmins
This controller software is often needed for troubleshooting, and can
report status and be monitored as well.
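As one example of the sort of tool listed there: on an LSI MegaRAID
controller, MegaCli can report array and disk status. The adapter and
device numbers below are placeholders:

# logical drive status on all adapters
MegaCli -LDInfo -Lall -aALL
# physical drive status, including media error counters
MegaCli -PDList -aALL
# smart data for a disk behind the controller (device id 0 is a guess)
smartctl -a -d megaraid,0 /dev/sda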
kind regards
Ronny Aasen
On 06.09.2016 14:45, Ronny Aasen wrote:
On 06. sep. 2016 00:58, Brad Hubbard wrote:
On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote:
> Hello
>
> I have an OSD that regularly dies on IO, especially scrubbing.
> Normally I would assume a bad disk and replace it, but then I normally see
> messages in dmesg a
and see if there is any
way to salvage this OSD?
And is there any information I should gather before I scratch the
filesystem and recreate it? Perhaps there is some valuable insight into
what's going on?
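A sketch of what may be worth capturing before recreating it; the tool
names are standard, but the OSD id and devices are placeholders:

# keep the osd log that covers the crashes
cp /var/log/ceph/ceph-osd.N.log ~/osdN-crash.log
# xfs metadata image for offline analysis (filesystem unmounted;
# the dump contains metadata only, no file data)
xfs_metadump -o /dev/md127p1 ~/osdN.metadump
# drive health and error counters
smartctl -x /dev/sda > ~/osdN-smart.txt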
kind regards
Ronny Aasen
running Debian jessie + Hammer 0.94.7
# uname -a
Linux ceph
http://ceph.com/planet/ceph-manually-repair-object
So the scrub errors are gone now.
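For reference, the manual repair described there boils down to roughly
the following; pg 1.fdd and osd.8 are only examples:

# find the inconsistent pg and the offending object
ceph health detail | grep inconsistent
# stop the primary osd and flush its journal
/etc/init.d/ceph stop osd.8
ceph-osd -i 8 --flush-journal
# (move the bad object file out of the pg's current/ directory here)
/etc/init.d/ceph start osd.8
# have ceph rewrite it from a good replica
ceph pg repair 1.fdd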
kind regards
Ronny Aasen
On 04. sep. 2016 00:04, Brad Hubbard wrote:
There should actually be "[ERR]" messages in the OSD logs some time after
"deep-scrub starts". Can we see those and a pg query for
machine. It saves you 2 slots for OSDs, and they are quite
reliable. You could even use 2 SD cards if your machine has the
internal SD slot:
http://www.dell.com/downloads/global/products/pedge/en/poweredge-idsdm-whitepaper-en.pdf
kind regards
Ronny Aasen
Thanks for your comments. Answers inline.
On 05/09/16 09:53, Christian Balzer wrote:
Hello,
On Mon, 9 May 2016 09:31:20 +0200 Ronny Aasen wrote:
hello
I am running a small lab ceph cluster consisting of 6 old used servers.
That's larger than quite a few production deployments
... so it's fairly regular.
The raid5 sets are 12 TB, so I was hoping to be able to fix the problem
rather than zapping the md and recreating it from scratch. I was also
worried that there was something fundamentally wrong about running OSDs
on software md raid5 devices.
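For what it's worth, checking the md device itself is cheap before
deciding either way; a sketch, with md127 taken from the earlier mail:

# overall software-raid state
cat /proc/mdstat
# per-array detail: failed members, rebuild progress, etc.
mdadm --detail /dev/md127
# kick off an online consistency check of the array
echo check > /sys/block/md127/md/sync_action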
kind regards
Ronny Aasen