Re: [ceph-users] Fwd: lost power. monitors died. Cephx errors now

2016-08-13 Thread Sean Sullivan
So with a patched leveldb to skip errors I now have a store.db from which I can extract the pg, mon, and osd maps. That said, when I try to start kh10-8 it bombs out: root@kh10-8:/var/lib/ceph/mon/ceph-kh10-8# ceph-mon -i
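A sketch of how the maps can be pulled from a mon whose store is readable again; the mon ID and paths are taken from the thread and may differ on other clusters.

```shell
# Extract the monmap embedded in the mon store and inspect it.
# Requires the mon daemon to be stopped and the store to be readable.
ceph-mon -i kh10-8 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap   # lists mon names and addresses
```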

Re: [ceph-users] Cascading failure on a placement group

2016-08-13 Thread Goncalo Borges
>It should be worthwhile to check if timezone is/was different in mind. What I meant was that it should be worthwhile to check if timezone is/was different in MONS also. Cheers From: Hein-Pieter van Braam [h...@tmm.cx] Sent: 13 August 2016 22:42 To:

Re: [ceph-users] Cascading failure on a placement group

2016-08-13 Thread Goncalo Borges
Hi HP, my 2 cents again. In http://tracker.ceph.com/issues/9732 there is a comment from Samuel saying "This...is not resolved! The utime_t->hobject_t mapping is timezone dependent. Needs to be not timezone dependent when generating the archive object names." The way I read it is that you
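A minimal illustration of the timezone dependence the ticket describes: if a name is derived from localtime, the same timestamp renders differently under different TZ settings, so daemons in different zones would derive different archive object names.

```shell
# Same epoch, two timezones: the rendered strings (and thus any name
# built from them) differ. The timestamp is an arbitrary example.
ts=1471089600
utc=$(TZ=UTC date -d "@$ts" +%Y-%m-%d-%H)
est=$(TZ=America/New_York date -d "@$ts" +%Y-%m-%d-%H)
echo "$utc vs $est"
```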

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-13 Thread Alex Gorbachev
On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote: > On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev > wrote: >>> I'm confused. How can a 4M discard not free anything? It's either >>> going to hit an entire object or two adjacent objects,
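The point about discards hitting whole objects can be sketched as follows; the device name is an assumption, and 4M is the default RBD object size. A discard only frees a RADOS object it covers completely, so aligning the request to the object boundary matters.

```shell
# Hypothetical example: issue a discard aligned to the 4 MiB object
# boundary of /dev/rbd0 so it maps onto one whole RADOS object.
blkdiscard --offset 0 --length $((4 * 1024 * 1024)) /dev/rbd0
```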

Re: [ceph-users] Multiple OSD crashing a lot

2016-08-13 Thread Hein-Pieter van Braam
Hi Blade, I was planning to do something similar. Run the OSD in the way you describe, use object copy to copy the data to a new volume, then move the clients to the new volume. Thanks a lot, - HP On Sat, 2016-08-13 at 08:18 -0700, Blade Doyle wrote: > Hi HP. > > Mine was not really a fix, it
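One way the copy-to-a-new-volume step could look, assuming RBD volumes; the pool and image names here are made up for illustration.

```shell
# Copy the image object-by-object into a fresh image, then point
# clients at the new volume instead of the damaged one.
rbd cp rbd/old-volume rbd/new-volume
```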

Re: [ceph-users] Multiple OSD crashing a lot

2016-08-13 Thread Blade Doyle
Hi HP. Mine was not really a fix, it was just a hack to get the OSD up long enough to make sure I had a full backup, then I rebuilt the cluster from scratch and restored the data. Though the hack did stop the OSD from crashing, it is probably a symptom of some internal problem, and may not be

Re: [ceph-users] Multiple OSD crashing a lot

2016-08-13 Thread Hein-Pieter van Braam
Hi Blade, I appear to be stuck in the same situation you were in. Do you still happen to have a patch implementing the workaround you described? Thanks, - HP

Re: [ceph-users] Cascading failure on a placement group

2016-08-13 Thread Hein-Pieter van Braam
Hi, The timezones on all my systems appear to be the same, I just verified it by running 'date' on all my boxes. - HP On Sat, 2016-08-13 at 12:36 +, Goncalo Borges wrote: > The ticket I mentioned earlier was marked as a duplicate of > > http://tracker.ceph.com/issues/9732 > > Cheers >
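The per-box check can be done in one pass over all hosts; the host list below is an assumption, substitute the cluster's mon and osd nodes.

```shell
# Print the timezone abbreviation and UTC offset of every host;
# any line that differs points at a misconfigured box.
for h in mon1 mon2 mon3 osd1 osd2; do
  printf '%s: ' "$h"
  ssh "$h" date +'%Z %z'
done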

Re: [ceph-users] Cascading failure on a placement group

2016-08-13 Thread Goncalo Borges
The ticket I mentioned earlier was marked as a duplicate of http://tracker.ceph.com/issues/9732 Cheers Goncalo From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Goncalo Borges [goncalo.bor...@sydney.edu.au] Sent: 13 August 2016 22:23 To: Hein-Pieter van Braam; ceph-users

Re: [ceph-users] Cascading failure on a placement group

2016-08-13 Thread Hein-Pieter van Braam
Hi Goncalo, Thank you for your response. I had already found that issue but it does not apply to my situation. The timezones are correct and I'm running a pure hammer cluster. - HP On Sat, 2016-08-13 at 12:23 +, Goncalo Borges wrote: > Hi HP. > > I am just a site admin so my opinion should

Re: [ceph-users] Cascading failure on a placement group

2016-08-13 Thread Goncalo Borges
Hi HP. I am just a site admin, so my opinion should be validated by proper support staff. This seems really similar to http://tracker.ceph.com/issues/14399 The ticket speaks about a timezone difference between OSDs. Maybe it is something worthwhile to check? Cheers Goncalo

Re: [ceph-users] CephFS quota

2016-08-13 Thread Goncalo Borges
Hi Willi If you are using ceph-fuse, to enable quota, you need to pass "--client-quota" option in the mount operation. Cheers Goncalo From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Willi Fehler [willi.feh...@t-online.de] Sent: 13
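A sketch of the mount command with quota enforcement enabled; the monitor address and mount point are examples. On hammer-era clusters, ceph-fuse only enforces CephFS quotas when started with this flag.

```shell
# Mount CephFS via ceph-fuse with client-side quota enforcement on.
ceph-fuse --client-quota -m 192.168.0.1:6789 /mnt/cephfs
```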

[ceph-users] Cascading failure on a placement group

2016-08-13 Thread Hein-Pieter van Braam
Hello all, My cluster started to lose OSDs without any warning; whenever an OSD becomes the primary for a particular PG it crashes with the following stack trace: ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432) 1: /usr/bin/ceph-osd() [0xada722] 2: (()+0xf100) [0x7fc28bca5100]
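A hedged first-aid step for this kind of flapping (not taken from the thread itself): stopping automatic rebalancing keeps the cluster from thrashing while the crashing OSDs are examined.

```shell
# Prevent OSDs from being marked out (and data from migrating) while
# investigating; re-enable recovery afterwards.
ceph osd set noout
# ...investigate the crashing OSDs, then:
ceph osd unset noout
```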

Re: [ceph-users] CephFS quota

2016-08-13 Thread w...@42on.com
> On 13 Aug 2016 at 09:24, Willi Fehler wrote: > > Hello, > > I'm trying to use CephFS quotas. On my client I've created a subdirectory in > my CephFS mountpoint and used the following command from the documentation. > > setfattr -n

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-13 Thread w...@42on.com
> On 13 Aug 2016 at 08:58, Georgios Dimitrakakis wrote: > > >>> On 13 Aug 2016 at 03:19, Bill Sharer wrote: >>> >>> If all the system disk does is handle the o/s (ie osd journals are >>> on dedicated or osd drives as

[ceph-users] CephFS quota

2016-08-13 Thread Willi Fehler
Hello, I'm trying to use CephFS quotas. On my client I've created a subdirectory in my CephFS mountpoint and used the following command from the documentation. setfattr -n ceph.quota.max_bytes -v 1 /mnt/cephfs/quota But if I create files bigger than my quota nothing happens. Do I
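For reference, a complete quota example with an illustrative limit (the 100 MB value here is an example, not the value from the message). The attribute is set on a directory of a CephFS mount, and enforcement happens on the client side.

```shell
# Set a 100 MiB quota on a CephFS directory and read it back.
setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/quota
getfattr -n ceph.quota.max_bytes /mnt/cephfs/quota
```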

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-13 Thread Georgios Dimitrakakis
On 13 Aug 2016 at 03:19, Bill Sharer wrote: If all the system disk does is handle the o/s (ie osd journals are on dedicated or osd drives as well), no problem. Just rebuild the system and copy the ceph.conf back in when you re-install ceph. Keep a spare copy of your
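A minimal sketch of the "keep a spare copy" advice; the paths are the Ceph defaults and may differ per deployment. Archiving the config and keyrings lets the OS disk be rebuilt without touching the OSD data disks.

```shell
# Archive the host's Ceph config and bootstrap keys to restore after
# an OS-disk rebuild; store the tarball off-host.
tar czf /root/ceph-host-config.tgz /etc/ceph /var/lib/ceph/bootstrap-osd
```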

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-13 Thread w...@42on.com
> On 13 Aug 2016 at 03:19, Bill Sharer wrote: > > If all the system disk does is handle the o/s (ie osd journals are on > dedicated or osd drives as well), no problem. Just rebuild the system and > copy the ceph.conf back in when you