Re: [ceph-users] How to backup hundreds or thousands of TB

2015-05-17 Thread Francois Lafont
Hi, Wido den Hollander wrote: Aren't snapshots something that should protect you against removal? IF snapshots work properly in CephFS you could create a snapshot every hour. Are you talking about the .snap/ directory in a cephfs directory? If yes, does it work well? Because, with Hammer, if
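
For background on the .snap/ mechanism being discussed: CephFS exposes snapshots as subdirectories of a hidden .snap/ directory, so taking a snapshot is just a mkdir and deleting one is an rmdir. Below is a minimal sketch of the hourly-snapshot idea, assuming a client with CephFS mounted at /mnt/cephfs and snapshots enabled on the cluster (they were still considered experimental around Hammer); the mount point and retention count are illustrative.

    # hourly_cephfs_snapshot.py -- illustrative sketch, not a tested backup tool.
    # Taking a CephFS snapshot is just creating a directory under .snap/;
    # removing that directory deletes the snapshot.
    import os
    import time

    MOUNTPOINT = "/mnt/cephfs"                # assumed CephFS mount point
    SNAP_DIR = os.path.join(MOUNTPOINT, ".snap")
    KEEP = 24                                 # keep roughly one day of hourly snapshots

    def take_snapshot():
        name = time.strftime("hourly-%Y%m%d-%H%M")
        os.mkdir(os.path.join(SNAP_DIR, name))     # this mkdir *is* the snapshot

    def prune_snapshots():
        snaps = sorted(d for d in os.listdir(SNAP_DIR) if d.startswith("hourly-"))
        for old in snaps[:-KEEP]:
            os.rmdir(os.path.join(SNAP_DIR, old))  # rmdir removes the snapshot

    if __name__ == "__main__":
        take_snapshot()
        prune_snapshots()

Run from cron once an hour. As the thread notes, whether Hammer-era CephFS snapshots were reliable enough to count as a backup was exactly the open question.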

[ceph-users] PG scrubbing taking a long time

2015-05-17 Thread Tu Holmes
Hello everyone. Something interesting is happening to me: I have a PG that has been doing a deep scrub for 3 days. Other PGs start scrubbing and finish within a minute or two, but this PG just will not finish scrubbing at all. Any ideas as to how I can kick the scrub or nudge it into
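
For what it's worth, "kicking" a scrub usually means re-issuing it against the PG (or, failing that, restarting the acting primary OSD). A minimal sketch using the standard ceph pg query / deep-scrub commands; the PG id 3.1a7 below is a placeholder for the stuck one.

    # nudge_scrub.py -- sketch: inspect a PG and ask its primary to deep-scrub again.
    import json
    import subprocess

    PGID = "3.1a7"   # placeholder PG id; substitute the one stuck in deep scrub

    # Dump the PG's current state (acting set, last deep-scrub stamp, etc.).
    out = subprocess.check_output(["ceph", "pg", PGID, "query", "--format", "json"])
    info = json.loads(out)
    print("state:", info.get("state"))
    print("acting:", info.get("acting"))

    # Re-issue the deep scrub for just this PG.
    subprocess.check_call(["ceph", "pg", "deep-scrub", PGID])

If the scrub still never completes, restarting the primary OSD in the acting set is the usual next step, since that aborts and reschedules the scrub.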

Re: [ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-05-17 Thread Francois Lafont
Hi, Sorry for my late answer. Gregory Farnum wrote: 1. Is this kind of freeze normal? Can I avoid these freezes with a more recent version of the kernel in the client? Yes, it's normal. Although you should have been able to do a lazy and/or force umount. :) Ah, I haven't tried it. Maybe
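
For reference, the lazy/force unmount mentioned above is umount -l (detach now, clean up once the mount is no longer busy) and umount -f (force, mainly useful for unreachable network filesystems). A small sketch of the fallback, assuming the client mounts CephFS at /mnt/cephfs:

    # lazy_umount.py -- sketch: try a normal unmount, fall back to a lazy one.
    import subprocess

    MOUNTPOINT = "/mnt/cephfs"   # assumed CephFS mount point

    def unmount(mountpoint):
        # Regular unmount first; it fails if the mount is busy or hung.
        if subprocess.call(["umount", mountpoint]) == 0:
            return
        # -l: lazy unmount -- detach the filesystem from the tree now and
        # clean up references when it stops being busy. Often the only way
        # out short of a reboot when the MDS or cluster is unreachable.
        subprocess.check_call(["umount", "-l", mountpoint])

    if __name__ == "__main__":
        unmount(MOUNTPOINT)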

Re: [ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-05-17 Thread Francois Lafont
John Spray wrote: Greg's response is pretty comprehensive, but for completeness I'll add that the specific case of shutdown blocking is http://tracker.ceph.com/issues/9477 Yes indeed, during the freeze, "INFO: task sync:3132 blocked for more than 120 seconds..." was exactly the message I have

[ceph-users] new relic ceph plugin

2015-05-17 Thread German Anders
Hi all, I want to know if someone has deployed a New Relic (Python) plugin for Ceph. Thanks a lot, Best regards, *Ger*
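
No existing plugin is mentioned in the thread, but the collection half of such a plugin is straightforward: poll the cluster with the ceph CLI in JSON mode and extract a few gauges. A hedged sketch of only that half (the New Relic submission side is omitted since it depends on their agent/plugin API; key names follow Hammer-era JSON output and may differ in other releases):

    # ceph_metrics.py -- sketch: gather a few Ceph metrics a monitoring plugin could report.
    import json
    import subprocess

    def ceph_json(*args):
        """Run a ceph CLI command with JSON output and parse it."""
        out = subprocess.check_output(["ceph"] + list(args) + ["--format", "json"])
        return json.loads(out)

    def collect():
        status = ceph_json("status")
        # Key names below match Hammer-era (2015) output; newer releases differ.
        return {
            "health": status["health"]["overall_status"],
            "num_osds": status["osdmap"]["osdmap"]["num_osds"],
            "num_up_osds": status["osdmap"]["osdmap"]["num_up_osds"],
            "num_pgs": status["pgmap"]["num_pgs"],
            "data_bytes": status["pgmap"]["data_bytes"],
        }

    if __name__ == "__main__":
        print(collect())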

[ceph-users] Interesting re-shuffling of pg's after adding new osd

2015-05-17 Thread Erik Logtenberg
Hi, Two days ago I added a new OSD to one of my Ceph machines, because one of the existing OSDs got rather full. There was quite a difference in disk space usage between OSDs, but I understand this is just how Ceph works: it spreads data over OSDs, but not perfectly evenly. Now check out
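
To quantify the imbalance after adding the OSD, ceph osd df (available from Hammer on) reports per-OSD utilisation, and ceph osd reweight-by-utilization or a manual ceph osd reweight can push PGs off the fullest OSDs. A read-only sketch, assuming a Hammer-or-later cluster:

    # osd_usage.py -- sketch: show the utilisation spread across OSDs.
    import json
    import subprocess

    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    report = json.loads(out)

    # Each entry in "nodes" carries the OSD id and its utilisation percentage.
    utils = [(n["id"], n["utilization"]) for n in report["nodes"]]
    utils.sort(key=lambda x: x[1])

    print("least full: osd.%d at %.1f%%" % utils[0])
    print("most full:  osd.%d at %.1f%%" % utils[-1])

    # If the spread is too wide, 'ceph osd reweight-by-utilization [threshold]'
    # or a manual 'ceph osd reweight osd.N 0.x' will move PGs off the full OSDs.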