[ceph-users] radosgw backup

2015-05-29 Thread Konstantin Ivanov
Hi everyone. I'm wondering - is there a way to back up radosgw data? What I have already tried: create a backup pool and copy .rgw.buckets to the backup pool. Then I delete an object via the S3 client, and then I copy the data from the backup pool back to .rgw.buckets. I still can't see the object in the S3 client, but I can get it via HTTP
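
A minimal sketch of the pool-copy approach described above, assuming the pre-Jewel default data pool name .rgw.buckets and an illustrative backup pool name. Note that copying the data pool back does not update the bucket index (kept in a separate pool, typically .rgw.buckets.index), which would explain why the restored object is invisible to S3 listings yet still retrievable over plain HTTP:

    # copy the RGW data pool into a backup pool (backup name is illustrative)
    rados mkpool .rgw.buckets.backup
    rados cppool .rgw.buckets .rgw.buckets.backup

    # restore after an accidental delete by copying the data back
    rados cppool .rgw.buckets.backup .rgw.buckets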

Re: [ceph-users] mds crash

2015-05-29 Thread Peter Tiernan
Thank you for your reply. I had read the 'mds crashing' thread and I don't think I'm seeing that bug (http://tracker.ceph.com/issues/10449). I have enabled debug objecter = 10 and here is the full log from starting the mds: http://pastebin.com/dbk0uLYy Here is the last part of the log: -35
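
For reference, a hedged sketch of enabling that debug level, either live through the admin socket or injected over the network (the MDS name 'a' is illustrative):

    # via the local admin socket on the MDS host
    ceph daemon mds.a config set debug_objecter 10

    # or injected remotely
    ceph tell mds.a injectargs '--debug-objecter 10'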

Re: [ceph-users] mds crash

2015-05-29 Thread Peter Tiernan
Hi, that appears to have worked. The MDS are now stable and I can read and write correctly. Thanks for the help and have a good day. On 29/05/15 12:25, John Spray wrote: On 29/05/2015 11:41, Peter Tiernan wrote: ok, thanks. I wasn't aware of this. Should this command fix everything or is

Re: [ceph-users] mds crash

2015-05-29 Thread John Spray
On 29/05/2015 09:46, Peter Tiernan wrote: -16 2015-05-29 09:28:23.106541 7f78c53a9700 10 mds.0.objecter in handle_osd_op_reply -15 2015-05-29 09:28:23.106543 7f78c53a9700 7 mds.0.objecter handle_osd_op_reply 28 ondisk v 0'0 uv 0 in 11.5ce99960 attempt 1 -14 2015-05-29

Re: [ceph-users] mds crash

2015-05-29 Thread Peter Tiernan
OK, thanks. I wasn't aware of this. Should this command fix everything, or do I need to delete cephfs and the pools and start again: ceph osd tier cache-mode CachePool writeback On 29/05/15 11:37, John Spray wrote: On 29/05/2015 11:34, Peter Tiernan wrote: ok, that's interesting. I had issues
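
For clarity, the command under discussion and a quick way to confirm it took effect (pool names as used in the thread):

    # put the existing cache tier into writeback mode
    ceph osd tier cache-mode CachePool writeback

    # confirm the tiering settings on the pools
    ceph osd dump | grep -E 'ECpool|CachePool'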

Re: [ceph-users] mds crash

2015-05-29 Thread John Spray
On 29/05/2015 11:34, Peter Tiernan wrote: OK, that's interesting. I had issues before this crash where files were being garbled. I followed what I thought was the correct procedure for an erasure-coded pool with a cache tier: ceph osd pool create ECpool 800 800 erasure default ceph osd pool

Re: [ceph-users] mds crash

2015-05-29 Thread Peter Tiernan
OK, that's interesting. I had issues before this crash where files were being garbled. I followed what I thought was the correct procedure for an erasure-coded pool with a cache tier: ceph osd pool create ECpool 800 800 erasure default ceph osd pool create CachePool 4096 4096 ceph osd tier add
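
The preview cuts off mid-procedure; below is a sketch of a full erasure-coded pool plus writeback cache tier setup of this kind, where the steps beyond what is quoted (overlay and hit_set configuration) are hedged guesses at what the complete procedure normally includes:

    # base EC pool and cache pool, sized as quoted in the thread
    ceph osd pool create ECpool 800 800 erasure default
    ceph osd pool create CachePool 4096 4096

    # attach the cache pool in front of the EC pool
    ceph osd tier add ECpool CachePool
    ceph osd tier cache-mode CachePool writeback
    ceph osd tier set-overlay ECpool CachePool

    # a writeback tier normally also needs hit set tracking
    ceph osd pool set CachePool hit_set_type bloom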

Re: [ceph-users] NFS interaction with RBD

2015-05-29 Thread Georgios Dimitrakakis
All, I've tried to recreate the issue without success! My configuration is the following: OS (Hypervisor + VM): CentOS 6.6 (2.6.32-504.1.3.el6.x86_64); QEMU: qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64; Ceph: ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047), 20x4TB OSDs equally

Re: [ceph-users] Discuss: New default recovery config settings

2015-05-29 Thread Milosz Tanski
On Fri, May 29, 2015 at 5:47 PM, Samuel Just sj...@redhat.com wrote: Many people have reported that they need to lower the OSD recovery config options to minimize the impact of recovery on client IO. We are talking about changing the defaults as follows: osd_max_backfills to 1 (from 10)

Re: [ceph-users] Discuss: New default recovery config settings

2015-05-29 Thread Josef Johansson
Hi, we did it the other way around instead, defining a period where the load is lighter and turning backfill/recovery off and on around it. Then you want the backfill values to be what the defaults are right now. Also, someone said (I think it was Greg?) that if you have problems with backfill, your cluster backing
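
A hedged sketch of that off-peak approach: leave the backfill/recovery values at their (current) defaults and simply pause recovery during busy hours, e.g. from two cron jobs (the flags below are standard; the schedule is up to the operator):

    # start of the busy period: pause backfill and recovery
    ceph osd set nobackfill
    ceph osd set norecover

    # start of the quiet window: let the cluster catch up
    ceph osd unset nobackfill
    ceph osd unset norecover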

Re: [ceph-users] Hammer 0.94.1 - install-deps.sh script error

2015-05-29 Thread Loic Dachary
Hi, On 28/05/2015 05:13, Dyweni - Ceph-Users wrote: Hi Guys, Running the install-deps.sh script on Debian Squeeze results in the package 'cryptsetup-bin' not being found (and 'cryptsetup' not being used). This is due to the pipe character being deleted. To fix this, I replaced this
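
The script itself is not quoted here, so the following is only an illustration of the failure mode described, a '|' being stripped from a Debian alternative dependency, not the actual install-deps.sh code:

    # Debian control syntax: either package can satisfy the dependency
    #   cryptsetup-bin | cryptsetup
    # illustration of the reported breakage: a cleanup step dropping the pipe
    echo "cryptsetup-bin | cryptsetup" | tr -d '|'
    # without the '|' the fallback to 'cryptsetup' is lost, and on Squeeze
    # 'cryptsetup-bin' by itself is not available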

Re: [ceph-users] Discuss: New default recovery config settings

2015-05-29 Thread Stillwell, Bryan
I like the idea of turning the defaults down. During the Ceph operators session at the OpenStack conference last week, Warren described the behavior pretty accurately: Ceph basically DOSes itself unless you reduce those settings. Maybe this is more of a problem when the clusters are small?

Re: [ceph-users] Discuss: New default recovery config settings

2015-05-29 Thread Gregory Farnum
On Fri, May 29, 2015 at 2:47 PM, Samuel Just sj...@redhat.com wrote: Many people have reported that they need to lower the OSD recovery config options to minimize the impact of recovery on client IO. We are talking about changing the defaults as follows: osd_max_backfills to 1 (from 10)

Re: [ceph-users] Discuss: New default recovery config settings

2015-05-29 Thread Somnath Roy
Sam, we are seeing some good client IO results during recovery by using the following values: osd recovery max active = 1, osd max backfills = 1, osd recovery threads = 1, osd recovery op priority = 1. It is all flash, though. The recovery time in the case of an entire node (~120 TB) failure/a single
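
As a sketch, those values applied at runtime to every OSD (the same settings can of course be made persistent under [osd] in ceph.conf):

    # apply the quoted recovery/backfill values to all running OSDs
    ceph tell osd.* injectargs '--osd-recovery-max-active 1 --osd-max-backfills 1 --osd-recovery-threads 1 --osd-recovery-op-priority 1'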

[ceph-users] newstore configuration

2015-05-29 Thread Srikanth Madugundi
Hi, I have set up a cluster with newstore functionality and see that files smaller than 100KB are stored in the DB and files larger than 100KB are stored in the fragments directory. Is there a way to change this threshold value in ceph.conf? Regards, Srikanth

Re: [ceph-users] Hammer 0.94.1 - install-deps.sh script error

2015-05-29 Thread Dyweni - Ceph-Users
Looks good to me. Dyweni On 2015-05-29 17:08, Loic Dachary wrote: Hi, On 28/05/2015 05:13, Dyweni - Ceph-Users wrote: Hi Guys, Running the install-deps.sh script on Debian Squeeze results in the package 'cryptsetup-bin' not being found (and 'cryptsetup' not being used). This is due to

Re: [ceph-users] NFS interaction with RBD

2015-05-29 Thread John-Paul Robinson
In the end this came down to one slow OSD. There were no hardware issues, so I have to just assume something gummed up during rebalancing and peering. I restarted the OSD process after setting the cluster to noout. After the OSD was restarted, the rebalance completed and the cluster returned to
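
A sketch of that sequence, assuming a sysvinit-managed OSD as on the CentOS 6 systems mentioned earlier in the thread (the OSD id is illustrative):

    # keep the cluster from marking OSDs out while one is bounced
    ceph osd set noout

    # restart the slow OSD (id 12 is illustrative)
    /etc/init.d/ceph restart osd.12

    # once recovery settles and the cluster is healthy again
    ceph osd unset noout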

[ceph-users] Discuss: New default recovery config settings

2015-05-29 Thread Samuel Just
Many people have reported that they need to lower the OSD recovery config options to minimize the impact of recovery on client IO. We are talking about changing the defaults as follows: osd_max_backfills to 1 (from 10), osd_recovery_max_active to 3 (from 15), osd_recovery_op_priority to 1 (from
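
For anyone wanting to compare, a sketch of how to see what a running OSD currently uses and how to try the proposed values at runtime before committing them to ceph.conf (the OSD id is illustrative):

    # inspect the current settings on one OSD via its admin socket
    ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active|osd_recovery_op_priority'

    # try the proposed defaults cluster-wide at runtime
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 3 --osd-recovery-op-priority 1'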