[ceph-users] What is the best way to "move" rgw.buckets.data pool to another cluster?

2019-06-28 Thread Fulvio Galeazzi
Hallo! Due to severe maintenance which is going to cause a prolonged shutdown, I need to move my RGW pools to a different cluster (and geographical site): my problem is with default.rgw.buckets.data pool, which is now 100 TB. Moreover, I'd also like to take advantage of the move to convert

Re: [ceph-users] MGR Logs after Failure Testing

2019-06-28 Thread Eugen Block
You may want to configure your standby MDSs to be "standby-replay" so the MDS taking over from a failed one needs less time for the takeover. To manage this you add something like this to your ceph.conf: ---snip--- [mds.server1] mds_standby_replay = true mds_standby_for_rank = 0
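
A minimal sketch of the ceph.conf stanza being described (the daemon name and rank are placeholders; these are the pre-Nautilus options, newer releases use "ceph fs set <fs> allow_standby_replay true" instead):
---snip---
[mds.server1]
# follow rank 0 and continuously replay its journal,
# so takeover from a failed active MDS is faster
mds_standby_replay = true
mds_standby_for_rank = 0
---snip---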

Re: [ceph-users] What does the differences in osd benchmarks mean?

2019-06-28 Thread Lars Täuber
Hi Nathan, yes the osd hosts are dual-socket machines. But does this make such a difference? osd.0: bench: wrote 1 GiB in blocks of 4 MiB in 15.0133 sec at 68 MiB/sec 17 IOPS osd.1: bench: wrote 1 GiB in blocks of 4 MiB in 6.98357 sec at 147 MiB/sec 36 IOPS Doubling the IOPS? Thanks, Lars
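
For reference, numbers like the above come from the OSD bench command; a sketch, using the OSD IDs from the post:
  ceph tell osd.0 bench                      # default: write 1 GiB in 4 MiB blocks
  ceph tell osd.1 bench
  ceph tell osd.0 bench 1073741824 4194304   # total bytes and block size made explicit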

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

Re: [ceph-users] What is the best way to "move" rgw.buckets.data pool to another cluster?

2019-06-28 Thread Fulvio Galeazzi
Hallo again, to reply to my own message... I guess the easiest will be to set up multisite replication. So now I will fight a bit with this and get back to the list in case of trouble. Sorry for the noise... Fulvio On 06/28/2019 10:36 AM, Fulvio Galeazzi wrote:
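
A rough sketch of the multisite setup being considered (realm, zonegroup, zone names and endpoints are placeholders, not taken from the thread):
  # on the existing cluster (master zone)
  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup create --rgw-zonegroup=mygroup --endpoints=http://rgw-old:8080 --master --default
  radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=old-zone --endpoints=http://rgw-old:8080 --master --default
  radosgw-admin period update --commit
  # on the new cluster: pull the realm and create a secondary zone
  radosgw-admin realm pull --url=http://rgw-old:8080 --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET
  radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=new-zone --endpoints=http://rgw-new:8080 --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET
  radosgw-admin period update --commit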

Re: [ceph-users] osd-mon failed with "failed to write to db"

2019-06-28 Thread Paul Emmerich
Did you run the cluster with only a single monitor? Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Thu, Jun 27, 2019 at 4:32 PM Anton Aleksandrov wrote: >

[ceph-users] troubleshooting space usage

2019-06-28 Thread Andrei Mikhailovsky
Hi, could someone please explain/show how to troubleshoot the space usage in Ceph and how to reclaim the unused space? I have a small cluster with 40 OSDs, replica of 2, mainly used as a backend for cloud stack as well as the S3 gateway. The used space doesn't make any sense to me,
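
A few commands that are the usual starting point for this kind of investigation (a sketch; the relevant pools will differ per cluster):
  ceph df detail            # per-pool usage and object counts
  rados df                  # per-pool stats as seen by RADOS
  ceph osd df tree          # per-OSD utilisation and variance
  radosgw-admin gc list     # S3 objects still pending garbage collection
  radosgw-admin gc process  # force a garbage-collection pass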

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Alfredo Deza
On Fri, Jun 28, 2019 at 7:53 AM Stolte, Felix wrote: > > Thanks for the update Alfredo. What steps need to be done to rename my > cluster back to "ceph"? That is a tough one, the ramifications of a custom cluster name are wild - it touches everything. I am not sure there is a step-by-step guide

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Patrick Hein
Afaik the MDS doesn't delete the objects immediately but defers it for later. If you check that again now, how many objects does it report? Jorge Garcia wrote on Fri, Jun 28, 2019, 23:16: > > On 6/28/19 9:02 AM, Marc Roos wrote: > > 3. When everything is copied-removed, you should end up with an
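
A sketch of how one might watch that deferred deletion make progress (the MDS name is a placeholder; the purge_queue counters assume Luminous or later):
  ceph df detail                                  # the old data pool's object count should keep dropping
  ceph daemon mds.<name> perf dump purge_queue    # purge queue activity, e.g. pq_executed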

Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread Matt Benjamin
Hi Dominic, The reason is likely that RGW doesn't yet support ListObjectsV2. Support is nearly here though: https://github.com/ceph/ceph/pull/28102 Matt On Fri, Jun 28, 2019 at 12:43 PM wrote: > > All; > > I've got a RADOSGW instance setup, backed by my demonstration Ceph cluster. > I'm
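
Until that backport arrives, a common workaround is to fall back to the V1 listing call; a sketch with the AWS CLI (bucket name and endpoint are placeholders):
  # V2 listing (what ListObjectsV2Request uses) -- not honoured by RGW releases without the patch
  aws s3api list-objects-v2 --bucket demo-bucket --max-keys 100 --endpoint-url http://rgw-host:8080
  # V1 listing with marker-based pagination works against older RGW
  aws s3api list-objects --bucket demo-bucket --max-keys 100 --marker last-key-seen --endpoint-url http://rgw-host:8080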

Re: [ceph-users] How does monitor know OSD is dead?

2019-06-28 Thread solarflow99
The thing I've seen a lot is where an OSD would get marked down because of a failed drive, then it would add itself right back again On Fri, Jun 28, 2019 at 9:12 AM Robert LeBlanc wrote: > I'm not sure why the monitor did not mark it down after 600 seconds > (default). The reason it is so

Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread DHilsbos
Matt; Yep, that would certainly explain it. My apologies, I almost searched for that information before sending the email. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. dhils...@performair.com www.PerformAir.com -Original

Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread Matt Benjamin
FYI, this PR just merged. I would expect to see backports at least as far as N, and others would be possible. regards, Matt On Fri, Jun 28, 2019 at 3:43 PM wrote: > > Matt; > > Yep, that would certainly explain it. > > My apologies, I almost searched for that information before sending the

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
This was after a while (I did notice that the number of objects went higher before it went lower). It is actually reporting more objects now. I'm not sure if some co-worker or program is writing to the filesystem... It got to these numbers and hasn't changed for the past couple hours. # ceph

[ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread DHilsbos
All; I've got a RADOSGW instance set up, backed by my demonstration Ceph cluster. I'm using Amazon's S3 SDK, and I've run into an annoying little snag. My code looks like this: amazonS3 = builder.build(); ListObjectsV2Request req = new

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
On 6/28/19 9:02 AM, Marc Roos wrote: 3. When everything is copied-removed, you should end up with an empty datapool with zero objects. I copied the data to a new directory and then removed the data from the old directory, but df still reports some objects in the old pool (not zero). Is

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Robert LeBlanc
Yes, 'mv' on the client is just a metadata operation and not what I'm talking about. The idea is to bring the old pool in as a cache layer, then bring the new pool in as a lower layer, then flush/evict the data from the cache and Ceph will move the data to the new pool, but still be able to access
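
For what it's worth, the cache-tier experiment described above would look roughly like this (pool names are placeholders, and whether CephFS correctly follows the data afterwards is exactly the open question):
  ceph osd tier add cephfs-data-new cephfs-data-old        # old pool becomes the cache tier
  ceph osd tier cache-mode cephfs-data-old forward --yes-i-really-mean-it
  ceph osd tier set-overlay cephfs-data-new cephfs-data-old
  rados -p cephfs-data-old cache-flush-evict-all           # push objects down to the new pool
  ceph osd tier remove-overlay cephfs-data-new
  ceph osd tier remove cephfs-data-new cephfs-data-old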

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
Ok, actually, the problem was somebody writing to the filesystem. So I moved their files and got to 0 objects. But then I tried to remove the original data pool and got an error:   # ceph fs rm_data_pool cephfs cephfs-data   Error EINVAL: cannot remove default data pool So it seems I will

[ceph-users] could not find secret_id--auth to unkown host

2019-06-28 Thread lin zhou
Hi cephers, recently I found auth error logs on most of my OSDs but not all, except some nodes I rebooted after the installation. It looks like the OSD authenticates to 10.108.87.250:0 first and then to the correct mon.a. 10.108.87.250 is now my radosgw node; maybe I used it as a mon in my first
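
A sketch of the checks that usually clear this up (the address is the one from the post):
  ceph mon dump                       # confirm which addresses are actual monitors
  grep mon_host /etc/ceph/ceph.conf   # on the affected OSD nodes; if the stale
                                      # 10.108.87.250 entry is still listed, remove
                                      # it and restart those OSDs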

[ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
This seems to be an issue that gets brought up repeatedly, but I haven't seen a definitive answer yet. So, at the risk of repeating a question that has already been asked: How do you migrate a cephfs data pool to a new data pool? The obvious case would be somebody that has set up a replicated

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Marc Roos
What about adding the new data pool, mounting it and then moving the files? (read: copy, because a move between data pools does not do what you expect it to) -Original Message- From: Jorge Garcia [mailto:jgar...@soe.ucsc.edu] Sent: Friday, June 28, 2019 17:26 To: ceph-users Subject:

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Erik McCormick
On Fri, Jun 28, 2019, 10:05 AM Alfredo Deza wrote: > On Fri, Jun 28, 2019 at 7:53 AM Stolte, Felix > wrote: > > > > Thanks for the update Alfredo. What steps need to be done to rename my > cluster back to "ceph"? > > That is a tough one, the ramifications of a custom cluster name are > wild -

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Jorge Garcia
Are you talking about adding the new data pool to the current filesystem? Like:   $ ceph fs add_data_pool my_ceph_fs new_ec_pool I have done that, and now the filesystem shows up as having two data pools:   $ ceph fs ls   name: my_ceph_fs, metadata pool: cephfs_meta, data pools: [cephfs_data

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Marc Roos
1. Change the data pool for a folder on the file system: setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername 2. cp /oldlocation /foldername Remember that you would preferably use mv, but this leaves (meta)data on the old pool, which is not what you want when you want to delete that
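
Spelled out a little more, that procedure might look like this (filesystem, pool and folder names follow the examples in the thread; the mount point /mnt/cephfs is an assumption):
  ceph fs add_data_pool cephfs fs_data.ec21                   # make the new pool usable by the fs
  setfattr -n ceph.dir.layout.pool -v fs_data.ec21 /mnt/cephfs/foldername
  getfattr -n ceph.dir.layout /mnt/cephfs/foldername          # verify the layout took effect
  cp -a /mnt/cephfs/oldlocation/. /mnt/cephfs/foldername/     # new files land in the new pool
  rm -rf /mnt/cephfs/oldlocation                              # old objects are purged asynchronously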

Re: [ceph-users] How does monitor know OSD is dead?

2019-06-28 Thread Robert LeBlanc
I'm not sure why the monitor did not mark it down after 600 seconds (default). The reason it is so long is that you don't want to move data around unnecessarily if the osd is just being rebooted/restarted. Usually, you will still have min_size OSDs available for all PGs that will allow IO to
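
The knobs involved, for reference (values shown are the usual defaults; a sketch rather than a recommendation):
---snip---
[global]
osd_heartbeat_grace = 20          # seconds without a heartbeat before peers report an OSD down
mon_osd_min_down_reporters = 2    # reporters the mon wants before marking an OSD down
mon_osd_down_out_interval = 600   # seconds an OSD stays down before it is marked out
---snip---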

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Robert LeBlanc
Given that the MDS knows everything, it seems trivial to add a ceph 'mv' command to do this. I looked at using tiering to try and do the move, but I don't know how to tell cephfs that the data is now on the new pool instead of the old pool name. Since we can't take a long enough downtime to move

Re: [ceph-users] Migrating a cephfs data pool

2019-06-28 Thread Marc Roos
Afaik the mv is now fast because it is not moving any real data, just some metadata. Thus a real mv would be slow (only in the case of moving between different pools) because it copies the data to the new pool and, when successful, deletes the old one. This will of course take a lot more time, but you