Re: [ceph-users] details about cloning objects using librados

2019-07-01 Thread nokia ceph
Hi Brad, thank you for your response; we will check this video as well. Our requirement is that, while writing an object into the cluster, if we can provide the number of copies to be made, the network consumption between client and cluster will be only for one object write. However, the cluster

[ceph-users] increase pg_num error

2019-07-01 Thread Sylvain PORTIER
Hi all, I am using Ceph 14.2.1 (Nautilus). I am unable to increase the pg_num of a pool. I have a pool named Backup whose current pg_num is 64: ceph osd pool get Backup pg_num => result pg_num: 64. And when I try to increase it using the command ceph osd pool set Backup pg_num 512 => result

Re: [ceph-users] increase pg_num error

2019-07-01 Thread Nathan Fish
I ran into this recently. Try running "ceph osd require-osd-release nautilus". This drops backwards compat with pre-nautilus and allows changing settings. On Mon, Jul 1, 2019 at 4:24 AM Sylvain PORTIER wrote: > > Hi all, > > I am using ceph 14.2.1 (Nautilus) > > I am unable to increase the
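A minimal sketch of the sequence suggested here, using the pool name from the original question; note that require-osd-release permanently drops pre-Nautilus compatibility, so it should only be run once every OSD in the cluster is on Nautilus:

    # drop pre-Nautilus compatibility so the new-style settings can be changed
    ceph osd require-osd-release nautilus
    # retry the split; on Nautilus, pgp_num follows pg_num automatically
    ceph osd pool set Backup pg_num 512
    # confirm the new value
    ceph osd pool get Backup pg_num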

Re: [ceph-users] Cannot delete bucket

2019-07-01 Thread J. Eric Ivancich
> On Jun 27, 2019, at 4:53 PM, David Turner wrote: > > I'm still going at 452M incomplete uploads. There are guides online for > manually deleting buckets kinda at the RADOS level that tend to leave data > stranded. That doesn't work for what I'm trying to do so I'll keep going with > this
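For reference, a hedged sketch of the usual radosgw-admin approach rather than deleting at the RADOS level; the bucket name is a placeholder, and --purge-objects can take a very long time on a bucket with hundreds of millions of incomplete uploads:

    # delete the bucket and everything in it, including incomplete multipart pieces
    radosgw-admin bucket rm --bucket=BUCKET_NAME --purge-objects
    # optionally inspect the bucket index for leftovers first
    radosgw-admin bucket check --bucket=BUCKET_NAME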

Re: [ceph-users] increase pg_num error

2019-07-01 Thread Robert LeBlanc
I believe he needs to increase the pgp_num first, then pg_num. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Mon, Jul 1, 2019 at 7:21 AM Nathan Fish wrote: > I ran into this recently. Try running "ceph osd require-osd-release > nautilus".

Re: [ceph-users] details about cloning objects using librados

2019-07-01 Thread Brett Chancellor
Ceph already does this by default. For each replicated pool, you can set the 'size' which is the number of copies you want Ceph to maintain. The accepted norm for replicas is 3, but you can set it higher if you want to incur the performance penalty. On Mon, Jul 1, 2019, 6:01 AM nokia ceph wrote:
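A short sketch of the setting being described, with a placeholder pool name; size controls how many replicas Ceph maintains, and min_size how many must be available for I/O:

    # check the current replica count for a pool
    ceph osd pool get POOL_NAME size
    # keep three copies of every object (the common default)
    ceph osd pool set POOL_NAME size 3
    # allow I/O as long as at least two copies are available
    ceph osd pool set POOL_NAME min_size 2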

Re: [ceph-users] How does monitor know OSD is dead?

2019-07-01 Thread Gregory Farnum
On Sat, Jun 29, 2019 at 8:13 PM Bryan Henderson wrote: > > > I'm not sure why the monitor did not mark it _out_ after 600 seconds > > (default) > > Well, that part I understand. The monitor didn't mark the OSD out because the > monitor still considered the OSD up. No reason to mark an up OSD
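The 600-second default mentioned above is mon_osd_down_out_interval; a sketch of inspecting the related options on a Nautilus cluster (whether config get reports compiled-in defaults for options that were never overridden depends on the release):

    # how long a down OSD stays "in" before the monitors mark it out (default 600s)
    ceph config get mon mon_osd_down_out_interval
    # grace period before the monitors declare an unreporting OSD down
    ceph config get mon mon_osd_report_timeout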

Re: [ceph-users] increase pg_num error

2019-07-01 Thread Brett Chancellor
In Nautilus just pg_num is sufficient for both increases and decreases. On Mon, Jul 1, 2019 at 10:55 AM Robert LeBlanc wrote: > I believe he needs to increase the pgp_num first, then pg_num. > > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

[ceph-users] ceph-ansible with docker

2019-07-01 Thread Robert LeBlanc
I need some help getting up the learning curve and hope someone can get me on the right track. I need to set up a new cluster, but want the mon, mgr and rgw services as containers on the non-container OSD nodes. It seems that doing no containers or all containers is fairly easy, but I'm trying to

Re: [ceph-users] increase pg_num error

2019-07-01 Thread Robert LeBlanc
On Mon, Jul 1, 2019 at 11:57 AM Brett Chancellor wrote: > In Nautilus just pg_num is sufficient for both increases and decreases. > > Good to know, I haven't gotten to Nautilus yet. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

Re: [ceph-users] Migrating a cephfs data pool

2019-07-01 Thread Gregory Farnum
On Fri, Jun 28, 2019 at 5:41 PM Jorge Garcia wrote: > > Ok, actually, the problem was somebody writing to the filesystem. So I moved > their files and got to 0 objects. But then I tried to remove the original > data pool and got an error: > > # ceph fs rm_data_pool cephfs cephfs-data >
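The actual error is cut off above; for context, a sketch of checking which data pools the filesystem still references before removing one. One known restriction is that a filesystem's original (default) data pool cannot be removed, though whether that is the error hit here is not shown:

    # list filesystems with their metadata and data pools
    ceph fs ls
    # detach a data pool from the filesystem (fails for the default/initial data pool)
    ceph fs rm_data_pool cephfs cephfs-data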

[ceph-users] PGs allocated to osd with weights 0

2019-07-01 Thread Yanko Davila
Hello, I can't get data flushed out of OSDs with weights set to 0. Is there any way of checking the tasks queued for PG remapping? Thank you. Yanko.
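Not an answer from the thread, just a sketch of commands commonly used to watch this kind of drain:

    # per-OSD view of weight, reweight, and how many PGs still map to each OSD
    ceph osd df tree
    # list PGs that are currently remapped (i.e. queued to move off their old OSDs)
    ceph pg ls remapped
    # overall recovery / backfill progress
    ceph status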

Re: [ceph-users] How does monitor know OSD is dead?

2019-07-01 Thread Bryan Henderson
> Normally in the case of a restart then somebody who used to have a > connection to the OSD would still be running and flag it as dead. But > if *all* the daemons in the cluster lose their soft state, that can't > happen. OK, thanks. I guess that explains it. But that's a pretty serious design

[ceph-users] ceph-osd not starting after network related issues

2019-07-01 Thread Ian Coetzee
Hi Guys, This is a cross-post from the Proxmox ML. This morning I had a bit of a big boo-boo on our production system. After a very sudden network outage somewhere during the night, one of my ceph-osds is no longer starting up. If I try to start it manually, I get a very spectacular
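The actual error is truncated above; as general context, a sketch of pulling the failing daemon's log on a systemd-based install (the OSD id is a placeholder):

    # show the most recent log output for the failing OSD
    journalctl -u ceph-osd@<ID> -e --no-pager
    # try starting it again and check the unit status
    systemctl start ceph-osd@<ID>
    systemctl status ceph-osd@<ID>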