[ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-03 Thread Matthew Pounsett
I'm in the process of updating some development VMs that use ceph-fs. It looks like recent updates to ceph have deprecated the 'ceph-deploy osd prepare' and 'activate' commands in favour of the previously-optional 'create' command. We're using filestore OSDs on these VMs, but I can't seem to
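
For reference, a minimal sketch of the newer single-step command for a filestore OSD, assuming ceph-deploy 2.x and placeholder device/host names:

$ ceph-deploy osd create --filestore \
      --data /dev/sdb1 --journal /dev/sdc1 osd-node1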

[ceph-users] Multi tenanted radosgw and existing Keystone users/tenants

2018-12-03 Thread Mark Kirkwood
Hi, I've set up a Luminous RGW with Keystone integration, and subsequently set rgw keystone implicit tenants = true. So now all newly created users/tenants (or old ones that never accessed RGW) get their own namespaces. However, there are some pre-existing users that have created buckets and
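
For context, a sketch of the relevant ceph.conf fragment; the RGW instance name and Keystone endpoint below are placeholders:

[client.rgw.gateway1]
rgw keystone url = http://keystone.example.com:5000
rgw keystone api version = 3
rgw keystone implicit tenants = true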

Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Jan Kasprzak
Paul Emmerich wrote: : Upgrading Ceph packages does not restart the services -- exactly for : this reason. : : This means there's something broken with your yum setup if the : services are restarted when only installing the new version. Interesting. I have verified that I have

[ceph-users] HDD spindown problem

2018-12-03 Thread Florian Engelmann
Hello, we have been fighting an HDD spin-down problem on our production Ceph cluster for two weeks now. The problem is not Ceph-related, but I guess this topic is interesting to the list and, to be honest, I hope to find a solution here. We use 6 OSD nodes like: OS: Suse 12 SP3 Ceph: SES 5.5
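
One generic way to inspect and disable drive power management from the OS side, assuming the disks honour ATA APM/standby settings (device name is a placeholder; this is not SES-specific):

$ hdparm -B /dev/sdX           # show current APM level
$ hdparm -B 254 -S 0 /dev/sdX  # APM to max performance, standby timer off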

[ceph-users] Proxmox 4.4, Ceph hammer, OSD cache link...

2018-12-03 Thread Marco Gaiarin
I've recently added a host to my Ceph cluster, using the Proxmox 'helpers' to add an OSD, e.g.: pveceph createosd /dev/sdb -journal_dev /dev/sda5 and now I have: root@blackpanther:~# ls -la /var/lib/ceph/osd/ceph-12 totale 60 drwxr-xr-x 3 root root 199 nov 21 17:02 . drwxr-xr-x 6 root
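
A quick check of where the journal symlink in that OSD directory points, assuming the default filestore layout and the OSD id from the listing above:

$ ls -l /var/lib/ceph/osd/ceph-12/journal
$ readlink -f /var/lib/ceph/osd/ceph-12/journal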

[ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Jan Kasprzak
Hello, ceph users, I have a small(-ish) Ceph cluster, where there are osds on each host, and in addition to that, there are mons on the first three hosts. Is it possible to upgrade the cluster to Luminous without service interruption? I have tested that when I run "yum --enablerepo Ceph
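
The usual rolling-upgrade pattern, sketched here with the repo name from the quote and placeholder systemd targets, is to set noout and restart daemons one host at a time (mons first, then OSDs):

$ ceph osd set noout
$ yum --enablerepo=Ceph update ceph   # on each host in turn
$ systemctl restart ceph-mon.target   # one mon host at a time
$ systemctl restart ceph-osd.target   # one OSD host at a time
$ ceph osd unset noout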

Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Paul Emmerich
Upgrading Ceph packages does not restart the services -- exactly for this reason. This means there's something broken with your yum setup if the services are restarted when only installing the new version. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at

Re: [ceph-users] CEPH DR RBD Mount

2018-12-03 Thread Jason Dillaman
FYI -- that "entries_behind_master=175226727" bit is telling you that it has only mirrored about 80% of the recent changes from primary to non-primary. Was the filesystem already in place? Are there any partitions/LVM volumes in use on the device? Did you map the volume read-only? On Tue, Nov 27,
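
The replay lag can be watched on the non-primary site with the mirroring status command (pool/image names are placeholders); entries_behind_master in the journal description should trend towards 0:

$ rbd mirror image status mypool/myimage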

Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Oliver Freyermuth
There's also an additional issue which made us activate CEPH_AUTO_RESTART_ON_UPGRADE=yes (and of course, not have automatic updates of Ceph): When using compression e.g. with Snappy, it seems that already running OSDs which try to dlopen() the snappy library for some version upgrades become
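
On RPM-based systems this is controlled by a sysconfig flag, roughly as follows (path may vary by distribution):

# /etc/sysconfig/ceph
CEPH_AUTO_RESTART_ON_UPGRADE=yes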

Re: [ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Wido den Hollander
Hi, How old is this cluster? This might be a CRUSH tunables issue. You can try (this might move a lot of data!) $ ceph osd getcrushmap -o crushmap.backup $ ceph osd crush tunables optimal If things go wrong you always have the old CRUSH map: $ ceph osd setcrushmap -i
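
Spelled out, the suggested sequence (using the backup file named above) is:

$ ceph osd getcrushmap -o crushmap.backup
$ ceph osd crush tunables optimal          # may trigger significant data movement
$ ceph osd setcrushmap -i crushmap.backup  # roll back if needed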

Re: [ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Athanasios Panterlis
Hi Wido, Yep, it's quite old, from 2016. It's from a decommissioned cluster that we just keep in a healthy state without much update effort. I had planned to do a cleanup of unwanted disks, snapshots, etc., do a few re-weights, update it to the latest stable (just as you correctly mentioned) and

Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Dan van der Ster
It's not that simple, see http://tracker.ceph.com/issues/21672 For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was updated -- so the RPMs restart ceph.target. What's worse is that this seems to happen before all the newly updated files are in place. Our 12.2.8 to 12.2.10
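
One way to check whether a given build's package scriptlets touch ceph.target before upgrading, assuming the ceph-selinux package is the trigger:

$ rpm -q --scripts ceph-selinux | grep -n 'ceph.target'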

Re: [ceph-users] Disable automatic creation of rgw pools?

2018-12-03 Thread Martin Emrich
Could this include the "Zabbix" module in ceph-mgr? Cheers, Martin On 30.11.18, 17:26, "Paul Emmerich" wrote: radosgw-admin likes to create these pools, some monitoring tool might be trying to use it? Paul -- Paul Emmerich Looking for help with
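
To see whether the module is enabled and how it is pointed at Zabbix, the Luminous mgr module can be inspected roughly like this (exact subcommands depend on the release):

$ ceph mgr module ls
$ ceph zabbix config-show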

[ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Athanasios Panterlis
Hi all, I am managing a typical small Ceph cluster that consists of 4 nodes, each with 7 OSDs (some in an hdd pool, some in an ssd pool). The cluster was healthy, but following some space issues due to bad PG management by Ceph, I tried some reweights on specific OSDs. Unfortunately the
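
For reference, the two reweight knobs and a way to list stuck PGs, with a placeholder OSD id and weights:

$ ceph osd reweight 3 0.95           # temporary override weight (0..1)
$ ceph osd crush reweight osd.3 1.6  # permanent CRUSH weight
$ ceph pg dump_stuck unclean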

Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Jan Kasprzak
Dan van der Ster wrote: : It's not that simple see http://tracker.ceph.com/issues/21672 : : For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was : updated -- so the rpms restart the ceph.target. : What's worse is that this seems to happen before all the new updated : files are in

Re: [ceph-users] Decommissioning cluster - rebalance questions

2018-12-03 Thread Paul Emmerich
There's unfortunately a difference between an OSD with weight 0 and removing one item (OSD) from the CRUSH bucket :( If you want to remove the whole cluster completely anyway: either keep it as down+out in the CRUSH map, i.e., just skip the last step. Or just purge the OSD without setting it to
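
A sketch of the two removal paths for a placeholder osd.12 (purge exists from Luminous on; the three-step variant is the pre-Luminous equivalent):

$ ceph osd purge 12 --yes-i-really-mean-it
# or, on older releases:
$ ceph osd crush remove osd.12
$ ceph auth del osd.12
$ ceph osd rm 12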

Re: [ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Wido den Hollander
Hi, On 12/3/18 4:21 PM, Athanasios Panterlis wrote: > Hi Wido, > > Yeap its quite old, since 2016. Its from a decommissioned cluster that > we just keep in healthy state without much update efforts. > I had in plan to do a clean up of unwanted disks snapshots etc, do a few > re-weights, update

Re: [ceph-users] Upgrade to Luminous (mon+osd)

2018-12-03 Thread Dan van der Ster
On Mon, Dec 3, 2018 at 5:00 PM Jan Kasprzak wrote: > > Dan van der Ster wrote: > : It's not that simple see http://tracker.ceph.com/issues/21672 > : > : For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was > : updated -- so the rpms restart the ceph.target. > : What's worse is

[ceph-users] Decommissioning cluster - rebalance questions

2018-12-03 Thread sinan
Hi, Currently I am decommissioning an old cluster. For example, I want to remove OSD Server X with all its OSDs. I am following these steps for all OSDs of Server X: - ceph osd out - Wait for rebalance (active+clean) - On OSD: service ceph stop osd. Once the steps above are performed, the
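
The steps above correspond roughly to the following commands per OSD (the id is a placeholder; the stop command depends on the init system in use):

$ ceph osd out 0
$ ceph -s                    # wait until all PGs are active+clean again
$ systemctl stop ceph-osd@0  # or: service ceph stop osd.0 on older setups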

[ceph-users] CentOS Dojo at Oak Ridge, Tennessee CFP is now open!

2018-12-03 Thread Mike Perez
Hey Cephers! Just wanted to give a heads up on the CentOS Dojo at Oak Ridge, Tennessee on Tuesday, April 16th, 2019. The CFP is now open, and I would like to encourage our community to participate if you can make the trip. Talks involving deploying Ceph with the community Ceph Ansible playbooks

Re: [ceph-users] CentOS Dojo at Oak Ridge, Tennessee CFP is now open!

2018-12-03 Thread Mike Perez
On 14:26 Dec 03, Mike Perez wrote: > Hey Cephers! > > Just wanted to give a heads up on the CentOS Dojo at Oak Ridge, Tennessee on > Tuesday, April 16th, 2019. > > The CFP is now open, and I would like to encourage our community to > participate > if you can make the trip. Talks involving