[ceph-users] How to ceph-volume on remote hosts?

2020-06-23 Thread steven prothero
Hello, I am new to Ceph and am setting up a small test cluster on a few servers to learn the system. I started the install with the "Cephadm" option, which uses Podman containers, and followed the steps here: https://docs.ceph.com/docs/master/cephadm/install/ I ran the bootstrap, added remote hosts,
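
For reference, with cephadm the usual way to create OSDs on remote hosts is through the orchestrator rather than running ceph-volume by hand; a minimal sketch (host name and device path below are placeholders, not from the original mail):

  # List the devices the orchestrator can see on each managed host
  ceph orch device ls

  # Create an OSD on a remote host's device through the orchestrator
  ceph orch daemon add osd node2:/dev/sdb

  # Or, on the remote host itself, run ceph-volume inside the cephadm container
  cephadm ceph-volume -- lvm list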

[ceph-users] Bluestore performance tuning for hdd with nvme db+wal

2020-06-23 Thread Mark Kirkwood
Hi, We have recently added a new storage node to our Luminous (12.2.13) cluster. The previous nodes are all set up as Filestore: e.g. 12 OSDs on HDD (Seagate Constellations) with one NVMe (Intel P4600) journal. With the new node we decided to introduce Bluestore, so it is configured as (same HW): 12
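
As a rough sketch of the setup being described (device names and the cache value are only examples, not recommendations from the thread): a Bluestore OSD with data on the HDD and the RocksDB/WAL on an NVMe partition, plus a larger HDD cache in ceph.conf:

  # Create the Bluestore OSD; with only --block.db given, the WAL lives with the DB
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1

  # ceph.conf: raise the Bluestore cache for HDD-backed OSDs (example value, 2 GiB)
  [osd]
  bluestore_cache_size_hdd = 2147483648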

[ceph-users] Re: How to remove one of two filesystems

2020-06-23 Thread Francois Legrand
Thanks a lot. It works. I could delete the filesystem and remove the pools (data and metadata). But now I am facing another problem: the removal of the pools seems to take an incredibly long time to free the space (the pool I deleted was about 100TB, and in 36h I got back only 10TB). In
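
For anyone following along, a minimal sketch of the removal sequence being discussed (filesystem and pool names are placeholders; pool deletion must be explicitly allowed first):

  ceph config set mon mon_allow_pool_delete true
  ceph fs fail myfs
  ceph fs rm myfs --yes-i-really-mean-it
  ceph osd pool delete myfs_data myfs_data --yes-i-really-really-mean-it
  ceph osd pool delete myfs_metadata myfs_metadata --yes-i-really-really-mean-it

  # Space is reclaimed asynchronously by the OSDs in the background; watch it with
  ceph df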

[ceph-users] Re: Nautilus: Monitors not listening on msgrv1

2020-06-23 Thread Paul Emmerich
It's only listening on v2 because the mon map says so. How it got into the mon map like this is hard to guess, but that's the place where you have to fix it. The simplest way to change the IP of a mon is to destroy and re-create it, but you can also edit the monmap manually following these instructions:
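
A condensed sketch of the manual monmap edit (mon name and addresses are placeholders; the affected mon must be stopped before injecting):

  ceph mon getmap -o /tmp/monmap          # grab the current monmap
  monmaptool --print /tmp/monmap          # confirm only a v2 address is listed
  monmaptool --rm mon1 /tmp/monmap        # drop the entry that lacks a v1 address
  monmaptool --addv mon1 [v2:192.168.1.10:3300,v1:192.168.1.10:6789] /tmp/monmap
  ceph-mon -i mon1 --inject-monmap /tmp/monmap   # push the fixed map back in
  # then start the mon again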

[ceph-users] Re: Nautilus: Monitors not listening on msgrv1

2020-06-23 Thread Julian Fölsch
Hi, Sorry about the second mail, I forgot the attachment! [0] https://paste.myftb.de/umefarejol.txt Am 23.06.20 um 22:44 schrieb Julian Fölsch: > Hi, > > I am currently facing the problem that our Ceph Cluster running Nautilus > is only listening on msgrv2 and we are not sure why. > This stops

[ceph-users] Nautilus: Monitors not listening on msgrv1

2020-06-23 Thread Julian Fölsch
Hi, I am currently facing the problem that our Ceph Cluster running Nautilus is only listening on msgrv2 and we are not sure why. This stops us from using block devices via rbd or mounting ceph via the kernel module. Attached[0] you can find the output of 'cat /etc/ceph/ceph.conf', 'ceph mon
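
A few diagnostic commands relevant here (the mon name is a placeholder); a v1-capable mon should advertise both a v2 (port 3300) and a v1 (port 6789) address:

  # What addresses do the monitors advertise in the monmap?
  ceph mon dump

  # On the mon host: is the legacy protocol enabled for this daemon?
  ceph daemon mon.myhost config get ms_bind_msgr1

  # Is anything actually listening on the v1 port?
  ss -tlnp | grep 6789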

[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-06-23 Thread David Galloway
On 6/23/20 1:21 PM, Ramana Venkatesh Raja wrote: > On Tue, Jun 23, 2020 at 6:59 PM Victoria Martinez de la Cruz > wrote: >> >> Hi folks, >> >> I'm hitting issues with the nfs-ganesha-stable packages [0], the repo url >> [1] is broken. Is there a known issue for this? >> > > The missing

[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-06-23 Thread Ramana Venkatesh Raja
On Tue, Jun 23, 2020 at 6:59 PM Victoria Martinez de la Cruz wrote: > > Hi folks, > > I'm hitting issues with the nfs-ganesha-stable packages [0], the repo url > [1] is broken. Is there a known issue for this? > The missing packages in chacra could be due to the recent mishap in the sepia long

[ceph-users] NFS Ganesha 2.7 in Xenial not available

2020-06-23 Thread Victoria Martinez de la Cruz
Hi folks, I'm hitting issues with the nfs-ganesha-stable packages [0], the repo url [1] is broken. Is there a known issue for this? Thanks, Victoria [0] https://shaman.ceph.com/repos/nfs-ganesha-stable/V2.7-stable/1a1fb71cdb811c1bac68f269dfbd5fed69c0913f/ceph_nautilus/128925/ [1]

[ceph-users] Re: OSD crash with assertion

2020-06-23 Thread Eugen Block
Hi, although changing an existing EC profile (by force) is possible (I haven't tried in Octopus yet), it won't have any effect on existing pools [1]: Choosing the right profile is important because it cannot be modified after the pool is created: a new pool with a different profile
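
A minimal sketch of that approach (profile name, k/m values and pool names are only examples): define a new profile, create a fresh pool with it, and then move the data over with whatever tooling fits the workload:

  ceph osd erasure-code-profile set newprofile k=4 m=2 crush-failure-domain=host
  ceph osd pool create newpool 128 128 erasure newprofile

  # Data then has to be migrated, e.g. with rados cppool for plain object
  # workloads (it has limitations, notably around omap and snapshots):
  rados cppool oldpool newpool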