[ceph-users] multiple nvme per osd

2019-10-21 Thread Frank R
Hi all, has anyone successfully created multiple partitions on an NVMe device using ceph-disk? If so, which commands were used?
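A minimal sketch of one way to do this, assuming the ceph-volume tool (which replaced ceph-disk) and an example device path /dev/nvme0n1; this is not necessarily what the poster ended up using:

    #!/bin/bash
    # Split a single NVMe device into two OSDs with ceph-volume.
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
    # Or create logical volumes by hand and pass them one at a time:
    #   ceph-volume lvm create --data vg_nvme/lv_osd0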

[ceph-users] Re: Ubuntu Disco with most recent Kernel 5.0.0-32 seems to be unstable

2019-10-21 Thread Ilya Dryomov
On Mon, Oct 21, 2019 at 6:12 PM Ranjan Ghosh wrote: > Hi Ilya, thanks for your answer - really helpful! We were so desperate today due to this bug that we downgraded to -23. But it's very good to know that -31 doesn't contain this bug and we could safely update back to this release.

[ceph-users] Re: Ubuntu Disco with most recent Kernel 5.0.0-32 seems to be unstable

2019-10-21 Thread Ranjan Ghosh
Hi Ilya, thanks for your answer - really helpful! We were so desperate today due to this bug that we downgraded to -23. But it's very good to know that -31 doesn't contain this bug and we could safely update back to this release. If a new version (say -33) is released: how/where can I find out if
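A hedged sketch of how such a check could be done on Ubuntu, assuming a hypothetical package name linux-image-5.0.0-33-generic:

    # Read the package changelog before upgrading:
    apt-get changelog linux-image-5.0.0-33-generic | less
    # List the kernel image packages currently installed:
    apt list --installed 'linux-image-5.0.0*'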

[ceph-users] Ubuntu Disco with most recent Kernel 5.0.0-32 seems to be unstable

2019-10-21 Thread Ranjan Ghosh
Hi all, it seems Ceph on Ubuntu Disco (19.04) with the most recent kernel 5.0.0-32 is unstable. It crashes sometimes after a few hours, sometimes even after a few minutes. I found this bug on CoreOS: https://github.com/coreos/bugs/issues/2616 which is exactly the error message I get

[ceph-users] Re: Occasionally ceph.dir.rctime is incorrect (14.2.4 nautilus)

2019-10-21 Thread Toby Darling
Hi again, I've managed to simplify this. I think it only affects empty directories. It is still non-deterministic: ceph.dir.rctime will be set correctly between 30% and 80% of the time; the rest of the time it will be the same as the directory's original mtime. #!/bin/bash
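The script itself is cut off above; a hypothetical minimal reproducer along the same lines, assuming a CephFS mount at /mnt/cephfs, might look like:

    #!/bin/bash
    # Create an empty subdirectory and compare ceph.dir.rctime of the
    # parent and the subdirectory against the subdirectory's mtime.
    d=/mnt/cephfs/rctime-test-$$
    mkdir -p "$d/empty-subdir"
    sleep 1
    getfattr -n ceph.dir.rctime --only-values "$d"; echo
    getfattr -n ceph.dir.rctime --only-values "$d/empty-subdir"; echo
    stat -c '%Y (mtime)' "$d/empty-subdir"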

[ceph-users] Re: RBD Mirror, Clone non primary Image

2019-10-21 Thread Jason Dillaman
On Mon, Oct 21, 2019 at 8:03 AM wrote: > Hi, I have a working RBD mirror setup using ceph version 14.2.4 on both sides. I want to have a clone of a non-primary image. I do it this way: 1. create snapshot of primary image 2. wait for the snapshot to appear on the backup cluster

[ceph-users] Re: Dashboard doesn't respond after failover

2019-10-21 Thread Matthew Stroud
Thanks for responding. It isn't a session issue, because the port is closed. It wouldn't bother me if I had to log in again. Thanks, Matthew Stroud On Oct 21, 2019, at 3:25 AM, Volker Theile wrote: Hi Matthew, that's normal because the session is not authenticated on the failover
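A hedged sketch of commands that can confirm where the dashboard is being served after a failover (not taken from the thread); 8443 is the default SSL port:

    ceph mgr services        # URL of the dashboard on the active mgr
    ceph mgr stat            # which mgr daemon is currently active
    ss -tlnp | grep 8443     # is anything listening on the dashboard port?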

[ceph-users] Ceph BlueFS Superblock Lost

2019-10-21 Thread Winger Cheng
Hello everyone, my OSD broke recently; the first 8 MB of the block device was wiped clean. Since I used ceph-volume to create the BlueStore OSD with WAL, DB, and slow data all on one disk, I lost the superblock. Thanks to the LVM backup, I saved the BlueStore superblock, but I can't get the BlueFS
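A hedged, read-only sketch for inspecting what is still readable on such an OSD; the device and OSD paths are assumed examples, and none of this restores a wiped superblock:

    ceph-bluestore-tool show-label --dev /dev/ceph-vg/osd-block-0
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0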

[ceph-users] rgw index large omap

2019-10-21 Thread Frank R
I have an RGW index pool that is alerting as "large" on 2 of the 3 OSDs in the PG. The primary has a large omap. The index is definitely in use by the bucket. Any opinions on the best way to solve this? 1. Remove the 2 OSDs with the large index from the cluster and rebalance? 2. Delete 2 of the 3 and
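A hedged sketch of a third option not listed above: reshard the bucket index so no single index object carries that much omap (the bucket name and shard count are assumed examples); a deep scrub of the affected PG afterwards clears the warning:

    radosgw-admin bucket limit check                       # per-bucket object/shard stats
    radosgw-admin reshard add --bucket mybucket --num-shards 101
    radosgw-admin reshard process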

[ceph-users] Ceph Tech Talk October 2019: Ceph at NASA

2019-10-21 Thread Mike Perez
Hi Cephers, I'm pleased to announce that we're starting Ceph Tech Talks back up (unfortunately this is probably the last one for this year due to the holidays). On October 24th at 17:00 UTC, Kevin Hrpcek will be presenting on Ceph at NASA and why they use librados instead of higher-level features. For

[ceph-users] RBD Mirror, Clone non primary Image

2019-10-21 Thread yveskretzschmar
Hi, I have a working RBD mirror setup using ceph version 14.2.4 on both sides. I want to have a clone of a non-primary image. I do it this way: 1. create snapshot of primary image 2. wait for the snapshot to appear on the backup cluster 3. create a clone in the backup cluster (using Simplified RBD
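A hedged sketch of those three steps as rbd CLI calls; the pool/image names and --cluster names are assumed examples, and cloning without protecting the snapshot relies on clone format v2 ("simplified" cloning):

    rbd --cluster primary snap create rbd/vm1@mirror-clone            # step 1
    rbd --cluster backup  snap ls rbd/vm1                             # step 2: wait for it
    rbd --cluster backup  clone --rbd-default-clone-format 2 \
        rbd/vm1@mirror-clone rbd/vm1-clone                            # step 3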

[ceph-users] Re: RadosGW cant list objects when there are too many of them

2019-10-21 Thread Paul Emmerich
On Mon, Oct 21, 2019 at 11:20 AM Arash Shams wrote: > Yes, listing v2 is not supported yet. I checked the metadata OSDs and all of them are 600 GB 10k HDDs; I don't think this was the issue. I will test --allow-unordered. 5 million objects in a single bucket and metadata on HDD is a disaster

[ceph-users] Re: Install error

2019-10-21 Thread Janne Johansson
On Mon, 21 Oct 2019 at 13:15, masud parvez wrote: > I am trying to install Ceph on Ubuntu 16.04 following this link: https://www.supportsages.com/ceph-part-5-ceph-configuration-on-ubuntu/ It's kind of hard to support someone else's documentation; you should really start by contacting them

[ceph-users] Install error

2019-10-21 Thread masud parvez
I am trying to install Ceph on Ubuntu 16.04 following this link https://www.supportsages.com/ceph-part-5-ceph-configuration-on-ubuntu/ but when I run this command: ceph-deploy install ceph-deploy monnode1 osd0 osd1 I am facing this error: [ceph-deploy][WARNIN] E: Sub-process /usr/bin/dpkg returned
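A hedged sketch of generic recovery for a dpkg sub-process failure (run on the node where dpkg failed, then retry the install); not specific to this thread:

    sudo dpkg --configure -a
    sudo apt-get -f install
    sudo apt-get update
    # then, from the admin node:
    ceph-deploy install ceph-deploy monnode1 osd0 osd1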

[ceph-users] Re: RadosGW cant list objects when there are too many of them

2019-10-21 Thread Arash Shams
Thanks Paul. Yes, listing v2 is not supported yet. I checked the metadata OSDs and all of them are 600 GB 10k HDDs; I don't think this was the issue. I will test --allow-unordered. Regards From: Paul Emmerich Sent: Thursday, October 17, 2019 10:00 AM To: Arash Shams