[ceph-users] EC Pool Disk Performance Toshiba vs Seagate

2018-12-12 Thread Ashley Merrick
I have a Mimic BlueStore EC RBD pool running on 8+2, currently spread across 4 nodes. 3 nodes are running Toshiba disks while one node is running Seagate disks (same size, spinning speed, enterprise disks, etc.), and I have noticed a huge difference in IOWAIT and disk latency performance
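
For context, a minimal sketch of how an 8+2 EC data pool for RBD is typically set up — the profile, pool, and image names below are placeholders, not the poster's actual configuration:

    # assumed names and PG counts, for illustration only
    ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=osd
    ceph osd pool create rbd-ec-data 256 256 erasure ec-8-2
    ceph osd pool set rbd-ec-data allow_ec_overwrites true   # required for RBD on EC, BlueStore only
    ceph osd pool application enable rbd-ec-data rbd
    # images keep their metadata in a replicated pool and place data in the EC pool
    rbd create rbd/test-image --size 1T --data-pool rbd-ec-data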

Re: [ceph-users] mds lost very frequently

2018-12-12 Thread Yan, Zheng
On Thu, Dec 13, 2018 at 2:55 AM Sang, Oliver wrote: > > We are using luminous; we have seven ceph nodes and set them all up as MDS. > > Recently the MDS gets lost very frequently, and when there is only one MDS left, > the cephfs just degrades to unusable. > > > > Checking the mds log on one ceph node,

[ceph-users] RDMA/RoCE enablement failed with (113) No route to host

2018-12-12 Thread Michael Green
Hello collective wisdom, ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable) here. I have a working cluster consisting of 3 monitor hosts, 64 OSD processes across 4 OSD hosts, plus 2 MDSs, plus 2 MGRs. All of that is consumed by 10 client nodes. Every host in

Re: [ceph-users] ERR scrub mismatch

2018-12-12 Thread Marco Aroldi
Hello, do you see the cause of the logged errors? I can't find any documentation about that, so I'm stuck. I really need some help. Thanks everybody, Marco On Fri, 7 Dec 2018 at 17:30, Marco Aroldi wrote: > Thanks Greg, > Yes, I'm using CephFS and RGW (mainly CephFS) > The files are still

Re: [ceph-users] Decommissioning cluster - rebalance questions

2018-12-12 Thread Dyweni - Ceph-Users
Safest to just 'osd crush reweight osd.X 0' and let rebalancing finish. Then 'osd out X' and shut down/remove the OSD drive. On 2018-12-04 03:15, Jarek wrote: On Mon, 03 Dec 2018 16:41:36 +0100 si...@turka.nl wrote: Hi, Currently I am decommissioning an old cluster. For example, I want to
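
On a recent release that sequence looks roughly like the sketch below (osd.X stands for the OSD being retired; `ceph osd purge` exists from Luminous onwards, older clusters use `ceph osd crush remove` / `ceph auth del` / `ceph osd rm` instead):

    ceph osd crush reweight osd.X 0          # drain its data onto the rest of the cluster
    ceph -s                                  # repeat until backfill/recovery has finished
    ceph osd out X
    systemctl stop ceph-osd@X                # on the host holding the disk
    ceph osd purge osd.X --yes-i-really-mean-it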

[ceph-users] Why does "df" against a mounted cephfs report (vastly) different free space?

2018-12-12 Thread David Young
Hi all, I have a cluster used exclusively for CephFS (an EC "media" pool, and a standard metadata pool for the cephfs). "ceph -s" shows me: --- data: pools: 2 pools, 260 pgs objects: 37.18 M objects, 141 TiB usage: 177 TiB used, 114 TiB / 291 TiB avail pgs: 260
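
For anyone comparing the two views, a hedged sketch (the mount point is an assumed example): per-pool numbers, including the usable space an EC pool has left after its k+m overhead, show up in `ceph df`, while `df` on the mount reports what the client derives from the cluster totals (or from a quota, if one is set on the mounted directory):

    ceph df detail        # per-pool USED and MAX AVAIL
    df -h /mnt/cephfs     # client-side view of the same cluster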

Re: [ceph-users] size of inc_osdmap vs osdmap

2018-12-12 Thread Sergey Dolgov
Those are the sizes in the file system; I use filestore as the backend. On Wed, Dec 12, 2018, 22:53 Gregory Farnum wrote: Hmm that does seem odd. How are you looking at those sizes? > > On Wed, Dec 12, 2018 at 4:38 AM Sergey Dolgov wrote: > >> Greg, for example for our cluster ~1000 osd: >> >> size
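
For reference, on a FileStore OSD those map objects sit under the meta directory, so something like the sketch below (default data path and osd id assumed) is one way to read their on-disk sizes:

    ls -lh /var/lib/ceph/osd/ceph-0/current/meta/ | grep -i osdmap | tail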

Re: [ceph-users] size of inc_osdmap vs osdmap

2018-12-12 Thread Gregory Farnum
Hmm that does seem odd. How are you looking at those sizes? On Wed, Dec 12, 2018 at 4:38 AM Sergey Dolgov wrote: > Greg, for example for our cluster ~1000 osd: > > size osdmap.1357881__0_F7FE779D__none = 363KB (crush_version 9860, > modified 2018-12-12 04:00:17.661731) > size

Re: [ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Vikas Rana
When I promote the DR image, I can mount it fine: root@vtier-node1:~# rbd mirror image promote testm-pool/test01 --force Image promoted to primary root@vtier-node1:~# root@vtier-node1:~# mount /dev/nbd0 /mnt mount: block device /dev/nbd0 is write-protected, mounting read-only On Wed, Dec 12,

Re: [ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Vikas Rana
To give more output: this is an XFS filesystem. root@vtier-node1:~# rbd-nbd --read-only map testm-pool/test01 2018-12-12 13:04:56.674818 7f1c56e29dc0 -1 asok(0x560b19b3bdf0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to
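
For an XFS filesystem on a non-primary (write-protected) mirror image, the usual approach is to keep XFS from attempting log recovery at mount time; a hedged sketch using the names from this thread:

    rbd-nbd --read-only map testm-pool/test01
    mount -o ro,norecovery,nouuid /dev/nbd0 /mnt   # norecovery skips log replay, nouuid avoids UUID clashes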

Re: [ceph-users] Re: ceph pg backfill_toofull

2018-12-12 Thread Joachim Kraftmayer
In such a situation, we noticed a performance drop (caused by the filesystem) and soon had no free inodes left. ___ Clyso GmbH On 12.12.2018 at 09:24, Klimenko, Roman wrote: Ok, I'll try these params. thx!

Re: [ceph-users] Luminous v12.2.10 released

2018-12-12 Thread David Galloway
Hey Dan, Thanks for bringing this to our attention. Looks like it did get left out. I just pushed the package and added a step to the release process to make sure packages don't get skipped again like that. - David On 12/12/2018 11:03 AM, Dan van der Ster wrote: > Hey Abhishek, > > We just

Re: [ceph-users] Luminous v12.2.10 released

2018-12-12 Thread Dan van der Ster
Hey Abhishek, We just noticed that the debuginfo is missing for 12.2.10: http://download.ceph.com/rpm-luminous/el7/x86_64/ceph-debuginfo-12.2.10-0.el7.x86_64.rpm Did something break in the publishing? Cheers, Dan On Tue, Nov 27, 2018 at 3:50 PM Abhishek Lekshmanan wrote: > > > We're happy to

Re: [ceph-users] How to troubleshoot rsync to cephfs via nfs-ganesha stalling

2018-12-12 Thread Daniel Gryniewicz
Okay, this all looks fine, and it's extremely unlikely that a text file will have holes in it (I thought holes, because rsync handles holes, but wget would just copy zeros instead). Is this reproducible? If so, can you turn up Ganesha logging and post a log file somewhere? Daniel On
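
If it helps, a minimal sketch of raising Ganesha's log verbosity — the config path and exact level are assumptions, adjust to the local install:

    # add to /etc/ganesha/ganesha.conf, then restart the nfs-ganesha service
    LOG {
        Default_Log_Level = FULL_DEBUG;
    }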

Re: [ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Wido den Hollander
On 12/12/18 4:44 PM, Vikas Rana wrote: > Hi, > > We are using Luminous and copying a 100TB RBD image to DR site using RBD > Mirror. > > Everything seems to work fine. > > The question is, can we mount the DR copy as Read-Only? We can do it on > Netapp and we are trying to figure out if

[ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Vikas Rana
Hi, We are using Luminous and copying a 100TB RBD image to DR site using RBD Mirror. Everything seems to work fine. The question is, can we mount the DR copy as Read-Only? We can do it on Netapp and we are trying to figure out if somehow we can mount it RO on DR site, then we can do backups at

Re: [ceph-users] Deploying an Active/Active NFS Cluster over CephFS

2018-12-12 Thread David C
Hi Jeff, Many thanks for this! Looking forward to testing it out. Could you elaborate a bit on why Nautilus is recommended for this set-up, please? Would attempting this with a Luminous cluster be a non-starter? On Wed, 12 Dec 2018, 12:16 Jeff Layton wrote: (Sorry for the duplicate email to ganesha

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-12 Thread Alfredo Deza
On Tue, Dec 11, 2018 at 7:28 PM Tyler Bishop wrote: > > Now I'm just trying to figure out how to create filestore in Luminous. > I've read every doc and tried every flag but I keep ending up with > either a data LV of 100% on the VG or a bunch of random errors for > unsupported flags... An LV
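
For what it's worth, a hedged sketch of the FileStore invocation on Luminous — the device names are placeholders; ceph-volume creates the required LVs itself:

    # --data may be a raw device, partition, or existing LV; --journal a partition or LV
    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1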

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-12 Thread Alfredo Deza
On Tue, Dec 11, 2018 at 8:16 PM Mark Kirkwood wrote: > > Looks like the 'delaylog' option for xfs is the problem - no longer supported > in later kernels. See > https://github.com/torvalds/linux/commit/444a702231412e82fb1c09679adc159301e9242c > > Offhand I'm not sure where that option is being
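
One hedged way to check what the OSD actually ended up with is to ask it over the admin socket (osd.0 is just an example id):

    ceph daemon osd.0 config get osd_mount_options_xfs
    ceph daemon osd.0 config diff    # lists settings that differ from the built-in defaults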

Re: [ceph-users] size of inc_osdmap vs osdmap

2018-12-12 Thread Sergey Dolgov
Greg, for example for our cluster ~1000 osd: size osdmap.1357881__0_F7FE779D__none = 363KB (crush_version 9860, modified 2018-12-12 04:00:17.661731) size osdmap.1357882__0_F7FE772D__none = 363KB size osdmap.1357883__0_F7FE74FD__none = 363KB (crush_version 9861, modified 2018-12-12

Re: [ceph-users] yet another deep-scrub performance topic

2018-12-12 Thread Vladimir Prokofev
Thank you all for your input. My best guess at the moment is that deep-scrub performs as it should, and the issue is that it just has no limits on its performance, so it uses all the OSD time it can. Even if it has a lower priority than client IO, it can still fill the disk queue and effectively
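
For the record, the knobs usually reached for to keep deep-scrub from saturating the disks look roughly like this — the values are illustrative, not recommendations:

    ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'   # confine scrubs to off-peak hours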

Re: [ceph-users] How to troubleshoot rsync to cephfs via nfs-ganesha stalling

2018-12-12 Thread Marc Roos
Hi Daniel, thanks for looking at this. These are the mount options: type nfs4 (rw,nodev,relatime,vers=4,intr,local_lock=none,retrans=2,proto=tcp,rsize=8192,wsize=8192,hard,namlen=255,sec=sys) I have overwritten the original files, so I cannot examine whether they had holes. To be honest, I don't

Re: [ceph-users] move directories in cephfs

2018-12-12 Thread Zhenshi Zhou
Hi, Thanks for the explanation. I did a test a few moments ago and everything went just as I expected. Thanks for your help :) Konstantin Shalygin wrote on Wed, 12 Dec 2018 at 16:57: > Hi > > That means the 'mv' operation should be done if src and dst > are in the same pool, and the client should

Re: [ceph-users] civitweb segfaults

2018-12-12 Thread Leon Robinson
That did the trick. We had it set to 0 just on the Swift RGW definitions, although it was set on the other RGW services; I'm guessing someone must have thought there was a different precedence in play in the past. On Tue, 2018-12-11 at 11:41 -0500, Casey Bodley wrote: Hi Leon, Are you running

Re: [ceph-users] move directories in cephfs

2018-12-12 Thread Konstantin Shalygin
Hi, That means the 'mv' operation should be done if src and dst are in the same pool, and the client should have the same permissions on both src and dst. Do I have the right understanding? Yes. k
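
A hedged example of granting a client matching caps on both directories so the rename can go through — the filesystem, client, and path names here are invented for illustration:

    ceph fs authorize cephfs client.mover /src rw /dst rw
    # per the thread: src and dst should also live in the same data pool for 'mv' to behave as expected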

[ceph-users] Re: ceph pg backfill_toofull

2018-12-12 Thread Klimenko, Roman
Ok, I'll try these params. thx! From: Maged Mokhtar Sent: 12 December 2018 10:51 To: Klimenko, Roman; ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph pg backfill_toofull There are 2 relevant params: mon_osd_full_ratio 0.95
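
On Luminous and newer the full/backfill thresholds live in the OSDMap and can be changed at runtime; a sketch using the usual defaults:

    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95
    ceph osd dump | grep ratio    # verify the active values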

[ceph-users] mds lost very frequently

2018-12-12 Thread Sang, Oliver
We are using luminous; we have seven ceph nodes and set them all up as MDS. Recently the MDS gets lost very frequently, and when there is only one MDS left, the cephfs just degrades to unusable. Checking the mds log on one ceph node, I found the below >