[ceph-users] CephFS Ganesha NFS for VMWare

2019-10-29 Thread Glen Baars
Hello Ceph Users, I am trialing CephFS / Ganesha NFS for VMware usage. We are on Mimic / CentOS 7.7 / 130 x 12TB 7200rpm OSDs / 13 hosts / 3x replication. So far the read performance has been great. The write performance (NFS sync) hasn't been great. We use a lot of 64KB NFS reads/writes and the
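
For reference, a minimal sketch of the kind of Ganesha export involved here (FSAL_CEPH backed by CephFS); the export id, pseudo path and cephx user are assumptions, not taken from the post. ESXi issues NFS writes as sync/stable writes, which is why the sync write path dominates:

    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Protocols = 3;
        Transports = TCP;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";    # hypothetical cephx user
        }
    }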

Re: [ceph-users] Any CEPH's iSCSI gateway users?

2019-06-11 Thread Glen Baars
Interesting performance increase! I'm using iSCSI at a few installations and now I wonder what version of CentOS is required to improve performance! Did the cluster go from Luminous to Mimic? Glen -Original Message- From: ceph-users On Behalf Of Heðin Ejdesgaard Møller Sent: Saturday, 8

Re: [ceph-users] op_w_latency

2019-04-02 Thread Glen Baars
doing 500-1000 ops overall. The network is dual 10Gbit using LACP, with a VLAN for private Ceph traffic and untagged for public. Glen From: Konstantin Shalygin Sent: Wednesday, 3 April 2019 11:39 AM To: Glen Baars Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] op_w_latency Hello Ceph Users

[ceph-users] op_w_latency

2019-04-01 Thread Glen Baars
Hello Ceph Users, I am finding that the write latency across my ceph clusters isn't great and I wanted to see what other people are getting for op_w_latency. Generally I am getting 70-110ms latency. I am using: ceph --admin-daemon /var/run/ceph/ceph-osd.102.asok perf dump | grep -A3
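
The perf dump command in the post is cut off; a hedged sketch of pulling the same counter from the admin socket, assuming osd.102 as in the post (the avgtime field is present in Luminous/Mimic latency counters):

    ceph daemon osd.102 perf dump | grep -A3 '"op_w_latency"'
    # or report the running average in milliseconds with jq
    ceph daemon osd.102 perf dump | jq '.osd.op_w_latency.avgtime * 1000'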

[ceph-users] When to use a separate RocksDB SSD

2019-03-21 Thread Glen Baars
Hello Ceph, What is the best way to find out how the RocksDB is currently performing? I need to build a business case for NVMe devices for RocksDB. Kind regards, Glen Baars
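
A hedged starting point: the OSD perf counters expose both RocksDB and BlueFS statistics, which can show whether the DB is competing with data on the HDDs (osd.0 is just an example):

    # RocksDB counters (get/submit latencies, compactions)
    ceph daemon osd.0 perf dump | jq '.rocksdb'
    # BlueFS usage; with a separate DB device, non-zero slow_used_bytes means spillover to the HDD
    ceph daemon osd.0 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'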

Re: [ceph-users] Slow OPS

2019-03-21 Thread Glen Baars
0042766 a:head v 30675'5522366)", "initiated_at": "2019-03-21 16:51:56.862447", "age": 376.527241, "duration": 1.331278, Kind regards, Glen Baars -Original Message- From: Brad Hubbard Sent: Th

Re: [ceph-users] Slow OPS

2019-03-20 Thread Glen Baars
{ "time": "2019-03-21 14:12:43.699872", "event": "commit_sent" }, Does anyone know what that section is waiting for? Kind regards, Glen Baars -Original Message- From: Br

[ceph-users] Slow OPS

2019-03-20 Thread Glen Baars
Hello Ceph Users, Does anyone know what the flag point 'Started' is? Is that the Ceph OSD daemon waiting on the disk subsystem? Ceph 13.2.4 on CentOS 7.5 "description": "osd_op(client.1411875.0:422573570 5.18ds0 5:b1ed18e5:::rbd_data.6.cf7f46b8b4567.0046e41a:head [read
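
For anyone following along, the flag points come from the OSD's op tracker; a hedged sketch of dumping the per-op event timeline (osd.12 is a placeholder, and the exact JSON layout may differ slightly between releases), which shows how long an op sat at 'Started', 'queued_for_pg', 'commit_sent', etc.:

    ceph daemon osd.12 dump_ops_in_flight
    ceph daemon osd.12 dump_historic_ops | jq '.ops[] | {description, duration, events: .type_data.events}'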

Re: [ceph-users] Mimic 13.2.4 rbd du slowness

2019-02-28 Thread Glen Baars
:46 AM To: Glen Baars Cc: Wido den Hollander ; ceph-users Subject: Re: [ceph-users] Mimic 13.2.4 rbd du slowness Have you used strace on the du command to see what it's spending its time doing? On Thu, Feb 28, 2019, 8:45 PM Glen Baars mailto:g...@onsitecomputers.com.au>> wrote: Hell
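
A hedged version of the strace suggestion (the image name is hypothetical): -c gives a syscall summary and -T per-call timings, which should show whether the time goes to network round trips or userspace CPU:

    strace -f -c rbd du RBD-HDD/vm-disk-1
    strace -f -T -o /tmp/rbd-du.trace rbd du RBD-HDD/vm-disk-1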

Re: [ceph-users] Mimic 13.2.4 rbd du slowness

2019-02-28 Thread Glen Baars
du command now takes around 2-3 minutes. Kind regards, Glen Baars -Original Message- From: Wido den Hollander Sent: Thursday, 28 February 2019 5:05 PM To: Glen Baars ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Mimic 13.2.4 rbd du slowness On 2/28/19 9:41 AM, Glen Baars wrote

Re: [ceph-users] Mimic 13.2.4 rbd du slowness

2019-02-28 Thread Glen Baars
goto cleanup; } else { vol->target.allocation = info.obj_size * info.num_objs; } ------ Kind regards, Glen Baars -Original Message- From: Wido den Hollander Sent: Thursday, 28 February 2019 3:49 PM To: Glen Baars ; ceph-users@lists.ceph.com Subjec

[ceph-users] Mimic 13.2.4 rbd du slowness

2019-02-27 Thread Glen Baars
running a rbd du on the large images. The limiting factor is the CPU on the rbd du command; it uses 100% of a single core. Our cluster is completely Bluestore / Mimic 13.2.4: 168 OSDs, 12 Ubuntu 16.04 hosts. Kind regards, Glen Baars
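
rbd du only stays cheap when the fast-diff feature is enabled and the object maps are valid; otherwise it has to walk every object. A hedged check, with a hypothetical image name and assuming exclusive-lock is already enabled:

    rbd info RBD-HDD/vm-disk-1 | grep -E 'features|flags'
    # if object-map/fast-diff are missing, or the flags show an invalid map:
    rbd feature enable RBD-HDD/vm-disk-1 object-map fast-diff
    rbd object-map rebuild RBD-HDD/vm-disk-1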

[ceph-users] Mimic Bluestore memory optimization

2019-02-24 Thread Glen Baars
the hosts are 1 x 6-core CPU, 72GB RAM, 14 OSDs, 2 x 10Gbit. LSI CacheCade / writeback cache for the HDDs and LSI JBOD for the SSDs. 9 hosts in this cluster. Kind regards, Glen Baars
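
A hedged sketch of the main memory knob in Mimic, osd_memory_target, which caps each BlueStore OSD's target resident size; the value below is illustrative (14 OSDs x 4 GiB is roughly 56 GiB, leaving headroom on a 72 GB host):

    ceph config set osd osd_memory_target 4294967296
    # equivalent ceph.conf setting under [osd]: osd memory target = 4294967296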

[ceph-users] Segfaults on 12.2.9 and 12.2.8

2019-01-14 Thread Glen Baars
adPool::WorkThreadSharded::entry()+0x10) [0x55565ee0f1e0] 19: (()+0x76ba) [0x7fec8af206ba] 20: (clone()+0x6d) [0x7fec89f9741d] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. Kind regards, Glen Baars

[ceph-users] Hyper-v ISCSI support

2018-09-21 Thread Glen Baars
error. I am assuming this is due to SCSI-3 persistent reservations. Has anyone managed to get Ceph to serve iSCSI to Windows Cluster Shared Volumes? If so, how? Kind regards, Glen Baars

Re: [ceph-users] Invalid Object map without flags set

2018-08-20 Thread Glen Baars
Hello K, We have found our issue – we were only fixing the main RBD image in our script rather than the snapshots. Working fine now. Thanks for your help. Kind regards, Glen Baars From: Konstantin Shalygin Sent: Friday, 17 August 2018 11:20 AM To: ceph-users@lists.ceph.com; Glen Baars Subject
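
For anyone hitting the same thing, a hedged sketch of the fix described (rebuilding the object map for each snapshot as well as the head image); pool/image names are hypothetical:

    rbd object-map rebuild RBD-HDD/vm-disk-1
    rbd snap ls RBD-HDD/vm-disk-1 --format json | jq -r '.[].name' | \
        while read snap; do rbd object-map rebuild "RBD-HDD/vm-disk-1@${snap}"; done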

[ceph-users] Invalid Object map without flags set

2018-08-16 Thread Glen Baars
"format":2,"features":["layering","exclusive-lock","object-map","fast-diff","deep-flatten"],"flags":[],"create_timestamp":"Sat Apr 28 19:45:59 2018"} [Feat]["layering","exclusive-lock"

Re: [ceph-users] RBD journal feature

2018-08-16 Thread Glen Baars
Thanks for your help  Kind regards, Glen Baars From: Jason Dillaman Sent: Thursday, 16 August 2018 10:21 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] RBD journal feature On Thu, Aug 16, 2018 at 2:37 AM Glen Baars mailto:g...@onsitecomputers.com.au>> wrote: Is the

Re: [ceph-users] RBD journal feature

2018-08-16 Thread Glen Baars
Is there any workaround that you can think of to correctly enable journaling on locked images? Kind regards, Glen Baars From: ceph-users On Behalf Of Glen Baars Sent: Tuesday, 14 August 2018 9:36 PM To: dilla...@redhat.com Cc: ceph-users Subject: Re: [ceph-users] RBD journal feature Hello

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason, Thanks for your help. Here is the output you asked for also. https://pastebin.com/dKH6mpwk Kind regards, Glen Baars From: Jason Dillaman Sent: Tuesday, 14 August 2018 9:33 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] RBD journal feature On Tue, Aug 14, 2018 at 9

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason, I have now narrowed it down. If the image has an exclusive lock – the journal doesn’t go on the correct pool. Kind regards, Glen Baars From: Jason Dillaman Sent: Tuesday, 14 August 2018 9:29 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] RBD journal feature On Tue

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason, I have tried with and without ‘rbd journal pool = rbd’ in the ceph.conf. it doesn’t seem to make a difference. Also, here is the output: rbd image-meta list RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a There are 0 metadata on this image. Kind regards, Glen Baars From: Jason

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason, I will also complete testing of a few combinations tomorrow to try and isolate the issue now that we can get it to work with a new image. The cluster started out at 12.2.3 bluestore so there shouldn’t be any old issues from previous versions. Kind regards, Glen Baars From: Jason

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, journaling flags: create_timestamp: Sat May 5 11:39:07 2018 journal: 37c8974b0dc51 mirroring state: disabled Kind regards, Glen Baars From: Jason Dillaman Sent: Tuesday, 14 August 2018 12:04 AM

Re: [ceph-users] RBD journal feature

2018-08-11 Thread Glen Baars
128K writes from 160MB/s down to 14MB/s). We see no improvement when moving the journal to SSDPOOL (but we don’t think it is really moving). Kind regards, Glen Baars From: Jason Dillaman Sent: Saturday, 11 August 2018 11:28 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] RBD journal

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Glen Baars
had these files in it. ceph.client.admin.keyring ceph.client.primary.keyring ceph.conf primary.client.primary.keyring primary.conf secondary.client.secondary.keyring secondary.conf Kind regards, Glen Baars -Original Message- From: Thode Jocelyn Sent: Thursday, 9 August 2018 1:41 PM
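
The file layout above is the usual rbd-mirror arrangement: the clusters can keep their default internal name, and only the client-side config/keyring file names differ, selected with --cluster. A hedged sketch:

    # --cluster <name> makes the CLI read /etc/ceph/<name>.conf and the matching keyring
    rbd --cluster primary mirror pool info rbd
    rbd --cluster secondary mirror pool status rbd --verbose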

[ceph-users] RBD journal feature

2018-08-10 Thread Glen Baars
and all bluestore. We have also tried the ceph.conf option (rbd journal pool = SSDPOOL). Has anyone else gotten this working? Kind regards, Glen Baars
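
A hedged alternative to the ceph.conf approach: the journal data pool can be requested when (re)enabling the feature, and then verified; the image name is hypothetical and the exclusive lock may need to be released first:

    rbd feature disable RBD-HDD/vm-disk-1 journaling
    rbd feature enable RBD-HDD/vm-disk-1 journaling --journal-pool SSDPOOL
    # confirm which pool the journal objects landed in
    rbd journal info --pool RBD-HDD --image vm-disk-1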

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
Hello Erik, We are going to use RBD-mirror to replicate the clusters. This seems to need separate cluster names. Kind regards, Glen Baars From: Erik McCormick Sent: Thursday, 2 August 2018 9:39 AM To: Glen Baars Cc: Thode Jocelyn ; Vasu Kulkarni ; ceph-users@lists.ceph.com Subject: Re: [ceph

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
Hello Ceph Users, Does anyone know how to set the Cluster Name when deploying with Ceph-deploy? I have 3 clusters to configure and need to correctly set the name. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Glen Baars Sent: Monday, 23 July 2018 5:59 PM

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-23 Thread Glen Baars
How very timely, I am facing the exact same issue. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Thode Jocelyn Sent: Monday, 23 July 2018 1:42 PM To: Vasu Kulkarni Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name Hi, Yes

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Glen Baars
. Kind regards, Glen Baars From: Linh Vu Sent: Sunday, 22 July 2018 7:46 AM To: Glen Baars ; ceph-users Subject: Re: 12.2.7 - Available space decreasing when adding disks Something funny going on with your new disks: 138 ssd 0.90970 1.0 931G 820G 111G 88.08 2.71 216 Added 139 ssd

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Glen Baars
osd.78 up 1.0 1.0 79 ssd 0.54579 osd.79 up 1.0 1.0 Kind regards, Glen Baars From: Shawn Iverson Sent: Saturday, 21 July 2018 9:21 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] 12.2.7 - Available space decreasing

[ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-20 Thread Glen Baars
44.07 1.35 104 74 ssd 0.54579 1.0 558G 273G 285G 48.91 1.50 122 75 ssd 0.54579 1.0 558G 281G 276G 50.45 1.55 114 78 ssd 0.54579 1.0 558G 289G 269G 51.80 1.59 133 79 ssd 0.54579 1.0 558G 276G 282G 49.39 1.52 119 Kind regards, Glen Baars
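
Worth noting for this thread: a pool's MAX AVAIL is derived from the fullest OSD its CRUSH rule can place data on, so a single very full OSD (such as the 88% one quoted in the reply above) pulls the reported available space down. A hedged way to spot and rebalance it (the %USE column index may differ between releases):

    ceph osd df | sort -rn -k8 | head
    ceph osd test-reweight-by-utilization 110   # dry run first
    ceph osd reweight-by-utilization 110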

Re: [ceph-users] 12.2.6 upgrade

2018-07-20 Thread Glen Baars
Thanks, we are fully bluestore and therefore just set osd skip data digest = true Kind regards, Glen Baars -Original Message- From: Dan van der Ster Sent: Friday, 20 July 2018 4:08 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] 12.2.6 upgrade That's right. But please
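
For completeness, a hedged sketch of applying that setting, both persistently and at runtime (matching the 12.2.7 guidance for BlueStore-only clusters referenced in the thread):

    # ceph.conf, [osd] section:
    #   osd skip data digest = true
    # runtime, without restarting OSDs:
    ceph tell osd.* injectargs '--osd-skip-data-digest=true'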

Re: [ceph-users] 12.2.6 upgrade

2018-07-20 Thread Glen Baars
I saw that on the release notes. Does that mean that the active+clean+inconsistent PGs will be OK? Is the data still getting replicated even if inconsistent? Kind regards, Glen Baars -Original Message- From: Dan van der Ster Sent: Friday, 20 July 2018 3:57 PM To: Glen Baars Cc: ceph

[ceph-users] 12.2.6 upgrade

2018-07-20 Thread Glen Baars
ead: failed to pick suitable auth object 2018-07-20 12:21:07.463206 osd.124 osd.124 10.4.35.36:6810/1865422 99 : cluster [ERR] 1.275 repair 12 errors, 0 fixed Kind regards, Glen Baars From: ceph-users mailto:ceph-users-boun...@lists.ceph.com>> On Behalf Of Glen Baars Sent: Wednesday, 18 July 20
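
A hedged sketch of inspecting and repairing the PG named in that log excerpt (1.275):

    ceph health detail | grep -i inconsistent
    rados list-inconsistent-obj 1.275 --format=json-pretty
    ceph pg repair 1.275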

Re: [ceph-users] 10.2.6 upgrade

2018-07-18 Thread Glen Baars
Hello Sage, Thanks for the response. I am fairly new to Ceph. Are there any commands that would help confirm the issue? Kind regards, Glen Baars

[ceph-users] 10.2.6 upgrade

2018-07-18 Thread Glen Baars
a 500TB all-bluestore cluster. We are now seeing inconsistent PGs and scrub errors now that scrubbing has resumed. What is the best way forward? 1. Upgrade all nodes to 12.2.7? 2. Remove the 12.2.7 node and rebuild? Kind regards, Glen Baars

Re: [ceph-users] intermittent slow requests on idle ssd ceph clusters

2018-07-16 Thread Glen Baars
with VLANs for ceph front/backend networks. Not sure that it is the same issue but if you want me to do any tests - let me know. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Xavier Trilla Sent: Tuesday, 17 July 2018 6:16 AM To: Pavel Shub ; Ceph Users Subject

Re: [ceph-users] 12.2.6 CRC errors

2018-07-14 Thread Glen Baars
ion in the repo and upgraded. That turned up two other regressions[2][3]. We have fixes for those, but are working on an additional fix to make the damage from [3] be transparently repaired." Regards, Uwe On 14.07.2018 at 17:02, Glen Baars wrote: > Hello Ceph users!

[ceph-users] 12.2.6 CRC errors

2018-07-14 Thread Glen Baars
the issues. *** Can anyone tell me what to do? A downgrade doesn't seem like it would fix the issue. Maybe remove this node and rebuild with 12.2.5 and resync data? Wait a few days for 12.2.7? Kind regards, Glen Baars