[ceph-users] Re: Ceph RBD iSCSI compatibility

2020-09-03 Thread Salsa
Joe, sorry, I should have been clearer. The incompatible RBD features are exclusive-lock, journaling, object-map and such. The info comes from here: https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html

-- Salsa
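[Editor's note: a minimal sketch of disabling those features on an existing image, per the SUSE doc linked above. The pool name "rbd" and image name "vmware-lun0" are placeholders, not names from this thread.]

```shell
# Disable the RBD features the iSCSI gateway cannot handle.
# Listing dependent features together (fast-diff depends on object-map)
# lets rbd resolve the ordering.
rbd feature disable rbd/vmware-lun0 fast-diff object-map deep-flatten journaling exclusive-lock

# Verify which features remain enabled:
rbd info rbd/vmware-lun0 | grep features
```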

[ceph-users] Re: Ceph RBD iSCSI compatibility

2020-09-03 Thread Joe Comeau
Salsa,

Again, the doc shows (and we have used) layering only as a feature for iSCSI. Further down it gives you specific settings for the LUNs/images. In our case we let VMware/Veeam snapshot and make copies of our VMs. There is a new beta of SES that bypasses the iSCSI gateways for Windows servers
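[Editor's note: creating an image with only the layering feature, as described above, would look roughly like this; pool/image names and size are placeholders.]

```shell
# Create an image with only the "layering" feature enabled,
# suitable for export through the iSCSI gateway.
rbd create rbd/vmware-lun0 --size 1T --image-feature layering
```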

[ceph-users] Re: Ceph RBD iSCSI compatibility

2020-09-03 Thread Joe Comeau
Here is a link to the iSCSI/RBD implementation guide from SUSE for this year, for VMware (Hyper-V should be similar): https://www.suse.com/media/guide/suse-enterprise-storage-implementation-guide-for-vmware-esxi-guide.pdf

We've been running rbd/iscsi for 4 years.

Thanks
Joe

>>> Salsa 9/2/2020

[ceph-users] Re: Can 16 server grade ssd's be slower then 60 hdds? (no extra journals)

2020-09-03 Thread Reed Dier
Well, it sounds like the pdcache setting may not be possible for SSDs, which is the first I've ever heard of this. I actually just checked another system that I forgot was behind a 3108 controller with SSDs (not Ceph, so I wasn't considering it). It looks like I ran into the same issue during

[ceph-users] Re: Change fsid of Ceph cluster after splitting it into two clusters

2020-09-03 Thread Wido den Hollander
On 9/3/20 3:55 PM, Dan van der Ster wrote:
> Hi Wido,
>
> Out of curiosity, did you ever work out how to do this?

Nope, never did this. So there are two clusters running with the same fsid :-)

Wido

> Cheers,
> Dan
>
> On Tue, Feb 12, 2019 at 6:17 PM Wido den Hollander wrote:
>>
>> Hi,
>>

[ceph-users] Messenger v2 and IPv6-only still seems to prefer IPv4 (OSDs stuck in booting state)

2020-09-03 Thread Wido den Hollander
Hi,

Last night I spent a couple of hours debugging an issue where OSDs would be marked as 'up', but then PGs stayed in the 'peering' state. Looking through the admin socket I saw these OSDs were in the 'booting' state. Looking at the OSDMap I saw this:

osd.3 up in weight 1 up_from 26
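[Editor's note: for an IPv6-only deployment, the usual knobs for forcing the messenger to bind IPv6 rather than falling back to IPv4 are the ms_bind_* options; a sketch, assuming no IPv4 connectivity at all.]

```ini
# ceph.conf -- force the messenger to bind IPv6 only
[global]
ms_bind_ipv6 = true
ms_bind_ipv4 = false
```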

[ceph-users] Re: Is it possible to change the cluster network on a production ceph?

2020-09-03 Thread Wido den Hollander
On 9/3/20 3:38 PM, pso...@alticelabs.com wrote:
> Hello people,
> I am trying to change the cluster network in a production Ceph. I'm having
> problems: after changing the ceph.conf file and restarting an OSD, the cluster
> always goes to HEALTH_ERROR with blocked requests. Only by
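[Editor's note: for reference, the settings being changed here live in ceph.conf; the subnets below are placeholders. Every OSD must be able to reach every other OSD on the new cluster network before OSDs are restarted onto it.]

```ini
# ceph.conf -- public vs. cluster (replication) traffic split
[global]
public_network  = 192.168.10.0/24
cluster_network = 192.168.20.0/24
```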


[ceph-users] Re: failed to authpin, subtree is being exported in 14.2.11

2020-09-03 Thread Stefan Kooman
On 2020-09-03 09:21, Dan van der Ster wrote:
> Any ideas what might have triggered this?

This looks like this issue: https://tracker.ceph.com/issues/42338

Do you use snapshots on this fs?

Gr. Stefan

[ceph-users] Re: Can 16 server grade ssd's be slower then 60 hdds? (no extra journals)

2020-09-03 Thread VELARTIS Philipp Dürhammer
In theory it should be possible to do this (to change the "Block SSD Write Disk Cache Change = Yes" setting):

1. Run MegaSCU -adpsettings -read -f mfc.ini -a0 to dump the controller settings to a file.
2. Edit the mfc.ini file, setting "blockSSDWriteCacheChange" to 0 instead of 1.
3. Run MegaSCU -adpsettings -write -f mfc.ini -a0 to write the edited settings back.

[ceph-users] Re: slow "rados ls"

2020-09-03 Thread Stefan Kooman
On 2020-09-02 23:50, Wido den Hollander wrote:
>
> Indeed, it shouldn't be.
>
> This config option should make it easier in a future release:
> https://github.com/ceph/ceph/commit/93e4c56ecc13560e0dad69aaa67afc3ca053fb4c
>
> [osd]
> osd_compact_on_start = true
>
> Then just restart the
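[Editor's note: on releases without that option, a compaction can be triggered manually per OSD; a sketch, to be run against a live cluster with the OSD id adjusted.]

```shell
# Trigger a RocksDB compaction on one OSD via the admin interface.
ceph tell osd.0 compact

# On releases that ship the option discussed above, set it persistently
# so OSDs compact on every start:
ceph config set osd osd_compact_on_start true
```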

[ceph-users] Re: cephadm grafana url

2020-09-03 Thread Robert Sander
Hi,

On 03.09.20 at 11:57, Ni-Feng Chang wrote:
>
> The host where the browser is on needs to be able to reach the Grafana
> instance by the `ceph01` hostname (maybe add an entry to /etc/hosts?).

Yes, that is my intermediate solution. But I cannot tell my trainees next week that they have to

[ceph-users] Re: cephadm grafana url

2020-09-03 Thread Ni-Feng Chang
Hi Robert,

The host where the browser is on needs to be able to reach the Grafana instance by the `ceph01` hostname (maybe add an entry to /etc/hosts?). There was a fix [1] that allows a custom Grafana URL (so cephadm doesn't override it). It's backported; maybe the version you are using doesn't
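[Editor's note: the Grafana URL override being discussed is set through the dashboard module; the hostname below is a placeholder for an address the trainees' browsers can actually reach.]

```shell
# Point the dashboard at a Grafana address reachable from the browser.
ceph dashboard set-grafana-api-url https://ceph01.example.com:3000
```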

[ceph-users] Re: java client cannot visit rgw behind nginx

2020-09-03 Thread Zhenshi Zhou
Hi,

Yep, I think the header is the cause too. I modified the configuration but it still gets a 403 error, so I suspect the header may not be transferred to the backends. But if I set nginx to proxy at layer 4 rather than layer 7, it works well.

On Thu, Sep 3, 2020 at 12:53 PM, Mark Kirkwood wrote:
> I think you

[ceph-users] Re: java client cannot visit rgw behind nginx

2020-09-03 Thread Mark Kirkwood
I think you might need to set some headers. Here is what we use (connecting to Swift, but it should be generally applicable). We are running nginx and the Swift proxy server on the same host, but again, maybe some useful ideas for you to try (below). Note that we explicitly stop nginx writing
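[Editor's note: the actual config was truncated in the archive; below is a hedged sketch of the kind of header handling being described, for radosgw behind nginx. The server name, upstream address, and port are placeholders, not values from this thread.]

```nginx
# nginx reverse proxy in front of radosgw -- names/ports are placeholders.
server {
    listen 443 ssl;
    server_name s3.example.com;

    location / {
        # Pass the original Host header through so signature
        # validation on the backend still matches.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Avoid nginx buffering large object uploads to disk.
        proxy_request_buffering off;

        proxy_pass http://127.0.0.1:7480;
    }
}
```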

[ceph-users] Re: failed to authpin, subtree is being exported in 14.2.11

2020-09-03 Thread Dan van der Ster
On Thu, Sep 3, 2020 at 10:25 AM Stefan Kooman wrote:
>
> On 2020-09-03 09:21, Dan van der Ster wrote:
>
> > Any ideas what might have triggered this?
>
> This looks like issue: https://tracker.ceph.com/issues/42338
>
> Do you use snapshots on this fs?

We don't use snapshots, but *maybe* sometime

[ceph-users] failed to authpin, subtree is being exported in 14.2.11

2020-09-03 Thread Dan van der Ster
Hi all,

We had stuck MDS ops this morning on a 14.2.11 CephFS cluster. I tried to ls the path from another client and that blocked. The ops were like this:

# egrep 'desc|flag|age' ops.txt
"description": "client_request(client.1212755100:37475 lookup #0x1003e229d38/analytics-logs

[ceph-users] Re: cephadm grafana url

2020-09-03 Thread Robert Sander
Hi,

On 02.09.20 at 23:17, Dimitri Savineau wrote:
> Did you try to restart the dashboard mgr module after your change?
>
> # ceph mgr module disable dashboard
> # ceph mgr module enable dashboard

Yes, I should have mentioned that. No effect, though.

Regards
--
Robert Sander
Heinlein Support