Re: [ceph-users] cluster network down

2019-10-25 Thread ceph
On 1 October 2019 08:20:08 CEST, "Lars Täuber" wrote: >Mon, 30 Sep 2019 15:21:18 +0200 >Janne Johansson ==> Lars Täuber >: >> > I don't remember where I read it, but it was told that the cluster is >> > migrating its complete traffic over to the public network when the cluster

Re: [ceph-users] Problematic inode preventing ceph-mds from starting

2019-10-25 Thread Pickett, Neale T
Hello Patrick Donnelly, Ph.D. Thank you very much for your response. After removing these objects, the mds does start up correctly, but it doesn't take long before it goes into a crash loop again. In the last week we have made a few changes to the down filesystem in an attempt to fix what we

Re: [ceph-users] Crashed MDS (segfault)

2019-10-25 Thread Gustavo Tonini
Well, I couldn't identify which object I need to "rmomapkey" as instructed in https://tracker.ceph.com/issues/38452#note-12. This is the log around the crash: https://pastebin.com/muw34Qdc On Fri, Oct 25, 2019 at 11:27 AM Yan, Zheng wrote: > On Fri, Oct 25, 2019 at 9:42 PM Gustavo Tonini >
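For readers following along: the tracker note boils down to removing a bad dentry key from a metadata-pool object's omap with the rados tool. A rough sketch, assuming the metadata pool is named cephfs_metadata and using placeholder object/key names that would have to be taken from the MDS log:

    # list the omap keys on the suspect directory object (names are placeholders)
    rados -p cephfs_metadata listomapkeys 10000000000.00000000
    # remove the offending dentry key identified in the crash log
    rados -p cephfs_metadata rmomapkey 10000000000.00000000 badfile_head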

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size - FIXED

2019-10-25 Thread Steven Vacaroaia
OK .. fixed. Just for posterity, this was not a Ceph / gwcli issue but a VMware quirk. There were 2 issues: 1. authentication issues (not sure why). These were addressed by running auth chap=cephuser/password for all gwcli hosts, then esxcli iscsi adapter auth chap set --direction=uni
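A rough sketch of the two fixes described, with the target IQN, initiator IQN, adapter name and credentials as placeholders (the exact gwcli path and esxcli flags may differ by version):

    # on the gateway, inside gwcli, for each host entry
    /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts/iqn.1998-01.com.vmware:esx01> auth chap=cephuser/password
    # on each ESXi host, set matching CHAP credentials on the software iSCSI adapter
    esxcli iscsi adapter auth chap set --adapter=vmhba64 --direction=uni --authname=cephuser --secret=password --level=required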

[ceph-users] Stuck/confused ceph cluster after physical migration of servers.

2019-10-25 Thread Sam Skipsey
Hello everyone, So: we have a mimic cluster (on the most recent mimic release), 3 mons, 8 data nodes (160 OSDs in total). Recently, we had to physically migrate the cluster to a different location, and had to do this in one go (partly because the new location does not currently have direct

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
The error seems to indicate mismatched passwords on the gwcli host; /var/log/messages contains the following: osd02 kernel: CHAP user or password not set for Initiator ACL Oct 25 10:37:22 osd02 kernel: Security negotiation failed. Oct 25 10:37:22 osd02 kernel: iSCSI Login negotiation failed. Oct

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
Spoke too soon, still the same issue even after re-entering credentials. Here is an excerpt from the ESXi server: [esx.problem.storage.iscsi.discovery.login.error] iSCSI discovery to 10.10.35.202 on vmhba64 failed. The Discovery target returned a login error of: 0201. On Fri, 25 Oct 2019 at 10:08, Steven

Re: [ceph-users] Crashed MDS (segfault)

2019-10-25 Thread Yan, Zheng
On Fri, Oct 25, 2019 at 9:42 PM Gustavo Tonini wrote: > > Running "cephfs-data-scan init --force-init" solved the problem. > > Then I had to run "cephfs-journal-tool event recover_dentries summary" and > truncate the journal to fix the corrupted journal. > > CephFS worked well for approximately

Re: [ceph-users] Decreasing the impact of reweighting osds

2019-10-25 Thread Robert LeBlanc
You can try adding osd op queue = wpq and osd op queue cut off = high to all the OSD ceph configs and restarting. That has made reweighting pretty painless for us. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Tue, Oct 22, 2019 at 8:36 PM
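For reference, a minimal ceph.conf sketch of the settings Robert describes (added to the [osd] section on every OSD host, followed by an OSD restart; option names assume Luminous or later):

    [osd]
    osd op queue = wpq
    osd op queue cut off = high

After editing the config, restart the OSDs on each host, e.g. with systemctl restart ceph-osd.target.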

Re: [ceph-users] Decreasing the impact of reweighting osds

2019-10-25 Thread Robert LeBlanc
You can try adding Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Tue, Oct 22, 2019 at 8:36 PM David Turner wrote: > > Most times you are better served with simpler settings like > osd_recovery_sleep, which has 3 variants if you have
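For context, the three osd_recovery_sleep variants David mentions are the per-device-class settings; a sketch showing them with their Luminous-era default values (the numbers here are examples to tune, not recommendations):

    [osd]
    osd recovery sleep hdd = 0.1
    osd recovery sleep ssd = 0.0
    osd recovery sleep hybrid = 0.025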

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
I can confirm that, after re-entering credentials for the target on each ESXi server and rescanning storage, the device appears and the datastore can be increased. Thanks for your help and patience. Steven On Fri, 25 Oct 2019 at 09:59, Steven Vacaroaia wrote: > I noticed this > >
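The rescan step can also be done from the ESXi shell rather than the vSphere UI; a minimal sketch, assuming the software iSCSI adapter is vmhba64:

    # rediscover targets and rescan the adapter for new or resized LUNs
    esxcli iscsi adapter discovery rediscover --adapter=vmhba64
    esxcli storage core adapter rescan --adapter=vmhba64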

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
I noticed this: [vob.iscsi.discovery.login.error] discovery failure on vmhba64 to 10.10.35.202 because the target returned a login status of 0201. Will a restart of rbd services require re-entering CHAP credentials on targets? Steven On Fri, 25 Oct 2019 at 09:57, Steven Vacaroaia wrote: >

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
Yes, I did. I even restarted the rbd-target services. uname -a Linux osd01.chi.medavail.net 4.18.11-1.el7.elrepo.x86_64 #1 SMP Sat Sep 29 09:42:38 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux [root@osd01 ~]# rpm -qa | grep tcmu tcmu-runner-1.4.0-1.el7.x86_64 On Fri, 25 Oct 2019 at 09:51, Jason Dillaman

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Jason Dillaman
On Fri, Oct 25, 2019 at 9:49 AM Steven Vacaroaia wrote: > > Thanks for your prompt response > Unfortunately, still no luck > The device shows with the correct size under "Device backing" but not showing at all > under "Increase datastore capacity" > > resize rbd.rep01 7T > ok > /disks> ls > o- disks

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
Thanks for your prompt response. Unfortunately, still no luck. The device shows with the correct size under "Device backing" but is not showing at all under "Increase datastore capacity". resize rbd.rep01 7T ok /disks> ls o- disks

Re: [ceph-users] Crashed MDS (segfault)

2019-10-25 Thread Gustavo Tonini
Running "cephfs-data-scan init --force-init" solved the problem. Then I had to run "cephfs-journal-tool event recover_dentries summary" and truncate the journal to fix the corrupted journal. CephFS worked well for approximately 3 hours and then our MDS crashed again, apparently due to the bug

Re: [ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Jason Dillaman
On Fri, Oct 25, 2019 at 9:13 AM Steven Vacaroaia wrote: > > Hi, > I am trying to increase the size of a datastore made available through ceph iscsi > rbd > The steps I followed are depicted below > Basically gwcli reports correct data and even the VMware device capacity is > correct but when I tried to

[ceph-users] iscsi resize -vmware datastore cannot increase size

2019-10-25 Thread Steven Vacaroaia
Hi, I am trying to increase the size of a datastore made available through ceph iscsi rbd. The steps I followed are depicted below. Basically gwcli reports correct data and even the VMware device capacity is correct, but when I tried to increase it there is no device listed. I am using
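For anyone landing on this thread later, the overall flow is roughly: grow the image on the iSCSI gateway, then rescan and grow the VMFS datastore on the ESXi side. A minimal sketch, echoing the disk name from this thread (the pool/image separator and sizes may differ by gwcli version):

    # on the iSCSI gateway, inside gwcli
    /disks> resize rbd.rep01 7T
    # on each ESXi host, rescan so the larger LUN is seen
    esxcli storage core adapter rescan --all
    # then grow the VMFS datastore from the vSphere UI ("Increase Datastore Capacity")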

Re: [ceph-users] Add one more public networks for ceph

2019-10-25 Thread Wido den Hollander
On 10/25/19 5:27 AM, luckydog xf wrote: > Hi, list, > > Currently my ceph cluster has 3 MONs and 9 OSDs, and everything is fine. > Now I plan to add one more public network; the initial public network > is 103.x/24, and the target network is 109.x/24. And 103 cannot reach > 109, as I don't
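For what it's worth, Ceph accepts a comma-separated list of public networks, so the config side is a one-line change; a minimal ceph.conf sketch with placeholder CIDRs (the harder problem, as discussed in the thread, is that the MONs must be reachable from both networks, which routing between 103.x and 109.x would normally provide):

    [global]
    public network = 103.0.0.0/24, 109.0.0.0/24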

Re: [ceph-users] ceph balancer do not start

2019-10-25 Thread Konstantin Shalygin
connections coming from qemu vm clients. It's generally easy to upgrade. Just switch your Ceph yum repo from jewel to luminous, then update `librbd` on your hypervisors and migrate your VMs. It's fast and involves no downtime for your VMs. k
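A rough sketch of what that looks like on a CentOS/RHEL hypervisor, assuming the repo file lives at /etc/yum.repos.d/ceph.repo (the path and package name may differ on your distribution):

    # point the Ceph repo at luminous instead of jewel
    sed -i 's/jewel/luminous/g' /etc/yum.repos.d/ceph.repo
    yum clean metadata
    # update the client library used by qemu
    yum update -y librbd1
    # finally, live-migrate each VM so its qemu process picks up the new librbd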