On 1 October 2019 08:20:08 CEST, "Lars Täuber" wrote:
>Mon, 30 Sep 2019 15:21:18 +0200
>Janne Johansson ==> Lars Täuber:
>> >
>> > I don't remember where I read it, but it was told that the cluster is
>> > migrating its complete traffic over to the public network when the cluster
>> >
Hello Patrick Donnelly, Ph.D. Thank you very much for your response.
After removing these objects, the mds does start up correctly. But it doesn't
take long until it goes into a crash loop again.
In the last week we have made a few changes to the down filesystem in an
attempt to fix what we
Well, I couldn't identify which object I need to "rmomapkey" as instructed
in https://tracker.ceph.com/issues/38452#note-12.
This is the log around the crash: https://pastebin.com/muw34Qdc
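In case it helps anyone following along, this is roughly how such an omap key can be inspected and removed with rados (a sketch only; the metadata pool name "cephfs_metadata", the dirfrag object "10000000000.00000000" and the dentry key are illustrative placeholders, not values taken from this thread):

# list the dentry keys stored in the directory object's omap
rados -p cephfs_metadata listomapkeys 10000000000.00000000
# remove the single corrupt dentry key identified above
rados -p cephfs_metadata rmomapkey 10000000000.00000000 'badfile_head'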
On Fri, Oct 25, 2019 at 11:27 AM Yan, Zheng wrote:
> On Fri, Oct 25, 2019 at 9:42 PM Gustavo Tonini
>
OK .. fixed
Just for posterity, this was not a Ceph / gwcli issue but a VMware quirk.
There were 2 issues:
1. authentication issues (not sure why)
These were addressed by running
auth chap=cephuser/password for all gwcli hosts
then
esxcli iscsi adapter auth chap set --direction=uni
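For reference, a slightly fuller version of those two steps might look like the following (a sketch only; the target and initiator IQNs, the adapter name and the credentials are made up for illustration):

# in gwcli, on the host entry under the target
/iscsi-targets/iqn.2019-10.net.example:target/hosts/iqn.1998-01.com.vmware:esx01> auth chap=cephuser/password

# on each ESXi host, set the matching unidirectional CHAP credentials
esxcli iscsi adapter auth chap set --direction=uni --level=required --authname=cephuser --secret=password --adapter=vmhba64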
Hello everyone,
So: we have a mimic cluster (on the most recent mimic release), 3 mons, 8
data nodes (160 OSDs in total).
Recently, we had to physically migrate the cluster to a different location,
and had to do this in one go (partly because the new location does not
currently have direct
The error seems to indicate mismatched passwords.
On the gwcli host, /var/log/messages contains the following:
osd02 kernel: CHAP user or password not set for Initiator ACL
Oct 25 10:37:22 osd02 kernel: Security negotiation failed.
Oct 25 10:37:22 osd02 kernel: iSCSI Login negotiation failed.
Oct
Spoke too soon.
Still the same issue even after re-entering credentials.
Here is an excerpt from the ESXi server:
[esx.problem.storage.iscsi.discovery.login.error] iSCSI discovery to
10.10.35.202 on vmhba64 failed. The Discovery target returned a login error
of: 0201.
On Fri, 25 Oct 2019 at 10:08, Steven
On Fri, Oct 25, 2019 at 9:42 PM Gustavo Tonini wrote:
>
> Running "cephfs-data-scan init --force-init" solved the problem.
>
> Then I had to run "cephfs-journal-tool event recover_dentries summary" and
> truncate the journal to fix the corrupted journal.
>
> CephFS worked well for approximately
You can try adding
osd op queue = wpq
osd op queue cut off = high
to all the OSD ceph configs and restarting. That has made reweighting
pretty painless for us.
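For anyone wanting to try this, a minimal sketch of what that looks like in ceph.conf on each OSD node (restart the OSDs afterwards; on releases with the central config database the same options can likely also be set with "ceph config set osd ..."):

[osd]
osd op queue = wpq
osd op queue cut off = high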
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Oct 22, 2019 at 8:36 PM
You can try adding
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Oct 22, 2019 at 8:36 PM David Turner wrote:
>
> Most times you are better served with simpler settings like
> osd_recovery_sleep, which has 3 variants if you have
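The variants being referred to are, I believe, the per-device-class ones; assuming that, something like the following adjusts them at runtime (the sleep values here are examples only, not recommendations from this thread):

ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.1'
ceph tell osd.* injectargs '--osd_recovery_sleep_ssd 0.0'
ceph tell osd.* injectargs '--osd_recovery_sleep_hybrid 0.025'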
I can confirm that, after re-entering credentials for the target on each
ESXi server and rescanning storage, the device appears and the datastore
can be increased.
Thanks for your help and patience
Steven
On Fri, 25 Oct 2019 at 09:59, Steven Vacaroaia wrote:
> I noticed this
>
>
I noticed this
[vob.iscsi.discovery.login.error] discovery failure on vmhba64 to
10.10.35.202 because the target returned a login status of 0201.
Will a restart of rbd services require re-entering CHAP credentials on the
targets?
Steven
On Fri, 25 Oct 2019 at 09:57, Steven Vacaroaia wrote:
>
Yes, I did.
I even restarted the rbd-target services.
uname -a
Linux osd01.chi.medavail.net 4.18.11-1.el7.elrepo.x86_64 #1 SMP Sat Sep 29
09:42:38 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@osd01 ~]# rpm -qa | grep tcmu
tcmu-runner-1.4.0-1.el7.x86_64
On Fri, 25 Oct 2019 at 09:51, Jason Dillaman
On Fri, Oct 25, 2019 at 9:49 AM Steven Vacaroaia wrote:
>
> Thanks for your prompt response
> Unfortunately, still no luck
> Device shows with correct size under "Device backing" but not showing at all
> under "Increase datastore capacity"
>
> resize rbd.rep01 7T
> ok
> /disks> ls
> o- disks
Thanks for your prompt response
Unfortunately, still no luck.
The device shows with the correct size under "Device backing" but is not
showing at all under "Increase datastore capacity".
resize rbd.rep01 7T
ok
/disks> ls
o- disks
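For what it's worth, the sequence that usually makes a grown RBD-backed LUN visible to ESXi is roughly the following (a sketch; the disk name is the one from this thread, the adapter name vmhba64 is assumed):

# grow the backing image from gwcli
/disks> resize rbd.rep01 7T

# on each ESXi host, rescan the software iSCSI adapter so the new size is seen
esxcli storage core adapter rescan --adapter=vmhba64
# (or rescan everything: esxcli storage core adapter rescan --all)

After the rescan the larger size should show up under "Increase Datastore Capacity".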
Running "cephfs-data-scan init --force-init" solved the problem.
Then I had to run "cephfs-journal-tool event recover_dentries summary" and
truncate the journal to fix the corrupted journal.
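For the record, the command sequence referred to here is roughly the following (a sketch matching the order described above; the --rank argument with filesystem name and rank 0 is my assumption, and these tools should only be run with the MDSs stopped):

cephfs-data-scan init --force-init
cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary
cephfs-journal-tool --rank=<fs_name>:0 journal reset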
CephFS worked well for approximately 3 hours and then our MDS crashed
again, apparently due to the bug
On Fri, Oct 25, 2019 at 9:13 AM Steven Vacaroaia wrote:
>
> Hi,
> I am trying to increase the size of a datastore made available through ceph
> iscsi rbd
> The steps I followed are depicted below
> Basically gwcli reports correct data and even the VMware device capacity is
> correct but when I tried to
Hi,
I am trying to increase the size of a datastore made available through ceph
iscsi rbd.
The steps I followed are depicted below.
Basically gwcli reports correct data and even the VMware device capacity is
correct, but when I tried to increase it there is no device listed.
I am using
On 10/25/19 5:27 AM, luckydog xf wrote:
> Hi, list,
>
> Currently my ceph cluster has 3 MONs and 9 OSDs, and everything is fine.
> Now I plan to add one more public network; the initial public network
> is 103.x/24, and the target network is 109.x/24. And 103 cannot reach
> 109, as I don't
> 109, as I don't
connections coming from qemu vm clients.
It's generally easy to upgrade. Just switch your Ceph yum repo from
jewel to luminous.
Then update `librbd` on your hypervisors and migrate your VMs. It's
fast and without downtime for your VMs.
k
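As a rough sketch of that on a CentOS hypervisor (repo path and package name assumed; adjust for your setup):

# point the Ceph repo at luminous instead of jewel
sed -i 's/jewel/luminous/g' /etc/yum.repos.d/ceph.repo
yum clean metadata
# update the client library used by qemu
yum update librbd1
# then live-migrate each VM so it reconnects through the new librbd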