Sorry, typo in the subject: this is with 12.2.2. 12.2.1 was OK until the
update to 12.2.2 last Friday.
Regards, Tobi
On 12/13/2017 11:45 AM, Tobias Prousa wrote:
Hi there,
sorry to disturb you again but I'm still not there. After restoring my CephFS to a working state (with a lot of help fro
+0x6d) [0x7fe6f963362d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
interpret this.
I have now reduced load by removing most active clients again; with only 3 mostly idle clients remaining, the MDS keeps up and running. As soon as I
start adding clients back, the trouble returns.
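(For reference, client sessions can be checked on the MDS before adding
load back; a minimal sketch, assuming the active daemon is called mds.a
and the session id is a placeholder:

  ceph daemon mds.a session ls                  # list connected client sessions
  ceph tell mds.0 client evict id=<session-id>  # evict a single client by session id

Evicting is disruptive for that client, so it is only meant for sessions
that are clearly stuck.)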
As long
Thank you very much! I feel optimistic that I now have what I need to get that
thing working again.
I'll report back...
Best regards,
Tobi
On 12/12/2017 02:08 PM, Yan, Zheng wrote:
On Tue, Dec 12, 2017 at 8:29 PM, Tobias Prousa wrote:
Hi Zheng,
the more you tell me the more w
elp!
Best regards,
Tobi
On 12/12/2017 01:10 PM, Yan, Zheng wrote:
On Tue, Dec 12, 2017 at 4:22 PM, Tobias Prousa wrote:
Hi there,
regarding my ML post from yesterday (Upgrade from 12.2.1 to 12.2.2 broke my
CephFS) I was able to get a little further with the suggested
"cephfs-table-tool
be fixed automatically. I could live with losing those 10k files,
but I do not get why the MDS switches to "standby" and marks the FS damaged,
rendering it offline.
ceph -s then reports something like: mds: cephfs-0/1/1 1:damaged
1:standby (not pasted, typed from memory)
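(For completeness: once the underlying metadata problem is resolved, a rank
that got marked damaged has to be told to rejoin explicitly; a minimal
sketch, assuming a single filesystem and rank 0:

  ceph mds repaired 0   # clear the damaged flag so a standby can take over rank 0
  ceph -s               # watch the MDS go through replay/rejoin back to active

Otherwise the FS stays offline even with standbys available.)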
Btw.
'll see if things got stable again.
Once again, thank you very much for your support. I will report back to
the ML when I have news.
Best Regards,
Tobi
On 12/11/2017 05:19 PM, Tobias Prousa wrote:
Hi Zheng,
I did some more tests with cephfs-table-tool. I realized that disaster
recovery
table-tool take_inos
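(As a sketch of what take_inos does, per the CephFS disaster-recovery docs:

  cephfs-table-tool all take_inos <max-ino>   # mark inode numbers up to <max-ino> as used

where <max-ino> is a placeholder for the highest inode number already in
use, so that newly created files cannot collide with existing inodes.)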
--
---
Dipl.-Inf. (FH) Tobias Prousa
Head of Data Logger Development
CAETEC GmbH
Industriestr. 1
D-82140 Olching
www.caetec.de
Limited liability company (GmbH)
Registered office: Olching
Commercial register: Amtsgericht München, HRB 183929
Managing director
Hi Zheng,
On 12/11/2017 04:28 PM, Yan, Zheng wrote:
On Mon, Dec 11, 2017 at 11:17 PM, Tobias Prousa wrote:
These are essentially the first commands I executed, in exactly this order.
Additionally I did a:
ceph fs reset cephfs --yes-i-really-mean-it
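(Assuming those "first commands" were the standard disaster-recovery
sequence from the Ceph docs, the rough order, on an offline filesystem and
after a journal backup, looks like:

  cephfs-journal-tool journal export backup.bin        # back up the journal first
  cephfs-journal-tool event recover_dentries summary   # write recoverable dentries back to the metadata pool
  cephfs-journal-tool journal reset                    # truncate the journal
  cephfs-table-tool all reset session                  # wipe stale client session table entries
  ceph fs reset cephfs --yes-i-really-mean-it          # force the FS map back to a single active rank

All of these are destructive and only intended for recovery situations.)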
How many active MDS were there before?
On 12/11/2017 04:05 PM, Yan, Zheng wrote:
On Mon, Dec 11, 2017 at 10:13 PM, Tobias Prousa wrote:
Hi there,
I'm running a Ceph cluster for some libvirt VMs and a CephFS providing /home
to ~20 desktop machines. There are 4 hosts running 4 MONs, 4 MGRs, 3 MDSs (1
active, 2 standby) and 28
I can help, e.g. with
further information, feel free to ask. I'll try to hang around on #ceph
(nick topro/topro_/topro__). FYI, I'm in the Central European time zone (UTC+1).
Thank you so much!
Best regards,
Tobi
Hi Ceph,
I recently realized that whenever I'm forced to restart the MDS (i.e. after a stall or crash due to excessive memory consumption; btw, my MDS host has 32 GB of RAM), especially while there are still clients with CephFS mounted, open files tend to have their metadata corrupted. Those files, when cor
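(On the memory side: in Luminous the MDS cache is bounded by
mds_cache_memory_limit, which defaults to 1 GB; as a hedged sketch,
assuming a daemon called mds.a, it can be inspected and adjusted at runtime:

  ceph daemon mds.a config get mds_cache_memory_limit
  ceph tell mds.* injectargs '--mds_cache_memory_limit 17179869184'   # e.g. 16 GiB

Whatever value is chosen has to leave headroom on the 32 GB host, and
injected values do not persist across restarts unless also put into
ceph.conf.)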