Re: [ceph-users] CephFS client hangs if one of mount-used MDS goes offline

2020-01-20 Thread Anton Aleksandrov
    "overall": {     "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 29,     "ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)": 2     } } Anton. On 1/20/2020 5:20 PM, Wido den Hollander wrot

[ceph-users] CephFS client hangs if one of mount-used MDS goes offline

2020-01-20 Thread Anton Aleksandrov
ceph-deploy tool and haven't done much configuration. Thank you for understanding and help. :) Anton Aleksandrov.
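Since the cluster was set up with ceph-deploy, a minimal sketch of adding a standby MDS so clients can survive the loss of the active one might look like this (the hostname "mds2" is purely an example, not taken from the thread):

    # deploy an additional MDS daemon on a second host
    ceph-deploy mds create mds2

    # verify that the new daemon is registered as a standby
    ceph mds stat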

Re: [ceph-users] OSD fail to start - fsid problem with KVM

2019-11-04 Thread Anton Aleksandrov
Other OSDs are on other computers; they are just one disk = one OSD. Running "lvm activate --all", the output is: --> OSD ID 23 FSID edb31b2a-27ac-4672-8240-9ef232a5bdc8 process is active. Skipping activation. This "osd 23" runs on an SSD and is active and working, but "osd 22" does not appear here. Is
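When one OSD is missing from the activate output, a natural next step is to see whether ceph-volume can still find its LVM metadata at all; a sketch using standard ceph-volume and LVM commands:

    # list every logical volume ceph-volume knows about, with its osd id
    # and osd fsid tags; a missing osd.22 entry usually means its LV tags
    # are gone or its volume group is not visible on this host
    ceph-volume lvm list

    # inspect the raw LVM tags directly
    lvs -o lv_name,vg_name,lv_tags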

Re: [ceph-users] OSD fail to start - fsid problem with KVM

2019-11-03 Thread Anton Aleksandrov
other OSDs have 1x8TB, but this host has 2x4TB and just 8GB of RAM, so we did not want to make two separate OSDs. Anton. On 04.11.2019 00:49, Paul Emmerich wrote: On Sun, Nov 3, 2019 at 6:25 PM Anton Aleksandrov wrote: Hello community. We run Ceph on quite old hardware with quite low
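With only 8GB of RAM on an OSD host it can help to cap the memory each OSD aims for; a hedged ceph.conf sketch (the 2 GiB value is illustrative, not a recommendation from the thread):

    [osd]
    # BlueStore autotunes its caches toward this target
    osd memory target = 2147483648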

[ceph-users] OSD fail to start - fsid problem with KVM

2019-11-03 Thread Anton Aleksandrov
Hello community. We run Ceph on quite old hardware with quite low traffic. Yesterday we had to reboot one of the OSDs and after the reboot it did not come up. The error message is: [2019-11-02 15:05:07,317][ceph_volume.process][INFO  ] Running command: /usr/sbin/ceph-volume lvm trigger
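To see why the activation triggered by that command fails, one approach is to run it by hand; a sketch with placeholder id/fsid values (take the real ones from "ceph-volume lvm list"):

    # the failing systemd unit wraps "ceph-volume lvm trigger" and encodes
    # the OSD id and fsid in its name
    systemctl status ceph-volume@lvm-<osd-id>-<osd-fsid>

    # re-run activation manually to get the full error output
    ceph-volume lvm list
    ceph-volume lvm activate <osd-id> <osd-fsid>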

[ceph-users] osd-mon failed with "failed to write to db"

2019-06-27 Thread Anton Aleksandrov
Hello community, we have deployed a cluster on the latest Mimic release. We are on quite old hardware, but using CentOS 7. Monitor, manager and everything else run on the same host. The cluster has been running for some weeks without actual workload. There might have been some sort of power failure (not proved), but at
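A "failed to write to db" after a suspected power loss usually calls for checking the monitor log and the health of the storage underneath the mon store; a generic sketch (mon id and paths are placeholders):

    # full rocksdb error from the monitor's journal
    journalctl -u ceph-mon@<mon-id> --no-pager | tail -n 50

    # look for I/O errors and a full or read-only filesystem under the mon store
    dmesg | grep -i error
    df -h /var/lib/ceph/mon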

[ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-24 Thread Anton Aleksandrov
Hello community, we are building a Ceph cluster on pretty old (but free) hardware. We will have 12 nodes with 1 OSD per node and will migrate data from a single RAID5 setup, so our traffic is not very intense; we basically need more space and the possibility to expand it. We plan to have data on

Re: [ceph-users] CephFS configuration for millions of small files

2018-07-30 Thread Anton Aleksandrov
? Anton. Original message: From: Paul Emmerich, Date: 30/07/2018 17:55 (GMT+02:00), To: Anton Aleksandrov, Cc: Ceph Users, Subject: Re: [ceph-users] CephFS configuration for millions of small files: 10+1 is a bad idea for obvious reasons (not enough coding chunks, you will be offline
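For reference, a profile with more than one coding chunk would be defined roughly like this (the k=8/m=3 values and the profile name are illustrative, not a recommendation made in the thread):

    # create an EC profile that can survive more than one failure
    ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host

    # inspect the resulting profile
    ceph osd erasure-code-profile get ec-8-3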

[ceph-users] CephFS configuration for millions of small files

2018-07-30 Thread Anton Aleksandrov
Hello community, I am building my first cluster for a project that hosts millions of small (from 20KB) and big (up to 10MB) files. Right now we are moving from local 16TB RAID storage to a cluster of 12 small machines. We are planning to have 11 OSD nodes, use an erasure coding pool (10+1) and one
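For context, a CephFS on top of an erasure-coded data pool is typically built from a replicated metadata pool plus an EC data pool with overwrites enabled; a hedged sketch (pool names, PG counts and the profile name are placeholders):

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 512 512 erasure <ec-profile>

    # CephFS needs partial overwrites on the EC pool (BlueStore only)
    ceph osd pool set cephfs_data allow_ec_overwrites true

    # --force is required when the default data pool is erasure-coded
    ceph fs new cephfs cephfs_metadata cephfs_data --force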

[ceph-users] understanding pool capacity and usage

2018-07-27 Thread Anton Aleksandrov
Hello, this might sound strange, but I could not find an answer in Google or the docs; it might be called something else. I don't understand the pool capacity policy and how to set/define it. I have created a simple cluster for CephFS on 4 servers, each with a 30GB disk, so 120GB in total. On top I built a replicated
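The pieces that usually answer this are "ceph df" and, if a hard cap is wanted, a pool quota; a sketch (the pool name and quota value are examples):

    # MAX AVAIL in this output already accounts for the pool's replication
    # factor, so a size-3 pool on ~120GB of raw space shows roughly 40GB
    ceph df
    ceph df detail

    # optionally cap how much a single pool may consume
    ceph osd pool set-quota cephfs_data max_bytes 32212254720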

Re: [ceph-users] ls operation is too slow in cephfs

2018-07-17 Thread Anton Aleksandrov
You need to give us more details about your OSD setup and the hardware specification of the nodes (CPU core count, RAM amount). On 2018.07.17 10:25, Surya Bala wrote: Hi folks, we have a production cluster with 8 nodes and each node has 60 disks of 6TB each. We are using CephFS and the FUSE client
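Beyond the hardware details, it can also help to look at the MDS itself while the slow "ls" is happening; a sketch using the admin socket on the MDS host (replace <name> with the MDS id):

    # requests currently stuck inside the MDS
    ceph daemon mds.<name> dump_ops_in_flight

    # connected clients and how many capabilities each of them holds
    ceph daemon mds.<name> session ls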

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Anton Aleksandrov
I think you will get some useful information from this link: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ Even though it is dated 2014, it still gives an approximate direction. Anton. On 10.07.2018 18:25, Satish Patel wrote: Folks, I
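The test described in that post boils down to measuring synchronous 4k writes with fio; roughly the following (the device name is a placeholder, and the run destroys data on that device):

    # O_DSYNC 4k write test on the candidate journal SSD
    # WARNING: writes directly to the raw device
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test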

[ceph-users] WAL/DB partition on system SSD

2018-07-04 Thread Anton Aleksandrov
Hello, I wonder how good or bad it is to reserve space for WAL/DB on the same SSD as the OS, or is it better to separate them? And what is the recommended size for the WAL/DB partition? I am building a Luminous cluster on CentOS 7 with 2 OSDs per host; each OSD would be 2 x 8TB disks through LVM. Anton.
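For what it's worth, putting the DB (the WAL lives with it by default) on a separate SSD partition happens at OSD creation time; a hedged sketch with example device names:

    # data on the big HDD/LV, RocksDB and WAL on an SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sda4

The upstream BlueStore docs suggest sizing block.db at no less than roughly 4% of the data device, though opinions on the exact number vary.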

[ceph-users] incomplete PG for erasure coding pool after OSD failure

2018-06-26 Thread Anton Aleksandrov
Hello, we have a small cluster, initially on 4 hosts (1 OSD per host, 8TB each) with erasure coding for the data pool (k=3, m=1). After some time I added one more small host (1 OSD, 2TB). Ceph synced fine. Then I powered off one of the first 8TB hosts and terminated it. I also removed
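To see why PGs stay incomplete after the host was removed, the PG itself can be queried; a sketch with placeholder ids:

    # list stuck PGs and query one of them; look at
    # "down_osds_we_would_probe" in the recovery_state section
    ceph pg dump_stuck inactive
    ceph pg <pgid> query

    # with k=3 m=1 the pool only tolerates a single missing chunk,
    # which the profile output makes explicit
    ceph osd erasure-code-profile get <profile-name>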