Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread Sébastien VIGNERON
% or above). I saw some messages on the list about the fstrim tool, which can help reclaim unused free space, but I don't know if it applies to your case. Cordialement / Best regards, Sébastien VIGNERON CRIANN, Ingénieur / Engineer Technopôle du Madrillet 745, avenue de l'Université 76800

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread Sébastien VIGNERON
and a recovery in process. Do your OSDs show some rebalancing of your data? Does their usage percentage change over time? (check "ceph osd df")

Re: [ceph-users] how to improve performance

2017-11-20 Thread Sébastien VIGNERON
As a jumbo frame test, can you try the following? ping -M do -s 8972 -c 4 IP_of_other_node_within_cluster_network If you get "ping: sendto: Message too long", jumbo frames are not enabled.
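The 8972-byte payload in the ping above is not arbitrary: it is the usual 9000-byte jumbo MTU minus the 20-byte IP header and the 8-byte ICMP header. A minimal sketch of the arithmetic (the peer address is a placeholder):

```shell
# Jumbo-frame ping payload: MTU minus IP (20 B) and ICMP (8 B) headers.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"   # 8972
# With fragmentation forbidden (-M do), against a placeholder peer:
#   ping -M do -s "$payload" -c 4 10.0.0.2
```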

Re: [ceph-users] how to improve performance

2017-11-20 Thread Sébastien VIGNERON
Hi, MTU size? Did you run an iperf test to see the raw bandwidth?

Re: [ceph-users] how to improve performance

2017-11-20 Thread Sébastien VIGNERON
Your performance hit may come from here. When OSD daemons try to send a big frame, an MTU misconfiguration blocks it and they must resend it with a smaller size. On some switches, you have to set both the global and the per-interface MTU sizes.

[ceph-users] Needed help to setup a 3-way replication between 2 datacenters

2017-11-09 Thread Sébastien VIGNERON
"op": "take", "item": -1, "item_name": "default" }, { "op": "chooseleaf_firstn", "num": 0, "type": "host"

Re: [ceph-users] pg count question

2018-08-08 Thread Sébastien VIGNERON
The formula seems correct for a 100 pg/OSD target. > On 8 August 2018 at 04:21, Satish Patel wrote: > > Thanks! > > Do you have any comments on Question 1? > > On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON > wrote: >> Question 2: >> >> ceph
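The rule of thumb behind that target can be sketched as shell arithmetic (the 14 OSDs and size 3 are taken from the thread; 100 is the per-OSD target discussed above):

```shell
# Total PGs ~= (OSDs * target per OSD) / replica count,
# rounded up to the next power of two.
osds=14
per_osd=100
size=3
raw=$(( osds * per_osd / size ))   # 466
pgs=1
while [ "$pgs" -lt "$raw" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"   # 512
```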

Re: [ceph-users] pg count question

2018-08-07 Thread Sébastien VIGNERON
Question 2: ceph osd pool set-quota max_objects|max_bytes: set object or byte limit on pool > On 7 August 2018 at 16:50, Satish Patel wrote: > > Folks, > > I am a little confused so just need clarification, I have 14 osd in my >
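set-quota takes raw counts, so byte quotas are easiest to compute up front. A sketch (the pool name and the 100 GiB figure are hypothetical; the ceph commands need a running cluster):

```shell
# 100 GiB expressed in bytes for max_bytes.
quota_bytes=$(( 100 * 1024 * 1024 * 1024 ))
echo "$quota_bytes"   # 107374182400
# Applied to a placeholder pool:
#   ceph osd pool set-quota mypool max_bytes "$quota_bytes"
#   ceph osd pool set-quota mypool max_objects 1000000
```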

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Sébastien VIGNERON
# for a specific pool: ceph osd pool get your_pool_name size > On 20 July 2018 at 10:32, Sébastien VIGNERON wrote: > > #for all pools: > ceph osd pool ls detail > > >> On 20 July 2018 at 09:02, si...@turka.nl wrote: >> >> Hi, >>

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Sébastien VIGNERON
#for all pools: ceph osd pool ls detail > On 20 July 2018 at 09:02, si...@turka.nl wrote: > > Hi, > > How can I see the size of a pool? When I create a new empty pool I can see > the capacity of the pool using 'ceph df', but as I start putting data in > the pool the capacity is decreasing.

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Sébastien VIGNERON
Correct, sorry, I have just read the first question and answered too quickly. As far as I know, the available space is "shared" (the space is a combination of the OSD drives and the crushmap) between pools using the same device class, but you can define a quota for each pool if needed. ceph osd pool

Re: [ceph-users] Issues/questions: ceph df (luminous 12.2.7)

2018-07-21 Thread Sébastien VIGNERON
Hi, > On 21 July 2018 at 11:52, Marc Roos wrote: > > > > 1. Why is ceph df not always showing 'units' G M k Ceph's default plain output shows human-readable values. > > [@c01 ~]# ceph df > GLOBAL: > SIZE AVAIL RAW USED %RAW USED > 81448G 31922G 49526G
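As a sanity check on those GLOBAL figures, %RAW USED is simply RAW USED divided by SIZE; a one-liner reproduces it:

```shell
# 49526G used out of 81448G total, as in the ceph df output quoted above.
awk 'BEGIN { printf "%.2f\n", 49526 / 81448 * 100 }'   # 60.81
```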

Re: [ceph-users] Need advice on Ceph design

2018-07-18 Thread Sébastien VIGNERON
Hello, What is your expected workload? VMs, primary storage, backup, object storage, ...? How many disks do you plan to put in each OSD node? How many CPU cores? How much RAM per node? Ceph access protocol(s): CephFS, RBD or objects? How do you plan to give access to the storage to you

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Sébastien VIGNERON
ble and the ceph-deploy tool is changed. I think it may be a kernel version consideration: not all distros have the minimum kernel version (and features) needed for full use of Luminous.

Re: [ceph-users] Luminous and Calamari

2018-03-02 Thread Sébastien VIGNERON
Hi, Did you look at the OpenAttic project?