Hello Florian,
> On Tue, Jun 21, 2016 at 3:11 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
>> Hello,
>>
>> I found a performance drop between kernel 3.13.0-88 (the default kernel on Ubuntu
>> Trusty 14.04) and kernel 4.4.0.24.14 (the default kernel on Ubuntu Xenial 16.04).
Kernel 3.13.0-88-generic : ceph tell osd.ID => average ~81MB/s
Kernel 4.2.0-38-generic : ceph tell osd.ID => average ~109MB/s
Kernel 4.4.0-24-generic : ceph tell osd.ID => average ~50MB/s
Does anyone see similar behaviour on their cluster?
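A loop like this can reproduce the per-OSD figures above (a sketch: it assumes a reachable cluster and jq; newer releases emit JSON from the bench command, older ones print plain text):

```shell
# Sketch: collect per-OSD write throughput with "ceph tell osd.N bench".
# Guarded so it is a no-op without a running cluster.
to_mbps() { echo $(( $1 / 1048576 )); }   # bytes/s -> MB/s
if command -v ceph >/dev/null 2>&1; then
  for id in $(ceph osd ls); do
    bps=$(ceph tell "osd.${id}" bench --format=json | jq -r '.bytes_per_sec | floor')
    echo "osd.${id}: $(to_mbps "$bps") MB/s"
  done
fi
```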
Best regards
On 23/06/2016 08:25, Sarni Sofiane wrote:
> Hi Florian,
>
> On 23.06.16 06:25, "ceph-users on behalf of Florian Haas"
> <ceph-users-boun...@lists.ceph.com on behalf of flor...@hastexo.com> wrote:
>
>> On Wed, Jun 22, 2016 at 10:56 AM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
the process is going to switch into kernel state "D+" (uninterruptible sleep).
You won't be able to kill that process, even with kill -9; to stop it, you will
have to reboot the server.
you can have a look here at how to manipulate the SCSI bus:
http://fibrevillage.com/storage/279-hot-add-remove-rescan-of-scsi-devices-on-linux
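As a sketch, the sysfs interface described at that link boils down to the following (device name sdf is taken from this thread; requires root, and the paths only exist when the device is present, hence the guards):

```shell
# Sketch: hot-remove then rescan a SCSI disk via sysfs.
DEV=sdf
if [ -w "/sys/block/${DEV}/device/delete" ]; then
  echo 1 > "/sys/block/${DEV}/device/delete"    # detach the device
fi
for host in /sys/class/scsi_host/host*; do
  if [ -w "${host}/scan" ]; then
    echo "- - -" > "${host}/scan"               # rescan channel/target/lun
  fi
done
```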
you can install th
it.
>
> Why can't I umount?
does "lsof -n | grep /dev/sdf" return anything?
And are you sure /dev/sdf is the disk for osd 13?
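A quick way to double-check (sketch; the paths and the ceph-disk tool assume a Jewel-era deployment):

```shell
# Sketch: confirm which device actually backs OSD 13 before acting on it.
OSD_ID=13
if command -v ceph >/dev/null 2>&1; then
  df -h "/var/lib/ceph/osd/ceph-${OSD_ID}" || true   # mountpoint -> backing device
  lsof -n | grep /dev/sdf || true                    # anything still holding the device?
  command -v ceph-disk >/dev/null 2>&1 && ceph-disk list || true  # per-disk view
fi
```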
--
Yoann Moulin
EPFL IC-IT
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
-----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Yoann Moulin
>> Sent: 09 March 2016 16:01
>> To: ceph-us...@ceph.com
>> Subject: [ceph-users] how to choose EC plugins and rulesets
>>
>> Hello,
>>
>>
https://en.wikipedia.org/wiki/GUID_Partition_Table
That's what I read on the IRC channel; it seems to be a common mistake. It might
be worth mentioning in the docs or the FAQ.
Yoann
> On Tue, Mar 8, 2016 at 5:21 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
>
> Hello,
of Memory
OS Storage: 2 x SSD 240GB Intel S3500 DC (raid 1)
Journal Storage: 2 x SSD 400GB Intel S3300 DC (no Raid)
OSD Disk: 10 x HGST ultrastar-7k6000 6TB
Network: 1 x 10Gb/s
OS: Ubuntu 14.04
--
Yoann Moulin
EPFL IC-IT
t let ceph-disk create journal
partition.
Yoann
> On Thu, Mar 3, 2016 at 3:42 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
>
> Hello,
>
> I'm (almost) a new user of ceph (couple of month). In my university, we
> start to
>
1280 active+clean
> 1024 creating+incomplete
We installed this cluster at the beginning of February. We have barely used it,
except at the beginning to troubleshoot an issue with ceph-ansible. We did not
push any data nor create any pool. What could explain this behaviour?
T
set the pg_num to 16384 at the beginning ? 16384 is high, isn't it ?
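For comparison, the usual rule of thumb from the placement-group documentation (total PGs ≈ OSDs × 100 / replica size, rounded up to the next power of two) can be sketched as:

```shell
# Rule-of-thumb PG count; the inputs below are illustrative, not this cluster's.
pg_target() {  # usage: pg_target <num_osds> <replica_size>
  local raw=$(( $1 * 100 / $2 ))
  local p=1
  while [ "$p" -lt "$raw" ]; do p=$(( p * 2 )); done
  echo "$p"
}
pg_target 100 3   # -> 4096
```

By that rule, 16384 PGs would only be warranted by several hundred OSDs at size 3.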
Thanks for your help
--
Yoann Moulin
EPFL IC-IT
iours
--
Yoann
> On Mon, Jul 25, 2016 at 6:45 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
>
> Hello,
>
> (this is a repost, my previous message seems to be slipping under the
> radar)
>
> Does anyone get a similar b
d by security team.
Anyway, I'm still in a testing phase, so I can try kernels to find which one
introduced the regression.
> Sorry I can't be of more help!
no problems :)
--
Yoann
> On 07/25/2016 10:45 AM, Yoann Moulin wrote:
>> Hello,
>>
>> (this is a repost, my previous me
bench6 / 14.04 / Jewel / kernel 4.4 : 65.82 MB/s
bench7 / 16.04 / Jewel / kernel 4.4 : 61.57 MB/s
If needed, I have the raw output of "ceph tell osd.* bench"
Best regards
--
Yoann Moulin
EPFL IC-IT
a throughput drop (~40%) with kernel 4.4 compared to kernel 4.2. I didn't
benchmark latency, but the issue may impact latency too.
--
Yoann Moulin
EPFL IC-IT
.54 MB/s
bench5 / 14.04 / Infernalis / kernel 4.4 : 53.61 MB/s
bench6 / 14.04 / Jewel / kernel 4.4 : 65.82 MB/s
bench7 / 16.04 / Jewel / kernel 4.4 : 61.57 MB/s
If needed, I have the raw output of "ceph tell osd.* bench"
> What I find curious is that no-one else on the list has apparently run
> into this. Any Ubuntu xenial users out there, or perhaps folks on
> trusty who choose to install linux-image-generic-lts-xenial?
Could anyone check on their side whether they see the same behaviour?
Cheers,
--
Yoann Moulin
EPFL IC-IT
http://docs.ceph.com/docs/master/man/8/radosgw-admin/
--
Yoann Moulin
EPFL IC-IT
> $ radosgw-admin region get > region.json
> $ radosgw-admin period get > period.json
> $ radosgw-admin period list > period_list.json
I have 60TB of data in this RadosGW; can I fix this issue without having to
re-upload all that data?
Thanks for your help!
Best regards
--
Yoann Moulin
EPFL IC-IT
met
-rgw-zonegroup --master=false
if you check the files from my previous mail, metadata_zonegroup-map.json and
metadata_zonegroup.json, there is only one zonegroup, named
"default", but in metadata_zonegroup.json the id is "default" while in
metadata_zonegroup-map.json it is "4d982
": []
}
],
"default_placement": "default-placement",
"realm_id": "ccc2e663-66d3-49a6-9e3a-f257785f2d9a"
}
$ radosgw-admin bucket list
2016-09-06 11:21:04.787391 7fb8a1f0b900 0 Error updating periodmap, multiple
master zonegroups c
"bucket_index_max_shards": 0,
"read_only": "false"
}
],
"placement_targets": [
{
"name": "default-placement",
If I create a new zonegroup, then set it as the default zonegroup, update the
zonegroup-map, zone etc., then delete the zonegroup with the ID
"default", should that work?
Best regards,
--
Yoann Moulin
EPFL IC-IT
y because it's not the
>>> same
>>> ID...)
>>
>> If I create a new zonegroup, then set it as the default zonegroup, update the
>> zonegroup-map, zone etc., then delete the zonegroup with the ID
>> "default", should that work?
>
oot --yes-i-really-really-mean-it
create a new realm id and set it as default
> radosgw-admin realm create --rgw-realm=default --default
Edit the two JSON files to replace the realm id with the new one:
> vim default_zone.json #change realm with the new one
> vim default_
et bucket:images-eu-v1 | jq .data.bucket.bucket_id| tr -d '"')
On that point I don't know; I have never configured index sharding.
Best Regards,
--
Yoann Moulin
EPFL IC-IT
ags
> hashpspool stripe_width 0
Be careful: if you set size 2 and min_size 2, your cluster will go into
HEALTH_ERR state if you lose a single OSD. If you want to set "size 2" (which
is not recommended), you should set min_size to 1.
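For example (sketch; the pool name is a placeholder):

```shell
# Sketch: size=2 pools generally need min_size=1 to survive a single OSD loss.
# (size=2 itself risks data loss and is discouraged.)
POOL=mypool   # hypothetical pool name
if command -v ceph >/dev/null 2>&1; then
  ceph osd pool set "$POOL" size 2
  ceph osd pool set "$POOL" min_size 1
fi
```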
Best Regards.
Yoann Moulin
> On Mon, Sep 26, 2016
issues/17239
the "id" of the zonegroup shouldn't be "default" but a UUID, afaik
Best regards
Yoann Moulin
> root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get --rgw-zonegroup=default
> {
> "id": "default",
> "name": "defa
=1024
If I push this file through my radosgw, how can I find all the replicas on the
OSDs?
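One way to locate an object's placement (sketch; pool and object names are hypothetical, and note that radosgw stores S3 objects under internal RADOS names, so you may first need to find the RADOS object name with `rados ls`):

```shell
# Sketch: "ceph osd map <pool> <object>" prints the PG and the acting OSD set.
POOL=default.rgw.buckets.data   # typical rgw data pool name; adjust to yours
OBJ=myobject                    # hypothetical RADOS object name
if command -v ceph >/dev/null 2>&1; then
  ceph osd map "$POOL" "$OBJ"
fi
```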
And another question: on an EC pool, really small files will be replicated k+m
times, won't they?
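For reference, erasure coding stores k data chunks plus m coding chunks per object, so the raw-space overhead factor is (k+m)/k rather than a full k+m copies:

```shell
# EC raw-space overhead in percent; k=4,m=2 below is just an example profile.
ec_overhead_pct() {  # usage: ec_overhead_pct <k> <m>
  echo $(( ($1 + $2) * 100 / $1 ))
}
ec_overhead_pct 4 2   # -> 150 (1.5x raw space, vs 300 for 3x replication)
```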
Thanks
--
Yoann Moulin
EPFL IC-IT
oing on.
Can anyone help us understand what's happening?
thanks for your help
--
Yoann Moulin
EPFL IC-IT
$ ceph --version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
$ uname -a
Linux icadmin004 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
s OSD to recover. Afterwards you should set
> the backfill traffic settings to the minimum (e.g. max_backfills = 1)
> and unset the flags to allow the cluster to perform the outstanding recovery
> operation.
>
> As the others already pointed out, these actions might help to get the
> cluster up and running again, but you need to find the actual reason for
> the problems.
This is exactly what I want
Thanks for the help !
--
Yoann Moulin
EPFL IC-IT
t;
> If the cluster is just *slow* somehow, then increasing that might
> help. If there is something systematically broken, increasing would
> just postpone the inevitable.
Ok, I'm going to study this option with my colleagues
thanks
--
Yoann Moulin
EPFL IC-IT
Hello,
Is it possible to shard the index of existing buckets?
I have more than 100TB of data in a couple of buckets; I'd like to avoid
re-uploading everything.
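If your radosgw-admin supports it, a manual reshard might look like this (sketch; the bucket name and shard count are examples):

```shell
# Sketch: offline reshard of an existing bucket index.
BUCKET=mybucket   # hypothetical bucket name
NUM_SHARDS=64     # example shard count
if command -v radosgw-admin >/dev/null 2>&1; then
  radosgw-admin bucket reshard --bucket="$BUCKET" --num-shards="$NUM_SHARDS"
fi
```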
Thanks for your help,
--
Yoann Moulin
EPFL IC-IT
ne have had any luck with this?
does Apache send the Host header to the backend?
You may need something like "ProxyPreserveHost On".
Best regards,
--
Yoann Moulin
EPFL IC-IT
0
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.9
> storing
increasing the pg_num of the pool that has the biggest PGs.
--
Yoann Moulin
EPFL IC-IT
tell the difference because each test gives
> the speeds on same range. I did not test kernel 4.4 in ubuntu 14
>
>
> --
> Lomayani
>
> On Tue, Jul 26, 2016 at 9:39 AM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
>
> Hello,
>
> "rclone/v1.37"
rgw logs :
> ==> ceph/luminous-rgw-iccluster015.log <==
> 2017-09-15 10:37:53.005424 7ff1f28f1700 1 == starting new request
> req=0x7ff1f28eb1f0 =
> 2017-09-15 10:37:53.007192 7ff1f28f1700 1 == req done req=0x7ff1f28eb1f0
> op status=0 http_status=200 ==
> 2017-09-15 10:37:53.007282 7ff1f28f1700 1 civetweb: 0x56061586e000:
> 127.0.0.1 - - [15/Sep/2017:10:37:53 +0200] "GET / HTTP/1.0" 1 0 - rclone/v1.37
Thanks for your help
--
Yoann Moulin
EPFL IC-IT
"GET / HTTP/1.0" 1 0 - -
>
>
>
> ### rclone : list bucket ###
>
>
> root@iccluster012:~# rclone lsd testadmin:
> -1 2017-08-28 12:27:33-1 image-net
> root@iccluster012:~#
>
> nginx (as revers proxy) log :
>
>> ==> ngin
all.yml file:
I have added it manually and now it works (the ansible playbook didn't add it;
I must figure out why).
Thanks for your help
Best regards,
Yoann Moulin
>>> I have a fresh luminous cluster in test and I made a copy of a bucket (4TB
>>> 1.5M files) with rclone, I'm able to
seems to work in sigv2 :)
>> The second was in the rgw section into ceph.conf file. The line "rgw dns
>> name" was missing.
>
> Depending on your setup, "rgw dns name" may be required, yes.
in my case, it seems to be mandatory
Best regards,
--
Yoan
caps mds = "allow rw path=/foo1"
caps mon = "allow r"
caps osd = "allow rw pool=cephfs_data"
# ceph auth get client.foo2
exported keyring for client.foo2
[client.foo2]
key = XXX2
caps mds = "allow r, allow rw path=/foo2"
ot;: 1082,
> "imported_inodes": 1209280
> }
> }
> root@iccluster054:~# ceph --cluster container daemon
> mds.iccluster054.iccluster.epfl.ch perf dump mds
> {
> "mds": {
> "request": 267620366,
> "reply": 25
/kubernetes/examples/blob/master/staging/volumes/cephfs/cephfs.yaml
--
Yoann Moulin
EPFL IC-IT
the best way to do it,
so if there are multiple solutions, that would be great.
Thanks,
--
Yoann Moulin
EPFL IC-IT
, I need to create a pool
per user (one user can have multiple containers).
I'm going to have a look at CephFS; it seems possible to allow access only to a
subdirectory per user, could you confirm that?
Thanks,
Best regards,
--
Yoann Moulin
EPFL IC-IT
.
Is there a way to have a "root_squash" option on a CephFS volume for a
specific client.user + secret?
Is it possible to allow a specific user to mount only /bla and disallow
mounting the CephFS root "/"?
Or is there another way to do that?
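For illustration, something along these lines (sketch; the fs name "cephfs" is an assumption, "bla" is the client/path from the question):

```shell
# Sketch: give a client caps restricted to one subtree, so it can mount /bla
# but not read the rest of the tree.
CLIENT=bla
if command -v ceph >/dev/null 2>&1; then
  ceph fs authorize cephfs "client.${CLIENT}" "/${CLIENT}" rw
fi
```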
Thanks,
--
pool=cephfs_data"
With this, the user foo is able to mount the root of the CephFS and read
everything. Of course he cannot modify anything, but my problem here is that he
still has read access to everything with uid=0.
--
Yoann Moulin
EPFL IC-IT
ything with uid=0.
>
> I think that is because of the older kernel client, like mentioned here:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39734.html
The kernel on the clients is 4.4.0-93 and on the ceph nodes 4.4.0-96.
What exactly is an older kernel client? Is 4.4 old?
i
_data" mds "allow rw path=/foo"
Error EINVAL: key for client.foo exists but cap mds does not match
# ceph fs authorize cephfs client.foo /foo rw
Error EINVAL: key for client.foo exists but cap mds does not match
Thanks,
--
Yoann Moulin
EPFL IC-IT
ter054:/foo /mnt -v -o
name=foo,secret=[snip]
parsing options: name=foo,secret=[snip]
# df /mnt
Filesystem                                1K-blocks     Used   Available Use% Mounted on
10.90.38.17,10.90.38.18,10.90.39.5:/foo 70324469760 26267648 70298202112   1% /mnt
It seems to work as I want.
Thanks a lot !
Cheers,
--
Yoann Moulin
EPFL IC-IT
rw
>> pool=cephfs_data" mds "allow rw path=/foo"
>> updated caps for client.foo
>
> In cases like this you also want to set RADOS namespaces for each tenant’s
> directory in the CephFS layout and give them OSD access to only that
> nam
-865f-3dbb053803c4.44353.1
> {
> "key":
> "bucket.instance:image-net:69d2fd65-fcf9-461b-865f-3dbb053803c4.44353.1",
> "ver": {
> "tag": "_HJUIdLuc8HJdxWhortpLiE7",
> "ver": 3
> },
> "mtime": "2017-09-26 14:14:47.749267Z",
> "data": {
>
The "auth caps" portion gives the client permission on
> the OSD to access the namespace "foo". The file layouts place the
> CephFS file data into that namespace.
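Concretely, that could look like this (sketch; the mountpoint is an assumption, "foo" as above):

```shell
# Sketch: place a directory's file data into a RADOS namespace via the
# directory layout, matching an OSD cap restricted to that namespace.
NS=foo
if command -v setfattr >/dev/null 2>&1 && [ -d "/mnt/cephfs/${NS}" ]; then
  setfattr -n ceph.dir.layout.pool_namespace -v "$NS" "/mnt/cephfs/${NS}"
fi
# Matching client cap on the OSD side:
#   osd 'allow rw pool=cephfs_data namespace=foo'
```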
OK, I will have a look next week.
Thank you.
--
Yoann Moulin
EPFL IC-IT
it zone and zonegroup :)
>> On the "default" zonegroup (which is not set as default), the
>> "bucket_index_max_shards" is set to "0", can I modify it without a realm?
>>
> I just updated this section in this pr:
> https://github.com/ceph/ce
h user ?
The question about namespaces is still open: if I have a namespace in the osd
caps, I can't create an rbd volume. How can I isolate each client to
only his own volumes?
Thanks for your help
Best regards,
--
Yoann Moulin
EPFL IC-IT
h container does not have access to the other rbd volumes? Is
a namespace good for isolating each user?
I haven't used rbd much before and have never used client key capabilities, so
it is a bit confusing for me.
Thanks for your help
Best regards,
--
Yoann Moulin
EPFL IC-IT
/bar.secret
if you try to mount the cephfs root, you should get an access denied
# mount.ceph mds1,mds2,mds3:/ /bar -v -o
name=bar,secretfile=/path/to/bar.secret
If you want to increase security, you might have a look at namespaces and file
layouts:
http://docs.ceph.com/docs/master/
77] libceph: mon0 10.90.38.17:6789 missing required protocol
> features
> [ 2646.962255] libceph: mon1 10.90.38.18:6789 feature set mismatch, my
> 107b84a842aca < server's 40107b84a842aca, missing 4000000
> [ 2646.979228] libceph: mon1 10.90.38.18:6789 missing required protocol
&g
to change it, is to comment out in the crushmap the option "*tunable
> chooseleaf_stable 1*" and inject the crushmap again in the cluster (of
> course that would produce a lot of data movement on the PGs)
Thanks a lot, I removed the line "tunable chooseleaf_stable 1" f
of ceph-disk as deprecated in a minor release is not what I expect from a
stable storage system, but I also understand the necessity of moving forward
with ceph-volume (and bluestore). I think keeping ceph-disk in mimic is
necessary, even though there is no update, just for compatibility with old sc
> And ceph's:
> [client.radosgw.gateway]
> host = rgw
> rgw_frontends = civetweb port=127.0.0.1:1234
> keyring = /etc/ceph/keyring.radosgw.gateway
In my rgw section I also have this :
rgw dns name =
that allows s3cmd to access buckets via %(bucket)s.test.iccluster.epfl.c
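For reference, a minimal rgw section with that option might look like this (the hostname is a placeholder, not the actual value from this thread):

```ini
[client.radosgw.gateway]
rgw_frontends = civetweb port=127.0.0.1:1234
; enables virtual-host-style bucket access (bucket.s3.example.com)
rgw dns name = s3.example.com
```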
t need to be updated.
--
Yoann Moulin
EPFL IC-IT
rrectly created but the group was not.
> # grep ceph /etc/passwd
> ceph:x:64045:64045::/home/ceph:/bin/false
> # grep ceph /etc/group
> #
Is there a workaround for that?
--
Yoann Moulin
EPFL IC-IT
" ] && SERVER_UID=64045 # alloc by Debian base-passwd
> maintainer
> [ -z "$SERVER_GID" ] && SERVER_GID=$SERVER_UID
I can change the SERVER_UID / SERVER_GID and/or SERVER_USER.
I'm going to try to create a specific ceph user in the LDAP and use it for c
Hello,
What is the best kernel for Luminous on Ubuntu 16.04 ?
Is linux-image-virtual-lts-xenial still the best one, or will
linux-virtual-hwe-16.04 offer some improvement?
Thanks,
--
Yoann Moulin
EPFL IC-IT
fe10:::.dir.c9724aff-5fa0-4dd9-b494-57bdb48fab4e.314528.19:head:
> failed to pick suitable auth object
> 2018-02-21 09:08:33.727333 7fb7b8222700 -1 log_channel(cluster) log [ERR] :
> 11.5f repair 3 errors, 0 fixed
I set "debug_osd 20/20" on osd.78 and started the repair again; the log
On 22/02/2018 05:23, Brad Hubbard wrote:
> On Wed, Feb 21, 2018 at 6:40 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:
>> Hello,
>>
>> I migrated my cluster from jewel to luminous 3 weeks ago (using the
>> ceph-ansible playbook); a few days later, ceph status tol
.
> After adding the 2 additional OSD disks I'm seeing that the load is being
> distributed among the cluster.
> Please I need your help.
Could you give us the output of
ceph osd df
ceph osd pool ls detail
ceph osd tree
Best regards,
--
Yoann Moulin
EPFL IC-IT
can do here is to add two disks to pf-us1-dfs3.
The second one would be moving one disk from one of the 2 other servers to
pf-us1-dfs3 if you can't quickly get new disks. I don't know the best way to
do that; I have never had this case on my cluster.
Best regards,
Yoann
> On Tue, Jan 8
I guess). Each host will store 1/3 of the data (1 replica). Since pf-us1-dfs3
only has half the capacity of the 2 others, you won't be able to store more
than 3x (osd.2+osd.4), even though there is free space on the other OSDs.
Best regards,
Yoann
> On Tue, Jan 8, 2019 at 10:36 AM Yoann Moulin <yoann.mou...@epfl.ch> wrote:
16TiB 484GiB 71.15 0.94 183
> 197 hdd 1.63739 1.0 1.64TiB 1.28TiB 370GiB 77.94 1.03 197
> 192 hdd 1.63739 1.0 1.64TiB 1.26TiB 382GiB 77.24 1.02 200
> 196 hdd 1.63739 1.0 1.64TiB 1.24TiB 402GiB 76.02 1.00 201
> 193 hdd 1.63739 1.0 1.64TiB 1.24TiB 409GiB 75.59 1.00 186
> 198 hdd 1.63739 1.0 1.64TiB 1.15TiB 501GiB 70.13 0.92 175
> 194 hdd 1.63739 1.0 1.64TiB 1.29TiB 353GiB 78.98 1.04 202
> 199 hdd 1.63739 1.0 1.64TiB 1.34TiB 309GiB 81.58 1.07 221
> TOTAL 65.5TiB 49.7TiB 15.8TiB 75.94
> MIN/MAX VAR: 0.86/1.09 STDDEV: 3.92
--
Yoann Moulin
EPFL IC-IT
admin004:~$ ceph osd pool ls detail | grep cephfs_data
> pool 14 'cephfs_data' erasure size 6 min_size 5 crush_rule 1 object_hash
> rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646 flags
> hashpspool stripe_width 16384
Why min_size = 5 and not 4 ?
hash
>>> rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646
>>> flags hashpspool stripe_width 16384
>>
>> Why min_size = 5 and not 4 ?
>>
> this question comes up regularly and is been discussed just now:
&g
w; oldest blocked for > 62502.675839 secs
> 2019-09-19 08:53:53.961792 mds.icadmin006 [WRN] 10 slow requests, 0 included
> below; oldest blocked for > 62507.675948 secs
> 2019-09-19 08:53:57.529113 mds.icadmin007 [WRN] 3 slow requests, 0 included
> below; oldest blocked for > 625
the best way to do it ?
Add those 3 lines in all.yml or osds.yml ?
ceph_conf_overrides:
  global:
    osd_op_queue_cut_off: high
Is there another (better?) way to do that?
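Alternatively (sketch, assuming Mimic or later with the centralized config database; an OSD restart may still be needed for this option to take effect):

```shell
# Sketch: set the option via "ceph config" instead of ceph.conf overrides.
OPT=osd_op_queue_cut_off
if command -v ceph >/dev/null 2>&1; then
  ceph config set osd "$OPT" high
  ceph config get osd "$OPT"
fi
```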
Thanks for your help.
Best regards,
--
Yoann Moulin
EPFL IC-IT