Re: [ceph-users] performance issue with jewel on ubuntu xenial (kernel)

2016-06-22 Thread Yoann Moulin
Hello Florian, > On Tue, Jun 21, 2016 at 3:11 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote: >> Hello, >> >> I found a performance drop between kernel 3.13.0-88 (default kernel on Ubuntu >> Trusty 14.04) and kernel 4.4.0.24.14 (default kernel on Ubuntu Xenial 16.04)

[ceph-users] performance issue with jewel on ubuntu xenial (kernel)

2016-06-21 Thread Yoann Moulin
Kernel 3.13.0-88-generic : ceph tell osd.ID => average ~81MB/s
Kernel 4.2.0-38-generic : ceph tell osd.ID => average ~109MB/s
Kernel 4.4.0-24-generic : ceph tell osd.ID => average ~50MB/s
Does anyone get a similar behaviour on their cluster? Best regards
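These per-OSD numbers come from the OSD bench admin command; a minimal sketch of how to reproduce them (the osd id is illustrative; by default osd bench writes 1GB in 4MB blocks):

$ ceph tell osd.0 bench      # single OSD
$ ceph tell osd.* bench      # every OSD, as used later in this thread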

Re: [ceph-users] performance issue with jewel on ubuntu xenial (kernel)

2016-06-23 Thread Yoann Moulin
On 23/06/2016 08:25, Sarni Sofiane wrote: > Hi Florian, > > On 23.06.16 06:25, "ceph-users on behalf of Florian Haas" > <ceph-users-boun...@lists.ceph.com on behalf of flor...@hastexo.com> wrote: > >> On Wed, Jun 22, 2016 at 10:56 AM, Yoann Moulin <yoann.mou...@epfl.ch> wrote:

Re: [ceph-users] can not umount ceph osd partition

2016-02-04 Thread Yoann Moulin
The process is going to switch into kernel state "D+". You won't be able to kill that process, even with kill -9; to stop it, you will have to reboot the server. You can take a look here at how to manipulate the SCSI bus: http://fibrevillage.com/storage/279-hot-add-remove-rescan-of-scsi-devices-on-linux you can install th
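For reference, the hot-remove/rescan technique described in that link goes through sysfs directly; a minimal sketch, assuming the stuck disk is /dev/sdf on SCSI host0 (device and host names are illustrative):

# echo 1 > /sys/block/sdf/device/delete             # hot-remove the device from the kernel
# echo "- - -" > /sys/class/scsi_host/host0/scan    # rescan all channels/targets/LUNs on that host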

Re: [ceph-users] can not umount ceph osd partition

2016-02-03 Thread Yoann Moulin
it. > > Why can't I umount? Does "lsof -n | grep /dev/sdf" give anything? And are you sure /dev/sdf is the disk for osd 13? -- Yoann Moulin EPFL IC-IT
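A quick way to check both questions; a hedged sketch (the mount path is the jewel-era default, and the osd id and device are illustrative):

$ lsof -n | grep /dev/sdf          # is any process still holding the device?
$ df /var/lib/ceph/osd/ceph-13     # which device actually backs osd.13
$ ceph-disk list                   # map all disks/partitions to OSDs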

Re: [ceph-users] how to choose EC plugins and rulesets

2016-03-10 Thread Yoann Moulin
-Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Yoann Moulin >> Sent: 09 March 2016 16:01 >> To: ceph-us...@ceph.com >> Subject: [ceph-users] how to choose EC plugins and rulesets >> >> Hello, >>

Re: [ceph-users] OSDs go down with infernalis

2016-03-09 Thread Yoann Moulin
https://en.wikipedia.org/wiki/GUID_Partition_Table That's what I read on the IRC channel; it seems to be a common mistake. Might it be good to mention that in the docs or FAQ? Yoann > On Tue, Mar 8, 2016 at 5:21 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote: > > Hello,

[ceph-users] how to choose EC plugins and rulesets

2016-03-09 Thread Yoann Moulin
of Memory
OS Storage: 2 x SSD 240GB Intel S3500 DC (RAID 1)
Journal Storage: 2 x SSD 400GB Intel S3300 DC (no RAID)
OSD Disks: 10 x HGST Ultrastar 7K6000 6TB
Network: 1 x 10Gb/s
OS: Ubuntu 14.04
-- Yoann Moulin EPFL IC-IT
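For the EC question in this post, profiles and their rules are managed with the erasure-code-profile commands; a minimal sketch (the profile name, k/m values, and pg count are illustrative, not the poster's final choice):

$ ceph osd erasure-code-profile set myprofile k=4 m=2 plugin=jerasure technique=reed_sol_van crush-failure-domain=host
$ ceph osd erasure-code-profile get myprofile
$ ceph osd pool create ecpool 1024 1024 erasure myprofile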

Re: [ceph-users] OSDs go down with infernalis

2016-03-08 Thread Yoann Moulin
t let ceph-disk create the journal partition. Yoann > On Thu, Mar 3, 2016 at 3:42 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote: > > Hello, > > I'm (almost) a new user of ceph (a couple of months). In my university, we start to
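Related context: when ceph-disk prepares an OSD with an external journal, passing the whole journal device lets ceph-disk create the journal partition itself, with the GPT partition-type GUID it expects. A hedged sketch of the infernalis/jewel-era invocation (device names illustrative):

# ceph-disk prepare /dev/sdc /dev/sda   # data disk, then journal device; ceph-disk partitions the journal itself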

[ceph-users] OSDs go down with infernalis

2016-03-03 Thread Yoann Moulin
1280 active+clean > 1024 creating+incomplete We installed this cluster at the beginning of February. We have barely used it, only at the beginning to troubleshoot an issue with ceph-ansible. We did not push any data nor create any pool. What could explain this behaviour? Thanks

[ceph-users] journal or cache tier on SSDs ?

2016-05-10 Thread Yoann Moulin
set the pg_num to 16384 at the beginning? 16384 is high, isn't it? Thanks for your help -- Yoann Moulin EPFL IC-IT
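For sizing pg_num, the rule of thumb commonly cited in the Ceph docs is roughly (number of OSDs x 100) / pool size, rounded up to a power of two. A worked example under assumed numbers (120 OSDs, replica 3):

120 x 100 / 3 = 4000  ->  next power of two = 4096 PGs

16384 would correspond to something like 500 OSDs at replica 3, hence the question above.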

Re: [ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2016-07-26 Thread Yoann Moulin
iours -- Yoann > On Mon, Jul 25, 2016 at 6:45 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote: > > Hello, > > (this is a repost, my previous message seems to be slipping under the radar) > > Does anyone get a similar b

Re: [ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2016-07-26 Thread Yoann Moulin
d by the security team. Anyway, I'm still testing; I can try kernels to find which one the regression starts with. > Sorry I can't be of more help! No problem :) -- Yoann > On 07/25/2016 10:45 AM, Yoann Moulin wrote: >> Hello, >> >> (this is a repost, my previous me

[ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2016-07-25 Thread Yoann Moulin
bench6 / 14.04 / Jewel / kernel 4.4 : 65.82 MB/s
bench7 / 16.04 / Jewel / kernel 4.4 : 61.57 MB/s
If needed, I have the raw output of "ceph tell osd.* bench". Best regards -- Yoann Moulin EPFL IC-IT

[ceph-users] Re: Infernalis -> Jewel, 10x+ RBD latency increase

2016-07-22 Thread Yoann Moulin
throughput (~40%) with kernel 4.4 compared to kernel 4.2. I didn't benchmark latency, but the issue may impact latency too. -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] performance issue with jewel on ubuntu xenial (kernel)

2016-07-01 Thread Yoann Moulin
.54 MB/s
bench5 / 14.04 / Infernalis / kernel 4.4 : 53.61 MB/s
bench6 / 14.04 / Jewel / kernel 4.4 : 65.82 MB/s
bench7 / 16.04 / Jewel / kernel 4.4 : 61.57 MB/s
If needed, I have the raw output of "ceph tell osd.* bench". > What I find curious is that no-one else on the list has apparently run > into this. Any Ubuntu xenial users out there, or perhaps folks on > trusty who choose to install linux-image-generic-lts-xenial? Could anyone try on their side to see if they get the same behaviour? Cheers, -- Yoann Moulin EPFL IC-IT

[ceph-users] How to change the owner of a bucket

2017-02-14 Thread Yoann Moulin
http://docs.ceph.com/docs/master/man/8/radosgw-admin/ -- Yoann Moulin EPFL IC-IT
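The usual procedure for changing a bucket's owner with radosgw-admin is to relink the bucket to the new user and then chown the objects; a minimal sketch (uid and bucket name are illustrative):

$ radosgw-admin bucket link --uid=newowner --bucket=mybucket
$ radosgw-admin bucket chown --uid=newowner --bucket=mybucket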

[ceph-users] RadosGW Error : Error updating periodmap, multiple master zonegroups configured

2016-09-06 Thread Yoann Moulin
gion get > region.json
> $ radosgw-admin period get > period.json
> $ radosgw-admin period list > period_list.json
I have 60TB of data in this RadosGW; can I fix this issue without having to re-upload all that data? Thanks for your help! Best regards -- Yoann Moulin EPFL IC-IT met

Re: [ceph-users] RadosGW Error : Error updating periodmap, multiple master zonegroups configured

2016-09-06 Thread Yoann Moulin
-rgw-zonegroup --master=false If you check the files from my previous mail, in metadata_zonegroup-map.json and metadata_zonegroup.json there is only one zonegroup, named "default", but in metadata_zonegroup.json the id is "default" while in metadata_zonegroup-map.json it is "4d982
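The usual way to clear a stray master flag on a zonegroup and persist the change is a zonegroup modify followed by a period commit; a hedged sketch (the zonegroup name is illustrative):

$ radosgw-admin zonegroup modify --rgw-zonegroup=default --master=false
$ radosgw-admin period update --commit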

Re: [ceph-users] RadosGW Error : Error updating periodmap, multiple master zonegroups configured

2016-09-06 Thread Yoann Moulin
"tags": [] }
],
"default_placement": "default-placement",
"realm_id": "ccc2e663-66d3-49a6-9e3a-f257785f2d9a"
}
$ radosgw-admin bucket list
2016-09-06 11:21:04.787391 7fb8a1f0b900 0 Error updating periodmap, multiple master zonegroups configured

[ceph-users] RadosGW zonegroup id error

2016-09-01 Thread Yoann Moulin
"bucket_index_max_shards": 0, "read_only": "false" } ], "placement_targets": [ { "name": "default-placement",

Re: [ceph-users] RadosGW zonegroup id error

2016-09-02 Thread Yoann Moulin
w zonegroup, then set it as the default zonegroup, update the zonegroup-map, zone, etc., then delete the zonegroup with the ID "default", should that work? Best regards, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] RadosGW zonegroup id error

2016-09-05 Thread Yoann Moulin
y because it's not the >>> same >>> ID...) >> >> If I create a new zonegroup, then set it as the default zonegroup, update the >> zonegroup-map, zone, etc., then delete the zonegroup with the ID >> "default", should that work? >

[ceph-users] RadosGW : troubleshoooting zone / zonegroup / period

2016-09-12 Thread Yoann Moulin
oot --yes-i-really-really-mean-it
Create a new realm id and set it as default:
> radosgw-admin realm create --rgw-realm=default --default
Edit the 2 JSON files to change the realm id to the new one:
> vim default_zone.json # change the realm id to the new one
> vim default_
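After editing, the modified JSON normally has to be injected back and the period committed; a hedged sketch following the file names above (the zonegroup file name is assumed from context):

$ radosgw-admin zone set --rgw-zone=default --infile=default_zone.json
$ radosgw-admin zonegroup set --rgw-zonegroup=default --infile=default_zonegroup.json
$ radosgw-admin period update --commit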

Re: [ceph-users] RadosGW index-sharding on Jewel

2016-09-14 Thread Yoann Moulin
et bucket:images-eu-v1 | jq .data.bucket.bucket_id | tr -d '"') On that point I don't know, I have never configured index sharding. Best Regards, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Ceph full cluster

2016-09-26 Thread Yoann Moulin
ags > hashpspool stripe_width 0 Be careful: if you set size 2 and min_size 2, your cluster will be in HEALTH_ERR state if you lose only one OSD. If you want to set "size 2" (which is not recommended), you should set min_size to 1. Best Regards. Yoann Moulin > On Mon, Sep 26, 2016
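Both settings are plain pool parameters; a minimal sketch (the pool name is illustrative, and again, size 2 is generally discouraged):

$ ceph osd pool set mypool size 2
$ ceph osd pool set mypool min_size 1   # keep serving I/O with a single surviving replica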

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Yoann Moulin
http://tracker.ceph.com/issues/17239 The "id" of the zonegroup shouldn't be "default" but a UUID, AFAIK. Best regards Yoann Moulin > root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get --rgw-zonegroup=default > { > "id": "default", > "name": "defa

[ceph-users] How files are split into PGs ?

2016-11-11 Thread Yoann Moulin
=1024 If I push this file through my radosgw, how can I find all the replicas on the OSDs? And another question: for really small files on an EC pool, files will be replicated with k+m replicas, won't they? Thanks -- Yoann Moulin EPFL IC-IT
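To locate an object's placement, the osd map command shows the PG and the acting set; a minimal sketch (pool and object names are illustrative):

$ rados -p mypool put myobject ./myfile
$ ceph osd map mypool myobject     # prints the pg id and the acting OSD set

On the EC question: on an erasure-coded pool an object is split into k data chunks plus m coding chunks, one chunk per OSD in the acting set, rather than k+m full copies.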

[ceph-users] HELP ! Cluster unusable with lots of "hit suicide timeout"

2016-10-19 Thread Yoann Moulin
oing on. Can anyone help us understand what's happening? Thanks for your help -- Yoann Moulin EPFL IC-IT $ ceph --version ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) $ uname -a Linux icadmin004 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_

Re: [ceph-users] HELP ! Cluster unusable with lots of "hitsuicidetimeout"

2016-10-19 Thread Yoann Moulin
s OSD to recover. Afterwards you should set > the backfill traffic settings to the minimum (e.g. max_backfills = 1) > and unset the flags to allow the cluster to perform the outstanding recovery > operations. > > As the others already pointed out, these actions might help to get the > cluster up and running again, but you need to find the actual reason for > the problems. This is exactly what I want. Thanks for the help! -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] HELP ! Cluster unusable with lots of "hit suicide timeout"

2016-10-19 Thread Yoann Moulin
> If the cluster is just *slow* somehow, then increasing that might > help. If there is something systematically broken, increasing it would > just postpone the inevitable. OK, I'm going to look into this option with my colleagues. Thanks -- Yoann Moulin EPFL IC-IT

[ceph-users] index-sharding on existing bucket ?

2016-11-17 Thread Yoann Moulin
Hello, is it possible to shard the index of existing buckets? I have more than 100TB of data in a couple of buckets, and I'd like to avoid re-uploading everything. Thanks for your help, -- Yoann Moulin EPFL IC-IT
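For what it's worth, later radosgw-admin releases can reshard an existing bucket's index in place, without re-uploading; a hedged sketch (bucket name and shard count are illustrative, and availability depends on the ceph version):

$ radosgw-admin bucket reshard --bucket=mybucket --num-shards=64

Luminous additionally introduced dynamic resharding, which splits the index automatically as the object count grows.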

Re: [ceph-users] rgw / s3website, MethodNotAllowed on Jewel 10.2.3

2016-10-26 Thread Yoann Moulin
ne have had any luck with this? Does Apache send the $host variable to the backend? Something like "ProxyPreserveHost On". Best regards, -- Yoann Moulin EPFL IC-IT

[ceph-users] Loop in radosgw-admin orphan find

2016-10-13 Thread Yoann Moulin
0
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.9
> storing

Re: [ceph-users] stalls caused by scrub on jewel

2016-12-01 Thread Yoann Moulin
increasing the pg_num of the pool that has the biggest PGs. -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2016-12-19 Thread Yoann Moulin
tell the difference because each test gives > speeds in the same range. I did not test kernel 4.4 on Ubuntu 14.04. > > -- > Lomayani > > On Tue, Jul 26, 2016 at 9:39 AM, Yoann Moulin <yoann.mou...@epfl.ch> wrote: > > Hello, >

[ceph-users] s3cmd not working with luminous radosgw

2017-09-15 Thread Yoann Moulin
> "rclone/v1.37" rgw logs : > ==> ceph/luminous-rgw-iccluster015.log <== > 2017-09-15 10:37:53.005424 7ff1f28f1700 1 == starting new request > req=0x7ff1f28eb1f0 = > 2017-09-15 10:37:53.007192 7ff1f28f1700 1 == req done req=0x7ff1f28eb1f0 > op status=0 http_status=200 == > 2017-09-15 10:37:53.007282 7ff1f28f1700 1 civetweb: 0x56061586e000: > 127.0.0.1 - - [15/Sep/2017:10:37:53 +0200] "GET / HTTP/1.0" 1 0 - rclone/v1.37 Thanks for you help -- Yoann Moulin EPFL IC-IT ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] s3cmd not working with luminous radosgw

2017-09-19 Thread Yoann Moulin
"GET / HTTP/1.0" 1 0 - - > > > > ### rclone : list bucket ### > > > root@iccluster012:~# rclone lsd testadmin: > -1 2017-08-28 12:27:33-1 image-net > root@iccluster012:~# > > nginx (as revers proxy) log : > >> ==> ngin

Re: [ceph-users] s3cmd not working with luminous radosgw

2017-09-21 Thread Yoann Moulin
all.yml file: I added it manually and now it works (the ansible playbook didn't add it, I must figure out why). Thanks for your help Best regards, Yoann Moulin >>> I have a fresh luminous cluster in test and I made a copy of a bucket (4TB >>> 1.5M files) with rclone, I'm able to

Re: [ceph-users] s3cmd not working with luminous radosgw

2017-09-21 Thread Yoann Moulin
seems to work in sigv2 :) >> The second was in the rgw section of the ceph.conf file. The line "rgw dns >> name" was missing. > > Depending on your setup, "rgw dns name" may be required, yes. In my case, it seems to be mandatory. Best regards, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Unable to restrict a CephFS client to a subdirectory

2017-10-10 Thread Yoann Moulin
caps mds = "allow rw path=/foo1" caps mon = "allow r" caps osd = "allow rw pool=cephfs_data" # ceph auth get client.foo2 exported keyring for client.foo2 [client.foo2] key = XXX2 caps mds = "allow r, allow rw path=/foo2&qu

[ceph-users] Luminous : 3 clients failing to respond to cache pressure

2017-10-17 Thread Yoann Moulin
"imported": 1082,
> "imported_inodes": 1209280
> }
> }
> root@iccluster054:~# ceph --cluster container daemon mds.iccluster054.iccluster.epfl.ch perf dump mds
> {
> "mds": {
> "request": 267620366,
> "reply": 25

Re: [ceph-users] Luminous : 3 clients failing to respond to cache pressure

2017-10-17 Thread Yoann Moulin
https://github.com/kubernetes/examples/blob/master/staging/volumes/cephfs/cephfs.yaml -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread Yoann Moulin
e best way to do it, so if there are multiple solutions, that would be great. Thanks, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
, I need to create a pool per user (one user can have multiple containers). I'm going to take a look at CephFS; it seems possible to allow access only to a subdirectory per user, could you confirm that? Thanks, Best regards, -- Yoann Moulin EPFL IC-IT

[ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
. Is there a possibility to have a "root_squash" option on a cephfs volume for a specific client.user + secret? Is it possible to allow a specific user to mount only /bla and disallow mounting the cephfs root "/"? Or is there another way to do that? Thanks, --

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
pool=cephfs_data" With this, the user foo is able to mount the root of the cephfs and read everything, of course, he cannot modify but my problem here is he is still able to have read access to everything with uid=0. -- Yoann Moulin EPFL IC-IT ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
ything with uid=0. > > I think that is because of the older kernel client, like mentioned here? > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39734.html The kernel on the clients is 4.4.0-93 and on the ceph nodes 4.4.0-96. What exactly is an older kernel client? Is 4.4 old? i

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
_data" mds "allow rw path=/foo" Error EINVAL: key for client.foo exists but cap mds does not match # ceph fs authorize cephfs client.foo /foo rw Error EINVAL: key for client.foo exists but cap mds does not match Thanks, -- Yoann Moulin EPFL IC-IT _

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
ter054:/foo /mnt -v -o name=foo,secret=[snip]
parsing options: name=foo,secret=[snip]
# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
10.90.38.17,10.90.38.18,10.90.39.5:/foo 70324469760 26267648 70298202112 1% /mnt
It seems to work as I want. Thanks a lot! Cheers, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
rw >> pool=cephfs_data" mds "allow rw path=/foo" >> updated caps for client.foo > > In cases like this you also want to set RADOS namespaces for each tenant’s > directory in the CephFS layout and give them OSD access to only that > nam

[ceph-users] zone, zonegroup and resharding bucket on luminous

2017-09-29 Thread Yoann Moulin
-865f-3dbb053803c4.44353.1
> {
> "key": "bucket.instance:image-net:69d2fd65-fcf9-461b-865f-3dbb053803c4.44353.1",
> "ver": {
> "tag": "_HJUIdLuc8HJdxWhortpLiE7",
> "ver": 3
> },
> "mtime": "2017-09-26 14:14:47.749267Z",
> "data": {
>

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
ot;auth caps" portion gives the client permission on > the OSD to access the namespace "foo". The file layouts place the > CephFS file data into that namespace. OK, I will give a look next week. Thank you. -- Yoann Moulin EPFL IC-IT ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] zone, zonegroup and resharding bucket on luminous

2017-10-03 Thread Yoann Moulin
it zone and zonegroup :) >> On the "default" zonegroup (which is not set as default), the >> "bucket_index_max_shards" is set to "0"; can I modify it without a realm? > > I just updated this section in this pr: > https://github.com/ceph/ce

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
h user? The question about namespaces is still open: if I have a namespace in the osd caps, I can't create an rbd volume. How can I isolate each client to only his own volumes? Thanks for your help Best regards, -- Yoann Moulin EPFL IC-IT

[ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
h container does not have access to the others' rbd? Is the namespace good for isolating each user? I haven't used rbd a lot before and have never used client key capabilities, so it is a bit confusing for me. Thanks for your help Best regards, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Unable to restrict a CephFS client to a subdirectory

2017-10-10 Thread Yoann Moulin
/bar.secret
If you try to mount the cephfs root, you should get an access denied:
# mount.ceph mds1,mds2,mds3:/ /bar -v -o name=bar,secretfile=/path/to/bar.secret
If you want to increase security, you might take a look at namespaces and file layouts: http://docs.ceph.com/docs/master/

[ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread Yoann Moulin
77] libceph: mon0 10.90.38.17:6789 missing required protocol features
> [ 2646.962255] libceph: mon1 10.90.38.18:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 4000000
> [ 2646.979228] libceph: mon1 10.90.38.18:6789 missing required protocol features

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread Yoann Moulin
to change it is to comment out in the crushmap the option "*tunable chooseleaf_stable 1*" and inject the crushmap again into the cluster (of course that would cause a lot of data movement on the PGs). Thanks a lot, I removed the line "tunable chooseleaf_stable 1" f
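For reference, the decompile/edit/recompile cycle mentioned here; a minimal sketch (file names are illustrative):

$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
$ sed -i '/chooseleaf_stable/d' crush.txt     # or comment the tunable out by hand
$ crushtool -c crush.txt -o crush-new.bin
$ ceph osd setcrushmap -i crush-new.bin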

Re: [ceph-users] ceph-disk is now deprecated

2017-11-29 Thread Yoann Moulin
of ceph-disk as deprecated in a minor release is not what I expect from a stable storage system, but I also understand the necessity to move forward with ceph-volume (and bluestore). I think keeping ceph-disk in mimic is necessary, even without updates, just for compatibility with old sc

Re: [ceph-users] Ceph S3 nginx Proxy

2017-11-03 Thread Yoann Moulin
> And ceph's: > [client.radosgw.gateway] > host = rgw > rgw_frontends = civetweb port=127.0.0.1:1234 > keyring = /etc/ceph/keyring.radosgw.gateway In my rgw section I also have this: rgw dns name = that allows s3cmd to access the bucket with %(bucket)s.test.iccluster.epfl.ch

[ceph-users] [Docs] s/ceph-disk/ceph-volume/g ?

2017-12-04 Thread Yoann Moulin
t need to be updated. -- Yoann Moulin EPFL IC-IT

[ceph-users] ceph's UID/GID 65045 in conflict with user's UID/GID in a ldap

2018-05-15 Thread Yoann Moulin
rrectly created but the group was not.
> # grep ceph /etc/passwd
> ceph:x:64045:64045::/home/ceph:/bin/false
> # grep ceph /etc/group
> #
Is there a workaround for that? -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] ceph's UID/GID 65045 in conflict with user's UID/GID in a ldap

2018-05-15 Thread Yoann Moulin
" ] && SERVER_UID=64045 # alloc by Debian base-passwd > maintainer > [ -z "$SERVER_GID" ] && SERVER_GID=$SERVER_UID I can change the SERVER_UID / SERVER_GID and or SERVER_USER I'm gonna try to create a specific ceph user in the ldap and use it for c

[ceph-users] Luminous/Ubuntu 16.04 kernel recommendation ?

2018-02-04 Thread Yoann Moulin
Hello, What is the best kernel for Luminous on Ubuntu 16.04? Is linux-image-virtual-lts-xenial still the best one? Or will linux-virtual-hwe-16.04 offer some improvement? Thanks, -- Yoann Moulin EPFL IC-IT

[ceph-users] PG_DAMAGED Possible data damage: 1 pg inconsistent

2018-02-21 Thread Yoann Moulin
fe10:::.dir.c9724aff-5fa0-4dd9-b494-57bdb48fab4e.314528.19:head: failed to pick suitable auth object
> 2018-02-21 09:08:33.727333 7fb7b8222700 -1 log_channel(cluster) log [ERR] : 11.5f repair 3 errors, 0 fixed
I set "debug_osd 20/20" on osd.78 and started the repair again; the log
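For a PG in this state, the usual first steps are to list the inconsistent objects and then ask the primary to repair; a minimal sketch using the PG id from the log above:

$ rados list-inconsistent-obj 11.5f --format=json-pretty
$ ceph pg repair 11.5f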

Re: [ceph-users] PG_DAMAGED Possible data damage: 1 pg inconsistent

2018-02-22 Thread Yoann Moulin
On 22/02/2018 05:23, Brad Hubbard wrote: > On Wed, Feb 21, 2018 at 6:40 PM, Yoann Moulin <yoann.mou...@epfl.ch> wrote: >> Hello, >> >> I migrated my cluster from jewel to luminous 3 weeks ago (using the ceph-ansible >> playbook); a few days after, ceph status tol

Re: [ceph-users] Problem with CephFS - No space left on device

2019-01-08 Thread Yoann Moulin
. > After adding the 2 additional OSD disks I'm seeing that the load is being > distributed among the cluster. > Please, I need your help. Could you give us the output of:
ceph osd df
ceph osd pool ls detail
ceph osd tree
Best regards, -- Yoann Moulin EPFL IC-IT

Re: [ceph-users] Problem with CephFS - No space left on device

2019-01-08 Thread Yoann Moulin
can do here is to add two disks to pf-us1-dfs3. The second option would be moving one disk from one of the 2 other servers to pf-us1-dfs3 if you can't quickly get new disks. I don't know what the best way to do that is; I never had this case on my cluster. Best regards, Yoann > On Tue, Jan 8

Re: [ceph-users] Problem with CephFS - No space left on device

2019-01-08 Thread Yoann Moulin
I guess). Each host will store 1/3 of the data (1 replica). pf-us1-dfs3 only has half the capacity of the 2 others, so you won't be able to store more than 3x (osd.2 + osd.4) even though there is free space on the other OSDs. Best regards, Yoann > On Tue, Jan 8, 2019 at 10:36 AM Yoann Moulin <yoann.mou.

[ceph-users] cephfs free space issue

2019-01-09 Thread Yoann Moulin
16TiB 484GiB 71.15 0.94 183
> 197 hdd 1.63739 1.0 1.64TiB 1.28TiB 370GiB 77.94 1.03 197
> 192 hdd 1.63739 1.0 1.64TiB 1.26TiB 382GiB 77.24 1.02 200
> 196 hdd 1.63739 1.0 1.64TiB 1.24TiB 402GiB 76.02 1.00 201
> 193 hdd 1.63739 1.0 1.64TiB 1.24TiB 409GiB 75.59 1.00 186
> 198 hdd 1.63739 1.0 1.64TiB 1.15TiB 501GiB 70.13 0.92 175
> 194 hdd 1.63739 1.0 1.64TiB 1.29TiB 353GiB 78.98 1.04 202
> 199 hdd 1.63739 1.0 1.64TiB 1.34TiB 309GiB 81.58 1.07 221
> TOTAL 65.5TiB 49.7TiB 15.8TiB 75.94
> MIN/MAX VAR: 0.86/1.09 STDDEV: 3.92
-- Yoann Moulin EPFL IC-IT

[ceph-users] Nautilus, k+m erasure coding a profile vs size+min_size

2019-05-21 Thread Yoann Moulin
admin004:~$ ceph osd pool ls detail | grep cephfs_data > pool 14 'cephfs_data' erasure size 6 min_size 5 crush_rule 1 object_hash > rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646 flags > hashpspool stripe_width 16384 Why min_size = 5 and not 4?

Re: [ceph-users] Nautilus, k+m erasure coding a profile vs size+min_size

2019-05-21 Thread Yoann Moulin
hash >>> rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646 >>> flags hashpspool stripe_width 16384 >> >> Why min_size = 5 and not 4? > > This question comes up regularly and is being discussed just now: >
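The short version of that discussion, as a worked example for this k=4, m=2 profile: size = k+m = 6, and min_size defaults to k+1 = 5 so that a PG only keeps accepting writes while at least one redundant chunk remains; at exactly k=4 available chunks, one more failure during a write would lose data. Lowering it is possible but risky; a hedged sketch, only to be considered temporarily to regain availability:

$ ceph osd pool set cephfs_data min_size 4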

[ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-19 Thread Yoann Moulin
w; oldest blocked for > 62502.675839 secs
> 2019-09-19 08:53:53.961792 mds.icadmin006 [WRN] 10 slow requests, 0 included below; oldest blocked for > 62507.675948 secs
> 2019-09-19 08:53:57.529113 mds.icadmin007 [WRN] 3 slow requests, 0 included below; oldest blocked for > 625

Re: [ceph-users] cephfs performance issue MDSs report slow requests and osd memory usage

2019-09-24 Thread Yoann Moulin
the best way to do it? Add those 3 lines in all.yml or osds.yml?
ceph_conf_overrides:
  global:
    osd_op_queue_cut_off: high
Is there another (better?) way to do that? Thanks for your help. Best regards, -- Yoann Moulin EPFL IC-IT
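Either group_vars file should render the option into the [global] section of ceph.conf. Once deployed, a quick way to confirm an OSD actually picked it up; a sketch via the daemon admin socket (note this option is read at OSD start, so a restart is needed):

$ ceph daemon osd.0 config get osd_op_queue_cut_off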