Re: [ceph-users] Move from own crush map rule (SSD / HDD) to Luminous device class

2019-03-14 Thread Konstantin Shalygin
In the beginning, I created separate crush rules for the SSD and HDD pools (six Ceph nodes), following this HOWTO: https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ Now I want to migrate to the standard crush rules that come with Luminous. What is the

[ceph-users] Newly added OSDs will not stay up

2019-03-14 Thread Josh Haft
Hello fellow Cephers, My 12.2.2 cluster is pretty full so I've been adding new nodes/OSDs. Last week I added two new nodes with 12 OSDs each and they are still backfilling. I have max_backfills tuned quite low across the board to minimize client impact. Yesterday I brought two more nodes online
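For readers in the same situation, this kind of throttling is typically applied at runtime; a minimal sketch, assuming injectargs on a Luminous-era cluster, with illustrative values:

  # Throttle backfill/recovery cluster-wide at runtime
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  # Verify the running value (run on the host carrying osd.0)
  ceph daemon osd.0 config get osd_max_backfills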

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-14 Thread Stefan Kooman
Quoting Zack Brenton (z...@imposium.com): > On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote: > > > Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to > > increase the amount of OSDs (partitions) like Patrick suggested. By > > default it will take 4 GiB per OSD ... Make sure

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-14 Thread Zack Brenton
On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote: > Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to > increase the amount of OSDs (partitions) like Patrick suggested. By > default it will take 4 GiB per OSD ... Make sure you set the > "osd_memory_target" parameter
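For context, a minimal ceph.conf sketch of the parameter mentioned above, assuming a release where BlueStore memory autotuning via osd_memory_target is available (the 4 GiB figure mirrors the default cited in the thread):

  [osd]
  # Target total memory per OSD daemon, in bytes (~4 GiB here)
  osd_memory_target = 4294967296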

Re: [ceph-users] Intel D3-S4610 performance

2019-03-14 Thread Alexandre DERUMIER
Hi, I'm running the DC P4610 6TB (NVMe), no performance problem. Not sure what the difference is with the D3-S4610. - Original Mail - From: "Kai Wembacher" To: "ceph-users" Sent: Tuesday, 12 March 2019 09:13:44 Subject: [ceph-users] Intel D3-S4610 performance Hi everyone, I have an Intel

Re: [ceph-users] [EXTERNAL] Re: OSD service won't stay running - pg incomplete

2019-03-14 Thread Benjamin . Zieglmeier
Would you be willing to elaborate on what configuration specifically is bad? That would be helpful for future reference. Yes, we have tried to access with ceph-objectstore-tool to export the shard. The command spits out the tcmalloc lines shown in my previous output and then crashes with an

[ceph-users] Move from own crush map rule (SSD / HDD) to Luminous device class

2019-03-14 Thread Denny Fuchs
Hi, in the beginning I created separate crush rules for the SSD and HDD pools (six Ceph nodes), following this HOWTO: https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ Now I want to migrate to the standard crush rules that come with Luminous. What is the
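For readers following along, the device-class replacement for per-type CRUSH trees looks roughly like this; a sketch assuming the default root, host failure domain, and hypothetical pool names:

  # Show the device classes Luminous assigned automatically
  ceph osd crush class ls
  # Create replicated rules constrained to one class each
  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  # Point the existing pools at the new rules (this triggers data movement)
  ceph osd pool set hdd-pool crush_rule replicated_hdd
  ceph osd pool set ssd-pool crush_rule replicated_ssd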

Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Matt Benjamin
Yes, sorry to misstate that. I was conflating with lifecycle configuration support. Matt On Thu, Mar 14, 2019 at 10:06 AM Konstantin Shalygin wrote: > > On 3/14/19 8:58 PM, Matt Benjamin wrote: > > Sorry, object tagging. There's a bucket tagging question in another thread > > :) > > Luminous

Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 8:58 PM, Matt Benjamin wrote: Sorry, object tagging. There's a bucket tagging question in another thread :) Luminous works fine with object tagging, at least on 12.2.11: getObjectTagging and putObjectTagging both work. [k0ste@WorkStation]$ curl -s
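For anyone reproducing the check, the same calls can be made with the AWS CLI against RGW; a sketch with a hypothetical endpoint, bucket, and key:

  # Attach a tag to an existing object (endpoint/bucket/key are placeholders)
  aws --endpoint-url http://rgw.example.com s3api put-object-tagging --bucket testbucket --key testobject --tagging 'TagSet=[{Key=env,Value=test}]'
  # Read it back
  aws --endpoint-url http://rgw.example.com s3api get-object-tagging --bucket testbucket --key testobject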

Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Matt Benjamin
Sorry, object tagging. There's a bucket tagging question in another thread :) Matt On Thu, Mar 14, 2019 at 9:58 AM Matt Benjamin wrote: > > Hi Konstantin, > > Luminous does not support bucket tagging--although I've done Luminous > backports for downstream use, and would be willing to help with

Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Matt Benjamin
Hi Konstantin, Luminous does not support bucket tagging--although I've done Luminous backports for downstream use, and would be willing to help with upstream backports if there is community support. Matt On Thu, Mar 14, 2019 at 9:53 AM Konstantin Shalygin wrote: > > On 3/14/19 8:36 PM, Casey

Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 8:36 PM, Casey Bodley wrote: The bucket policy documentation just lists which actions the policy engine understands. Bucket tagging isn't supported, so those requests were misinterpreted as normal PUT requests to create a bucket. I opened https://github.com/ceph/ceph/pull/26952 to

Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Casey Bodley
The bucket policy documentation just lists which actions the policy engine understands. Bucket tagging isn't supported, so those requests were misinterpreted as normal PUT requests to create a bucket. I opened https://github.com/ceph/ceph/pull/26952 to return 405 Method Not Allowed there

Re: [ceph-users] Problems creating a balancer plan

2019-03-14 Thread Massimo Sgaravatto
For the record, the problem was that the new OSDs didn't have the weight-set defined. After having manually defined the weight-set for the new OSDs, I am able to create a plan. More info in the 'weight-set defined for some OSDs and not defined for the new installed ones' thread Cheers, Massimo

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I tried setting the weight-set for the 'new' OSDs as you suggested. What looks strange to me is that it was enough to set it for a single OSD to have the weight-set defined for all the OSDs. I defined the weight-set for osd.12, and it also got defined for osd.13..19 ... [*] At any rate after
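For completeness, the per-OSD compat weight-set assignment described here uses a command of this form; a sketch with an illustrative weight value:

  # Define (or correct) the compat weight-set entry for one of the new OSDs
  ceph osd crush weight-set reweight-compat osd.12 1.81898
  # Confirm a compat weight-set now exists
  ceph osd crush weight-set ls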

[ceph-users] Error in Mimic repo for Ubuntu 18.04

2019-03-14 Thread Robert Sander
Hi, when running "apt update" I get the following error: Err:6 http://download.ceph.com/debian-mimic bionic/main amd64 Packages File has unexpected size (13881 != 13883). Mirror sync in progress? [IP: 158.69.68.124 80] Hashes of expected file: - Filesize:13883 [weak] -

Re: [ceph-users] bluestore compression enabled but no data compressed

2019-03-14 Thread Ragan, Tj (Dr.)
Hi Frank, Did you ever get the 0.5 compression ratio thing figured out? Thanks -TJ Ragan On 23 Oct 2018, at 16:56, Igor Fedotov wrote: Hi Frank, On 10/23/2018 2:56 PM, Frank Schilder wrote: Dear David and Igor, thank you very much for your help. I have one more

Re: [ceph-users] OSD service won't stay running - pg incomplete

2019-03-14 Thread Paul Emmerich
You should never run a production cluster with this configuration. Have you tried to access the disk with ceph-objectstore-tool? The goal would be to export the shard of the PG on that disk and import it into any other OSD. Paul -- Paul Emmerich Looking for help with your Ceph cluster?
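For reference, the export/import step Paul describes usually looks roughly like this; a sketch assuming the OSDs involved are stopped, with placeholder OSD ids and PG id:

  # On the failing OSD's host (OSD stopped): export the PG shard
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-NN --pgid 1.2a --op export --file /tmp/pg-1.2a.export
  # On a healthy OSD's host (that OSD stopped): import the shard
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-MM --op import --file /tmp/pg-1.2a.export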

Re: [ceph-users] cluster is not stable

2019-03-14 Thread Zhenshi Zhou
Hi huang, I think I've found the root cause of why the monmap contains no features. As I moved the servers from one place to another, I modified the monmap once. However, the monmap is not the same on all mons. I modified the monmap on one of the mons, and created it from scratch on the other two
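A quick way to check whether all mons really agree on the monmap (and its features) is to compare the cluster's view with what each daemon reports; a sketch using standard tools:

  # Fetch the monmap as the cluster currently sees it and print it
  ceph mon getmap -o /tmp/monmap
  monmaptool --print /tmp/monmap
  # On each mon host, compare with what that daemon itself reports
  ceph daemon mon.$(hostname -s) mon_status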

[ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Konstantin Shalygin
Hi. I CC'ed Casey Bodley as the new RGW tech lead. The Luminous doc [1] says that the s3:GetBucketTagging & s3:PutBucketTagging methods are supported. But actually PutBucketTagging fails on Luminous 12.2.11 RGW with "provided input did not specify location constraint correctly". I think this is issue [2], but

Re: [ceph-users] cluster is not stable

2019-03-14 Thread Zhenshi Zhou
Hi, I'll try that command soon. It's a new cluster installed with mimic. Not sure what the exact reason is, but as far as I can think of, 2 things may cause this issue. One is that I moved these servers from one datacenter to this one, following steps [1]. Another is that I created a bridge using the

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 2:15 PM, Massimo Sgaravatto wrote: I have some clients running CentOS 7.4 with kernel 3.10. I was told that the minimum requirements are kernel >= 4.13 or CentOS >= 7.5. Yes, this is correct. k

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I have some clients running CentOS 7.4 with kernel 3.10. I was told that the minimum requirements are kernel >= 4.13 or CentOS >= 7.5. On Thu, Mar 14, 2019 at 8:11 AM Konstantin Shalygin wrote: > On 3/14/19 2:10 PM, Massimo Sgaravatto wrote: > > I am using Luminous everywhere > > I mean, what
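Once the clients are upgraded, the usual sequence before switching the balancer to upmap is to check the connected client features and then raise the requirement; a sketch:

  # Show which release level the connected clients report
  ceph features
  # Only after all clients report luminous or newer (kernel >= 4.13 for krbd/kcephfs)
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap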

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 2:10 PM, Massimo Sgaravatto wrote: I am using Luminous everywhere I mean, what is the version of your kernel clients? k

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I am using Luminous everywhere On Thu, Mar 14, 2019 at 8:09 AM Konstantin Shalygin wrote: > On 3/14/19 2:09 PM, Massimo Sgaravatto wrote: > > I plan to use upmap after having migrated all my clients to CentOS 7.6 > > What is your current release? > > > > k > >

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 2:09 PM, Massimo Sgaravatto wrote: I plan to use upmap after having migrated all my clients to CentOS 7.6 What is your current release? k

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
I plan to use upmap after having migrated all my clients to CentOS 7.6 On Thu, Mar 14, 2019 at 8:03 AM Konstantin Shalygin wrote: > On 3/14/19 2:02 PM, Massimo Sgaravatto wrote: > > Oh, I missed this information. > > > > So this means that, after having run once the balancer in compat mode, > >

Re: [ceph-users] cluster is not stable

2019-03-14 Thread huang jun
You can try those commands, but maybe you need to find the root cause of why the current monmap contains no features at all. Did you upgrade the cluster from luminous to mimic, or is it a new cluster installed with mimic? Zhenshi Zhou wrote on Thursday, 14 March 2019 at 2:37 PM: > > Hi huang, > > It's a pre-production

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 2:02 PM, Massimo Sgaravatto wrote: Oh, I missed this information. So this means that, after having run the balancer once in compat mode, if you add new OSDs you MUST manually define the weight-set for these newly added OSDs if you want to use the balancer, right? This is an

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
Oh, I missed this information. So this means that, after having run the balancer once in compat mode, if you add new OSDs you MUST manually define the weight-set for these newly added OSDs if you want to use the balancer, right? This is an important piece of information that IMHO should be in

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 1:53 PM, Massimo Sgaravatto wrote: So if I try to run the balancer in the current compat mode, should this also define the weight-set for the new OSDs? But if I try to create a balancer plan, I get an error [*] (while it worked before adding the new OSDs). Nope, the balancer creates

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
Ok, understood, thanks! So if I try to run the balancer in the current compat mode, should this also define the weight-set for the new OSDs? But if I try to create a balancer plan, I get an error [*] (while it worked before adding the new OSDs). [*] [root@c-mon-01 balancer]# ceph balancer
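For context, the plan workflow being attempted is roughly the following; a sketch with a hypothetical plan name:

  ceph balancer status
  ceph balancer mode crush-compat
  ceph balancer optimize myplan   # the step that errors out here
  ceph balancer show myplan
  ceph balancer execute myplan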

Re: [ceph-users] cluster is not stable

2019-03-14 Thread Zhenshi Zhou
Hi huang, It's a pre-production environment. If everything is fine, I'll use it for production. My cluster is on mimic; should I set all the features you listed in the command? Thanks huang jun wrote on Thursday, 14 March 2019 at 2:11 PM: > sorry, the script should be > for f in kraken luminous mimic

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Konstantin Shalygin
On 3/14/19 1:11 PM, Massimo Sgaravatto wrote: Thanks, I will try to set the weight-set for the new OSDs. But I am wondering what I did wrong to end up in such a scenario. You don't. You just use legacy. But why? Jewel clients? Old kernels? Is it normal that a newly created OSD has no weight-set

Re: [ceph-users] cluster is not stable

2019-03-14 Thread huang jun
Sorry, the script should be: for f in kraken luminous mimic osdmap-prune; do ceph mon feature set $f --yes-i-really-mean-it; done huang jun wrote on Thursday, 14 March 2019 at 2:04 PM: > > ok, if this is a **test environment**, you can try > for f in 'kraken,luminous,mimic,osdmap-prune'; do > ceph mon feature set
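Afterwards the change can be verified with:

  ceph mon feature ls   # the features should now be listed as persistent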

Re: [ceph-users] weight-set defined for some OSDs and not defined for the new installed ones

2019-03-14 Thread Massimo Sgaravatto
Thanks, I will try to set the weight-set for the new OSDs. But I am wondering what I did wrong to end up in such a scenario. Is it normal that a newly created OSD has no weight-set defined? Who is supposed to initially set the weight-set for an OSD? Thanks again, Massimo On Thu, Mar 14, 2019 at 6:52

Re: [ceph-users] cluster is not stable

2019-03-14 Thread huang jun
ok, if this is a **test environment**, you can try for f in 'kraken,luminous,mimic,osdmap-prune'; do ceph mon feature set $f --yes-i-really-mean-it done If it is a production environment, you should evaluate the risk first, and maybe set up a test cluster for testing first. Zhenshi Zhou