In the beginning, I created separate crush rules for the SSD and HDD pools
(six Ceph nodes), following this HOWTO:
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
Now I want to migrate to the standard crush rules that come with
Luminous. What is the
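For reference, the Luminous replacement for the old per-box SSD/HDD trick is device-class-aware crush rules. A sketch of the usual migration (pool and rule names here are examples, not from the thread):

```shell
# Luminous auto-assigns device classes (hdd/ssd/nvme); check what it detected
ceph osd crush class ls

# Create one replicated rule per device class:
#   ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Point each existing pool at the matching rule; data moves automatically
ceph osd pool set my-ssd-pool crush_rule replicated_ssd
ceph osd pool set my-hdd-pool crush_rule replicated_hdd
```

Switching the crush_rule on a pool triggers rebalancing, so it is worth doing one pool at a time on a loaded cluster.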
Hello fellow Cephers,
My 12.2.2 cluster is pretty full so I've been adding new nodes/OSDs.
Last week I added two new nodes with 12 OSDs each and they are still
backfilling. I have max_backfills tuned quite low across the board to
minimize client impact. Yesterday I brought two more nodes online
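Tuning backfill pressure down across the board, as described above, is typically done at runtime; a minimal sketch (the value 1 is an example):

```shell
# Check the current setting on one OSD via its admin socket
ceph daemon osd.0 config get osd_max_backfills

# Lower it on all OSDs at runtime to reduce client impact during backfill
ceph tell osd.* injectargs '--osd-max-backfills 1'
```

Injected values do not survive an OSD restart, so the same setting usually also goes into ceph.conf.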
Quoting Zack Brenton (z...@imposium.com):
> On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote:
>
> > Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to
> > increase the amount of OSDs (partitions) like Patrick suggested. By
> > default it will take 4 GiB per OSD ... Make sure
On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman wrote:
> Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to
> increase the amount of OSDs (partitions) like Patrick suggested. By
> default it will take 4 GiB per OSD ... Make sure you set the
> "osd_memory_target" parameter
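Setting osd_memory_target as suggested above is a ceph.conf fragment; 2 GiB below is an example value for a memory-constrained node, not a recommendation from the thread:

```ini
[osd]
# Cap BlueStore cache autotuning per OSD (bytes); default is 4 GiB
osd_memory_target = 2147483648
```

Note this is a target, not a hard limit; actual RSS can overshoot it somewhat.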
Hi,
I'm running the DC P4610 6TB (NVMe), no performance problem.
Not sure what the difference is with the D3-S4610.
- Original message -
From: "Kai Wembacher"
To: "ceph-users"
Sent: Tuesday, 12 March 2019 09:13:44
Subject: [ceph-users] Intel D3-S4610 performance
Hi everyone,
I have an Intel
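When comparing drives for Ceph on this list, the usual first check is single-threaded O_SYNC write performance (the test from the HOWTO often cited here); the device path below is a placeholder, and the test destroys data on it:

```shell
# Single-job, queue-depth-1 sync writes: the pattern Ceph journals/WAL generate
fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
```

Drives with power-loss protection typically sustain tens of thousands of IOPS here; consumer drives often collapse to a few hundred.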
Would you be willing to elaborate on what configuration specifically is bad?
That would be helpful for future reference.
Yes, we have tried to access it with ceph-objectstore-tool to export the shard.
The command spits out the tcmalloc lines shown in my previous output and then
crashes with an
Yes, sorry to misstate that. I was conflating with lifecycle
configuration support.
Matt
On Thu, Mar 14, 2019 at 10:06 AM Konstantin Shalygin wrote:
>
> On 3/14/19 8:58 PM, Matt Benjamin wrote:
> > Sorry, object tagging. There's a bucket tagging question in another thread
> > :)
>
> Luminous
On 3/14/19 8:58 PM, Matt Benjamin wrote:
Sorry, object tagging. There's a bucket tagging question in another thread :)
Luminous works fine with object tagging, at least on 12.2.11:
GetObjectTagging and PutObjectTagging both work.
[k0ste@WorkStation]$ curl -s
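The same object-tagging check can be done against RGW with the AWS CLI; the endpoint, bucket, and key below are placeholders:

```shell
# Attach a tag set to an existing object in RGW
aws --endpoint-url http://rgw.example.com s3api put-object-tagging \
    --bucket mybucket --key mykey \
    --tagging 'TagSet=[{Key=env,Value=prod}]'

# Read it back
aws --endpoint-url http://rgw.example.com s3api get-object-tagging \
    --bucket mybucket --key mykey
```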
Sorry, object tagging. There's a bucket tagging question in another thread :)
Matt
On Thu, Mar 14, 2019 at 9:58 AM Matt Benjamin wrote:
>
> Hi Konstantin,
>
> Luminous does not support bucket tagging--although I've done Luminous
> backports for downstream use, and would be willing to help with
Hi Konstantin,
Luminous does not support bucket tagging--although I've done Luminous
backports for downstream use, and would be willing to help with
upstream backports if there is community support.
Matt
On Thu, Mar 14, 2019 at 9:53 AM Konstantin Shalygin wrote:
>
> On 3/14/19 8:36 PM, Casey
On 3/14/19 8:36 PM, Casey Bodley wrote:
The bucket policy documentation just lists which actions the policy
engine understands. Bucket tagging isn't supported, so those requests
were misinterpreted as normal PUT requests to create a bucket. I
opened https://github.com/ceph/ceph/pull/26952 to
The bucket policy documentation just lists which actions the policy
engine understands. Bucket tagging isn't supported, so those requests
were misinterpreted as normal PUT requests to create a bucket. I opened
https://github.com/ceph/ceph/pull/26952 to return 405 Method Not Allowed
there
For the record, the problem was that the new OSDs didn't have the
weight-set defined.
After having manually defined the weight-set for the new OSDs, I am able to
create a plan.
More info in the 'weight-set defined for some OSDs and not defined for the
new installed ones' thread
Cheers, Massimo
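For anyone hitting the same thing: defining the compat weight-set manually looks roughly like this (OSD id and weight are examples; the weight is usually seeded from the OSD's crush weight):

```shell
# The compat weight-set shows up as an extra column here
ceph osd crush tree

# Create the compat weight-set if it does not exist yet
ceph osd crush weight-set create-compat

# Give the new OSD an initial weight-set entry
ceph osd crush weight-set reweight-compat osd.12 1.81940
```

After that, the balancer can include the new OSDs when building a plan.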
I tried setting the weight set for the 'new' OSDs as you suggested.
What looks strange to me is that it was enough to set it for a single OSD
to have the weight-set defined for all the OSDs.
I defined the weight set for osd.12, and it got defined also for osd.13..19
... [*]
At any rate after
Hi,
when running "apt update" I get the following error:
Err:6 http://download.ceph.com/debian-mimic bionic/main amd64 Packages
File has unexpected size (13881 != 13883). Mirror sync in progress? [IP:
158.69.68.124 80]
Hashes of expected file:
- Filesize:13883 [weak]
-
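This kind of size/hash mismatch is usually a mirror mid-sync; if it persists after the sync finishes, one workaround is to drop the stale index and retry (a sketch, not from the thread):

```shell
# Discard cached package indexes and fetch fresh ones
sudo apt clean
sudo rm -rf /var/lib/apt/lists/partial/*
sudo apt update
```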
Hi Frank,
Did you ever get the 0.5 compression ratio thing figured out?
Thanks
-TJ Ragan
On 23 Oct 2018, at 16:56, Igor Fedotov <ifedo...@suse.de> wrote:
Hi Frank,
On 10/23/2018 2:56 PM, Frank Schilder wrote:
Dear David and Igor,
thank you very much for your help. I have one more
You should never run a production cluster with this configuration.
Have you tried to access the disk with ceph-objectstore-tool? The goal
would be to export the shard of the PG on that disk and import it into
any other OSD.
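The export/import Paul describes looks roughly like this; OSD ids, the PG/shard id, and paths are examples, and both OSDs must be stopped while the tool runs:

```shell
# Export the shard from the failing OSD's data directory
systemctl stop ceph-osd@7
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --pgid 2.1as0 --op export --file /tmp/pg.2.1as0.export

# Import it into a healthy OSD, then restart both
systemctl stop ceph-osd@8
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8 \
    --op import --file /tmp/pg.2.1as0.export
systemctl start ceph-osd@8
```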
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster?
Hi huang,
I think I've found the root cause of why the monmap contains no
features. As I moved the servers from one place to another, I modified
the monmap once.
However, the monmap is not the same on all mons. I modified the monmap
on one of the mons, and created it from scratch on the other two
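A quick way to confirm a divergence like this (the monmap path is a placeholder for an extracted copy):

```shell
# Show persistent and quorum features the monitors have recorded
ceph mon feature ls

# Inspect an extracted monmap file directly
monmaptool --print /tmp/monmap
```

Running the feature listing against each mon should show which one is missing the feature bits.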
Hi.
I've CC'ed Casey Bodley as the new RGW tech lead.
The Luminous doc [1] says that the s3:GetBucketTagging & s3:PutBucketTagging
methods are supported. But actually PutBucketTagging fails on Luminous
12.2.11 RGW with "provided input did not specify location constraint
correctly". I think this is issue [2], but
Hi,
I'll try that command soon.
It's a new cluster installed with Mimic. Not sure what the exact reason is, but
as far as I can think of, 2 things may have caused this issue. One is that I
moved these servers from one datacenter to this one, following steps [1]. The
other is that I created a bridge using the
On 3/14/19 2:15 PM, Massimo Sgaravatto wrote:
I have some clients running centos7.4 with kernel 3.10
I was told that the minimum requirements are kernel >=4.13 or CentOS
>= 7.5.
Yes, this is correct.
k
___
ceph-users mailing list
I have some clients running centos7.4 with kernel 3.10
I was told that the minimum requirements are kernel >=4.13 or CentOS >= 7.5.
On Thu, Mar 14, 2019 at 8:11 AM Konstantin Shalygin wrote:
> On 3/14/19 2:10 PM, Massimo Sgaravatto wrote:
> > I am using Luminous everywhere
>
> I mean, what
On 3/14/19 2:10 PM, Massimo Sgaravatto wrote:
I am using Luminous everywhere
I mean, what is the version of your kernel clients?
k
I am using Luminous everywhere
On Thu, Mar 14, 2019 at 8:09 AM Konstantin Shalygin wrote:
> On 3/14/19 2:09 PM, Massimo Sgaravatto wrote:
> > I plan to use upmap after having migrated all my clients to CentOS 7.6
>
> What is your current release?
>
> k
On 3/14/19 2:09 PM, Massimo Sgaravatto wrote:
I plan to use upmap after having migrated all my clients to CentOS 7.6
What is your current release?
k
I plan to use upmap after having migrated all my clients to CentOS 7.6
On Thu, Mar 14, 2019 at 8:03 AM Konstantin Shalygin wrote:
> On 3/14/19 2:02 PM, Massimo Sgaravatto wrote:
> > Oh, I missed this information.
> >
> > So this means that, after having run once the balancer in compat mode,
> >
You can try those commands, but maybe you need to find the root cause
of why the current monmap contains no features at all. Did you upgrade
the cluster from Luminous to Mimic,
or is it a new cluster installed with Mimic?
Zhenshi Zhou wrote on Thu, 14 Mar 2019 at 14:37:
>
> Hi huang,
>
> It's a pre-production
On 3/14/19 2:02 PM, Massimo Sgaravatto wrote:
Oh, I missed this information.
So this means that, after having run the balancer once in compat mode,
if you add new OSDs you MUST manually define the weight-set for these
newly added OSDs if you want to use the balancer, right?
This is an
Oh, I missed this information.
So this means that, after having run the balancer once in compat mode, if
you add new OSDs you MUST manually define the weight-set for these newly
added OSDs if you want to use the balancer, right?
This is an important piece of information that IMHO should be in
On 3/14/19 1:53 PM, Massimo Sgaravatto wrote:
So if I try to run the balancer in the current compat mode, should
this define the weight-set also for the new OSDs ?
But if I try to create a balancer plan, I get an error [*] (while it
worked before adding the new OSDs).
Nope, balancer creates
Ok, understood, thanks !
So if I try to run the balancer in the current compat mode, should this
define the weight-set also for the new OSDs ?
But if I try to create a balancer plan, I get an error [*] (while it worked
before adding the new OSDs).
[*]
[root@c-mon-01 balancer]# ceph balancer
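For completeness, the usual plan workflow that the truncated command above is part of (the plan name is an example):

```shell
# Use the backward-compatible weight-set mode
ceph balancer mode crush-compat

# Build a plan, inspect it, and score the expected result
ceph balancer optimize myplan
ceph balancer show myplan
ceph balancer eval myplan

# Apply it once satisfied
ceph balancer execute myplan
```

The "optimize" step is where plan creation fails if some OSDs lack a weight-set entry.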
Hi huang,
It's a pre-production environment. If everything is fine, I'll use it for
production.
My cluster is version Mimic. Should I set all the features you listed in the
command?
Thanks
huang jun wrote on Thu, 14 Mar 2019 at 14:11:
> sorry, the script should be
> for f in kraken luminous mimic
On 3/14/19 1:11 PM, Massimo Sgaravatto wrote:
Thanks
I will try to set the weight-set for the new OSDs
But I am wondering what I did wrong to end up in such a scenario.
You don't. You just use legacy. But why? Jewel clients? Old kernels?
Is it normal that a newly created OSD has no weight-set
sorry, the script should be
for f in kraken luminous mimic osdmap-prune; do
ceph mon feature set $f --yes-i-really-mean-it
done
huang jun wrote on Thu, 14 Mar 2019 at 14:04:
>
> ok, if this is a **test environment**, you can try
> for f in 'kraken,luminous,mimic,osdmap-prune'; do
> ceph mon feature set
Thanks
I will try to set the weight-set for the new OSDs
But I am wondering what I did wrong to end up in such a scenario.
Is it normal that a newly created OSD has no weight-set defined?
Who is supposed to initially set the weight-set for an OSD?
Thanks again, Massimo
On Thu, Mar 14, 2019 at 6:52
ok, if this is a **test environment**, you can try
for f in 'kraken,luminous,mimic,osdmap-prune'; do
ceph mon feature set $f --yes-i-really-mean-it
done
If it is a production environment, you should evaluate the risk first, and
maybe set up a test cluster for testing first.
Zhenshi Zhou