Thanks guys.
Regards,
Kees
On 04-03-19 22:18, Smith, Eric wrote:
> This will cause data migration.
>
> -----Original Message-----
> From: ceph-users On Behalf Of Paul
> Emmerich
> Sent: Monday, March 4, 2019 2:32 PM
> To: Kees Meijs
> Cc: Ceph Users
> Subject: Re: [ceph-users] Altering crush-
sure thing.
sv5-ceph-rgw1
zonegroup get
{
    "id": "de6af748-1a2f-44a1-9d44-30799cf1313e",
    "name": "us",
    "api_name": "us",
    "is_master": "true",
    "endpoints": [
        "http://sv5-ceph-rgw1.savagebeast.com:8080"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "maste
Christian,
Can you provide your zonegroup and zones configurations for all 3 rgw sites?
(run the commands for each site please)
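For reference, the usual way to gather that on each site is the following sketch (run on a node of each zone; this assumes the default realm is in use, otherwise add `--rgw-zonegroup`/`--rgw-zone`):

```shell
# Dump the multisite configuration as seen by this site
radosgw-admin zonegroup get
radosgw-admin zone get
# Current replication state relative to the other zones
radosgw-admin sync status
```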
Thanks,
From: Christian Rice
Sent: Monday, March 4, 2019 5:34 PM
To: Matthew H; ceph-users
Subject: Re: radosgw sync falling behind r
Hello.
I successfully created the role and attached the permission policy, but it
still didn't work as expected.
When I request the root path, it returns an HTTP 400 error:
Request:
POST / HTTP/1.1
Host: 192.168.199.81:8080
Accept-Encoding: identity
Content-Length: 159
Content-Type: applicatio
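For comparison, a minimal way to reproduce that request from the command line (a sketch: the RoleArn and session name are placeholders, not values from this thread, and this omits the AWS SigV4 signature a real client such as boto3 would add):

```shell
# Hypothetical AssumeRole request; RoleArn/RoleSessionName are placeholders.
# STS expects an application/x-www-form-urlencoded POST body; a missing or
# wrong Content-Type is one common cause of an HTTP 400 here.
curl -s -X POST "http://192.168.199.81:8080" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data "Action=AssumeRole&Version=2011-06-15&RoleArn=arn:aws:iam:::role/S3Access&RoleSessionName=test_session"
```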
I have just spun up a small test environment to give the first RC a test
run.
Have managed to get a MON / MGR running fine on the latest .dev packages on
Ubuntu 18.04; however, when I try to enable the dashboard I get the
following error.
ceph mgr module enable dashboard
Error ENOENT: all mgr daemo
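One plausible cause (an assumption, not confirmed in this thread): the dashboard now ships as a separate ceph-mgr-dashboard package, so the module may simply be absent on the mgr host. A sketch of the check and fix on Ubuntu:

```shell
# Does the mgr actually ship the module?
ceph mgr module ls | grep -i dashboard
# If not, install the split-out package and retry
apt install ceph-mgr-dashboard
ceph mgr module enable dashboard
```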
Hi,
If QDV10130 pre-dates feb/march 2018, you may have suffered the same
firmware bug as existed on the DC S4600 series. I'm under NDA so I
can't bitch and moan about specifics, but your symptoms sound very
familiar.
It's entirely possible that there's *something* about bluestore that
has access
So we upgraded everything from 12.2.8 to 12.2.11, and things have gone to hell.
Lots of sync errors, like so:
sudo radosgw-admin sync error list
[
    {
        "shard_id": 0,
        "entries": [
            {
                "id": "1_1549348245.870945_5163821.1",
                "section": "da
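When digging into these, the overall state and the shard-level detail can be inspected as below (a sketch; error entries can be trimmed once the underlying cause is fixed, so that new errors stand out):

```shell
radosgw-admin sync status
radosgw-admin sync error list
# After resolving the cause, clear old entries
radosgw-admin sync error trim
```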
This will cause data migration.
-----Original Message-----
From: ceph-users On Behalf Of Paul Emmerich
Sent: Monday, March 4, 2019 2:32 PM
To: Kees Meijs
Cc: Ceph Users
Subject: Re: [ceph-users] Altering crush-failure-domain
Yes, these parts of the profile are just used to create a crush rule.
You can change the crush rule like any other crush rule.
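Concretely, that could look like the following sketch (pool, profile, and rule names are placeholders; repointing the pool is what triggers the data movement):

```shell
# New EC profile with the desired failure domain (names are placeholders)
ceph osd erasure-code-profile set ec-profile-osd k=3 m=1 crush-failure-domain=osd
# Derive a crush rule from it and switch the pool over (this moves data)
ceph osd crush rule create-erasure ec-rule-osd ec-profile-osd
ceph osd pool set mypool crush_rule ec-rule-osd
```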
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896
Hi Cephers,
Documentation on
http://docs.ceph.com/docs/master/rados/operations/erasure-code/ states:
> Choosing the right profile is important because it cannot be modified
> after the pool is created: a new pool with a different profile needs
> to be created and all objects from the previous poo
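In practice, the migration that doc passage describes could look roughly like this sketch (pool names and k/m are placeholders; note `rados cppool` is slow, single-threaded, and does not preserve snapshots):

```shell
# Create a new pool with the desired profile, then copy objects across
ceph osd erasure-code-profile set new-profile k=4 m=2 crush-failure-domain=host
ceph osd pool create new-pool 64 64 erasure new-profile
rados cppool old-pool new-pool
```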
On Mon, Mar 4, 2019 at 5:53 PM Jeff Layton wrote:
>
> On Mon, 2019-03-04 at 17:26 +, David C wrote:
> > Looks like you're right, Jeff. Just tried to write into the dir and am
> > now getting the quota warning. So I guess it was the libcephfs cache
> > as you say. That's fine for me, I don't n
Looks like you're right, Jeff. Just tried to write into the dir and am now
getting the quota warning. So I guess it was the libcephfs cache as you
say. That's fine for me, I don't need the quotas to be too strict, just a
failsafe really.
Interestingly, if I create a new dir, set the same 100MB quo
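For reference, directory quotas on CephFS are plain extended attributes; a 100 MB (104857600 byte) quota looks like the following (the mount path is a placeholder):

```shell
setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/somedir
getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir
```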
Thanks for the suggestions.
I've tried both -- setting osd_find_best_info_ignore_history_les = true and
restarting all OSDs, as well as 'ceph osd force-create-pg' -- but both
still show incomplete
PG_AVAILABILITY Reduced data availability: 2 pgs inactive, 2 pgs incomplete
pg 18.c is incomple
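To see why a PG refuses to go active, its reported recovery state is usually the place to look (pg id taken from the health output above):

```shell
ceph health detail
ceph pg 18.c query
```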
Bloated to ~4 GB per OSD and you are on HDDs?
13.2.3 backported the cache auto-tuning which targets 4 GB memory
usage by default.
See https://ceph.com/releases/13-2-4-mimic-released/
The bluestore_cache_* options are no longer needed. They are replaced
by osd_memory_target, defaulting to 4GB. Bl
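If 4 GB per OSD is too much for the nodes in question, the target can be lowered, e.g. to 2 GiB, in ceph.conf (the value here is illustrative):

```ini
[osd]
# 2 GiB per OSD instead of the 4 GiB default
osd_memory_target = 2147483648
```

At runtime the equivalent would be `ceph tell osd.* injectargs '--osd_memory_target 2147483648'`.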
There are two steps that have to be performed before calling AssumeRole:
1. A role named S3Access needs to be created, and an assume-role policy
document must be attached to it. For example,
radosgw-admin role create --role-name=S3Access
--path=/application_abc/component_xyz/
--assume-role
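Filled out, the two preparation steps look roughly like this sketch (the policy documents are minimal examples and the TESTER user ARN is a placeholder):

```shell
# Step 1: create the role with an assume-role policy document
radosgw-admin role create --role-name=S3Access \
  --path=/application_abc/component_xyz/ \
  --assume-role-policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}'
# Step 2: attach a permission policy to the role
radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy1 \
  --policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:*"],"Resource":["arn:aws:s3:::*"]}]}'
```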
List Members,
Patched a CentOS 7 based cluster from 13.2.2 to 13.2.4 last Monday; everything
appeared to be working fine.
Only this morning I found all OSDs in the cluster to be bloated in memory
footprint, possibly after the weekend backup through MDS.
Anyone else seeing a possible memory leak in 13.2.
I want to use the STS service to generate temporary credentials for use by
third-party clients.
I configured STS lite based on the documentation.
http://docs.ceph.com/docs/master/radosgw/STSLite/
This is my configuration file:
[global]
fsid = 42a7cae1-84d1-423e-93f4-04b0736c14aa
mon_initial_me
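The STS-specific pieces of that configuration come down to two rgw options (a sketch; the section name is a placeholder and the key is an example value used to encrypt the session token, which should be kept secret):

```ini
[client.rgw.gateway]
rgw sts key = abcdefghijklmnop
rgw s3 auth use sts = true
```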
Fixed: use only user id (swamireddy) instead of full openID url.
On Thu, Feb 28, 2019 at 7:04 PM M Ranga Swami Reddy
wrote:
>
> I tried to log in to the Ceph tracker - it is failing with the OpenID URL.
>
> I tried with my OpenID:
> http://tracker.ceph.com/login
>
> my id: https://code.launchpad.net/~swam
On 02/03/2019 01:02, Ravi Patel wrote:
Hello,
My question is how CRUSH distributes chunks throughout the cluster with
erasure coded pools. Currently, we have 4 OSD nodes with 36 drives (OSD
daemons) per node. If we use crush-failure-domain=host, then we are
necessarily limited to k=3,m=1, or k
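One commonly suggested workaround for the 4-host limit (a sketch, with illustrative k/m values): drop the failure domain to osd, which lifts the k+m <= number-of-hosts constraint at the cost of host-level fault tolerance:

```shell
ceph osd erasure-code-profile set wide-profile k=4 m=2 crush-failure-domain=osd
ceph osd erasure-code-profile get wide-profile
```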