[ceph-users] Ceph Octopus 15.2.11 - rbd diff --from-snap lists all objects

2021-05-12 Thread David Herselman
Hi, Has something changed with 'rbd diff' in Octopus, or have I hit a bug? I am no longer able to obtain the list of objects that have changed between two snapshots of an image; it always lists all allocated regions of the RBD image. This behaviour, however, only occurs when I add the
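For reference, the sequence being described boils down to something like this (pool, image, and snapshot names are placeholders):

  rbd snap create rbd/vm-disk@snap1
  # ... guest writes some data ...
  rbd snap create rbd/vm-disk@snap2
  # expected: only the extents that changed between the two snapshots
  rbd diff --from-snap snap1 rbd/vm-disk@snap2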

[ceph-users] Re: Manager carries wrong information until killing it

2021-05-12 Thread Nico Schottelius
Reed Dier writes:

> I don't have a solution to offer, but I've seen this for years with no
> solution. Any time a MGR bounces, be it for upgrades, a new daemon coming
> online, etc., I'll see a scale spike like the one reported below.

Interesting to read that we are not the only ones.

>

[ceph-users] Re: CRUSH rule for EC 6+2 on 6-node cluster

2021-05-12 Thread Bryan Stillwell
I was able to figure out the solution with this rule:

  step take default
  step choose indep 0 type host
  step chooseleaf indep 1 type osd
  step emit
  step take default
  step choose indep 0 type host
  step chooseleaf indep 1 type osd
  step emit
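A rule like this can be sanity-checked offline before deploying it; roughly (file names are placeholders, and the rule id is whatever the new rule was assigned):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # ... add the rule to crushmap.txt ...
  crushtool -c crushmap.txt -o crushmap-new.bin
  # simulate placements of 8 chunks and inspect the host spread
  crushtool -i crushmap-new.bin --test --rule 2 --num-rep 8 --show-mappings
  ceph osd setcrushmap -i crushmap-new.bin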

[ceph-users] CRUSH rule for EC 6+2 on 6-node cluster

2021-05-12 Thread Bryan Stillwell
I'm trying to figure out a CRUSH rule that will spread data out across my cluster as much as possible, but not more than 2 chunks per host. If I use the default rule with an osd failure domain like this:

  step take default
  step choose indep 0 type osd
  step emit

I get clustering of 3-4 chunks on
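For context, a complete rule of that default osd-failure-domain shape reads roughly as follows (name and id are placeholders), which is what allows several chunks to land on one host:

  rule ec62-osd {
      id 1
      type erasure
      step take default
      step choose indep 0 type osd
      step emit
  }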

[ceph-users] May 10 Upstream Lab Outage

2021-05-12 Thread David Galloway
Hi all, I wanted to provide an RCA for the outage you may have been affected by yesterday. Some services that went down:
- All CI/testing
- quay.ceph.io
- telemetry.ceph.com (your cluster may have gone into HEALTH_WARN if you report telemetry data)
- lists.ceph.io (so all mailing lists)
All

[ceph-users] Re: monitor connection error

2021-05-12 Thread Tuffli, Chuck
> -----Original Message-----
> From: Eugen Block [mailto:ebl...@nde.ag]
> Sent: Tuesday, May 11, 2021 11:39 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: monitor connection error
>
> Hi,
>
> > What is this error trying to tell me? TIA
>
> it tells you that the cluster is not reachable

[ceph-users] Re: Manager carries wrong information until killing it

2021-05-12 Thread Reed Dier
I don't have a solution to offer, but I've seen this for years with no solution. Any time a MGR bounces, be it for upgrades, a new daemon coming online, etc., I'll see a scale spike like the one reported below. Just out of curiosity, which MGR plugins are you using? I have historically used the
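For anyone comparing setups, the enabled manager modules can be listed with something like:

  ceph mgr module ls

(the "enabled_modules" section of the JSON output is the relevant part).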

[ceph-users] Re: Write Ops on CephFS Increasing exponentially

2021-05-12 Thread Kyle Dean
Hi Patrick, Thanks for getting back to me. It looks like I found the issue: I had thought I increased max_file_size on CephFS to 20 TB, but it turns out I missed a zero and set it to 1.89 TB. I had originally tried to fallocate the space for the 8 TB volume, which kept
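The limit in question is set per filesystem in bytes, so a 20 TiB cap would be applied with something like this (the filesystem name "cephfs" is a placeholder):

  # check the current limit (in bytes)
  ceph fs get cephfs | grep max_file_size
  # 20 TiB = 20 * 2^40 bytes
  ceph fs set cephfs max_file_size 21990232555520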

[ceph-users] Re: Ceph Month June 2021 Event

2021-05-12 Thread Mike Perez
Hi everyone, Today is the last day to get your proposal in for the Ceph Month June event! The types of talks include:
* Lightning talk - 5 minutes
* Presentation - 20 minutes with Q&A
* Unconference (BoF) - 40 minutes
We will be confirming the date/time with speakers by May 16th.

[ceph-users] Re: RGW federated user cannot access created bucket

2021-05-12 Thread Pritha Srivastava
The federated user will be allowed to perform only those s3 actions that are explicitly allowed by the role's permission policy. The permission policy is there for someone to exercise finer-grained control over which s3 actions are allowed and which are not, hence it differs from what regular users are
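As a sketch of attaching such a permission policy to a role (the role name, policy name, and allowed actions here are placeholders):

  radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy1 \
    --policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:CreateBucket","s3:ListBucket"],"Resource":"arn:aws:s3:::*"}]}'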

[ceph-users] Re: Using ID of a federated user in a bucket policy in RGW

2021-05-12 Thread Pritha Srivastava
Hi, Can you try with the following ARN: arn:aws:iam:::user/oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b The format of the user id is: <tenant>$<user id>, and in $oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b the '$' before oidc is a separator for the tenant, which is empty here, and the ARN for a user is of the format:
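Put into a bucket policy, the suggested ARN would be used along these lines (the bucket name "my-bucket" is a placeholder):

  cat > policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b"]},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }]
  }
  EOF
  aws --endpoint=$HOST_S3_API s3api put-bucket-policy --bucket my-bucket --policy file://policy.json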

[ceph-users] Re: Ceph stretch mode enabling

2021-05-12 Thread Eugen Block
Hi, I just deployed a test cluster to try that out, too. I only deployed three MONs, but this should also apply. I tried to create the third datacenter and put the tiebreaker there but got the following error:
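For reference, the stretch mode setup being attempted corresponds roughly to these commands (mon names and datacenter names are placeholders):

  ceph mon set_location a datacenter=dc1
  ceph mon set_location b datacenter=dc2
  ceph mon set_location c datacenter=dc3
  ceph mon enable_stretch_mode c stretch_rule datacenter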

[ceph-users] RGW segmentation fault on Pacific 16.2.1 with multipart upload

2021-05-12 Thread Daniel Iwan
Hi, I have started to see segfaults during multipart upload to one of the buckets. The file is about 60 MB in size. Upload of the same file to a brand new bucket works OK. Command used: aws --profile=tester --endpoint=$HOST_S3_API --region="" s3 cp ./pack-a9201afb4682b74c7c5a5d6070e661662bdfea1a.pack
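A rough reproduction under default CLI settings, which switch to multipart above 8 MB (the destination bucket name is a placeholder):

  aws configure set default.s3.multipart_threshold 8MB
  aws --profile=tester --endpoint=$HOST_S3_API s3 cp \
    ./pack-a9201afb4682b74c7c5a5d6070e661662bdfea1a.pack s3://existing-bucket/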

[ceph-users] RGW federated user cannot access created bucket

2021-05-12 Thread Daniel Iwan
Hi all, The scenario is as follows: a federated user assumes a role via AssumeRoleWithWebIdentity, which gives permission to create a bucket. The user creates a bucket and becomes the owner (this is visible in Ceph's web UI as Owner $oidc$7f71c7c5-c24f-418e-87ac-aa8fe271289b). The user cannot list the content of
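The first step of that scenario corresponds to a call like this (the role ARN, session name, and token variable are placeholders):

  aws --endpoint=$HOST_S3_API sts assume-role-with-web-identity \
    --role-arn arn:aws:iam:::role/S3Access \
    --role-session-name test-session \
    --web-identity-token "$OIDC_TOKEN"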

[ceph-users] Using ID of a federated user in a bucket policy in RGW

2021-05-12 Thread Daniel Iwan
Hi all, I'm working on the following scenario: a user is authenticated with OIDC and tries to access a bucket which it does not own. How do I specify the user ID etc. to give access to such a user? By trial and error I found out that the principal can be specified as "Principal":

[ceph-users] Re: monitor connection error

2021-05-12 Thread Eugen Block
Hi,

> What is this error trying to tell me? TIA

it tells you that the cluster is not reachable by the client; this can have various reasons. Can you show the output of your conf file?

cat /etc/ceph/es-c1.conf

Is the monitor service up and running? I take it you don't use cephadm yet, so it's
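A minimal client-side conf for reaching the monitors looks something like this (the fsid and addresses are placeholders):

  [global]
  fsid = 11111111-2222-3333-4444-555555555555
  mon_host = 192.168.0.10, 192.168.0.11, 192.168.0.12

The monitor service itself can be checked on the mon host, e.g. with systemctl status ceph-mon@<hostname>.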