Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-10-10 Thread Paul Emmerich
I've also encountered this issue on a cluster yesterday; one CPU got stuck in an infinite loop in get_obj_data::flush and it stopped serving requests. I've updated the tracker issue accordingly. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-26 Thread Vladimir Brik
or on its own. I’m going to check with others who’re more familiar with this code path. Begin forwarded message: From: Vladimir Brik <vladimir.b...@icecube.wisc.edu> Subject: Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred Date: August 21, 2019

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-23 Thread Eric Ivancich
even though the machine is not being used for data transfers (nothing in radosgw logs, couple of KB/s of network). This situation can affect any number of our rados gateways, lasts from a few hours to a few days and stops if the radosgw process is restarted or on i

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread Vladimir Brik
> Are you running multisite? No > Do you have dynamic bucket resharding turned on? Yes. "radosgw-admin reshard list" prints "[]" > Are you using lifecycle? I am not sure. How can I check? "radosgw-admin lc list" says "[]" > And just to be clear -- sometimes all 3 of your rados gateways are >

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread J. Eric Ivancich
On 8/21/19 10:22 AM, Mark Nelson wrote: > Hi Vladimir, > > > On 8/21/19 8:54 AM, Vladimir Brik wrote: >> Hello >> [much elided] > You might want to try grabbing a callgraph from perf instead of just > running perf top or using my wallclock profiler to see if you can drill > down and find out

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread Vladimir Brik
Correction: the number of threads stuck using 100% of a CPU core varies from 1 to 5 (it's not always 5) Vlad On 8/21/19 8:54 AM, Vladimir Brik wrote: Hello I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically, radosgw process on those machines starts consuming 100% of 5

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread Paul Emmerich
On Wed, Aug 21, 2019 at 3:55 PM Vladimir Brik wrote: > > Hello > > I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically, > radosgw process on those machines starts consuming 100% of 5 CPU cores > for days at a time, even though the machine is not being used for data > transfers

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread Mark Nelson
Hi Vladimir, On 8/21/19 8:54 AM, Vladimir Brik wrote: Hello I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically, radosgw process on those machines starts consuming 100% of 5 CPU cores for days at a time, even though the machine is not being used for data transfers

[ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread Vladimir Brik
Hello I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically, radosgw process on those machines starts consuming 100% of 5 CPU cores for days at a time, even though the machine is not being used for data transfers (nothing in radosgw logs, couple of KB/s of network). This

Re: [ceph-users] radosgw (beast): how to enable verbose log? request, user-agent, etc.

2019-08-07 Thread Félix Barbeira
> But at least in our case we switched back to civetweb because beast doesn’t provide a clear log without a lot of verbosity. > Regards > Manuel > From: ceph-users On behalf of Félix Barbeira > Sent: Tuesday, August 6

Re: [ceph-users] RadosGW (Ceph Object Gateay) Pools

2019-08-06 Thread EDH - Manuel Rios Fernandez
Hi, I think -> default.rgw.buckets.index; for us it reaches 2k-6K IOPS for an index size of 23GB. Regards Manuel -----Original Message----- From: ceph-users On behalf of dhils...@performair.com Sent: Wednesday, August 7, 2019 1:41 To: ceph-users@lists.ceph.com Subject: [ceph-us

[ceph-users] RadosGW (Ceph Object Gateay) Pools

2019-08-06 Thread DHilsbos
All; Based on the PG Calculator, on the Ceph website, I have this list of pools to pre-create for my Object Gateway: .rgw.root default.rgw.control default.rgw.data.root default.rgw.gc default.rgw.log default.rgw.intent-log default.rgw.meta default.rgw.usage default.rgw.users.keys

Re: [ceph-users] radosgw (beast): how to enable verbose log? request, user-agent, etc.

2019-08-06 Thread EDH - Manuel Rios Fernandez
Sent: Tuesday, August 6, 2019 17:43 To: Ceph Users Subject: [ceph-users] radosgw (beast): how to enable verbose log? request, user-agent, etc. Hi, I'm testing radosgw with the beast backend and I did not find a way to view more information in the logfile. This is an example: 2019-08-06

[ceph-users] radosgw (beast): how to enable verbose log? request, user-agent, etc.

2019-08-06 Thread Félix Barbeira
Hi, I'm testing radosgw with the beast backend and I did not find a way to view more information in the logfile. This is an example: 2019-08-06 16:59:14.488 7fc808234700 1 == starting new request req=0x5608245646f0 = 2019-08-06 16:59:14.496 7fc808234700 1 == req done req=0x5608245646f0 op

[ceph-users] radosgw user audit trail

2019-07-08 Thread shubjero
Good day, We have a sizeable ceph deployment and use object-storage heavily. We also integrate our object-storage with OpenStack but sometimes we are required to create S3 keys for some of our users (aws-cli, java apps that speak s3, etc). I was wondering if it is possible to see an audit trail

Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread Matt Benjamin
benj...@redhat.com] > Sent: Friday, June 28, 2019 9:48 AM > To: Dominic Hilsbos > Cc: ceph-users > Subject: Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored? > > Hi Dominic, > > The reason is likely that RGW doesn't yet support ListObjectsV2. > > Support is nearly here tho

Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread DHilsbos
- From: Matt Benjamin [mailto:mbenj...@redhat.com] Sent: Friday, June 28, 2019 9:48 AM To: Dominic Hilsbos Cc: ceph-users Subject: Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored? Hi Dominic, The reason is likely that RGW doesn't yet support ListObjectsV2. Support is nearly here

Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread Matt Benjamin
Hi Dominic, The reason is likely that RGW doesn't yet support ListObjectsV2. Support is nearly here though: https://github.com/ceph/ceph/pull/28102 Matt On Fri, Jun 28, 2019 at 12:43 PM wrote: > > All; > > I've got a RADOSGW instance setup, backed by my demonstration Ceph cluster. > I'm
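Until that support lands, clients have to page with the V1 listing call, threading the last returned key back in as the marker. A minimal sketch of that loop; the stub below fakes a boto3-style list_objects response and is illustrative only, not a real endpoint:

```python
def list_all_keys(list_objects, bucket):
    """Drain a V1 (marker-based) listing until IsTruncated is false."""
    keys, marker = [], ""
    while True:
        resp = list_objects(Bucket=bucket, Marker=marker)
        page = [o["Key"] for o in resp.get("Contents", [])]
        keys.extend(page)
        if not resp.get("IsTruncated"):
            return keys
        marker = page[-1]  # next page starts after this key

def fake_list_objects(Bucket, Marker=""):
    """Stand-in for a real S3 client, returning two keys per page."""
    all_keys = ["a", "b", "c", "d", "e"]
    start = all_keys.index(Marker) + 1 if Marker else 0
    return {
        "Contents": [{"Key": k} for k in all_keys[start:start + 2]],
        "IsTruncated": start + 2 < len(all_keys),
    }
```

The same loop works unchanged against a real client once ListObjectsV2 support arrives, since V2 merely swaps Marker for a ContinuationToken.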

[ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread DHilsbos
All; I've got a RADOSGW instance setup, backed by my demonstration Ceph cluster. I'm using Amazon's S3 SDK, and I've run into an annoying little snag. My code looks like this: amazonS3 = builder.build(); ListObjectsV2Request req = new

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-25 Thread M Ranga Swami Reddy
Thank you.. Looking into the URL... On Tue, 25 Jun, 2019, 12:18 PM Torben Hørup, wrote: > Hi > > You could look into the radosgw elasicsearch sync module, and use that > to find the objects last modified. > > http://docs.ceph.com/docs/master/radosgw/elastic-sync-module/ > > /Torben > > On

Re: [ceph-users] Radosgw federation replication

2019-06-25 Thread Marcelo Mariano Miziara
Marcelo M. Miziara Serviço Federal de Processamento de Dados - SERPRO marcelo.mizi...@serpro.gov.br From: "Behnam Loghmani" To: "ceph-users" Sent: Tuesday, June 25, 2019 4:07:08 Subject: [ceph-users] Radosgw federation replication Hi there, I have a Ceph

[ceph-users] Radosgw federation replication

2019-06-25 Thread Behnam Loghmani
Hi there, I have a Ceph cluster with radosgw and have used it in my production environment for a while. Now I have decided to set up another cluster in another geo place to have a disaster recovery plan. I read some docs like http://docs.ceph.com/docs/jewel/radosgw/federated-config/, but all of them are

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-25 Thread Torben Hørup
Hi, you could look into the radosgw elasticsearch sync module, and use that to find the objects' last modified time. http://docs.ceph.com/docs/master/radosgw/elastic-sync-module/ /Torben On 25.06.2019 08:19, M Ranga Swami Reddy wrote: Thanks for the reply. Btw, one of my customers wants to get the

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-25 Thread M Ranga Swami Reddy
Thanks for the reply. Btw, one of my customers wants to get the objects based on the last-modified date field. How can we achieve this? On Thu, Jun 13, 2019 at 7:09 PM Paul Emmerich wrote: > There's no (useful) internal ordering of these entries, so there isn't a > more efficient way than getting

[ceph-users] radosgw multisite replication segfaults on init in 13.2.6

2019-06-14 Thread Płaza Tomasz
Hi, We have a standalone ceph cluster v13.2.6 and wanted to replicate it to another DC. After going through "Migrating a Single Site System to Multi-Site" and "Configure a Secondary Zone" from http://docs.ceph.com/docs/master/radosgw/multisite/, we have set up all buckets to "disable

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-13 Thread Paul Emmerich
There's no (useful) internal ordering of these entries, so there isn't a more efficient way than getting everything and sorting it :( Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49
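Given that answer, the practical approach is to pull every listing entry and filter/sort client-side. A minimal sketch, assuming each parsed entry carries its key under "name" and a datetime under "mtime" (these field names are illustrative, not a fixed radosgw-admin schema):

```python
from datetime import date, datetime

def modified_on(entries, day):
    """Return entries last modified on the given day, oldest first."""
    hits = [e for e in entries if e["mtime"].date() == day]
    return sorted(hits, key=lambda e: e["mtime"])

# Stand-in for a full bucket listing fetched via radosgw-admin or S3.
entries = [
    {"name": "a.iso", "mtime": datetime(2019, 6, 1, 9, 0)},
    {"name": "b.iso", "mtime": datetime(2019, 5, 30, 12, 0)},
    {"name": "c.iso", "mtime": datetime(2019, 6, 1, 7, 30)},
]
```

For large buckets this means transferring the whole listing every time, which is exactly why the elasticsearch sync module mentioned earlier in the thread is the better fit for recurring queries.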

[ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-13 Thread M Ranga Swami Reddy
Hello - can we list the objects in rgw via last modified date? For example, I wanted to list all the objects which were modified on 01 Jun 2019. Thanks Swami ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] radosgw dying

2019-06-09 Thread DHilsbos
To: Paul Emmerich Cc: Dominic Hilsbos; Ceph Users Subject: Re: [ceph-users] radosgw dying For just core rgw services it will need these 4 .rgw.root default.rgw.control default.rgw.meta default.rgw.log When creating

Re: [ceph-users] radosgw dying

2019-06-09 Thread DHilsbos
From: Sent: Saturday, June 08, 2019 3:35 AM To: Dominic Hilsbos Subject: Re: [ceph-users] radosgw dying Can you post this? ceph osd df On Fri, Jun 7, 2019 at 7:31 PM <dhils...@performair.com> wrote: All; I have a test and demonstration cluster running (3 hosts, MON, MGR, 2x O

Re: [ceph-users] radosgw dying

2019-06-09 Thread Torben Hørup
For just core rgw services it will need these 4 .rgw.root

Re: [ceph-users] radosgw dying

2019-06-09 Thread Paul Emmerich
rgw uses more than one pool. (5 or 6 IIRC) -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Sun, Jun 9, 2019 at 7:00 PM wrote: > Huan; > > I get that, but the pool

Re: [ceph-users] radosgw dying

2019-06-09 Thread Brett Chancellor
radosgw will try to create all of the default pools if they are missing. The number of pools changes depending on the version, but it's somewhere around 5. On Sun, Jun 9, 2019, 1:00 PM wrote: > Huan; > > I get that, but the pool already exists, why is radosgw trying to create > one? > >

Re: [ceph-users] radosgw dying

2019-06-09 Thread DHilsbos
Huan; I get that, but the pool already exists, why is radosgw trying to create one? Dominic Hilsbos On Sat, Jun 8, 2019 at 2:55 AM -0700, "huang jun" <hjwsm1...@gmail.com> wrote: From the error message, I'm inclined to think that

Re: [ceph-users] radosgw dying

2019-06-08 Thread huang jun
From the error message, I'm inclined to think that 'mon_max_pg_per_osd' was exceeded; you can check its value. Its default is 250, so you can have at most 1500 PG instances (250*6 OSDs), and for replicated pools with size=3, you can have 500 PGs across all pools. You already have 448 PGs, so the next pool
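The arithmetic in that explanation can be sketched directly (values taken from the thread; 250 is the upstream default for mon_max_pg_per_osd):

```python
# PG budget math: mon_max_pg_per_osd caps PG replicas per OSD, so the
# cluster-wide budget is that cap times the OSD count, divided by the
# pool replication factor.
def pg_budget(mon_max_pg_per_osd=250, num_osds=6, replica_size=3):
    """Maximum number of PGs the cluster can hold at this replica size."""
    total_pg_instances = mon_max_pg_per_osd * num_osds  # 250 * 6 = 1500
    return total_pg_instances // replica_size           # 1500 / 3 = 500

def remaining_pgs(current_pgs=448, **kwargs):
    """How many PGs are left before pool creation starts failing."""
    return pg_budget(**kwargs) - current_pgs            # 500 - 448 = 52
```

So in this cluster a new replicated pool asking for more than 52 PGs would push past the limit, which matches the pool-creation failure being debugged in this thread.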

[ceph-users] radosgw dying

2019-06-07 Thread DHilsbos
All; I have a test and demonstration cluster running (3 hosts, MON, MGR, 2x OSD per host), and I'm trying to add a 4th host for gateway purposes. The radosgw process keeps dying with: 2019-06-07 15:59:50.700 7fc4ef273780 0 ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972)

Re: [ceph-users] Radosgw in container

2019-06-05 Thread Brett Chancellor
It works okay. You need a ceph.conf and a generic radosgw cephx key. That's it. On Wed, Jun 5, 2019, 5:37 AM Marc Roos wrote: > > > Has anyone put the radosgw in a container? What files do I need to put > in the sandbox directory? Are there other things I should consider? > > > >

[ceph-users] Radosgw in container

2019-06-05 Thread Marc Roos
Has anyone put the radosgw in a container? What files do I need to put in the sandbox directory? Are there other things I should consider?

Re: [ceph-users] radosgw index all keys in all buckets [EXT]

2019-05-13 Thread Matthew Vernon
Hi, On 02/05/2019 22:00, Aaron Bassett wrote: > With these caps I'm able to use a python radosgw-admin lib to list > buckets and acls and users, but not keys. This user is also unable to > read buckets and/or keys through the normal s3 api. Is there a way to > create an s3 user that has read

Re: [ceph-users] Radosgw object size limit?

2019-05-10 Thread Jan Kasprzak
Hello, thanks for your help. Casey Bodley wrote: : It looks like the default.rgw.buckets.non-ec pool is missing, which : is where we track in-progress multipart uploads. So I'm guessing : that your perl client is not doing a multipart upload, where s3cmd : does by default. : : I'd

Re: [ceph-users] Radosgw object size limit?

2019-05-10 Thread Casey Bodley
On 5/10/19 10:20 AM, Jan Kasprzak wrote: Hello Casey (and the ceph-users list), I am returning to my older problem to which you replied: Casey Bodley wrote: : There is a rgw_max_put_size which defaults to 5G, which limits the : size of a single PUT request. But in that case, the http

Re: [ceph-users] Radosgw object size limit?

2019-05-10 Thread Jan Kasprzak
Hello Casey (and the ceph-users list), I am returning to my older problem to which you replied: Casey Bodley wrote: : There is a rgw_max_put_size which defaults to 5G, which limits the : size of a single PUT request. But in that case, the http response : would be 400 EntityTooLarge. For

[ceph-users] radosgw daemons constantly reading default.rgw.log pool

2019-05-03 Thread Vladimir Brik
Hello, I have set up a rados gateway using "ceph-deploy rgw create" (default pools, 3 machines acting as gateways) on Ceph 13.2.5. For over 2 weeks now, the three rados gateways have been generating a constant ~30MB/s and ~4K ops/s of read I/O on default.rgw.log even though nothing is using the rados

[ceph-users] radosgw index all keys in all buckets

2019-05-02 Thread Aaron Bassett
Hello, I'm trying to write a tool to index all keys in all buckets stored in radosgw. I've created a user with the following caps: "caps": [ { "type": "buckets", "perm": "read" }, { "type": "metadata", "perm": "read"

Re: [ceph-users] RadosGW ops log lag?

2019-04-17 Thread Matt Benjamin
It should not be best effort. As written, exactly rgw_usage_log_flush_threshold outstanding log entries will be buffered. The default value for this parameter is 1024, which is probably not high for a sustained workload, but you could experiment with reducing it. Matt On Fri, Apr 12, 2019 at
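That experiment would be a ceph.conf override along these lines (a sketch: the section name is a placeholder for your gateway instance, and 64 is just a value to test with, not a recommendation):

```ini
[client.rgw.gateway-1]
# Flush buffered ops/usage log entries after fewer outstanding entries
# than the 1024 default, trading extra small writes for lower lag.
rgw usage log flush threshold = 64
```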

[ceph-users] radosgw in Nautilus: message "client_io->complete_request() returned Broken pipe"

2019-04-17 Thread Francois Lafont
Hi @ll, I have a Nautilus Ceph cluster UP with radosgw in a zonegroup. I'm using the web frontend Beast (the default in Nautilus). All seems to work fine but in the log of radosgw I have this message: Apr 17 14:02:56 rgw-m-1 ceph-m-rgw.rgw-m-1.rgw0[888]: 2019-04-17 14:02:56.410

Re: [ceph-users] RadosGW ops log lag?

2019-04-12 Thread Aaron Bassett
Ok thanks. Is the expectation that events will be available on that socket as soon as they occur, or is it more of a best-effort situation? I'm just trying to nail down which side of the socket might be lagging. It's pretty difficult to recreate this as I have to hit the cluster very hard to get

Re: [ceph-users] RadosGW ops log lag?

2019-04-12 Thread Matt Benjamin
Hi Aaron, I don't think that exists currently. Matt On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett wrote: > > I have an radogw log centralizer that we use to for an audit trail for data > access in our ceph clusters. We've enabled the ops log socket and added > logging of the

[ceph-users] RadosGW ops log lag?

2019-04-12 Thread Aaron Bassett
I have a radosgw log centralizer that we use for an audit trail for data access in our ceph clusters. We've enabled the ops log socket and added logging of the http_authorization header to it: rgw log http headers = "http_authorization" rgw ops log socket path = /var/run/ceph/rgw-ops.sock

Re: [ceph-users] radosgw cloud sync aws s3 auth failed

2019-04-08 Thread Robin H. Johnson
On Mon, Apr 08, 2019 at 06:38:59PM +0800, 黄明友 wrote: > Hi all, > I have tested the cloud sync module in radosgw. Ceph version is 13.2.5, git commit id is cbff874f9007f1869bfd3821b7e33b2a6ffd4988. Reading src/rgw/rgw_rest_client.cc shows that it only generates v2

[ceph-users] radosgw cloud sync aws s3 auth failed

2019-04-08 Thread 黄明友
Hi all, I have tested the cloud sync module in radosgw. Ceph version is 13.2.5, git commit id is cbff874f9007f1869bfd3821b7e33b2a6ffd4988; when syncing to an AWS S3 endpoint I get an HTTP 400 error, so I used the http:// protocol and the tcpick tool to dump some messages like this. PUT /wuxi01

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-11 Thread Trey Palmer
Hi Casey, We're still trying to figure this sync problem out; if you could possibly tell us anything further we would be deeply grateful! Our errors are coming from 'data sync'. In `sync status` we pretty constantly show one shard behind, but a different one each time we run it. Here's a

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-08 Thread Casey Bodley
(cc ceph-users) Can you tell whether these sync errors are coming from metadata sync or data sync? Are they blocking sync from making progress according to your 'sync status'? On 3/8/19 10:23 AM, Trey Palmer wrote: Casey, Having done the 'reshard stale-instances delete' earlier on the

Re: [ceph-users] Radosgw object size limit?

2019-03-07 Thread Casey Bodley
There is a rgw_max_put_size which defaults to 5G, which limits the size of a single PUT request. But in that case, the http response would be 400 EntityTooLarge. For multipart uploads, there's also a rgw_multipart_part_upload_limit that defaults to 10000 parts, which would cause a 416
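For a sense of scale under those defaults, the part arithmetic works out as follows (15 MiB is s3cmd's default multipart chunk size; the helper is just illustration):

```python
import math

GIB = 1024 ** 3
RGW_MAX_PUT_SIZE = 5 * GIB  # default cap on a single PUT request

def min_parts(object_size, part_size):
    """Number of multipart parts needed at a given part size."""
    return math.ceil(object_size / part_size)

# An 11 GiB object cannot go up as one PUT (11 GiB > 5 GiB), but as a
# multipart upload it needs only 751 parts at s3cmd's 15 MiB chunks,
# or as few as 3 parts at the maximum 5 GiB part size.
parts_s3cmd = min_parts(11 * GIB, 15 * 1024 ** 2)
parts_max = min_parts(11 * GIB, RGW_MAX_PUT_SIZE)
```

Either way the count sits far below the part-upload limit, which is why a client doing proper multipart uploads (as s3cmd does by default) succeeds where a single-PUT client fails.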

[ceph-users] Radosgw object size limit?

2019-03-07 Thread Jan Kasprzak
Hello, Ceph users, does radosgw have an upper limit of object size? I tried to upload a 11GB file using s3cmd, but it failed with InvalidRange error: $ s3cmd put --verbose centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso s3://mybucket/ INFO: No cache file found, creating it.

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-06 Thread Trey Palmer
It appears we eventually got 'data sync init' working. At least, it's worked on 5 of the 6 sync directions in our 3-node cluster. The sixth has not run without an error returned, although 'sync status' does say "preparing for full sync". Thanks, Trey On Wed, Mar 6, 2019 at 1:22 PM Trey Palmer

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-06 Thread Trey Palmer
Casey, This was the result of trying 'data sync init': root@c2-rgw1:~# radosgw-admin data sync init ERROR: source zone not specified root@c2-rgw1:~# radosgw-admin data sync init --source-zone= WARNING: cannot find source zone id for name= ERROR: sync.init_sync_status() returned ret=-2

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-06 Thread Trey Palmer
Casey, You are spot on that almost all of these are deleted buckets. At some point in the last few months we deleted and replaced buckets with underscores in their names, and those are responsible for most of these errors. Thanks very much for the reply and explanation. We’ll give ‘data sync

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-06 Thread Casey Bodley
Hi Trey, I think it's more likely that these stale metadata entries are from deleted buckets, rather than accidental bucket reshards. When a bucket is deleted in a multisite configuration, we don't delete its bucket instance because other zones may still need to sync the object deletes - and

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-05 Thread Trey Palmer
Casey, Thanks very much for the reply! We definitely have lots of errors on sync-disabled buckets and the workaround for that is obvious (most of them are empty anyway). Our second form of error is stale buckets. We had dynamic resharding enabled but have now disabled it (having discovered it

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-05 Thread Casey Bodley
Hi Christian, I think you've correctly intuited that the issues are related to the use of 'bucket sync disable'. There was a bug fix for that feature in http://tracker.ceph.com/issues/26895, and I recently found that a block of code was missing from its luminous backport. That missing code is

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-05 Thread Matthew H
Hi Christian, To be on the safe side and future proof yourself will want to go ahead and set the following in your ceph.conf file, and then issue a restart to your RGW instances. rgw_dynamic_resharding = false There are a number of issues with dynamic resharding, multisite rgw problems being

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-05 Thread Christian Rice
Matthew, first of all, let me say we very much appreciate your help! So I don’t think we turned dynamic resharding on, nor did we manually reshard buckets. Seems like it defaults to on for luminous but the mimic docs say it’s not supported in multisite. So do we need to disable it manually

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-05 Thread Trey Palmer
"endpoints": [ "http://sv3-ceph-rgw1:8080" ], "log_meta": "false", "log_data": "true", "bucket_index_max_shards":

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Christian Rice
": "dc11-prod.rgw.buckets.non-ec", "index_type": 0, "compression": "" } } ], "metadata_heap": "", "tier_config": [], "realm_id": "b3e2afe7

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Matthew H
ing and data syncing ( both separate issues ) that you could be hitting. Thanks, ________ From: ceph-users on behalf of Christian Rice Sent: Wednesday, February 27, 2019 7:05 PM To: ceph-users Subject: [ceph-users] radosgw sync falling behind regularly Debian 9;

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Christian Rice
be hitting. Thanks, ____ From: ceph-users on behalf of Christian Rice Sent: Wednesday, February 27, 2019 7:05 PM To: ceph-users Subject: [ceph-users] radosgw sync falling behind regularly Debian 9; ceph 12.8.8-bpo90+1; no rbd or cephfs, just radosgw; three clu

Re: [ceph-users] radosgw sync falling behind regularly

2019-02-28 Thread Christian Rice
, From: ceph-users on behalf of Christian Rice Sent: Wednesday, February 27, 2019 7:05 PM To: ceph-users Subject: [ceph-users] radosgw sync falling behind regularly Debian 9; ceph 12.8.8-bpo90+1; no rbd or cephfs, just radosgw; three clusters in one zonegroup

Re: [ceph-users] radosgw sync falling behind regularly

2019-02-27 Thread Matthew H
, From: ceph-users on behalf of Christian Rice Sent: Wednesday, February 27, 2019 7:05 PM To: ceph-users Subject: [ceph-users] radosgw sync falling behind regularly Debian 9; ceph 12.8.8-bpo90+1; no rbd or cephfs, just radosgw; three clusters in one zonegroup. Often we find either metadata

[ceph-users] radosgw sync falling behind regularly

2019-02-27 Thread Christian Rice
Debian 9; ceph 12.8.8-bpo90+1; no rbd or cephfs, just radosgw; three clusters in one zonegroup. Often we find either metadata or data sync behind, and it doesn’t look to ever recover until…we restart the endpoint radosgw target service. eg at 15:45:40: dc11-ceph-rgw1:/var/log/ceph#

Re: [ceph-users] radosgw-admin reshard stale-instances rm experience

2019-02-26 Thread Wido den Hollander
On 2/21/19 9:19 PM, Paul Emmerich wrote: > On Thu, Feb 21, 2019 at 4:05 PM Wido den Hollander wrote: >> This isn't available in 13.2.4, but should be in 13.2.5, so on Mimic you >> will need to wait. But this might bite you at some point. > > Unfortunately it hasn't been backported to Mimic: >

Re: [ceph-users] radosgw-admin reshard stale-instances rm experience

2019-02-21 Thread Konstantin Shalygin
My advise: Upgrade to 12.2.11 and run the stale-instances list asap and see if you need to rm data. This isn't available in 13.2.4, but should be in 13.2.5, so on Mimic you will need to wait. But this might bite you at some point. I hope I can prevent some admins from having sleepless nights

Re: [ceph-users] radosgw-admin reshard stale-instances rm experience

2019-02-21 Thread Paul Emmerich
On Thu, Feb 21, 2019 at 4:05 PM Wido den Hollander wrote: > This isn't available in 13.2.4, but should be in 13.2.5, so on Mimic you > will need to wait. But this might bite you at some point. Unfortunately it hasn't been backported to Mimic: http://tracker.ceph.com/issues/37447 This is the

[ceph-users] radosgw-admin reshard stale-instances rm experience

2019-02-21 Thread Wido den Hollander
Hi, For the last few months I've been getting questions from people seeing warnings about large OMAP objects after scrubs. I've been digging for a few months (you'll also find multiple threads about this) and it all seemed to trace back to RGW indexes. Resharding didn't clean up old indexes

Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-27 Thread Marc Roos
: The Exoteric Order of the Squid Cybernetic Subject: Re: [ceph-users] Radosgw s3 subuser permissions On 24/01/2019, Marc Roos wrote: > > > This should do it sort of. > > { > "Id": "Policy1548367105316", > "Version": "2012-10-17", >

Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-25 Thread Adam C. Emerson
On 24/01/2019, Marc Roos wrote: > > > This should do it sort of. > > { > "Id": "Policy1548367105316", > "Version": "2012-10-17", > "Statement": [ > { > "Sid": "Stmt1548367099807", > "Effect": "Allow", > "Action": "s3:ListBucket", > "Principal": { "AWS":

Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-24 Thread Marc Roos
"Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ], "Principal": { "AWS": "arn:aws:iam::Company:user/testuser" }, "Resource": "arn:aws:s3:::archive/folder2/*"

Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-24 Thread Matt Benjamin
Hi Marc, I'm not actually certain whether the traditional ACLs permit any solution for that, but I believe with bucket policy, you can achieve precise control within and across tenants, for any set of desired resources (buckets). Matt On Thu, Jan 24, 2019 at 3:18 PM Marc Roos wrote: > > > It
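For reference, a complete bucket policy of the shape being quoted in this thread would look roughly like this (bucket name, user ARN, and prefix are the thread's own examples, not a prescription):

```json
{
  "Id": "Policy1548367105316",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1548367099807",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Principal": {"AWS": ["arn:aws:iam::Company:user/testuser"]},
      "Resource": ["arn:aws:s3:::archive/folder2/*"]
    }
  ]
}
```

Note that s3:ListBucket is a bucket-level action, so strictly it targets the bucket ARN (arn:aws:s3:::archive); splitting object-level and bucket-level actions into two statements is the cleaner form.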

[ceph-users] Radosgw s3 subuser permissions

2019-01-24 Thread Marc Roos
Is it correct that it is NOT possible for s3 subusers to have different permissions on folders created by the parent account? Thus the --access=[ read | write | readwrite | full ] applies to everything the parent has created, and it is not possible to change that for specific folders/buckets?

[ceph-users] RadosGW replication and failover issues

2019-01-22 Thread Rom Freiman
Hi, We are running the following radosgw (luminous 12.2.8) replication scenario. 1) We have 2 clusters, each running a radosgw; Cluster1 is defined as master, and Cluster2 as slave. 2) We create a number of buckets with objects via master and slave. 3) We shut down Cluster1. 4) We execute failover

[ceph-users] RadosGW replication and failover issues

2019-01-21 Thread Ronnie Lazar
Hi, We are running the following radosgw (luminous 12.2.8) replication scenario. 1) We have 2 clusters, each running a radosgw; Cluster1 is defined as master, and Cluster2 as slave. 2) We create a number of buckets with objects via master and slave. 3) We shut down Cluster1. 4) We execute failover

[ceph-users] Radosgw cannot create pool

2019-01-17 Thread Jan Kasprzak
Hello, Ceph users, TL;DR: radosgw fails on me with the following message: 2019-01-17 09:34:45.247721 7f52722b3dc0 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g.

Re: [ceph-users] radosgw-admin unable to store user information

2019-01-02 Thread Casey Bodley
On 12/26/18 4:58 PM, Dilip Renkila wrote: Hi all, Some useful information >> What do the following return? >> $ radosgw-admin zone get root@ctrl1:~# radosgw-admin zone get { "id": "8bfdf8a3-c165-44e9-9ed6-deff8a5d852f", "name": "default", "domain_root":

Re: [ceph-users] radosgw-admin unable to store user information

2018-12-26 Thread Dilip Renkila
Hi all, Some useful information >> What do the following return? >> $ radosgw-admin zone get root@ctrl1:~# radosgw-admin zone get { "id": "8bfdf8a3-c165-44e9-9ed6-deff8a5d852f", "name": "default", "domain_root": "default.rgw.meta:root",

[ceph-users] radosgw-admin unable to store user information

2018-12-26 Thread Dilip Renkila
Hi all, I have a ceph radosgw deployment as an openstack swift backend with multitenancy enabled in rgw. I can create containers and store data through the swift api. I am trying to retrieve user data for a user with the radosgw-admin cli tool. I am only able to get the admin user's info, but no one else's. $

Re: [ceph-users] radosgw, Keystone integration, and the S3 API

2018-11-22 Thread Florian Haas
On 19/11/2018 16:23, Florian Haas wrote: > Hi everyone, > > I've recently started a documentation patch to better explain Swift > compatibility and OpenStack integration for radosgw; a WIP PR is at > https://github.com/ceph/ceph/pull/25056/. I have, however, run into an > issue that I would

[ceph-users] radosgw, Keystone integration, and the S3 API

2018-11-19 Thread Florian Haas
Hi everyone, I've recently started a documentation patch to better explain Swift compatibility and OpenStack integration for radosgw; a WIP PR is at https://github.com/ceph/ceph/pull/25056/. I have, however, run into an issue that I would really *like* to document, except I don't know whether

Re: [ceph-users] radosgw s3 bucket acls

2018-10-19 Thread Niels Denissen
Hi, I’m currently running into a similar problem. My goal is to ensure all S3 users are able to list any buckets/objects that are available within ceph. Haven’t found a way around that yet, I indeed found also that linking buckets to users allows them to list anything, but only for the user the

Re: [ceph-users] Radosgw index has been inconsistent with reality

2018-10-18 Thread Yang Yang
Hmm, it's useful to rebuild the index by rewriting an object. But first, I need to know all the keys of the objects. If I want to know all keys, I need list_objects ... Maybe I can make a union set of instances, then copy all of them onto themselves. Anyway, I want to find out more about why it

Re: [ceph-users] Radosgw index has been inconsistent with reality

2018-10-18 Thread Yehuda Sadeh-Weinraub
On Wed, Oct 17, 2018 at 1:14 AM Yang Yang wrote: > > Hi, > A few weeks ago I found radosgw index has been inconsistent with reality. > Some object I can not list, but I can get them by key. Please see the details > below: > > BACKGROUND: > Ceph version 12.2.4

[ceph-users] RadosGW multipart completion is already in progress

2018-10-18 Thread Yang Yang
Hi, I copy some big files to radosgw with awscli. But I found some copy will failed, like : * aws s3 --endpoint=XXX cp ./bigfile s3://mybucket/bigfile* *upload failed: ./bigfile to s3://mybucket/bigfile An error occurred (InternalError) when calling the CompleteMultipartUpload operation
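When the "completion is already in progress" InternalError is transient (the gateway is still finishing the same completion), a common client-side workaround is to retry the final call with backoff. A minimal sketch of that pattern, with a stand-in function in place of the real CompleteMultipartUpload call:

```python
import time

def retry_transient(fn, attempts=5, base_delay=0.1, transient=(RuntimeError,)):
    """Call fn(), retrying with exponential backoff on transient errors.

    In the real case fn would issue CompleteMultipartUpload and
    `transient` would match the 500/InternalError the gateway returns.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except transient:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stand-in that fails twice, then succeeds on the third call:
calls = {"n": 0}
def flaky_complete():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("InternalError: multipart completion is already in progress")
    return "ETag-abc123"

print(retry_transient(flaky_complete))
```

Whether the retried completion then succeeds depends on whether the first attempt actually finished server-side, which is what this thread is trying to pin down.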

[ceph-users] Radosgw index has been inconsistent with reality

2018-10-17 Thread Yang Yang
Hi, A few weeks ago I found radosgw index has been inconsistent with reality. Some object I can not list, but I can get them by key. Please see the details below: *BACKGROUND:* Ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable) Index pool is on ssd.

[ceph-users] radosgw lifecycle not removing delete markers

2018-10-15 Thread Sean Purdy
Hi, Versions 12.2.7 and 12.2.8. I've set up a bucket with versioning enabled and upload a lifecycle configuration. I upload some files and delete them, inserting delete markers. The configured lifecycle DOES remove the deleted binaries (non current versions). The lifecycle DOES NOT remove
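In S3 lifecycle semantics, orphaned delete markers are removed only when a rule also sets ExpiredObjectDeleteMarker on the Expiration action; NoncurrentVersionExpiration alone never touches them. A hedged sketch of a configuration that requests both (whether a given radosgw version honors the flag is exactly what this thread is probing):

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-and-orphan-markers",
      "Filter": {"Prefix": ""},
      "Status": "Enabled",
      "NoncurrentVersionExpiration": {"NoncurrentDays": 7},
      "Expiration": {"ExpiredObjectDeleteMarker": true}
    }
  ]
}
```

Uploaded with `aws s3api put-bucket-lifecycle-configuration --bucket mybucket --lifecycle-configuration file://lc.json` (bucket name hypothetical).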

Re: [ceph-users] radosgw bucket stats vs s3cmd du

2018-10-09 Thread David Turner
Have you looked at your Garbage Collection. I would guess that your GC is behind and that radosgw-admin is accounting for that space knowing that it hasn't been freed up yet, whiles 3cmd doesn't see it since it no longer shows in the listing. On Tue, Sep 18, 2018 at 4:45 AM Luis Periquito
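A quick way to test the GC-backlog theory is to look at the pending GC queue directly. Dry-run sketch (commands are printed rather than executed):

```shell
# Dry-run sketch (echo prints the commands; drop echo to run for real).
# If bucket stats report more space than the listing accounts for,
# check whether garbage collection has a backlog and optionally drain it.
echo radosgw-admin gc list --include-all  # entries still awaiting deletion
echo radosgw-admin gc process             # run a collection cycle now
```

A long `gc list --include-all` output would confirm that the space counted by `radosgw-admin bucket stats` simply has not been freed yet.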

Re: [ceph-users] radosgw rest API to retrive rgw log entries

2018-09-23 Thread Robin H. Johnson
On Fri, Sep 21, 2018 at 04:17:35PM -0400, Jin Mao wrote: > I am looking for an API equivalent of 'radosgw-admin log list' and > 'radosgw-admin log show'. Existing /usage API only reports bucket level > numbers like 'radosgw-admin usage show' does. Does anyone know if this is > possible from rest

[ceph-users] radosgw rest API to retrive rgw log entries

2018-09-21 Thread Jin Mao
I am looking for an API equivalent of 'radosgw-admin log list' and 'radosgw-admin log show'. Existing /usage API only reports bucket level numbers like 'radosgw-admin usage show' does. Does anyone know if this is possible from rest API? Thanks. Jin.
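The admin ops API is authenticated the same way as the S3 API, so any request against an /admin resource needs an AWS v2 signature. A pure-stdlib sketch of that signing step — the `/admin/log` resource and the admin caps it requires are assumptions here, and whether it exposes everything `radosgw-admin log show` prints is version-dependent:

```python
import base64
import hmac
from hashlib import sha1
from email.utils import formatdate

def sign_v2(secret, verb, resource, date, content_md5="", content_type="",
            amz_headers=""):
    # AWS signature v2: base64(HMAC-SHA1(secret, StringToSign)), where
    # StringToSign = verb \n md5 \n type \n date \n amz-headers + resource.
    string_to_sign = (f"{verb}\n{content_md5}\n{content_type}\n{date}\n"
                      f"{amz_headers}{resource}")
    mac = hmac.new(secret.encode(), string_to_sign.encode(), sha1)
    return base64.b64encode(mac.digest()).decode()

# Hypothetical keys; the calling rgw user also needs the matching admin
# caps (granted via "radosgw-admin caps add").
date = formatdate(usegmt=True)
signature = sign_v2("SECRET_KEY", "GET", "/admin/log", date)
print("GET /admin/log HTTP/1.1")
print(f"Date: {date}")
print(f"Authorization: AWS ACCESS_KEY:{signature}")
```

The headers printed at the end are what you would attach to the HTTP request against the gateway.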

[ceph-users] radosgw bucket stats vs s3cmd du

2018-09-18 Thread Luis Periquito
Hi all, I have a couple of very big s3 buckets that store temporary data. We keep writing to the buckets some files which are then read and deleted. They serve as a temporary storage. We're writing (and deleting) circa 1TB of data daily in each of those buckets, and their size has been mostly

Re: [ceph-users] radosgw: need couple of blind (indexless) buckets, how-to?

2018-08-25 Thread Konstantin Shalygin
Thank you very much! If anyone would like to help update these docs, I would be happy to help with guidance/review. I was make a try half year ago - http://tracker.ceph.com/issues/23081 k ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] radosgw: need couple of blind (indexless) buckets, how-to?

2018-08-24 Thread Casey Bodley
On 08/24/2018 06:44 AM, Konstantin Shalygin wrote: Answer to myself. radosgw-admin realm create --rgw-realm=default --default radosgw-admin zonegroup modify --rgw-zonegroup=default --rgw-realm=default radosgw-admin period update --commit radosgw-admin zonegroup placement add

Re: [ceph-users] radosgw: need couple of blind (indexless) buckets, how-to?

2018-08-24 Thread Konstantin Shalygin
Answer to myself. radosgw-admin realm create --rgw-realm=default --default radosgw-admin zonegroup modify --rgw-zonegroup=default --rgw-realm=default radosgw-admin period update --commit radosgw-admin zonegroup placement add --rgw-zonegroup="default" \   --placement-id="indexless-placement"
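Putting the pieces above together, a dry-run sketch of the whole sequence (commands are printed rather than executed; pool names are hypothetical, and `--placement-index-type=indexless` is the flag that makes the target indexless):

```shell
# Dry-run sketch (echo prints the commands; drop echo to run for real).
echo radosgw-admin zonegroup placement add --rgw-zonegroup=default \
     --placement-id=indexless-placement
echo radosgw-admin zone placement add --rgw-zone=default \
     --placement-id=indexless-placement \
     --data-pool=default.rgw.buckets.data \
     --index-pool=default.rgw.buckets.index \
     --data-extra-pool=default.rgw.buckets.non-ec \
     --placement-index-type=indexless
echo radosgw-admin period update --commit
```

Buckets created under this placement target will not be listable, which is the expected trade-off of going indexless.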

[ceph-users] radosgw: need couple of blind (indexless) buckets, how-to?

2018-08-23 Thread Konstantin Shalygin
I need bucket without index for 5000 objects, how to properly create a indexless bucket in next to indexed buckets? This is "default radosgw" Luminous instance. I was take a look to cli, as far as I understand I will need to create placement rule via "zone placement add" and add this key
