Re: [ceph-users] Cosbench with ceph s3

2016-06-20 Thread Jaroslaw Owsiewski
Hi,

attached.

Regards,
-- 
Jarek

-- 
Jarosław Owsiewski

2016-06-20 11:01 GMT+02:00 Kanchana. P :

> Hi,
>
> Does anyone have a working configuration of Ceph S3 to run with the
> cosbench tool?
>
> Thanks,
> Kanchana.
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>



[Attachment: a cosbench XML workload definition; the archive stripped the markup, leaving only the repeated references to the S3 endpoint http://s3.domain.]
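
For reference, a minimal cosbench S3 workload has roughly the following shape
(a sketch only, not the original attachment - the access key, secret key,
bucket prefix, object sizes and worker counts are placeholders; only the
endpoint http://s3.domain comes from the original mail):

<?xml version="1.0" encoding="UTF-8"?>
<workload name="s3-sample" description="sample S3 benchmark against radosgw">
  <storage type="s3" config="accesskey=KEY;secretkey=SECRET;endpoint=http://s3.domain"/>
  <workflow>
    <workstage name="init">
      <work type="init" workers="1" config="cprefix=cosbench;containers=r(1,2)"/>
    </workstage>
    <workstage name="prepare">
      <work type="prepare" workers="1" config="cprefix=cosbench;containers=r(1,2);objects=r(1,100);sizes=c(64)KB"/>
    </workstage>
    <workstage name="main">
      <work name="main" workers="8" runtime="300">
        <operation type="read" ratio="80" config="cprefix=cosbench;containers=u(1,2);objects=u(1,100)"/>
        <operation type="write" ratio="20" config="cprefix=cosbench;containers=u(1,2);objects=u(101,200);sizes=c(64)KB"/>
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="1" config="cprefix=cosbench;containers=r(1,2);objects=r(1,200)"/>
    </workstage>
    <workstage name="dispose">
      <work type="dispose" workers="1" config="cprefix=cosbench;containers=r(1,2)"/>
    </workstage>
  </workflow>
</workload>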





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: Increasing time to save RGW objects

2016-02-09 Thread Jaroslaw Owsiewski
FYI
-- 
Jarek

-- Forwarded message --
From: Jaroslaw Owsiewski <jaroslaw.owsiew...@allegrogroup.com>
Date: 2016-02-09 12:00 GMT+01:00
Subject: Re: [ceph-users] Increasing time to save RGW objects
To: Wade Holler <wade.hol...@gmail.com>


Hi,

For example:

# ceph --admin-daemon=ceph-osd.98.asok perf dump

generally:

ceph --admin-daemon=/path/to/osd.asok help
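
A couple of concrete invocations (just a sketch - socket names vary; list
/var/run/ceph/ on the node for the exact .asok file names):

# OSD side: performance counters and the most recent slow operations
ceph --admin-daemon=/var/run/ceph/ceph-osd.98.asok perf dump
ceph --admin-daemon=/var/run/ceph/ceph-osd.98.asok dump_historic_ops

# RGW side: the gateway exposes the same interface through its own socket
ceph --admin-daemon=/var/run/ceph/ceph-client.rgw.gateway.asok perf dump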

Best Regards

-- 
Jarek


2016-02-09 11:21 GMT+01:00 Wade Holler <wade.hol...@gmail.com>:

> Hi there,
>
> What is the best way to "look at the rgw admin socket" to see which
> operations are taking a long time?
>
> Best Regards
> Wade
>
> On Mon, Feb 8, 2016 at 12:16 PM Gregory Farnum <gfar...@redhat.com> wrote:
>
>> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka <ju...@ejurka.com> wrote:
>> >
>> > I've been testing the performance of ceph by storing objects through
>> RGW.
>> > This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW
>> > instances.  Initially the storage time was holding reasonably steady,
>> but it
>> > has started to rise recently as shown in the attached chart.
>> >
>> > The test repeatedly saves 100k objects of 55 kB size using multiple
>> threads
>> > (50) against multiple RGW gateways (4).  It uses a sequential
>> identifier as
>> > the object key and shards the bucket name using id % 100.  The buckets
>> have
>> > index sharding enabled with 64 index shards per bucket.
>> >
>> > ceph status doesn't appear to show any issues.  Is there something I
>> should
>> > be looking at here?
>> >
>> >
>> > # ceph status
>> >     cluster 3fc86d01-cf9c-4bed-b130-7a53d7997964
>> >      health HEALTH_OK
>> >      monmap e2: 5 mons at {condor=192.168.188.90:6789/0,duck=192.168.188.140:6789/0,eagle=192.168.188.100:6789/0,falcon=192.168.188.110:6789/0,shark=192.168.188.118:6789/0}
>> >             election epoch 18, quorum 0,1,2,3,4 condor,eagle,falcon,shark,duck
>> >      osdmap e674: 40 osds: 40 up, 40 in
>> >       pgmap v258756: 3128 pgs, 10 pools, 1392 GB data, 27282 kobjects
>> >             4784 GB used, 69499 GB / 74284 GB avail
>> >                 3128 active+clean
>> >   client io 268 kB/s rd, 1100 kB/s wr, 493 op/s
>>
>> It's probably a combination of your bucket indices getting larger and
>> your PGs getting split into subfolders on the OSDs. If you keep
>> running tests and things get slower it's the first; if they speed
>> partway back up again it's the latter.
>> Other things to check:
>> * you can look at your OSD stores and how the object files are divvied up.
>> * you can look at the rgw admin socket and/or logs to see what
>> operations are the ones taking time
>> * you can check the dump_historic_ops on the OSDs to see if there are
>> any notably slow ops
>> -Greg
>>
>> >
>> >
>> > Kris Jurka
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cannot change the gateway port (civetweb)

2016-02-17 Thread Jaroslaw Owsiewski
Probably this is the reason:

https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html
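
Two common ways around it (a sketch only - paths and packaging details may
differ on your distro):

# 1) keep civetweb on an unprivileged port and put a load balancer or a
#    DNAT rule in front of it:
[client.rgw.gateway]
rgw_frontends = "civetweb port=8080"

# 2) or allow the radosgw binary to bind privileged ports even though it
#    drops to the ceph user (assumes the binary lives at /usr/bin/radosgw):
setcap 'cap_net_bind_service=+ep' /usr/bin/radosgw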

Regards,
-- 
Jarosław Owsiewski

2016-02-17 15:28 GMT+01:00 Alexandr Porunov :

> Hello,
>
> I have a problem changing the port on the rados gateway node.
> I don't know why, but I cannot change the listening port of civetweb.
>
> My steps to install radosgw:
> *ceph-deploy install --rgw gateway*
> *ceph-deploy admin gateway*
> *ceph-deploy rgw create gateway*
>
> (gateway starts on port 7480 as expected)
>
> To change the port I add the following lines to ceph.conf:
> *[client.rgw.gateway]*
> *rgw_frontends = "civetweb port=80"*
>
> then I update it on all nodes:
> *ceph-deploy --overwrite-conf config push admin-node node1 node2 node3
> gateway*
>
> After this I try to restart rados gateway:
> *systemctl restart ceph-radosgw@rgw.gateway*
>
> But after restart it doesn't work and
> /var/log/ceph/ceph-client.rgw.gateway.log shows this:
> 2016-02-17 16:06:03.766890 7f3a9215d880  0 set uid:gid to 167:167
> 2016-02-17 16:06:03.766976 7f3a9215d880  0 ceph version 9.2.0
> (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299), process radosgw, pid 2810
> 2016-02-17 16:06:03.859469 7f3a9215d880  0 framework: civetweb
> 2016-02-17 16:06:03.859480 7f3a9215d880  0 framework conf key: port, val:
> 80
> 2016-02-17 16:06:03.859488 7f3a9215d880  0 starting handler: civetweb
> 2016-02-17 16:06:03.859534 7f3a9215d880  0 civetweb: 0x7f3a92846b00:
> set_ports_option: cannot bind to 80: 13 (Permission denied)
> 2016-02-17 16:06:03.876508 7f3a5f7fe700  0 ERROR: can't read user header:
> ret=-2
> 2016-02-17 16:06:03.876516 7f3a5f7fe700  0 ERROR: sync_user() failed,
> user=alex ret=-2
>
> I have added port 80 to iptables and there are no other firewalls on the nodes.
>
> Please help me change the port.
>
> Regards, Alexandr
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow requests on cluster.

2016-07-14 Thread Jaroslaw Owsiewski
2016-07-14 15:26 GMT+02:00 Luis Periquito :

> Hi Jaroslaw,
>
> several things spring to mind. I'm assuming the cluster is
> healthy (other than the slow requests), right?
>
>
Yes.



> From the (little) information you sent it seems the pools are
> replicated with size 3, is that correct?
>
>
True.


> Are there any long running delete processes? They usually have a
> negative impact on performance, especially as they don't really show up
> in the IOPS statistics.
>

During normal throughput we have a small amount of deletes.


> I've also seen something like this happen when there's a slow disk/OSD. You
> can try to check with "ceph osd perf" and look for higher numbers.
> Usually restarting that OSD brings back the cluster to life, if that's
> the issue.
>

I will check this.
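
For reference, a quick way to spot an outlier OSD (just a sketch; on Hammer
"ceph osd perf" prints per-OSD commit and apply latencies in ms):

ceph osd perf | sort -n -k3 | tail -20   # worst apply latencies at the bottom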



> If nothing shows, try a "ceph tell osd.* version"; if there's a
> misbehaving OSD they usually don't respond to the command (slow or
> even timing out).
>
> You also don't say how many scrub/deep-scrub processes are
> running. If not properly handled they are also a performance killer.
>
>
Scrub/deep-scrub processes are disabled


> Last, but by far not least, have you ever thought of creating an SSD
> pool (even small) and move all pools but .rgw.buckets there? The other
> ones are small enough, but enjoy having their own "reserved" osds...
>
>
>

This is an idea we had some time ago; we will try it.
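
If we go that route, the rough shape would be something like this (only a
sketch, assuming a separate "ssd" root already exists in the CRUSH map; the
ruleset id is whatever "crush rule dump" reports):

ceph osd crush rule create-simple ssd-rule ssd host
ceph osd crush rule dump ssd-rule                        # note the ruleset id
ceph osd pool set .rgw.buckets.index crush_ruleset <id>  # repeat for the other small pools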

One important thing:

sysop@s41617:~/bin$ ceph osd pool get .rgw.buckets pg_num
pg_num: 4470
sysop@s41617:~/bin$ ceph osd pool get .rgw.buckets.index pg_num
pg_num: 2048

Could this be the main problem?


Regards
-- 
Jarek
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow requests on cluster.

2016-07-14 Thread Jaroslaw Owsiewski
I think the first symptoms of our problems occurred when we posted this
issue:

http://tracker.ceph.com/issues/15727

Regards
-- 
Jarek

-- 
Jarosław Owsiewski

2016-07-14 15:43 GMT+02:00 Jaroslaw Owsiewski <
jaroslaw.owsiew...@allegrogroup.com>:

> 2016-07-14 15:26 GMT+02:00 Luis Periquito <periqu...@gmail.com>:
>
>> Hi Jaroslaw,
>>
>> several things spring to mind. I'm assuming the cluster is
>> healthy (other than the slow requests), right?
>>
>>
> Yes.
>
>
>
>> From the (little) information you sent it seems the pools are
>> replicated with size 3, is that correct?
>>
>>
> True.
>
>
>> Are there any long running delete processes? They usually have a
>> negative impact on performance, specially as they don't really show up
>> in the IOPS statistics.
>>
>
> During normal throughput we have a small amount of deletes.
>
>
>> I've also seen something like this happen when there's a slow disk/OSD. You
>> can try to check with "ceph osd perf" and look for higher numbers.
>> Usually restarting that OSD brings back the cluster to life, if that's
>> the issue.
>>
>
> I will check this.
>
>
>
>> If nothing shows, try a "ceph tell osd.* version"; if there's a
>> misbehaving OSD they usually don't respond to the command (slow or
>> even timing out).
>>
>> You also don't say how many scrub/deep-scrub processes are
>> running. If not properly handled they are also a performance killer.
>>
>>
> Scrub/deep-scrub processes are disabled
>
>
>> Last, but by far not least, have you ever thought of creating an SSD
>> pool (even small) and move all pools but .rgw.buckets there? The other
>> ones are small enough, but enjoy having their own "reserved" osds...
>>
>>
>>
>
> This is an idea we had some time ago; we will try it.
>
> One important thing:
>
> sysop@s41617:~/bin$ ceph osd pool get .rgw.buckets pg_num
> pg_num: 4470
> sysop@s41617:~/bin$ ceph osd pool get .rgw.buckets.index pg_num
> pg_num: 2048
>
> Could this be the main problem?
>
>
> Regards
> --
> Jarek
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Slow requests on cluster.

2016-07-14 Thread Jaroslaw Owsiewski
Hi,

we have a problem with performance slowing down drastically on a cluster. We
use radosgw with the S3 protocol. Our configuration:

153 SAS 1.2 TB OSDs with journals on SSD disks (ratio 4:1)
- no problems with networking, no hardware issues, etc.

Output from "ceph df":

GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED
    166T   129T    38347G     22.44
POOLS:
    NAME                 ID   USED     %USED   MAX AVAIL   OBJECTS
    .rgw                  9   70330k       0      39879G      393178
    .rgw.root            10      848       0      39879G           3
    .rgw.control         11        0       0      39879G           8
    .rgw.gc              12        0       0      39879G          32
    .rgw.buckets         13   10007G    5.86      39879G   331079052
    .rgw.buckets.index   14        0       0      39879G     2994652
    .rgw.buckets.extra   15        0       0      39879G           2
    .log                 16     475M       0      39879G         408
    .intent-log          17        0       0      39879G           0
    .users               19      729       0      39879G          49
    .users.email         20      414       0      39879G          26
    .users.swift         21        0       0      39879G           0
    .users.uid           22    17170       0      39879G          89

Problems began last Saturday.
Throughput was 400k requests per hour - mostly PUTs and HEADs of ~100 kB objects.

The Ceph version is Hammer.


We have two clusters with a similar configuration and both experienced the
same problems at once.

Any hints?


Latest output from "ceph -w":

2016-07-14 14:43:16.197131 osd.26 [WRN] 17 slow requests, 16 included
below; oldest blocked for > 34.766976 secs
2016-07-14 14:43:16.197138 osd.26 [WRN] slow request 32.99 seconds old,
received at 2016-07-14 14:42:43.641440: osd_op(client.75866283.0:20130084
.dir.default.75866283.65796.3 [delete] 14.122252f4
ondisk+write+known_if_redirected e18788) currently commit_sent
2016-07-14 14:43:16.197145 osd.26 [WRN] slow request 32.536551 seconds old,
received at 2016-07-14 14:42:43.660487: osd_op(client.75866283.0:20130121
.dir.default.75866283.65799.6 [delete] 14.d2dc1672
ondisk+write+known_if_redirected e18788) currently commit_sent
2016-07-14 14:43:16.197153 osd.26 [WRN] slow request 30.971549 seconds old,
received at 2016-07-14 14:42:45.225490: osd_op(client.75866283.0:20132345
gc.12 [call rgw.gc_set_entry] 12.a45046b8
ack+ondisk+write+known_if_redirected e18788) currently waiting for rw locks
2016-07-14 14:43:16.197158 osd.26 [WRN] slow request 30.967568 seconds old,
received at 2016-07-14 14:42:45.229471: osd_op(client.76495939.0:20147494
gc.12 [call rgw.gc_set_entry] 12.a45046b8
ack+ondisk+write+known_if_redirected e18788) currently waiting for rw locks
2016-07-14 14:43:16.197162 osd.26 [WRN] slow request 32.253169 seconds old,
received at 2016-07-14 14:42:43.943870: osd_op(client.75866283.0:20130663
.dir.default.75866283.65805.7 [delete] 14.2b5a1672
ondisk+write+known_if_redirected e18788) currently commit_sent
2016-07-14 14:43:17.197429 osd.26 [WRN] 3 slow requests, 2 included below;
oldest blocked for > 31.967882 secs
2016-07-14 14:43:17.197434 osd.26 [WRN] slow request 31.579897 seconds old,
received at 2016-07-14 14:42:45.617456: osd_op(client.76495939.0:20147877
gc.12 [call rgw.gc_set_entry] 12.a45046b8
ack+ondisk+write+known_if_redirected e18788) currently waiting for rw locks
2016-07-14 14:43:17.197439 osd.26 [WRN] slow request 30.897873 seconds old,
received at 2016-07-14 14:42:46.299480: osd_op(client.76495939.0:20148668
gc.12 [call rgw.gc_set_entry] 12.a45046b8
ack+ondisk+write+known_if_redirected e18788) currently waiting for rw locks


Regards
-- 
Jarosław Owsiewski
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph on different OS version

2016-09-23 Thread Jaroslaw Owsiewski
2016-09-22 16:20 GMT+02:00 Wido den Hollander :

>
> > On 22 September 2016 at 16:13, Matteo Dacrema wrote:
> >
> >
> > To be more precise, the nodes with a different OS are only the OSD nodes.
> >
>
> I haven't seen real issues, but a few which I could think of which
> *potentially* might be a problem:
>
> - Different tcmalloc version
> - Different libc versions
> - Different kernel behavior with TCP connections
>
> Now, again, I haven't seen any problems, but these are the ones I could
> think of.
>
> The best thing is to make sure all the Ceph versions are identical.
>
> However, I would recommend upgrading all machines to the same OS version,
> since it just makes administering them a bit easier.
>
>

We have a Ceph cluster with mixed tcmalloc and jemalloc OSDs running
different kernel versions. We don't see any problems. Throughput is quite
high - 400k PUTs per hour.

-- 
Jarek
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Radosgw pool creation (jewel / Ubuntu16.04)

2016-11-10 Thread Jaroslaw Owsiewski
https://www.suse.com/documentation/ses-3/book_storage_admin/data/ceph_rgw_manual.html
- this is an example of how documentation should look :-).
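
For the record, the gist of the manual approach on that page is plain pool
creation for the zone's pools, e.g. (pool names are the ones from the thread
below; the pg counts are placeholders to size for your own cluster):

for pool in default.rgw.control default.rgw.data.root default.rgw.gc \
            default.rgw.log default.rgw.users.uid; do
    ceph osd pool create "$pool" 64 64
done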

Regards

-- 
Jarek

-- 
Jarosław Owsiewski

2016-11-09 15:48 GMT+01:00 Matthew Vernon :

> Hi,
>
> I have a jewel/Ubuntu 16.04 ceph cluster. I attempted to add some
> radosgws, having already made the pools I thought they would need per
> http://docs.ceph.com/docs/jewel/radosgw/config-ref/#pools
>
> i.e. .rgw and so on:
> .rgw
> .rgw.control
> .rgw.gc
> .log
> .intent-log
> .usage
> .users
> .users.email
> .users.swift
> .users.uid
>
>
> But in fact, it's created a bunch of pools under default:
> default.rgw.control
> default.rgw.data.root
> default.rgw.gc
> default.rgw.log
> default.rgw.users.uid
>
> So, should I have created these pools instead, or is there some way to
> make radosgw do what I intended? Relatedly, is it going to create e.g.
> default.rgw.users.swift as and when I enable the swift gateway? [rather
> than .users.swift as the docs suggest]
>
> Thanks,
>
> Matthew
>
>
> --
>  The Wellcome Trust Sanger Institute is operated by Genome Research
>  Limited, a charity registered in England with number 1021457 and a
>  company registered in England with number 2742969, whose registered
>  office is 215 Euston Road, London, NW1 2BE.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW: how to get a list of defined radosgw users?

2017-08-01 Thread Jaroslaw Owsiewski
Hi,

$ radosgw-admin metadata list user
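
and, to inspect a single entry from that list (the uid below is just an
example):

$ radosgw-admin user info --uid=someuser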

-- 
Jarek

-- 
Jarosław Owsiewski

2017-08-01 9:52 GMT+02:00 Diedrich Ehlerding <
diedrich.ehlerd...@ts.fujitsu.com>:

> Hello,
>
> according to the manpages of radosgw-admin, it is possible to
> suspend, resume, create, and remove a single radosgw user, but I
> haven't yet found a method to see a list of all defined radosgw
> users. Is that possible, and if so, how?
>
> TIA,
> Diedrich
> --
> Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
> MIS ITST CE PS WST, Hildesheimer Str 25, D-30880 Laatzen
> Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
> Firmenangaben: http://de.ts.fujitsu.com/imprint.html
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Restart is required?

2017-11-16 Thread Jaroslaw Owsiewski
Thanks for your reply and the information. Yes, we are using filestore. Will
it still work in Luminous?

http://docs.ceph.com/docs/master/rados/configuration/filestore-config-ref/ :

"filestore merge threshold

Description: Min number of files in a subdir before merging into parent
NOTE: A negative value means to disable subdir merging"

Will a setting like "filestore_merge_threshold = -50" (negative value) still
work? (In Jewel it worked like a charm.)
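
For reference, this is how the two settings under discussion would sit in
ceph.conf (a sketch; the values are simply the ones mentioned in this thread):

[osd]
filestore_merge_threshold = -50    # negative value disables subdir merging
filestore_split_multiple = 24      # read at OSD start, hence the restart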

Regards

-- 

Jarek

-- 
Jarosław Owsiewski

2017-11-16 16:47 GMT+01:00 David Turner <drakonst...@gmail.com>:

> The filestore_split_multiple setting does indeed need a restart of the OSD
> daemon to take effect.  Same with the filestore_merge_threshold.  These
> settings also only affect filestore.  If you're using bluestore, then they
> don't mean anything.
>
> You can utilize the ceph-objectstore-tool to split subfolders while the
> OSD is offline as well.  I use the following command to split the
> subfolders on my clusters while the OSDs on a node are offline.  It should
> grab all of your OSDs on a node and split their subfolders to match the
> settings in the ceph.conf file.  Make sure you understand what the commands
> do and test them in your environment before running this (or your
> personalized version of it).
>
>
> ceph osd set noout
> sudo systemctl stop ceph-osd.target
> for osd in $(mount | grep -Eo ceph-[0-9]+ | cut -d- -f2 | sort -nu); do
>   for run_in_background in true; do
>     echo "Starting osd $osd"
>     sudo -u ceph ceph-osd --flush-journal -i=${osd}
>     for pool in $(ceph osd lspools | gawk 'BEGIN {RS=","} {print $2}'); do
>       sudo -u ceph ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${osd} \
>         --journal-path /var/lib/ceph/osd/ceph-${osd}/journal \
>         --log-file=/var/log/ceph/objectstore_tool.${osd}.log \
>         --op apply-layout-settings \
>         --pool $pool \
>         --debug
>     done
>     echo "Finished osd.${osd}"
>     sudo systemctl start ceph-osd@$osd.service
>   done &
> done
> wait
> sudo systemctl start ceph-osd.target
>
> On Thu, Nov 16, 2017 at 9:19 AM Piotr Dałek <piotr.da...@corp.ovh.com>
> wrote:
>
>> On 17-11-16 02:44 PM, Jaroslaw Owsiewski wrote:
>> > Hi,
>> >
>> > What exactly does this message mean:
>> >
>> > filestore_split_multiple = '24' (not observed, change may require restart)
>> >
>> > This happened after the command:
>> >
>> > # ceph tell osd.0 injectargs '--filestore-split-multiple 24'
>>
>> It means that "filestore split multiple" is not observed for runtime
>> changes, meaning that the new value will be stored in the osd.0 process
>> memory, but not used at all.
>>
>> > Do I really need to restart the OSD to make the change take effect?
>> >
>> > ceph version 12.2.1 () luminous (stable)
>>
>> Yes.
>>
>> --
>> Piotr Dałek
>> piotr.da...@corp.ovh.com
>> https://www.ovh.com/us/
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Restart is required?

2017-11-16 Thread Jaroslaw Owsiewski
Hi,

What exactly does this message mean:

filestore_split_multiple = '24' (not observed, change may require restart)

This happened after the command:

# ceph tell osd.0 injectargs '--filestore-split-multiple 24'


Do I really need to restart the OSD to make the change take effect?

ceph version 12.2.1 () luminous (stable)

Regards
-- 
Jarosław Owsiewski
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Log entries from RGW.

2017-11-12 Thread Jaroslaw Owsiewski
http://tracker.ceph.com/issues/22015 - is anyone else seeing this issue?

Regards
-- 
Jarek
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Buckets Backup

2019-09-26 Thread Jaroslaw Owsiewski
Hi,

rclone can be your friend: https://rclone.org/
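
A minimal sketch (the remote names "cephrgw" and "glacier" are placeholders
you would define in rclone.conf, and the bucket names are examples):

rclone sync cephrgw:my-bucket glacier:my-bucket-backup --progress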

Regards,
--
Jarek

Thu, 26 Sep 2019 at 14:55, CUZA Frédéric wrote:

> Hi everyone,
>
> Has anyone ever made a backup of a Ceph bucket into Amazon Glacier?
>
> If so, did you use a script that uses the API to “migrate” the objects?
>
>
>
> If no one uses Amazon S3, how did you make those backups?
>
>
>
> Thanks in advance.
>
>
>
> Regards,
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com