[ceph-users] Problems removing buckets with --bypass-gc

2017-10-31 Thread Bryan Stillwell
As mentioned in another thread, I'm trying to remove several thousand buckets on a Hammer cluster (0.94.10), but I'm running into a problem using --bypass-gc. I usually see either this error: # radosgw-admin bucket rm --bucket=sg2pl598 --purge-objects --bypass-gc 2017-10-31 09:21:04.111599
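For bulk removal, the per-bucket command above is typically looped over the bucket list. A sketch, not from the original thread: the jq parsing of `radosgw-admin bucket list` output is an assumption, and --bypass-gc skips the garbage-collection bookkeeping, so use it with care:

```shell
# Hypothetical bulk-removal loop; assumes `radosgw-admin bucket list`
# emits a JSON array of bucket names and that jq is installed.
for b in $(radosgw-admin bucket list | jq -r '.[]'); do
    radosgw-admin bucket rm --bucket="$b" --purge-objects --bypass-gc \
        || echo "failed to remove bucket: $b" >&2
done
```

Logging failures instead of aborting lets the loop continue past individual buckets that hit the error described above.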

[ceph-users] Luminous on Debian 9 Timeout During create-initial

2017-10-31 Thread Tyn Li
Hello, I am having trouble setting up a cluster using Ceph Luminous (version 12.2.1) on Debian 9, kernel 4.9.51.  I was able to create and configure the cluster successfully using Jewel, but I would prefer using the latest LTS version. Everything works until I run ceph-deploy mon create-initial,

[ceph-users] Recover ceph fs files

2017-10-31 Thread Mazzystr
I've been experimenting with moving Ceph from a single node to multiple nodes on the tiniest x86 that money buys. Tell me if you've heard this one before. :) Over the weekend I unexpectedly lost my root file system and along with it the single mon. I reinstalled the OS, reinstalled Ceph, recovered

[ceph-users] ceph 12.2.2 release date

2017-10-31 Thread Pavan, Krish
Do you have any approximate release date for 12.2.2? Krish

[ceph-users] Re: mkfs rbd image is very slow

2017-10-31 Thread shadow_lin
Hi Jason, Thank you for your advice. The no-discard option works great. It now takes 5 min to format a 5 TB RBD image in XFS and only seconds to format in ext4. Is there any drawback to formatting an RBD image with the no-discard option? Thanks 2017-10-31 lin.yunfan From: Jason Dillaman
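For reference, the no-discard mkfs option mentioned here is spelled differently per filesystem. A sketch (the device path is a placeholder, not from the thread):

```shell
# Skip the initial discard pass when formatting a freshly created RBD image.
# /dev/rbd0 is a hypothetical mapped-image path.
mkfs.xfs -K /dev/rbd0             # -K: do not attempt to discard blocks
mkfs.ext4 -E nodiscard /dev/rbd0  # ext4 equivalent of the same option
```

If discard is skipped at format time, unused blocks can still be released back to the image later by running `fstrim` on the mounted filesystem.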

Re: [ceph-users] Problem making RadosGW dual stack

2017-10-31 Thread Wido den Hollander
> On 30 October 2017 at 17:42, alastair.dewhu...@stfc.ac.uk wrote: > > > Hello > > We have a dual stack test machine running RadosGW. It is currently > configured for IPv4 only. This is done in the ceph.conf with: > rgw_frontends="civetweb port=443s >
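One common way to make the civetweb frontend listen on both address families is to bind the IPv6 wildcard, which on Linux also accepts IPv4 connections when net.ipv6.bindv6only=0. A hedged ceph.conf sketch; the section name and certificate path are assumptions:

```ini
[client.rgw.gateway]
# Binding [::] usually yields a dual-stack listener on Linux
# (requires the sysctl net.ipv6.bindv6only=0).
rgw_frontends = "civetweb port=[::]:443s ssl_certificate=/etc/ceph/rgw.pem"
```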

Re: [ceph-users] Re: mkfs rbd image is very slow

2017-10-31 Thread Ilya Dryomov
On Tue, Oct 31, 2017 at 8:05 AM, shadow_lin wrote: > Hi Jason, > Thank you for your advice. > The no-discard option works great. It now takes 5 min to format a 5 TB RBD image > in XFS and only seconds to format in ext4. > Is there any drawback to formatting an RBD image with no discard

[ceph-users] Re: [luminous] OSD memory usage increase when writing a lot of data to cluster

2017-10-31 Thread shadow_lin
Hi Sage, We have tried compiling the latest Ceph source code from GitHub. The build is ceph version 12.2.1-249-g42172a4 (42172a443183ffe6b36e85770e53fe678db293bf) luminous (stable). The memory problem seems better, but the memory usage of the OSD still keeps increasing as more data is written into