[ceph-users] s3cmd --disable-multipart

2015-12-10 Thread Deneau, Tom
If using s3cmd with radosgw and s3cmd's --disable-multipart option, is there any limit to the size of the object that can be stored through radosgw? Also, is there a recommendation for multipart chunk size for radosgw? -- Tom

Re: [ceph-users] s3cmd --disable-multipart

2015-12-10 Thread Yehuda Sadeh-Weinraub
On Thu, Dec 10, 2015 at 11:10 AM, Deneau, Tom wrote:
> If using s3cmd with radosgw and s3cmd's --disable-multipart option, is
> there any limit to the size of the object that can be stored through radosgw?

rgw limits plain uploads to 5GB.

> Also, is there a recommendation
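As a hedged illustration of the two modes discussed above (the bucket and file names are hypothetical, and the chunk size is purely illustrative):

    # Plain upload: subject to the 5GB rgw cap mentioned above
    s3cmd --disable-multipart put backup.tar s3://mybucket/backup.tar

    # Multipart upload with an explicit 64MB chunk size
    s3cmd --multipart-chunk-size-mb=64 put backup.tar s3://mybucket/backup.tar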

Re: [ceph-users] Preventing users from deleting their own bucket in S3

2015-12-10 Thread Gregory Farnum
On Thu, Dec 10, 2015 at 2:26 AM, Xavier Serrano wrote:
> Hello,
>
> We are using ceph version 0.94.4, with radosgw offering S3 storage
> to our users.
>
> Each user is assigned one bucket (and only one; max_buckets is set to 1).
> The bucket name is actually the user

Re: [ceph-users] Blocked requests after "osd in"

2015-12-10 Thread Christian Kauhaus
On 10.12.2015 at 06:38, Robert LeBlanc wrote:
> I noticed this a while back and did some tracing. As soon as the PGs
> are read in by the OSD (very limited amount of housekeeping done), the
> OSD is set to the "in" state so that peering with other OSDs can
> happen and the recovery process can
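For anyone hitting the same symptom, a hedged sketch of how one might observe the blocked requests while OSDs rejoin (the osd id is hypothetical):

    # Cluster-wide summary of slow/blocked requests
    ceph health detail

    # Per-daemon view over the admin socket; replace 12 with a real osd id
    ceph daemon osd.12 dump_ops_in_flight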

Re: [ceph-users] New cluster performance analysis

2015-12-10 Thread Adrien Gillard
Hi Kris,

Indeed I am seeing some spikes in latency; they seem to be linked to other spikes in throughput and cluster-wide IOPS. I also see some spikes on the OSDs (I guess this is when the journal is flushed), but IO on the journals is quite steady. I have already done a bit of tuning on the osd filestore
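As a hedged illustration of the kind of filestore tuning being referred to (the values are placeholders, not recommendations), the journal sync intervals can be injected at runtime in the same style used elsewhere on this list:

    # Widen the journal flush window so flushes are less bursty (illustrative values)
    ceph tell osd.* injectargs -- --filestore_min_sync_interval 1 --filestore_max_sync_interval 10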

Re: [ceph-users] problem after reinstalling system

2015-12-10 Thread Jacek Jarosiewicz
Unfortunately I haven't found a newer package for CentOS in the ceph repos, not even a src.rpm from which I could build the newer package on CentOS. I've re-created the monitor on that machine from scratch (this is fairly simple and quick). Ubuntu has leveldb 1.15, CentOS has 1.12. I've found leveldb

Re: [ceph-users] Client io blocked when removing snapshot

2015-12-10 Thread Florent Manens
Hi,

Can you try modifying osd_snap_trim_sleep? The default value is 0; I have
good results with 0.25 on a ceph cluster using SATA disks:

    ceph tell osd.* injectargs -- --osd_snap_trim_sleep 0.25

Best regards,

On 10 Dec 15, at 7:52, Wukongming wrote:
> Hi,
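A hedged note: injectargs only changes the running daemons, so to keep the value across restarts one would typically also set it in ceph.conf on the OSD hosts, e.g.:

    [osd]
    osd snap trim sleep = 0.25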

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Stolte, Felix
Hi Loic,

I applied the fixed version. I don't get error messages when running
ceph-disk list, but the output is not what I expect it to be (on the hammer
release I saw all partitions):

    ceph-disk list
    /dev/cciss/c0d0 other, unknown
    /dev/cciss/c0d1 other, unknown
    /dev/cciss/c0d2 other, unknown

Re: [ceph-users] High disk utilisation

2015-12-10 Thread Christian Balzer
On Thu, 10 Dec 2015 09:11:46 +0100 Dan van der Ster wrote:
> On Thu, Dec 10, 2015 at 5:06 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Wed, 9 Dec 2015 15:57:36 +0000 MATHIAS, Bryn (Bryn) wrote:
> >
> >> to update this, the error looks like it comes from updatedb scanning
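If updatedb walking the OSD data directories is indeed the culprit, one common mitigation (hedged; the exact existing prune list varies by distro) is to add the ceph data path to PRUNEPATHS in /etc/updatedb.conf:

    # /etc/updatedb.conf -- keep mlocate away from the object store
    PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"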

Re: [ceph-users] Cannot create Initial Monitor

2015-12-10 Thread Aakanksha Pudipeddi-SSI
Thanks a lot for your help, Varada. Since I was deploying Ceph via
ceph-deploy, I could not see the actual errors. Low disk space led to a
failure in creating the monfs. Things are now working fine.

Thanks,
Aakanksha

From: Varada Kari [mailto:varada.k...@sandisk.com]
Sent: Tuesday, December 08,

Re: [ceph-users] Kernel RBD hang on OSD Failure

2015-12-10 Thread Matt Conner
Hi Ilya,

I had already recovered, but I managed to recreate the problem. I ran the
commands against rbd_data.f54f9422698a8., which was one of those listed in
osdc this time. We have 2048 PGs in the pool, so the list is long. As for
when I fetched the object using rados, it
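For context, a hedged sketch of the kind of inspection being described; the pool name and the full object name below are placeholders (the real object suffix is truncated above):

    # In-flight requests of the kernel RBD client
    cat /sys/kernel/debug/ceph/*/osdc

    # Map an object to its PG and acting OSDs, then try fetching it directly
    ceph osd map rbd rbd_data.f54f9422698a8.0000000000000000
    rados -p rbd get rbd_data.f54f9422698a8.0000000000000000 /tmp/obj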

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Loic Dachary
Hi,

I missed two, could you please try again with:
https://raw.githubusercontent.com/dachary/ceph/b1ad205e77737cfc42400941ffbb56907508efc5/src/ceph-disk

This is from https://github.com/ceph/ceph/pull/6880

Thanks for your patience :-)

Cheers

On 10/12/2015 10:27, Stolte, Felix wrote:
> Hi
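For anyone following along, a hedged sketch of trying the patched script from that URL without touching the installed package:

    curl -LO https://raw.githubusercontent.com/dachary/ceph/b1ad205e77737cfc42400941ffbb56907508efc5/src/ceph-disk
    chmod +x ceph-disk
    sudo ./ceph-disk list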

[ceph-users] Preventing users from deleting their own bucket in S3

2015-12-10 Thread Xavier Serrano
Hello, We are using ceph version 0.94.4, with radosgw offering S3 storage to our users. Each user is assigned one bucket (and only one; max_buckets is set to 1). The bucket name is actually the user name (typical unix login name, up to 8 characters long). Users can read and write objects in
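For reference, a hedged sketch of the one-bucket-per-user provisioning described here (the uid and display name are hypothetical); note that max_buckets caps bucket creation, it does not by itself prevent a user from deleting their bucket:

    # Create the user with a hard cap of one bucket
    radosgw-admin user create --uid=jsmith --display-name="J. Smith" --max-buckets=1

    # Or cap an existing user
    radosgw-admin user modify --uid=jsmith --max-buckets=1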

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Loic Dachary
Thanks, I'll look into that.

On 10/12/2015 10:27, Stolte, Felix wrote:
> Hi Loic,
>
> I applied the fixed version. I don't get error messages when running
> ceph-disk list, but the output is not what I expect it to be (on the hammer
> release I saw all partitions):
>
> ceph-disk list
>

[ceph-users] Re: Client io blocked when removing snapshot

2015-12-10 Thread Wukongming
When I adjusted the third parameter of OPTION(osd_snap_trim_sleep, OPT_FLOAT, 0) from 0 to 1, the issue was fixed. I tried again with the value 0.1, and it did not cause any problem either. So what is the best choice? Have you got a recommended value? Thanks!! Kongming Wu

Re: [ceph-users] Client io blocked when removing snapshot

2015-12-10 Thread Jan Schermer
Removing a snapshot means looking for every *potential* object the snapshot can have, and this takes a very long time (a 6TB snapshot will consist of 1.5M objects (in one replica), assuming the default 4MB object size). The same applies to large thin volumes (don't try creating and then dropping a 1
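The 1.5M figure follows directly from the default object size; a quick sanity check in shell:

    # 6 TB / 4 MB per object = 1,572,864 objects per replica
    echo $(( 6 * 1024 * 1024 / 4 ))    # prints 1572864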

Re: [ceph-users] problem after reinstalling system

2015-12-10 Thread Dan van der Ster
On Wed, Dec 9, 2015 at 1:25 PM, Jacek Jarosiewicz wrote:
> 2015-12-09 13:11:51.171377 7fac03c7f880 -1
> filestore(/var/lib/ceph/osd/ceph-5) Error initializing leveldb : Corruption:
> 29 missing files; e.g.: /var/lib/ceph/osd/ceph-5/current/omap/046388.sst

Did you have

[ceph-users] [CEPH-LIST]: problem with osd to view up

2015-12-10 Thread Andrea Annoè
Hi,

I am trying to test a ceph 9.2 cluster. My lab has 1 mon and 2 osd servers
with 4 disks each. Only 1 osd server (with 4 disks) is online; the disks of
the second osd server don't come up ...

Some info about the environment:

    [ceph@OSD1 ~]$ sudo ceph osd tree
    ID WEIGHT TYPE NAME UP/DOWN
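A hedged sketch of first troubleshooting steps for OSDs that stay down on an infernalis (9.2) host; the osd id is hypothetical:

    # Check the daemon and its logs on the affected host
    sudo systemctl status ceph-osd@4
    sudo journalctl -u ceph-osd@4 | tail -n 50

    # Confirm which osd ids belong to the offline host
    ceph osd tree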

Re: [ceph-users] problem after reinstalling system

2015-12-10 Thread Jacek Jarosiewicz
On 12/10/2015 02:50 PM, Dan van der Ster wrote:
> On Wed, Dec 9, 2015 at 1:25 PM, Jacek Jarosiewicz wrote:
>> 2015-12-09 13:11:51.171377 7fac03c7f880 -1
>> filestore(/var/lib/ceph/osd/ceph-5) Error initializing leveldb : Corruption:
>> 29 missing files; e.g.:

Re: [ceph-users] Client io blocked when removing snapshot

2015-12-10 Thread Sage Weil
On Thu, 10 Dec 2015, Jan Schermer wrote:
> Removing a snapshot means looking for every *potential* object the snapshot
> can have, and this takes a very long time (a 6TB snapshot will consist of
> 1.5M objects (in one replica), assuming the default 4MB object size). The
> same applies to large thin

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Stolte, Felix
Hi Loic,

output is still the same:

    ceph-disk list
    /dev/cciss/c0d0 other, unknown
    /dev/cciss/c0d1 other, unknown
    /dev/cciss/c0d2 other, unknown
    /dev/cciss/c0d3 other, unknown
    /dev/cciss/c0d4 other, unknown
    /dev/cciss/c0d5 other, unknown
    /dev/cciss/c0d6 other, unknown
    /dev/cciss/c0d7 other,

Re: [ceph-users] F21 pkgs for Ceph Hammer release ?

2015-12-10 Thread Deepak Shetty
On Wed, Dec 2, 2015 at 7:35 PM, Alfredo Deza wrote:
> On Tue, Dec 1, 2015 at 4:59 AM, Deepak Shetty wrote:
> > Hi,
> > Does anybody know how/where I can get the F21 repo for the ceph hammer
> > release?
> >
> > In download.ceph.com/rpm-hammer/ I only see an F20 dir, not
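For illustration, a hedged sketch of a yum repo file that borrows the fc20 tree (the only Fedora directory visible under rpm-hammer at the time) on an F21 box; the directory name, scheme, and package compatibility are all assumptions:

    # /etc/yum.repos.d/ceph-hammer.repo (hypothetical)
    [ceph-hammer]
    name=Ceph Hammer (fc20 packages)
    baseurl=https://download.ceph.com/rpm-hammer/fc20/x86_64/
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc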

Re: [ceph-users] Client io blocked when removing snapshot

2015-12-10 Thread Jan Schermer
> On 10 Dec 2015, at 15:14, Sage Weil wrote:
>
> On Thu, 10 Dec 2015, Jan Schermer wrote:
>> Removing a snapshot means looking for every *potential* object the snapshot
>> can have, and this takes a very long time (a 6TB snapshot will consist of
>> 1.5M objects (in one replica)

[ceph-users] [Ceph] Feature Ceph Geo-replication

2015-12-10 Thread Andrea Annoè
Hi to all,

Does anyone have news about Geo-replication? I have found this really nice
article by Sebastien,
http://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/,
but it's from 3 years ago... My question is about configuration (and
limitations: TTL, distance, flapping network

Re: [ceph-users] [Ceph] Feature Ceph Geo-replication

2015-12-10 Thread Jan Schermer
If you don't need synchronous replication then asynchronous is the way to go, but Ceph doesn't offer that natively (not for RBD anyway; not sure how radosgw could be set up). 200km will add at least 1ms of latency network-wise, 2ms RTT; for TCP it will be more. For sync replication (which
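One widely used asynchronous approach for RBD (independent of the article linked above) is snapshot shipping with rbd export-diff / import-diff; a hedged sketch, assuming the image already exists at the destination with the same size (pool, image, and host names are hypothetical):

    # Initial sync: snapshot the image and ship the full diff to the remote site
    rbd snap create rbd/vm1@rep1
    rbd export-diff rbd/vm1@rep1 - | ssh remote-site rbd import-diff - rbd/vm1

    # Later runs ship only the delta since the previous snapshot
    rbd snap create rbd/vm1@rep2
    rbd export-diff --from-snap rep1 rbd/vm1@rep2 - | ssh remote-site rbd import-diff - rbd/vm1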