Re: [ceph-users] RGW/Civet: Reads too much data when client doesn't close the connection

2017-07-12 Thread Jens Rosenboom
2017-07-12 15:23 GMT+00:00 Aaron Bassett : > I have a situation where a client is GET'ing a large key (100GB) from RadosGW > and just reading the first few bytes to determine if it's a gzip file or not, > and then just moving on without closing the connection. RadosGW then goes > on to read
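If the client only ever needs the first couple of bytes (e.g. to look for the gzip magic number), a ranged GET avoids the problem on the client side entirely, since RGW then only has to fetch the requested range. A minimal sketch using the AWS CLI against an RGW endpoint; bucket, key and endpoint are placeholders, not names from the thread:

  # Fetch only the first two bytes of the object (gzip magic is 1f 8b).
  aws --endpoint-url http://rgw.example.com:7480 s3api get-object \
      --bucket mybucket --key big-100GB-file --range bytes=0-1 /tmp/head2
  # Compare against the gzip magic number.
  [ "$(xxd -p /tmp/head2)" = "1f8b" ] && echo "gzip" || echo "not gzip"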

Re: [ceph-users] Access rights of /var/lib/ceph with Jewel

2017-07-10 Thread Jens Rosenboom
2017-07-10 10:40 GMT+00:00 Christian Balzer : > On Mon, 10 Jul 2017 11:27:26 +0200 Marc Roos wrote: > >> Looks to me by design (from rpm install), and the settings of the >> directories below are probably the result of a user umask setting. > > I know it's deliberate, I'm asking why. It seems to ha
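To illustrate the umask point: directories created at install time inherit whatever umask was active, which is why the modes under /var/lib/ceph can differ between hosts. A quick sketch for checking and, if wanted, normalizing them, assuming the usual Jewel ceph:ceph ownership (adjust to your own policy):

  # Show current ownership/modes of the ceph state directories.
  ls -ld /var/lib/ceph /var/lib/ceph/*
  # Demonstrate the umask effect: the same mkdir yields different modes.
  (umask 022; mkdir /tmp/demo-022); (umask 027; mkdir /tmp/demo-027)
  ls -ld /tmp/demo-022 /tmp/demo-027
  # Normalize ownership for the Jewel ceph user if needed.
  chown -R ceph:ceph /var/lib/ceph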

Re: [ceph-users] Rados maximum object size issue since Luminous?

2017-07-04 Thread Jens Rosenboom
2017-07-04 12:10 GMT+00:00 Martin Emrich : ... > So as striping is not backwards-compatible (and this pool is indeed for > backup/archival purposes where large objects are no problem): > > How can I restore the behaviour of jewel (allowing 50GB objects)? > > The only option I found was "osd max w
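If the goal is simply to have large single objects accepted again, the relevant knobs, to the best of my knowledge and as an assumption rather than the thread's confirmed answer, are osd_max_object_size (the new hard cap introduced in Luminous, default 128 MB, in bytes) and osd_max_write_size (per-write limit, in MB). A sketch for raising both to 50 GB:

  # ceph.conf, [osd] section (verify units against your release docs):
  #   osd max object size = 53687091200
  #   osd max write size = 51200
  # Apply at runtime without a restart:
  ceph tell osd.* injectargs '--osd_max_object_size 53687091200 --osd_max_write_size 51200'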

Re: [ceph-users] RGW: Truncated objects and bad error handling

2017-06-12 Thread Jens Rosenboom
Adding ceph-devel as this now involves two bugs that are IMO critical, one resulting in data loss, the other in data not getting removed properly. 2017-06-07 9:23 GMT+00:00 Jens Rosenboom : > 2017-06-01 18:52 GMT+00:00 Gregory Farnum : >> >> >> On Thu, Jun 1, 2017 at 2:03 AM

Re: [ceph-users] RGW: Truncated objects and bad error handling

2017-06-07 Thread Jens Rosenboom
2017-06-01 18:52 GMT+00:00 Gregory Farnum : > > > On Thu, Jun 1, 2017 at 2:03 AM Jens Rosenboom wrote: >> >> On a large Hammer-based cluster (> 1 Gobjects) we are seeing a small >> amount of objects being truncated. All of these objects are between >> 51

[ceph-users] RGW: Truncated objects and bad error handling

2017-06-01 Thread Jens Rosenboom
On a large Hammer-based cluster (> 1 Gobjects) we are seeing a small number of objects being truncated. All of these objects are between 512kB and 4MB in size and they are not uploaded as multipart, so the first 512kB get stored into the head object and the next chunks should be in tail objects nam
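A sketch of how one can check whether a given S3 object is affected, by comparing the RGW manifest against what is actually stored in RADOS. Pool and object names below follow the usual Hammer default layout and are placeholders for illustration only:

  # Dump the RGW manifest; it lists the head object and the expected tail parts.
  radosgw-admin object stat --bucket=mybucket --object=myfile > manifest.json
  # For each RADOS object named in the manifest, check that it exists and has
  # the expected size; a missing or short tail object indicates truncation.
  rados -p .rgw.buckets stat 'default.12345.6_myfile'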

Re: [ceph-users] Internalls of RGW data store

2017-05-24 Thread Jens Rosenboom
2017-05-24 6:26 GMT+00:00 Anton Dmitriev : > Hi > > Correct me if I am wrong: when uploading a file to RGW it is split into > stripe units, and these stripe units are mapped to RADOS objects. These RADOS > objects are files on the OSD filestore. Yes, see this blog post that explains this in a bit more det
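On a filestore cluster the mapping can be followed end to end; a rough sketch, where the pool name, object name and paths are placeholders and depend on your layout:

  # 1. Find the RADOS objects belonging to an S3 object via its manifest.
  radosgw-admin object stat --bucket=mybucket --object=myfile
  # 2. Ask CRUSH where one of those RADOS objects lives.
  ceph osd map .rgw.buckets default.12345.6_myfile
  # 3. On the primary OSD, the object is a plain file under current/<pg>_head/.
  find /var/lib/ceph/osd/ceph-0/current -name '*myfile*' -ls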

Re: [ceph-users] ceph df space for rgw.buckets.data shows used even when files are deleted

2017-05-15 Thread Jens Rosenboom
2017-05-12 2:55 GMT+00:00 Ben Hines : > It actually seems like these values aren't being honored, I actually see > many more objects being processed by gc (as well as kraken object > lifecycle), even though my values are at the default 32 objs. > > 19:52:44 root@<> /var/run/ceph $ ceph --admin-daem
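To see what the garbage collector has actually loaded and what is still queued, and to force a pass instead of waiting for the processor interval, a sketch (the admin socket path and daemon name are examples):

  # Confirm the gc settings the running radosgw actually loaded.
  ceph --admin-daemon /var/run/ceph/ceph-client.rgw.myhost.asok config show | grep rgw_gc
  # List everything queued for garbage collection, including entries not yet due.
  radosgw-admin gc list --include-all | head
  # Trigger a gc pass by hand.
  radosgw-admin gc process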

[ceph-users] Analysing performance for RGW requests

2017-05-12 Thread Jens Rosenboom
When switching from Apache+fcgi to civetweb it seems like we are losing access to some useful information, like the response time and the response size. Do others also have this issue or is there maybe a solution that I have not found yet? I have opened a feature request for this, just in case: h
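As a stopgap until civetweb exposes this itself, response time and size can at least be measured from the client side, for example with curl; a small sketch where the endpoint and object are placeholders:

  # Per-request status, size and latency for a single GET against RGW.
  curl -s -o /dev/null \
      -w 'status=%{http_code} bytes=%{size_download} time=%{time_total}\n' \
      http://rgw.example.com:7480/mybucket/myfile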

[ceph-users] Why is cls_log_add logging so much?

2017-04-04 Thread Jens Rosenboom
On a busy cluster, I'm seeing a couple of OSDs logging millions of lines like this: 2017-04-04 06:35:18.240136 7f40ff873700 0 cls/log/cls_log.cc:129: storing entry at 1_1491287718.237118_57657708.1 2017-04-04 06:35:18.244453 7f4102078700 0 cls/log/cls_log.cc:129: storing entry at 1_1491287718.
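To get a feel for which OSDs are affected and how fast these entries accumulate, a quick sketch (log paths assume the default layout, and the mdlog/datalog checks are an assumption about where the cls_log traffic originates):

  # Count the cls_log "storing entry" lines per OSD log file.
  grep -c 'cls_log.cc.*storing entry' /var/log/ceph/ceph-osd.*.log | sort -t: -k2 -rn | head
  # Check whether the RGW metadata/data logs are the writers and how large they are.
  radosgw-admin mdlog list | head
  radosgw-admin datalog list | head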

Re: [ceph-users] Bcache, partitions and BlueStore

2016-09-26 Thread Jens Rosenboom
2016-09-26 11:31 GMT+02:00 Wido den Hollander : ... > Does anybody know the proper route we need to take to get this fixed > upstream? Does anyone have contacts with the bcache developers? I do not have direct contacts either, but having partitions on bcache would be really great. Currently we do some nas
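As long as bcache devices cannot be partitioned on the running kernel, one common workaround (an assumption here, not necessarily what the posters ended up doing) is to put LVM on top of the bcache device and carve logical volumes out of it:

  # /dev/bcache0 cannot hold a partition table on older kernels,
  # but it can be used as an LVM physical volume.
  pvcreate /dev/bcache0
  vgcreate bcache-vg /dev/bcache0
  # One LV per OSD data "partition".
  lvcreate -L 2.7T -n osd-0-data bcache-vg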

Re: [ceph-users] Large directory block size on XFS may be harmful

2016-02-18 Thread Jens Rosenboom
2016-02-18 15:10 GMT+01:00 Dan van der Ster : > Hi, > > Thanks for linking to a current update on this problem [1] [2]. I > really hope that new Ceph installations aren't still following that > old advice... it's been known to be a problem for around a year and a > half [3]. > That said, the "-n si
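For reference, the problematic advice versus the default; the exact figure is from memory and should be checked against the links in the thread. Old guides suggested a 64 KiB directory block size, while mkfs.xfs defaults to the filesystem block size:

  # Old, problematic recommendation for Ceph OSDs:
  #   mkfs.xfs -n size=65536 /dev/sdX1
  # Safer: leave -n size at its default (the filesystem block size, usually 4 KiB).
  mkfs.xfs -f /dev/sdX1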

[ceph-users] Large directory block size on XFS may be harmful

2016-02-18 Thread Jens Rosenboom
Various people have noticed performance problems and sporadic kernel log messages like kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x8250) with their Ceph clusters. We have seen this in one of our clusters ourselves, but have not been able to reproduce it in a lab environment
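A quick way to check whether a cluster is affected: look for the allocation-deadlock warning in the kernel log and inspect the directory block size of the mounted OSD filesystems (paths assume the default mount points):

  # Any occurrences of the warning on this host?
  dmesg | grep -c 'possible memory allocation deadlock in kmem_alloc'
  # Directory block size of each OSD filesystem; the "naming" line shows bsize.
  for d in /var/lib/ceph/osd/ceph-*; do xfs_info "$d" | grep naming; done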

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Jens Rosenboom
2015-12-11 9:16 GMT+01:00 Stolte, Felix : > Hi Jens, > > output is attached (stderr + stdout) O.k., so now "ls -l /sys/dev/block /sys/dev/block/104:0 /sys/dev/block/104:112" please.

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Jens Rosenboom
2015-12-11 8:19 GMT+01:00 Stolte, Felix : > Hi Loic, > > output is still the same: > > ceph-disk list > /dev/cciss/c0d0 other, unknown > /dev/cciss/c0d1 other, unknown > /dev/cciss/c0d2 other, unknown > /dev/cciss/c0d3 other, unknown > /dev/cciss/c0d4 other, unknown > /dev/cciss/c0d5 other, unknown

Re: [ceph-users] FW: RGW performance issue

2015-11-13 Thread Jens Rosenboom
2015-11-13 5:47 GMT+01:00 Pavan Rallabhandi : > If you are on >=hammer builds, you might want to consider the option of > using 'rgw_num_rados_handles', which opens up more handles to the cluster > from RGW. This would help in scenarios where you have enough OSDs > to drive the cluster b
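A sketch of how that option would be set. The section name depends on how the radosgw instance is defined, the service name depends on your init system, and the option itself was later deprecated, so treat this as illustrative only:

  # ceph.conf on the RGW host:
  #   [client.rgw.myhost]
  #   rgw num rados handles = 8
  # Restart the gateway so the extra librados handles are opened:
  systemctl restart ceph-radosgw@rgw.myhost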

Re: [ceph-users] Ceph OSDs with bcache experience

2015-11-13 Thread Jens Rosenboom
2015-10-20 16:00 GMT+02:00 Wido den Hollander : ... > The system consists of 39 hosts: > > 2U SuperMicro chassis: > * 80GB Intel SSD for OS > * 240GB Intel S3700 SSD for Journaling + Bcache > * 6x 3TB disk I'm currently testing a similar setup, but it turns out that setup and operations are a
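For comparison, the rough shape of such a setup on one host using bcache-tools; device names are placeholders and how /dev/bcache0 is then consumed by the OSD (ceph-disk versus manual setup) is left out, since that is exactly where partition support gets awkward:

  # Create a cache set on the S3700 SSD partition.
  make-bcache -C /dev/sdb1
  # Turn each spinning disk into a bcache backing device (udev registers it).
  make-bcache -B /dev/sdc
  # Attach the backing device to the cache set (UUID via bcache-super-show /dev/sdb1).
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  # Writeback caching suits the journal-heavy filestore workload.
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  # /dev/bcache0 is then used as the OSD data device.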