Re: [ceph-users] Disable automatic creation of rgw pools?

2018-12-03 Thread Martin Emrich
On Fri., Nov. 30, 2018 at 12:18, Martin Emrich wrote: > > Hello, > > how can

[ceph-users] Disable automatic creation of rgw pools?

2018-11-30 Thread Martin Emrich
Hello, how can I disable the automatic creation of the rgw pools? I have no radosgw instances running, and currently do not intend to do so on this cluster. But these pools keep reappearing: .rgw.root default.rgw.meta default.rgw.log I just don't want them to eat up pgs for no reason... My
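A quick way to check how many PGs those auto-created pools actually consume (a sketch; pool names as listed above):

    ceph osd pool ls detail | grep rgw      # shows pg_num / pgp_num per rgw pool
    ceph osd pool get .rgw.root pg_num      # single pool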

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-06 Thread Martin Emrich
Hi! On 02.03.18 at 13:27, Federico Lucifredi wrote: We do speak to the Xen team every once in a while, but while there is interest in adding Ceph support on their side, I think we are somewhat down the list of their priorities. Maybe things change with XCP-ng (https://xcp-ng.github.io).

Re: [ceph-users] How to "apply" and monitor bluestore compression?

2018-02-26 Thread Martin Emrich
Hi! On 26.02.18 at 16:26, Igor Fedotov wrote: I'm working on adding compression statistics to ceph/rados df reports. And AFAIK currently the only way to monitor the compression ratio is to inspect OSD performance counters. Awesome, looking forward to it :) Cheers, Martin
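A sketch of inspecting those OSD performance counters via the admin socket (counter names as in Luminous BlueStore; verify against your release):

    ceph daemon osd.0 perf dump | grep -i compress
    # relevant counters include bluestore_compressed, bluestore_compressed_original,
    # and bluestore_compressed_allocated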

[ceph-users] How to "apply" and monitor bluestore compression?

2018-02-26 Thread Martin Emrich
Hi! I just migrated my backup cluster from filestore to bluestore (8 OSDs, one OSD at a time, took two weeks but went smoothly). I also enabled compression on a pool beforehand and am impressed by the compression ratio (snappy, aggressive, default parameters). So apparently during
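The per-pool settings mentioned (snappy algorithm, aggressive mode) correspond to these Luminous commands; "backup" is a placeholder pool name:

    ceph osd pool set backup compression_algorithm snappy
    ceph osd pool set backup compression_mode aggressive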

Re: [ceph-users] NFS-Ganesha: Files disappearing?

2018-02-13 Thread Martin Emrich
After a minute or so, they "reappear". I found this message in https://github.com/ceph/ceph/blob/master/src/rgw/rgw_file.h Cheers, Martin On 12.02.18 at 12:07, Martin Emrich wrote: Hi! I am trying out NFS-Ganesha-RGW (2.5.4 and also Git V2.5-stable) with Ceph 12.2.2. Mounting th

[ceph-users] NFS-Ganesha: Files disappearing?

2018-02-12 Thread Martin Emrich
Hi! I am trying out NFS-Ganesha-RGW (2.5.4 and also Git V2.5-stable) with Ceph 12.2.2. Mounting the RGW works fine, but if I try to archive all files, some paths seem to "disappear": ... tar: /store/testbucket/nhxYgfUgFivgzRxw: File removed before we read it tar:
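A minimal NFS-Ganesha export of an RGW bucket tree of the kind being tested might look like this sketch (placeholder credentials and paths; not the poster's actual configuration):

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/store";
        Access_Type = RW;
        FSAL {
            Name = RGW;
            User_Id = "testuser";
            Access_Key_Id = "<access key>";
            Secret_Access_Key = "<secret key>";
        }
    }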

Re: [ceph-users] Luminous/Ubuntu 16.04 kernel recommendation ?

2018-02-08 Thread Martin Emrich
On 08.02.18 at 11:50, Ilya Dryomov wrote: On Thu, Feb 8, 2018 at 11:20 AM, Martin Emrich <martin.emr...@empolis.com> wrote: I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally, running linux-generic-hwe-16.04 (4.13.0-32-generic). Works fine, except that it does not s

[ceph-users] Ceph Day Germany :)

2018-02-08 Thread Martin Emrich
Hi! I just want to thank all organizers and speakers for the awesome Ceph Day at Darmstadt, Germany yesterday. I learned of some cool stuff I'm eager to try out (NFS-Ganesha for RGW, openATTIC, ...). Organization and food were great, too. Cheers, Martin

Re: [ceph-users] Luminous/Ubuntu 16.04 kernel recommendation ?

2018-02-08 Thread Martin Emrich
I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally, running linux-generic-hwe-16.04 (4.13.0-32-generic). Works fine, except that it does not support the latest features: I had to disable exclusive-lock,fast-diff,object-map,deep-flatten on the image. Otherwise it runs well.
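Disabling those features on an existing image goes roughly like this (pool/image names are placeholders; dependent features first, since fast-diff requires object-map, which requires exclusive-lock):

    rbd feature disable mypool/myimage fast-diff
    rbd feature disable mypool/myimage object-map
    rbd feature disable mypool/myimage exclusive-lock
    rbd feature disable mypool/myimage deep-flatten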

Re: [ceph-users] Bug in RadosGW resharding? Hangs again...

2018-01-17 Thread Martin Emrich
, Martin From: Orit Wasserman <owass...@redhat.com> Date: Wednesday, January 17, 2018 at 11:57 To: Martin Emrich <martin.emr...@empolis.com> Cc: ceph-users <ceph-users@lists.ceph.com> Subject: Re: [ceph-users] Bug in RadosGW resharding? Hangs again... On Wed, Jan 17, 2018 a

Re: [ceph-users] Bug in RadosGW resharding? Hangs again...

2018-01-17 Thread Martin Emrich
ome random test data to create logs I can share. I will also test whether the versioning itself is the culprit, or if it is the lifecycle rule. Regards, Martin From: Orit Wasserman <owass...@redhat.com> Date: Tuesday, January 16, 2018 at 18:38 To: Martin Emrich <martin.emr...@empolis.com>

[ceph-users] Bug in RadosGW resharding? Hangs again...

2018-01-15 Thread Martin Emrich
Hi! After having a completely broken radosgw setup due to damaged buckets, I completely deleted all rgw pools, and started from scratch. But my problem is reproducible. After pushing ca. 10 objects into a bucket, the resharding process appears to start, and the bucket is now
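For inspecting or aborting a stuck reshard on Luminous, radosgw-admin offers these subcommands (bucket name is a placeholder; availability depends on the exact 12.2.x release):

    radosgw-admin reshard list
    radosgw-admin reshard status --bucket=mybucket
    radosgw-admin reshard cancel --bucket=mybucket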

Re: [ceph-users] How to "reset" rgw?

2018-01-11 Thread Martin Emrich
Ok thanks, I'll try it out... Regards, Martin On 10.01.18 at 18:48, Casey Bodley wrote: On 01/10/2018 04:34 AM, Martin Emrich wrote: Hi! As I cannot find any solution for my broken rgw pools, the only way out is to give up and "reset". How do I throw away all rgw data f

[ceph-users] How to "reset" rgw?

2018-01-10 Thread Martin Emrich
Hi! As I cannot find any solution for my broken rgw pools, the only way out is to give up and "reset". How do I throw away all rgw data from a ceph cluster? Just delete all rgw pools? Or are some parts stored elsewhere (monitor, ...)? Thanks, Martin
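Deleting the default rgw pools would look like this sketch (requires mon_allow_pool_delete=true; note that a starting radosgw recreates them):

    ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
    ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
    ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it
    ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it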

[ceph-users] RadosGW still stuck on buckets

2018-01-05 Thread Martin Emrich
Hello! Hope you all started the new year well... New year, same problem: Still having the issue with the frozen radosgw buckets. Some information: * Ceph 12.2.2 with bluestore * 3 OSD nodes, each housing 2 SSD OSDs for bucket index and 4 OSDs for bucket data, each having 64GB RAM and 16

Re: [ceph-users] using s3cmd to put object into cluster with version?

2018-01-03 Thread Martin Emrich
Hi! I use the aws CLI tool, like this: aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-versioning --bucket yourbucket --versioning-configuration Status=Enabled I also set a lifecycle configuration to expire older versions, e.g.: aws --endpoint-url=http://your-rgw:7480 s3api
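A lifecycle rule along the lines described could be applied like this (a sketch; endpoint, bucket, and the 60-day window are placeholders):

    aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-lifecycle-configuration \
      --bucket yourbucket --lifecycle-configuration '{
        "Rules": [{
          "ID": "expire-60days",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "NoncurrentVersionExpiration": {"NoncurrentDays": 60},
          "Expiration": {"ExpiredObjectDeleteMarker": true}
        }]
      }'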

Re: [ceph-users] Understanding reshard issues

2017-12-14 Thread Martin Emrich
Hi! On 13.12.17 at 20:50, Graham Allan wrote: After our Jewel to Luminous 12.2.2 upgrade, I ran into some of the same issues reported earlier on the list under "rgw resharding operation seemingly won't end". Yes, those were/are my threads; I also have this issue. I was able to correct the

Re: [ceph-users] Resharding issues / How long does it take?

2017-12-12 Thread Martin Emrich
Hi! (By the way, a second bucket now has this problem; it apparently occurs when automatic resharding commences while data is being written to the bucket.) On 12.12.17 at 09:53, Orit Wasserman wrote: On Mon, Dec 11, 2017 at 11:45 AM, Martin Emrich <martin.emr...@empolis.com>

Re: [ceph-users] How to remove a faulty bucket?

2017-12-11 Thread Martin Emrich
nson" <robb...@gentoo.org>: On Mon, Dec 11, 2017 at 09:29:11AM +, Martin Emrich wrote: > > Yes indeed. Running "radosgw-admin bi list" results in an incomplete 300MB JSON file, before it freezes. That's a very good starting point to debug. The bucket

Re: [ceph-users] How to remove a faulty bucket?

2017-12-11 Thread Martin Emrich
Hi! On 09.12.17, 00:19, "Robin H. Johnson" wrote: If you use 'radosgw-admin bi list', you can get a listing of the raw bucket index. I'll bet that the objects aren't being shown at the S3 layer because something is wrong with them. But since they are in the
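The command in question, with output captured for offline inspection (bucket name is a placeholder):

    radosgw-admin bi list --bucket=mybucket > bucket-index.json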

Re: [ceph-users] Luminous rgw hangs after sighup

2017-12-11 Thread Martin Emrich
Hi! This sounds like http://tracker.ceph.com/issues/20763 (or indeed http://tracker.ceph.com/issues/20866). It is still present in 12.2.2 (just tried it). My workaround is to exclude radosgw from logrotate (remove "radosgw" from /etc/logrotate.d/ceph) so it is not SIGHUPed, and to rotate the
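The workaround amounts to removing radosgw from the killall line in the postrotate script of /etc/logrotate.d/ceph; a sketch of the relevant fragment (exact contents vary by version and distribution):

    postrotate
        # "radosgw" removed from this list so log rotation no longer SIGHUPs it:
        killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse || true
    endscript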

Re: [ceph-users] How to remove a faulty bucket?

2017-12-08 Thread Martin Emrich
nst...@gmail.com> Date: Friday, December 8, 2017 at 15:19 To: Martin Emrich <martin.emr...@empolis.com> Cc: ceph-users <ceph-users@lists.ceph.com> Subject: Re: [ceph-users] How to remove a faulty bucket? [WAS:Re: Resharding issues / How long does it take?] First off, you can rename a bucke

[ceph-users] How to remove a faulty bucket? [WAS:Re: Resharding issues / How long does it take?]

2017-12-08 Thread Martin Emrich
s bucket? Thanks Martin On 07.12.17, 16:05, "ceph-users on behalf of Martin Emrich" <ceph-users-boun...@lists.ceph.com on behalf of martin.emr...@empolis.com> wrote: Hi all! Apparently, one of my buckets went wonko during automatic resharding; the frontend applica

[ceph-users] Resharding issues / How long does it take?

2017-12-07 Thread Martin Emrich
Hi all! Apparently, one of my buckets went wonko during automatic resharding; the frontend application only gets a timeout after 90s. After an attempt to fix the index using “radosgw-admin bucket check --fix”, I tried to reshard it (6.3 GB of data in ca. 23 objects). The resharding command

Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-08-19 Thread Martin Emrich
I see the same issue with ceph v12.1.4 as well. We are not using openstack or keystone, and see these errors in the rgw log. RGW is not hanging though. Thanks, Nitin From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Martin Emrich <martin.emr

Re: [ceph-users] Radosgw returns 404 Not Found

2017-08-16 Thread Martin Emrich
Date: Wednesday, August 16, 2017 at 16:31 To: Martin Emrich <martin.emr...@empolis.com>, "ceph-us...@ceph.com" <ceph-us...@ceph.com> Subject: Re: [ceph-users] Radosgw returns 404 Not Found You need to fix your endpoint URLs. The line that shows this is: 2017-08-16 14:02:21.7259
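If the 404s stem from virtual-host-style bucket requests hitting a gateway that does not know its own DNS name, the usual fix is rgw_dns_name in ceph.conf (an assumption about the cause here; the section name is a placeholder):

    [client.rgw.gateway]
    rgw_dns_name = your-rgw.example.com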

[ceph-users] Radosgw returns 404 Not Found

2017-08-16 Thread Martin Emrich
Hi! I have the following issue: While “radosgw-admin bucket list” shows me my buckets, S3 API clients only get a “404 Not Found”. With debug level 20, I see the following output of the radosgw service: 2017-08-16 14:02:21.725959 7fc7f5317700 20 rgw::auth::s3::LocalEngine granted access 2017-08-16

Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-07-24 Thread Martin Emrich
I created an issue: http://tracker.ceph.com/issues/20763 Regards, Martin From: Vasu Kulkarni <vakul...@redhat.com> Date: Monday, July 24, 2017 at 19:26 To: Vaibhav Bhembre <vaib...@digitalocean.com> Cc: Martin Emrich <martin.emr...@empolis.com>, "ceph-users@lists

Re: [ceph-users] How to set up bluestore manually?

2017-07-07 Thread Martin Emrich
Hi all! I have gotten quite far by now, and I documented my progress here: https://github.com/MartinEmrich/kb/blob/master/ceph/Manual-Bluestore.md Cheers, Martin From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Martin Emrich Sent: Friday, June 30, 2017 17:32 To: ceph

Re: [ceph-users] How to set up bluestore manually?

2017-07-07 Thread Martin Emrich
You seem to use a whole disk, but I used a partition, which cannot (sanely) be partitioned further. But could you tell me which sizes you ended up with for sdd1 and sdd2? Thanks, Martin From: Ashley Merrick [mailto:ash...@amerrick.co.uk] Sent: Friday, July 7, 2017 09:08 To: Martin Emrich

Re: [ceph-users] How to set up bluestore manually?

2017-07-07 Thread Martin Emrich
http://tracker.ceph.com/issues/20540 One more question: what are the size requirements for the WAL and the DB? Cheers, Martin From: Vasu Kulkarni [mailto:vakul...@redhat.com] Sent: Thursday, July 6, 2017 20:45 To: Martin Emrich <martin.emr...@empolis.com> Cc: Loris Cuoghi <loris.cuo...@artificiale.n

Re: [ceph-users] How to set up bluestore manually?

2017-07-06 Thread Martin Emrich
ore :( Thanks, Martin -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Martin Emrich Sent: Tuesday, July 4, 2017 22:02 To: Loris Cuoghi <loris.cuo...@artificiale.net>; ceph-users@lists.ceph.com Subject: Re: [ceph-users] How t

Re: [ceph-users] How to set up bluestore manually?

2017-07-04 Thread Martin Emrich
found some open issues about this: http://tracker.ceph.com/issues/6042 and http://tracker.ceph.com/issues/5461. Could this be related? Cheers, Martin -Original Message- From: Loris Cuoghi [mailto:loris.cuo...@artificiale.net] Sent: Monday, July 3, 2017 15:48 To: Martin Emrich <

Re: [ceph-users] Rados maximum object size issue since Luminous? SOLVED

2017-07-04 Thread Martin Emrich
, Martin -Original Message- From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de] Sent: Tuesday, July 4, 2017 14:42 To: Martin Emrich <martin.emr...@empolis.com> Cc: Gregory Farnum <gfar...@redhat.com>; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Rados maximum objec

Re: [ceph-users] Rados maximum object size issue since Luminous?

2017-07-04 Thread Martin Emrich
behaviour of Jewel (allowing 50GB objects)? The only option I found was "osd max write size", but that does not seem to be the right one, as its default of 90MB is lower than my observed 128MB limit. Cheers, Martin -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.

Re: [ceph-users] Rados maximum object size issue since Luminous?

2017-07-04 Thread Martin Emrich
19:59 To: Martin Emrich <martin.emr...@empolis.com> Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Rados maximum object size issue since Luminous? On Mon, Jul 3, 2017 at 10:17 AM, Martin Emrich <martin.emr...@empolis.com> wrote: > Hi! > > > > Having to interrup

[ceph-users] Rados maximum object size issue since Luminous?

2017-07-03 Thread Martin Emrich
Hi! Having to interrupt my bluestore test, I have another issue since upgrading from Jewel to Luminous: My backup system (Bareos with RadosFile backend) can no longer write Volumes (objects) larger than around 128MB. (Of course, I did not test that on my test cluster prior to upgrading the
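The observed 128MB ceiling matches the osd_max_object_size option introduced in Luminous (default 128 MiB). Checking and raising it would look roughly like this sketch (the 50 GiB value mirrors the Jewel-era behaviour discussed in the follow-ups; very large objects are generally discouraged):

    ceph daemon osd.0 config get osd_max_object_size
    ceph tell osd.* injectargs '--osd_max_object_size=53687091200'   # ~50 GiB, runtime only
    # persist via "osd max object size" in the [osd] section of ceph.conf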

Re: [ceph-users] How to set up bluestore manually?

2017-07-03 Thread Martin Emrich
(I have different categories of OSD hosts for different use cases, split by appropriate CRUSH rules). Thanks Martin -Original Message- From: Loris Cuoghi [mailto:loris.cuo...@artificiale.net] Sent: Monday, July 3, 2017 13:39 To: Martin Emrich <martin.emr...@empolis.com>

Re: [ceph-users] How to set up bluestore manually?

2017-07-03 Thread Martin Emrich
(and I won’t risk messing with my cluster by attempting to “retrofit” ceph-deploy to it)… I’ll set up a single-VM cluster to squeeze out the necessary commands and check back… Regards, Martin From: Vasu Kulkarni [mailto:vakul...@redhat.com] Sent: Friday, June 30, 2017 17:58 To: Martin Emrich

[ceph-users] How to set up bluestore manually?

2017-06-30 Thread Martin Emrich
Hi! I'd like to set up new OSDs with bluestore: the real data ("block") on a spinning disk, and DB+WAL on a SSD partition. But I do not use ceph-deploy, and never used ceph-disk (I set up the filestore OSDs manually). Google tells me that ceph-disk does not (yet) support splitting the
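The manual procedure he later documented (see the 2017-07-07 follow-up for the link) boils down to something like this sketch — device paths and layout are assumptions, Luminous-era commands:

    # requires "osd objectstore = bluestore" in ceph.conf
    OSD_ID=$(ceph osd create)
    mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
    ln -s /dev/sda1 /var/lib/ceph/osd/ceph-$OSD_ID/block      # data on the spinning disk
    ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-$OSD_ID/block.db   # RocksDB on the SSD partition
    ln -s /dev/sdb2 /var/lib/ceph/osd/ceph-$OSD_ID/block.wal  # WAL on the SSD partition
    ceph-osd -i $OSD_ID --mkfs --mkkey
    ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
      -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-$OSD_ID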

[ceph-users] Luminous radosgw hangs after a few hours

2017-06-29 Thread Martin Emrich
Since upgrading to 12.1, our Object Gateways hang after a few hours, I only see these messages in the log file: 2017-06-29 07:52:20.877587 7fa8e01e5700 0 ERROR: keystone revocation processing returned error r=-22 2017-06-29 08:07:20.877761 7fa8e01e5700 0 ERROR: keystone revocation processing
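Those errors typically come from the keystone revocation thread running without a keystone backend configured. One hedged mitigation is disabling its interval (an assumption; verify that 0 actually disables the thread in your release):

    [client.rgw.gateway]
    rgw_keystone_revocation_interval = 0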

Re: [ceph-users] Radosgw versioning S3 compatible?

2017-06-29 Thread Martin Emrich
] Radosgw versioning S3 compatible? On 06/28/2017 12:52 PM, Yehuda Sadeh-Weinraub wrote: > On Wed, Jun 28, 2017 at 8:13 AM, Martin Emrich > <martin.emr...@empolis.com> wrote: >> Correction: It’s about the Version expiration, not the versioning itself. >> >> We send this

Re: [ceph-users] Radosgw versioning S3 compatible?

2017-06-28 Thread Martin Emrich
: { ExpiredObjectDeleteMarker: true }, ID: 'expire-60days' } ] Should that be supported? Thanks Martin From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Martin Emrich Sent: Wednesday, June 28, 2017 16:13 To: ceph-users@lists.ceph.com Subject

[ceph-users] Radosgw versioning S3 compatible?

2017-06-28 Thread Martin Emrich
Hi! Is the Object Gateway S3 API supposed to be compatible with Amazon S3 regarding versioning? Object Versioning is listed as supported in Ceph 12.1, but using the standard Node.js aws-sdk module (s3.putBucketVersioning()) results in "NotImplemented". Thanks Martin

[ceph-users] Bad performance while deleting many small objects via radosgw S3

2016-07-08 Thread Martin Emrich
Hi! Our little dev ceph cluster (nothing fancy; 3x1 OSD with 100GB each, 3x monitor with radosgw) takes over 20 minutes to delete ca. 44000 small objects (<1GB in total). Deletion is done by listing objects in blocks of 1000 and then deleting them in one call for each block; each deletion of
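The list-then-bulk-delete pattern described maps to the S3 multi-object delete call, which accepts up to 1000 keys per request; an aws-CLI sketch (endpoint, bucket, and the batch file are placeholders):

    aws --endpoint-url=http://your-rgw:7480 s3api list-objects \
      --bucket dev-bucket --max-items 1000 --query 'Contents[].Key' > keys.json
    # turn keys.json into {"Objects":[{"Key":"..."},...]} (e.g. with jq), then:
    aws --endpoint-url=http://your-rgw:7480 s3api delete-objects \
      --bucket dev-bucket --delete file://batch.json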