Hello,
How can I disable the automatic creation of the rgw pools?
I have no radosgw instances running, and currently do not intend to do
so on this cluster. But these pools keep reappearing:
.rgw.root
default.rgw.meta
default.rgw.log
I just don't want them to eat up pgs for no reason...
My
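One common cause is that any running radosgw process, or any radosgw-admin invocation (e.g. from a monitoring script), recreates these pools on startup. A minimal sketch for removing them, assuming mon_allow_pool_delete is enabled on the monitors:

    ceph osd pool rm .rgw.root .rgw.root --yes-i-really-really-mean-it
    ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
    ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it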
Hi!
On 02.03.18 at 13:27, Federico Lucifredi wrote:
We do speak to the Xen team every once in a while, but while there is
interest in adding Ceph support on their side, I think we are somewhat
down the list of their priorities.
Maybe things change with XCP-ng (https://xcp-ng.github.io).
Hi!
On 26.02.18 at 16:26, Igor Fedotov wrote:
I'm working on adding compression statistics to ceph/rados df reports.
And AFAIK currently the only way to monitor the compression ratio is to
inspect osd performance counters.
Awesome, looking forward to it :)
Cheers,
Martin
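Until those reports land, the counters Igor mentions can be read through the admin socket; a rough sketch, assuming shell access to an OSD host running osd.0 (the bluestore_compressed* values show compressed vs. original bytes):

    ceph daemon osd.0 perf dump | grep -i compress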
Hi!
I just migrated my backup cluster from filestore to bluestore (8 OSDs,
one OSD at a time, took two weeks but went smoothly).
I also enabled compression on a pool beforehand and am impressed by the
compression ratio (snappy, aggressive, default parameters). So apparently
during
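For reference, a sketch of how compression can be enabled on an existing pool (the pool name is a placeholder):

    ceph osd pool set mypool compression_algorithm snappy
    ceph osd pool set mypool compression_mode aggressive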
After a
minute or so, they "reappear".
I found this message in
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_file.h
Cheers,
Martin
On 12.02.18 at 12:07, Martin Emrich wrote:
Hi!
I am trying out NFS-Ganesha-RGW (2.5.4 and also Git V2.5-stable) with
Ceph 12.2.2.
Mounting th
Hi!
I am trying out NFS-Ganesha-RGW (2.5.4 and also Git V2.5-stable) with
Ceph 12.2.2.
Mounting the RGW works fine, but if I try to archive all files, some
paths seem to "disappear":
...
tar: /store/testbucket/nhxYgfUgFivgzRxw: File removed before we read it
tar:
On 08.02.18 at 11:50, Ilya Dryomov wrote:
On Thu, Feb 8, 2018 at 11:20 AM, Martin Emrich
<martin.emr...@empolis.com> wrote:
I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
running linux-generic-hwe-16.04 (4.13.0-32-generic).
Works fine, except that it does not s
Hi!
I just want to thank all organizers and speakers for the awesome Ceph
Day at Darmstadt, Germany yesterday.
I learned of some cool stuff I'm eager to try out (NFS-Ganesha for RGW,
openATTIC, ...). Organization and food were great, too.
Cheers,
Martin
I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
running linux-generic-hwe-16.04 (4.13.0-32-generic).
Works fine, except that it does not support the latest features: I had
to disable exclusive-lock,fast-diff,object-map,deep-flatten on the
image. Otherwise it runs well.
,
Martin
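For reference, a sketch of disabling those features on an existing image (pool and image names are placeholders; dependent features are disabled first):

    rbd feature disable mypool/myimage fast-diff
    rbd feature disable mypool/myimage object-map
    rbd feature disable mypool/myimage exclusive-lock
    rbd feature disable mypool/myimage deep-flatten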
From: Orit Wasserman <owass...@redhat.com>
Date: Wednesday, 17 January 2018, 11:57
To: Martin Emrich <martin.emr...@empolis.com>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Bug in RadosGW resharding? Hangs again...
On Wed, Jan 17, 2018 a
some random test data to create logs I can share.
I will also test whether the versioning itself is the culprit, or if it is the
lifecycle rule.
Regards,
Martin
From: Orit Wasserman <owass...@redhat.com>
Date: Tuesday, 16 January 2018, 18:38
To: Martin Emrich <martin.emr...@empolis.com>
Hi!
After having a completely broken radosgw setup due to damaged buckets, I
completely deleted all rgw pools, and started from scratch.
But my problem is reproducible. After pushing ca. 10 objects into a
bucket, the resharding process appears to start, and the bucket is now
Ok thanks, I'll try it out...
Regards,
Martin
On 10.01.18 at 18:48, Casey Bodley wrote:
On 01/10/2018 04:34 AM, Martin Emrich wrote:
Hi!
As I cannot find any solution for my broken rgw pools, the only way
out is to give up and "reset".
How do I throw away all rgw data f
Hi!
As I cannot find any solution for my broken rgw pools, the only way out
is to give up and "reset".
How do I throw away all rgw data from a ceph cluster? Just delete all
rgw pools? Or are some parts stored elsewhere (monitor, ...)?
Thanks,
Martin
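For what it's worth, the realm/zonegroup/zone configuration is itself stored in rados (in .rgw.root), not on the monitors, so removing the rgw pools should take it along. A sketch, assuming mon_allow_pool_delete is enabled:

    for p in $(ceph osd pool ls | grep rgw); do
        ceph osd pool rm "$p" "$p" --yes-i-really-really-mean-it
    done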
Hello!
Hope you all started the new year well...
New year, same problem: Still having the issue with the frozen radosgw
buckets. Some information:
* Ceph 12.2.2 with bluestore
* 3 OSD nodes, each housing 2 SSD OSDs for bucket index and 4 OSDs for
bucket data, each having 64GB RAM and 16
Hi!
I use the aws CLI tool, like this:
aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-versioning
--bucket yourbucket --versioning-configuration Status=Enabled
I also set a lifecycle configuration to expire older versions, e.g.:
aws --endpoint-url=http://your-rgw:7480 s3api
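The second command is cut off above; a sketch of what such a lifecycle rule can look like via s3api put-bucket-lifecycle-configuration (endpoint, bucket name, and the 60-day window are placeholders, echoing the 'expire-60days' rule quoted elsewhere in this archive):

    aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-lifecycle-configuration \
        --bucket yourbucket \
        --lifecycle-configuration '{
            "Rules": [{
                "ID": "expire-60days",
                "Status": "Enabled",
                "Prefix": "",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 60},
                "Expiration": {"ExpiredObjectDeleteMarker": true}
            }]
        }'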
Hi!
On 13.12.17 at 20:50, Graham Allan wrote:
After our Jewel to Luminous 12.2.2 upgrade, I ran into some of the same
issues reported earlier on the list under "rgw resharding operation
seemingly won't end".
Yes, those were/are my threads; I also have this issue.
I was able to correct the
Hi!
(By the way, now a second bucket has this problem, it apparently occurs
when the automatic resharding commences while data is being written to
the bucket).
On 12.12.17 at 09:53, Orit Wasserman wrote:
On Mon, Dec 11, 2017 at 11:45 AM, Martin Emrich
<martin.emr...@empolis.com>
nson" <robb...@gentoo.org>:
On Mon, Dec 11, 2017 at 09:29:11AM +, Martin Emrich wrote:
>
> Yes indeed. Running "radosgw-admin bi list" results in an incomplete
300MB JSON file, before it freezes.
That's a very good starting point to debug.
The bucket
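A sketch of dumping the raw index to a file for inspection (the bucket name is a placeholder):

    radosgw-admin bi list --bucket=mybucket > bi-list.json

Looking at the last complete entries before the listing freezes may hint at which index object or shard is damaged.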
Hi!
On 09.12.17, 00:19, "Robin H. Johnson" wrote:
If you use 'radosgw-admin bi list', you can get a listing of the raw bucket
index. I'll bet that the objects aren't being shown at the S3 layer
because something is wrong with them. But since they are in the
Hi!
This sounds like http://tracker.ceph.com/issues/20763 (or indeed
http://tracker.ceph.com/issues/20866).
It is still present in 12.2.2 (just tried it). My workaround is to exclude
radosgw from logrotate (remove "radosgw" from /etc/logrotate.d/ceph) so it is
not SIGHUPed, and to rotate the
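A sketch of that edit; the stock /etc/logrotate.d/ceph varies between versions, but the idea is to drop radosgw from the postrotate list of daemons that receive a SIGHUP:

    postrotate
        # "radosgw" removed from this list so it is no longer SIGHUPed
        killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd || true
    endscript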
nst...@gmail.com>
Date: Friday, 8 December 2017, 15:19
To: Martin Emrich <martin.emr...@empolis.com>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] How to remove a faulty bucket? [WAS:Re: Resharding
issues / How long does it take?]
First off, you can rename a bucke
s bucket?
Thanks
Martin
On 07.12.17, 16:05, "ceph-users on behalf of Martin Emrich"
<ceph-users-boun...@lists.ceph.com on behalf of martin.emr...@empolis.com> wrote:
Hi all!
Apparently, one of my buckets went wonko during automatic resharding, the
frontend applica
Hi all!
Apparently, one of my buckets went wonko during automatic resharding, the
frontend application only gets a timeout after 90s.
After an attempt to fix the index using "radosgw-admin bucket check --fix", I
tried to reshard it (6.3 GB of data in ca. 23 objects).
The resharding command
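For reference, a sketch of the two commands involved (bucket name and shard count are placeholders):

    radosgw-admin bucket check --fix --bucket=mybucket
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=128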
I see the same issue with ceph v12.1.4 as well. We are not using openstack
or keystone, and see these errors in the rgw log. RGW is not hanging though.
Thanks,
Nitin
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Martin
Emrich <martin.emr
Date: Wednesday, 16 August 2017, 16:31
To: Martin Emrich <martin.emr...@empolis.com>, "ceph-us...@ceph.com"
<ceph-us...@ceph.com>
Subject: Re: [ceph-users] Radosgw returns 404 Not Found
You need to fix your endpoint URLs. The line that shows this is:
2017-08-16 14:02:21.7259
Hi!
I have the following issue: While “radosgw bucket list” shows me my buckets, S3
API clients only get a “404 Not Found”. With debug level 20, I see the
following output of the radosgw service:
2017-08-16 14:02:21.725959 7fc7f5317700 20 rgw::auth::s3::LocalEngine granted
access
2017-08-16
I created an issue: http://tracker.ceph.com/issues/20763
Regards,
Martin
From: Vasu Kulkarni <vakul...@redhat.com>
Date: Monday, 24 July 2017, 19:26
To: Vaibhav Bhembre <vaib...@digitalocean.com>
Cc: Martin Emrich <martin.emr...@empolis.com>, "ceph-users@lists
Hi all!
I got quite far till now, and I documented my progress here:
https://github.com/MartinEmrich/kb/blob/master/ceph/Manual-Bluestore.md
Cheers,
Martin
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
Martin Emrich
Sent: Friday, 30 June 2017 17:32
To: ceph
You seem to use a whole disk, but I used a partition, which cannot (sanely) be
partitioned further.
But could you tell me which sizes you ended up with for sdd1 and sdd2?
Thanks,
Martin
From: Ashley Merrick [mailto:ash...@amerrick.co.uk]
Sent: Friday, 7 July 2017 09:08
To: Martin Emrich
http://tracker.ceph.com/issues/20540
One more question: What are the size requirements for the WAL and the DB?
Cheers,
Martin
From: Vasu Kulkarni [mailto:vakul...@redhat.com]
Sent: Thursday, 6 July 2017 20:45
To: Martin Emrich <martin.emr...@empolis.com>
Cc: Loris Cuoghi <loris.cuo...@artificiale.n
ore :(
Thanks,
Martin
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
Martin Emrich
Sent: Tuesday, 4 July 2017 22:02
To: Loris Cuoghi <loris.cuo...@artificiale.net>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How t
I found some open issues about this: http://tracker.ceph.com/issues/6042, and
http://tracker.ceph.com/issues/5461. Could this be related?
Cheers,
Martin
-----Original Message-----
From: Loris Cuoghi [mailto:loris.cuo...@artificiale.net]
Sent: Monday, 3 July 2017 15:48
To: Martin Emrich <
,
Martin
-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de]
Sent: Tuesday, 4 July 2017 14:42
To: Martin Emrich <martin.emr...@empolis.com>
Cc: Gregory Farnum <gfar...@redhat.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rados maximum objec
behaviour of jewel (allowing 50GB objects)?
The only option I found was "osd max write size" but that seems not to be the
right one, as its default of 90MB is lower than my observed 128MB.
Cheers,
Martin
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.
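One plausible candidate is osd_max_object_size, whose default was lowered to 128 MiB in Luminous, matching the observed cutoff; a sketch for checking it on an OSD via the admin socket (osd.0 is a placeholder):

    ceph daemon osd.0 config get osd_max_object_size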
19:59
To: Martin Emrich <martin.emr...@empolis.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rados maximum object size issue since Luminous?
On Mon, Jul 3, 2017 at 10:17 AM, Martin Emrich <martin.emr...@empolis.com>
wrote:
> Hi!
>
>
>
> Having to interrup
Hi!
Having to interrupt my bluestore test, I have another issue since upgrading
from Jewel to Luminous: My backup system (Bareos with RadosFile backend) can no
longer write Volumes (objects) larger than around 128MB.
(Of course, I did not test that on my test cluster prior to upgrading the
(I have different categories of OSD
hosts for different use cases, split by appropriate CRUSH rules).
Thanks
Martin
-----Original Message-----
From: Loris Cuoghi [mailto:loris.cuo...@artificiale.net]
Sent: Monday, 3 July 2017 13:39
To: Martin Emrich <martin.emr...@empolis.com>
(and I won’t risk messing with my cluster by attempting to
“retrofit” ceph-deploy to it)…
I’ll set up a single VM cluster to squeeze out the necessary commands and check
back…
Regards,
Martin
From: Vasu Kulkarni [mailto:vakul...@redhat.com]
Sent: Friday, 30 June 2017 17:58
To: Martin Emrich
Hi!
I'd like to set up new OSDs with bluestore: the real data ("block") on a
spinning disk, and DB+WAL on a SSD partition.
But I do not use ceph-deploy, and never used ceph-disk (I set up the filestore
OSDs manually).
Google tells me that ceph-disk does not (yet) support splitting the
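For comparison, a sketch of that split with the stock tooling (device names are placeholders; ceph-volume superseded ceph-disk in later releases):

    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/sdd1 --block.wal /dev/sdd2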
Since upgrading to 12.1, our Object Gateways hang after a few hours, I only see
these messages in the log file:
2017-06-29 07:52:20.877587 7fa8e01e5700 0 ERROR: keystone revocation
processing returned error r=-22
2017-06-29 08:07:20.877761 7fa8e01e5700 0 ERROR: keystone revocation
processing
] Radosgw versioning S3 compatible?
On 06/28/2017 12:52 PM, Yehuda Sadeh-Weinraub wrote:
> On Wed, Jun 28, 2017 at 8:13 AM, Martin Emrich
> <martin.emr...@empolis.com> wrote:
>> Correction: It’s about the version expiration, not the versioning itself.
>>
>> We send this
: {
ExpiredObjectDeleteMarker: true
},
ID: 'expire-60days'
}
]
Should that be supported?
Thanks
Martin
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
Martin Emrich
Sent: Wednesday, 28 June 2017 16:13
To: ceph-users@lists.ceph.com
Subject
Hi!
Is the Object Gateway S3 API supposed to be compatible with Amazon S3 regarding
versioning?
Object Versioning is listed as supported in Ceph 12.1, but using the standard
Node.js aws-sdk module (s3.putBucketVersioning()) results in "NotImplemented".
Thanks
Martin
Hi!
Our little dev ceph cluster (nothing fancy; 3x1 OSD with 100GB each, 3x monitor
with radosgw) takes over 20 minutes to delete ca. 44000 small objects (<1GB in
total).
Deletion is done by listing objects in blocks of 1000 and then deleting them in
one call for each block; each deletion of
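A sketch of that list-then-batch-delete pattern with the aws CLI (endpoint and bucket are placeholders; assumes object keys contain no whitespace):

    while true; do
        keys=$(aws --endpoint-url=http://your-rgw:7480 s3api list-objects-v2 \
            --bucket yourbucket --max-keys 1000 \
            --query 'Contents[].Key' --output text)
        [ -z "$keys" ] || [ "$keys" = "None" ] && break
        objects=$(printf '{"Key":"%s"},' $keys)
        aws --endpoint-url=http://your-rgw:7480 s3api delete-objects \
            --bucket yourbucket \
            --delete "{\"Objects\":[${objects%,}],\"Quiet\":true}"
    done

delete-objects accepts up to 1000 keys per call, which matches the block size described above.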