I have to admit that it's probably buried deep within the backlog [1].
Immediate-term alternative solutions for presenting an RBD-backed block
device that supports journaling are available via rbd-nbd (creates
/dev/nbdX devices) and via LIO's tcmu-runner with a loopback
target (creates /dev/sdX
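As a sketch of the rbd-nbd path mentioned above (pool and image names here are made up; assumes the rbd-nbd package is installed):

```shell
# Enable journaling on the image (it is not on by default), then map it
# through the userspace NBD bridge instead of the kernel krbd client:
rbd feature enable mypool/myimage journaling
rbd-nbd map mypool/myimage      # prints the device it created, e.g. /dev/nbd0

# ... use /dev/nbd0 like any other block device ...

rbd-nbd unmap /dev/nbd0
```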
The following is based on the discussion in:
http://tracker.ceph.com/issues/21388
--
There is a particular scenario which, if identified, can be repaired
manually. In this case the automatic repair rejects all copies because
none match the selected_object_info, thus setting
With “osd max scrubs” set to 1 in ceph.conf, which I believe is also
the default, there are 2-3 deep scrubs running at almost all times.
Three simultaneous deep scrubs are enough to cause a constant stream of:
mon.ceph1 [WRN] Health check update: 69 slow requests are blocked > 32
sec (REQUEST_SLOW)
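Worth noting: osd_max_scrubs is enforced per OSD, not cluster-wide, so a limit of 1 still permits one scrub per OSD and several across the cluster at once. A sketch for inspecting and adjusting it at runtime (luminous-era option names; PG state strings may vary by release):

```shell
# Ask one OSD what limit it is actually running with:
ceph daemon osd.0 config get osd_max_scrubs

# Change it at runtime on all OSDs without a restart:
ceph tell osd.* injectargs '--osd_max_scrubs 1'

# Count PGs currently deep-scrubbing across the cluster:
ceph pg dump | grep -c 'scrubbing+deep'
```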
Keep in mind you can also do prefix-based cephx caps. That was set up
so you can give a keyring access to specific RBD images (although you
can’t do live updates on what the client can access without making the
client reconnect).
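A sketch of what such prefix-restricted caps might look like (client name and image ID below are made up; the object-name prefixes depend on the image format):

```shell
# Restrict a client to the objects backing one RBD image by matching on
# object-name prefixes. Format-2 images store data in rbd_data.<id>.* and
# metadata in rbd_header.<id>; the id here (102f74b0dc51) is hypothetical.
ceph auth get-or-create client.guest \
    mon 'allow r' \
    osd 'allow rwx pool=rbd object_prefix rbd_data.102f74b0dc51; allow rwx pool=rbd object_prefix rbd_header.102f74b0dc51'
```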
On Tue, Sep 26, 2017 at 7:44 AM Jason Dillaman
Nick,
Thanks, I will look into the latest bareos version. They did mention
libradosstriper on GitHub.
There is another question. On jewel I have 25 GB objects. Once I
upgrade to luminous those objects will be "out of bounds".
1. Will the OSDs start, and will I be able to read them?
2. Will they
On Tue, Sep 26, 2017 at 9:36 AM, Yoann Moulin wrote:
>
>>> ok, I don't know where I read about the -o option to write the key, but the
>>> file was empty; I used ">" instead and it seems to work to list or create rbd now.
>>>
>>> and from what I have tested, the good syntax is « mon
On 09/26/2017 01:10 AM, Dietmar Rieder wrote:
thanks David,
that's confirming what I was assuming. Too bad that there is no
estimate/method to calculate the db partition size.
It's possible that we might be able to get ranges for certain kinds of
scenarios. Maybe if you do lots of small
On 2017-09-25 14:29, Ilya Dryomov wrote:
> On Sat, Sep 23, 2017 at 12:07 AM, Muminul Islam Russell
> wrote:
>
>> Hi Ilya,
>>
>> Hope you are doing great.
>> Sorry for bugging you. I did not find enough resources for my question. It
>> would really help me if you could
>> ok, I don't know where I read about the -o option to write the key, but the
>> file was empty; I used ">" instead and it seems to work to list or create rbd now.
>>
>> and from what I have tested, the good syntax is « mon 'profile rbd' osd
>> 'profile rbd pool=rbd' »
>>
>>> In the case we give access to
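For reference, the profile-based syntax quoted above might be applied like this (the client name is hypothetical):

```shell
# Create a key with the rbd profiles, scoped to one pool:
ceph auth get-or-create client.rbduser mon 'profile rbd' osd 'profile rbd pool=rbd'

# Or change caps on an existing key:
ceph auth caps client.rbduser mon 'profile rbd' osd 'profile rbd pool=rbd'
```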
You can update the server with the mapped rbd and shouldn't see as much as
a blip on your VMs.
On Tue, Sep 26, 2017, 3:32 AM Götz Reinicke
wrote:
> Hi Thanks David & David,
>
> we don’t use the fuse code. And maybe I was a bit unclear, but your
> feedback clears
Hello,
We successfully use RADOS to store backup volumes on the jewel version of Ceph.
Typical volume size is 25-50 GB. The backup software (bareos) uses RADOS objects
as backup volumes and it works fine. Recently we tried luminous for the
same purpose.
In luminous the developers reduced osd_max_object_size
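Luminous lowered the default osd_max_object_size to 128 MiB (from the much larger jewel default). A hedged sketch of raising it back for pre-existing large objects; the 50 GB value is an assumption sized to the volumes described above:

```shell
# Permanent setting, in ceph.conf on the OSD hosts:
#   [osd]
#   osd max object size = 53687091200    # 50 GB

# Or at runtime on all OSDs (luminous-era injectargs):
ceph tell osd.* injectargs '--osd_max_object_size 53687091200'
```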
On Tue, Sep 26, 2017 at 4:52 AM, Yoann Moulin wrote:
> Hello,
>
>> I am trying to give a client access to an rbd on a fresh Luminous cluster
>>
>> http://docs.ceph.com/docs/luminous/rados/operations/user-management/
>>
>> first of all, I'd like to know the exact syntax for auth
Hi
No cause for concern:
https://github.com/ceph/ceph/pull/17348/commits/2b5f84586ec4d20ebb5aacd6f3c71776c621bf3b
2017-09-26 11:23 GMT+03:00 Stefan Kooman :
> Hi,
>
> I noticed the ceph version still gives "rc" although we are using the
> latest Ceph packages: 12.2.0-1xenial
>
Hello,
> I am trying to give a client access to an rbd on a fresh Luminous cluster
>
> http://docs.ceph.com/docs/luminous/rados/operations/user-management/
>
> first of all, I'd like to know the exact syntax for auth caps
>
> the result of "ceph auth ls" gives this:
>
>> osd.9
>> key:
Hi,
I noticed the ceph version still gives "rc" although we are using the
latest Ceph packages: 12.2.0-1xenial
(https://download.ceph.com/debian-luminous xenial/main amd64 Packages):
ceph daemon mon.mon5 version
{"version":"12.2.0","release":"luminous","release_type":"rc"}
Why is this important
Hello,
I am trying to give a client access to an rbd on a fresh Luminous cluster
http://docs.ceph.com/docs/luminous/rados/operations/user-management/
first of all, I'd like to know the exact syntax for auth caps
the result of "ceph auth ls" gives this:
> osd.9
> key:
Hi, thanks David & David,
we don’t use the fuse code. And maybe I was a bit unclear, but your feedback
clears some other aspects in that context.
I already did an update on our OSDs/MONs while an NFS fileserver still had an rbd
mapped and was exporting files (virtual disks for a XEN server)
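Not stated in the thread, but the usual precaution for that kind of rolling update with clients still connected is to set the noout flag while daemons restart, so CRUSH does not start rebalancing in the meantime:

```shell
ceph osd set noout
# ... upgrade and restart OSDs/MONs one host at a time ...
ceph osd unset noout
```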
thanks David,
that's confirming what I was assuming. Too bad that there is no
estimate/method to calculate the db partition size.
Dietmar
On 09/25/2017 05:10 PM, David Turner wrote:
> db/wal partitions are per OSD. DB partitions need to be made as big as
> you need them. If they run out of
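Since the DB partition has to be sized up front per OSD, a sketch of pinning it at creation time with ceph-volume (luminous; the device paths are examples):

```shell
# Create a BlueStore OSD whose RocksDB metadata lives on a pre-made
# NVMe partition; size that partition as large as you expect to need,
# since metadata that no longer fits spills onto the slower data device:
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```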