Re: [ceph-users] RBD features(kernel client) with kernel version

2017-09-26 Thread Jason Dillaman
I have to admit that it's probably buried deep within the backlog [1]. Immediate-term alternative solutions for presenting an RBD-backed block device that supports journaling are available via rbd-nbd (creates /dev/nbdX devices) and also via LIO's tcmu-runner with a loopback target (creates /dev/sdX
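
A minimal sketch of the rbd-nbd route, assuming a pool "rbd" and an image "myimage" (both placeholders):

    # map the image through the userspace NBD client (supports all image features)
    $ rbd-nbd map rbd/myimage
    /dev/nbd0
    # unmap when done
    $ rbd-nbd unmap /dev/nbd0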

Re: [ceph-users] inconsistent pg will not repair

2017-09-26 Thread David Zafman
The following is based on the discussion in: http://tracker.ceph.com/issues/21388 -- There is a particular scenario which, if identified, can be repaired manually. In this case the automatic repair rejects all copies because none match the selected_object_info, thus setting
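
To see which copies disagree before attempting a manual repair, something along these lines should work (the PG id 2.5 is a placeholder):

    # list the inconsistent objects and per-shard errors for the PG
    $ rados list-inconsistent-obj 2.5 --format=json-pretty
    # once the bad copy has been dealt with, trigger a repair
    $ ceph pg repair 2.5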

[ceph-users] osd max scrubs not honored?

2017-09-26 Thread J David
With “osd max scrubs” set to 1 in ceph.conf (which I believe is also the default), there are 2-3 deep scrubs running at almost all times. Three simultaneous deep scrubs are enough to cause a constant stream of: mon.ceph1 [WRN] Health check update: 69 slow requests are blocked > 32 sec (REQUEST_SLOW)
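
Worth noting that "osd max scrubs" is a per-OSD limit, so several scrubs running at once across different OSDs is expected. To check what a daemon is actually running with (osd.0 is a placeholder):

    # value the running daemon is using
    $ ceph daemon osd.0 config get osd_max_scrubs
    # push a new value to all OSDs at runtime
    $ ceph tell osd.* injectargs '--osd_max_scrubs 1'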

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Gregory Farnum
Keep in mind you can also do prefix-based cephx caps. That was set up so you can give a keyring access to specific RBD images (although you can’t do live updates on what the client can access without making it reconnect). On Tue, Sep 26, 2017 at 7:44 AM Jason Dillaman
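
A sketch of such prefix-restricted caps, assuming a pool "rbd", an image "myimage", and the image's block-name prefix as reported by "rbd info" (all placeholders):

    # limit the client to the objects backing one image
    $ ceph auth get-or-create client.imguser mon 'allow r' \
        osd 'allow rwx pool=rbd object_prefix rbd_data.101774b0dc51, allow rwx pool=rbd object_prefix rbd_header.101774b0dc51, allow rx pool=rbd object_prefix rbd_id.myimage'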

Re: [ceph-users] osd crashes with large object size (>10GB) in luminous Rados

2017-09-26 Thread Alexander Kushnirenko
Nick, Thanks, I will look into the latest bareos version. They did mention libradosstriper on github. There is another question: on jewel I have 25GB objects. Once I upgrade to luminous, those objects will be "out of bounds". 1. Will the OSDs start, and will I be able to read them? 2. Will they
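
One way to test whether an existing oversized object is still readable after the upgrade (pool and object names are placeholders):

    # stat reports the object's size without reading the data
    $ rados -p backup stat backup-volume-0001
    # attempt to read the object back out
    $ rados -p backup get backup-volume-0001 /tmp/backup-volume-0001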

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Jason Dillaman
On Tue, Sep 26, 2017 at 9:36 AM, Yoann Moulin wrote: > >>> ok, I don't know where I read about the -o option to write the key, but the file >>> was empty; I used a ">" instead and listing or creating rbd images seems to work now. >>> >>> and from what I have tested, the correct syntax is « mon
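
For the record, a sketch of exporting the key with -o, which should produce a populated keyring file in current releases (client name and path are placeholders):

    # export an existing user's keyring to a file
    $ ceph auth get client.rbduser -o /etc/ceph/ceph.client.rbduser.keyring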

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-26 Thread Mark Nelson
On 09/26/2017 01:10 AM, Dietmar Rieder wrote: thanks David, that's confirming what I was assuming. Too bad that there is no estimate/method to calculate the db partition size. It's possible that we might be able to get ranges for certain kinds of scenarios. Maybe if you do lots of small

Re: [ceph-users] RBD features(kernel client) with kernel version

2017-09-26 Thread Maged Mokhtar
On 2017-09-25 14:29, Ilya Dryomov wrote: > On Sat, Sep 23, 2017 at 12:07 AM, Muminul Islam Russell > wrote: > >> Hi Ilya, >> >> Hope you are doing great. >> Sorry for bugging you. I did not find enough resources for my question. It >> would really help if you could
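
For context on kernel-client feature support: an image can be created with only the layering feature, or unsupported features can be stripped afterwards; a sketch with placeholder pool/image names:

    # create an image old kernels can map
    $ rbd create rbd/kimage --size 10G --image-feature layering
    # or disable the features a given kernel lacks
    $ rbd feature disable rbd/kimage object-map fast-diff deep-flatten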

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
>> ok, I don't know where I read about the -o option to write the key, but the file >> was empty; I used a ">" instead and listing or creating rbd images seems to work now. >> >> and from what I have tested, the correct syntax is « mon 'profile rbd' osd >> 'profile rbd pool=rbd' » >> >>> In the case we give access to
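
Putting the confirmed form together, with client name and pool as placeholders:

    # create a user limited to RBD operations in the "rbd" pool
    $ ceph auth get-or-create client.rbduser mon 'profile rbd' osd 'profile rbd pool=rbd'
    # quick smoke test with that identity
    $ rbd ls --id rbduser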

Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-26 Thread David Turner
You can update the server with the mapped rbd and shouldn't see as much as a blip on your VMs. On Tue, Sep 26, 2017, 3:32 AM Götz Reinicke wrote: > Hi Thanks David & David, > > we don’t use the fuse code. And maybe I was a bit unclear, but your > feedback clears

[ceph-users] osd crashes with large object size (>10GB) in luminous Rados

2017-09-26 Thread Alexander Kushnirenko
Hello, We successfully use rados to store backup volumes in the jewel version of Ceph. Typical volume size is 25-50GB. Our backup software (bareos) uses Rados objects as backup volumes and it works fine. Recently we tried luminous for the same purpose. In luminous, the developers reduced osd_max_object_size
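
The limit in question is osd_max_object_size, which luminous lowered to 128MB by default. If larger objects are genuinely required it can be raised (value in bytes, shown here as 50GB), though striping via libradosstriper is the more usual answer:

    # ceph.conf on the OSD hosts
    [osd]
    osd_max_object_size = 53687091200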

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Jason Dillaman
On Tue, Sep 26, 2017 at 4:52 AM, Yoann Moulin wrote: > Hello, > >> I'm trying to give a client access to an rbd on a fresh Luminous cluster >> >> http://docs.ceph.com/docs/luminous/rados/operations/user-management/ >> >> first of all, I'd like to know the exact syntax for auth

Re: [ceph-users] Ceph Luminous release_type "rc"

2017-09-26 Thread Irek Fasikhov
Hi, no cause for concern: https://github.com/ceph/ceph/pull/17348/commits/2b5f84586ec4d20ebb5aacd6f3c71776c621bf3b 2017-09-26 11:23 GMT+03:00 Stefan Kooman : > Hi, > > I noticed the ceph version still gives "rc" although we are using the > latest Ceph packages: 12.2.0-1xenial >

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
Hello, > I'm trying to give a client access to an rbd on a fresh Luminous cluster > > http://docs.ceph.com/docs/luminous/rados/operations/user-management/ > > first of all, I'd like to know the exact syntax for auth caps > > the result of "ceph auth ls" gives this : > >> osd.9 >> key:

[ceph-users] Ceph Luminous release_type "rc"

2017-09-26 Thread Stefan Kooman
Hi, I noticed the ceph version still gives "rc" although we are using the latest Ceph packages: 12.2.0-1xenial (https://download.ceph.com/debian-luminous xenial/main amd64 Packages): ceph daemon mon.mon5 version {"version":"12.2.0","release":"luminous","release_type":"rc"} Why is this important
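
A quick way to see what every running daemon reports, assuming the cluster-wide summary command added in luminous:

    # summarize daemon versions across the whole cluster
    $ ceph versions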

[ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
Hello, I'm trying to give a client access to an rbd on a fresh Luminous cluster http://docs.ceph.com/docs/luminous/rados/operations/user-management/ first of all, I'd like to know the exact syntax for auth caps the result of "ceph auth ls" gives this : > osd.9 > key:

Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-26 Thread Götz Reinicke
Hi Thanks David & David, we don’t use the fuse code. And maybe I was a bit unclear, but your feedback clears up some other aspects in that context. I did an update already on our OSD/MONs while an NFS fileserver still had an rbd connected and was exporting files (virtual disks for XEN server)

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-26 Thread Dietmar Rieder
thanks David, that's confirming what I was assuming. Too bad that there is no estimate/method to calculate the db partition size. Dietmar On 09/25/2017 05:10 PM, David Turner wrote: > db/wal partitions are per OSD.  DB partitions need to be made as big as > you need them.  If they run out of
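
For reference, a sketch of creating an OSD with explicit DB and WAL devices via ceph-volume (device paths are placeholders; if the DB partition fills up, metadata spills over onto the slower data device):

    # bluestore OSD: data on an HDD, DB and WAL on NVMe partitions
    $ ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2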