Re: [ceph-users] Bareos and libradosstriper works only for 4M stripe_unit size

2017-09-29 Thread Gregory Farnum
I haven't used the striper, but it appears to make you specify sizes, stripe units, and stripe counts. I would expect you need to make sure that the size is an integer multiple of the stripe unit. And it probably defaults to a 4MB object if you don't specify one? On Fri, Sep 29, 2017 at 2:09 AM
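To make the arithmetic concrete, here is a minimal sketch against the libradosstriper C API — the pool/ioctx setup is assumed, the sizes are illustrative, and the "multiple of the stripe unit" check mirrors Greg's hypothesis above rather than documented behavior:

    #include <errno.h>
    #include <rados/librados.h>
    #include <radosstriper/libradosstriper.h>

    /* Configure a striper whose object_size is an integer multiple of
     * stripe_unit, per the constraint suggested above. */
    int make_striper(rados_ioctx_t ioctx, rados_striper_t *striper)
    {
        int r = rados_striper_create(ioctx, striper);
        if (r < 0)
            return r;

        unsigned int stripe_unit  = 1U << 22;        /* 4 MB, the apparent default */
        unsigned int stripe_count = 4;               /* stripe across 4 objects    */
        unsigned int object_size  = 4 * stripe_unit; /* a whole multiple of it     */

        if (object_size % stripe_unit != 0)          /* guard the invariant        */
            return -EINVAL;

        r = rados_striper_set_object_layout_stripe_unit(*striper, stripe_unit);
        if (r == 0)
            r = rados_striper_set_object_layout_stripe_count(*striper, stripe_count);
        if (r == 0)
            r = rados_striper_set_object_layout_object_size(*striper, object_size);
        return r;
    }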

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Matthew Stroud
Yeah, I don’t have access to the hypervisors, nor the vms on said hypervisors. Having some sort of ceph-top would be awesome, I wish they would implement that. Thanks, Matthew Stroud On 9/29/17, 11:49 AM, "Jason Dillaman" wrote: There is a feature in the backlog for a

Re: [ceph-users] Large amount of files - cephfs?

2017-09-29 Thread Josef Zelenka
Hi everyone, thanks for the advice, we consulted it and we're gonna test it out with cephfs first. Object storage is a possibility if it misbehaves. Hopefully it will go well :) On 28/09/17 08:20, Henrik Korkuc wrote: On 17-09-27 14:57, Josef Zelenka wrote: Hi, we are currently working

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Jason Dillaman
There is a feature in the backlog for an "rbd top"-like utility which could provide a probabilistic view of the top X% of RBD image stats against the cluster. The data collection would be done by each OSD individually, which is why it would be probabilistic stats instead of absolute. It also would

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Matthew Stroud
Yeah, that is the core problem. I have been working with the teams that manage those. However, it appears there isn’t a way I can check on my side. From: David Turner Date: Friday, September 29, 2017 at 11:08 AM To: Maged Mokhtar , Matthew

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread David Turner
His dilemma sounded like he has access to the cluster, but not any of the clients where the RBDs are used or even the hypervisors in charge of those. On Fri, Sep 29, 2017 at 12:03 PM Maged Mokhtar wrote: > On 2017-09-29 17:13, Matthew Stroud wrote: > > Is there a way I

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 17:13, Matthew Stroud wrote: > Is there a way I could get performance stats for rbd images? I'm looking > for iops and throughput. > > The issue we are dealing with is that there was a sudden jump in throughput > and I want to be able to find out which rbd volume might be

Re: [ceph-users] Get rbd performance stats

2017-09-29 Thread David Turner
There is no tool on the Ceph side to see which RBDs are doing what. Generally you need to monitor the mount points for the RBDs to track that down with iostat or something. That said, there are some tricky things you could probably do to track down the RBD that is doing a bunch of stuff (as long
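If the RBDs are kernel-mapped somewhere you do control, a rough stand-in for iostat is to read /proc/diskstats yourself — a sketch, matching any device named rbd* (librbd-attached VM disks won't show up here):

    #include <stdio.h>
    #include <string.h>

    /* Print cumulative read/write counters for kernel-mapped rbd devices.
     * Sectors are 512 bytes, so sectors/2 gives KB. */
    int main(void)
    {
        FILE *f = fopen("/proc/diskstats", "r");
        if (!f) { perror("diskstats"); return 1; }

        unsigned int major, minor;
        char name[64];
        unsigned long rd_ios, rd_merges, rd_sectors, rd_ticks;
        unsigned long wr_ios, wr_merges, wr_sectors, wr_ticks;

        while (fscanf(f, "%u %u %63s %lu %lu %lu %lu %lu %lu %lu %lu %*[^\n]",
                      &major, &minor, name,
                      &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
                      &wr_ios, &wr_merges, &wr_sectors, &wr_ticks) == 11) {
            if (strncmp(name, "rbd", 3) == 0)
                printf("%s: %lu reads, %lu KB read, %lu writes, %lu KB written\n",
                       name, rd_ios, rd_sectors / 2, wr_ios, wr_sectors / 2);
        }
        fclose(f);
        return 0;
    }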

Re: [ceph-users] New OSD missing from part of osd crush tree

2017-09-29 Thread Sean Purdy
On Thu, 10 Aug 2017, John Spray said: > On Thu, Aug 10, 2017 at 4:31 PM, Sean Purdy wrote: > > Luminous 12.1.1 rc and 12.2.1 stable > > We added a new disk and did: > > That worked, created osd.18, OSD has data. > > > > However, mgr output at

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread David Turner
The reason it is recommended not to raid your disks is to give them all to Ceph. When a disk fails, Ceph can generally recover faster than the raid can. The biggest problem with raid is that you need to replace the disk and rebuild the raid asap. When a disk fails in Ceph, the cluster just

Re: [ceph-users] Ceph OSD get blocked and start to make inconsistent pg from time to time

2017-09-29 Thread David Turner
I'm going to assume you're dealing with your scrub errors and have a game plan for those, as you didn't mention them in your question at all. One thing I'm always leery of when I see blocked requests happening is that the PGs might be splitting subfolders. It is pretty much a guarantee if you're

[ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Hauke Homburg
Hello, I think the Ceph users don't recommend running a Ceph OSD on hardware RAID, but I haven't found a technical explanation for this. Can anybody give me one? Thanks for your help Regards Hauke -- www.w3-creative.de www.westchat.de

[ceph-users] Get rbd performance stats

2017-09-29 Thread Matthew Stroud
Is there a way I could get performance stats for rbd images? I’m looking for iops and throughput. The issue we are dealing with is that there was a sudden jump in throughput and I want to be able to find out which rbd volume might be causing it. I just manage the ceph cluster, not the

Re: [ceph-users] osd max scrubs not honored?

2017-09-29 Thread David Turner
If you're scheduling them appropriately so that no deep scrubs will happen on their own, then you can just check the cluster status if any PGs are deep scrubbing at all. If you're only scheduling them for specific pools, then you can confirm which PGs are being deep scrubbed in a specific pool

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
>> In cases like this you also want to set RADOS namespaces for each tenant’s >> directory in the CephFS layout and give them OSD access to only that >> namespace. That will prevent malicious users from tampering with the raw >> RADOS objects of other users. > > You mean by doing something

[ceph-users] zone, zonegroup and resharding bucket on luminous

2017-09-29 Thread Yoann Moulin
Hello, I'm doing some tests on the radosgw on luminous (12.2.1), I have a few questions. In the documentation[1], there is a reference to "radosgw-admin region get" but it seems not to be available anymore. It should be "radosgw-admin zonegroup get" I guess. 1.

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Gregory Farnum
On Fri, Sep 29, 2017 at 7:34 AM Yoann Moulin wrote: > Hi, > > > The kernel on clients is 4.4.0-93 and on ceph nodes is 4.4.0-96 > > > > What exactly is an older kernel client? Is 4.4 old? > > > > See > > >

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
Hi, > The kernel on clients is 4.4.0-93 and on ceph nodes is 4.4.0-96 > > What exactly is an older kernel client? Is 4.4 old? > > See > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version > > If you're on Ubuntu Xenial I would advise using

[ceph-users] Objecter and librados logs on rbd image operations

2017-09-29 Thread Chamarthy, Mahati
Hi - I'm trying to get logs of Objecter and librados while doing operations (read/write) on an rbd image. Here is my ceph.conf:

    [global]
    debug_objecter = 20
    debug_rados = 20

    [client]
    rbd_cache = false
    log file = /self/ceph/ceph-rbd.log
    debug rbd = 20
    debug

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Gregory Farnum
In cases like this you also want to set RADOS namespaces for each tenant’s directory in the CephFS layout and give them OSD access to only that namespace. That will prevent malicious users from tampering with the raw RADOS objects of other users. -Greg On Fri, Sep 29, 2017 at 4:33 AM Yoann Moulin
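For the RADOS side of what Greg describes, a minimal sketch with the librados C API — pool, client, and namespace names are made up here, and the enforcement comes from a cap of the form osd 'allow rw pool=fs_data namespace=tenant-a' (on the CephFS side the directory layout would point at the namespace via the ceph.dir.layout.pool_namespace vxattr):

    #include <rados/librados.h>
    #include <stdio.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;

        /* Connect as client.tenant-a; assumes ceph.conf and a keyring
         * that holds this client's key. */
        rados_create(&cluster, "tenant-a");
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (rados_connect(cluster) < 0)
            return 1;
        if (rados_ioctx_create(cluster, "fs_data", &io) < 0)
            return 1;

        /* Every op through this ioctx now targets the tenant's namespace. */
        rados_ioctx_set_namespace(io, "tenant-a");
        int r = rados_write_full(io, "hello", "data", 4);
        printf("write in own namespace: %d\n", r);   /* 0 if the cap allows it */

        /* Outside the granted namespace the OSD should reject the op. */
        rados_ioctx_set_namespace(io, "tenant-b");
        r = rados_write_full(io, "hello", "data", 4);
        printf("write in other namespace: %d\n", r); /* expect a permission error */

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }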

Re: [ceph-users] rados_read versus rados_aio_read performance

2017-09-29 Thread Gregory Farnum
It sounds like you are doing synchronous reads of small objects here. In that case you are dominated by the per-op latency rather than the throughput of your cluster. Using aio or multiple threads will let you parallelize requests. -Greg On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko
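The shape of that change, sketched with the librados C aio calls — object names, sizes, and the in-flight window are illustrative, and a connected ioctx is assumed:

    #include <rados/librados.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define INFLIGHT 16
    #define OBJ_SIZE (4 * 1024 * 1024)

    /* Issue a window of reads before reaping any of them, so one round-trip
     * latency is paid across INFLIGHT objects instead of per object. */
    int read_batch(rados_ioctx_t io)
    {
        rados_completion_t comps[INFLIGHT];
        char *bufs[INFLIGHT];
        char oid[32];

        for (int i = 0; i < INFLIGHT; i++) {
            bufs[i] = malloc(OBJ_SIZE);
            rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
            snprintf(oid, sizeof(oid), "backup-chunk-%d", i);
            rados_aio_read(io, oid, comps[i], bufs[i], OBJ_SIZE, 0);
        }

        for (int i = 0; i < INFLIGHT; i++) {
            rados_aio_wait_for_complete(comps[i]);
            int r = rados_aio_get_return_value(comps[i]); /* bytes read or -errno */
            printf("chunk %d: %d\n", i, r);
            rados_aio_release(comps[i]);
            free(bufs[i]);
        }
        return 0;
    }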

Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-29 Thread Ronny Aasen
"apt-cache policy" shows you the different versions that are possible to install, and the prioritized order they have. the highest version will normally be installed unless priorities are changed. example: apt-cache policy ceph ceph:   Installed: 12.2.1-1~bpo90+1   Candidate: 12.2.1-1~bpo90+1  

Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-29 Thread Kashif Mumtaz
Dear Stefan, Thanks for your help. You are right. I was missing "apt update" after adding the repo. After doing apt update I am able to install luminous:

    cadmin@admin:~/my-cluster$ ceph --version
    ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

I am not much in

Re: [ceph-users] ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)

2017-09-29 Thread Brad Hubbard
On Fri, Sep 29, 2017 at 8:58 PM, Matthew Vernon wrote: > Hi, > > On 29/09/17 01:00, Brad Hubbard wrote: >> This looks similar to >> https://bugzilla.redhat.com/show_bug.cgi?id=1458007 or one of the >> bugs/trackers attached to that. > > Yes, although increasing the timeout

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
The kernel on clients is 4.4.0-93 and on ceph nodes is 4.4.0-96 What exactly is an older kernel client? Is 4.4 old? >>> >>> See >>> http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version >>> >>> If you're on Ubuntu Xenial I would advise using >>>

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 11:31, Maged Mokhtar wrote: > On 2017-09-29 10:44, Adrian Saul wrote: > > Do you mean that after you delete and remove the crush and auth entries for > the OSD, when you go to create another OSD later it will re-use the previous > OSD ID that you have destroyed in the past? > >

Re: [ceph-users] ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)

2017-09-29 Thread Matthew Vernon
Hi, On 29/09/17 01:00, Brad Hubbard wrote: > This looks similar to > https://bugzilla.redhat.com/show_bug.cgi?id=1458007 or one of the > bugs/trackers attached to that. Yes, although increasing the timeout still leaves the issue that if the timeout fires you don't get anything resembling a

Re: [ceph-users] RGW how to delete orphans

2017-09-29 Thread Andreas Calminder
Ok, thanks! So I'll wait a few days for the command to complete and see what kind of output it produces then. Regards, Andreas On 29 Sep 2017 12:32 a.m., "Christian Wuerdig" wrote: > I'm pretty sure the orphan find command does exactly that - > finding

[ceph-users] rados_read versus rados_aio_read performance

2017-09-29 Thread Alexander Kushnirenko
Hello, We see very poor performance when reading/writing rados objects. The speed is only 3-4MB/sec, compared to 95MB/s in rados benchmarking. When you look at the underlying code, it uses the librados and libradosstriper libraries (both have poor performance) and the code uses rados_read and rados_write

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Stefan Kooman
Quoting Yoann Moulin (yoann.mou...@epfl.ch): > > >> The kernel on clients is 4.4.0-93 and on ceph nodes is 4.4.0-96 > >> > >> What exactly is an older kernel client? Is 4.4 old? > > > > See > > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version > > > > If you're on

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
>> The kernel on clients is 4.4.0-93 and on ceph nodes is 4.4.0-96 >> >> What exactly is an older kernel client? Is 4.4 old? > > See > http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version > > If you're on Ubuntu Xenial I would advise using > "linux-generic-hwe-16.04".

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 10:44, Adrian Saul wrote: > Do you mean that after you delete and remove the crush and auth entries for > the OSD, when you go to create another OSD later it will re-use the previous > OSD ID that you have destroyed in the past? > > Because I have seen that behaviour as well -

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Luis Periquito
On Fri, Sep 29, 2017 at 9:44 AM, Adrian Saul wrote: > > Do you mean that after you delete and remove the crush and auth entries for > the OSD, when you go to create another OSD later it will re-use the previous > OSD ID that you have destroyed in the past? > The

[ceph-users] Ceph OSD get blocked and start to make inconsistent pg from time to time

2017-09-29 Thread Gonzalo Aguilar Delgado
Hi, I discovered that my cluster starts to make slow requests and all disk activity gets blocked. This happens once a day, and the ceph OSD goes to 100% CPU. In ceph health I get something like: 2017-09-29 10:49:01.227257 [INF] pgmap v67494428: 764 pgs: 1

Re: [ceph-users] osd max scrubs not honored?

2017-09-29 Thread Stefan Kooman
Quoting Christian Balzer (ch...@gol.com): > > On Thu, 28 Sep 2017 22:36:22 + Gregory Farnum wrote: > > > Also, realize the deep scrub interval is a per-PG thing and (unfortunately) > > the OSD doesn't use a global view of its PG deep scrub ages to try and > > schedule them intelligently

[ceph-users] Bareos and libradosstriper works only for 4M stripe_unit size

2017-09-29 Thread Alexander Kushnirenko
Hi, I'm trying to use CEPH-12.2.0 as storage for Bareos-16.2.4 backups, with libradosstriper1 support. Libradosstriper was suggested on this list to solve the problem that current CEPH-12 discourages users from using objects with very big sizes (>128MB). Bareos treats a Rados object as a Volume
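For context, the write path through the striper looks roughly like this — a sketch only: volume names and chunk sizes are made up, and the striper handle is assumed to be configured as in the layout sketch earlier in this digest. Each logical volume then lands as many small backing RADOS objects (named with a 16-hex-digit suffix, if I read the striper's naming scheme right), none of which exceeds the configured object_size:

    #include <stddef.h>
    #include <stdint.h>
    #include <radosstriper/libradosstriper.h>

    /* Stream one Bareos-style "volume" sequentially through the striper.
     * The striper shards each (offset, len) range over the backing
     * objects, keeping every RADOS object under object_size. */
    int write_volume(rados_striper_t striper, const char *volume_name,
                     const char *data, size_t total)
    {
        const size_t chunk = 64 * 1024;  /* illustrative sequential chunk */
        for (size_t off = 0; off < total; off += chunk) {
            size_t n = (total - off < chunk) ? total - off : chunk;
            int r = rados_striper_write(striper, volume_name,
                                        data + off, n, off);
            if (r < 0)
                return r;  /* e.g. -EINVAL if the layout was rejected */
        }
        return 0;
    }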

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Stefan Kooman
Quoting Yoann Moulin (yoann.mou...@epfl.ch): > > The kernel on clients is 4.4.0-93 and on ceph nodes is 4.4.0-96 > > What exactly is an older kernel client? Is 4.4 old? See http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version If you're on Ubuntu Xenial I would advise

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
We are working on a POC with containers (kubernetes) and cephfs (for permanent storage). The main idea is to give a user access to a subdirectory of the cephfs but be sure he won't be able to access the rest of the storage. Given how k8s works, the user will have

Re: [ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Adrian Saul
Do you mean that after you delete and remove the crush and auth entries for the OSD, when you go to create another OSD later it will re-use the previous OSD ID that you have destroyed in the past? Because I have seen that behaviour as well - but only for previously allocated OSD IDs that

[ceph-users] osd create returns duplicate ID's

2017-09-29 Thread Luis Periquito
Hi all, I use puppet to deploy and manage my clusters. Recently, as I have been removing old hardware and adding new, I've noticed that sometimes "ceph osd create" returns repeated IDs. Usually it's on the same server, but yesterday I saw it on different servers. I was

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
>> We are working on a POC with containers (kubernetes) and cephfs (for >> permanent storage). >> >> The main idea is to give a user access to a subdirectory of the >> cephfs but be sure he won't be able to access the rest of the >> storage. Given how k8s works, the user will have access to

Re: [ceph-users] Cephfs : security questions?

2017-09-29 Thread Marc Roos
Maybe this will get you started with the permissions for only this fs path /smb:

    sudo ceph auth get-or-create client.cephfs.smb mon 'allow r' mds 'allow r, allow rw path=/smb' osd 'allow rwx pool=fs_meta,allow rwx pool=fs_data'

-Original Message- From: Yoann Moulin

[ceph-users] Cephfs : security questions?

2017-09-29 Thread Yoann Moulin
Hello, We are working on a POC with containers (kubernetes) and cephfs (for permanent storage). The main idea is to give a user access to a subdirectory of the cephfs but be sure he won't be able to access the rest of the storage. Given how k8s works, the user will have access to the yml file

[ceph-users] OpenStack Sydney Forum - Ceph BoF proposal

2017-09-29 Thread Blair Bethwaite
Hi all, I just submitted an OpenStack Forum proposal for a Ceph BoF session at OpenStack Sydney. If you're interested in seeing this happen then please hit up http://forumtopics.openstack.org/cfp/details/46 with your comments / +1's. -- Cheers, ~Blairo