Re: [ceph-users] Preventing pool from allocating PG to OSD not belonging to the device class defined in crush rule

2018-07-26 Thread Benoit Hudzia
Sorry, missing the pg dump: 2.1  0 0 0 0 0 0 0 0  stale+peering  2018-07-26 19:38:13.381673  0'0  125:9  [3]  3  [3]  3  0'0  2018-07-26 15:20:08.965357  0'0  2018-07-26 15:20:08.965357  0    2.0  0

Re: [ceph-users] Preventing pool from allocating PG to OSD not belonging to the device class defined in crush rule

2018-07-26 Thread Benoit Hudzia
You are correct, the PGs are stale (not allocated). [root@stratonode1 /]# ceph status cluster: id: ea0df043-7b25-4447-a43d-e9b2af8fe069 health: HEALTH_WARN Reduced data availability: 256 pgs inactive, 256 pgs peering, 256 pgs stale services: mon: 3 daemons, quorum
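
A few standard commands for pinning down where such stuck PGs map; the PG id 2.1 is taken from the dump above, but this is a generic diagnostic sketch rather than the poster's exact session:

  # list PGs stuck in a given state
  ceph pg dump_stuck stale
  ceph pg dump_stuck inactive
  # show the up/acting OSD sets for one PG
  ceph pg map 2.1
  # full peering detail for that PG
  ceph pg 2.1 query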

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-26 Thread Daniel Carrasco
Hello, just to report: it looks like changing the message type to simple helps to avoid the memory leak. About a day later the memory is still OK: 1264 ceph 20 0 12,547g 1,247g 16652 S 3,3 8,2 110:16.93 ceph-mds. The memory usage is more than 2x the MDS limit (512 MB), but maybe it is the
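
For reference, a minimal ceph.conf sketch of the two settings mentioned in this thread; the values are assumptions matching the 512 MB figure quoted above, not the poster's actual config:

  # add to /etc/ceph/ceph.conf, then restart the daemons:
  [global]
  # messenger implementation discussed above (async is the default)
  ms_type = simple

  [mds]
  # cache limit in bytes; 512 MB matches the figure quoted above
  mds_cache_memory_limit = 536870912

Note that mds_cache_memory_limit bounds the MDS cache, not the daemon's total resident memory, so an RSS somewhat above the limit is expected.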

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-26 Thread Alex Gorbachev
On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote: > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev > wrote: >> >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev >> wrote: >> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman >> > wrote: >> >> >> >> >> >> On Wed, Jul 25, 2018 at 5:41 PM

Re: [ceph-users] Preventing pool from allocating PG to OSD not belonging to the device class defined in crush rule

2018-07-26 Thread John Spray
On Thu, Jul 26, 2018 at 4:57 PM Benoit Hudzia wrote: > Hi, > > We currently segregate ceph pool PG allocation using the crush device > class ruleset as described: > https://ceph.com/community/new-luminous-crush-device-classes/ > simply using the following command to define the rule: ceph osd

[ceph-users] Preventing pool from allocating PG to OSD not belonging to the device class defined in crush rule

2018-07-26 Thread Benoit Hudzia
Hi, we currently segregate Ceph pool PG allocation using the CRUSH device class ruleset as described at https://ceph.com/community/new-luminous-crush-device-classes/, simply using the following command to define the rule: ceph osd crush rule create-replicated default host However, we noticed
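
A minimal sketch of the device-class workflow being described; the rule, pool and class names below are illustrative, not taken from the message:

  # list the device classes CRUSH knows about
  ceph osd crush class ls
  # replicated rule restricted to one class, host failure domain
  ceph osd crush rule create-replicated ssd_rule default host ssd
  # point a pool at that rule so its PGs only map to matching OSDs
  ceph osd pool set mypool crush_rule ssd_rule
  # inspect what the rule actually selects
  ceph osd crush rule dump ssd_rule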

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-26 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev wrote: > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > wrote: > > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote: > >> > >> > >> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev > >> wrote: > >>> > >>> I am not sure this related to

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-26 Thread Alex Gorbachev
On Thu, Jul 26, 2018 at 9:21 AM, Ilya Dryomov wrote: > On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev > wrote: >> >> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev >> wrote: >> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev >> > wrote: >> >> On Wed, Jul 25, 2018 at 5:51 PM, Jason

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-26 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev wrote: > > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev > wrote: > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > > wrote: > >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman > >> wrote: > >>> > >>> > >>> On Wed, Jul 25, 2018 at 5:41

Re: [ceph-users] active directory integration with cephfs

2018-07-26 Thread Benjeman Meekhof
I can comment on that docker image: we built it to bake in a certain amount of config regarding nfs-ganesha serving CephFS and using LDAP to do idmap lookups (example LDAP entries are in the readme). At least as we use it, the server-side uid/gid information is pulled from sssd using a config file
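
For context, a minimal nfs-ganesha export sketch for CephFS; this is a generic FSAL_CEPH example, not the config baked into the image, and the export id, pseudo path and paths are assumptions:

  # generic FSAL_CEPH export in /etc/ganesha/ganesha.conf:
  EXPORT {
      Export_ID = 100;
      Path = /;
      Pseudo = /cephfs;
      Access_Type = RW;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;
      }
  }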

[ceph-users] ceph raw data usage and rgw multisite replication

2018-07-26 Thread Florian Philippon
Hello Ceph users! I have a question regarding Ceph data usage and rados gateway multisite replication. Our test cluster has the following setup: * 3 monitors * 12 OSDs (raw size: 5 GB, journal size: 1 GB, colocated on the same drive) * osd pool default size set to 2, min size to 1
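
The usual way to compare raw usage against per-pool (replicated) usage and to check multisite sync state is roughly the following; these are standard commands, not taken from the thread:

  # GLOBAL shows raw capacity/usage; POOLS shows logical usage per pool
  ceph df detail
  rados df
  # replication progress, run against the secondary zone's gateway
  radosgw-admin sync status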

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Benjamin Naber
Hi Wido, after adding the hosts back to the monmap, the following error occurs in the ceph-mon log: e5 ms_verify_authorizer bad authorizer from mon 10.111.73.3:6789/0. I tried to copy the mon keyring to all other nodes, but the problem still exists. Kind regards, Ben > Benjamin Naber wrote on 26 July
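
One thing worth checking after a "bad authorizer" message is whether the mon. secret is identical on every monitor; the paths and hostnames below assume the default layout and are illustrative:

  # on each monitor host, the key must match byte for byte
  cat /var/lib/ceph/mon/ceph-mon01/keyring
  ssh mon02 cat /var/lib/ceph/mon/ceph-mon02/keyring
  ssh mon03 cat /var/lib/ceph/mon/ceph-mon03/keyring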

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Benjamin Naber
Hi Wido, I now have one monitor online. I have removed the two others from the monmap. How can I proceed to reset those mon hosts and add them as new monitors to the monmap? Kind regards, Ben > Wido den Hollander wrote on 26 July 2018 at 11:52: > > > > > On 07/26/2018 11:50 AM,
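
Roughly the standard procedure for wiping and re-adding a monitor, assuming default paths; mon02 is an illustrative ID, not necessarily the poster's hostname:

  # on the surviving monitor: export the current monmap and mon. keyring
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  # on the host being re-added: move the old store aside and rebuild it
  mv /var/lib/ceph/mon/ceph-mon02 /var/lib/ceph/mon/ceph-mon02.old
  ceph-mon -i mon02 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  chown -R ceph:ceph /var/lib/ceph/mon/ceph-mon02
  systemctl start ceph-mon@mon02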

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Wido den Hollander
On 07/26/2018 11:50 AM, Benjamin Naber wrote: > Hi Wido, > > got the following output since I've changed the debug setting: > This is only debug_ms, it seems? debug_mon = 10 debug_ms = 10 Those two should be set; debug_mon will tell more about the election process. Wido > 2018-07-26

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Benjamin Naber
Hi Wido, got the following output since I've changed the debug setting: 2018-07-26 11:46:21.004490 7f819e968700 10 -- 10.111.73.1:6789/0 >> 10.111.73.3:0/1033315403 conn(0x55aa46c4a800 :6789 s=STATE_OPEN pgs=71 cs=1 l=1)._try_send sent bytes 9 remaining bytes 0 2018-07-26 11:46:21.004520

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Wido den Hollander
On 07/26/2018 10:33 AM, Benjamin Naber wrote: > Hi Wido, > > thanks for your reply. > Time is also in sync; I forced a time sync again to be sure. > Try setting debug_mon to 10 or even 20 and check the logs for what the MONs are saying. debug_ms = 10 might also help to get some more
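
On a monitor that has no quorum, those debug levels are normally raised via its local admin socket or ceph.conf rather than "ceph tell"; mon01 is the ID shown elsewhere in this thread:

  # per-daemon, takes effect immediately
  ceph daemon mon.mon01 config set debug_mon 10/10
  ceph daemon mon.mon01 config set debug_ms 10/10
  # or persistently in ceph.conf under [mon], then restart the mon:
  #   debug_mon = 10
  #   debug_ms = 10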

[ceph-users] Erasure coded pools - overhead, data distribution

2018-07-26 Thread Josef Zelenka
Hi everyone, we run a cluster for our customer: Ubuntu 16.04, Ceph Luminous 12.2.4 - its use is exclusively for CephFS now. We use multiple CephFS filesystems (I'm aware it's an experimental feature, but it works fine so far) for our storage purposes. The data pools for all the CephFS filesystems are
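
For reference, the raw overhead of an erasure-coded pool follows directly from the profile: k data chunks plus m coding chunks store (k+m)/k bytes of raw space per logical byte, e.g. 1.5x for k=4, m=2 versus 3x for three-way replication. A minimal sketch, with illustrative profile and pool names:

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create cephfs_data_ec 128 128 erasure ec42
  # needed before CephFS (or RBD) can write to an EC data pool on BlueStore
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true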

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Benjamin Naber
Hi Wido, thanks for your reply. Time is also in sync; I forced a time sync again to be sure. Kind regards, Ben > Wido den Hollander wrote on 26 July 2018 at 10:18: > > > > > On 07/26/2018 10:12 AM, Benjamin Naber wrote: > > Hi all, > > > > we currently have some problems with

Re: [ceph-users] active directory integration with cephfs

2018-07-26 Thread John Hearns
NFS Ganesha certainly works with CephFS; I would investigate that also. http://docs.ceph.com/docs/master/cephfs/nfs/ Regarding Active Directory, I have done a lot of work recently with sssd. Not entirely relevant to this list; please send me a mail offline. Not sure if this is of any direct use

Re: [ceph-users] Why LZ4 isn't built with ceph?

2018-07-26 Thread Elias Abacioglu
Cool, then it's time to upgrade to Mimic. Thanks for the info! On Wed, Jul 25, 2018 at 6:37 PM, Casey Bodley wrote: > > On 07/25/2018 08:39 AM, Elias Abacioglu wrote: > >> Hi >> >> I'm wondering why LZ4 isn't built by default for newer Linux distros like >> Ubuntu Xenial? >> I understand that
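
For anyone following along, BlueStore compression can be switched to LZ4 per pool once the plugin is available; the pool name below is illustrative:

  ceph osd pool set mypool compression_algorithm lz4
  ceph osd pool set mypool compression_mode aggressive
  # or as a cluster-wide default via Mimic's config database
  ceph config set osd bluestore_compression_algorithm lz4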

Re: [ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Wido den Hollander
On 07/26/2018 10:12 AM, Benjamin Naber wrote: > Hi all, > > we currently have some problems with monitor quorum after shutting down all > cluster nodes for migration to another location. > > mon_status gives us the following output: > > { > "name": "mon01", > "rank": 0, > "state":

[ceph-users] Fwd: Mons stuck in election after 3 days offline

2018-07-26 Thread Benjamin Naber
Hi all, we currently have some problems with monitor quorum after shutting down all cluster nodes for migration to another location. mon_status gives us the following output: { "name": "mon01", "rank": 0, "state": "electing", "election_epoch": 20345, "quorum": [], "features": {
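
When all monitors sit in "electing", the usual first checks are each mon's own view over the admin socket and the clocks on every mon host; the commands below are generic, and chrony vs. ntpd is an assumption about the environment:

  # works without quorum, run on each monitor host
  ceph daemon mon.mon01 mon_status
  # verify the clocks are actually synchronised
  timedatectl status
  chronyc sources -v    # or: ntpq -p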

Re: [ceph-users] active directory integration with cephfs

2018-07-26 Thread Serkan Çoban
You can do it by exporting CephFS via Samba. I don't think any other way exists for CephFS. On Thu, Jul 26, 2018 at 9:12 AM, Manuel Sopena Ballesteros wrote: > Dear Ceph community, > > > > I am quite new to Ceph but am trying to learn as quickly as I can. We are > deploying our first Ceph
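
A minimal smb.conf sketch for the Samba/vfs_ceph route; the share name and the CephX user are assumptions, and joining AD itself would sit on top of this via the usual security = ads / winbind or sssd setup:

  # share definition in /etc/samba/smb.conf:
  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      read only = no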

[ceph-users] active directory integration with cephfs

2018-07-26 Thread Manuel Sopena Ballesteros
Dear Ceph community, I am quite new to Ceph but am trying to learn as quickly as I can. We are deploying our first Ceph production cluster in the next few weeks; we chose Luminous and our goal is to have CephFS. One of the questions I have been asked by other members of our team is whether there is