Re: [ceph-users] Need help for PG problem

2016-03-23 Thread Zhang Qiang
I adjusted the crush map; everything's OK now. Thanks for your help! On Wed, 23 Mar 2016 at 23:13 Matt Conner wrote: > Hi Zhang, > > In a 2 copy pool, each placement group is spread across 2 OSDs - that is > why you see such a high number of placement groups per OSD.

Re: [ceph-users] Periodic evicting & flushing

2016-03-23 Thread Christian Balzer
Hello, On Wed, 23 Mar 2016 04:46:50 -0400 Maran wrote: > Hey, > > Original Message > Hello, > > On Wed, 23 Mar 2016 03:24:33 -0400 Maran wrote: > > >> Hey, > >> > >> Thanks for the prompt response. > >> > >> Let me put some inline comments. > >> > >Those are much more

Re: [ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-23 Thread yang
Hi Oliver, It's very kind of you, thanks. But I still have some questions: why can ceph-mon auto-start when the host reboots if I deployed it as root, while it cannot when I deployed it as non-root? Maybe this is a "Trouble" as you say below? You say "With the hammer release you are using, you

Re: [ceph-users] CEPHFS file or directories disappear when ls (metadata problem)

2016-03-23 Thread FaHui Lin
Dear Greg, Lincoln, and all, Thank you for your suggestion. I think I'll leave the kernel intact and use ceph-fuse on most computing nodes, considering the compatibility of other software used by our jobs. Hope that will improve the stability. For sure, we'll try a few nodes with newer

Re: [ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-23 Thread Oliver Dzombic
Hi Yang, Ceph works with udev; there is nothing in /etc/fstab. If you are using root, you have rights on everything. If you are not using root, you do not. If you run ceph-deploy as root, all commands on the target host will be executed as root. If you run ceph-deploy as non-root,
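For reference, a minimal sketch of the non-root setup the ceph-deploy docs describe: create a dedicated deploy user on each target host and grant it passwordless sudo (the user name cephdeploy is an arbitrary example). ceph-deploy invoked as that user will prefix remote commands with sudo.

  # on each target node
  useradd -d /home/cephdeploy -m cephdeploy
  passwd cephdeploy
  echo "cephdeploy ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephdeploy
  chmod 0440 /etc/sudoers.d/cephdeploy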

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread Zhang Qiang
Yes, it was the crush map. I updated it, distributed the 20 OSDs across the 2 hosts correctly, and finally all PGs are healthy. Thanks guys, I really appreciate your help! On Thu, 24 Mar 2016 at 07:25 Goncalo Borges wrote: > Hi Zhang... > > I think you are dealing with two

[ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-23 Thread yang
Can anyone help me? -- Original -- From: "yang"; Date: Wed, Mar 23, 2016 11:30 AM To: "ceph-users"; Subject: root and non-root user for ceph/ceph-deploy Hi, everyone, In my ceph cluster, first I deploy my

Re: [ceph-users] ceph deploy osd install broken on centos 7 with hammer 0.94.6

2016-03-23 Thread Oliver Dzombic
Hi, after I copied /lib/lsb/* (which did not exist on my new CentOS 7.2 system), I now get: # service ceph start Error EINVAL: entity osd.18 exists but key does not match ERROR:ceph-disk:Failed to activate ceph-disk: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd',
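One common way out of the "entity osd.18 exists but key does not match" error, assuming the stale entry is left over from an earlier failed activation and the OSD holds no data you need, is to drop the old auth key and re-activate (the device path is an example):

  # remove the stale key so activation can re-register the OSD
  ceph auth del osd.18
  # then re-run activation for the affected partition
  ceph-disk activate /dev/sdc1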

Re: [ceph-users] DONTNEED fadvise flag

2016-03-23 Thread Yan, Zheng
> On Mar 24, 2016, at 01:28, Gregory Farnum wrote: > > On Mon, Mar 21, 2016 at 6:02 AM, Yan, Zheng wrote: >> >>> On Mar 21, 2016, at 18:17, Kenneth Waegeman >>> wrote: >>> >>> Thanks! As we are using the kernel client of EL7,

[ceph-users] ceph deploy osd install broken on centos 7 with hammer 0.94.6

2016-03-23 Thread Oliver Dzombic
Hi, I am trying to add a node to an existing cluster: ceph-deploy install newceph2 --release hammer works fine. I try to add an OSD: ceph-deploy osd create newceph2:/dev/sdc:/dev/sda works fine: [newceph2][WARNIN] Executing /sbin/chkconfig ceph on [newceph2][INFO ] checking OSD status...

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread 施柏安
It seems that you only have two hosts in your crush map, but the default ruleset separates replicas by host. If you set size 3 for a pool, then one replica cannot be placed because you only have two hosts. 2016-03-23 20:17 GMT+08:00 Zhang Qiang : > And
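If adding a third host is not an option, a quick workaround is to lower the replica count so two hosts can satisfy the rule (the pool name rbd here is an assumption):

  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1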

[ceph-users] ceph-deploy from hammer server installs infernalis on nodes

2016-03-23 Thread Oliver Dzombic
Hi, running ceph-deploy install from a ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403) cluster will install ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd) on the node. Seen on CentOS 7.2. I was searching for two hours why strange things happen. I
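A workaround while the version mismatch is sorted out: pin the release explicitly on every install, so ceph-deploy cannot fall through to its built-in default (this is the same flag used in the thread above):

  ceph-deploy install --release hammer newceph2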

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread Goncalo Borges
Hi Zhang... I think you are dealing with two different problems. The first problem refers to the number of PGs per OSD. That was already discussed, and now there are no more messages concerning it. The second problem you are experiencing seems to be that all your OSDs are under the same host.
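To inspect and fix the placement, a sketch (the host name node2 and the weight are examples): check the tree, create a bucket for the second host, and reposition the OSDs that physically live there:

  ceph osd tree
  ceph osd crush add-bucket node2 host
  ceph osd crush move node2 root=default
  ceph osd crush create-or-move osd.10 1.07 host=node2 root=default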

Re: [ceph-users] recorded data digest != on disk

2016-03-23 Thread David Zafman
On 3/23/16 7:45 AM, Gregory Farnum wrote: On Tue, Mar 22, 2016 at 11:59 AM, Max A. Krasilnikov wrote: Hello! On Tue, Mar 22, 2016 at 11:40:39AM -0700, gfarnum wrote: On Tue, Mar 22, 2016 at 1:19 AM, Max A. Krasilnikov wrote: -1> 2016-03-21

Re: [ceph-users] DONTNEED fadvise flag

2016-03-23 Thread Jan Schermer
So the OSDs pass this through to the filestore so it doesn't pollute the cache? That would be... surprising. Jan > On 23 Mar 2016, at 18:28, Gregory Farnum wrote: > > On Mon, Mar 21, 2016 at 6:02 AM, Yan, Zheng wrote: >> >>> On Mar 21, 2016, at 18:17,

Re: [ceph-users] DONTNEED fadvise flag

2016-03-23 Thread Gregory Farnum
On Mon, Mar 21, 2016 at 6:02 AM, Yan, Zheng wrote: > >> On Mar 21, 2016, at 18:17, Kenneth Waegeman >> wrote: >> >> Thanks! As we are using the kernel client of EL7, does someone know if that >> client supports it? >> > > fadvise DONTNEED is
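For a userspace illustration of the flag itself (not Ceph-specific): GNU dd's nocache flag issues posix_fadvise(DONTNEED), so a bulk read does not evict hotter data from the page cache. A sketch, assuming a reasonably recent coreutils:

  # read a large file without keeping it in the page cache
  dd if=/cephfs/bigfile of=/dev/null bs=1M iflag=nocache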

Re: [ceph-users] CEPHFS file or directories disappear when ls (metadata problem)

2016-03-23 Thread Gregory Farnum
On Wed, Mar 23, 2016 at 9:18 AM, Lincoln Bryant wrote: > Hi, > > If you are using the kernel client, I would suggest trying something newer > than 3.10.x. I ran into this issue in the past, but it was fixed by updating > my kernel to something newer. You may want to check

Re: [ceph-users] CEPHFS file or directories disappear when ls (metadata problem)

2016-03-23 Thread Lincoln Bryant
Hi, If you are using the kernel client, I would suggest trying something newer than 3.10.x. I ran into this issue in the past, but it was fixed by updating my kernel to something newer. You may want to check the OS recommendations page as well:
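To check what you are running and fall back to the userspace client, a quick sketch (the monitor address is an example):

  uname -r
  # mount via ceph-fuse instead of the kernel client
  ceph-fuse -m mon1.example.com:6789 /mnt/cephfs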

[ceph-users] CEPHFS file or directories disappear when ls (metadata problem)

2016-03-23 Thread FaHui Lin
Dear Ceph experts, We hit a nasty problem with our CephFS from time to time: when we try to list a directory under CephFS, some files or directories do not show up. For example, this is the complete directory content:

  # ll /cephfs/ies/home/mika
  drwxr-xr-x 1 10035 11 1559018781 Feb 2
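One low-risk debugging step when a kernel-client listing looks stale, assuming the stale entries are client-side cached dentries: drop the dentry/inode caches (as root) and list again:

  echo 2 > /proc/sys/vm/drop_caches
  ls /cephfs/ies/home/mika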

Re: [ceph-users] mds "Behind on trimming"

2016-03-23 Thread Dzianis Kahanovich
(mistake, copy to list) John Spray пишет: >> Looks happened both time at night - probably on long backup/write operations >> (something like compressed local root backup to cephfs). Also all local >> mounts >> inside cluster (fuse) moved to automout to reduce clients pressure. Still 5 >>

[ceph-users] Crush Map tunning recommendation and validation

2016-03-23 Thread German Anders
Hi all, I have a question. I'm in the middle of deploying a new Ceph cluster and I have 6 OSD servers across two racks: rack1 would have osdserver1, 3 and 5, and rack2 osdserver2, 4 and 6. I've edited the following crush map and I want to know if it's OK, and also if the objects would be stored one on
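For validating an edited map before it goes live, a sketch of the usual round-trip (the rule number and replica count are examples):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt: add rack buckets, move the hosts under them, adjust the rule
  crushtool -c crush.txt -o crush.new
  crushtool --test -i crush.new --rule 0 --num-rep 2 --show-mappings | head
  ceph osd setcrushmap -i crush.new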

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread Matt Conner
Hi Zhang, In a 2 copy pool, each placement group is spread across 2 OSDs - that is why you see such a high number of placement groups per OSD. There is a PG calculator at http://ceph.com/pgcalc/. Based on your setup, it may be worth using 2048 instead of 4096. As for stuck/degraded PGs, most are
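The heuristic behind the calculator, as a sketch: target roughly 100 PGs per OSD, divide by the replica count, and round to a power of two:

  echo $(( 20 * 100 / 2 ))   # = 1000 -> round to 1024 (or 2048 for headroom)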

Re: [ceph-users] recorded data digest != on disk

2016-03-23 Thread Gregory Farnum
On Tue, Mar 22, 2016 at 11:59 AM, Max A. Krasilnikov wrote: > Hello! > > On Tue, Mar 22, 2016 at 11:40:39AM -0700, gfarnum wrote: > >> On Tue, Mar 22, 2016 at 1:19 AM, Max A. Krasilnikov >> wrote: >>> >>> -1> 2016-03-21 17:36:09.048201 7f253f912700
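Once the inconsistency is understood, the usual sequence is a sketch like the following (the PG id is an example; repair copies from the replica Ceph considers authoritative, so only run it when you trust the other copies):

  ceph health detail | grep inconsistent
  ceph pg repair 2.17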

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread koukou73gr
Are you running with the default failure domain of 'host'? If so, with a pool size of 3 and your 20 OSDs physically on only 2 hosts, Ceph is unable to find a 3rd host to map the 3rd replica to. Either add a host and move some OSDs there, or reduce the pool size to 2. -K. On 03/23/2016 02:17 PM, Zhang
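A third option for a testbed, sketched here with an assumed pool name: create a rule that separates replicas by OSD instead of host and point the pool at it:

  ceph osd crush rule create-simple by-osd default osd
  ceph osd crush rule dump by-osd        # note the rule_id
  ceph osd pool set rbd crush_ruleset <rule-id>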

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread koukou73gr
You should have settled on the nearest power of 2, which for 666 is 512. Since you created the cluster and IIRC it is a testbed, you may as well recreate it again; however, it will be less of a hassle to just increase the PGs to the next power of two: 1024. Your 20 OSDs appear to be equal-sized in your
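Bumping the counts in place is two steps: pg_num first, then pgp_num once the new PGs have been created:

  ceph osd pool set <pool> pg_num 1024
  ceph osd pool set <pool> pgp_num 1024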

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread Zhang Qiang
And here's the osd tree if it matters.

  ID WEIGHT   TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
  -1 22.39984 root default
  -2 21.39984     host 10
   0  1.06999         osd.0     up 1.0 1.0
   1  1.06999         osd.1     up 1.0 1.0
   2  1.06999

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread Zhang Qiang
Oliver, Goncalo, Sorry to disturb again, but recreating the pool with a smaller pg_num didn't seem to work; now all 666 PGs are degraded + undersized. New status:

  cluster d2a69513-ad8e-4b25-8f10-69c4041d624d
   health HEALTH_WARN
          666 pgs degraded
          82 pgs stuck

Re: [ceph-users] ceph for ubuntu 16.04

2016-03-23 Thread Robertz C.
Thanks a lot, James. Does this mean we can run Ceph (both client and server) safely on the coming Ubuntu 16.04 for production apps? On 2016-03-23 17:12, James Page wrote: Hi On Wed, 23 Mar 2016 at 08:04 Robertz C. wrote: Hi members, Ubuntu 16.04 will get released soon. I

Re: [ceph-users] ceph for ubuntu 16.04

2016-03-23 Thread James Page
Hi On Wed, 23 Mar 2016 at 08:04 Robertz C. wrote: > Hi members, > > Ubuntu 16.04 will get released soon. > I want to make sure that, will ceph integrated with the new ubuntu > system work fine in production environment? > Do you have any suggestion on this? The ceph

Re: [ceph-users] ceph for ubuntu 16.04

2016-03-23 Thread Alexandre DERUMIER
I think the biggest change is systemd? It works fine with Debian Jessie, so I think it should be trivial to make it run on Ubuntu 16.04. - Original Message - From: "Robertz C." To: "ceph-users" Sent: Wednesday 23 March 2016 09:04:32 Subject:
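For what it's worth, a sketch of poking at the systemd units, assuming the post-hammer unit layout (ceph-osd@<id> and friends):

  systemctl list-units 'ceph*'
  systemctl status ceph-osd@0
  systemctl enable ceph-osd@0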

Re: [ceph-users] Periodic evicting & flushing

2016-03-23 Thread Maran
Hey, Original Message Hello, On Wed, 23 Mar 2016 03:24:33 -0400 Maran wrote: >> Hey, >> >> Thanks for the prompt response. >> >> Let me put some inline comments. >> >Those are much more readable if properly indented/quoted by ye olde ">". My apologies, I'm using a new

[ceph-users] ceph for ubuntu 16.04

2016-03-23 Thread Robertz C.
Hi members, Ubuntu 16.04 will get released soon. I want to make sure: will Ceph, integrated with the new Ubuntu system, work fine in a production environment? Do you have any suggestions on this? Thanks.

Re: [ceph-users] Periodic evicting & flushing

2016-03-23 Thread Maran
Hey, Thanks for the prompt response. Let me put some inline comments. Hello, On Tue, 22 Mar 2016 12:28:22 -0400 Maran wrote: > Hey guys, > > I'm trying to wrap my head around Ceph cache tiering to discover if > what I want is achievable. > > My cluster consists of 6 OSD nodes with normal HDD
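The basic wiring for a writeback cache tier, sketched with assumed pool names cold-pool and hot-pool:

  ceph osd tier add cold-pool hot-pool
  ceph osd tier cache-mode hot-pool writeback
  ceph osd tier set-overlay cold-pool hot-pool
  ceph osd pool set hot-pool hit_set_type bloom
  ceph osd pool set hot-pool target_max_bytes 100000000000   # ~100 GB, tune to taste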

Re: [ceph-users] v0.94.6 Hammer released

2016-03-23 Thread Sage Weil
On Wed, 23 Mar 2016, Loic Dachary wrote: > On 23/03/2016 01:12, Chris Dunlop wrote: > > Hi Loïc, > > > > On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote: > >> On 23/03/2016 00:39, Chris Dunlop wrote: > >>> "The old OS'es" that were being supported up to v0.94.5 includes debian > >>>