Re: [ceph-users] OSD removal is not cleaning entry from osd listing

2015-08-04 Thread Mallikarjun Biradar
Hi Robert,

It works. Thanks.

-Regards,
Mallikarjun


Re: [ceph-users] OSD removal is not cleaning entry from osd listing

2015-07-31 Thread Robert LeBlanc
I usually do the crush rm step second to last. I don't know if your
modifying the OSD after removing it from the CRUSH map is putting it
back in.
1. Stop OSD process
2. ceph osd rm <id>
3. ceph osd crush rm osd.<id>
4. ceph auth del osd.<id>

Can you try the crush rm command again for kicks and giggles?
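Spelled out for a hypothetical osd.20, as a sketch of the order above
(the upstart/sysvinit service invocations are an assumption for a
Hammer-era Ubuntu host):

# 1. stop the OSD process on its host (init system varies)
sudo stop ceph-osd id=20   # upstart; sysvinit: sudo /etc/init.d/ceph stop osd.20
# 2. remove the OSD from the osdmap
ceph osd rm 20
# 3. remove it from the CRUSH map
ceph osd crush rm osd.20
# 4. delete its authentication key
ceph auth del osd.20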

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1



Re: [ceph-users] OSD removal is not cleaning entry from osd listing

2015-07-31 Thread Mallikarjun Biradar
For a moment it de-lists the removed OSDs, and after some time they
come up again in the ceph osd tree listing.
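A hedged way to catch whatever re-adds them, assuming a terminal can
stay open while it happens:

# stream cluster events live; a re-registration shows up as osdmap changes
ceph -w
# or poll the tree so the moment the entries return can be timestamped
watch -n 10 'ceph osd tree'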

On Fri, Jul 31, 2015 at 12:45 PM, Mallikarjun Biradar
mallikarjuna.bira...@gmail.com wrote:
 Hi,

 I had 27 OSDs in my cluster. I removed two of the OSDs: osd.20 from
 host-3 & osd.22 from host-6.

 user@host-1:~$ sudo ceph osd tree
 ID WEIGHTTYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 184.67990 root default
 -7  82.07996 chassis chassis2
 -4  41.03998 host host-3
  8   6.84000 osd.8  up  1.0  1.0
  9   6.84000 osd.9  up  1.0  1.0
 10   6.84000 osd.10 up  1.0  1.0
 11   6.84000 osd.11 up  1.0  1.0
 20   6.84000 osd.20 up  1.0  1.0
 21   6.84000 osd.21 up  1.0  1.0
 -5  41.03998 host host-6
 12   6.84000 osd.12 up  1.0  1.0
 13   6.84000 osd.13 up  1.0  1.0
 14   6.84000 osd.14 up  1.0  1.0
 15   6.84000 osd.15 up  1.0  1.0
 22   6.84000 osd.22 up  1.0  1.0
 23   6.84000 osd.23 up  1.0  1.0
 -6 102.59995 chassis chassis1
 -2  47.87997 host host-1
  0   6.84000 osd.0  up  1.0  1.0
  1   6.84000 osd.1  up  1.0  1.0
  2   6.84000 osd.2  up  1.0  1.0
  3   6.84000 osd.3  up  1.0  1.0
 16   6.84000 osd.16 up  1.0  1.0
 17   6.84000 osd.17 up  1.0  1.0
 24   6.84000 osd.24 up  1.0  1.0
 -3  54.71997 host host-2
  4   6.84000 osd.4  up  1.0  1.0
  5   6.84000 osd.5  up  1.0  1.0
  6   6.84000 osd.6  up  1.0  1.0
  7   6.84000 osd.7  up  1.0  1.0
 18   6.84000 osd.18 up  1.0  1.0
 19   6.84000 osd.19 up  1.0  1.0
 25   6.84000 osd.25 up  1.0  1.0
 26   6.84000 osd.26 up  1.0  1.0
 user@host-1:~$

 Steps used to remove OSD:
 user@host-1:~$ ceph auth del osd.20; ceph osd crush rm osd.20; ceph
 osd down osd.20; ceph osd rm osd.20
 updated
 removed item id 20 name 'osd.20' from crush map
 marked down osd.22.
 removed osd.22

 Removed both OSDs, osd.20 & osd.22.

 But, even after removing them, ceph osd tree is still listing the
 deleted OSDs & ceph -s is reporting the total number of OSDs as 27.

 user@host-1:~$ sudo ceph osd tree
 ID WEIGHTTYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 184.67990 root default
 -7  82.07996 chassis chassis2
 -4  41.03998 host host-3
  8   6.84000 osd.8  up  1.0  1.0
  9   6.84000 osd.9  up  1.0  1.0
 10   6.84000 osd.10 up  1.0  1.0
 11   6.84000 osd.11 up  1.0  1.0
 21   6.84000 osd.21 up  1.0  1.0
 -5  41.03998 host host-6
 12   6.84000 osd.12 up  1.0  1.0
 13   6.84000 osd.13 up  1.0  1.0
 14   6.84000 osd.14 up  1.0  1.0
 15   6.84000 osd.15 up  1.0  1.0
 23   6.84000 osd.23 up  1.0  1.0
 -6 102.59995 chassis chassis1
 -2  47.87997 host host-1
  0   6.84000 osd.0  up  1.0  1.0
  1   6.84000 osd.1  up  1.0  1.0
  2   6.84000 osd.2  up  1.0  1.0
  3   6.84000 osd.3  up  1.0  1.0
 16   6.84000 osd.16 up  1.0  1.0
 17   6.84000 osd.17 up  1.0  1.0
 24   6.84000 osd.24 up  1.0  1.0
 -3  54.71997 host host-2
  4   6.84000 osd.4  up  1.0  1.0
  5   6.84000 osd.5  up  1.0  1.0
  6   6.84000 osd.6  up  1.0  1.0
  7   6.84000 osd.7  up  1.0  1.0
 18   6.84000 osd.18 up  1.0

Re: [ceph-users] OSD removal is not cleaning entry from osd listing

2015-07-31 Thread John Spray



On 31/07/15 09:47, Mallikarjun Biradar wrote:

 For a moment it de-lists the removed OSDs, and after some time they
 come up again in the ceph osd tree listing.

Is the OSD service itself definitely stopped? Are you using any
orchestration systems (puppet, chef) that might be re-creating its
auth key, etc.?
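One hedged way to check, with osd.20 standing in for whichever id was
removed:

# does the deleted auth key come back after a while?
ceph auth list | grep -A1 'osd.20'
# does the id reappear in the osdmap?
ceph osd dump | grep 'osd.20 '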


John


Re: [ceph-users] OSD removal is not cleaning entry from osd listing

2015-07-31 Thread Mallikarjun Biradar
Yeah, the OSD service is stopped.
Nope, I am not using any orchestration system.

user@host-1:~$ ps -ef | grep ceph
root      2305     1  7 Jul27 ?  06:52:36 /usr/bin/ceph-osd
--cluster=ceph -i 3 -f
root      2522     1  6 Jul27 ?  06:19:42 /usr/bin/ceph-osd
--cluster=ceph -i 0 -f
root      2792     1  6 Jul27 ?  06:07:49 /usr/bin/ceph-osd
--cluster=ceph -i 2 -f
root      2904     1  8 Jul27 ?  07:48:19 /usr/bin/ceph-osd
--cluster=ceph -i 1 -f
root     13368     1  5 Jul28 ?  04:15:31 /usr/bin/ceph-osd
--cluster=ceph -i 17 -f
root     16685     1  6 Jul28 ?  04:36:54 /usr/bin/ceph-osd
--cluster=ceph -i 16 -f
root     26942     1  7 Jul29 ?  03:54:45 /usr/bin/ceph-osd
--cluster=ceph -i 24 -f
user     42767 42749  0 15:58 pts/3  00:00:00 grep --color=auto ceph
user@host-1:~$ ceph osd tree
ID WEIGHTTYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 170.1 root default
-7  68.39996 chassis chassis2
-4  34.19998 host host-3
 8   6.84000 osd.8  up  1.0  1.0
 9   6.84000 osd.9  up  1.0  1.0
10   6.84000 osd.10 up  1.0  1.0
11   6.84000 osd.11 up  1.0  1.0
21   6.84000 osd.21 up  1.0  1.0
-5  34.19998 host host-6
12   6.84000 osd.12 up  1.0  1.0
13   6.84000 osd.13 up  1.0  1.0
14   6.84000 osd.14 up  1.0  1.0
15   6.84000 osd.15 up  1.0  1.0
23   6.84000 osd.23 up  1.0  1.0
-6 102.59995 chassis chassis1
-2  47.87997 host host-1
 0   6.84000 osd.0  up  1.0  1.0
 1   6.84000 osd.1  up  1.0  1.0
 2   6.84000 osd.2  up  1.0  1.0
 3   6.84000 osd.3  up  1.0  1.0
16   6.84000 osd.16 up  1.0  1.0
17   6.84000 osd.17 up  1.0  1.0
24   6.84000 osd.24 up  1.0  1.0
-3  54.71997 host host-2
 4   6.84000 osd.4  up  1.0  1.0
 5   6.84000 osd.5  up  1.0  1.0
 6   6.84000 osd.6  up  1.0  1.0
 7   6.84000 osd.7  up  1.0  1.0
18   6.84000 osd.18 up  1.0  1.0
19   6.84000 osd.19 up  1.0  1.0
25   6.84000 osd.25 up  1.0  1.0
26   6.84000 osd.26 up  1.0  1.0
20 0 osd.20 up  1.0  1.0
22 0 osd.22 up  1.0  1.0
user@host-1:~$
user@host-1:~$ df
Filesystem  1K-blocks   Used  Available Use% Mounted on
/dev/sdq1   414579696   11211248  382285928   3% /
none4  0  4   0% /sys/fs/cgroup
udev 65980912  4   65980908   1% /dev
tmpfs13198836   1124   13197712   1% /run
none 5120  0   5120   0% /run/lock
none 65994176 12   65994164   1% /run/shm
none   102400  0 102400   0% /run/user
/dev/sdl1  7345777988 3233438932 4112339056  45% /var/lib/ceph/osd/ceph-2
/dev/sda1  7345777988 4484766028 2861011960  62% /var/lib/ceph/osd/ceph-3
/dev/sdn1  7345777988 3344604424 4001173564  46% /var/lib/ceph/osd/ceph-1
/dev/sdp1  7345777988 3897260808 3448517180  54% /var/lib/ceph/osd/ceph-0
/dev/sdc1  7345777988 3029110220 4316667768  42% /var/lib/ceph/osd/ceph-16
/dev/sde1  7345777988 2673181020 4672596968  37% /var/lib/ceph/osd/ceph-17
/dev/sdg1  7345777988 3537932824 3807845164  49% /var/lib/ceph/osd/ceph-24
user@host-1:~$
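Worth noting: the ps output above is from host-1, while osd.20 and
osd.22 lived on host-3 and host-6, so their daemons could still be
running there and re-registering at startup. A sketch of the ceph.conf
option that governs that startup re-registration (blaming it here is an
assumption; the weight-0 entries above are consistent with a fresh
re-add):

[osd]
# on startup ceph-osd runs "ceph osd crush create-or-move ..." and
# re-inserts itself into the CRUSH map; set false to disable that
osd crush update on start = false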



Re: [ceph-users] OSD removal is not cleaning entry from osd listing

2015-07-31 Thread Mallikarjun Biradar
I am using Hammer 0.94.
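A quick hedged check that every daemon really runs the same release
(the wildcard tell is assumed to work on this version):

ceph --version             # local binaries
ceph tell osd.* version    # ask each running OSD daemon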

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com