Re: [ceph-users] Can't remove /var/lib/ceph/osd/ceph-53 dir

2016-07-12 Thread Pisal, Ranjit Dnyaneshwar
Try umount /dev/ - this should unmount the directory.
After this, a host restart may be needed for the changes to take effect.
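
A minimal sketch of that suggestion, assuming the device is first identified from the mount table (sdX1 below is a placeholder, not a known device name):

# Find the device that is (or was) mounted at the OSD directory
grep /var/lib/ceph/osd/ceph-53 /proc/mounts

# Unmount by device path as suggested, replacing sdX1 with the device found above
sudo umount /dev/sdX1

# If the unmount is still refused, rebooting the host clears the stale reference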

Best Regards,
Ranjit

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
William Josefsson
Sent: Tuesday, July 12, 2016 4:15 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Can't remove /var/lib/ceph/osd/ceph-53 dir

Hi Cephers,
I have a problem removing the /var/lib/ceph/osd/ceph-53 directory, which was used by osd.53, an OSD I have already removed.
The way I removed the OSD:
1. ceph osd out 53
2. sudo service ceph stop osd.53
3. ceph osd crush remove osd.53
4. ceph auth del osd.53
5. ceph osd rm 53
6. sudo umount /var/lib/ceph/osd/ceph-53
and then I tried 'sudo rm -rf /var/lib/ceph/osd/ceph-53' but got the following response:
sudo rm -rf /var/lib/ceph/osd/ceph-53
rm: cannot remove '/var/lib/ceph/osd/ceph-53': Device or resource busy
I used 'fuser -m' to check and saw the following results:
sudo fuser -m /var/lib/ceph/osd/ceph-53
/var/lib/ceph/osd/ceph-53:     1rce     2rc     3rc     5rc     8rc     9rc    10rc    11rc    12rc ...
[fuser output truncated: nearly every PID on the host, up through 527, is reported with "rc" access]
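
When fuser -m lists practically every PID on the host, it usually means the path no longer resolves to a separate filesystem and fuser is matching the parent (root) filesystem instead, so the "Device or resource busy" error most likely comes from a lingering mount reference. A couple of checks that may help before resorting to a reboot (standard Linux tools, nothing Ceph-specific):

# Is the directory still registered as a mount point anywhere?
mountpoint /var/lib/ceph/osd/ceph-53
grep ceph-53 /proc/mounts

# If it is, detach it lazily and retry the removal
sudo umount -l /var/lib/ceph/osd/ceph-53
sudo rm -rf /var/lib/ceph/osd/ceph-53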

[ceph-users] Data recovery stuck

2016-07-08 Thread Pisal, Ranjit Dnyaneshwar
Hi All,

I am in the process of adding new OSDs to the cluster; however, after adding the second node, cluster recovery seems to have stopped.

It has been more than 3 days, but the objects-degraded percentage has not improved by even 1%.

Will adding further OSDs help improve the situation, or is there another way to speed up the recovery process?


[ceph@MYOPTPDN01 ~]$ ceph -s
    cluster 9e3e9015-f626-4a44-83f7-0a939ef7ec02
     health HEALTH_WARN 315 pgs backfill; 23 pgs backfill_toofull; 3 pgs backfilling; 53 pgs degraded; 2 pgs recovering; 232 pgs recovery_wait; 552 pgs stuck unclean; recovery 3622384/90976826 objects degraded (3.982%); 1 near full osd(s)
     monmap e4: 5 mons at {MYOPTPDN01=10.115.1.136:6789/0,MYOPTPDN02=10.115.1.137:6789/0,MYOPTPDN03=10.115.1.138:6789/0,MYOPTPDN04=10.115.1.139:6789/0,MYOPTPDN05=10.115.1.140:6789/0}, election epoch 6654, quorum 0,1,2,3,4 MYOPTPDN01,MYOPTPDN02,MYOPTPDN03,MYOPTPDN04,MYOPTPDN05
     osdmap e198079: 171 osds: 171 up, 171 in
      pgmap v26428186: 5696 pgs, 4 pools, 105 TB data, 28526 kobjects
            329 TB used, 136 TB / 466 TB avail
            3622384/90976826 objects degraded (3.982%)
                  23 active+remapped+wait_backfill+backfill_toofull
                 120 active+recovery_wait+remapped
                5144 active+clean
                   1 active+recovering+remapped
                 104 active+recovery_wait
                  45 active+degraded+remapped+wait_backfill
                   1 active+recovering
                   3 active+remapped+backfilling
                 247 active+remapped+wait_backfill
                   8 active+recovery_wait+degraded+remapped
  client io 62143 kB/s rd, 100 MB/s wr, 14427 op/s
[ceph@MYOPTPDN01 ~]$
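
The status above shows 23 PGs in backfill_toofull and one near-full OSD; backfill for those PGs will not proceed until space frees up or the thresholds change, no matter how many OSDs are added. A few checks and knobs that may help (a sketch only; the ratio and reweight values are examples, not recommendations):

# Which OSDs are near full? (ceph osd df exists from Hammer onward;
# on older releases check df -h on the OSD hosts instead)
ceph health detail
ceph osd df

# Temporarily raise the backfill-full threshold (default 0.85) so that
# backfill_toofull PGs can make progress -- example value only
ceph tell 'osd.*' injectargs '--osd-backfill-full-ratio 0.90'

# Or shift data away from the near-full OSD by lowering its reweight
ceph osd reweight <osd-id> 0.90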

Best Regards,
Ranjit



Re: [ceph-users] Ceph-deploy new OSD addition issue

2016-06-28 Thread Pisal, Ranjit Dnyaneshwar
This is another error I get while trying to activate the disk:

[ceph@MYOPTPDN16 ~]$ sudo ceph-disk activate /dev/sdl1
2016-06-29 11:25:17.436256 7f8ed85ef700  0 -- :/1032777 >> 10.115.1.156:6789/0 
pipe(0x7f8ed4021610 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ed40218a0).fault
2016-06-29 11:25:20.436362 7f8ed84ee700  0 -- :/1032777 >> 10.115.1.156:6789/0 
pipe(0x7f8ec4000c00 sd=6 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ec4000e90).fault
^Z
[2]+  Stopped sudo ceph-disk activate /dev/sdl1
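
The repeated ".fault" lines mean the client on MYOPTPDN16 never establishes a usable connection to the monitor at 10.115.1.156:6789; note that the ceph -s output earlier in this digest lists the monitors at 10.115.1.136-140, so the address itself may be worth double-checking. Some basic connectivity checks (a sketch; assumes the usual /etc/ceph layout):

# Is the monitor address/port reachable from the new host?
nc -zv 10.115.1.156 6789

# Do the config and keyring on the new host look sane?
cat /etc/ceph/ceph.conf
ls -l /etc/ceph/

# A firewall on the monitor host blocking 6789/tcp gives the same symptom
sudo iptables -L -n | grep 6789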

Best Regards,
Ranjit
+91-9823240750


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pisal, 
Ranjit Dnyaneshwar
Sent: Wednesday, June 29, 2016 10:59 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph-deploy new OSD addition issue


Hi,

I am stuck at one point while adding a new OSD host to an existing Ceph cluster. I have tried multiple combinations for creating OSDs on the new host, but every time it fails during disk activation: no OSD mount (/var/lib/ceph/osd/ceph-xxx) gets created; instead, a temporary mount (/var/lib/ceph/tmp/bhbjnk.mnt) is created. The host has a combination of SSD and SAS disks, with the SSDs partitioned for use as journals. The sequence I followed to add the new host is as follows -

1. Installed the Ceph RPMs on the new host.
2. From the INIT node, checked 'ceph-disk list' for the new host.
3. Prepared the disk: ceph-deploy --overwrite-conf osd create --fs-type xfs {OSD node}:{raw device}. The result reported that the host is ready for OSD use; however, the OSD did not appear in the OSD tree (because CRUSH was not updated?) and no /var/lib/ceph/osd/ceph-xx mount was created.
4. Although it reported the host ready for OSD use, it first threw a warning that it was disconnecting after 300 seconds because no data was received from the new host.
5. I tried to activate the disk manually: sudo ceph-disk activate /dev/sde1. This command failed with the following error:
ceph-disk: Cannot discover filesystem type: device /dev/sda: Line is truncated

After this I also tried installing ceph-deploy and preparing the new host with the commands below, then repeated the above steps, but it still failed at the same point, disk activation.

ceph-deploy install newHost
ceph-deploy new newHost
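
For comparison, a typical ceph-deploy flow for a data disk with its journal on a pre-created SSD partition looks roughly like this (a sketch; MYOPTPDN16 and sdl appear in the reply above, /dev/sdb1 is only an assumed journal partition, and disk zap is destructive):

# Wipe the target data disk on the new host (destroys any data on it)
ceph-deploy disk zap MYOPTPDN16:sdl

# Prepare the OSD, pointing the journal at an existing SSD partition
ceph-deploy --overwrite-conf osd prepare MYOPTPDN16:sdl:/dev/sdb1

# Activate the data partition created by the prepare step
ceph-deploy osd activate MYOPTPDN16:/dev/sdl1:/dev/sdb1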

Attached logs for reference.

Please assist with any known workaround/resolution.

Thanks
Ranjit

