Re: [ceph-users] OSD recovery problem

2014-07-04 Thread Loic Dachary
For the record here is a summary of what happened : http://dachary.org/?p=3131

On 04/07/2014 15:35, Loic Dachary wrote:
> 
> 
> On 04/07/2014 15:25, Wido den Hollander wrote:
>> On 07/04/2014 03:18 PM, Loic Dachary wrote:
>>> Hi,
>>>
>>> I extracted a disk with two partitions (journal and data) and copied its
>>> content in the hope of restarting the OSD and recovering its data.
>>>
>>> mount /dev/sdb1 /mnt
>>> rsync -avH --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/
>>
>> I think you went wrong there, rsync man page:
>>
>> -a, --archive   archive mode; equals -rlptgoD (no -H,-A,-X)
>> -X, --xattrs    preserve extended attributes
>>
>> So you didn't copy over the xattrs, which means the copied data is essentially unusable.
> 
> Thanks! Fortunately the original disks are still available ;-)
> 
>>> rm /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
>>> dd if=/dev/sdb2 of=/var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
>>>
>>> and then
>>>
>>> start ceph-osd id=$(cat /mnt/whoami)
>>>
>>> It crashes at https://github.com/ceph/ceph/blob/v0.72.2/src/osd/PG.cc#L2182
>>> and just before the crash the log shows
>>>
>>> load_pgs ignoring unrecognized meta
>>>
>>> and the full "debug osd = 20" logs are in http://paste.ubuntu.com/7746993/
>>> and these are the installed versions:
>>>
>>> root@bm4202:/etc/ceph# dpkg -l | grep ceph
>>> ii  ceph         0.72.2-1trusty  amd64  distributed storage and file system
>>> ii  ceph-common  0.72.2-1trusty  amd64  common utilities to mount and interact with a ceph storage
>>> ii  python-ceph  0.72.2-1trusty  amd64  Python libraries for the Ceph distributed filesystem
>>> root@bm4202:/etc/ceph# ceph --version
>>> ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
>>>
>>> Cheers
>>>

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: [ceph-users] OSD recovery problem

2014-07-04 Thread Loic Dachary


On 04/07/2014 15:25, Wido den Hollander wrote:
> On 07/04/2014 03:18 PM, Loic Dachary wrote:
>> Hi,
>>
>> I extracted a disk with two partitions (journal and data) and copied its
>> content in the hope of restarting the OSD and recovering its data.
>>
>> mount /dev/sdb1 /mnt
>> rsync -avH --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/
> 
> I think you went wrong there, rsync man page:
> 
> -a, --archive   archive mode; equals -rlptgoD (no -H,-A,-X)
> -X, --xattrs    preserve extended attributes
> 
> So you didn't copy over the xattrs, which means the copied data is essentially unusable.

Thanks! Fortunately the original disks are still available ;-)
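
Once the data is re-copied from the original disk, a quick way to confirm the
xattrs actually made it across is to compare getfattr output on a sample object
file on both sides (a sketch; the object path is only a placeholder, any file
under current/ will do):

   getfattr -d -m '.*' /mnt/current/<pg>_head/<object>
   getfattr -d -m '.*' /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/current/<pg>_head/<object>

If the copy shows none of the user.ceph.* attributes the source has, the
metadata was dropped again.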

>> rm /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
>> dd if=/dev/sdb2 of=/var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
>>
>> and then
>>
>> start ceph-osd id=$(cat /mnt/whoami)
>>
>> It crashes at https://github.com/ceph/ceph/blob/v0.72.2/src/osd/PG.cc#L2182
>> and just before the crash the log shows
>>
>> load_pgs ignoring unrecognized meta
>>
>> and the full "debug osd = 20" logs are in http://paste.ubuntu.com/7746993/
>> and these are the installed versions:
>>
>> root@bm4202:/etc/ceph# dpkg -l | grep ceph
>> ii  ceph         0.72.2-1trusty  amd64  distributed storage and file system
>> ii  ceph-common  0.72.2-1trusty  amd64  common utilities to mount and interact with a ceph storage
>> ii  python-ceph  0.72.2-1trusty  amd64  Python libraries for the Ceph distributed filesystem
>> root@bm4202:/etc/ceph# ceph --version
>> ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
>>
>> Cheers
>>
>>
>>

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: [ceph-users] OSD recovery problem

2014-07-04 Thread Wido den Hollander

On 07/04/2014 03:18 PM, Loic Dachary wrote:

Hi,

I extracted a disk with two partitions (journal and data) and copied its
content in the hope of restarting the OSD and recovering its data.

mount /dev/sdb1 /mnt
rsync -avH --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/


I think you went wrong there, rsync man page:

-a, --archive   archive mode; equals -rlptgoD (no -H,-A,-X)
-X, --xattrs    preserve extended attributes

So you didn't copy over the xattrs, which means the copied data is essentially unusable.
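
A copy that also preserves the extended attributes (plus hard links and ACLs)
would look something like this, as a sketch based on the flags quoted above
rather than a tested recipe:

   mount /dev/sdb1 /mnt
   rsync -aHAX --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/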


rm /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
dd if=/dev/sdb2 of=/var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal

and then

start ceph-osd id=$(cat /mnt/whoami)

It crashes at https://github.com/ceph/ceph/blob/v0.72.2/src/osd/PG.cc#L2182 and
just before the crash the log shows

load_pgs ignoring unrecognized meta

and the full "debug osd = 20" logs are in http://paste.ubuntu.com/7746993/ and
these are the installed versions:

root@bm4202:/etc/ceph# dpkg -l | grep ceph
ii  ceph         0.72.2-1trusty  amd64  distributed storage and file system
ii  ceph-common  0.72.2-1trusty  amd64  common utilities to mount and interact with a ceph storage
ii  python-ceph  0.72.2-1trusty  amd64  Python libraries for the Ceph distributed filesystem
root@bm4202:/etc/ceph# ceph --version
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)

Cheers







--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on


[ceph-users] OSD recovery problem

2014-07-04 Thread Loic Dachary
Hi,

I extracted a disk with two partitions (journal and data) and copied its
content in the hope of restarting the OSD and recovering its data.

   mount /dev/sdb1 /mnt
   rsync -avH --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/
   rm /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
   dd if=/dev/sdb2 of=/var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal
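
A quick check that the journal copy is byte-identical to the partition (a
sketch reusing the same device and path as above):

   cmp /dev/sdb2 /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journal && echo journal copy ok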

and then

   start ceph-osd id=$(cat /mnt/whoami)

It crashes at https://github.com/ceph/ceph/blob/v0.72.2/src/osd/PG.cc#L2182 and
just before the crash the log shows

   load_pgs ignoring unrecognized meta

and the full "debug osd = 20" logs are in http://paste.ubuntu.com/7746993/ and
these are the installed versions:

root@bm4202:/etc/ceph# dpkg -l | grep ceph
ii  ceph         0.72.2-1trusty  amd64  distributed storage and file system
ii  ceph-common  0.72.2-1trusty  amd64  common utilities to mount and interact with a ceph storage
ii  python-ceph  0.72.2-1trusty  amd64  Python libraries for the Ceph distributed filesystem
root@bm4202:/etc/ceph# ceph --version
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
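
For reference, the "debug osd = 20" level behind that paste is the sort of thing
set in ceph.conf roughly like this (a sketch; the filestore line is an extra
assumption, not something taken from the logs):

   [osd]
       debug osd = 20
       debug filestore = 20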

Cheers
-- 
Loïc Dachary, Artisan Logiciel Libre


