Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Bulst, Vadim
Okay, I'll try that.

On 12 Jun 2018 4:24 pm, Alfredo Deza  wrote:
On Tue, Jun 12, 2018 at 10:06 AM, Vadim Bulst
 wrote:
> Thanks Alfredo - I can imagine why. I edited the filter in lvm.conf and well
> it is definitely not generic. Is there another way to get this setup
> working? Or do I have to go back to filestore?

If you have an LV on top of your multipath device and use that for
ceph-volume it should work (we don't test this, but we do ensure we
work with LVs)

What is not supported is ceph-volume creating an LV from a multipath
device as input.

This is not related to the objectstore type.

>
> Cheers,
>
> Vadim
>
>
>
>
> On 12.06.2018 14:41, Alfredo Deza wrote:
>>
>> On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst 
>> wrote:
>>>
>>> I cannot release this lock! This is an expansion shelf connected with two
>>> cables to the controller. If there is no multipath management, the os
>>> would
>>> see every disk at least twice. Ceph has to deal with it somehow. I guess
>>> I'm
>>> not the only one who has a setup like this.
>>>
>> Do you have an LV on top of that dm? We don't support multipath devices:
>>
>>
>> http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support
>>
>>> Best,
>>>
>>> Vadim
>>>
>>>
>>>
>>>
>>> On 12.06.2018 12:55, Alfredo Deza wrote:
>>>>
>>>> On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst
>>>> 
>>>> wrote:
>>>>>
>>>>> Hi Alfredo,
>>>>>
>>>>> thanks for your help. Just to make this clear, /dev/dm-0 is the name of
>>>>> my
>>>>> multipath disk:
>>>>>
>>>>> root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 ->
>>>>> ../../dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50
>>>>> dm-uuid-mpath-35000c500866f8947
>>>>> ->
>>>>> ../../dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 ->
>>>>> ../../dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
>>>>> ../../dm-0
>>>>>
>>>>> If I run pvdisplay this device is not listed.
>>>>
>>>> Either way, you should not use dm devices directly. If this is a
>>>> multipath disk, then you must use that other name instead of /dev/dm-*
>>>>
>>>> I am not sure what kind of setup you have, but that mapper must
>>>> release its lock so that you can zap. We ensure that works with LVM, I
>>>> am not sure
>>>> how to do that in your environment.
>>>>
>>>> For example, with dmcrypt you get into similar issues, that is why we
>>>> check cryptsetup, so that we can make dmcrypt release that device
>>>> before zapping.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Vadim
>>>>>
>>>>>
>>>>>
>>>>> On 12.06.2018 12:40, Alfredo Deza wrote:
>>>>>>
>>>>>> On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst
>>>>>> 
>>>>>> wrote:
>>>>>>>
>>>>>>> no change:
>>>>>>>
>>>>>>>
>>>>>>> root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy
>>>>>>> /dev/dm-0
>>>>>>> --> Zapping: /dev/dm-0
>>>>>>
>>>>>> This is the problem right here. Your script is using the dm device
>>>>>> that belongs to an LV.
>>>>>>
>>>>>> What you want to do here is destroy/zap the LV. Not the dm device that
>>>>>> belongs to the LV.
>>>>>>
>>>>>> To make this clear in the future, I've created:
>>>>>> http://tracker.ceph.com/issues/24504
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Running command: /sbin/cryptsetup status /dev/mapper/
>>>>>>> stdout: /dev/mapper/ is inactive.
>>>>>>> --> Skipping --destroy because no associated physical volumes are
>>>>>>> found
>>>>>>> for
>>>>>>> /dev/dm-0
>>>>>>> Running command: wipefs --all /dev/dm-0
>>>>>>>  

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Alfredo Deza
On Tue, Jun 12, 2018 at 10:06 AM, Vadim Bulst
 wrote:
> Thanks Alfredo - I can imagine why. I edited the filter in lvm.conf and well
> it is definitely not generic. Is there another way to get this setup
> working? Or do I have to go back to filestore?

If you have an LV on top of your multipath device and use that for
ceph-volume it should work (we don't test this, but we do ensure we
work with LVs)

What is not supported is ceph-volume creating an LV from a multipath
device as input.

This is not related to the objectstore type.
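
For example, a minimal sketch of that approach - the multipath alias
/dev/mapper/mpatha and the VG/LV names below are placeholders, and lvm.conf
must not filter the mpath device out:

pvcreate /dev/mapper/mpatha
vgcreate ceph-shelf0 /dev/mapper/mpatha
lvcreate -l 100%FREE -n osd-data-0 ceph-shelf0
# pass the vg/lv to ceph-volume, not the /dev/dm-* node
ceph-volume lvm create --bluestore --data ceph-shelf0/osd-data-0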

>
> Cheers,
>
> Vadim
>
>
>
>
> On 12.06.2018 14:41, Alfredo Deza wrote:
>>
>> On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst 
>> wrote:
>>>
>>> I cannot release this lock! This is an expansion shelf connected with two
>>> cables to the controller. If there is no multipath management, the os
>>> would
>>> see every disk at least twice. Ceph has to deal with it somehow. I guess
>>> I'm
>>> not the only one who has a setup like this.
>>>
>> Do you have an LV on top of that dm? We don't support multipath devices:
>>
>>
>> http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support
>>
>>> Best,
>>>
>>> Vadim
>>>
>>>
>>>
>>>
>>> On 12.06.2018 12:55, Alfredo Deza wrote:
>>>>
>>>> On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst
>>>> 
>>>> wrote:
>>>>>
>>>>> Hi Alfredo,
>>>>>
>>>>> thanks for your help. Just to make this clear, /dev/dm-0 is the name of
>>>>> my
>>>>> multipath disk:
>>>>>
>>>>> root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 ->
>>>>> ../../dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50
>>>>> dm-uuid-mpath-35000c500866f8947
>>>>> ->
>>>>> ../../dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 ->
>>>>> ../../dm-0
>>>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
>>>>> ../../dm-0
>>>>>
>>>>> If I run pvdisplay this device is not listed.
>>>>
>>>> Either way, you should not use dm devices directly. If this is a
>>>> multipath disk, then you must use that other name instead of /dev/dm-*
>>>>
>>>> I am not sure what kind of setup you have, but that mapper must
>>>> release its lock so that you can zap. We ensure that works with LVM, I
>>>> am not sure
>>>> how to do that in your environment.
>>>>
>>>> For example, with dmcrypt you get into similar issues, that is why we
>>>> check cryptsetup, so that we can make dmcrypt release that device
>>>> before zapping.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Vadim
>>>>>
>>>>>
>>>>>
>>>>> On 12.06.2018 12:40, Alfredo Deza wrote:
>>>>>>
>>>>>> On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst
>>>>>> 
>>>>>> wrote:
>>>>>>>
>>>>>>> no change:
>>>>>>>
>>>>>>>
>>>>>>> root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy
>>>>>>> /dev/dm-0
>>>>>>> --> Zapping: /dev/dm-0
>>>>>>
>>>>>> This is the problem right here. Your script is using the dm device
>>>>>> that belongs to an LV.
>>>>>>
>>>>>> What you want to do here is destroy/zap the LV. Not the dm device that
>>>>>> belongs to the LV.
>>>>>>
>>>>>> To make this clear in the future, I've created:
>>>>>> http://tracker.ceph.com/issues/24504
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Running command: /sbin/cryptsetup status /dev/mapper/
>>>>>>> stdout: /dev/mapper/ is inactive.
>>>>>>> --> Skipping --destroy because no associated physical volumes are
>>>>>>> found
>>>>>>> for
>>>>>>> /dev/dm-0
>>>>>>> Running command: wipefs --all /dev/dm-0
>>>>>>> stderr: wipefs: error: /dev/dm-0: probing initialization failed:
&

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Vadim Bulst
Thanks Alfredo - I can imagine why. I edited the filter in lvm.conf and 
well it is definitely not generic. Is there another way to get this
setup working? Or do I have to go back to filestore?


Cheers,

Vadim



On 12.06.2018 14:41, Alfredo Deza wrote:

On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst  wrote:

I cannot release this lock! This is an expansion shelf connected with two
cables to the controller. If there is no multipath management, the os would
see every disk at least twice. Ceph has to deal with it somehow. I guess I'm
not the only one who has a setup like this.


Do you have an LV on top of that dm? We don't support multipath devices:

http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support


Best,

Vadim




On 12.06.2018 12:55, Alfredo Deza wrote:

On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst 
wrote:

Hi Alfredo,

thanks for your help. Just to make this clear, /dev/dm-0 is the name of my
multipath disk:

root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947
->
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
../../dm-0

If I run pvdisplay this device is not listed.

Either way, you should not use dm devices directly. If this is a
multipath disk, then you must use that other name instead of /dev/dm-*

I am not sure what kind of setup you have, but that mapper must
release its lock so that you can zap. We ensure that works with LVM, I
am not sure
how to do that in your environment.

For example, with dmcrypt you get into similar issues, that is why we
check cryptsetup, so that we can make dmcrypt release that device
before zapping.

Cheers,

Vadim



On 12.06.2018 12:40, Alfredo Deza wrote:

On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst

wrote:

no change:


root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
--> Zapping: /dev/dm-0

This is the problem right here. Your script is using the dm device
that belongs to an LV.

What you want to do here is destroy/zap the LV. Not the dm device that
belongs to the LV.

To make this clear in the future, I've created:
http://tracker.ceph.com/issues/24504




Running command: /sbin/cryptsetup status /dev/mapper/
stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found
for
/dev/dm-0
Running command: wipefs --all /dev/dm-0
stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device
or
resource busy
-->  RuntimeError: command returned non-zero exit status: 1


On 12.06.2018 09:03, Linh Vu wrote:

ceph-volume lvm zap --destroy $DEVICE


From: ceph-users  on behalf of Vadim
Bulst 
Sent: Tuesday, 12 June 2018 4:46:44 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Filestore -> Bluestore


Thanks Sergey.

Could you specify your answer a bit more? When I look into the manpage
of
ceph-volume I couldn't find an option named "--destroy".

I'd just like to make clear - this script has already migrated several
servers. The problem is appearing when it should migrate devices in the
expansion shelf.

"-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed"

Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:

“Device or resource busy” error arises when no “--destroy” option is
passed
to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst ,
wrote:

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
 DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
 echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
 ceph osd out ${OSD}
   while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for
full
evacuation"; sleep 60 ; done
 systemctl stop ceph-osd@${OSD}
 umount /var/lib/ceph/osd/ceph-${OSD}
 /usr/sbin/ceph-volume lvm zap ${DEV}
 ceph osd destroy ${OSD} --yes-i-really-mean-it
 /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our
nodes.

/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapp

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Vadim Bulst

Yeah I've tried it - no success.


On 12.06.2018 15:41, Sergey Malinin wrote:

You should pass underlying device instead of DM volume to ceph-volume.
On Jun 12, 2018, 15:41 +0300, Alfredo Deza , wrote:
On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst 
 wrote:
I cannot release this lock! This is an expansion shelf connected 
with two
cables to the controller. If there is no multipath management, the 
os would
see every disk at least twice. Ceph has to deal with it somehow. I 
guess I'm

not the only one who has a setup like this.



Do you have an LV on top of that dm? We don't support multipath devices:

http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support


Best,

Vadim




On 12.06.2018 12:55, Alfredo Deza wrote:


On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst 


wrote:


Hi Alfredo,

thanks for your help. Just to make this clear, /dev/dm-0 is the
name of my

multipath disk:

root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm-name-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947
->
../../dm-0
lrwxrwxrwx 1 root root 10 Jun 12 07:50 scsi-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root 10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
../../dm-0

If I run pvdisplay this device is not listed.


Either way, you should not use dm devices directly. If this is a
multipath disk, then you must use that other name instead of /dev/dm-*

I am not sure what kind of setup you have, but that mapper must
release its lock so that you can zap. We ensure that works with LVM, I
am not sure
how to do that in your environment.

For example, with dmcrypt you get into similar issues, that is why we
check cryptsetup, so that we can make dmcrypt release that device
before zapping.


Cheers,

Vadim



On 12.06.2018 12:40, Alfredo Deza wrote:


On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst

wrote:


no change:


root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy 
/dev/dm-0

--> Zapping: /dev/dm-0


This is the problem right here. Your script is using the dm device
that belongs to an LV.

What you want to do here is destroy/zap the LV. Not the dm device 
that

belongs to the LV.

To make this clear in the future, I've created:
http://tracker.ceph.com/issues/24504




Running command: /sbin/cryptsetup status /dev/mapper/
stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes 
are found

for
/dev/dm-0
Running command: wipefs --all /dev/dm-0
stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device
or
resource busy
--> RuntimeError: command returned non-zero exit status: 1


On 12.06.2018 09:03, Linh Vu wrote:

ceph-volume lvm zap --destroy $DEVICE


From: ceph-users  on behalf 
of Vadim

Bulst 
Sent: Tuesday, 12 June 2018 4:46:44 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Filestore -> Bluestore


Thanks Sergey.

Could you specify your answer a bit more? When I look into the 
manpage

of
ceph-volume I couldn't find an option named "--destroy".

I'd just like to make clear - this script has already migrated several
servers. The problem is appearing when it should migrate devices 
in the

expansion shelf.

"--> RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed"

Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:

“Device or resource busy” error arises when no “--destroy” option is
passed
to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst 
,

wrote:

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little 
script:


#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
ceph osd out ${OSD}
while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for
full
evacuation"; sleep 60 ; done
systemctl stop ceph-osd@${OSD}
umount /var/lib/ceph/osd/ceph-${OSD}
/usr/sbin/ceph-volume lvm zap ${DEV}
ceph osd destroy ${OSD} --yes-i-really-mean-it
/usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our
nodes.

/usr/sbin/ceph-volume lvm zap ${DEV} is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
stdout: /dev/mapper/ is inactive.
Running c

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Sergey Malinin
You should pass underlying device instead of DM volume to ceph-volume.
On Jun 12, 2018, 15:41 +0300, Alfredo Deza , wrote:
> On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst  
> wrote:
> > I cannot release this lock! This is an expansion shelf connected with two
> > cables to the controller. If there is no multipath management, the os would
> > see every disk at least twice. Ceph has to deal with it somehow. I guess I'm
> > not the only one who has a setup like this.
> >
>
> Do you have an LV on top of that dm? We don't support multipath devices:
>
> http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support
>
> > Best,
> >
> > Vadim
> >
> >
> >
> >
> > On 12.06.2018 12:55, Alfredo Deza wrote:
> > >
> > > On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst 
> > > wrote:
> > > >
> > > > Hi Alfredo,
> > > >
> > > > thanks for your help. Just to make this clear, /dev/dm-0 is the name of
> > > > my
> > > > multipath disk:
> > > >
> > > > root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
> > > > lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm-name-35000c500866f8947 ->
> > > > ../../dm-0
> > > > lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947
> > > > ->
> > > > ../../dm-0
> > > > lrwxrwxrwx 1 root root 10 Jun 12 07:50 scsi-35000c500866f8947 ->
> > > > ../../dm-0
> > > > lrwxrwxrwx 1 root root 10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
> > > > ../../dm-0
> > > >
> > > > If I run pvdisplay this device is not listed.
> > >
> > > Either way, you should not use dm devices directly. If this is a
> > > multipath disk, then you must use that other name instead of /dev/dm-*
> > >
> > > I am not sure what kind of setup you have, but that mapper must
> > > release its lock so that you can zap. We ensure that works with LVM, I
> > > am not sure
> > > how to do that in your environment.
> > >
> > > For example, with dmcrypt you get into similar issues, that is why we
> > > check cryptsetup, so that we can make dmcrypt release that device
> > > before zapping.
> > > >
> > > > Cheers,
> > > >
> > > > Vadim
> > > >
> > > >
> > > >
> > > > On 12.06.2018 12:40, Alfredo Deza wrote:
> > > > >
> > > > > On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst
> > > > > 
> > > > > wrote:
> > > > > >
> > > > > > no change:
> > > > > >
> > > > > >
> > > > > > root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy 
> > > > > > /dev/dm-0
> > > > > > --> Zapping: /dev/dm-0
> > > > >
> > > > > This is the problem right here. Your script is using the dm device
> > > > > that belongs to an LV.
> > > > >
> > > > > What you want to do here is destroy/zap the LV. Not the dm device that
> > > > > belongs to the LV.
> > > > >
> > > > > To make this clear in the future, I've created:
> > > > > http://tracker.ceph.com/issues/24504
> > > > >
> > > > >
> > > > >
> > > > > > Running command: /sbin/cryptsetup status /dev/mapper/
> > > > > > stdout: /dev/mapper/ is inactive.
> > > > > > --> Skipping --destroy because no associated physical volumes are 
> > > > > > found
> > > > > > for
> > > > > > /dev/dm-0
> > > > > > Running command: wipefs --all /dev/dm-0
> > > > > > stderr: wipefs: error: /dev/dm-0: probing initialization failed:
> > > > > > Device
> > > > > > or
> > > > > > resource busy
> > > > > > --> RuntimeError: command returned non-zero exit status: 1
> > > > > >
> > > > > >
> > > > > > On 12.06.2018 09:03, Linh Vu wrote:
> > > > > >
> > > > > > ceph-volume lvm zap --destroy $DEVICE
> > > > > >
> > > > > > 
> > > > > > From: ceph-users  on behalf of 
> > > > > > Vadim
> > > > > > Bulst 
> > > > > > Sent: Tuesday, 12 June 2018 4:46:44 PM
> &

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Alfredo Deza
On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst  wrote:
> I cannot release this lock! This is an expansion shelf connected with two
> cables to the controller. If there is no multipath management, the os would
> see every disk at least twice. Ceph has to deal with it somehow. I guess I'm
> not the only one who has a setup like this.
>

Do you have an LV on top of that dm? We don't support multipath devices:

http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#multipath-support

> Best,
>
> Vadim
>
>
>
>
> On 12.06.2018 12:55, Alfredo Deza wrote:
>>
>> On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst 
>> wrote:
>>>
>>> Hi Alfredo,
>>>
>>> thanks for your help. Just to make this clear, /dev/dm-0 is the name of my
>>> multipath disk:
>>>
>>> root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 ->
>>> ../../dm-0
>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947
>>> ->
>>> ../../dm-0
>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 ->
>>> ../../dm-0
>>> lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
>>> ../../dm-0
>>>
>>> If I run pvdisplay this device is not listed.
>>
>> Either way, you should not use dm devices directly. If this is a
>> multipath disk, then you must use that other name instead of /dev/dm-*
>>
>> I am not sure what kind of setup you have, but that mapper must
>> release its lock so that you can zap. We ensure that works with LVM, I
>> am not sure
>> how to do that in your environment.
>>
>> For example, with dmcrypt you get into similar issues, that is why we
>> check crypsetup, so that we can make dmcrypt release that device
>> before zapping.
>>>
>>> Cheers,
>>>
>>> Vadim
>>>
>>>
>>>
>>> On 12.06.2018 12:40, Alfredo Deza wrote:
>>>>
>>>> On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst
>>>> 
>>>> wrote:
>>>>>
>>>>> no change:
>>>>>
>>>>>
>>>>> root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
>>>>> --> Zapping: /dev/dm-0
>>>>
>>>> This is the problem right here. Your script is using the dm device
>>>> that belongs to an LV.
>>>>
>>>> What you want to do here is destroy/zap the LV. Not the dm device that
>>>> belongs to the LV.
>>>>
>>>> To make this clear in the future, I've created:
>>>> http://tracker.ceph.com/issues/24504
>>>>
>>>>
>>>>
>>>>> Running command: /sbin/cryptsetup status /dev/mapper/
>>>>>stdout: /dev/mapper/ is inactive.
>>>>> --> Skipping --destroy because no associated physical volumes are found
>>>>> for
>>>>> /dev/dm-0
>>>>> Running command: wipefs --all /dev/dm-0
>>>>>stderr: wipefs: error: /dev/dm-0: probing initialization failed:
>>>>> Device
>>>>> or
>>>>> resource busy
>>>>> -->  RuntimeError: command returned non-zero exit status: 1
>>>>>
>>>>>
>>>>> On 12.06.2018 09:03, Linh Vu wrote:
>>>>>
>>>>> ceph-volume lvm zap --destroy $DEVICE
>>>>>
>>>>> 
>>>>> From: ceph-users  on behalf of Vadim
>>>>> Bulst 
>>>>> Sent: Tuesday, 12 June 2018 4:46:44 PM
>>>>> To: ceph-users@lists.ceph.com
>>>>> Subject: Re: [ceph-users] Filestore -> Bluestore
>>>>>
>>>>>
>>>>> Thanks Sergey.
>>>>>
>>>>> Could you specify your answer a bit more? When I look into the manpage
>>>>> of
>>>>> ceph-volume I couldn't find an option named "--destroy".
>>>>>
>>>>> I'd just like to make clear - this script has already migrated several
>>>>> servers. The problem is appearing when it should migrate devices in the
>>>>> expansion shelf.
>>>>>
>>>>> "-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
>>>>> existing device is needed"
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Vadim
>>>>>
>

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Vadim Bulst
I cannot release this lock! This is an expansion shelf connected with 
two cables to the controller. If there is no multipath management, the 
os would see every disk at least twice. Ceph has to deal with it 
somehow. I guess I'm not the only one who has a setup like this.


Best,

Vadim



On 12.06.2018 12:55, Alfredo Deza wrote:

On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst  wrote:

Hi Alfredo,

thanks for your help. Just to make this clear, /dev/dm-0 is the name of my
multipath disk:

root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
../../dm-0

If I run pvdisplay this device is not listed.

Either way, you should not use dm devices directly. If this is a
multipath disk, then you must use that other name instead of /dev/dm-*

I am not sure what kind of setup you have, but that mapper must
release its lock so that you can zap. We ensure that works with LVM, I
am not sure
how to do that in your environment.

For example, with dmcrypt you get into similar issues, that is why we
check cryptsetup, so that we can make dmcrypt release that device
before zapping.

Cheers,

Vadim



On 12.06.2018 12:40, Alfredo Deza wrote:

On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst 
wrote:

no change:


root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
--> Zapping: /dev/dm-0

This is the problem right here. Your script is using the dm device
that belongs to an LV.

What you want to do here is destroy/zap the LV. Not the dm device that
belongs to the LV.

To make this clear in the future, I've created:
http://tracker.ceph.com/issues/24504




Running command: /sbin/cryptsetup status /dev/mapper/
   stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found
for
/dev/dm-0
Running command: wipefs --all /dev/dm-0
   stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device
or
resource busy
-->  RuntimeError: command returned non-zero exit status: 1


On 12.06.2018 09:03, Linh Vu wrote:

ceph-volume lvm zap --destroy $DEVICE


From: ceph-users  on behalf of Vadim
Bulst 
Sent: Tuesday, 12 June 2018 4:46:44 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Filestore -> Bluestore


Thanks Sergey.

Could you specify your answer a bit more? When I look into the manpage of
ceph-volume I couldn't find an option named "--destroy".

I'd just like to make clear - this script has already migrated several
servers. The problem is appearing when it should migrate devices in the
expansion shelf.

"-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed"

Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:

“Device or resource busy” error arises when no “--destroy” option is
passed
to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst ,
wrote:

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
ceph osd out ${OSD}
  while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
evacuation"; sleep 60 ; done
systemctl stop ceph-osd@${OSD}
umount /var/lib/ceph/osd/ceph-${OSD}
/usr/sbin/ceph-volume lvm zap ${DEV}
ceph osd destroy ${OSD} --yes-i-really-mean-it
/usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our
nodes.

/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
   stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
   stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device or resource busy
-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Alfredo Deza
On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst  wrote:
> Hi Alfredo,
>
> thanks for your help. Just to make this clear, /dev/dm-0 is the name of my
> multipath disk:
>
> root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
> lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 ->
> ../../dm-0
> lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947 ->
> ../../dm-0
> lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 ->
> ../../dm-0
> lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 ->
> ../../dm-0
>
> If I run pvdisplay this device is not listed.

Either way, you should not use dm devices directly. If this is a
multipath disk, then you must use that other name instead of /dev/dm-*

I am not sure what kind of setup you have, but that mapper must
release its lock so that you can zap. We ensure that works with LVM, I
am not sure
how to do that in your environment.

For example, with dmcrypt you get into similar issues, that is why we
check cryptsetup, so that we can make dmcrypt release that device
before zapping.
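
As a rough illustration of that dmcrypt case (the mapping and device names
here are made up; this is only the analogy, not a fix for the multipath
situation):

# close the mapping if it is still open, so the underlying device is released
if cryptsetup status osd-crypt >/dev/null 2>&1; then
    cryptsetup luksClose osd-crypt
fi
wipefs --all /dev/sdX    # /dev/sdX is a placeholder for the now-free device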
>
> Cheers,
>
> Vadim
>
>
>
> On 12.06.2018 12:40, Alfredo Deza wrote:
>>
>> On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst 
>> wrote:
>>>
>>> no change:
>>>
>>>
>>> root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
>>> --> Zapping: /dev/dm-0
>>
>> This is the problem right here. Your script is using the dm device
>> that belongs to an LV.
>>
>> What you want to do here is destroy/zap the LV. Not the dm device that
>> belongs to the LV.
>>
>> To make this clear in the future, I've created:
>> http://tracker.ceph.com/issues/24504
>>
>>
>>
>>> Running command: /sbin/cryptsetup status /dev/mapper/
>>>   stdout: /dev/mapper/ is inactive.
>>> --> Skipping --destroy because no associated physical volumes are found
>>> for
>>> /dev/dm-0
>>> Running command: wipefs --all /dev/dm-0
>>>   stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device
>>> or
>>> resource busy
>>> -->  RuntimeError: command returned non-zero exit status: 1
>>>
>>>
>>> On 12.06.2018 09:03, Linh Vu wrote:
>>>
>>> ceph-volume lvm zap --destroy $DEVICE
>>>
>>> 
>>> From: ceph-users  on behalf of Vadim
>>> Bulst 
>>> Sent: Tuesday, 12 June 2018 4:46:44 PM
>>> To: ceph-users@lists.ceph.com
>>> Subject: Re: [ceph-users] Filestore -> Bluestore
>>>
>>>
>>> Thanks Sergey.
>>>
>>> Could you specify your answer a bit more? When I look into the manpage of
>>> ceph-volume I couldn't find an option named "--destroy".
>>>
>>> I'd just like to make clear - this script has already migrated several
>>> servers. The problem is appearing when it should migrate devices in the
>>> expansion shelf.
>>>
>>> "-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
>>> existing device is needed"
>>>
>>> Cheers,
>>>
>>> Vadim
>>>
>>>
>>> I would say the handling of devices
>>> On 11.06.2018 23:58, Sergey Malinin wrote:
>>>
>>> “Device or resource busy” error arises when no “--destroy” option is
>>> passed
>>> to ceph-volume.
>>> On Jun 11, 2018, 22:44 +0300, Vadim Bulst ,
>>> wrote:
>>>
>>> Dear Cephers,
>>>
>>> I'm trying to migrate our OSDs to Bluestore using this little script:
>>>
>>> #!/bin/bash
>>> HOSTNAME=$(hostname -s)
>>> OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
>>> contains("filestore")) ]' | jq '[.[] | select(.hostname |
>>> contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
>>> IFS=' ' read -a OSDARRAY <<<$OSDS
>>> for OSD in "${OSDARRAY[@]}"; do
>>>DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
>>> .backend_filestore_dev_node' | sed 's/"//g'`
>>>echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
>>>ceph osd out ${OSD}
>>>  while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
>>> evacuation"; sleep 60 ; done
>>>systemctl stop ceph-osd@${OSD}
>>>umount /var/lib/ceph/osd/ceph-${OSD}
>>>/usr/sbin/ceph-volume lvm zap ${DEV}
>>>ceph osd destroy ${

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Vadim Bulst

Hi Alfredo,

thanks for your help. Just to make this clear, /dev/dm-0 is the name of
my multipath disk:


root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-name-35000c500866f8947 -> 
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 dm-uuid-mpath-35000c500866f8947 
-> ../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 scsi-35000c500866f8947 -> 
../../dm-0
lrwxrwxrwx 1 root root   10 Jun 12 07:50 wwn-0x5000c500866f8947 -> 
../../dm-0


If I run pvdisplay this device is not listed.

Cheers,

Vadim


On 12.06.2018 12:40, Alfredo Deza wrote:

On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst  wrote:

no change:


root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
--> Zapping: /dev/dm-0

This is the problem right here. Your script is using the dm device
that belongs to an LV.

What you want to do here is destroy/zap the LV. Not the dm device that
belongs to the LV.

To make this clear in the future, I've created:
http://tracker.ceph.com/issues/24504




Running command: /sbin/cryptsetup status /dev/mapper/
  stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found for
/dev/dm-0
Running command: wipefs --all /dev/dm-0
  stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device or
resource busy
-->  RuntimeError: command returned non-zero exit status: 1


On 12.06.2018 09:03, Linh Vu wrote:

ceph-volume lvm zap --destroy $DEVICE


From: ceph-users  on behalf of Vadim
Bulst 
Sent: Tuesday, 12 June 2018 4:46:44 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Filestore -> Bluestore


Thanks Sergey.

Could you specify your answer a bit more? When I look into the manpage of
ceph-volume I couldn't find an option named "--destroy".

I'd just like to make clear - this script has already migrated several
servers. The problem is appearing when it should migrate devices in the
expansion shelf.

"-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed"

Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:

“Device or resource busy” error arises when no “--destroy” option is passed
to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst ,
wrote:

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
   DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
   echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
   ceph osd out ${OSD}
 while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
evacuation"; sleep 60 ; done
   systemctl stop ceph-osd@${OSD}
   umount /var/lib/ceph/osd/ceph-${OSD}
   /usr/sbin/ceph-volume lvm zap ${DEV}
   ceph osd destroy ${OSD} --yes-i-really-mean-it
   /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our nodes.

/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
  stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
  stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device or resource busy
-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
-i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with
--osd-id
Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
  stderr: destroyed osd.1

-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed


Does anybody know how to solve this problem?

Cheers,

Vadim

--
Vadim Bulst

Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10

phone: +49-341-97-33380
mail: vadim.bu...@uni-leipzig.de


Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Alfredo Deza
On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst  wrote:
> no change:
>
>
> root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
> --> Zapping: /dev/dm-0

This is the problem right here. Your script is using the dm device
that belongs to an LV.

What you want to do here is destroy/zap the LV. Not the dm device that
belongs to the LV.

To make this clear in the future, I've created:
http://tracker.ceph.com/issues/24504
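
A quick way to check what kind of device-mapper node you are dealing with,
and to zap by vg/lv instead (the vg/lv name below is a placeholder - use
whatever "lvs" reports on your system):

lsblk -no NAME,TYPE /dev/dm-0    # TYPE is "lvm" for an LV, "mpath" for a multipath map
ceph-volume lvm zap --destroy ceph-shelf0/osd-data-0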



> Running command: /sbin/cryptsetup status /dev/mapper/
>  stdout: /dev/mapper/ is inactive.
> --> Skipping --destroy because no associated physical volumes are found for
> /dev/dm-0
> Running command: wipefs --all /dev/dm-0
>  stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device or
> resource busy
> -->  RuntimeError: command returned non-zero exit status: 1
>
>
> On 12.06.2018 09:03, Linh Vu wrote:
>
> ceph-volume lvm zap --destroy $DEVICE
>
> 
> From: ceph-users  on behalf of Vadim
> Bulst 
> Sent: Tuesday, 12 June 2018 4:46:44 PM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Filestore -> Bluestore
>
>
> Thanks Sergey.
>
> Could you specify your answer a bit more? When I look into the manpage of
> ceph-volume I couldn't find an option named "--destroy".
>
> I'd just like to make clear - this script has already migrated several
> servers. The problem is appearing when it should migrate devices in the
> expansion shelf.
>
> "-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
> existing device is needed"
>
> Cheers,
>
> Vadim
>
>
> I would say the handling of devices
> On 11.06.2018 23:58, Sergey Malinin wrote:
>
> “Device or resource busy” error arises when no “--destroy” option is passed
> to ceph-volume.
> On Jun 11, 2018, 22:44 +0300, Vadim Bulst ,
> wrote:
>
> Dear Cephers,
>
> I'm trying to migrate our OSDs to Bluestore using this little script:
>
> #!/bin/bash
> HOSTNAME=$(hostname -s)
> OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
> contains("filestore")) ]' | jq '[.[] | select(.hostname |
> contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
> IFS=' ' read -a OSDARRAY <<<$OSDS
> for OSD in "${OSDARRAY[@]}"; do
>   DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
> .backend_filestore_dev_node' | sed 's/"//g'`
>   echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
>   ceph osd out ${OSD}
> while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
> evacuation"; sleep 60 ; done
>   systemctl stop ceph-osd@${OSD}
>   umount /var/lib/ceph/osd/ceph-${OSD}
>   /usr/sbin/ceph-volume lvm zap ${DEV}
>   ceph osd destroy ${OSD} --yes-i-really-mean-it
>   /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
> --osd-id ${OSD}
> done
>
> Under normal circumstances this works flawlessly. Unfortunately, in our
> case we have expansion shelves connected as multipath devices to our nodes.
>
> /usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:
>
> OSD(s) 1 are safe to destroy without reducing data durability.
> --> Zapping: /dev/dm-0
> Running command: /sbin/cryptsetup status /dev/mapper/
>  stdout: /dev/mapper/ is inactive.
> Running command: wipefs --all /dev/dm-0
>  stderr: wipefs: error: /dev/dm-0: probing initialization failed:
> Device or resource busy
> -->  RuntimeError: command returned non-zero exit status: 1
> destroyed osd.1
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> osd tree -f json
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> -i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
> --> Was unable to complete a new OSD, will rollback changes
> --> OSD will be destroyed, keeping the ID because it was provided with
> --osd-id
> Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
>  stderr: destroyed osd.1
>
> -->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
> existing device is needed
>
>
> Does anybody know how to solve this problem?
>
> Cheers,
>
> Vadim
>
> --
> Vadim Bulst
>
> Universität Leipzig / URZ
> 04109 Leipzig, Augustusplatz 10
>
> phone: +49-341-97-33380
> mail: vadim.bu...@uni-leipzig.de
>

Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Vadim Bulst

no change:


root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found 
for /dev/dm-0

Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed: 
Device or resource busy

-->  RuntimeError: command returned non-zero exit status: 1


On 12.06.2018 09:03, Linh Vu wrote:


ceph-volume lvm zap --destroy $DEVICE


*From:* ceph-users  on behalf of 
Vadim Bulst 

*Sent:* Tuesday, 12 June 2018 4:46:44 PM
*To:* ceph-users@lists.ceph.com
*Subject:* Re: [ceph-users] Filestore -> Bluestore

Thanks Sergey.

Could you specify your answer a bit more? When I look into the manpage 
of ceph-volume I couldn't find an option named "--destroy".


I'd just like to make clear - this script has already migrated several
servers. The problem is appearing when it should migrate devices in 
the expansion shelf.


"-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an 
existing device is needed"


Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:
“Device or resource busy” error arises when no “--destroy” option is
passed to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst 
 <mailto:vadim.bu...@uni-leipzig.de>, wrote:

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
  DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
  echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
  ceph osd out ${OSD}
    while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
evacuation"; sleep 60 ; done
  systemctl stop ceph-osd@${OSD}
  umount /var/lib/ceph/osd/ceph-${OSD}
  /usr/sbin/ceph-volume lvm zap ${DEV}
  ceph osd destroy ${OSD} --yes-i-really-mean-it
  /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our
nodes.


/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device or resource busy
-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
-i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with
--osd-id
Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
 stderr: destroyed osd.1

-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed


Does anybody know how to solve this problem?

Cheers,

Vadim

--
Vadim Bulst

Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10

phone: +49-341-97-33380 
mail: vadim.bu...@uni-leipzig.de <mailto:vadim.bu...@uni-leipzig.de>



--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: ++49-341-97-33380
mail:vadim.bu...@uni-leipzig.de <mailto:vadim.bu...@uni-leipzig.de>


--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: ++49-341-97-33380
mail:vadim.bu...@uni-leipzig.de





Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Linh Vu
ceph-volume lvm zap --destroy $DEVICE


From: ceph-users  on behalf of Vadim Bulst 

Sent: Tuesday, 12 June 2018 4:46:44 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Filestore -> Bluestore


Thanks Sergey.

Could you specify your answer a bit more? When I look into the manpage of 
ceph-volume I couldn't find an option named "--destroy".

I'd just like to make clear - this script has already migrated several servers.
The problem is appearing when it should migrate devices in the expansion shelf.

"-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an existing 
device is needed"

Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:
“Device or resource busy” error arises when no “--destroy” option is passed to
ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst 
<mailto:vadim.bu...@uni-leipzig.de>, wrote:
Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
  DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
  echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
  ceph osd out ${OSD}
while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
evacuation"; sleep 60 ; done
  systemctl stop ceph-osd@${OSD}
  umount /var/lib/ceph/osd/ceph-${OSD}
  /usr/sbin/ceph-volume lvm zap ${DEV}
  ceph osd destroy ${OSD} --yes-i-really-mean-it
  /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our nodes.

/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device or resource busy
-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
-i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with
--osd-id
Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
 stderr: destroyed osd.1

-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed


Does anybody know how to solve this problem?

Cheers,

Vadim

--
Vadim Bulst

Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10

phone: +49-341-97-33380
mail: vadim.bu...@uni-leipzig.de<mailto:vadim.bu...@uni-leipzig.de>



--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: ++49-341-97-33380
mail:vadim.bu...@uni-leipzig.de<mailto:vadim.bu...@uni-leipzig.de>


Re: [ceph-users] Filestore -> Bluestore

2018-06-12 Thread Vadim Bulst

Thanks Sergey.

Could you specify your answer a bit more? When I look into the manpage 
of ceph-volume I couldn't find an option named "--destroy".


I'd just like to make clear - this script has already migrated several
servers. The problem is appearing when it should migrate devices in the 
expansion shelf.


"-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an 
existing device is needed"


Cheers,

Vadim


I would say the handling of devices
On 11.06.2018 23:58, Sergey Malinin wrote:
“Device or resource busy” error arises when no “--destroy” option is
passed to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst 
, wrote:

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
contains("filestore")) ]' | jq '[.[] | select(.hostname |
contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
  DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
.backend_filestore_dev_node' | sed 's/"//g'`
  echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
  ceph osd out ${OSD}
    while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
evacuation"; sleep 60 ; done
  systemctl stop ceph-osd@${OSD}
  umount /var/lib/ceph/osd/ceph-${OSD}
  /usr/sbin/ceph-volume lvm zap ${DEV}
  ceph osd destroy ${OSD} --yes-i-really-mean-it
  /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
--osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our
nodes.


/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed:
Device or resource busy
-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
-i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with
--osd-id
Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
 stderr: destroyed osd.1

-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
existing device is needed


Does anybody know how to solve this problem?

Cheers,

Vadim

--
Vadim Bulst

Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10

phone: +49-341-97-33380 
mail: vadim.bu...@uni-leipzig.de



--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: ++49-341-97-33380
mail:vadim.bu...@uni-leipzig.de





Re: [ceph-users] Filestore -> Bluestore

2018-06-11 Thread Sergey Malinin
“Device or resource busy” error arises when no “--destroy” option is passed to
ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst , wrote:
> Dear Cephers,
>
> I'm trying to migrate our OSDs to Bluestore using this little script:
>
> #!/bin/bash
> HOSTNAME=$(hostname -s)
> OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore |
> contains("filestore")) ]' | jq '[.[] | select(.hostname |
> contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
> IFS=' ' read -a OSDARRAY <<<$OSDS
> for OSD in "${OSDARRAY[@]}"; do
>   DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') |
> .backend_filestore_dev_node' | sed 's/"//g'`
>   echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
>   ceph osd out ${OSD}
>     while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full
> evacuation"; sleep 60 ; done
>   systemctl stop ceph-osd@${OSD}
>   umount /var/lib/ceph/osd/ceph-${OSD}
>   /usr/sbin/ceph-volume lvm zap ${DEV}
>   ceph osd destroy ${OSD} --yes-i-really-mean-it
>   /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV}
> --osd-id ${OSD}
> done
>
> Under normal circumstances this works flawlessly. Unfortunately, in our
> case we have expansion shelves connected as multipath devices to our nodes.
>
> /usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:
>
> OSD(s) 1 are safe to destroy without reducing data durability.
> --> Zapping: /dev/dm-0
> Running command: /sbin/cryptsetup status /dev/mapper/
>  stdout: /dev/mapper/ is inactive.
> Running command: wipefs --all /dev/dm-0
>  stderr: wipefs: error: /dev/dm-0: probing initialization failed:
> Device or resource busy
> -->  RuntimeError: command returned non-zero exit status: 1
> destroyed osd.1
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> osd tree -f json
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> -i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
> --> Was unable to complete a new OSD, will rollback changes
> --> OSD will be destroyed, keeping the ID because it was provided with
> --osd-id
> Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
>  stderr: destroyed osd.1
>
> -->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an
> existing device is needed
>
>
> Does anybody know how to solve this problem?
>
> Cheers,
>
> Vadim
>
> --
> Vadim Bulst
>
> Universität Leipzig / URZ
> 04109 Leipzig, Augustusplatz 10
>
> phone: +49-341-97-33380
> mail: vadim.bu...@uni-leipzig.de
>


[ceph-users] Filestore -> Bluestore

2018-06-11 Thread Vadim Bulst

Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)
# Collect the IDs of all Filestore OSDs that live on this host.
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore | contains("filestore")) ]' | jq '[.[] | select(.hostname | contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS
for OSD in "${OSDARRAY[@]}"; do
  # Look up the data device backing this OSD.
  DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') | .backend_filestore_dev_node' | sed 's/"//g'`
  echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
  ceph osd out ${OSD}
  # Wait until the data has been evacuated and the OSD is safe to destroy.
  while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full evacuation"; sleep 60 ; done
  # Stop the daemon, wipe the old Filestore device, and re-create the OSD
  # as Bluestore under the same OSD id.
  systemctl stop ceph-osd@${OSD}
  umount /var/lib/ceph/osd/ceph-${OSD}
  /usr/sbin/ceph-volume lvm zap ${DEV}
  ceph osd destroy ${OSD} --yes-i-really-mean-it
  /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV} --osd-id ${OSD}
done

Under normal circumstances this works flawlessly. Unfortunately, in our
case we have expansion shelves connected as multipath devices to our nodes.


/usr/sbin/ceph-volume lvm zap ${DEV}  is breaking with an error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed: 
Device or resource busy

-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name 
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring 
osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name 
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring 
-i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with 
--osd-id

Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
 stderr: destroyed osd.1

-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an 
existing device is needed



Does anybody know how to solve this problem?

Cheers,

Vadim

--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: +49-341-97-33380
mail:vadim.bu...@uni-leipzig.de
