Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Ok, sorry about that.

On Sat, February 28, 2015 9:13 am, Chris Murphy wrote:
> OK. It's extremely rude to cross-post the same question across multiple
> lists like this at exactly the same time, and without at least indicating
> the cross-posting. I just replied to the one on Fedora users before I saw
> this post. This sort of thing wastes people's time. Pick one list based on
> the best chance of a response and give it 24 hours.
>
>
> Chris Murphy


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Hello James and All,

For your information, here's what the listing looks like:

[root@localhost ~]# pvs
  PV VG Fmt  Attr PSize PFree
  /dev/sda1  vg_hosting lvm2 a--  1.82t0
  /dev/sdb2  vg_hosting lvm2 a--  1.82t0
  /dev/sdc1  vg_hosting lvm2 a--  1.82t0
  /dev/sdd1  vg_hosting lvm2 a--  1.82t0
[root@localhost ~]# lvs
  LV      VG         Attr   LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg_hosting -wi-s-  7.22t
  lv_root vg_hosting -wi-a- 50.00g
  lv_swap vg_hosting -wi-a- 11.80g
[root@localhost ~]# vgs
  VG #PV #LV #SN Attr   VSize VFree
  vg_hosting   4   3   0 wz--n- 7.28t0
[root@localhost ~]#

The problem is, when I do:

[root@localhost ~]# vgchange -a y
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  3 logical volume(s) in volume group "vg_hosting" now active

Only lv_root and lv_swap are activated; lv_home is not, failing with the
error above on the vgchange command.

How can I activate lv_home with only the 3 remaining PVs?
The PV /dev/sdb2 is the one that was lost. I created it on a new blank hard
disk and restored the VG using:

# pvcreate --restorefile ... --uuid ... /dev/sdb2
# vgcfgrestore --file ... vg_hosting
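A minimal sketch of partial activation for this situation, assuming the VG
name from this thread; newer lvm2 releases accept `--activationmode partial`,
while older ones use the `--partial` flag:

```shell
# Activate despite the missing PV; extents that lived on the lost disk
# are mapped to an error target, so reads of them fail cleanly.
vgchange -ay --activationmode partial vg_hosting

# On older lvm2 releases the equivalent is:
# vgchange -ay --partial vg_hosting
```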

Regards,
Khem

On Sat, February 28, 2015 7:42 am, Khemara Lyn wrote:
> Dear James,
>
>
> Thank you for being quick to help.
> Yes, I could see all of them:
>
>
> # vgs
> # lvs
> # pvs
>
>
> Regards,
> Khem
>
>
> On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
>> - Original Message -
>> | Dear All,
>> |
>> | I am in desperate need of LVM data rescue for my server.
>> | I have a VG called vg_hosting consisting of 4 PVs, each contained in a
>> | separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
>> | This LV, lv_home, was created to use all the space of the 4 PVs.
>> |
>> | Right now, the third hard drive is damaged; therefore the third PV
>> | (/dev/sdc1) cannot be accessed anymore. I would like to recover
>> | whatever is left in the other 3 PVs (/dev/sda1, /dev/sdb1, and
>> | /dev/sdd1).
>> |
>> | I have tried with the following:
>> |
>> | 1. Removing the broken PV:
>> |
>> | # vgreduce --force vg_hosting /dev/sdc1
>> |   Physical volume "/dev/sdc1" still in use
>> |
>> | # pvmove /dev/sdc1
>> |   No extents available for allocation
>>
>>
>>
>> This would indicate that you don't have sufficient extents to move the
>> data off of this disk.  If you have another disk then you could try
>> adding it to the VG and then moving the extents.
>>
>> | 2. Replacing the broken PV:
>> |
>> | I was able to create a new PV and restore the VG Config/meta data:
>> |
>> | # pvcreate --restorefile ... --uuid ... /dev/sdc1
>> | # vgcfgrestore --file ... vg_hosting
>> |
>> | However, vgchange would give this error:
>> |
>> | # vgchange -a y
>> |  device-mapper: resume ioctl on  failed: Invalid argument
>> |  Unable to resume vg_hosting-lv_home (253:4)
>> |  0 logical volume(s) in volume group "vg_hosting" now active
>>
>>
>>
>> There should be no need to create a PV and then restore the VG unless
>> the entire VG is damaged.  The configuration should still be available
>> on the other disks and adding the new PV and moving the extents should
>> be enough.
>>
>> | Could someone help me, please?
>> | I'm in dire need of help to save the data, at least some of it if
>> | possible.
>>
>> Can you not see the PV/VG/LV at all?
>>
>>
>>
>> --
>> James A. Peltier
>> IT Services - Research Computing Group
>> Simon Fraser University - Burnaby Campus
>> Phone   : 778-782-6573
>> Fax : 778-782-3045
>> E-Mail  : jpelt...@sfu.ca
>> Website : http://www.sfu.ca/itservices
>> Twitter : @sfu_rcg
>> Powering Engagement Through Technology
>> "Build upon strengths and weaknesses will generally take care of
>> themselves" - Joyce C. Lock
>>
>
>




Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Dear John,

I understand; I tried it in the hope that I could activate the LV again
with a new PV replacing the damaged one. But I still could not activate it.

What is the right way to recover the data left on the remaining PVs?
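One common salvage sequence for this situation, sketched under the assumption
that the LV comes up in partial mode (the /mnt/rescue and /backup paths are
hypothetical):

```shell
# Bring the VG up in partial mode, then mount the LV read-only.
vgchange -ay --activationmode partial vg_hosting
mkdir -p /mnt/rescue
mount -o ro /dev/vg_hosting/lv_home /mnt/rescue

# Copy out whatever is readable; files whose extents sat on the lost PV
# will fail with I/O errors and are skipped.
rsync -a /mnt/rescue/ /backup/ || true
```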

Regards,
Khem

On Sat, February 28, 2015 7:42 am, John R Pierce wrote:
> On 2/27/2015 4:37 PM, James A. Peltier wrote:
>
>> | I was able to create a new PV and restore the VG Config/meta data:
>> |
>> | # pvcreate --restorefile ... --uuid ... /dev/sdc1
>> |
>>
>
> oh, that step means you won't be able to recover ANY of the data that was
> formerly on that PV.
>
>
>
> --
> john r pierce  37N 122W somewhere on
> the middle of the left coast
>




Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Dear James,

Thank you for being quick to help.
Yes, I could see all of them:

# vgs
# lvs
# pvs

Regards,
Khem

On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
> - Original Message -
> | Dear All,
> |
> | I am in desperate need of LVM data rescue for my server.
> | I have a VG called vg_hosting consisting of 4 PVs, each contained in a
> | separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
> | This LV, lv_home, was created to use all the space of the 4 PVs.
> |
> | Right now, the third hard drive is damaged; therefore the third PV
> | (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
> | is left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
> |
> | I have tried with the following:
> |
> | 1. Removing the broken PV:
> |
> | # vgreduce --force vg_hosting /dev/sdc1
> |   Physical volume "/dev/sdc1" still in use
> |
> | # pvmove /dev/sdc1
> |   No extents available for allocation
>
>
> This would indicate that you don't have sufficient extents to move the
> data off of this disk.  If you have another disk then you could try
> adding it to the VG and then moving the extents.
>
> | 2. Replacing the broken PV:
> |
> | I was able to create a new PV and restore the VG Config/meta data:
> |
> | # pvcreate --restorefile ... --uuid ... /dev/sdc1
> | # vgcfgrestore --file ... vg_hosting
> |
> | However, vgchange would give this error:
> |
> | # vgchange -a y
> |   device-mapper: resume ioctl on  failed: Invalid argument
> |   Unable to resume vg_hosting-lv_home (253:4)
> |   0 logical volume(s) in volume group "vg_hosting" now active
>
>
> There should be no need to create a PV and then restore the VG unless the
> entire VG is damaged.  The configuration should still be available on the
> other disks and adding the new PV and moving the extents should be
> enough.
>
> | Could someone help me, please?
> | I'm in dire need of help to save the data, at least some of it if
> | possible.
>
> Can you not see the PV/VG/LV at all?
>
>
> --
> James A. Peltier
> IT Services - Research Computing Group
> Simon Fraser University - Burnaby Campus
> Phone   : 778-782-6573
> Fax : 778-782-3045
> E-Mail  : jpelt...@sfu.ca
> Website : http://www.sfu.ca/itservices
> Twitter : @sfu_rcg
> Powering Engagement Through Technology
> "Build upon strengths and weaknesses will generally take care of
> themselves" - Joyce C. Lock
>




Re: [CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Thank you, John, for your quick reply.
That is what I hope. But how do I do it? I cannot even activate the LV with
the remaining PVs.

Thanks,
Khem

On Sat, February 28, 2015 7:34 am, John R Pierce wrote:
> On 2/27/2015 4:25 PM, Khemara Lyn wrote:
>
>> Right now, the third hard drive is damaged; and therefore the third PV
>> (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
>>  left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
>
> your data is spread across all 4 drives, and you lost 25% of it, so only 3
> out of 4 blocks of data still exist. Good luck with recovery.
>
>
>
> --
> john r pierce  37N 122W somewhere on
> the middle of the left coast
>




[CentOS] Looking for a life-save LVM Guru

2015-02-27 Thread Khemara Lyn
Dear All,

I am in desperate need of LVM data rescue for my server.
I have a VG called vg_hosting consisting of 4 PVs, each contained in a
separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
This LV, lv_home, was created to use all the space of the 4 PVs.

Right now, the third hard drive is damaged; therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like to recover whatever is
left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).

I have tried with the following:

1. Removing the broken PV:

# vgreduce --force vg_hosting /dev/sdc1
  Physical volume "/dev/sdc1" still in use

# pvmove /dev/sdc1
  No extents available for allocation

2. Replacing the broken PV:

I was able to create a new PV and restore the VG config/metadata:

# pvcreate --restorefile ... --uuid ... /dev/sdc1
# vgcfgrestore --file ... vg_hosting

However, vgchange would give this error:

# vgchange -a y
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  0 logical volume(s) in volume group "vg_hosting" now active
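As James suggests elsewhere in the thread, the "No extents available for
allocation" failure in step 1 means pvmove had nowhere to put the data; the
usual workaround (only useful while the old disk is still readable) is to
add a spare disk to the VG first. A sketch, with /dev/sde1 as a hypothetical
spare:

```shell
# Give the VG somewhere to move extents to, then evacuate the failing PV.
pvcreate /dev/sde1               # hypothetical spare disk
vgextend vg_hosting /dev/sde1
pvmove /dev/sdc1                 # works only while /dev/sdc1 still reads
vgreduce vg_hosting /dev/sdc1
```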

Could someone help me, please?
I'm in dire need of help to save the data, at least some of it if possible.

Regards,
Khem

