[linux-lvm] convert logical sector -> physical sector + pv/vg extent number

2022-01-02 Thread Roland

hello,

if i have a logical sector/block "x" on an lvm logical volume, is there
a way to easily calculate/determine (ideally by script/cli) the
corresponding physical sector on the physical device it belongs to, and
the extent number of the appropriate pv/vg where it resides ?

regards
roland

ps:
i wonder whether bad block relocation is possible with lvm, and i'm
trying to build a script for it. as far as i can see, lvm has everything
needed to shuffle extents from one position on disk to another...


___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] convert logical sector -> physical sector + pv/vg extent number

2022-01-03 Thread Roland

thanks!

any chance to retrieve this information for automated/script-based
processing ?

roland


On 03.01.22 at 00:12, Andy Smith wrote:

Hi Roland,

On Sun, Jan 02, 2022 at 08:00:30PM +0100, Roland wrote:

if i have a logical sector/block "x" on an lvm logical volume, is there
a way to easily calculate/determine (ideally by script/cli) the
corresponding physical sector on the physical device it belongs to, and
the extent number of the appropriate pv/vg where it resides ?

lvdisplay --maps /path/to/lv

tells you all the mappings of logical extents within the LV to
physical extents within the PV.

pvdisplay --units s /dev/yourpv | grep 'PE Size'

tells you the size of a physical extent in sectors.

Between those you can work out the sector number on the physical
device for a sector number on a logical volume.

Cheers,
Andy
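Andy's recipe boils down to integer arithmetic. A minimal sketch (all
numbers here are hypothetical stand-ins: the PE number for the logical
extent would come from `lvdisplay --maps`, and `pe_start` from
`pvs -o+pe_start --units s`):

```shell
#!/bin/sh
# Convert a logical sector on an LV to (logical extent, offset), then to
# a physical sector, given the PE size in sectors and the physical extent
# the LE maps to. Values below are illustrative, not from a real PV.
pe_size=8192          # PE size in sectors (from: pvdisplay --units s | grep 'PE Size')
pe_start=2048         # offset of the first PE on the PV, in sectors (pe_start field)
logical_sector=20000  # sector "x" on the LV

le=$(( logical_sector / pe_size ))       # logical extent containing x
offset=$(( logical_sector % pe_size ))   # offset within that extent
# Suppose lvdisplay --maps says LE $le maps to PE 57 on this PV:
pe=57
physical_sector=$(( pe_start + pe * pe_size + offset ))
echo "LE=$le PE=$pe physical sector=$physical_sector"
```

With these sample numbers the script reports LE 2 and physical sector
472608; only the mapping lookup itself needs the lvm tools.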


Re: [linux-lvm] convert logical sector -> physical sector + pv/vg extent number

2022-01-03 Thread Roland

> any chance to retrieve this information for automated/script-based
processing ?

after long trial and error and reading manpages, i got it (see below):

what i'm still missing is how to avoid the "translation" of the
headings/fields in the normal output, i.e. how to print the
technical/internal field name instead of a "human readable" or
"translated" one.

'seg_start_pe' and 'pvseg_start', for example, both translate to "Start",
which not only makes the output hard to read (and it makes no real sense
to have duplicate field names), but also makes it difficult to build an
appropriate output-field command option from the default output.  i
remember having this kind of issue with other tools...

do you think this is a missing feature ?

roland


# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -S lv_name=usedblocks -O seg_start
   LV         Start SSize Start
   usedblocks     0     1   512
   usedblocks     1   233     1
   usedblocks   234     1   513
   usedblocks   235   277   235

# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -S lv_name=markedbad -O seg_start
   LV        Start SSize Start
   markedbad     0     1  1022
   markedbad     1     1     0
   markedbad     2     1   234

# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -S lv_name=markedbad -O seg_start --nameprefixes --noheadings
   LVM2_LV_NAME='markedbad' LVM2_SEG_START_PE='0' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='1022'
   LVM2_LV_NAME='markedbad' LVM2_SEG_START_PE='1' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='0'
   LVM2_LV_NAME='markedbad' LVM2_SEG_START_PE='2' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='234'

# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -S lv_name=usedblocks -O seg_start --nameprefixes --noheadings
   LVM2_LV_NAME='usedblocks' LVM2_SEG_START_PE='0' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='512'
   LVM2_LV_NAME='usedblocks' LVM2_SEG_START_PE='1' LVM2_SEG_SIZE_PE='233' LVM2_PVSEG_START='1'
   LVM2_LV_NAME='usedblocks' LVM2_SEG_START_PE='234' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='513'
   LVM2_LV_NAME='usedblocks' LVM2_SEG_START_PE='235' LVM2_SEG_SIZE_PE='277' LVM2_PVSEG_START='235'
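Since each `--nameprefixes` line is a series of `VAR='value'` pairs, a
script can `eval` it directly. A minimal sketch of that (the helper
function below just replays sample records copied from the output above;
a real script would pipe `pvs` itself into the loop):

```shell
#!/bin/sh
# Consume `pvs ... --nameprefixes --noheadings` output in a script:
# eval turns each LVM2_FOO='bar' pair into a shell variable.
sample_output() {
  cat <<'EOF'
LVM2_LV_NAME='markedbad' LVM2_SEG_START_PE='0' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='1022'
LVM2_LV_NAME='markedbad' LVM2_SEG_START_PE='1' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='0'
LVM2_LV_NAME='markedbad' LVM2_SEG_START_PE='2' LVM2_SEG_SIZE_PE='1' LVM2_PVSEG_START='234'
EOF
}
mapped=$(sample_output | while read -r line; do
  eval "$line"
  echo "LV $LVM2_LV_NAME: LE $LVM2_SEG_START_PE maps to PE $LVM2_PVSEG_START"
done)
echo "$mapped"
```

Note that `eval` is only safe here because the input comes from pvs
itself; don't feed it untrusted text.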

On 03.01.22 at 12:13, Roland wrote:

thanks!

any chance to retrieve this information for automated/script-based
processing ?

roland


On 03.01.22 at 00:12, Andy Smith wrote:

Hi Roland,

On Sun, Jan 02, 2022 at 08:00:30PM +0100, Roland wrote:

if i have a logical sector/block "x" on an lvm logical volume, is there
a way to easily calculate/determine (ideally by script/cli) the
corresponding physical sector on the physical device it belongs to, and
the extent number of the appropriate pv/vg where it resides ?

lvdisplay --maps /path/to/lv

tells you all the mappings of logical extents within the LV to
physical extents within the PV.

pvdisplay --units s /dev/yourpv | grep 'PE Size'

tells you the size of a physical extent in sectors.

Between those you can work out the sector number on the physical
device for a sector number on a logical volume.

Cheers,
Andy


Re: [linux-lvm] indistinguishable column names ( BA Start Start Start Start )

2023-08-23 Thread Roland

Hi,

> Hi Roland, is `lvs --reportformat json` or `lvs --nameprefixes` good
enough?

good when using scripting / automated processing - but not good when we
need it human readable.

i'd even call it a bug to use the same string for different "translated"
names.

would anybody accept something like this (just for saving column space) ?
ok, that's a little bit over the top - but i want to underline what i
mean:

getpersondetails -o forename,surname,streetname,cityname

name   name   name         name
john   doe    highstreet   washington


roland




On 21.08.23 at 18:00, Marian Csontos wrote:

Hi Roland, is `lvs --reportformat json` or `lvs --nameprefixes` good
enough?

On Sat, Aug 19, 2023 at 6:06 PM Roland  wrote:

 > furthermore, it's a little bit weird that some columns are printed
 > by default when using -o; is there an easier way to remove those
 > besides explicitly removing them one by one with several -o options ?
 > "-o-opt1,-opt2,..." doesn't work

sorry for this noise, i was too dumb for that ; "-o+opt1,opt2
-o-opt3,opt4" works as desired (as documented in the manpage)

 > themselves, and there also seems no way to add separators in between
 > (like with printf) for separation/formatting

also sorry for this, as we can use --separator="," (apparently i had too
much coffee and overlooked that in the manpage)

the question regarding native column headers/description remains

roland

On 19.08.23 at 17:20, Roland wrote:
> hello,
>
> does somebody know how we can get native (i.e. non-translated) column
> field names in the first line of the output of pvs ?
>
> in its current implementation the output is hard to read, difficult to
> distinguish and also not scriptable/parseable, as whitespace is used as
> the field separator and there are fields which contain whitespace
> themselves, and there also seems to be no way to add separators in
> between (like with printf) for separation/formatting
>
> # pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
>   PV       VG        Fmt  Attr PSize        PFree BA Start    Start Start Start
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S 2097152S     1     0
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S 2097152S     1     1
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5309
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5587
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5588
>
> i mean like this:
>
> # pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
>   PV       VG        Fmt  Attr PSize        PFree pv_ba_start seg_start seg_start_pe pvseg_start
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S  2097152S            1           0
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S  2097152S            1           1
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5309
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5587
>   /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5588
>
> furthermore, it's a little bit weird that some columns are printed by
> default when using -o; is there an easier way to remove those besides
> explicitly removing them one by one with several -o options ?
> "-o-opt1,-opt2,..." doesn't work
>
> # pvs --units s -o+pv_ba_start,seg_start,pvseg_start,seg_start_pe -o-vg_name -o-pv_name -o-pv_fmt -o-attr -o -pv_size -o -pv_free
>   BA Start    Start Start Start
>         0S 2097152S     0     1
>         0S 2097152S     1     1
>         0S       0S  5309     0
>         0S       0S  5587     0
>         0S       0S  5588     0
>
> roland
>


[linux-lvm] indistinguishable column names ( BA Start Start Start Start )

2023-08-19 Thread Roland

hello,

does somebody know how we can get native (i.e. non-translated) column
field names in the first line of the output of pvs ?


in its current implementation the output is hard to read, difficult to
distinguish and also not scriptable/parseable, as whitespace is used as
the field separator and there are fields which contain whitespace
themselves, and there also seems to be no way to add separators in
between (like with printf) for separation/formatting


# pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
  PV       VG        Fmt  Attr PSize        PFree BA Start    Start Start Start
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S 2097152S     1     0
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S 2097152S     1     1
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5309
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5587
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5588

i mean like this:

# pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
  PV       VG        Fmt  Attr PSize        PFree pv_ba_start seg_start seg_start_pe pvseg_start
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S  2097152S            1           0
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S  2097152S            1           1
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5309
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5587
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5588


furthermore, it's a little bit weird that some columns are printed by
default when using -o; is there an easier way to remove those besides
explicitly removing them one by one with several -o options ?
"-o-opt1,-opt2,..." doesn't work


# pvs --units s -o+pv_ba_start,seg_start,pvseg_start,seg_start_pe -o-vg_name -o-pv_name -o-pv_fmt -o-attr -o -pv_size -o -pv_free

  BA Start    Start Start Start
        0S 2097152S     0     1
        0S 2097152S     1     1
        0S       0S  5309     0
        0S       0S  5587     0
        0S       0S  5588     0

roland



Re: [linux-lvm] indistinguishable column names ( BA Start Start Start Start )

2023-08-19 Thread Roland
> furthermore, it's a little bit weird that some columns are printed by
> default when using -o; is there an easier way to remove those besides
> explicitly removing them one by one with several -o options ?
> "-o-opt1,-opt2,..." doesn't work


sorry for this noise, i was too dumb for that ; "-o+opt1,opt2
-o-opt3,opt4" works as desired (as documented in the manpage)


> themselves, and there also seems no way to add separators in between
> (like with printf) for separation/formatting


also sorry for this, as we can use --separator=","  (apparently i had
too much coffee and overlooked that in the manpage)


the question regarding native column headers/description remains

roland

On 19.08.23 at 17:20, Roland wrote:

hello,

does somebody know how we can get native (i.e. non-translated) column
field names in the first line of the output of pvs ?


in its current implementation the output is hard to read, difficult to
distinguish and also not scriptable/parseable, as whitespace is used as
the field separator and there are fields which contain whitespace
themselves, and there also seems to be no way to add separators in
between (like with printf) for separation/formatting


# pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
  PV       VG        Fmt  Attr PSize        PFree BA Start    Start Start Start
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S 2097152S     1     0
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S 2097152S     1     1
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5309
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5587
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S       0S       0S     0  5588

i mean like this:

# pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
  PV       VG        Fmt  Attr PSize        PFree pv_ba_start seg_start seg_start_pe pvseg_start
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S  2097152S            1           0
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S  2097152S            1           1
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5309
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5587
  /dev/sdb VGrecycle lvm2 a--  11720982528S    0S          0S        0S            0        5588


furthermore, it's a little bit weird that some columns are printed by
default when using -o; is there an easier way to remove those besides
explicitly removing them one by one with several -o options ?
"-o-opt1,-opt2,..." doesn't work


# pvs --units s -o+pv_ba_start,seg_start,pvseg_start,seg_start_pe -o-vg_name -o-pv_name -o-pv_fmt -o-attr -o -pv_size -o -pv_free

  BA Start    Start Start Start
        0S 2097152S     0     1
        0S 2097152S     1     1
        0S       0S  5309     0
        0S       0S  5587     0
        0S       0S  5588     0

roland




Re: [linux-lvm] indistinguishable column names ( BA Start Start Start Start )

2023-08-21 Thread Roland

On 21.08.23 at 18:17, David Teigland wrote:


On Sat, Aug 19, 2023 at 06:05:58PM +0200, Roland wrote:

furthermore, it's a little bit weird that some columns are printed by
default when using -o, is there an easier way to remove those besides
explicitly removing them one by one with several -o options ?
"-o-opt1,-opt2,..." doesn't work

sorry for this noise, i was too dumb for that ; "-o+opt1,opt2 -o-opt3,opt4"
works as desired (as documented in the manpage)

Usually if you're interested in exact fields, you'd avoid the default
options and just use "-o opt1,opt2,..." without +/-.


yes, that makes total sense - there are default options, "+" adds some
options, "-" removes some, and -o without "+/-" just makes it print
exactly what we want.

totally straightforward once understood, i was just too lazy, sorry.


# pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
   PV       VG        Fmt  Attr PSize        PFree pv_ba_start seg_start seg_start_pe pvseg_start

It does sound better than some of the strange header abbreviations.
It could also use the keywords that appear with --nameprefixes.


i have found the place to file an RFE and did so :

https://bugzilla.redhat.com/show_bug.cgi?id=226

thank you

roland




Re: [linux-lvm] How to handle Bad Block relocation with LVM?

2023-03-15 Thread Roland

hello,

quite old thread - but damn interesting, though :)

> Having the PE number, you can easily do
> pvmove /dev/broken:PE /dev/somewhere-else

does somebody know if it's possible to easily remap a PE with the
standard lvm tools, instead of pvmoving it ?

trying to move data off defective sectors can take a very long time,
especially when multiple sectors are affected and the disks are desktop
drives.

let's think of some "re-partitioning" tool which sets up lvm on top of a
disk with bad sectors and which scans for and skips megabyte-sized PE's,
remapping them to some spare area before the disk is used.  badblock
remapping at the os level instead of at the disk's controller level.

yes, most of you will say it's a bad idea, but i have a cabinet full of
disks with bad sectors and i'd really be curious how well and for how
long a zfs raidz would work on top of such a "badblocks lvm".  at least
i'd like to experiment with that.  let's call it an academic project for
learning purposes and for demonstrating lvm's strength :D

such "remapping" could look like this:

# pvs --segments -ovg_name,lv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start -S vg_name=VGloop0
  VG      LV           Start SSize Start
  VGloop0 blocks_good      0     4     0
  VGloop0 blocks_bad       1     1     4
  VGloop0 blocks_good      5   195     5
  VGloop0 blocks_bad       2     1   200
  VGloop0 blocks_good    201   699   201
  VGloop0 blocks_spare     0   120   900
  VGloop0 blocks_good    200     1  1020
  VGloop0 blocks_good      4     1  1021
  VGloop0 blocks_bad       0     1  1022


blocks_good is the LV with healthy PE's, blocks_bad is the LV with bad
PE's, and blocks_spare is the LV you take healthy PE's from as
replacements for bad PE's found in the blocks_good LV
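the lookup step of such a scheme - "given a bad physical sector, which
PV extent is it, and which LV segment owns it" - can be scripted. a
sketch under assumed values (the PE size is hypothetical and the segment
table is a pasted copy of the output above; a live script would parse
`pvs --segments --noheadings` instead):

```shell
#!/bin/sh
# Map a bad physical sector to its PV extent number, then find the LV
# segment whose PV-extent range covers that extent.
pe_size=2048       # PE size in sectors (hypothetical)
bad_sector=410000
bad_pe=$(( bad_sector / pe_size ))   # integer division -> extent 200 here

# columns: LV  seg_start_pe  seg_size_pe  pvseg_start
segtable() {
  cat <<'EOF'
blocks_good  0 4 0
blocks_bad   1 1 4
blocks_good  5 195 5
blocks_bad   2 1 200
blocks_good  201 699 201
EOF
}
# A segment owns the extent when pvseg_start <= pe < pvseg_start + size.
owner=$(segtable | awk -v pe="$bad_pe" \
  '$4 <= pe && pe < $4 + $3 { print $1; exit }')
echo "PV extent $bad_pe belongs to LV $owner"
```

with these numbers the bad sector lands in PV extent 200, which the
table attributes to blocks_bad.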

roland
sysadmin


> [linux-lvm] How to handle Bad Block relocation with LVM?
> Lars Ellenberg lars.ellenberg at linbit.com
> Thu Nov 29 14:04:01 UTC 2012
>
>
> On Thu, Nov 29, 2012 at 07:26:24AM -0500, Brian J. Murrell wrote:
> > On 12-11-28 08:57 AM, Zdenek Kabelac wrote:
> > >
> > > Sorry, no automated tool.
> >
> > Pity,
> >
> > > You could possibly pvmove separated PEs manually with set of pvmove
> > > commands.
> >
> > So, is the basic premise to just find the PE that is sitting on a bad
> > block and just pvmove it into an LV created just for the purpose of
> > holding PEs that are on bad blocks?
> >
> > So what happens when I pvmove a PE out of an LV?  I take it LVM moves
> > the data (or at least tries in this case) on the PE being pvmoved onto
> > another PE before moving it?
> >
> > Oh, but wait.  pvmove (typically) moves PEs between physical volumes.
> > Can it be used to remap PEs like this?
>
> So what do you know?
> You either know that physical sector P on some physical disk is broken.
> Or you know that logical sector L in some logical volume is broken.
>
> If you do
> pvs --unit s --segment -o vg_name,lv_name,seg_start,seg_size,seg_start_pe,pe_start,seg_pe_ranges
>
> That should give you all you need to transform them into each other,
> and to transform the sector number to PE number.
>
> Having the PE number, you can easily do
> pvmove /dev/broken:PE /dev/somewhere-else
>
> Or with alloc anywhere even elsewhere on the same broken disk.
> # If you don't have an other PV available,
> # but there are free "healthy" extents on the same PV:
> # pvmove --alloc anywhere /dev/broken:PE /dev/broken
> Which would likely not be the smartest idea ;-)
>
> You should then create one LV named e.g. "BAD_BLOCKS",
> which you would create/extend to cover that bad PE,
> so that won't be re-allocated again later:
> lvextend VG/BAD_BLOCKS -l +1 /dev/broken:PE
>
> Better yet, pvchange -an /dev/broken,
> so it won't be used for new LVs anymore,
> and pvmove /dev/broken completely to somewhere else.
>
> So much for the theory, how I would try to do this.
> In case I would do this at all.
> Which I probably won't, if I had an other PV available.
>
>     ;-)
>
> I'm unsure how pvmove will handle IO errors, though.
>
> > > But I'd strongly recommend getting rid of such a broken drive
> > > quickly, before you lose any more data - IMHO it's the most
> > > efficient solution cost- and time-wise.
>
> Right.
>
>     Lars





[linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roland

hi,

we can extend a logical volume by arbitrary pv extents like this :


root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
  Size of logical volume mytestVG/blocks_allocated changed from 1.00 MiB (1 extents) to 2.00 MiB (2 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
  Size of logical volume mytestVG/blocks_allocated changed from 2.00 MiB (2 extents) to 3.00 MiB (3 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
  Size of logical volume mytestVG/blocks_allocated changed from 3.00 MiB (3 extents) to 4.00 MiB (4 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
  Size of logical volume mytestVG/blocks_allocated changed from 4.00 MiB (4 extents) to 5.00 MiB (5 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
  LV               Start  SSize Start
  blocks_allocated     0      1     0
                       0      4     1
  blocks_allocated     1      1     5
                       0      4     6
  blocks_allocated     2      1    10
                       0      4    11
  blocks_allocated     3      1    15
                       0      4    16
  blocks_allocated     4      1    20
                       0 476917    21


how can i do this in reverse ?

when i specify the physical extent to be added, it works - but when i
specify the physical extent to be removed, the last one is removed, not
the specified one.

see the example below - i wanted to remove extent number 10 the same way
i added it, but instead extent number 20 is removed:

root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
  Ignoring PVs on command line when reducing.
  WARNING: Reducing active logical volume to 4.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
  Size of logical volume mytestVG/blocks_allocated changed from 5.00 MiB (5 extents) to 4.00 MiB (4 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
  LV               Start  SSize Start
  blocks_allocated     0      1     0
                       0      4     1
  blocks_allocated     1      1     5
                       0      4     6
  blocks_allocated     2      1    10
                       0      4    11
  blocks_allocated     3      1    15
                       0 476922    16


how can i remove extent number 10 ?

is this a bug ?

regards
roland



Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roland

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.   You can only reduce fs'es (the ones that you can reduce)
by reducing off of the end and making it smaller.


yes, that's clear to me.


It makes zero sense to be able to remove a block in the middle of a LV
used by just about everything that uses LV's as nothing supports being
able to remove a block in the middle.


yes, that criticism is totally valid. from a fs point of view you
completely corrupt the volume, that's clear to me.


What is your use case that you believe removing a block in the middle
of an LV needs to work?


my use case is creating a badblocks script with lvm which intelligently
handles and skips broken sectors on disks which can't be used otherwise...

my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.

whenever more sectors go bad, i'd like to remove the broken ones to keep
a usable lv without broken sectors.

since you need to rebuild your data for that disk anyway, you can also
recreate the whole logical volume.

my question and my project are a little bit academic. i simply want to
find out how much use you can get out of some dead disks which are trash
otherwise...


the manpage is telling this:


   Resize an LV by specified PV extents.

   lvresize LV PV ...
   [ -r|--resizefs ]
   [ COMMON_OPTIONS ]



so, that sounds like i can resize in either direction by specifying extents.



Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.


yes, pvmove is the other approach for that.

but will pvmove continue/finish by all means when moving extents located
on a bad sector ?

the data may be corrupted anyway, so i thought it's better to skip it.

what i'm really after is some "remap a physical extent to a
healthy/reserved section and let zfs self-healing do the rest".  just
like "dismiss the problematic extents and replace them with healthy
extents".

i'd prefer remapping over removing a PE, as removing will invalidate the
whole LV

roland






On 09.04.23 at 19:32, Roger Heflin wrote:

On Sun, Apr 9, 2023 at 10:18 AM Roland  wrote:

hi,

we can extend a logical volume by arbitrary pv extents like this :


root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
Size of logical volume mytestVG/blocks_allocated changed from 1.00
MiB (1 extents) to 2.00 MiB (2 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
Size of logical volume mytestVG/blocks_allocated changed from 2.00
MiB (2 extents) to 3.00 MiB (3 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
Size of logical volume mytestVG/blocks_allocated changed from 3.00
MiB (3 extents) to 4.00 MiB (4 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
Size of logical volume mytestVG/blocks_allocated changed from 4.00
MiB (4 extents) to 5.00 MiB (5 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
LV               Start  SSize Start
blocks_allocated     0      1     0
                     0      4     1
blocks_allocated     1      1     5
                     0      4     6
blocks_allocated     2      1    10
                     0      4    11
blocks_allocated     3      1    15
                     0      4    16
blocks_allocated     4      1    20
                     0 476917    21


how can i do this in reverse ?

when i specify the physical extent to be added, it works - but when i
specify the physical extent to be removed, the last one is removed, not
the specified one.

see here for example - i wanted to remove extent number 10 the same way
i added it, but instead extent number 20 is removed

root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
Ignoring PVs on command line when reducing.
WARNING: Reducing active logical volume to 4.00 MiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
Size of logical volume mytestVG/blocks_allocated changed from 5.00
MiB (5 extents) to 4.00 MiB (4 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
LV               Start SSize Start
blocks_allocated     0     1     0
                     0     4     1
blocks_al

Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roland

thank you, very valuable!

On 09.04.23 at 20:53, Roger Heflin wrote:

On Sun, Apr 9, 2023 at 1:21 PM Roland  wrote:

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.   You can only reduce fs'es (the ones that you can reduce)
by reducing off of the end and making it smaller.

yes, that's clear to me.


It makes zero sense to be able to remove a block in the middle of a LV
used by just about everything that uses LV's as nothing supports being
able to remove a block in the middle.

yes, that criticism is totally valid. from a fs point of view you completely
corrupt the volume, that's clear to me.


What is your use case that you believe removing a block in the middle
of an LV needs to work?

my use case is creating a badblocks script with lvm which intelligently
handles and skips broken sectors on disks which can't be used otherwise...

my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.

whenever more sectors go bad, i'd like to remove the broken ones to keep
a usable lv without broken sectors.

since you need to rebuild your data for that disk anyway, you can also
recreate the whole logical volume.

my question and my project are a little bit academic. i simply want to
find out how much use you can get out of some dead disks which are trash
otherwise...


the manpage is telling this:


 Resize an LV by specified PV extents.

 lvresize LV PV ...
 [ -r|--resizefs ]
 [ COMMON_OPTIONS ]



so, that sounds like i can resize in either direction by specifying extents.



Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.

yes, pvmove is the other approach for that.

but will pvmove continue/finish by all means when moving extents located
on a bad sector ?

the data may be corrupted anyway, so i thought it's better to skip it.

what i'm really after is some "remap a physical extent to a
healthy/reserved section and let zfs self-healing do the rest".  just
like "dismiss the problematic extents and replace them with healthy
extents".

i'd prefer remapping over removing a PE, as removing will invalidate the
whole LV

roland



Create an LV per device, and when the device is replaced then lvremove
that device's LV.  Once a sector/area is bad I would not trust the
sectors until you replace the device.  You may be able to retry the
pvmove multiple times and the disk may eventually be able to rebuild
the data.

My experience with bad sectors is that once a sector reports bad, the
disk will often rewrite it at the same location and call it "good" when
it is going to report bad again almost immediately, or be a uselessly
slow sector.  Sometimes it will replace the sector on a
re-write/successful read, but that seems unreliable.

On non-zfs fs'es I have found the "bad" file, renamed it
badfile. and put it in a dir called badblocks.  As long as the bad
block is in the file data, you can contain the badblock by containing
the bad file.  And since most of the disk will be file data, that
should also be a management scheme not requiring a fs rebuild.

The re-written sector may also be "slow", and it might be wise to treat
those sectors as bad; in the "slow" sector case pvmove should actually
work.  For that you would need a badblocks-like tool that times the
reads and treats any sector taking longer than even, say, 0.25 seconds
as slow/bad.   At 5400 rpm, 0.25 s (250 ms) translates to around 22
failed re-read tries.   If you time it, you may have to do further
testing on the whole group of reads in smaller aligned reads to figure
out which sector in the main read was bad.  If you scanned often enough
for slow sectors you might catch them before they go completely bad.
Technically the disk is supposed to do that in its own scans, but even
when I have turned the scans up to daily it does not seem to behave
right.
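The arithmetic above can be sanity-checked: at 5400 rpm one revolution takes 60/5400 s (about 11 ms), so a 250 ms stall covers roughly 22 missed revolutions, i.e. about 22 re-read attempts. A short sketch of that calculation plus a trivial slow-read classifier (the threshold is just the value suggested above):

```python
RPM = 5400
THRESHOLD_S = 0.25                      # the 250 ms slow-read cutoff from above

rev_time_s = 60.0 / RPM                 # seconds per platter revolution (~11 ms)
retries = THRESHOLD_S / rev_time_s      # revolutions that fit into the cutoff
print(f"{rev_time_s * 1000:.1f} ms/rev, ~{int(retries)} re-read tries")

def is_slow(read_seconds, threshold=THRESHOLD_S):
    """Classify a timed read; anything over the threshold is suspect."""
    return read_seconds > threshold

print(is_slow(0.4), is_slow(0.01))      # -> True False
```

In a real scanner the `read_seconds` values would come from timing aligned `O_DIRECT` reads against the device, which this sketch deliberately leaves out.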

And I have usually found that the bad "units" are 8 groups of 8
512-byte sectors, for a total of around 32 KiB (aligned on the disk).
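If failures really do cluster in aligned 32 KiB units (8 x 8 x 512 bytes), a scanner could widen any single bad sector to its whole aligned unit; a small sketch of that rounding (the unit size is taken from the observation above, not from any specification):

```python
SECTOR = 512
UNIT_SECTORS = 8 * 8            # 64 sectors = 32 KiB, per the observation above

def bad_unit(sector):
    """Return the (first, last) sector range of the aligned 32 KiB unit."""
    first = (sector // UNIT_SECTORS) * UNIT_SECTORS
    return first, first + UNIT_SECTORS - 1

print(UNIT_SECTORS * SECTOR)    # -> 32768 (bytes, i.e. 32 KiB)
print(bad_unit(1000))           # -> (960, 1023)
```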

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/




Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-12 Thread Roland

> Controllers remap blocks all on their own and the so-called geometry
> is entirely fictitious anyway

so tell me then, why do i have a shelf full of dead disks where half of
them are out of service for nothing but a couple of bad sectors?

i don't see the point of hardware capable of storing terabytes of data
being thrown in the trash because <0.01% of its sectors are defective
for this or that reason.  it's that "the vendor says it's dead now
- so please better buy a new one" paradigm, which seems to rule
everywhere today.

i dislike this attitude.

if you had a self-healing diving suit which quits healing itself after
the 5th small hole, would you throw it away after the 5th hole - or
would you put a patch on it? the same goes for bicycle inner tubes: there
were times when you put patches on them because new ones were
expensive. nowadays, everybody throws them away and buys a new one.

so, if some drive controller isn't able to fix your 20 broken sectors,
i'd like to fix them myself. and i'd like to try the lvm approach, because
i think it's a sensible way of putting an abstraction layer between
your filesystem and your rotating disks.

and even if it's dumb to do, or something that will not succeed,
it's at least worth a try to show whether it works - or why it can't
work. and if it doesn't work, there is at least something to learn
about lvm or dead disks.

roland


Am 09.04.23 um 22:18 schrieb matthew patton:

> my plan is to scan a disk for usable sectors and map the logical volume
> around the broken sectors.

1977 called, they'd like their non-self-correcting HD controller
implementations back.

From a real-world perspective there is ZERO (more like negative)
utility to this exercise. Controllers remap blocks all on their own
and the so-called geometry is entirely fictitious anyway. From a
script/program "because I want to" perspective you could leave LVM
entirely out of it and just use a file with arbitrary offsets
scribbled with a "bad" signature.
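That last suggestion is easy to prototype without sacrificing a disk: scribble a signature at chosen offsets of a scratch file, then scan for it as if those sectors were unreadable. A small sketch (the file name, signature bytes, and offsets are all arbitrary):

```python
# Simulate "bad" areas on a plain file, per the suggestion above:
# write a marker at arbitrary sector offsets, then have a scanner
# treat any sector starting with the marker as "bad".
import os
import tempfile

SIG = b"BADSECTOR!"
SECTOR = 512

def make_test_image(path, sectors, bad):
    with open(path, "wb") as f:
        f.truncate(sectors * SECTOR)          # sparse fake "disk"
        for s in bad:
            f.seek(s * SECTOR)
            f.write(SIG)                      # mark the sector as bad

def scan(path, sectors):
    found = []
    with open(path, "rb") as f:
        for s in range(sectors):
            f.seek(s * SECTOR)
            if f.read(len(SIG)) == SIG:
                found.append(s)
    return found

path = os.path.join(tempfile.mkdtemp(), "fake.img")
make_test_image(path, 1000, bad=[7, 400, 401])
print(scan(path, 1000))                       # -> [7, 400, 401]
```

The sector list produced this way could feed whatever LV-arrangement logic one wants to test, with no LVM involved at all.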



[linux-lvm] pvshrink - defrag pv / move extents to the beginning

2023-02-13 Thread Roland

hello,

i wanted to resize/shrink a virtual machine's disk to a smaller size and
found this tool very helpful:

https://github.com/mythic-beasts/pvshrink

there is a python3-enabled version at

https://github.com/sensimple-contrib/pvshrink


is there something similar in the lvm2 tools in the meantime, or could this
utility be integrated into the lvm2 tools suite?

regards
roland





Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-12 Thread Roland

> Really silly plan - been there years back in time when drives were FAR
> more expensive per GiB.
> These days - just throw the bad drive in the recycle bin - it's not worth
> doing this silliness.

ok, i understand your point of view. and thank you for the input.

but this applies to a world with endless resources, where people can
afford the new hardware.

i think, by the same logic, you could call some guy silly if he puts a
patch on his bicycle inner tube instead of buying a new one, as they are
cheap. with the patch, the tube is always worse than a new one, and he
probably risks his own health by using a tube which may already have
become porous...

but shouldn't we perhaps leave it up to the end user / owner of the
hardware to decide when it's ready for the recycle bin?

or should we perhaps wait for the next hard drive supply crisis (like in
2011)?  then people will start to get more creative in using
what they have, because they have no other option...

> whenever you want to create a new arrangement for your disk with 'bad'
> areas, you can always start from 'scratch' - since after all, lvm2 ONLY
> manipulates metadata within the disk front -
> so if you need to create new 'holes',
> just   'pvcreate -f',  'vgcreate',  and  'lvcreate -Zn -Wn'
> and then  'lvextend'  with normal  or  'lvextend --type error | --type
> zero' segment types around the bad areas with specific sizes.
> Once you are finished and your LV precisely matches your 'previous'
> LV of your past VG - you can start to use this LV again with the
> new arrangement of 'broken zeroed/errored' areas.
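The quoted recipe lends itself to scripting: given a list of bad extent ranges, emit the lvcreate/lvextend sequence, mapping good stretches normally and bad stretches as error segments (which are backed by no PV). A rough generator - the device, VG/LV names, extent total, and bad ranges are all invented for illustration:

```python
# Generate the command sequence from the quoted recipe for a hypothetical
# layout: 1000 extents on /dev/sdb with two bad PE ranges. The extent
# numbers would come from a prior surface scan; these are made up.

BAD = [(100, 109), (640, 641)]    # inclusive (first, last) bad PE ranges
TOTAL = 1000
PV, VG, LV = "/dev/sdb", "vg0", "rescued"

def commands():
    cmds = [f"pvcreate -f {PV}",
            f"vgcreate {VG} {PV}"]
    cur, first = 0, True
    for b0, b1 in BAD:
        if b0 > cur:                       # good stretch before the bad range
            n = b0 - cur
            if first:                      # first good stretch creates the LV
                cmds.append(f"lvcreate -Zn -Wn -l {n} -n {LV} {VG} {PV}:{cur}-{b0 - 1}")
                first = False
            else:
                cmds.append(f"lvextend -l +{n} {VG}/{LV} {PV}:{cur}-{b0 - 1}")
        n_bad = b1 - b0 + 1                # the hole: map an error segment
        cmds.append(f"lvextend --type error -l +{n_bad} {VG}/{LV}")
        cur = b1 + 1
    if cur < TOTAL:                        # trailing good stretch
        cmds.append(f"lvextend -l +{TOTAL - cur} {VG}/{LV} {PV}:{cur}-{TOTAL - 1}")
    return cmds

print("\n".join(commands()))
```

Reads that land in the error segments fail immediately, which is what would let a redundant zfs layer on top repair from its other copies. (The sketch assumes the LV does not start on a bad extent.)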

yes, i have already come to the conclusion that it's always better to
start from scratch like this. i have dismissed the idea of
excluding or relocating bad sectors.

> But good advice from me - whenever  'smartctl' starts to show
> sector relocation errors - it's the right moment to  'dd_rescue'
> any LV to your new drive...

yes, i'm totally aware that we walk on very thin ice here.

but i'd really like to collect some real-world data/information on how
well such disk "recycling" can work.  i don't have any pointers and did
not find any information on how fast a bad disk gets worse once it has
irretrievable bad sectors and smart is reporting relocation errors.
there seems to be not much information around on this...

i guess such "broken" disks, used with zfs in a redundant setup, could
probably still serve a purpose. maybe not for production data, but
probably good enough for "not so important" applications.

it's a little bit of an academic project, for my own fun. i like to
fiddle with disks, lvm, zfs and that kind of stuff

roland

Am 12.04.23 um 12:20 schrieb Zdenek Kabelac:

Dne 09. 04. 23 v 20:21 Roland napsal(a):

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.   You can only reduce fs'es (the ones that you can reduce)


my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.

whenever more sectors get broken, i'd like to remove the broken ones
to have
a usable lv without broken sectors.



Really silly plan - been there years back in time when drives were FAR
more expensive per GiB.

These days - just throw the bad drive in the recycle bin - it's not worth
doing this silliness.

HDD bad sectors tend to spread - slowly the surface gets destroyed.

So if you leave large 'head-room' around the bad disk areas - if they are
concentrated in one disk area - and you know the topology of your disk
drive, i.e. keep 1% of disk space free before and after the bad area -
you could possibly use the disk for a little while longer - but only to
store expendable data



since you need to rebuild your data for that disk anyway, you can also
recreate the whole logical volume.

my question and my project are a little bit academic. i'd simply like
to try out how much use you can get from some dead disks which would be
trash otherwise...


You could always take a 'vgcfgbackup' of the lvm2 metadata and make
some crazy transformation of it with even AWK/python/perl - but we
really tend to support just the useful features - as there is already
'too much' and users often get lost.

One very simple & naive implementation could go along this path -

whenever you want to create a new arrangement for your disk with 'bad'
areas, you can always start from 'scratch' - since after all, lvm2 ONLY
manipulates metadata within the disk front - so if you need to create
new 'holes',
just   'pvcreate -f',  'vgcreate',  and  'lvcreate -Zn -Wn'
and then  'lvextend'  with normal  or  'lvextend --type error | --type
zero' segment types around the bad areas with specific sizes.
Once you are finished and your LV precisely matches your 'previous'
LV of your past VG - you can start to use this LV again with the new
arrangement of 'broken zeroed/errored' areas.