Maybe it will be implemented later? But it seems a little strange to me that
there is no way to clear the cache of garbage.
Maybe I do not understand? Could you please explain this behavior?
For example:
I have a fully cached partition, with the cache full:
[root@localhost ~]# df -h /data
>
> dm-writecache could be seen as an 'extension' of your page-cache to hold a
> longer list of dirty pages...
>
> Zdenek
>
Does it mean that the dm-writecache is always empty, after reboot?
Thanks.
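As a rough way to answer the persistence question yourself, the writeback backlog of a dm-writecache target can be read from its status line. This is only a sketch: the field layout (start, length, target name, error, total blocks, free blocks, writeback blocks) follows the kernel's writecache documentation and should be verified against your kernel version; the helper name and device name are made up here.

```shell
# Sketch: extract the number of not-yet-written-back (dirty) blocks
# from a dm-writecache status line.  Field positions are an assumption
# taken from the kernel's writecache.rst -- check your kernel version.
writecache_dirty() {
    awk '$3 == "writecache" { print $7 }'
}

# Typical use (device name is hypothetical):
#   dmsetup status vg-cachedlv | writecache_dirty
```

If the count drops to 0 after a clean shutdown and writeback, the cache holds no dirty data that could be lost.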
On 19.10.2018 16:08, Zdenek Kabelac wrote:
> Dne 19. 10. 18 v 14:45 Gionatan Danti napsal(a):
>> On 19/10/2018 12:58, Zdenek Kabelac wrote:
>>> Hi
>>>
>>> Writecache simply doesn't care about caching your reads at all.
>>> Your RAM with its page caching mechanism keeps read data as long as
>>>
On 19.10.2018 12:12, Zdenek Kabelac wrote:
> Dne 19. 10. 18 v 0:56 Ilia Zykov napsal(a):
>> Maybe it will be implemented later? But it seems a little
>> strange to me that there is no way to clear the cache of garbage.
>> Maybe I do not understand? Can you please
>
> Well, there are the following 2 commands:
>
> Get physical block size:
> blockdev --getpbsz
> Get logical block size:
> blockdev --getbsz
>
> Filesystems seem to care about the physical block size only, not the logical
> block size.
>
> So as soon as you have PVs with different
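A minimal sketch combining the two queries above (the helper name is made up, and the device path is an assumption; blockdev(8) needs read access to a real block device):

```shell
# Sketch: report both block sizes for a device via blockdev(8).
check_blocksizes() {
    dev=$1
    pbsz=$(blockdev --getpbsz "$dev")   # physical block size
    bsz=$(blockdev --getbsz "$dev")     # soft (in-kernel) block size
    echo "physical=$pbsz soft=$bsz"
}

# e.g.  check_blocksizes /dev/sdb
```

Comparing the two values across all PVs of a VG is a quick way to spot the mixed-block-size situation discussed in this thread.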
>>
>> smartctl -i /dev/sdb; blockdev --getbsz --getpbsz /dev/sdb
>> Device Model:     HGST HUS722T2TALA604
>> User Capacity:    2,000,398,934,016 bytes [2.00 TB]
>> Sector Size:      512 bytes logical/physical
>> Rotation Rate:    7200 rpm
>> Form Factor:      3.5 inches
>> 4096
>> 512
>>
>> As
> Discarding device blocks: done
> Creating filesystem with 307200 1k blocks and 76912 inodes
> ..
> # pvs
> /dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
> /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid argument
> /dev/LOOP_VG/LV: read failed
> At the time the file system was created (possibly many years ago), I did not
> know that I would ever move it to a device with a larger block size.
>
For this reason, all 4k disks have a logical sector size of 512.
Don't look at "blockdev --getbsz"; it is not a property of the physical (real)
device.
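The read failures above boil down to a simple rule of thumb: a filesystem stops being readable when its block size is smaller than (or not a multiple of) the target device's logical sector size. A sketch of that check (the helper name is hypothetical):

```shell
# Sketch: can a filesystem with block size fs_bs (bytes) still be read
# from a device whose logical sector size is dev_lbs (bytes)?
fs_block_ok() {
    fs_bs=$1
    dev_lbs=$2
    [ "$fs_bs" -ge "$dev_lbs" ] && [ $((fs_bs % dev_lbs)) -eq 0 ]
}

# The 1k-block filesystem from the mkfs output above fails this check
# once moved to a 4k-sector device:
#   fs_block_ok 1024 4096  -> false
#   fs_block_ok 4096 512   -> true
```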
Hello.
>> THAT is a crucial observation. It's not an LVM bug, but the filesystem
>> trying to read 1024 bytes on a 4096 device.
> Yes, that's probably the reason. Nevertheless, it's not really the FS's fault,
> since it was moved by LVM to a 4096 device.
> The FS does not know anything about
>
>> Presumably you want a thick volume but inside a thin pool so that you
>> can use snapshots?
>> If so have you considered the 'external snapshot' feature?
>
> Yes, in some cases they are quite useful. Still, fast volume
> allocation can be a handy addition.
>
Hello.
Can I use external
Hello.
Tell me, please: how can I get the maximum address used by a virtual disk
(a disk created with -V VirtualSize)? I have several large virtual disks,
but they use only a small part at the beginning of the disk. For example:
# lvs
LV VG Attr LSize Pool Origin Data%
mylvm
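One possible approach, sketched under assumptions: thin_dump (from thin-provisioning-tools) prints the pool metadata as XML, and the highest mapped origin block of a thin device bounds the maximum used address. The attribute names below follow thin_dump's range_mapping format; single_mapping elements are deliberately ignored in this sketch, and the helper name is made up.

```shell
# Sketch: compute the highest mapped origin block (+1) of a thin device
# from thin_dump XML read on stdin.  Only <range_mapping> elements with
# origin_begin/length attributes are handled.
max_used_blocks() {
    awk '
        match($0, /origin_begin="[0-9]+"/) {
            ob = substr($0, RSTART + 14, RLENGTH - 15) + 0
            len = 1
            if (match($0, /length="[0-9]+"/))
                len = substr($0, RSTART + 8, RLENGTH - 9) + 0
            end = ob + len
            if (end > max) max = end
        }
        END { print max + 0 }
    '
}

# Multiply the result by the pool chunk size to get a byte address.
```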
Maybe this?
Please note that this problem can also happen in other cases, such as
mixing disks with different block sizes (e.g. SCSI disks with 512 bytes
and s390x-DASDs with 4096 block size).
https://www.redhat.com/archives/linux-lvm/2019-February/msg00018.html
On 11.09.2019 12:17, Gang He
On 23.10.2019 14:08, Gionatan Danti wrote:
>
> For example, consider a completely filled 64k chunk thin volume (with
> thinpool having ample free space). Snapshotting it and writing a 4k
> block on origin will obviously cause a read of the original 64k chunk,
> an in-memory change of the 4k block
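The overhead described in the quoted paragraph can be put into a number. This is plain arithmetic on the sizes quoted above, not a measured figure, and the helper name is made up:

```shell
# Sketch: first-write amplification on a shared thin chunk -- a write
# of write_kb to a chunk_kb chunk shared with a snapshot forces the
# whole chunk to be read and rewritten.
cow_amplification() {
    chunk_kb=$1
    write_kb=$2
    echo $(( chunk_kb / write_kb ))
}

# The 64k-chunk / 4k-write case from the message above:
#   cow_amplification 64 4   -> 16 (16x more data moved than requested)
```

This is one reason smaller chunk sizes are attractive for snapshot-heavy workloads, at the cost of larger metadata.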
On 23.10.2019 17:40, Gionatan Danti wrote:
> On 23/10/19 15:05, Zdenek Kabelac wrote:
>> Yep - we recommend disabling zeroing as soon as the chunk size is >512K.
>>
>> But for 'security' reasons the option is left to users, to select what
>> fits their needs best - there is no 'one
Hello.
Please tell me, is there a way to search the archive of this list?
https://www.redhat.com/archives/linux-lvm/index.html
Thanks.
___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm