Re: [ClusterLabs] One volume is trimmable but the other is not?

2018-01-29 Thread Klaus Wenninger
On 01/29/2018 07:51 PM, Eric Robinson wrote:
>
>
>
>> -Original Message-
>> From: Klaus Wenninger [mailto:kwenn...@redhat.com]
>> Sent: Friday, January 26, 2018 11:38 AM
>> To: users@clusterlabs.org
>> Subject: Re: [ClusterLabs] One volume is trimmable but the other is not?
>>
>> On 01/26/2018 07:45 PM, Eric Robinson wrote:
>>>>> I sent this to the drbd list too, but it’s possible that someone
>>>>> here may know.
>>>>>
>>>>>
>>>>>
>>>>> This is a WEIRD one.
>>>>>
>>>>>
>>>>>
>>>>> Why would one drbd volume be trimmable and the other one not?
>>>>>
>>>> IIRC drbd stores some of its configuration in the meta-data as well -
>>>> I remember some block-size in particular - and that doesn't just
>>>> depend on the content of the current config files but also on the
>>>> history (e.g. which peers it has already connected to).
>>>> Don't know if that helps in particular - just saying that taking a look
>>>> at differences between the replication partners might be worthwhile.
>>>>
>>>> I know that it shows a maximum discard block-size of 0 on one of the
>>>> drbds, but that might be a configuration passed down by the lvm layer
>>>> as well (provisioning_mode?). So searching for differences in the
>>>> volume-groups or volumes might make sense as well.
>>>>
>>>> Regards,
>>>> Klaus
>>> Thanks for your reply, Klaus. However, I don't think it's possible that
>>> anything could be getting "passed down" from LVM because the drbd devices
>>> are built directly on top of the raid arrays, with no LVM layer between...
>> That is why I wrote "passed down": there is LVM on top of drbd, not
>> between raid and drbd ;-)
>>
>>> {
>>>   on ha11a {
>>>     device /dev/drbd1;
>>>     disk /dev/md3;
>>>     address 198.51.100.65:7789;
>>>     meta-disk internal;
>>>   }
>>>
>>>   on ha11b {
>>>     device /dev/drbd1;
>>>     disk /dev/md3;
>>>     address 198.51.100.66:7789;
>>>     meta-disk internal;
>>>   }
>>> }
>>>
>>> --Eric
>>
> You said, 
>
>>>> I know that it shows the maximum discard block-size 0 on one of the
>>>> drbds but that might be a configuration passed down by the lvm layer as
>>>> well.
> How would TRIM support get "passed down" to DRBD from LVM? TRIM support works 
> the other way around, doesn't it? Support gets "passed up" from lower layers.

Exactly. And I'm not talking about "passing down" TRIM support there ;-)
Whether a layer reports TRIM support to the upper layers depends both on
whether the lower layer claims to support TRIM and on the local
configuration of that layer (examples: lvm, luks, ...).
The latter is what I was referring to, and it might be influenced by
configuration passed down from the layer on top - not TRIM explicitly,
but something that makes the layer implicitly decide not to support
trim (e.g. additional complexity not implemented yet).
Anyway - just a few thoughts on why it might still make sense to
look at differences in the upper layers ...
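
If it is something in the upper layer, comparing how the two LVs are mapped
might show it. A rough sketch (untested; LV names taken from the lsblk output
earlier in the thread):

# Segment type of each LV - anything other than a plain linear mapping here
# would be a candidate for swallowing discard support:
lvs -o lv_name,vg_name,segtype

# The device-mapper tables behind the two LVs, for comparison:
dmsetup table vg_on_drbd0-lv_on_drbd0
dmsetup table vg_on_drbd1-lv_on_drbd1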

Regards,
Klaus
 
>
> --Eric




Re: [ClusterLabs] One volume is trimmable but the other is not?

2018-01-26 Thread Klaus Wenninger
On 01/26/2018 07:45 PM, Eric Robinson wrote:
>>> I sent this to the drbd list too, but it’s possible that someone here
>>> may know.
>>>
>>>
>>>
>>> This is a WEIRD one.
>>>
>>>
>>>
>>> Why would one drbd volume be trimmable and the other one not?
>>>
>> IIRC drbd stores some of its configuration in the meta-data as well - I
>> remember some block-size in particular - and that doesn't just depend on the
>> content of the current config files but also on the history (e.g. which
>> peers it has already connected to).
>> Don't know if that helps in particular - just saying that taking a look at
>> differences between the replication partners might be worthwhile.
>>
>> I know that it shows a maximum discard block-size of 0 on one of the drbds,
>> but that might be a configuration passed down by the lvm layer as well
>> (provisioning_mode?). So searching for differences in the volume-groups or
>> volumes might make sense as well.
>>
>> Regards,
>> Klaus
> Thanks for your reply, Klaus. However, I don't think it's possible that 
> anything could be getting "passed down" from LVM because the drbd devices are 
> built directly on top of the raid arrays, with no LVM layer between...
That is why I wrote "passed down": there is LVM on top of drbd, not between
raid and drbd ;-)
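
For completeness, the layering is easy to confirm from the top down with
lsblk's inverse view (device name taken from Eric's mount output):

lsblk -s /dev/mapper/vg_on_drbd1-lv_on_drbd1

which should list the LV first, then drbd1, md3, sda6 and sda underneath it.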

>
> {
>   on ha11a {
>     device /dev/drbd1;
>     disk /dev/md3;
>     address 198.51.100.65:7789;
>     meta-disk internal;
>   }
>
>   on ha11b {
>     device /dev/drbd1;
>     disk /dev/md3;
>     address 198.51.100.66:7789;
>     meta-disk internal;
>   }
> }
>
> --Eric




Re: [ClusterLabs] One volume is trimmable but the other is not?

2018-01-26 Thread Eric Robinson
> > I sent this to the drbd list too, but it’s possible that someone here
> > may know.
> >
> >
> >
> > This is a WEIRD one.
> >
> >
> >
> > Why would one drbd volume be trimmable and the other one not?
> >
> 
> IIRC drbd stores some of its configuration in the meta-data as well - I
> remember some block-size in particular - and that doesn't just depend on the
> content of the current config files but also on the history (e.g. which
> peers it has already connected to).
> Don't know if that helps in particular - just saying that taking a look at
> differences between the replication partners might be worthwhile.
>
> I know that it shows a maximum discard block-size of 0 on one of the drbds,
> but that might be a configuration passed down by the lvm layer as well
> (provisioning_mode?). So searching for differences in the volume-groups or
> volumes might make sense as well.
> 
> Regards,
> Klaus

Thanks for your reply, Klaus. However, I don't think it's possible that 
anything could be getting "passed down" from LVM because the drbd devices are 
built directly on top of the raid arrays, with no LVM layer between...

{
  on ha11a {
    device /dev/drbd1;
    disk /dev/md3;
    address 198.51.100.65:7789;
    meta-disk internal;
  }

  on ha11b {
    device /dev/drbd1;
    disk /dev/md3;
    address 198.51.100.66:7789;
    meta-disk internal;
  }
}

--Eric


Re: [ClusterLabs] One volume is trimmable but the other is not?

2018-01-26 Thread Klaus Wenninger
On 01/25/2018 11:45 PM, Eric Robinson wrote:
>
> I sent this to the drbd list too, but it’s possible that someone here
> may know.
>
>  
>
> This is a WEIRD one.
>
>  
>
> Why would one drbd volume be trimmable and the other one not?
>

IIRC drbd stores some of its configuration in the meta-data as well -
I remember some block-size in particular - and that doesn't just
depend on the content of the current config files but also on the
history (e.g. which peers it has already connected to).
Don't know if that helps in particular - just saying that taking a
look at differences between the replication partners might be
worthwhile.

I know that it shows a maximum discard block-size of 0 on
one of the drbds, but that might be a configuration passed
down by the lvm layer as well (provisioning_mode?).
So searching for differences in the volume-groups or
volumes might make sense as well.
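
For comparing the drbd side itself, something along these lines might help
on both nodes (a sketch; the exact output depends on the drbd version):

# What each drbd device currently advertises to the layer above it:
grep . /sys/block/drbd0/queue/discard_max_bytes
grep . /sys/block/drbd1/queue/discard_max_bytes

# The settings drbd is actually running with (these can differ from what
# the config files currently say):
drbdsetup show

# If the backing disks are SCSI/SATA, how the kernel issues unmap/trim
# for them:
grep . /sys/class/scsi_disk/*/provisioning_mode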

Regards,
Klaus

>  
>
> Here you can see me issuing the trim command against two different
> filesystems. It works on one but fails on the other.
>
>  
>
> ha11a:~ # fstrim -v /ha01_mysql
>
> /ha01_mysql: 0 B (0 bytes) trimmed
>
>  
>
> ha11a:~ # fstrim -v /ha02_mysql
>
> fstrim: /ha02_mysql: the discard operation is not supported
>
>  
>
> Both filesystems are on the same server, two different drbd devices on
> two different mdraid arrays, but the same underlying physical drives.
>
>  
>
> Yet it can be seen that discard is enabled on drbd0 but not on drbd1…
>
>  
>
> NAME                            DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
> sda                                    0      512B       4G         1
> ├─sda1                                 0      512B       4G         1
> │ └─md0                                0      128K     256M         0
> ├─sda2                                 0      512B       4G         1
> │ └─md1                                0      128K     256M         0
> ├─sda3                                 0      512B       4G         1
> ├─sda4                                 0      512B       4G         1
> ├─sda5                                 0      512B       4G         1
> │ └─md2                                0        1M     256M         0
> │   └─drbd0                            0        1M     128M         0
> │     └─vg_on_drbd0-lv_on_drbd0   393216        1M     128M         0
> └─sda6                                 0      512B       4G         1
>   └─md3                                0        1M     256M         0
>     └─drbd1                            0        0B       0B         0
>       └─vg_on_drbd1-lv_on_drbd1        0        0B       0B         0
>
>  
>
>  
>
> The filesystems are set up the same. (Note that I do not want
> automatic discard, so that mount option is not enabled on either
> filesystem. But the problem is not the filesystem anyway, since it
> sits on top of drbd, and you can see from lsblk that the drbd
> volume itself is the problem.)
>
>  
>
> ha11a:~ # mount|grep drbd
>
> /dev/mapper/vg_on_drbd1-lv_on_drbd1 on /ha02_mysql type ext4
> (rw,relatime,stripe=160,data=ordered)
>
> /dev/mapper/vg_on_drbd0-lv_on_drbd0 on /ha01_mysql type ext4
> (rw,relatime,stripe=160,data=ordered)


[ClusterLabs] One volume is trimmable but the other is not?

2018-01-25 Thread Eric Robinson
I sent this to the drbd list too, but it's possible that someone here may know.

This is a WEIRD one.

Why would one drbd volume be trimmable and the other one not?

Here you can see me issuing the trim command against two different filesystems. 
It works on one but fails on the other.

ha11a:~ # fstrim -v /ha01_mysql
/ha01_mysql: 0 B (0 bytes) trimmed

ha11a:~ # fstrim -v /ha02_mysql
fstrim: /ha02_mysql: the discard operation is not supported

Both filesystems are on the same server, two different drbd devices on two 
different mdraid arrays, but the same underlying physical drives.

Yet it can be seen that discard is enabled on drbd0 but not on drbd1...

NAME                            DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda                                    0      512B       4G         1
├─sda1                                 0      512B       4G         1
│ └─md0                                0      128K     256M         0
├─sda2                                 0      512B       4G         1
│ └─md1                                0      128K     256M         0
├─sda3                                 0      512B       4G         1
├─sda4                                 0      512B       4G         1
├─sda5                                 0      512B       4G         1
│ └─md2                                0        1M     256M         0
│   └─drbd0                            0        1M     128M         0
│     └─vg_on_drbd0-lv_on_drbd0   393216        1M     128M         0
└─sda6                                 0      512B       4G         1
  └─md3                                0        1M     256M         0
    └─drbd1                            0        0B       0B         0
      └─vg_on_drbd1-lv_on_drbd1        0        0B       0B         0


The filesystems are set up the same. (Note that I do not want automatic discard,
so that mount option is not enabled on either filesystem. But the problem is not
the filesystem anyway, since it sits on top of drbd, and you can see from lsblk
that the drbd volume itself is the problem.)

ha11a:~ # mount|grep drbd
/dev/mapper/vg_on_drbd1-lv_on_drbd1 on /ha02_mysql type ext4 
(rw,relatime,stripe=160,data=ordered)
/dev/mapper/vg_on_drbd0-lv_on_drbd0 on /ha01_mysql type ext4 
(rw,relatime,stripe=160,data=ordered)
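
For reference, the kernel's discard limits for each layer can also be read
directly from sysfs (device names as in the lsblk output above):

for dev in md2 drbd0 md3 drbd1; do
  echo "$dev: $(cat /sys/block/$dev/queue/discard_max_bytes) / $(cat /sys/block/$dev/queue/discard_granularity)"
done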





___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org