Re: Mixing HDD and SSD in lvm

2024-02-11 Thread Andy Smith
Hi,

On Sun, Feb 11, 2024 at 11:00:07AM +0100, Kamil Jońca wrote:
> ID# ATTRIBUTE_NAME  FLAGSVALUE WORST THRESH FAIL RAW_VALUE
> 246 Total_LBAs_Written  -O--CK   100   100   000-14380174325
> [...]
> --8<---cut here---end--->8---
> 
> Do I understand correctly that to get the TB written I should take
> "Total_LBAs_Written" and divide it by 1024*1024*2?

In theory yes. The raw value of attribute 246 is supposed to be the
number of LBAs written where an LBA is the logical sector size, in
your case 512 bytes. However, I have a number of devices where 246
is not in units of 512 bytes. Aside from the usual 512b I have seen
units of

- 512,000 bytes
- 1GiB (!)
- 1MiB
- 32MiB

So your process is correct but you will want to check what units
your drives actually increment in. If possible, write a known
quantity to one of them and see how much it goes up by.

The documentation for your drives may also let you know this, or let
you know another SMART attribute you can use for this purpose.
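That check can be run from a shell roughly like this. This is only a sketch: the device path, the scratch-file path, and the attribute-parsing with awk are assumptions to adapt to your setup.

```shell
# Estimate the unit size of SMART attribute 246 by writing a known amount
# of data and comparing the raw value before and after.

read_attr246() {   # print the raw value of attribute 246 for a device
    sudo smartctl -A "$1" | awk '$1 == 246 { print $NF }'
}

unit_size() {      # bytes_written raw_before raw_after -> unit in bytes
    echo $(( $1 / ($3 - $2) ))
}

# Example usage (uncomment on a machine where /dev/sdc is the SSD):
# before=$(read_attr246 /dev/sdc)
# dd if=/dev/zero of=/tmp/trimtest bs=1M count=1024 oflag=direct conv=fsync
# after=$(read_attr246 /dev/sdc)
# echo "estimated unit: $(unit_size $((1024*1024*1024)) "$before" "$after") bytes"
```

If writing 1 GiB raises the raw value by 2097152, the unit is the usual 512 bytes; a rise of only 32 would indicate 32 MiB units.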

> 2nd question.
> I have read about "trim/discard" operations in the SSD context and I am
> not sure how to set these up here.

These days you don't need to do anything. On default Debian there is a
systemd timer called fstrim.timer that activates periodically and runs an
offline discard on every mounted filesystem; this is probably the best
approach. You can instead put "discard" in the mount options of most
filesystems so that they do online discard as they go, but there is
usually no need for that.

Also LVM has a discard option. It is on by default and all this does
is trigger a discard when you remove an LV. Again that is best left
on by default.
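To confirm this is in place on a Debian system, a sketch of the usual checks (the commands are standard, but output formats vary between versions):

```shell
# Confirm the periodic trim timer is active (enabled by default on Debian):
systemctl status fstrim.timer

# Manually run an offline discard on all mounted filesystems, verbosely:
sudo fstrim -av

# Show LVM's effective discard-on-lvremove setting:
lvmconfig --type full devices/issue_discards
```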

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Mixing HDD and SSD in lvm

2024-02-11 Thread Kamil Jońca
Kamil Jońca  writes:

> Debian box with LVM
> LVM uses 2 PVs - RAID devices, each using 2 rotating HDDs
> (with SATA interfaces).
>
> Now I am considering replacing one PV with an md device consisting of SSD
> discs, so LVM will have one "HDD" based PV and one SSD based PV.
> Should I worry about anything (speed differences or sth)?
> KJ

Finally I did it. I installed 2 SSDs, made a RAID1 on them, and used this
md device as a PV in LVM.
Then with pvmove I moved the 2 most loaded LVs to this md (and the rest to
the other PV, then removed the old, empty PV).
So far so good; everything seems to be working fine (in fact the machine
seems more responsive, especially with reads, but this might be
autosuggestion).
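For the archive, the migration described above corresponds roughly to the following command sequence. This is a sketch only; the device, VG and LV names are made-up placeholders, since the real names were not given in the thread.

```shell
# Build a RAID1 from the two new SSDs (names are placeholders):
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
    /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

# Turn it into a PV and add it to the existing VG:
pvcreate /dev/md1
vgextend vg0 /dev/md1

# Move the busiest LVs onto the SSD PV, then drain and drop the old PV:
pvmove -n busylv1 /dev/md0 /dev/md1
pvmove -n busylv2 /dev/md0 /dev/md1
pvmove /dev/md0          # move any remaining extents to other free PVs
vgreduce vg0 /dev/md0
pvremove /dev/md0
```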

But I have 2 questions.

1.
--8<---cut here---start->8---
sudo smartctl -x /dev/sdc
[...]
=== START OF INFORMATION SECTION ===
Model Family: Crucial/Micron Client SSDs
Device Model: CT4000MX500SSD1
Serial Number:2333E86CB9A0
LU WWN Device Id: 5 00a075 1e86cb9a0
Firmware Version: M3CR046
User Capacity:4 000 787 030 016 bytes [4,00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate:Solid State Device
Form Factor:  2.5 inches
TRIM Command: Available
Device is:In smartctl database 7.3/5528
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:Sun Feb 11 10:48:39 2024 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, frozen [SEC2]
Wt Cache Reorder: Unknown
[...]
ID# ATTRIBUTE_NAME  FLAGSVALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate POSR-K   100   100   000-0
  5 Reallocate_NAND_Blk_Cnt -O--CK   100   100   010-0
  9 Power_On_Hours  -O--CK   100   100   000-88
 12 Power_Cycle_Count   -O--CK   100   100   000-10
171 Program_Fail_Count  -O--CK   100   100   000-0
172 Erase_Fail_Count-O--CK   100   100   000-0
173 Ave_Block-Erase_Count   -O--CK   100   100   000-4
174 Unexpect_Power_Loss_Ct  -O--CK   100   100   000-0
180 Unused_Reserve_NAND_Blk PO--CK   000   000   000-231
183 SATA_Interfac_Downshift -O--CK   100   100   000-0
184 Error_Correction_Count  -O--CK   100   100   000-0
187 Reported_Uncorrect  -O--CK   100   100   000-0
194 Temperature_Celsius -O---K   074   059   000-26 (Min/Max 19/41)
196 Reallocated_Event_Count -O--CK   100   100   000-0
197 Current_Pending_ECC_Cnt -O--CK   100   100   000-0
198 Offline_Uncorrectable   CK   100   100   000-0
199 UDMA_CRC_Error_Count-O--CK   100   100   000-0
202 Percent_Lifetime_Remain CK   100   100   001-0
206 Write_Error_Rate-OSR--   100   100   000-0
210 Success_RAIN_Recov_Cnt  -O--CK   100   100   000-0
246 Total_LBAs_Written  -O--CK   100   100   000-14380174325
247 Host_Program_Page_Count -O--CK   100   100   000-124507650
248 FTL_Program_Page_Count  -O--CK   100   100   000-72553858
[...]
--8<---cut here---end--->8---

Do I understand correctly that to get the TB written I should take
"Total_LBAs_Written" and divide it by 1024*1024*2?
(dividing by 2 converts 512-byte LBAs to KiB, and the two factors of 1024
then take KiB to GiB)
so in my case it would be

--8<---cut here---start->8---
echo $((  14380174325 / (1024*1024*2 ) ))
6857
--8<---cut here---end--->8---
This suggests almost 7 TB - strictly 6857 GiB, about 6.7 TiB (not
unbelievable when I think about the initial operations; later it should be
less per day).
Am I correct? (And any suggestions about these SMART values?)
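A more explicit version of that conversion, as a sketch (assuming 512-byte LBAs; note that dividing by 1024*1024*2 yields GiB, not TB):

```shell
lbas=14380174325          # raw Total_LBAs_Written from smartctl
bytes=$(( lbas * 512 ))   # assuming 512-byte LBAs
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "$bytes bytes = $gib GiB"   # -> 7362649254400 bytes = 6857 GiB
```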

2nd question.
I have read about "trim/discard" operations in the SSD context and I am not
sure how to set these up here.


KJ



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Andy Smith
Hi,

On Tue, Feb 06, 2024 at 12:18:26PM +0100, Kamil Jońca wrote:
> My main concern is whether speed differences between SSD and HDD in one
> LVM can cause any problems.

The default allocation policy for LVM ("normal") is to use an
arbitrary PV that has space. So this means that unless you say so,
you will not know which PV the extents for any given LV will go to.
Assuming you create an LV that is not larger than an entire PV, all
of it will end up on one or the other and will have the same
performance profile.

If you don't like that you can specify which PV to put it on, at
lvcreate time.
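As a sketch of that (the VG, LV and PV names here are made up): you pin an LV to a PV by naming the PV at the end of the lvcreate command.

```shell
# Allocate the new LV's extents only from the named PV (/dev/md1 being the
# SSD pair in this thread's setup; all names are illustrative):
lvcreate -n fastlv -L 50G vg0 /dev/md1
```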

If you tell LVM to stripe extents between the two PVs then it will
not cause a problem, but I expect performance to be impacted,
possibly capped at that of the slowest PV.

Do check your device's sector size. I have been having problems
with mixed 512 vs 4K devices. That is only when the 4K device is
formatted to only do 4K though; most "Advanced Format" devices can
do both 512b and 4K.
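Checking the logical and physical sector sizes is straightforward; the commands below are standard, and the device name is an example:

```shell
# Logical vs physical sector size for every block device:
lsblk -o NAME,LOG-SEC,PHY-SEC

# Or for one device via blockdev:
sudo blockdev --getss --getpbsz /dev/sdc   # logical, then physical, bytes
```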

If you are trying to do tiered storage you may have more luck with
dm-cache, zfs, bcache or (the only recently upstreamed) bcachefs.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Andy Smith
Hi,

On Tue, Feb 06, 2024 at 11:03:03AM +0100, Basti wrote:
> If you use mdadm for RAID you can mark the slower disk as 'write-mostly' to
> get more read speed.

Both (MD) RAID-1 and RAID-10 will work this out by themselves, by
the way, and tend to read from the fastest device.

I have benchmarked this. With very fast enterprise NVMe as the
faster device and consumer SATA SSD as the slower "write-mostly", I
wasn't able to detect much benefit from using "write-mostly", i.e.
MD already chose to read mostly from the NVMe.

When pairing any kind of SSD with HDD, the difference was more
dramatic and "write-mostly" did have noticeable beneficial effect,
though not huge. Again, MD by itself chose to read from the SSD even
without "write-mostly".

I hypothesise that this is because MD picks the mirror device with
the lowest outstanding request count, and that is often going to be
the flash-based device.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Andy Smith
Hi,

On Tue, Feb 06, 2024 at 09:04:13AM +0100, Hans wrote:
> I am not sure, if it is possible, to do same in LVM. As far as I know,
> LVM must also set the correct device names in correct order, mustn't it?

Neither LVM nor MD will have a problem with member devices changing
their device path as they both put their own metadata onto the
devices and use that to detect them.

If you have set a filter in lvm.conf to only look at certain
devices, you might want to be aware of the full range of names that
can happen, though.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Max Nikulin

On 06/02/2024 18:18, Kamil Jońca wrote:

> 1. now VG has two PV. Both are raid1 with two HDD.
> 2. I want to have VG with one PV as RAID1 with 2 HDD's and second PV as
> RAID1 with 2SSD's

Just a warning: it seems it is necessary to ensure that the drives use the
same block size, though my impression may be wrong.

"512e vs 4K sector confusion", Sun, 14 Jan 2024 08:01:52 +
https://lists.debian.org/msgid-search/ZaOU8Bd/acsoh...@mail.bitfolk.com




Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Kamil Jońca
Kamil Jońca  writes:

> Marco Moock  writes:
>
>> On 06.02.2024 at 07:17:02 Kamil Jońca wrote:
>>
>>> Should I worry about anything (speed differences or sth)?
>>
>> Speed differences will occur because reading and writing from/to the
>> SSD will be much faster.
> Of course, but can it cause any data damage to LVM?
>
> I am asking because some time ago there was a (different) story about
> SMR drives, which can cause problems when used in RAID. And I am
> wondering if I can have similar problems here.
>
> KJ

Maybe I was not precise:
1. Now the VG has two PVs. Both are RAID1 with two HDDs.
2. I want to have a VG with one PV as RAID1 with 2 HDDs and a second PV as
RAID1 with 2 SSDs.

So:
1. I do not want to mix HDD and SSD in one RAID.
2. I do not want to play with other RAID levels.

I assume that my motherboard will not have problems with the SSDs (it is a
B450 AORUS PRO).

My main concern is whether speed differences between SSD and HDD in one
LVM can cause any problems.

KJ



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Dan Ritter
Kamil Jońca wrote: 
> 
> Debian box with LVM
> LVM uses  2 PV - raid devices each uses 2 HDD (rotating)
> discs (with sata interfaces).
> 
> Now I am considering replacing one PV with md device constisting of SSD
> discs, so LVM will be have one "HDD" based pv and one SSD based PV.
> Should I worry about anything (speed differences or sth)?

1. Refer to the disks in mdadm with the /dev/disk/by-id names, not
/dev/sdb style names.

2. Use mdadm's RAID-1 with the write-intent bitmap feature and
specify that the spinning disk will use the write-mostly
feature.
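Those two suggestions combined might look like the following sketch; the by-id names are placeholders.

```shell
# RAID1 with an internal write-intent bitmap; the HDD is flagged
# write-mostly so reads prefer the SSD (device names are placeholders):
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --bitmap=internal \
    /dev/disk/by-id/ata-SSD_SERIAL \
    --write-mostly /dev/disk/by-id/ata-HDD_SERIAL

# For an existing array, the flag can be toggled through sysfs:
echo writemostly | sudo tee /sys/block/md0/md/dev-sdb/state
```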

RAID 0 will be bad; don't try it.

Good luck; I've never actually tried this, but I have thought about it
quite a bit.

-dsr-



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Basti
If you use mdadm for RAID you can mark the slower disk as 'write-mostly' 
to get more read speed.


On 06.02.24 09:23, Marco Moock wrote:

> On 06.02.2024 at 08:54:18 Kamil Jońca wrote:
>
>> Marco Moock  writes:
>>
>>> On 06.02.2024 at 07:17:02 Kamil Jońca wrote:
>>>
>>>> Should I worry about anything (speed differences or sth)?
>>>
>>> Speed differences will occur because reading and writing from/to the
>>> SSD will be much faster.
>>
>> Of course, but can it cause any data damage to LVM?
>>
>> I am asking because some time ago there was a (different) story about
>> SMR drives, which can cause problems when used in RAID. And I am
>> wondering if I can have similar problems here.
>
> That was because they have a significant decrease in writing
> performance when shingled data needs to be rewritten.
> Some RAID controllers treated that as a drive failure.
> SSDs normally have a constant write speed, so I don't think this
> problem occurs here.




Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Marco Moock
On 06.02.2024 at 08:54:18 Kamil Jońca wrote:

> Marco Moock  writes:
> 
> > On 06.02.2024 at 07:17:02 Kamil Jońca wrote:
> >
> >> Should I worry about anything (speed differences or sth)?
> >
> > Speed differences will occur because reading and writing from/to the
> > SSD will be much faster.
> Of course, but can it cause any data damage to LVM?
> 
> I am asking because some time ago there was a (different) story about
> SMR drives, which can cause problems when used in RAID. And I am
> wondering if I can have similar problems here.

That was because they have a significant decrease in writing
performance when shingled data needs to be rewritten.
Some RAID controllers treated that as a drive failure.
SSDs normally have a constant write speed, so I don't think this
problem occurs here.

-- 
Regards
Marco

Please send spam and advertising to ichschickerekl...@cartoonies.org



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Kamil Jońca
Marco Moock  writes:

> On 06.02.2024 at 07:17:02 Kamil Jońca wrote:
>
>> Should I worry about anything (speed differences or sth)?
>
> Speed differences will occur because reading and writing from/to the
> SSD will be much faster.
Of course, but can it cause any data damage to LVM?

I am asking because some time ago there was a (different) story about SMR
drives, which can cause problems when used in RAID. And I am wondering if
I can have similar problems here.

KJ



Re: Mixing HDD and SSD in lvm

2024-02-06 Thread Hans
On Tuesday, 6 February 2024, 07:17:02 CET, Kamil Jońca wrote:
Hi Kamil,

I don't know if this will work at all. The reason is that the BIOS might
cause trouble, because it might not always detect the drives in the
correct order.

For example, in my case I have several SATA drives, connected from port 0 to 
4.

Port 0 = 3 partitions Linux (HDD)
Port 1 = 1 partition Windows (HDD)
Port 3 = 1 partition Daten (SSD)
Port 4 = 2 partitions Daten (HDD)

So Port 3 should always be /dev/sdc1, and Port 4 should be /dev/sdd1 and
/dev/sdd2, but every time I boot I cannot be sure. It might be that Port 3
becomes /dev/sdd1 and Port 4 becomes /dev/sdc1 and /dev/sdc2.

To get everything correctly mounted, I am using UUIDs in /etc/fstab
instead of /dev/sdX.

I am not sure if it is possible to do the same in LVM. As far as I know,
LVM must also set the correct device names in the correct order, mustn't
it?
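The UUID approach can be sketched like this; the UUID shown is made up, and `blkid` prints the real one for your partition:

```shell
# Find the filesystem UUID of a partition:
sudo blkid /dev/sdc1

# Then refer to it in /etc/fstab instead of a /dev/sdX name, e.g.:
# UUID=0a1b2c3d-1111-2222-3333-444455556666  /daten  ext4  defaults  0  2
```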

Maybe someone knows more; I just wanted to mention the point that the BIOS
might interfere when you are using slower and faster hard drives.

Hope this helps.

Best regards

Hans 

> Debian box with LVM
> LVM uses 2 PVs - RAID devices, each using 2 rotating HDDs
> (with SATA interfaces).
> 
> Now I am considering replacing one PV with an md device consisting of SSD
> discs, so LVM will have one "HDD" based PV and one SSD based PV.
> Should I worry about anything (speed differences or sth)?
> KJ






Re: Mixing HDD and SSD in lvm

2024-02-05 Thread Marco Moock
On 06.02.2024 at 07:17:02 Kamil Jońca wrote:

> Should I worry about anything (speed differences or sth)?

Speed differences will occur because reading and writing from/to the
SSD will be much faster.

-- 
kind regards
Marco

Please send spam and advertising to ichschickerekl...@cartoonies.org



Mixing HDD and SSD in lvm

2024-02-05 Thread Kamil Jońca


Debian box with LVM.
LVM uses 2 PVs - RAID devices, each using 2 rotating HDDs
(with SATA interfaces).

Now I am considering replacing one PV with an md device consisting of SSD
discs, so LVM will have one "HDD" based PV and one SSD based PV.
Should I worry about anything (speed differences or sth)?
KJ