Hi Eneko,
On 16.02.2022 12:33, Eneko Lacunza wrote:
Hi Sergey,
So, does this really make sense? If you put the new 2 disks in node7
in a pool, that data won't be able to survive node7 failure.
You are right, if node 7 fails the data won't be available.
But I was thinking of adding the 2 disks as 2 OSDs in a new pool that is
shared across all nodes.
If you're trying to benchmark the disks, that wouldn't be a good test,
because in a real deployment disk IO for only one VM would be worse
(due to replication and network latencies).
Not only for one VM; I have 2 Windows VMs, and more in the future.
What IOPS are you getting in your 4K tests? You won't get near direct
disk IOPS...
Should I test the host disk or the VM disk?
Did you try with multiple parallel VMs? Aggregate 4K results should be
much better :)
I will try it this way; maybe it will work.
Cheers
On 16/2/22 at 10:24, Сергей Цаболов wrote:
Hi Eneko,
On 16.02.2022 11:58, Eneko Lacunza wrote:
Hi Sergey,
On 16/2/22 at 9:52, Сергей Цаболов wrote:
I have a 7-node PVE cluster + Ceph storage.
On node 7 I added 2 new disks and want to create a specific new OSD pool
in Ceph.
Is it possible to create a dedicated pool with the new disks?
You are adding 2 additional disk in each node, right?
No, I added the new disks only on node 7, not on each node of the cluster.
You can assign them to a new pool, creating custom crush rules.
Yes, I know how to create new rules.
On one node, as a test, I added 2 SSD disks and created a new rule:

ceph osd crush rule create-replicated replicated_ssd default host ssd

and with this rule I created the new pool vm.ssd.
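For reference, the full sequence could look like this (a sketch; the PG
count and the application tag are examples, not taken from the thread):

```shell
# Create a CRUSH rule that replicates across hosts, restricted to OSDs
# with device class "ssd"
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Create a replicated pool using that rule (PG count 128 is an example;
# size it for your cluster)
ceph osd pool create vm.ssd 128 128 replicated replicated_ssd

# Tag the pool for RBD use so Proxmox can consume it
ceph osd pool application enable vm.ssd rbd
```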
Why do you want to use those disks for a different pool? What disks
do you have now, and what are the new ones? (for example, are they all
HDD or SSD...)
I want to make a new pool with HDD (SAS) disks as dedicated storage for
some Windows Server VMs.
The existing pools:
vm.pool - base pool for VM disks
cephfs_data - some disks, ISOs, and other data
vm.ssd - the new pool I made from the 2 SSD disks
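Before building a rule for the planned SAS pool, the device class Ceph
assigned to the new disks can be verified (SAS spinners will typically
report as "hdd"):

```shell
# List the device classes Ceph knows about
ceph osd crush class ls

# Show OSDs with their device class and host placement
ceph osd tree
```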
I tried to test the Windows Server disk speed for read/write and RND4K
Q32T1 with CrystalDiskMark 8.0.4 x64.
With the VM disk configured as SATA with SSD emulation, Cache: Write
back, and Discard, the sequential read/write speed is very good, something like:
SEQ1M Q8T1 1797.43/1713.07
SEQ1M Q1T1 1790.77/1350.55
but the RND4K Q32T1 and RND4K Q1T1 results are not good, very low.
After the test I thought that if I add the 2 new disks and configure
them as a dedicated pool, the RND4K Q32T1 and RND4K Q1T1 speeds might
improve.
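For a number roughly comparable to CrystalDiskMark's RND4K Q32T1, a fio
run can be used from a Linux VM on the same storage (a sketch; the test
file path and sizes are assumptions, and fio must be installed):

```shell
# 4K random read, queue depth 32, direct I/O -- roughly equivalent to
# the RND4K Q32T1 workload CrystalDiskMark runs
fio --name=rnd4k-q32 --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based \
    --size=1G --filename=/mnt/cephtest/fio.dat  # path on the Ceph-backed disk (assumption)
```

Running the same job with --iodepth=1 approximates RND4K Q1T1; with
replicated Ceph storage, per-VM 4K numbers are dominated by network and
replication latency, as noted above.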
Thank you
Cheers
Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
Sergey TS
Best regards
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user