What disk image type do you use? QCOW2 is known to not be the fastest; if
performance matters, raw is preferable. Also check whether
kvm.storage.pool.io.policy (in Global Settings) is set to io_uring.
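For reference, a quick way to check both on a KVM host (the volume path,
VM name, and CloudMonkey usage below are illustrative, not taken from
your setup):

    # Inspect the on-disk format of a volume on primary storage
    qemu-img info /mnt/primary/<volume-uuid>

    # Query the global setting, e.g. via CloudMonkey
    cmk list configurations name=kvm.storage.pool.io.policy

    # Confirm on a running guest; expect io='io_uring' in the driver line
    virsh dumpxml <vm-name> | grep "driver name="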

On 18.11.24, 14:06, "Leo" <szz...@163.com> wrote:


Hi, 


Thanks. 
Yes, I have tried disabling NFS 4 on the storage and mounting it with
vers=3, but the performance difference is no more than 5%.
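For reference, the v3 mount was roughly of this form (server, export
path, and mount point are placeholders, not the actual values):

    mount -t nfs -o vers=3 nfs-server:/export/primary /mnt/primary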


Best regards.
Leo






---- Replied Message ----
| From | Oliver Dzombic <i...@layer7.net> |
| Date | 11/18/2024 15:57 |
| To | <users@cloudstack.apache.org> |
| Subject | Re: about the storage performance |
Hi,


try to mount with NFS version 3


That helps in some cases.


--
With kind regards / Best regards


Oliver Dzombic
Layer7 Networks


i...@layer7.net


Address:


Layer7 Networks GmbH
Zum Sonnenberg 1-3
63571 Gelnhausen


HRB 96293, district court (Amtsgericht) Hanau
Managing director: Oliver Dzombic
VAT ID: DE259845632


On 18.11.24 03:22, 史洲洲 wrote:
Hi,


Sorry, I need to send this mail again. This is the same question,
only with the URLs for the images appended.


I have tested the performance of my storage system using three
different methods:


1. Using the NFS protocol with a 10G network, on VMware ESXi, with a 4K
block size and 100% sequential read operations.


URL of this image: https://s2.loli.net/2024/11/18/WFsek69I3ZBblvQ.png




2. Using the NFS protocol with a 10G network, on CloudStack, with a 4K
block size and 100% sequential read operations.
Mounted with these options: 'vers=4.1,nconnect=16' (see the mount sketch
after this list).


URL of this image: https://s2.loli.net/2024/11/18/a2WY3gpGVA6XeCO.png




3. Using the FC protocol at 16G, on CloudStack, treating the LUN
accessed via FC as a local disk, with a 4K block size and 100%
sequential read operations.


URL of this image: https://s2.loli.net/2024/11/18/WFsek69I3ZBblvQ.png
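For reference (the mount sketch mentioned under test 2), the NFS 4.1
mount was roughly of this form; server, export path, and mount point
are placeholders:

    mount -t nfs -o vers=4.1,nconnect=16 nfs-server:/export/primary /mnt/primary

    # Verify the options the kernel actually negotiated
    nfsstat -m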




The results show that, with the same storage system, the average I/O
response time is best (0.32) with NFS on VMware ESXi. The second best is
FC-SAN on CloudStack (1.25), and the worst is NFS on CloudStack.
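For context, a 4K 100% sequential read test like the one above can be
reproduced inside a guest with fio; the device path, queue depth, and
runtime below are illustrative:

    fio --name=seqread --filename=/dev/vdb --rw=read --bs=4k \
        --ioengine=libaio --iodepth=16 --direct=1 \
        --runtime=60 --time_based --group_reporting

fio's reported average completion latency (clat) corresponds to the
average I/O response time compared above.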


How can NFS on ESXi be faster than even FC-SAN?


I believe there may be some configuration that could improve storage
performance when using CloudStack. I would greatly appreciate it if
anyone could offer advice or solutions to help me optimize CloudStack
storage performance.


Thank you very much for your attention.


Best regards.


Leo.






