Hello to all,

fortunately, it turned out that TrueNAS runs a ZFS pool scrub once a month (I completely forgot that :( ) and it lasted more than 24 h ... once it finished, the I/O wait dropped back to a minimum ...
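
For the archives: a running scrub is visible from the TrueNAS shell, and the schedule itself lives under Tasks -> Scrub Tasks in the TrueNAS UI. A quick check ("tank" below is just a placeholder pool name):

# inside the TrueNAS VM; "tank" is a placeholder pool name
zpool status tank    # look for "scan: scrub in progress" in the output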

I apologize for the disturbance.

BR

Tonči

srdačan pozdrav / best regards

Tonči Stipičević, dipl. ing. elektr.
direktor / manager

SUMA Informatika d.o.o., Badalićeva 27, OIB 93926415263

Podrška / Upravljanje IT sustavima za male i srednje tvrtke
Small & Medium Business IT Support / Management

mob: 091 1234003
www.suma-informatika.hr

On 06. 10. 2024. 15:01, Tonči Stipičević wrote:
continued from the previous post:

on the other host, the TrueNAS VM (TrueNAS-13.0-U6.2) is causing ~1 GB/s of reads ... Is this a host issue or a VM-internal issue? ... where should I "intervene"?


IOTOP:

Total DISK READ:      1037.23 M/s | Total DISK WRITE: 1060.48 K/s
Current DISK READ:     702.02 M/s | Current DISK WRITE:    1679.83 K/s
    TID  PRIO  USER    DISK READ>  DISK WRITE COMMAND
1803753 be/4 root      283.67 M/s    0.00 B/s kvm -id 106 -name TN0103,debug-threads=on -no-shutdown -chardev socket,id=~024,tx_queue_size=256,bootindex=102 -machine type=pc+pve0 [iou-wrk-1726849]
1803546 be/0 root       59.29 M/s  811.01 B/s [zvol_tq-3]
1803545 be/0 root       56.26 M/s    0.00 B/s [zvol_tq-3]
1803544 be/0 root       54.22 M/s    0.00 B/s [zvol_tq-3]
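
To separate host from guest I can watch the pool from both ends at the same time; if the host-side read rate matches what the guest reports, the I/O originates inside the VM. A sketch, with placeholder pool names:

# on the Proxmox host ("rpool" is a placeholder pool name)
zpool iostat -v rpool 1   # per-vdev read rates at 1 s intervals
iostat -x 1               # zvols show up here as zd* block devices

# inside the TrueNAS VM (FreeBSD shell)
gstat -p                  # per-disk I/O as the guest sees it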


Thank you and BR


On 06. 10. 2024. 14:42, Tonči Stipičević wrote:
Hello to all,

I've been running TrueNAS for years as a full VM on a Proxmox host. No SATA pass-through.

The host data pool consists of 6 x 4 TB enterprise SATA drives in RAID10, and that is where the TrueNAS disk images reside (1 x boot disk, 3 x 2 TB virtual disks, striped).

Until last week everything was working smoothly (in real time), but after upgrading to the latest Proxmox version (community/no-subscription repo), the TrueNAS VM (also on the latest version) started generating very high I/O wait (60-70%).
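
One thing I can still rule out is the new kernel itself, by booting the previous one and seeing whether the I/O wait follows the kernel (assuming PVE 7.2 or later, where proxmox-boot-tool supports pinning; the version below is only an example):

# on the Proxmox host
proxmox-boot-tool kernel list                # show installed kernels
proxmox-boot-tool kernel pin 6.2.16-20-pve   # example version - substitute the previous one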

This host runs 10 VMs, and when the TrueNAS VM is not running, I/O wait drops to 2-3% while CPU utilization runs between 20-30% depending on the VM load. Most important -> CPU utilization is much higher than I/O wait ...

But as soon as TrueNAS boots, I/O wait jumps up to 60-70% ... and everything slows down ...

I moved all other VMs to another host to eliminate all possible influences, but the TrueNAS VM alone generates the same high I/O wait ...

Is there any way to debug this high I/O wait problem? ...

Total DISK READ:       208.71 M/s | Total DISK WRITE: 0.00 B/s
Current DISK READ:     171.33 M/s | Current DISK WRITE: 0.00 B/s
    TID  PRIO  USER    DISK READ>  DISK WRITE  SWAPIN      IO COMMAND
1189075 be/4 root       50.80 M/s    0.00 B/s  ?unavailable? kvm -id 100 -name TN02,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/v~d=net0,rx_queue_size=1024,tx_queue_size=1024 -machine type=pc+pve0 [iou-wrk-876705]
    370 be/0 root       40.69 M/s    0.00 B/s  ?unavailable? [z_rd_int]
1189069 be/4 root       17.61 M/s    0.00 B/s  ?unavailable? kvm -id 100 -name TN02,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/v~d=net0,rx_queue_size=1024,tx_queue_size=1024 -machine type=pc+pve0 [iou-wrk-876705]
1188348 be/0 root       10.78 M/s    0.00 B/s  ?unavailable? [zvol]

iotop shows that z_rd_int and kvm (the TrueNAS VM) read a lot ... Why? ... why is so much reading needed when TrueNAS is doing nothing (4-5% CPU utilization inside the VM), no file transfers or anything like that ...
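
Things I can check on the host before digging deeper ("rpool" below is just a placeholder pool name):

# on the Proxmox host
zpool status rpool   # a scrub or resilver in progress would explain z_rd_int reads
arcstat 1            # ARC hit/miss rates; heavy misses mean the reads really hit disk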


Until this weekend everything worked seamlessly, and I had never thought about migrating TrueNAS to bare metal or SATA passthrough.


Thank you very much for your help and support.


Best regards

Tonči


