Guido Falsi wrote on 2023/07/11 16:27:
> Anyway you can monitor "vfs.freevnodes" and "vfs.numvnodes" to discover if
> that's an issue for you. If numvnodes is very near or above maxvnodes and at
> the same time freevnodes is very low you have a problem. Another indicator
> could be vfs.vnodes_created incrementing very fast together with the
> previously described condition.
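The check Guido describes could be scripted. Here is a minimal sketch; the 90%/5% thresholds and the sample count are my own guesses, not from the thread, and the `|| echo 0` fallbacks only keep the script from aborting on systems that lack these FreeBSD sysctls:

```shell
#!/bin/sh
# Sketch: flag when vfs.numvnodes is near kern.maxvnodes while
# vfs.freevnodes is low (the condition described above).
# Thresholds (90% / 5% of maxvnodes) are arbitrary guesses; tune them.
max=$(sysctl -n kern.maxvnodes 2>/dev/null || echo 0)
i=0
while [ "$i" -lt 3 ]; do
    num=$(sysctl -n vfs.numvnodes 2>/dev/null || echo 0)
    free=$(sysctl -n vfs.freevnodes 2>/dev/null || echo 0)
    created=$(sysctl -n vfs.vnodes_created 2>/dev/null || echo 0)
    echo "sample: num=$num free=$free created=$created"
    if [ "$max" -gt 0 ] && [ "$num" -ge $((max * 9 / 10)) ] \
        && [ "$free" -le $((max / 20)) ]; then
        echo "WARNING: vnode pressure (numvnodes near maxvnodes, freevnodes low)"
    fi
    i=$((i + 1))
    sleep 2
done
```

Watching vfs.vnodes_created over successive samples then shows whether it is incrementing very fast together with that condition.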
Thank you.
For now, here are the current values.
kern.maxvnodes has been increased to a good round number :) (+427921)
# env TZ=Etc/UTC sysctl kern.boottime kern.maxvnodes vfs.numvnodes vfs.freevnodes vfs.vnodes_created
kern.boottime: { sec = 1687490420, usec = 560972 } Fri Jun 23 03:20:20 2023
kern.maxvnodes: 1048576
vfs.numvnodes: 1048572
vfs.freevnodes: 409882
vfs.vnodes_created: 137552034
As points of concern: vfs.numvnodes is very close to kern.maxvnodes, and
vfs.freevnodes is not far from the amount (+427921) by which kern.maxvnodes
was increased.
What do these values mean? Hmmm? :)
George Mitchell wrote on 2023/07/10 23:43:
> Dang, I was hoping we could blame it on SCHED_ULE. -- George
I also find ULE somewhat strange.
For example, take the following two commands (the shell that runs them has
nice 0):
nice -n 1 process-that-accesses-some-disk
nice -n 20 process-that-accesses-some-disk
CPU time seems to be granted according to the nice value. However, I don't
think the chance to access the disk follows that value.
As a result, even the process with the lower scheduling priority can finish first.
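That feeling could be turned into a rough experiment. Here is a minimal sketch; the file names, sizes, and the use of dd are my own invention, not from the thread. Since nice(1) only lowers CPU scheduling priority and does not throttle or reorder disk I/O, an I/O-bound job may finish in nearly the same time under both nice values:

```shell
#!/bin/sh
# Hypothetical experiment: run the same disk-writing job under two very
# different nice values and compare wall-clock times.
out1=/tmp/nice1.dat
out2=/tmp/nice20.dat
t0=$(date +%s)
nice -n 1  dd if=/dev/zero of="$out1" bs=1048576 count=64 2>/dev/null
t1=$(date +%s)
nice -n 20 dd if=/dev/zero of="$out2" bs=1048576 count=64 2>/dev/null
t2=$(date +%s)
echo "nice 1: $((t1 - t0))s, nice 20: $((t2 - t1))s"
rm -f "$out1" "$out2"
```

On an otherwise idle disk, similar timings for both runs would support the impression that disk access does not follow the nice value.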
This is only my impression :)
There is no hard basis for the feeling, though; after all, every behavior is
written down in the program :)
Regards.