Hey y’all,

One challenge I’ve been looking at is how to set up an appropriate memory cgroup 
limit for workloads that are leveraging virtiofs (i.e., running pods with Kata 
Containers). I noticed that the memory usage of the virtiofsd daemon itself can 
grow considerably depending on the workload, and by much more than I’d expect.

I’m running a workload that simply builds the kernel sources with -j3. The 
Linux kernel sources are shared via virtiofs (no DAX), so as the build goes on, 
a lot of files are opened, closed, and created. The RSS of virtiofsd grows to 
several hundred MBs.
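
In case it’s useful, here’s roughly how I’m sampling the daemon’s RSS on the 
host. This is just a minimal sketch reading /proc; the pidof lookup is 
illustrative and assumes a single virtiofsd instance:

    #!/usr/bin/env python3
    # Minimal sketch: periodically print virtiofsd's VmRSS from /proc.
    # Assumes a single virtiofsd instance; the pidof lookup is illustrative.
    import subprocess
    import time

    pid = subprocess.check_output(["pidof", "virtiofsd"]).split()[0].decode()

    while True:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    print(line.strip())  # e.g. "VmRSS:  345678 kB"
        time.sleep(10)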

Taking a closer look, I suspect that virtiofsd is carrying out the opens but 
never actually closing the fds. In the guest, I’m seeing fd counts on the order 
of 10-40 across all the container processes as the build runs, whereas the 
number of fds held by virtiofsd keeps increasing, reaching over 80,000. I’m 
guessing this isn’t expected?
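
For the fd counts on the host side, I’m doing something like the sketch below; 
/proc/<pid>/fd has one entry per open descriptor, so listing it gives the count 
(again, the pidof lookup just assumes a single virtiofsd instance):

    #!/usr/bin/env python3
    # Minimal sketch: count virtiofsd's open fds by listing /proc/<pid>/fd.
    # Needs to run as root or as the same user as virtiofsd.
    import os
    import subprocess

    pid = subprocess.check_output(["pidof", "virtiofsd"]).split()[0].decode()
    nfds = len(os.listdir(f"/proc/{pid}/fd"))
    print(f"virtiofsd (pid {pid}): {nfds} open fds")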

Can any of y’all help shed some light, or point me at where I should look?

Thanks,
Eric

