On 5/17/23 16:28, Matthias Petermann wrote:
This leads to the main reason for my question: apart from the VND nodes in /dev, which I can easily generate in sufficient numbers with MAKEDEV, what other limitations might I run into sooner with this approach than with one that needs fewer VND devices? An assessment would help me greatly; the setup in question is roughly 20-30 VMs on a system with 8GB of RAM (Dom0 = 512MB).

This reminds me of a limitation on Linux Dom0s with that kind of setup: one had to raise the number of available loop devices (e.g. `max_loop=128`).
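For reference, on such a Linux Dom0 that limit can be raised roughly like this (a sketch; the modprobe.d file name is an assumption, and the exact mechanism depends on whether loop is built as a module or into the kernel):

    # raise the loop device limit when loop is built as a module
    echo 'options loop max_loop=128' >> /etc/modprobe.d/loop.conf
    # with loop compiled into the kernel, pass max_loop=128 on the
    # kernel command line instead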

I suppose there is no comparable default limit with vnd. As for scalability, a simple script that configures a large number of vnd devices and then stress-tests a few of them with bonnie++ or dd should tell you whether it is sustainable; I don't see why it wouldn't be.
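Something along these lines, for example (an untested sketch; the file locations, sizes and the /dev/rvndXd device naming are assumptions for a NetBSD Dom0):

    # create device nodes beyond the default set if needed, e.g.
    # (cd /dev && sh ./MAKEDEV vnd4 vnd5)

    for i in 0 1 2 3; do
        # create a ~1 GB sparse backing file by seeking past the end
        dd if=/dev/zero of=/var/vm/test$i.img bs=1 count=1 seek=1073741823
        # attach it to a vnd unit
        vnconfig vnd$i /var/vm/test$i.img
    done

    # stress one of them with sequential writes through the raw device
    dd if=/dev/zero of=/dev/rvnd0d bs=1m count=512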

However, I wonder about the sparse-file situation. I heard vnd has been fixed and can now handle sparse files, but what about the underlying file system?
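One quick way to check that on the Dom0 file system (assuming the backing file path from the sketch above) is to compare the apparent file size with what is actually allocated on disk:

    # apparent size vs. blocks actually allocated
    ls -lh /var/vm/test0.img
    du -h  /var/vm/test0.img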

-elge
