Rich Freeman <[email protected]> writes:

> On Mon, Jan 18, 2016 at 9:45 PM, Alec Ten Harmsel
> <[email protected]> wrote:
>>
>> All Joost is saying is that most resources can be overcommitted, since
>> all the users will not be using all their resources at the same time.
>>
>
> Don't want to sound like a broken record, but this is precisely why
> containers are so attractive.  You can set hard limits wherever you
> want, but otherwise absolutely everything can be
over-committed/shared/etc. to the degree you desire.  They're just
> processes and namespaces and cgroups and so on.  You just have to be
> willing to live with whatever kernel is running on the host.  Of
> course, it isn't a solution for Windows, and there aren't any mature
> VDI-oriented solutions I'm aware of.  However, running as non-root in
> a container should be very secure so there is no reason it couldn't be
> done.  I just spun up a new container yesterday to test out burp
> (alas, ago beat me to the stablereq) and the server container is using
> all of 54M total / 3M RSS (some of that because I like to run sshd and
> so on inside).  I can afford to run a LOT of those.

Yes, I prefer containers over Xen and KVM.  They are easy to set up,
have essentially no overhead or noticeable performance impact, and
handing a device, like a network card, over to a container is easy
and painless.  Unfortunately, as you say, you can't use them when you
need Windows VMs.
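Handing a NIC over works because each container gets its own network
namespace, and a fresh namespace starts with nothing but an isolated
loopback device.  Here is a minimal sketch using util-linux's unshare
(this assumes unprivileged user namespaces are enabled on the host;
the "eth1" and PID names in the comment are placeholders):

```shell
# Create a throwaway user+network namespace and list its interfaces.
# Only an isolated loopback device exists inside -- which is why a host
# NIC moved into a container's namespace (on the host, as root:
#   ip link set eth1 netns <container-init-pid>)
# becomes exclusive to that container.
unshare --user --map-root-user --net ip link show
```

The same mechanism is what "ip link set <nic> netns <pid>" relies on
when you dedicate a card to a container.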

BTW, is it as easy to give a graphics card to a container as it is to
give it a network card?  What if you have a container for each user
who somehow logs in remotely to an X session?  Can you run X sessions
that have no console and no (dedicated) graphics card, just for users
logging in remotely?

Having a container for each user would be much less painful than
having a VM for each user.  That brings back the question of what to
use when you want to log in remotely to an X session ...
