Daniel Smith (2011-11-21 18:49:25 -0500) wrote:

> As I recall, the following steps would reliably lock up the system (and
> really should not be allowed to happen in the first place). Say we have a
> host (host) and a container (cont1) with namespace PID 252.
>  1. In host: iw phy0 set netns 252
>  2. In cont1: iw phy0 interface add wlan0 type managed
>  3. In cont1: ip l s wlan0 up
>  4. In host: iw phy0 interface add wlan0 type monitor
>  5. In host: ip l s wlan0 up
>  6. In cont1: halt
>  7. system locked
> 
> This was on a stock Ubuntu 11.04 system. I have been busy with other
> problems so I have never gotten back to do a deeper analysis of the
> issue.

I've tried to repeat that, and adding an interface from the host makes it
appear in the container only, as it should.  Of course, it needs a different
name, otherwise you get a "command failed: Too many open files in system
(-23)".  I guess that problem should be considered fixed under kernel 3.1.0
+ LXC 0.7.5.
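
For reference, here's a minimal sketch of the sequence that works here,
reusing the PID and interface names from your recipe above ("mon0" is just
an example of a non-clashing name; adjust everything for your setup):

  # In host: move the whole PHY into the container's namespace.
  iw phy0 set netns 252
  # In cont1: create and bring up a managed interface on the PHY.
  iw phy0 interface add wlan0 type managed
  ip link set wlan0 up
  # In host: this works too, but the new interface only shows up inside
  # cont1, and it needs a name other than wlan0 or the command fails
  # with the -23 error above.
  iw phy0 interface add mon0 type monitor
  # In cont1, where mon0 actually appeared:
  ip link set mon0 up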

> > However, I think this should be automatically set up by the
> > "lxc.network.type = phys" option, so if no one has a clear reason why
> > this isn't supported I'll file a new issue in the tracker.
> 
> Should this really be under the phys type? Right now, if I am not
> mistaken, phys is used to denote a physical network interface, e.g.
> eth0; in this case the device is a PHY, which could have one or more
> network interfaces.

Yeah, that sounds logical.  Anyway, I'll be filing a wishlist item so that
moving a whole WiFi PHY into a container can be specified from the config
file, since using "lxc.network.link = phy0" doesn't work either; see the
sketch below.

Thank you very much!
-- 
Ivan Vilata i Balaguer -- https://elvil.net/
