On 2010. 09. 05. 0:58, Papp Tamás wrote:
> hi All,
>   

hi All again,

> 1. I have one more problem. I guest a hard lockup. I really don't know, 
>   

What I meant here is that I _got_ a hard lockup. :)

> why. There was no high load or any fs activity. I just ran 
> /etc/init.d/mailman start inside the VM and got an oops message on the 
> console. Unfortunately, after the reboot the logs were empty. Of course 
> I cannot reproduce it, or at least I hope so.
>   

Well, it turns out I can. It happened again: right after I start a 
container, I get a kernel panic. I am watching the console through a 
KVM; here is a screenshot:

[screenshot: kernel panic trace on the console]

Another shot:

[screenshot: second kernel panic trace]

Is this lxc- or cgroup-related, or something else?
The system is a brand new Proliant DL160 G6.
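
Since the logs are empty after every reboot, the panic text currently 
exists only on the screen. Next time I will try to catch the full trace 
over the network with netconsole; something like this should work 
(untested on this machine, and 10.1.1.1 is only a placeholder for a log 
host reachable on the same network):

  # on the crashing host: forward kernel messages over UDP to a log host
  modprobe netconsole netconsole=@/,6666@10.1.1.1/

  # on the log host: listen for the messages (traditional netcat syntax)
  nc -l -u -p 6666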

This is the lxc.conf:

lxc.utsname = test
lxc.tty = 4


lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth1
lxc.network.mtu = 1500
lxc.network.ipv4 = 10.1.1.219/16
lxc.network.hwaddr = AC:DD:22:63:22:22
lxc.network.veth.pair = veth118

lxc.rootfs = /data/lxc/test/rootfs
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rm
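
For reference, I start the container the usual way; with debug logging 
turned on, the lxc tools may record something right before the panic 
(assuming my lxc version supports these logging switches; the log path 
is arbitrary):

  # start the container named "test" with verbose lxc logging
  lxc-start -n test -l DEBUG -o /tmp/lxc-test.log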



Thank you,

tamas
