Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-13 Thread Daniel Lezcano
On 09/13/2010 12:16 AM, Papp Tamás wrote:
 Papp Tamás wrote, On 2010. 09. 12. 23:18:

 hi!

 I also tried with qemu and no problem.

 I've just upgraded the box to Maverick, and after a short time it looks
 better. After 1 hour it's still up and working.

 I don't know if it helps.


Yes, that helps. At least we have some boundaries for the bug in the kernel.
I desperately tried to reproduce the problem on my host, with a
configuration similar to yours, but the bug didn't appear :(

It would be interesting if you could try the following:

  (1) try to reproduce the bug with all the NIC offloading capabilities
disabled
  (2) try with a macvlan configuration instead of veth+bridge
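For reference, the two suggestions above could be sketched like this against the setup described later in the thread (eth1 and the container addresses are Papp's; the exact ethtool feature flags available depend on the igb driver, so treat this as a sketch, not a recipe):

```shell
# (1) Disable NIC offloading on the physical interface feeding the bridge.
# Which flags exist depends on the driver; common ones are:
ethtool -K eth1 tso off gso off gro off sg off rx off tx off

# Check what is actually enabled afterwards:
ethtool -k eth1

# (2) macvlan instead of veth+bridge: in the container config, swap the
# veth/bridge lines for something like:
#
#   lxc.network.type = macvlan
#   lxc.network.link = eth1          # attach directly to the physical NIC
#   lxc.network.flags = up
#   lxc.network.ipv4 = 10.1.1.219/16
#
# Caveat: with plain macvlan the host cannot reach the container through
# eth1; only external hosts can.
```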

Thanks
   -- Daniel



--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing
http://p.sf.net/sfu/novell-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-13 Thread Daniel Lezcano
On 09/13/2010 10:15 AM, Ferenc Holzhauser wrote:
 On 13 September 2010 00:16, Papp Tamás <tom...@martos.bme.hu> wrote:

 Papp Tamás wrote, On 2010. 09. 12. 23:18:

 hi!

 I also tried with qemu and no problem.


 I've just upgraded the box to Maverick, and after a short time it looks
 better. After 1 hour it's still up and working.

 I don't know if it helps.

 tamas


 Sorry for the delay. Here is the requested information from my side (stopped
 qemu and started a container).

Thanks Ferenc !



Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-10 Thread Ferenc Holzhauser
On 10 September 2010 09:48, Daniel Lezcano <daniel.lezc...@free.fr> wrote:
 On 09/09/2010 11:22 PM, Papp Tamás wrote:

 Daniel Lezcano wrote, On 2010. 09. 05. 22:30:

 Well, I can. Now again, right after I start a container I get the
 kernel
 panic. I see the console through a KVM, this is a screenshot:

 Papp, I suppose you attached an image but I don't see it. Is it
 possible to resend?
 I increased the message body size limit to 128KB in the mailing list settings.

 hi Daniel,

 Have you checked the screenshots?

 Yes, thanks for sending the screenshots.
 This is a critical bug :s

 It seems you didn't face this problem before, right? Did you change
 something in your configuration?
 Do you have the same problem with a 2.6.32-23-server kernel? Are you
 able to reproduce this bug with qemu?

 Thanks
   -- Daniel





Hi,

I might be able to answer that question:
From the screenshots this appears to be the same issue that I (and at
least one other user) have experienced and reported earlier on the list.
I've created a qemu-kvm VM on the server where I had this issue. I
cannot reproduce the problem anymore, either in this qemu VM or in an LXC
container INSIDE that VM.

Ferenc



Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-10 Thread Papp Tamás

Daniel Lezcano wrote, On 2010. 09. 10. 9:48:
 It seems you didn't face this problem before, right ? Did you change 
 something in your configuration ?

Right, I didn't. I haven't changed anything in the config, because
this is a brand new install. The host machine is absolutely virgin.

 Do you have the same problem with a 2.6.32-23-server kernel ? Are you 
 able to reproduce this bug with qemu ?

Yes, I definitely do.

I haven't tried qemu. Should I?

tamas
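[For anyone wanting to try the qemu route, a minimal qemu-kvm invocation could look like this; the disk image path is hypothetical, and the point is simply to boot the same Lucid kernel on emulated hardware and see whether the crash follows the kernel or the NIC:]

```shell
# Hypothetical image containing the same Lucid install / kernel:
qemu-system-x86_64 -enable-kvm -m 1024 \
    -hda /var/lib/images/lucid-test.img \
    -net nic,model=e1000 -net user
```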



Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-10 Thread Papp Tamás

Daniel Lezcano wrote, On 2010. 09. 10. 13:03:
 Yep, sounds like the same problem. Let's look at this closely ...

 1) Papp is using a 2.6.32-24-server kernel  =>  kernel crash

 Physical interfaces:
 ===

05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)



 Interfaces configuration:
 

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
auto br0
iface br0 inet static
address x.x.x.120
netmask 255.255.255.0
gateway x.x.x.254
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

auto eth1
auto br1
iface br1 inet static
address 10.1.1.120
netmask 255.0.0.0
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
up route add -net 192.168.1.0/24 gw 10.1.3.254

 Ifconfig:
 

br0   Link encap:Ethernet  HWaddr 78:e7:d1:60:ed:24
  inet addr:x.x.x.120  Bcast:x.x.x.255  Mask:255.255.255.0
  inet6 addr: fe80::7ae7:d1ff:fe60:ed24/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1353 errors:0 dropped:0 overruns:0 frame:0
  TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:79293 (79.2 KB)  TX bytes:12120 (12.1 KB)

br1   Link encap:Ethernet  HWaddr 78:e7:d1:60:ed:25
  inet addr:10.1.1.120  Bcast:10.255.255.255  Mask:255.0.0.0
  inet6 addr: fe80::7ae7:d1ff:fe60:ed25/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:629 errors:0 dropped:0 overruns:0 frame:0
  TX packets:276 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:70627 (70.6 KB)  TX bytes:58265 (58.2 KB)

eth0  Link encap:Ethernet  HWaddr 78:e7:d1:60:ed:24
  inet6 addr: fe80::7ae7:d1ff:fe60:ed24/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:2183 errors:0 dropped:0 overruns:0 frame:0
  TX packets:148 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:100
  RX bytes:492024 (492.0 KB)  TX bytes:12588 (12.5 KB)
  Memory:fbe6-fbe8

eth1  Link encap:Ethernet  HWaddr 78:e7:d1:60:ed:25
  inet6 addr: fe80::7ae7:d1ff:fe60:ed25/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1672 errors:0 dropped:0 overruns:0 frame:0
  TX packets:280 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:420358 (420.3 KB)  TX bytes:58361 (58.3 KB)
  Memory:fbee-fbf0

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:21 errors:0 dropped:0 overruns:0 frame:0
  TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:2063 (2.0 KB)  TX bytes:2063 (2.0 KB)

 bridge info:
 ===



bridge name bridge id   STP enabled interfaces
br0 8000.78e7d160ed24   no  eth0
br1 8000.78e7d160ed25   no  eth1


 Lxc configuration:
 =

 lxc.utsname = test
 lxc.tty = 4
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br1
 lxc.network.name = eth1
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 10.1.1.219/16
 lxc.network.hwaddr = AC:DD:22:63:22:22
 lxc.network.veth.pair = veth118

 lxc.rootfs = /data/lxc/test/rootfs
 lxc.cgroup.devices.deny = a
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 lxc.cgroup.devices.allow = c 254:0 rm

 2) Daniel tried on a 2.6.32-23-server kernel  =>  no problem

 Physical interfaces:
 ===

 05:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit 
 Ethernet Controller (Copper) (rev 01)
 05:00.1 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit 
 Ethernet Controller (Copper) (rev 01)

 Interfaces configuration:
 

 auto lo
 iface lo inet loopback

 auto eth0
 iface eth0 inet dhcp




 auto br0
 iface br0 inet static
 address 172.20.0.1
 netmask 255.255.0.0
 bridge_stp off
 bridge_maxwait 5
 pre-up  /usr/sbin/brctl addbr br0
 post-up /usr/sbin/brctl setfd br0 0
 post-up /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
 post-up 

Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-09 Thread Papp Tamás

Daniel Lezcano wrote, On 2010. 09. 05. 22:30:
  
 Well, I can. Now again, right after I start a container I get the kernel
 panic. I see the console through a KVM, this is a screenshot:


 Papp, I suppose you attached an image but I don't see it. It is 
 possible to resend.
 I increased the message body size limit to 128KB to the mailing setting.

hi Daniel,

Have you checked the screenshots?
Do you or anybody else have any idea?

Thank you,

tamas



Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-05 Thread Papp Tamás

Papp Tamás wrote, On 2010. 09. 05. 0:58:
 hi All,
   

hi All again,

 1. I have some more problem. I guest a hard lockup. I really don't know, 
   

I meant here I _GOT_ a hard lockup:)

 why. There was no high load or any fs activity. I just run 
 /etc/init.d/mailman start inside the VM and got an oops message on the 
 console. Unfortunately after the reboot the logs were empty. Sure I 
 cannot reproduce it, at least I hope.
   

Well, I can. Now again, right after I start a container I get the kernel
panic. I see the console through a KVM, this is a screenshot:

[screenshot attachment]

Another shot:

[screenshot attachment]

Is this lxc or cgroup related, or something else?
The system is a brand new ProLiant DL160 G6.

This is the lxc.conf:

lxc.utsname = test
lxc.tty = 4


lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth1
lxc.network.mtu = 1500
lxc.network.ipv4 = 10.1.1.219/16
lxc.network.hwaddr = AC:DD:22:63:22:22
lxc.network.veth.pair = veth118

lxc.rootfs = /data/lxc/test/rootfs
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rm
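[For readers decoding the whitelist above, the major:minor pairs correspond to the standard Linux character device numbers; a reference sketch, not part of the original mail:]

```shell
# Standard char device numbers behind the lxc.cgroup.devices.allow lines:
#   c 1:3    /dev/null          c 1:5    /dev/zero
#   c 5:1    /dev/console       c 5:0    /dev/tty
#   c 4:0    /dev/tty0          c 4:1    /dev/tty1
#   c 1:9    /dev/urandom       c 1:8    /dev/random
#   c 136:*  /dev/pts/*         c 5:2    /dev/ptmx
#   c 254:0  typically /dev/rtc0 (read/mknod only here)
```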



Thank you,

tamas
