[Bug 1437846] Re: akonadi mysql 5.6 crash with signal 11
I've had similar problems. It seems to me that the default Akonadi options for MySQL include a table_cache parameter, but it has been deprecated since 5.1, and in 5.6 it prevents the database from starting. Removing this option did the trick for me. The MariaDB documentation remarks that "all versions of MariaDB are based on MySQL 5.1 and greater, thus the table_cache option is deprecated in favor of table_open_cache."

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to mysql-5.6 in Ubuntu.
https://bugs.launchpad.net/bugs/1437846

Title:
  akonadi mysql 5.6 crash with signal 11

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/akonadi/+bug/1437846/+subscriptions

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
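If it helps anyone, the option rename above can be scripted. This is a minimal sketch under the assumption that the option lives in Akonadi's global MySQL config file; the path below is a guess, so check where your distribution actually ships it:

```shell
# Hedged sketch: rename the deprecated table_cache option to
# table_open_cache in place, keeping its value. The config path is
# an assumption -- adjust it to your Akonadi MySQL config location.
CONF="${1:-/etc/xdg/akonadi/mysql-global.conf}"

# Keep a .bak backup, then rename the option while preserving its value.
sed -i.bak 's/^table_cache\([ =]\)/table_open_cache\1/' "$CONF"

# Show what the file now contains for that option.
grep -n 'table_open_cache' "$CONF" || echo "option not present"
```

After editing, `akonadictl restart` should pick up the new config.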
[Bug 1484682] Re: memory leak in xl
The problems described in the first part of Debian's bug report were gone after the update, although the pseudo-leak explained in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=767295#75 (and a few following messages) still exists. As far as I understand, the second part of the bug report describes problems that lead to some inconvenient and confusing messages while creating/rebooting domUs when memory ballooning is disabled, but they aren't very grave in general (although patches are available in the Debian repository; as they are specific to glibc, they have not been pushed upstream, I suppose).

In general, where ballooning is disabled, the kernel complains a few times (about twenty, to be honest) during domU creation/restart:

  xen:balloon: Cannot add additional memory (-17)

Enabling ballooning, for example with dom0_mem=1536M,max:2048M, fixes the problem, although it may be troublesome in environments with a very small amount of memory assigned to dom0 (as stated in the mentioned thread in Debian's bug tracker).

** Bug watch added: Debian Bug tracker #767295
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=767295
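On Ubuntu, the dom0_mem workaround mentioned above is typically applied through GRUB. A sketch, assuming your host uses the usual GRUB_CMDLINE_XEN_DEFAULT variable in /etc/default/grub (both the file and the variable name are assumptions to verify on your system):

```shell
# Sketch of the ballooning workaround: give dom0 a boot allocation
# and a maximum so the balloon driver has headroom. File path and
# variable name are assumptions -- verify them on your host.
GRUB_FILE="${1:-/etc/default/grub}"

# Prepend dom0_mem=1536M,max:2048M to the Xen hypervisor command line.
sed -i.bak 's/^GRUB_CMDLINE_XEN_DEFAULT="/GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1536M,max:2048M /' "$GRUB_FILE"

grep '^GRUB_CMDLINE_XEN_DEFAULT=' "$GRUB_FILE"
# Remember to run update-grub and reboot afterwards.
```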
[Bug 1484682] [NEW] memory leak in xl
Public bug reported:

With xen-utils-4.4 (4.4.1-0ubuntu0.14.04.6) I ran into a problem that looks like the one described in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=767295 (already fixed in Debian). Memory is leaked after every domU start and restart, exactly like in: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=767295#35

Additional information:

  # uname -a
  Linux mewa 3.13.0-61-generic #100-Ubuntu SMP Wed Jul 29 11:21:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  # cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=14.04
  DISTRIB_CODENAME=trusty
  DISTRIB_DESCRIPTION=Ubuntu 14.04.3 LTS

** Affects: xen (Ubuntu)
   Importance: Undecided
   Status: New
[Bug 1484682] Re: memory leak in xl
It looks like the patches are already included upstream and 4.4.2 is already fixed: http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=30f10d4d2b102bd7184b84c9cc3d2246f060706a
[Bug 1211722] Re: VM crashes on Ubuntu 12.04
Hi,

In the meantime I upgraded kvm to 1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu14.11, but the problem occurred again. I hope this will be helpful:

0x7f19d124ae3a dma_complete+42: 1887144776 1962902856 2083055629 -951517190
0x7f19d124ae4a dma_complete+58: 28739 2071986176 175374409 147096392
0x7f19d124ae5a dma_complete+74: 264461659 -2092433377 -1991767868 -379757601
0x7f19d124ae6a dma_complete+90: -296174 1213436006 -1958151287 -2058866561
0x7f19d125bc3a ide_bus_reset+74: -394032312 1207959558 3162311 1207959552
0x7f19d125bc4a ide_bus_reset+90: 115905419 -1958215680 -678868990 1082869851
0x7f19d125bc5a ide_bus_reset+106: 266403648 -1991770081 1222124636 -534483831
0x7f19d125bc6a ide_init_drive+10: 1291553096 -400268151 611092812 -175552272
0x7f19d125d031 piix3_reset+33: 1488817480 -402653173 -5197 279494
0x7f19d125d041 piix3_reset+49: 345030 -2147073082 34030534 18891718
0x7f19d125d051 piix3_reset+65: 610044744 1821067272 -2092429276 1103304900
0x7f19d125d061 pci_piix3_xen_ide_unplug+1: -58111660 -617524395 29351561 -402653184
0x7f19d12802ed qemu_system_reset+45: 1977320776 -310099730 -164557708 447
0x7f19d12802fd qemu_system_reset+61: -486676480 -2092433398 1566247108 128758505
0x7f19d128030d qemu_system_reset+77: 1217422848 -1962349437 -1939055099 1975551232
0x7f19d128031d qemu_system_reset_request+13: -335165662 16813165 -402653184 505507
0x7f19d11ed944 main+5284: 283027432 1426426624 -2097130423 158597880
0x7f19d11ed954 main+5300: 252442755 -201595 311295 -370671616
0x7f19d11ed964 main+5316: -385873635 -8033007 -525670400
0x7f19d11ed974 main+5332: -1226244080 1224736764 -1924601463 516264725
0x7f19cd8ca76d __libc_start_main+237: -51853431 822084001 -12654126 -1958150145
0x7f19cd8ca77d __libc_start_main+253: 969551365 -926857216 860382225 3155204
0x7f19cd8ca78d __libc_start_main+269: -788594688 134581064 1207974346 1678887105
0x7f19cd8ca79d __libc_start_main+285: 621032264 48 252248048 -1070480748
0x7f19d11f0d3d _start+41: 1217433844 1208544387 1393362315 -2058878894
0x7f19d11f0d4d call_gmon_start+13: -16616256 -998029104 -1869561080 -1869574000
0x7f19d11f0d5d call_gmon_start+29: 1435537552 802700672 1207959637 1413604745
0x7f19d11f0d6d __do_global_dtors_aux+13: 1214412115 1325415811 1946157138 1032538124

Regards
[Bug 1211722] Re: VM crashes on Ubuntu 12.04
Thank you. I found that some packets were dropped on an interface working as a bridge port. The same happens on the second server. I don't know if this may have an impact on the crash. You can find this in the logs I attached.

Regards

** Attachment added: Logs from affected server
   https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1211722/+attachment/3785073/+files/logs.tar.gz
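For anyone else chasing this, a quick way to watch the kernel's drop counters for a bridge port is to read them from sysfs. The interface name below is just a placeholder; substitute your own:

```shell
# Print the kernel's drop/error counters for a given interface.
# "eth0" is a placeholder -- pass your bridge port's name as the
# first argument. These counters live in sysfs on any modern Linux.
IF="${1:-eth0}"
for stat in rx_dropped tx_dropped rx_errors tx_errors; do
  printf '%s: %s\n' "$stat" "$(cat /sys/class/net/"$IF"/statistics/"$stat")"
done
```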
[Bug 1211722] [NEW] VM crashes on Ubuntu 12.04
Public bug reported:

One of my VMs just crashed. I didn't perform any specific action connected with this VM. Below are the details of my environment:

Ubuntu 12.04
  ii kvm       1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu14.10  dummy transitional package from kvm to qemu-kvm
  ii qemu-kvm  1.0+noroms-0ubuntu14.10                      Full virtualization on i386 and amd64 hardware
  Linux vdev2 3.5.0-34-generic #55~precise1-Ubuntu SMP Fri Jun 7 16:25:50 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

In /var/log/libvirtd/qemu I found this:

2013-08-13 09:34:56.103+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name one-455 -uuid a79ad8d6-bf48-f88c-b4a3-844aa5d389dc -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-455.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/one//datastores/0/455/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/var/lib/one//datastores/0/455/disk.2,if=none,media=cdrom,id=drive-ide0-0-1,readonly=on,format=raw -device ide-drive,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive file=/var/lib/one//datastores/0/455/disk.1,if=none,id=drive-ide0-1-0,format=raw,cache=none -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=18,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=02:00:c0:a8:fb:1f,bus=pci.0,addr=0x3 -netdev tap,fd=20,id=hostnet1 -device e1000,netdev=hostnet1,id=net1,mac=02:00:0f:0f:0f:21,bus=pci.0,addr=0x4 -netdev tap,fd=29,id=hostnet2 -device e1000,netdev=hostnet2,id=net2,mac=02:00:c0:a8:05:20,bus=pci.0,addr=0x5 -usb -vnc 0.0.0.0:455 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
kvm: -device e1000,netdev=hostnet0,id=net0,mac=02:00:c0:a8:fb:1f,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile pxe-e1000.rom
kvm: -device e1000,netdev=hostnet1,id=net1,mac=02:00:0f:0f:0f:21,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile pxe-e1000.rom
kvm: -device e1000,netdev=hostnet2,id=net2,mac=02:00:c0:a8:05:20,bus=pci.0,addr=0x5: pci_add_option_rom: failed to find romfile pxe-e1000.rom

*** glibc detected *** /usr/bin/kvm: double free or corruption (!prev): 0x7f8f1fe1f2a0 ***

=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f8f31c27b96]
/usr/bin/kvm(+0xcae3a)[0x7f8f3554ae3a]
/usr/bin/kvm(+0xdbc3a)[0x7f8f3555bc3a]
/usr/bin/kvm(+0xdd03d)[0x7f8f3555d03d]
/usr/bin/kvm(+0x1002ed)[0x7f8f355802ed]
/usr/bin/kvm(main+0x14a4)[0x7f8f354ed944]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f8f31bca76d]
/usr/bin/kvm(+0x70d3d)[0x7f8f354f0d3d]

=== Memory map:
7f8edbdff000-7f8edbe0 rw-p 00:00 0
7f8edbe0-7f8f1be0 rw-p 00:00 0
7f8f1be0-7f8f1c00 rw-p 00:00 0
7f8f1c00-7f8f1ff2 rw-p 00:00 0
7f8f1ff2-7f8f2000 ---p 00:00 0
7f8f21a3a000-7f8f21a3b000 ---p 00:00 0
7f8f21a3b000-7f8f2233c000 rw-p 00:00 0 [stack:84344]
7f8f22469000-7f8f2246d000 r-xp fc:00 2234500 /usr/lib/x86_64-linux-gnu/sasl2/libcrammd5.so.2.0.25
7f8f2246d000-7f8f2266d000 ---p 4000 fc:00 2234500 /usr/lib/x86_64-linux-gnu/sasl2/libcrammd5.so.2.0.25
7f8f2266d000-7f8f2266e000 r--p 4000 fc:00 2234500 /usr/lib/x86_64-linux-gnu/sasl2/libcrammd5.so.2.0.25
7f8f2266e000-7f8f2266f000 rw-p 5000 fc:00 2234500 /usr/lib/x86_64-linux-gnu/sasl2/libcrammd5.so.2.0.25
7f8f2266f000-7f8f227db000 r-xp fc:00 2097379 /usr/lib/x86_64-linux-gnu/libdb-5.1.so
7f8f227db000-7f8f229db000 ---p 0016c000 fc:00 2097379 /usr/lib/x86_64-linux-gnu/libdb-5.1.so
7f8f229db000-7f8f229e1000 r--p 0016c000 fc:00 2097379 /usr/lib/x86_64-linux-gnu/libdb-5.1.so
7f8f229e1000-7f8f229e2000 rw-p 00172000 fc:00 2097379 /usr/lib/x86_64-linux-gnu/libdb-5.1.so
7f8f229f-7f8f229f5000 r-xp fc:00 2234431 /usr/lib/x86_64-linux-gnu/sasl2/libsasldb.so.2.0.25
7f8f229f5000-7f8f22bf4000 ---p 5000 fc:00 2234431 /usr/lib/x86_64-linux-gnu/sasl2/libsasldb.so.2.0.25
7f8f22bf4000-7f8f22bf5000 r--p 4000 fc:00 2234431 /usr/lib/x86_64-linux-gnu/sasl2/libsasldb.so.2.0.25
7f8f22bf5000-7f8f22bf6000 rw-p 5000 fc:00 2234431 /usr/lib/x86_64-linux-gnu/sasl2/libsasldb.so.2.0.25
7f8f22bf6000-7f8f22bfe000 r-xp fc:00 2234496 /usr/lib/x86_64-linux-gnu/sasl2/libntlm.so.2.0.25
7f8f22bfe000-7f8f22dfd000 ---p 8000 fc:00 2234496
[Bug 1211722] Re: VM crashes on Ubuntu 12.04
Thanks for your help. Below are the details you asked for:

1. We use bridged networking. The three network adapters attached to the VM use the same host bridge interface. There are no errors on this network adapter (e1000e).
2. The VM was running a Linux-based OS (a customized Debian distribution).
3. We used these VMs to test an HA cluster, so there was Heartbeat/iSCSI/drbd traffic (a quite intensive network test). VMs involved in the cluster were set up on different physical hosts.
4. The VM had been running for almost 4 days, I suppose, before the crash.

I also looked in libvirtd.log and found this (maybe it can help):

2013-08-13 09:31:12.654+: 3093: info : libvirt version: 0.9.8
2013-08-13 09:31:12.654+: 3093: error : qemuMonitorIORead:513 : Unable to read from monitor: Connection reset by peer
2013-08-13 09:50:59.524+: 3093: error : qemuMonitorIORead:513 : Unable to read from monitor: Connection reset by peer
2013-08-13 10:31:18.026+: 3093: error : qemuMonitorIO:603 : internal error End of file from monitor

Regards

** Attachment added: qemu log
   https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1211722/+attachment/3774923/+files/one-455.log
[Bug 564355] Re: Second euca-run-instance request in same security group causes eucalyptus to remove network associated with security group
Not sure about logs from all Eucalyptus components; this problem seems to be centered in the cluster controller code. I think the telling log lines I've seen after the second euca-run-instances command are:

[Thu Apr 15 14:29:46 2010][001328][EUCAINFO ] RunInstances(): called
[Thu Apr 15 14:29:46 2010][001328][EUCAERROR ] vnetAddHost(): failed to add host d0:0d:3B:E6:07:11 on vlan 10
[Thu Apr 15 14:29:46 2010][001328][EUCAERROR ] RunInstances(): could not find/initialize any free network address, failing doRunInstances()

Once the cluster controller fails to issue network addresses for the new instances, it doesn't bother to farm them out to the node controllers; those instances are never started on any of the NCs. It almost seems like the cluster controller forgets about the available network addresses on a given network and won't allocate addresses for new instances.

The most distressing thing is (and this doesn't happen every time) that the network associated with a given security group is deallocated by the cluster controller: its rule chain is removed from iptables, and I've even seen other users get issued the same slice of network addresses for their new security groups, all while instances in the old security group are still in a running state.

I can confirm Aimon's comment. We have seen this behavior with ADDRSPERNET set to 256, 128, and 64.
[Bug 564355] Re: Second euca-run-instance request in same security group causes eucalyptus to remove network associated with security group
I wanted to add that we have since upgraded to 1.6.2-0ubuntu30.3 and still witness this behavior regularly.
[Bug 568108] [NEW] getSecretKey() in euca_conf uses unanchored regex to find admin credentials
Public bug reported:

When the function getSecretKey() in euca_conf tries to set SKEY and AKEY, it uses an unanchored regex with awk that can cause it to select the credentials of any user with the word "admin" in their login name. I imagine the intent was to select the 'admin' user, but the way the code is written the regex could match 'sadminer', for instance, who may or may not have admin credentials.

This problem manifested when we created some accounts named jdoe_admin. Even though jdoe_admin was marked as an Administrator, since there were no credentials in the database (the user had not retrieved their credentials.zip), euca_conf requests started to fail on the machine.

The offending lines seem to be:

  SKEY=$(eval echo $(awk -v field=${FIELD} -F, '/INSERT INTO AUTH_USERS.*admin/ {print $field}' ${DBDIR}/*auth* | head -n 1))
  AKEY=$(eval echo $(awk -v field=${FIELD} -F, '/INSERT INTO AUTH_USERS.*admin/ {print $field}' ${DBDIR}/*auth* | head -n 1))

Since the usernames in the files are surrounded by single quotes, the following fix seemed to work for us. Replace:

  '/INSERT INTO AUTH_USERS.*admin/ {print $field}'

With:

  /INSERT INTO AUTH_USERS.*'admin'/ {print \$field}

Not sure if that is the best solution. Thanks!

** Affects: eucalyptus (Ubuntu)
   Importance: Undecided
   Status: New
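To illustrate the over-match with fabricated sample rows (the VALUES layout below is invented for the demo, not copied from a real auth file): the unanchored pattern also matches a jdoe_admin row, while the pattern with the quoted username selects only the real 'admin' row.

```shell
# Demo of the over-matching regex, using made-up AUTH_USERS rows.
cat > /tmp/auth_demo <<'EOF'
INSERT INTO AUTH_USERS VALUES('admin','AKEY1','SKEY1')
INSERT INTO AUTH_USERS VALUES('jdoe_admin','AKEY2','SKEY2')
EOF

# Unanchored pattern: matches both rows, because "admin" appears
# inside "jdoe_admin" too.
echo "unanchored pattern matches:"
awk -F, "/INSERT INTO AUTH_USERS.*admin/ {print \$1}" /tmp/auth_demo

# Quoted pattern: only the row whose username is exactly 'admin'
# (surrounded by single quotes) matches.
echo "pattern with quoted username matches:"
awk -F, "/INSERT INTO AUTH_USERS.*'admin'/ {print \$1}" /tmp/auth_demo
```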
[Bug 564355] Re: Second euca-run-instance request in same security group causes eucalyptus to remove network associated with security group
I was able to repeat this behavior with ADDRSPERNET set to 128. The system seems more prone to this behavior when a user requests large numbers of VMs in a security group and then attempts to add more. Not sure if this bug manifests based on the size of the requests or on how many IPs are already allocated in a given security group.
[Bug 566715] [NEW] GREEDY scheduling policy occasionally over-subscribes nodes (more VMs than cores)
Public bug reported:

We are running eucalyptus 1.6.2-0ubuntu27 on lucid beta1. I will retest this bug on the latest-and-greatest as soon as that is feasible on our cluster.

We have been running many tests with the ROUNDROBIN scheduling policy. As a test I changed it to GREEDY a little over a week ago. The scheduling policy seems to work as expected, except that occasionally, when servicing large requests, the cluster controller will ask a node to run more VMs than it has cores. We are using kvm. When making a large request (say 100 VMs), it seems that invariably one of the nodes used to run the machines will be over-subscribed. Our machines have 8 cores each, and I often see 9 VMs; occasionally I've seen as many as 12.

I have checked that the machines with extra VMs did not have hyper-threading enabled and were therefore reporting the correct number of cores to Eucalyptus, according to the /var/log/eucalyptus/euca_test_nc.log file on each system.

** Affects: eucalyptus (Ubuntu)
   Importance: Undecided
   Status: New
[Bug 564355] [NEW] Second euca-run-instance request in same security group causes eucalyptus to remove network associated with security group
Public bug reported:

We are running eucalyptus 1.6.2-0ubuntu27 on lucid beta1 in MANAGED-NOVLAN mode. I will retest as soon as is feasible with ubuntu30, but as I see no mention of this issue/fix in the changelog, I wanted to get the information into your hands.

Eucalyptus has trouble allocating additional VMs to existing security groups in some cases. I tried several tests and saw very similar results. Eucalyptus allows you to request VMs in a given security group. Once all the VMs are running, an additional euca-run-instances request for that security group will fail, and in some cases the network associated with that security group will be removed from iptables (even if there are running VMs within that security group). The network that was freed up can be re-allocated to another security group, but new VMs requested in that security group fail with the same "failed to add host" message.

---

A typical cycle looks like this (command line interspersed with snippets of cc.log):

$ euca-run-instances -n 250 -g default …

[Thu Apr 15 14:14:51 2010][001325][EUCAINFO ] StartNetwork(): called
[Thu Apr 15 14:14:51 2010][001324][EUCAINFO ] ConfigureNetwork(): called
[Thu Apr 15 14:14:51 2010][001324][EUCAINFO ] vnetTableRule(): applying iptables rule: -A user-default -s 0.0.0.0/0 -d 10.0.8.0/24 -p tcp --dport 22:22 -j ACCEPT
[Thu Apr 15 14:14:51 2010][001327][EUCAINFO ] RunInstances(): called

# …proceeds to run 250 instances successfully…

$ euca-run-instances -n 1 -g default …

[Thu Apr 15 14:29:46 2010][001376][EUCAINFO ] StartNetwork(): called
[Thu Apr 15 14:29:46 2010][001368][EUCAINFO ] ConfigureNetwork(): called
[Thu Apr 15 14:29:46 2010][001368][EUCAINFO ] vnetTableRule(): applying iptables rule: -A user-default -s 0.0.0.0/0 -d 10.0.8.0/24 -p tcp --dport 22:22 -j ACCEPT
[Thu Apr 15 14:29:46 2010][001328][EUCAINFO ] RunInstances(): called
[Thu Apr 15 14:29:46 2010][001328][EUCAERROR ] vnetAddHost(): failed to add host d0:0d:3B:E6:07:11 on vlan 10
[Thu Apr 15 14:29:46 2010][001328][EUCAERROR ] RunInstances(): could not find/initialize any free network address, failing doRunInstances()

# …after 15 minutes the instance goes to terminated and TerminateInstance() is called many times (once per NC?)…

[Thu Apr 15 14:39:51 2010][005458][EUCAERROR ] ERROR: TerminateInstance() could not be invoked (check NC host, port, and credentials)
[Thu Apr 15 14:39:51 2010][001326][EUCAINFO ] TerminateInstances(): calling terminate instance (i-3BE60711) on (192.168.1.2)
[Thu Apr 15 14:39:51 2010][005459][EUCAERROR ] ERROR: TerminateInstance() could not be invoked (check NC host, port, and credentials)
[Thu Apr 15 14:39:51 2010][001326][EUCAINFO ] TerminateInstances(): calling terminate instance (i-3BE60711) on (192.168.1.3)
[Thu Apr 15 14:39:51 2010][005460][EUCAERROR ] ERROR: TerminateInstance() could not be invoked (check NC host, port, and credentials)
[Thu Apr 15 14:39:51 2010][001326][EUCAINFO ] TerminateInstances(): calling terminate instance (i-3BE60711) on (192.168.1.4)
[Thu Apr 15 14:39:51 2010][005461][EUCAERROR ] ERROR: TerminateInstance() could not be invoked (check NC host, port, and credentials)

# …it then removes the network allocated for the user's default security group even though there are 250 running VMs!…

[Thu Apr 15 14:40:00 2010][001328][EUCAINFO ] StopNetwork(): called

# iptables shows that the chain user-default has disappeared!

---

I tried many different combinations of numbers of nodes, etc. (ADDRSPERNET is 256):

- 250 + 1 additional (the 1 additional failed, network was removed and VMs are inaccessible)
- 100 + 1 additional (the 1 additional failed, network was removed and VMs are inaccessible)
- 20 + 20 additional (the 20 additional failed, network was removed and VMs are inaccessible)

I did have some success adding 10 or 20 nodes at a time to existing security groups. One security group grew to 80 nodes before I received the "failed to add host" messages. It seemed I was more successful when I was making requests rapidly (waiting only a few minutes between requests) rather than waiting for all the nodes to allocate in a given reservation.

I am at a loss as to the exact cause, because some security groups are allowed to expand while others are cut off from receiving additional IPs well before they reach ADDRSPERNET.

** Affects: eucalyptus (Ubuntu)
   Importance: Undecided
   Status: New
[Bug 253268] Re: php5-cgi not working with suphp in Hardy
This problem can be fixed (at least in Jaunty) by changing application/x-httpd-php to application/x-httpd-suphp in {/etc/suphp/,/etc/apache2/mods-available/}suphp.conf. This solution was described in Debian bug report #519005.

** Bug watch added: Debian Bug tracker #519005
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=519005

** Changed in: suphp (Debian)
   Status: New => Unknown

** Changed in: suphp (Debian)
   Remote watch: Debian Bug tracker #477646 => Debian Bug tracker #519005
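The change above can be scripted. A sketch, assuming the two stock config locations named in the comment (back up first; the paths may differ on your install):

```shell
# Sketch of the Debian #519005 fix: swap the handler MIME type in
# both suphp config files. The paths come from the comment above;
# adjust if your layout differs. A .bak backup is kept.
for f in /etc/suphp/suphp.conf /etc/apache2/mods-available/suphp.conf; do
  [ -f "$f" ] || continue
  sed -i.bak 's|application/x-httpd-php|application/x-httpd-suphp|g' "$f"
done
```

Reload Apache afterwards (e.g. `sudo service apache2 reload`) for the new handler type to take effect.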