On Mon, 2014-05-26 at 14:18 +0700, Fajar A. Nugraha wrote:
> On Mon, May 26, 2014 at 2:03 PM, Timotheus Pokorra
> <[email protected]> wrote:
>
>     Hello Federico,
>
>     That is strange. I tried now on my old laptop, which runs Ubuntu
>     14.04, and got the same error:
>
>     <30>systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
>     <30>systemd[1]: Starting Root Slice.
>     <27>systemd[1]: Caught <SEGV>, dumped core as pid 11.
>     <30>systemd[1]: Freezing execution.
>
>     My kernel is:
>
>     uname -a
>     Linux timotheusp-LIFEBOOK-S7110 3.13.0-24-generic #47-Ubuntu SMP
>     Fri May 2 23:31:42 UTC 2014 i686 i686 i686 GNU/Linux
>
>     I am also using the packages 1.0.3-0ubuntu3 of lxc.
>
>     What might be the difference, so that it works for you and does
>     not work for me?
>
> Try
> https://www.mail-archive.com/[email protected]/msg00993.html
>
> Look for "unconfined".
>
> >>>> The LXC host (Ubuntu) is a virtual machine running in a XEN
> >>>> environment. I would understand if that is not possible, but it
> >>>> is possible, since Debian 7 and CentOS 6 containers run fine on
> >>>> this host.
> >>>
> >>> XEN???
> >>>
> >>> Oh crap... It's information like this that is critical to
> >>> understand what's going on.
> >>>
> >>> You're in an environment with a Fedora 20 container running on an
> >>> Ubuntu virtualized host in a Xen guest running under a Xen
> >>> paravirtualization hypervisor. Without knowing this, it would be
> >>> impossible to even guess where the problem may lie (even with this
> >>> information, it may be impossible). I haven't even begun to
> >>> attempt to reproduce it, but the number of independent variables
> >>> just shot through the roof.
> >>>
> >>> First order of troubleshooting: eliminate independent variables...
>
> Try running fedora under lxc under xen under vmware :P
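The "unconfined" workaround in the linked message amounts to disabling AppArmor confinement for the container in its LXC config. A sketch, assuming LXC 1.x on Ubuntu and a container named fedora20 (the name and path are placeholders):

```
# /var/lib/lxc/fedora20/config  -- path and container name are examples
# Run the container without an AppArmor profile so systemd inside the
# guest can perform the mounts it needs at boot.
lxc.aa_profile = unconfined
```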
Yeah, this should work now in most cases, but the corner cases can still
drive you nuts. It used to not be so. The hypervisor has to virtualize
(emulate) the privileged supervisory (ring 0) instructions to virtualize
a system. To virtualize a hypervisor, that includes emulating the
hardware virtualization instructions themselves. That's non-trivial and,
IIRC, even some of the early hardware virtualization instructions could
not be properly virtualized. I think it was some privileged status
register request that had some quirk in the early revs that just
couldn't be virtualized properly.

At one time (for reasons I will not go into), I had to virtualize SCO
Unix ODT (SCO [the ORIGINAL SCO] called it Open DeskTop - SCO engineers
called it "Open Death Trap"; I just needed it OFF hard iron). I had it
working under VMware Server 1.x, but I don't think I ever got it to work
properly under VirtualBox due to driver issues. I tried it under Xen,
but that was before we had hardware virtualization, so Xen required the
para drivers, so that was a hell no. I even tried QEMU emulation, and
QEMU basically gave me errors to sod off because of some 286-class modes
and executables that it abjectly refused to touch at the time.

I've done enough virt under virt. It gets to be fun. At Internet
Security Systems, some of us were experimenting with a hypervisor to act
as an anti-malware security agent, a la Joanna's Little Blue Pill. After
IBM bought us and before I retired, there were some announcements, and I
think they even rolled out some enterprise products along those lines.

> FWIW though, when using standard configurations (e.g. distro-bundled
> kernel, or vanilla upstream kernel with distro-provided config), xen
> usually behaves similar-enough to bare-metal for most cases. It's only
> when someone uses their own stripped-down custom-config-and-build
> kernel that results might vary wildly.

The key operative phrase there is "most cases".
They work consistently for the vast majority of "normal" applications
and "normal" operating systems. I would not be surprised if systemd
falls outside of some people's idea of "normal".

In contrast, there are virtualization-aware applications that make my
life amusing. There are well-known ways to know if you are virtualized
and, most of the time, to tell what hypervisor you're running under.
Most techniques are simple, like checking the BIOS and "motherboard"
type, or network cards and MAC addresses. Some get into the paravirt
communications stuff (I/O ports for VMware, emulated illegal
instructions for VirtualBox). Some look at relocated IDT tables. Some
get really twitchy, where you clock machine cycles for privileged
instructions. In the extreme case, there's code that runs a "puzzle"
that includes privileged instructions. You then verify the output of
the puzzle against the number of machine cycles it consumed. The
hypervisor cannot predict in advance what the result will be (it's a
one-way puzzle) and cannot rig the results to pass.

We've got malware now that spots virtualization and evades debugging
when detected. There's also some reported to try and counterattack the
hypervisor. There's been an arms race out there for years. I did run
into some malware that was virtualization-aware and recognized a VMware
hypervisor, Microsoft's Hyper-V, and VirtualBox, but it didn't
recognize Xen. PITA to analyze.

OTOH, using HW virt and its snapshot capabilities has enabled me to do
a lot of debugging in the past on malware and on some benign tasks. But
I've always had to watch for where the introduction of virtualization
and virtualization artifacts can affect the tests. So my first step is
always to eliminate those variables wherever possible. Fortunately, in
this case, it was something much more obvious and simpler.

> --
> Fajar

Regards,
Mike
--
Michael H. Warfield (AI4NB)  | (770) 978-7061 | [email protected]
/\/\|=mhw=|\/\/              | (678) 463-0932 | http://www.wittsend.com/mhw/
NIC whois: MHW9              | An optimist believes we live in the best of all
PGP Key: 0x674627FF          | possible worlds. A pessimist is sure of it!
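The simplest of the fingerprint checks described above (BIOS and "motherboard" strings) can be sketched in a few lines. A minimal Python example, assuming a Linux guest that exposes SMBIOS/DMI data under /sys/class/dmi/id; the marker list is illustrative, not exhaustive:

```python
from pathlib import Path

# Hypervisor fingerprints commonly seen in SMBIOS/DMI vendor strings.
# Illustrative only -- real detection code carries far longer lists.
MARKERS = ("vmware", "virtualbox", "innotek", "kvm", "qemu", "xen",
           "bochs", "microsoft corporation")

def dmi_hints(dmi_dir="/sys/class/dmi/id"):
    """Return (field, value) pairs whose DMI string matches a known marker."""
    hints = []
    for field in ("sys_vendor", "product_name", "board_vendor", "bios_vendor"):
        path = Path(dmi_dir) / field
        try:
            value = path.read_text().strip()
        except OSError:
            continue  # field absent (non-x86 hardware, or /sys not mounted)
        if any(m in value.lower() for m in MARKERS):
            hints.append((field, value))
    return hints

if __name__ == "__main__":
    for field, value in dmi_hints():
        print(f"{field}: {value}")
```

On bare metal this typically prints nothing; under a hypervisor it would report something like sys_vendor = "VMware, Inc.". The timing-based and "puzzle" techniques are much harder to sketch portably, since they depend on cycle counters and privileged instructions.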
_______________________________________________
lxc-users mailing list
[email protected]
http://lists.linuxcontainers.org/listinfo/lxc-users
