I am not sure if this is what solved it, but the changes I made were removing the user home directory encryption and remounting all the drives (so the mountinfo may have changed).
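For what it's worth, a quick way to check whether the encrypted home was the culprit (a sketch; the container path below is the stock default and may differ on your setup):

  # Show the mount backing the container rootfs; OPTIONS should not
  # contain "nosuid". An encrypted (ecryptfs) home is typically mounted
  # nosuid,nodev, which makes the kernel ignore setuid bits such as sudo's.
  findmnt -T /var/lib/lxc/trusty-raw/rootfs -o TARGET,SOURCE,FSTYPE,OPTIONS

findmnt is part of util-linux; if it is missing, grepping /proc/self/mountinfo for the path works as well.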
On Wed, Sep 17, 2014 at 3:14 AM, Nilesh B. <[email protected]> wrote:
> After some troubleshooting, I can now start the container (with the
> following messages) and am able to run sudo inside the container:
>
> p1 login: <4>init: setvtrgb main process (569) terminated with status 1
> * Stopping save kernel messages ...done.
> <4>init: plymouth-upstart-bridge main process ended, respawning
> --------------------------------
>
> and the output of /proc/self/mountinfo is:
>
> 17 22 0:15 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
> 18 22 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
> 19 22 0:5 / /dev rw,relatime - devtmpfs udev rw,size=16305464k,nr_inodes=4076366,mode=755
> 20 19 0:12 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
> 21 22 0:16 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=3263248k,mode=755
> 22 1 8:2 / / rw,relatime - ext4 /dev/disk/by-uuid/13a9487e-bb6a-4a9e-bc56-d31e1daa30fe rw,errors=remount-ro,data=ordered
> 23 17 0:17 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755
> 24 17 0:18 / /sys/fs/fuse/connections rw,relatime - fusectl none rw
> 25 17 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw
> 26 17 0:10 / /sys/kernel/security rw,relatime - securityfs none rw
> 27 21 0:19 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k
> 28 21 0:20 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw
> 52 21 0:31 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755
> 53 17 0:32 / /sys/fs/pstore rw,relatime - pstore none rw
> 54 22 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,data=ordered
> 55 22 8:4 / /home rw,relatime - ext4 /dev/sda4 rw,data=ordered
> 56 18 0:33 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime - binfmt_misc binfmt_misc rw
> 58 23 0:21 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd
> 59 52 0:34 / /run/user/1000/gvfs rw,nosuid,nodev,relatime - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000
>
> Thanks
> Nilesh
>
> On Tue, Sep 16, 2014 at 3:51 AM, Serge Hallyn <[email protected]> wrote:
>> Quoting Nilesh B. ([email protected]):
>> > Hi,
>> > I recently did a fresh Ubuntu install with LXC.
>> > After starting the container in non-daemon mode ($ lxc-start -n trusty-raw),
>> > it showed errors and the foreground process hung.
>> > I was able to stop the container from another terminal.
>> >
>> > The container does start in daemon mode ($ lxc-start -n trusty-raw -l
>> > debug -o start_daemon.log), but after ssh'ing in, running a sudo command
>> > gave the following error:
>> > sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the
>> > 'nosuid' option set or an NFS file system without root privileges?
>>
>> Can you give the output of
>>
>> cat /proc/self/mountinfo
>>
>> Note that in the screen output you show, all looks fine. There are
>> console msgs showing up, but I see a login prompt, and your container
>> seems to have booted up just fine. So I'm just wondering about the
>> sudo error.
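For anyone who lands on the same sudo error, a minimal check from inside the container (a sketch, assuming a standard Ubuntu guest):

  # The setuid bit on sudo must be intact (expect -rwsr-xr-x, owner root).
  ls -l /usr/bin/sudo
  # Print the mount options of the container's / (field 5 of mountinfo is
  # the mount point, field 6 the options); they must not include "nosuid",
  # or the kernel ignores the setuid bit and sudo cannot become root.
  awk '$5 == "/" {print $6}' /proc/self/mountinfo
  # Non-interactive smoke test; succeeds only if sudo works without a prompt.
  sudo -n true && echo "sudo OK"

In the mountinfo output quoted above, / is mounted rw,relatime with no nosuid flag, which is consistent with sudo working again.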
