I am using a fresh install of Ubuntu 10.04 LTS Server and have used
virt-manager to set up an NFS storage repository (located on another
machine).
On boot the storage fails to mount.
First it was failing because statd was not running; I fixed that by
editing /etc/init/statd.conf, making this change:
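(The actual edit is cut off in the archive. As a hedged sketch only: workarounds of this kind usually adjust the job's start condition so statd is running before remote filesystems are attempted. The stanza below is an assumed example, not the poster's real change.)

```
# /etc/init/statd.conf -- sketch only; this 'start on' condition is an
# assumed illustration, not the poster's actual (truncated) edit
start on (started portmap or starting mountall)
```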
Hi Steve,
Thanks for your work on this.
Just to make sure you understand (since you mentioned the problem
doesn't happen for you), this bug is not just about warning messages...
It actually drops you into the rescue shell, and the boot process stops
there, waiting for console input.
This
Is this still an issue in lucid final?
--
retry remote devices when parent is ready after SIGUSR1
https://bugs.launchpad.net/bugs/470776
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
--
ubuntu-bugs mailing list
There aren't any known remaining reliability problems in lucid regarding
stacked NFS mounts, and I don't think we'll go back now to document this
in the release notes for 9.10. Marking wontfix.
** Changed in: ubuntu-release-notes
Status: New => Won't Fix
** Description changed:
Binary package hint: mountall
- Hi,
+ Hi,
I have a problem similar to the one in
https://bugs.launchpad.net/ubuntu/+source/mountall/+bug/461133 . My NFS shares
are not mounted at boot time, but I don't think both problems have the same
cause.
I think my problem is
Meanwhile we have bug 548954. I'm glad other people agree that hiding
boot messages on servers is a bad idea. The bug is even marked critical,
but why is this one not critical? Only because it is 'hard to fix' in
Karmic?
The bug about the messages is bug 504224.
Steve, I hope you mean the reason for the messages will be gone in Lucid
and not: don't worry, we'll cover the messages with a nice looking boot
splash.
Older Unix admins will have a heart attack if they see a boot splash on
a server OS.
On Thu, Mar 25, 2010 at 03:03:50PM -, Alvin wrote:
Steve, I hope you mean the reason for the messages will be gone in Lucid
No, I do not.
and not: don't worry, we'll cover the messages with a nice looking boot
splash.
They're *already* covered in lucid on systems which support the boot
splash.
I was running into this issue on Karmic and now I have tried Lucid Beta
1. The behavior here is a little different because the shares do mount
on startup, but error messages are still displayed in the terminal.
mount.nfs: DNS resolution failed for hostname: Name or service not known
mountall:
Brian,
The mount.nfs messages are shown if you aren't using the graphical
splash, but this should also be resolved in the next upload of plymouth
to lucid.
Thanks for the helpful info. I'll make sure to report back after that
package is updated.
I was affected by this on my karmic box.
My work around was to add noauto to the /etc/fstab line for the NFS
mount, and then add mount /path/to/mount in /etc/rc.local. This means
that the directory would get mounted at start up, but at the end. This
worked for my situation.
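Spelled out, that workaround looks roughly like this (the server name and mount point are illustrative placeholders, not from the original report):

```
# /etc/fstab -- noauto keeps mountall from touching the share at boot
# (fileserver:/export and /mnt/share are placeholders)
fileserver:/export  /mnt/share  nfs  noauto,rw  0  0

# /etc/rc.local -- mount it at the very end of boot, after the network is up
mount /mnt/share
```

The trade-off is that boot no longer waits on the share, but nothing retries the mount if the server is unreachable when rc.local runs.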
I am/was also affected by this. I run a home network with a main nfs server
and 4 diskless clients which need root over nfs as well as other nfs mounts. I
have been able to keep this setup going since Dapper until this past upgrade to
Karmic. After struggling with this for the last 2 days I
Steve,
It seems as though Network Manager is as big a crock as it ever
was -- at least for workstations that are either connected via eth0 or
not connected at all.
I purged Network Manager and added the appropriate lines to
/etc/network/interfaces and now when I boot I get the same
Network Manager is supposed to bring up wired network connections at
boot time. If this isn't happening for you, then that's either a bug in
network-manager or a configuration problem of some sort. I would
encourage you to file a bug report on network-manager about that issue.
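For reference, the kind of static /etc/network/interfaces entry being discussed typically looks like this (all addresses are made-up examples, not taken from anyone's setup here):

```
# /etc/network/interfaces -- static wired configuration in place of
# Network Manager (addresses below are illustrative placeholders)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```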
I agree with you
On 11/02/2010 14:06, Steve Langasek wrote:
Most users of NFS (including myself) have seen no such problem in karmic,
aside from the warning that the NFS mounts are not yet available - messages
which I believe are being suppressed in lucid.
My experience is different. Out of 10 NFS clients,
I have a workstation with a fresh Karmic install and am seeing that all
my NFS mounts are failing at boot time with this same message:
One or more of the mounts listed in /etc/fstab cannot yet be mounted:
. . .
Press ESC to enter a recovery shell.
Other bug reports using the same
Actually, I don't have eth0 which is why DNS isn't working and also why
nothing will work on the network. So why is the boot sequence trying to
mount NFS mounts before the network interface is started?
So how do I get the eth0 interface started so that the machine can boot
sensibly?
Russel,
This bug is not about trying to mount NFS mounts before the network is
up. That's expected behavior, and does not affect the reliability of
NFS mounts at boot time. If your eth0 isn't coming up, then of course
NFS mounts will fail; you should determine why your network isn't being
This bug is a very, very critical and nasty problem: a knock-out (K.O.)
criterion for Karmic use in commercial environments.
This is a mission-critical bug which makes Karmic, and thus Ubuntu,
unusable in standard corporate setups.
It must be fixed.
J. Sauer
** Tags added: verification-needed
** Also affects: ubuntu-release-notes
Importance: Undecided
Status: New
Subscribing ubuntu-release-notes because upgrading in environments that
rely on NFS might not be a good idea.
The attached patch is sufficient to fix this bug for me in Karmic, but
is probably not a very good long-term solution. It simply sets the
ready flag of each remote filesystem after SIGUSR1 is received, before
attempting to mount them all.
** Attachment added: "Mark all remote filesystems ready"
Well, I'm a home user, I'm affected - and the issue is a right PITA...
Karmic's inability to mount NFS drives at boot is a major regression.
So if the mechanism for solving the problem is an SRU then I vote for
that solution too please.
I believe an SRU is unfortunately not practical here because the fix is
intertwined with significant architectural changes to mountall that
can't be backported to 9.10. Scott, please correct me if I'm wrong.
Since I have to boot via a rescue shell nowadays, I filed bug 504271
requesting a fix for this issue in Karmic.
Is it possible that this issue also affects mounting CIFS filesystems at
boot?
This is a request for an SRU.
This bug prevents machines from booting if home directories are on NFS.
My machine was able to boot Jaunty so this is a regression. Not being
able to boot is severe.
According to https://wiki.ubuntu.com/StableReleaseUpdates, SRUs are issued:
* to fix high-impact
Without a fix, a lot of servers using NFS can't even boot unattended.
Isn't that important enough for a separate fix?
My local library is set up to netboot 30 workstations, which won't boot
well with karmic.
On Thu, 2009-12-31 at 11:52 +, Patrick wrote:
When is this going to be fixed in karmic?
It isn't.
Scott
--
Scott James Remnant
sc...@ubuntu.com
When is this going to be fixed in karmic?
It isn't.
Um, but my nested NFS mounts don't work in Karmic. Or should I build a
backport of mountall from Lucid for my machines?
On Mon, 2010-01-04 at 18:14 +, Dan wrote:
When is this going to be fixed in karmic?
It isn't.
Um, but my nested NFS mounts don't work in Karmic. Or should I build a
backport of mountall from Lucid for my machines?
A Lucid backport won't work, you'd need to backport more than just
** Changed in: mountall (Ubuntu)
Status: Triaged => Fix Committed
** Changed in: mountall (Ubuntu)
Milestone: None => lucid-alpha-2
** Changed in: mountall (Ubuntu)
Assignee: (unassigned) => Scott James Remnant (scott)
This bug was fixed in the package mountall - 2.0
---
mountall (2.0) lucid; urgency=low
[ Scott James Remnant ]
* mount event changed to mounting, to make it clear it happens
before the filesystem is mounted. Added mounted event which
happens afterwards.
* Dropped the