Re: [Bug 362013] Re: kvm migration fails with large-memory VMs

2009-09-09 Thread Brent Nelson
On Tue, 8 Sep 2009, Dustin Kirkland wrote:

 Hi Brent-

 Upstream indicates that this is solved in qemu 0.11, which is what we
 have in Karmic right now.

 Any chance you can test this on a Karmic system running qemu-kvm 0.11?

 With regard to your ifup/ifdown question, I'm uploading a fix right now
 for Bug #376387, which covers exactly that issue.

 :-Dustin


The KVM 84 PPA you posted for Hardy fixed the memory issue (although live 
migration of SMP guests is still broken; apparently that is fixed sometime 
after KVM 84).  I don't have any Karmic Koala system to test with (trying to 
hold out for the next LTS release), but it appears to ship the same version or 
better, so I would expect that it resolves the issue with migration of 
large-memory VMs.

Thanks,

Brent

-- 
kvm migration fails with large-memory VMs
https://bugs.launchpad.net/bugs/362013
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu-kvm in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 420423] Re: Running karmic as virtual machine with virtio hard disk outputs I/O errors

2009-09-01 Thread Brent Nelson
The kernel patch and discussion for this issue are here:

http://patchwork.kernel.org/patch/39589/

-- 
Running karmic as virtual machine with virtio hard disk outputs I/O errors
https://bugs.launchpad.net/bugs/420423
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 420508] [NEW] nfs_getattr starvation with heavy NFS write activity

2009-08-28 Thread Brent Nelson
Public bug reported:

Binary package hint: linux-image

This is a known bug in kernels prior to 2.6.25 (I'm not sure when it was
introduced).  If you have a long-running write task (such as a dd) to an NFS
mount, an ls -l on the NFS mount won't complete until the write
finishes.  If you are copying a file that takes 20 minutes to complete,
a simple ls -l will also take 20 minutes.  A \ls (ls with no
arguments, which avoids the per-file stat calls) will work fine.
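
A minimal way to observe the starvation (the mount point and file name
are placeholders; assumes a client running an affected pre-2.6.25 kernel):

```shell
# Start a long write to an NFS mount in the background
# (/mnt/nfs is a placeholder for your mount point)
dd if=/dev/zero of=/mnt/nfs/bigfile bs=1M count=4096 &

# On an affected kernel this blocks until the dd completes,
# because each stat() waits on nfs_getattr:
time ls -l /mnt/nfs

# A plain listing (no per-file stat) returns promptly:
time \ls /mnt/nfs
```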

This was fixed in a really tiny patch:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=28c494c5c8d425e15b7b82571e4df6d6bc34594d

** Affects: linux-ports-meta (Ubuntu)
 Importance: Undecided
 Status: New

-- 
nfs_getattr starvation with heavy NFS write activity
https://bugs.launchpad.net/bugs/420508


[Bug 420508] Re: nfs_getattr starvation with heavy NFS write activity

2009-08-28 Thread Brent Nelson
Redhat discussion of this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=469848

** Bug watch added: Red Hat Bugzilla #469848
   https://bugzilla.redhat.com/show_bug.cgi?id=469848

-- 
nfs_getattr starvation with heavy NFS write activity
https://bugs.launchpad.net/bugs/420508


[Bug 362013] Re: kvm migration fails with large-memory VMs

2009-04-20 Thread Brent Nelson
The live migration crashes with KVM:84 are specific to SMP guests.  This
is already logged in the KVM bug database.  I've seen mention that
KVM:85 is soon to be released, but I do not know whether this bug
is fixed in it.

-- 
kvm migration fails with large-memory VMs
https://bugs.launchpad.net/bugs/362013


[Bug 362013] Re: kvm migration fails with large-memory VMs

2009-04-17 Thread Brent Nelson
I tried again to migrate with the Hardy PPA KVM:84 and got a different oops.
However, if I first stop the guest VM and then migrate, it does work
fine, and it's actually pretty quick, such that an ssh session would see just
a few seconds of delay.  So offline migration does work, and isn't bad.  I
would love to have live migration working properly, but the offline
migration delay was brief enough that I'll probably cancel my plans to
experiment with Xen and just stick with KVM (the newer version in the PPA).

One thing I didn't like with the migration is that the migrated guest
takes the full amount of RAM the guest was allocated, rather than just
the RAM the guest was using prior to the migration.  Good thing I
already bought more RAM, although maybe there's something I can do with
the new memory ballooning feature...
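
For what it's worth, a sketch of how ballooning might reclaim that RAM
(syntax as I understand the new feature; the guest needs the virtio balloon
driver, and the exact flags may differ by version):

```shell
# Start the guest with a virtio balloon device (hypothetical example)
kvm -m 6144 -balloon virtio -hda guest.img

# Then, in the qemu monitor (Ctrl-Alt-2):
#   (qemu) info balloon    # report the current balloon size
#   (qemu) balloon 2048    # ask the guest to shrink to 2048 MB
```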

-- 
kvm migration fails with large-memory VMs
https://bugs.launchpad.net/bugs/362013


[Bug 362013] [NEW] kvm migration fails with large-memory VMs

2009-04-15 Thread Brent Nelson
Public bug reported:

Binary package hint: kvm

In Hardy, an attempt to migrate a 6GB VM results in:
migration: memory size mismatch: recv 2168467456 mine 2168467456
migrate_incoming_fd failed (rc=232)
Migration failed rc=232

I ran across a thread which suggests that this is a known KVM bug that
has since been fixed (when Qemu fixed it in their source, and KVM pulled
in the new code):

http://kerneltrap.org/index.php?q=mailarchive/linux-kvm/2008/12/14/4414784

I'm planning to test the KVM 84 Hardy backport to see if it is fixed
there...
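
For anyone wanting to reproduce, migration is typically invoked along these
lines (hostname, port, memory size, and image name are placeholders, and the
exact syntax varies by version):

```shell
# On the destination host: start kvm waiting for an incoming migration
kvm -m 6144 -hda guest.img -incoming tcp:0:4444

# On the source host, in the qemu monitor:
#   (qemu) migrate tcp:desthost:4444
```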

** Affects: kvm (Ubuntu)
 Importance: Undecided
 Status: New

-- 
kvm migration fails with large-memory VMs
https://bugs.launchpad.net/bugs/362013


[Bug 362013] Re: kvm migration fails with large-memory VMs

2009-04-15 Thread Brent Nelson
I just tried with the Hardy PPA KVM:84.  It doesn't have this migration
issue, and migration claims to run to completion; however:

1) The first time I tried migrating, after migration completed, the
guest VM, also running Hardy, had done a fresh boot.

2) I tried migrating again, this time with a vncviewer attached to the
destination.  The vncviewer went away on migration completion, but when
I started the vncviewer again, the guest VM, now on the destination, had
panicked.  See screenshot, attached, for what I was able to see of the
kernel panic.

P.S. A minor nuisance with KVM:84, as packaged: if you don't specify
script= in your -net options, kvm looks for /etc/kvm-ifup, which does
not exist, rather than /etc/kvm/kvm-ifup.
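
Until the packaging is fixed, explicitly passing script= works around it
(an illustrative invocation; adjust memory, image, and NIC options to your
setup):

```shell
# Point kvm at the packaged ifup script explicitly
kvm -m 512 -hda guest.img \
    -net nic -net tap,script=/etc/kvm/kvm-ifup
```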

** Attachment added: Screenshot.png
   http://launchpadlibrarian.net/25513435/Screenshot.png

-- 
kvm migration fails with large-memory VMs
https://bugs.launchpad.net/bugs/362013


[Bug 240905] Re: virDomainCreateLinux fails because of unknown parameter -drive

2008-09-24 Thread Brent Nelson
Just another "me too", although I see the issue with virt-install,
rather than virt-manager, when using the --pxe option.

-- 
virDomainCreateLinux fails because of unknown parameter -drive
https://bugs.launchpad.net/bugs/240905


Re: [Bug 210468] Re: try to access a .Trash-$USER directory on autofs mounts

2008-04-29 Thread Brent Nelson
On Tue, 29 Apr 2008, Sebastien Bacher wrote:

 this erroneous assumption is the bug

 it's not an assumption, but it needs to look on the partition to know if
 there is a directory there


It may need to look at THE partition (i.e., the partition on which a file 
is actually being deleted, although for a home directory that would still 
be erroneous; it needs to look in the home directory, not the mountpoint 
of the home partition); it certainly does not need to look at EVERY 
partition the system can access, even when no deletion has actually 
occurred.

This is quite unworkable on anything but a standalone desktop PC.  We had 
to disable the trash daemon (chmod a-x) before we could deploy Hardy in our 
environment.  Our environment is rather typical of managed Linux/UNIX 
systems.
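
Concretely, the workaround we used was along these lines (the binary's path
is an assumption; check where gvfsd-trash lives on your install):

```shell
# Disable the trash backend so the session can never spawn it
sudo chmod a-x /usr/lib/gvfs/gvfsd-trash
```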

Thanks,

Brent

-- 
try to access a .Trash-$USER directory on autofs mounts
https://bugs.launchpad.net/bugs/210468


[Bug 214130] Re: flashplugin-nonfree mismatch

2008-04-08 Thread Brent Nelson
Ditto, on Hardy Heron.  The download site indicates a new version.  This
has been a frequent problem over the years, and I wish there were a more
permanent solution for it.  Ideally, Adobe would simply archive older
versions, rather than always offering only the latest with no way to
download by version.  Then the package would always work.

I suppose I'm just dreaming...

-- 
flashplugin-nonfree mismatch
https://bugs.launchpad.net/bugs/214130


[Bug 152794] Re: nis daemon fails to attach to domain the first time it is run in Gutsy

2008-04-03 Thread Brent Nelson
I am seeing this issue on a fresh Hardy amd64 install.  ypbind is
actually failing to register with portmap, even though portmap started
significantly earlier (nis startup is S18 in rc2, while portmap seems to
be starting as S43 in rcS; curiously, portmap is also listed as S17 in
rc2, but that's not what actually starts it).  Other things started a
little later (NFS) do register with portmap.

However, if I add nis to rcS after portmap (S46), it works.  It seems
like something in-between is causing a temporary disruption (could be
NetworkManager, although I have a static IP configured in
/etc/network/interfaces).
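
For anyone else hitting this, the reordering described above can be done with
update-rc.d rather than by hand-editing symlinks (a sketch; the stop
runlevels and sequence numbers are assumptions, so adjust to your system):

```shell
# Remove the existing nis links, then start it in rcS after portmap (S46)
update-rc.d -f nis remove
update-rc.d nis start 46 S . stop 20 0 1 6 .
```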

-- 
nis daemon fails to attach to domain the first time it is run in Gutsy
https://bugs.launchpad.net/bugs/152794


[Bug 210468] Re: try to access a .Trash-$USER directory on autofs mounts

2008-04-03 Thread Brent Nelson
I tried again today, with gvfs 0.2.2svn20080403-0ubuntu1.  I found that
the mount storm is currently pretty slow, but the gvfsd-trash processes
seem to be accumulating endlessly. After ~45 minutes, it has mounted
everything in our department, and I have over 3000 gvfsd-trash processes
(and 1.1GB consumed on a freshly booted machine with a completely idle
logged-in Gnome session).

-- 
try to access a .Trash-$USER directory on autofs mounts
https://bugs.launchpad.net/bugs/210468


[Bug 210468] Re: try to access a .Trash-$USER directory on autofs mounts

2008-04-02 Thread Brent Nelson
Please also see duplicate bug #210586.  gvfsd-trash continuously tries
to access the automount points (and keeps spawning and spawning,
although it does allow old processes to die before spawning more, so it
doesn't consume all available memory).  The result is that, as soon as
someone logs in, the machine will automount every NFS filesystem in our
department (~145, in our case) and keep the mounts active (some may
expire, only to be remounted shortly thereafter).

This results in a mount storm (in the past, on slower processors, such
storms have rendered machines unusable for ~10-20 minutes in our
environment), and it really defeats much of the purpose of automounting
if all mounts are kept active.

-- 
try to access a .Trash-$USER directory on autofs mounts
https://bugs.launchpad.net/bugs/210468


[Bug 210586] [NEW] gvfsd-trash spawns continuously, triggers all automounts

2008-04-01 Thread Brent Nelson
Public bug reported:

Binary package hint: gvfs

In gvfs 0.2.2svn20080331-0ubuntu1 on a recent amd64 Hardy install,
gvfsd-trash is triggering all of my NFS automounts.  gvfsd-trash spawns
continuously, although the total number doesn't seem to grow beyond
~50-60 something (some exit and new ones are spawned).

Perhaps gvfsd-trash should ignore autofs mounts?

Note that I am using an autofs5 package from Debian experimental (for direct
mount support; I wish Hardy included it), so it would be good to get
confirmation from someone running the autofs included with Hardy.

** Affects: gvfs (Ubuntu)
 Importance: Undecided
 Status: New

-- 
gvfsd-trash spawns continuously, triggers all automounts
https://bugs.launchpad.net/bugs/210586


[Bug 210591] [NEW] icedtea-gcjwebplugin issue with a specific URL

2008-04-01 Thread Brent Nelson
Public bug reported:

Binary package hint: icedtea-gcjwebplugin

icedtea-gcjwebplugin 1.0-0ubuntu5 on a recent amd64 Hardy install does
run the applet at http://www.goes.noaa.gov/GSSLOOPS/ecwv.html, but not
correctly.  The applet is supposed to pull in multiple satellite images
and display them in sequence.  However, the animation does not occur,
and only one image of the sequence is displayed.

** Affects: icedtea-gcjwebplugin (Ubuntu)
 Importance: Undecided
 Status: New

-- 
icedtea-gcjwebplugin issue with a specific URL
https://bugs.launchpad.net/bugs/210591