[Bug 1858161] Re: Ubuntu 18.04 install not reading block storage properly

2020-01-06 Thread Laz Peterson
Hello Brian, my apologies for that ... we are installing using the
18.04.3 server media.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1858161

Title:
  Ubuntu 18.04 install not reading block storage properly

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1858161] Re: Ubuntu 18.04 install not reading block storage properly

2020-01-03 Thread Laz Peterson
FYI - both Ubuntu 16.04 and the 20.04 daily build show the correct disk
size. See attached from Focal Fossa.

** Attachment added: "20200103-Focal.png"
   
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+attachment/5317451/+files/20200103-Focal.png


[Bug 1858161] Re: Ubuntu 18.04 install not reading block storage properly

2020-01-03 Thread Laz Peterson
This screenshot shows what the Ubuntu 18.04.3 installer displays when
asking me to select the installation target. The correct value should
be ~1TB, not ~8TB.

** Attachment added: "20200102-Install_Block_Storage_Issue.png"
   
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+attachment/5317436/+files/20200102-Install_Block_Storage_Issue.png


[Bug 1858161] Re: Ubuntu 18.04 install not reading block storage properly

2020-01-03 Thread Laz Peterson
I really don't know how to select the appropriate package, but hdparm
does seem to be confused as to the physical/logical sector size. I don't
know what the installer uses to get the wrong values though.

** Package changed: ubuntu => hdparm (Ubuntu)


[Bug 1858161] Re: Ubuntu 18.04 install not reading block storage properly

2020-01-03 Thread Laz Peterson
I should also mention that this server is running on an AMD EPYC Rome
platform, and it seems that the kernel version used during install is a
bit older than it should be to support this.

Any thoughts on where the Ubuntu installation is getting the wrong
physical/logical size?


[Bug 1858161] Re: Ubuntu 18.04 install not reading block storage properly

2020-01-03 Thread Laz Peterson
This screenshot shows output from hdparm, fdisk and the values in
/sys/block/sda/queue/physical_block_size and logical_block_size.
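For anyone wanting to reproduce the same check, a small sketch that reads the same sysfs values (parameterized over the sysfs root so it can also be tried against a mock directory; device names will differ per system):

```shell
#!/bin/sh
# Print physical/logical block size for every device under a sysfs-style
# tree (default /sys/block). Parameterized so it can be run against a
# mock directory as well as a live system.
list_block_sizes() {
  root="${1:-/sys/block}"
  for q in "$root"/*/queue; do
    [ -f "$q/physical_block_size" ] || continue
    dev=$(basename "$(dirname "$q")")
    printf '%s physical=%s logical=%s\n' "$dev" \
      "$(cat "$q/physical_block_size")" "$(cat "$q/logical_block_size")"
  done
}
list_block_sizes "$@"
```

Comparing this output against `hdparm -I` and `fdisk -l` shows whether the kernel and the userspace tools disagree.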

** Attachment added: "20200103-More_Info.png"
   
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+attachment/5317437/+files/20200103-More_Info.png


[Bug 1858161] [NEW] Ubuntu 18.04 install not reading block storage properly

2020-01-02 Thread Laz Peterson
Public bug reported:

We recently received Supermicro servers with Avago 3108 chipset, and 2x
Seagate 4K SAS drives in a hardware RAID1 configuration.

All other operating systems (including Ubuntu 16.04) report this virtual
drive properly.

Ubuntu 18.04 shows it as if it were using 512-byte sectors instead of
the 4K sectors that are actually there.

Instead of a ~1TB drive, it shows ~8TB, and the installer is unable to
proceed no matter what I do.

Going to shell during install and running fdisk shows all of the right
information and drive/sector sizes.
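The 8x inflation is exactly the 4096/512 ratio, which makes me suspect (my guess, not confirmed) that something multiplies the count of 512-byte logical sectors by the 4K physical sector size. A sanity check with hypothetical numbers for a ~1 TB drive:

```shell
#!/bin/sh
# Hypothetical numbers: a ~1 TB RAID1 virtual drive with 4K physical /
# 512-byte logical sectors. Multiplying the logical-sector count by the
# physical sector size inflates capacity by exactly 4096/512 = 8x.
logical_sectors=1953525168                      # 512-byte sectors, ~1 TB
correct_bytes=$((logical_sectors * 512))
wrong_bytes=$((logical_sectors * 4096))
echo "correct: $((correct_bytes / 1000000000)) GB"   # ~1000 GB
echo "wrong:   $((wrong_bytes / 1000000000)) GB"     # ~8001 GB
```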

I am able to install 16.04 and then upgrade to 18.04, but that is not
ideal.

Please let me know what other information I can get, or how I can get it
for you.

** Affects: ubuntu
 Importance: Undecided
 Status: New


[Bug 1738864] Re: libvirt updates all iSCSI targets from host each time a pool is started

2017-12-21 Thread Laz Peterson
Christian, I couldn't hold back from giving this a try. FYI, it's
working like a dream.

Thanks again!

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1738864

Title:
  libvirt updates all iSCSI targets from host each time a pool is
  started

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1738864/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1738864] Re: libvirt updates all iSCSI targets from host each time a pool is started

2017-12-19 Thread Laz Peterson
Ha ha -- yes, good point about January!

You are wonderful, thanks so much for your help Christian. We're going
to plan for one VM host to test, with only VMs that are part of a HA
pair (and probably a nice big OpenStack test cluster), and leave the
primary VM on the stock 16.04.3 packages. That way if everything goes
nuts, at least we won't have to worry about any services going out.

I'm looking forward to this! What a pleasant surprise to learn about the
Ubuntu Cloud archives!

Take care, happy holidays to you Christian.


[Bug 1738864] Re: libvirt updates all iSCSI targets from host each time a pool is started

2017-12-19 Thread Laz Peterson
Hello Christian, thanks for your quick response.

True, it is quite unfortunate that this problem is just a waiting game,
and not a game-ending issue. And I definitely agree that some end game
here might not be good for a [5] type of result.

Regarding [3], yes I saw that yesterday while searching around. Had not
seen [4] yet, but I can see why they wanted to find a better solution
for the "large hammer" approach.

Now, regarding [2], that sounds like a very interesting possibility. I
will say that while running Ubuntu 17.10 for a short time, we did
experience quite noticeable instability on Xeon-based servers, as well
as hard system resets on AMD-based servers. (This required us to go back
to 16.04.3 on all hardware last weekend.) Do you think that those issues
have anything to do with libvirt, or with other parts of the system?

Also for [2], do you think there would be any harm in updating all
packages that are newer from xenial-pike? Or what do you think the best
approach here would be? I wasn't exactly aware of this option, and
definitely don't know much about it safely going into datacenter
production. But depending on how reliable/stable all of these versions
are on 16.04.3, I'd definitely be willing to go this route. After our
17.10 disaster though, I just need to be cautious. :)
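For anyone finding this thread later: the xenial-pike route Christian describes comes from the Ubuntu Cloud Archive, which (roughly, as I understand it) boils down to an extra apt source like this before updating the virtualization packages:

```
# Roughly what "add-apt-repository cloud-archive:pike" configures, e.g. in
# /etc/apt/sources.list.d/cloudarchive-pike.list:
deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main
```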

Very interested to hear more. Thanks again, Christian.


[Bug 1738864] Re: libvirt updates all iSCSI targets from host each time a pool is started

2017-12-18 Thread Laz Peterson
Here is the debug log from libvirt.  Starting on line 933, you will see
libvirt discovering the targets from the host, and then the next 10,000+
lines are libvirt going through and updating all of the target
information for each of these available targets.  Finally, it does what
I would expect it to do first, on line 11,476, which is to log in to the
target.

This particular example was about 15 seconds to connect one iSCSI pool.
But that number varies depending on unknown factors, from 15 (which is
rare in our case with 140+ targets) up to 90 seconds to connect one
pool.

And for each of the 140 targets on this one host, it goes and repeats
the same process for each individual pool that is started.

** Attachment added: "20171218-libvirt.txt"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1738864/+attachment/5024287/+files/20171218-libvirt.txt


[Bug 1738864] [NEW] libvirt updates all iSCSI targets from host each time a pool is started

2017-12-18 Thread Laz Peterson
Public bug reported:

Hello everyone, I'm a little confused about the behavior of libvirt in
Ubuntu 16.04.3.

We have up to 140 iSCSI targets on a single storage host, and all of
these are made available to our VM hosts. If I stop one of the iSCSI
pools through virsh ("virsh pool-destroy iscsipool1") and start it back
up again while running libvirt in debug, I see that it runs a discovery
and proceeds to go through and update every single target available on
that host -- even targets that we do not use, instead of simply
connecting to that target.

This turns a <1 second process into a minimum of 30 seconds, and I just
ran it with the stopwatch and clocked it at 64 seconds. So if we are
doing maintenance on these hosts and go for a reboot, it takes 90-120+
minutes to finish auto starting all of the iSCSI pools. And of course,
during this period of time, the server is completely worthless as a VM
host. Libvirt is just stuck until it finishes connecting everything.
Manually connecting to the targets using iscsiadm without doing all the
same hubbub that libvirt is doing connects these targets immediately, as
I would expect from libvirt.
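The reboot math is easy to reproduce (assuming ~40 s average per pool, somewhere between the observed 15 s and 90 s extremes):

```shell
#!/bin/sh
# Back-of-envelope: 140 pools, each pool start re-running discovery and
# updating every target. Assume ~40 s per pool (observed range 15-90 s).
pools=140
avg_secs=40
echo "autostart estimate: $((pools * avg_secs)) s (~$((pools * avg_secs / 60)) minutes)"
```

Which lands right in the 90-120+ minute range we see on real reboots.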

And for each of the 140 iSCSI targets, it goes through and runs an
iscsiadm sendtargets and then updates every single target before finally
connecting the respective pool.

We also noticed that libvirt in Ubuntu 17.10 does not have this
behavior. Well maybe it does, but it connects the iSCSI targets
immediately. It is a much different process than Ubuntu 16.04.3.

Any help would be greatly appreciated. Thank you so much.

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: iscsi storage

** Tags added: iscsi

** Tags added: storage


[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-17 Thread Laz Peterson
Hello Stefan,

Yes, now that you mention it, it seems that "Generate from host NUMA
configuration" in 15.04 simply puts everything on node 0.  At least,
the entire guest across both nodes (as a default), leaving an entire
second node available for sunshine and rainbows is not a desired
function.

Manual pinning seems to be the only way to go.  Unfortunately, this puts
a heavy strain on managing those resources -- I have numerous scrap
papers laying all over my office with CPU and memory counts under node
columns, with arrows pointed left and right.  It's comical to think
about, but ...

Very surprising that libvirt and Ubuntu are not able to recognize the
available NUMA resources when starting guests and automatically placing
them in the node that will be most appropriate for their requirements.

I do understand regarding your comment about manual pinning being
broken.  And from what I can tell, that is working fine.  So essentially
the LTS release seems covered.

Now that you mention vcpu and cpuset, I am very curious to try running
VMs with a topology entirely unknown to libvirt: manually (hoping that
in the future this will be a built-in feature) exposing physical cores
to the guest as cores and their HT siblings as threads.  I don't really
know much about this functionality; maybe it already exists, or maybe
I've just lost my marbles.
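For what it's worth, libvirt's domain XML can already express part of this by hand; a hypothetical fragment (the CPU numbers assume HT siblings pair up as N and N+12 on this box, which would need confirming against /sys/devices/system/cpu/cpuN/topology/thread_siblings_list):

```xml
<!-- Hypothetical guest fragment: 4 vcpus pinned so each guest core gets a
     physical core plus its HT sibling, with the pairing exposed to the
     guest as cores-with-threads. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <vcpupin vcpu='3' cpuset='13'/>
</cputune>
<cpu>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>
```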

In all of the tuning aspects of libvirt, I am always concerned with the
quality of HT compared to the physical core.  There are a few things
here, one is that I do not want to waste a potentially usable thread by
disabling HT.  But second, under heavy load I would prefer a process on
a guest with 2 cores to be pushed to a physical core with the respective
physical HT as a part of that single core, while also making available
the second physical core with its HT as a physical part of that.

As I type this email, I have a database server that is happily pinned to
only HT cores right now.  I can't imagine that would be detrimental to
its function, but preferably I would like some sort of policy to ensure
each guest is operating on at least one legitimate physical core, and
not entirely on the core leftovers.

Maybe I am thinking too far down the road here :-).  You enlighten me
greatly Stefan, your wisdom is always appreciated.

Thanks again.
~Laz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1446177

Title:
  Nodeinfo returns wrong NUMA topology / bad virtualization performance

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-02 Thread Laz Peterson
Thank you for the update Stefan.  Yes, I tried to compile and run numad
on Ubuntu but I had no luck there.

Also correct, I am explicitly pinning CPUs at this time.  As far as I
can tell, it is the only option.

As far as the big question mark in my head goes ... This function works
as expected on Ubuntu 15.04.  I have not tried 14.10.  So something
between then and now (something, somewhere !?) has changed to allow
this.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-02 Thread Laz Peterson
Err, most importantly, my (selfish) opinion is that something of this
magnitude should not be fixed upstream, but in a current Ubuntu LTS
release.  :-)



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-01 Thread Laz Peterson
My apologies.  I meant to say that only libvirt has the issue with
detecting and properly using NUMA nodes.  All other NUMA functions with
the system work as expected.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-01 Thread Laz Peterson
Why hello Stefan, glad to know you are still with us! :-)

laz@dev-vm0:~$ virsh nodeinfo
CPU model:   x86_64
CPU(s):  24
CPU frequency:   1500 MHz
CPU socket(s):   1
Core(s) per socket:  6
Thread(s) per core:  2
NUMA cell(s):2
Memory size: 198069904 KiB

laz@dev-vm0:~$ numactl -s
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 
cpubind: 0 1 
nodebind: 0 1 
membind: 0 1 

root@dev-vm0:/proc/sys# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):24
On-line CPU(s) list:   0-23
Thread(s) per core:2
Core(s) per socket:6
Socket(s): 2
NUMA node(s):  2
Vendor ID: GenuineIntel
CPU family:6
Model: 45
Stepping:  7
CPU MHz:   1200.000
BogoMIPS:  4002.28
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  15360K
NUMA node0 CPU(s): 0-5,12-17
NUMA node1 CPU(s): 6-11,18-23



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-01 Thread Laz Peterson
As I mentioned before, we did buy new server hardware to host the VMs.
This new hardware also has the same NUMA node issue.

I initially installed 14.04.2 on those new servers, then when the NUMA
issue was there, I thought what the hey and installed 15.04 just for
fun.  The problem was gone -- NUMA nodes were automatically generated
and maintained, though it put everything on node 0 and nothing on node 1
(not sure if that is by design?).

Anyhow, noticed a lot of issues with the new migration engine that
version uses, so I decided to downgrade back to 14.04 and just bite the
bullet manually setting my CPU configurations.

So whatever extra help you need from me, I am more than happy to spend
time doing any tasks you like.  Just let me know.  Thanks again Stefan!



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-07-01 Thread Laz Peterson
Yes, so I guess that is the big confusing question: why does virsh
nodeinfo show the right information when libvirt doesn't (or can't) use it?



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-10 Thread Laz Peterson
Ok back to 3.13.0-53-generic kernel, awaiting your command.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-10 Thread Laz Peterson
Here's the libvirtd.log.  Has same issue with NUMA.

** Attachment added: 20150610-libvirtd.log
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4412639/+files/20150610-libvirtd.log



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-09 Thread Laz Peterson
I will do whatever you think is the best for diagnosing the problem.

So you would like me to go to 12.04 then, yes?  Or keep at 14.04.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-09 Thread Laz Peterson
Ah, no prob.   Then I will go back to kernel 3.13 and we will go from
there.  Thanks Stefan!



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-08 Thread Laz Peterson
Yes I have a dedicated test environment strictly for this issue. :-)

Would you like me to prepare a 15.04 default install ready for your
updated images?

Much appreciate all of your help Stefan!



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-08 Thread Laz Peterson
I have tried two things.  Upgrading to kernel 3.19.8 and also disabling
Hyper-Threading.  Neither has any effect.

Here is attached libvirtd.log, initial part of log is with kernel 3.19.8
which I tried first.  Second part of log, which starts at 13:40:28 is
with HT disabled.

** Attachment added: 20150608-libvirtd.log
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4411566/+files/20150608-libvirtd.log



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-08 Thread Laz Peterson
Here is some more information attached.

Also, the CPU is actually Intel 6-core with HT so it appears as 12-core.
I can disable HT and see what it reports.  Also, running kernel
3.16.0-38-generic.

Back to another piece of interesting information, when I installed
Ubuntu 15.04 (just for fun to see if there was anything new that I might
want to take advantage of) all of these functions work just fine.  We
could possibly go down this route to find out which package or part of
the system allows libvirt to find this information properly.

Without going fully to 15.04 (as that defeats the purpose of LTS), what
would be a good package-by-package upstream path to try?  I can start
with kernel and work my way from there maybe?

Surprising to me why not many others have this type of issue.  The only
common factor I can tell (including other forum posts long since
forgotten) is Supermicro motherboard.

** Attachment added: cpuinfo.txt
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4411527/+files/cpuinfo.txt



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-05 Thread Laz Peterson
Here we go Stefan.  The log file is short and sweet -- hopefully it gets
you the information you are looking for!

** Attachment added: libvirtd.log
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4410572/+files/libvirtd.log



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-05 Thread Laz Peterson
A much more comprehensive log to show the changes in the log as it goes.
This might be better.

** Attachment added: libvirtd.log
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4410583/+files/libvirtd.log



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-06-03 Thread Laz Peterson
Yes, Stefan, I am in the datacenter right now getting the new servers
online.  In about a week or so, I will install your new binaries and
post results.

Thank you!



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-29 Thread Laz Peterson
Also, in the meantime (if my test server becomes available before the
test packages), I can install upstream libvirt/qemu packages to see if
the fix came from there or if the fix came from elsewhere.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-29 Thread Laz Peterson
I would be more than happy to test for you Stefan.  As long as it is
cookie cutter for a non-guru like myself, you just tell me what to do.

I will have a non-production server ready to rock in roughly a week from
now.  Thank you for all of your efforts!



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-28 Thread Laz Peterson
Here we go.

** Attachment added: "sysfscpu.txt"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4406334/+files/sysfscpu.txt



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-28 Thread Laz Peterson
Stefan if you would like to poke around, I have a server we are taking
out of production (also Supermicro) that has this issue as well. I can
provide you access at that time. Possibly 1 week from now.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-28 Thread Laz Peterson
We are moving our equipment to the datacenter tomorrow, so I won't
have much more input until after then.  But all of the links etc. are
where they are expected to be.  Not sure if the libvirt user can read
those, but I'm sure it can, since my standard user can.

Running a 'find . -name physical_package_id -exec cat {} \;' from
/sys/devices/system shows 0's and 1's.  So according to your formula
there, we should definitely be seeing 2 for my sockets, yet it shows
only 1.  Judging by the other data, that 1 is not derived from the max
value; it is simply being reported as a static number.
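For reference, the per-socket check discussed above can be sketched as
follows.  This is a hypothetical illustration only, not libvirt's
actual code; the function name and the max-plus-one assumption are
mine:

```python
# Sketch of deriving the socket count from sysfs, as discussed above.
# Hypothetical illustration -- not libvirt's actual implementation.
from pathlib import Path

def count_sockets(sysfs_cpu_dir="/sys/devices/system/cpu"):
    """Count CPU packages by reading each core's physical_package_id."""
    package_ids = set()
    pattern = "cpu[0-9]*/topology/physical_package_id"
    for entry in Path(sysfs_cpu_dir).glob(pattern):
        package_ids.add(int(entry.read_text().strip()))
    # "max id + 1" mirrors the formula mentioned in the thread; it
    # assumes package ids are contiguous and start at 0.
    return max(package_ids) + 1 if package_ids else 0
```

On a host whose sysfs shows package ids 0 and 1, this returns 2, which
is what nodeinfo should be reporting for sockets.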

I am not sure what the socket bitmap is, or anything else about it. :-)
Otherwise I might have had something to comment on that.

Also, one thing to keep in mind: all of this has been fixed as of
Ubuntu 15.04.  So something in the pipeline between the earlier
releases and 15.04 has allowed this to start working.  It is possibly
kernel related, or even in another package that libvirt depends on.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-28 Thread Laz Peterson
Hmm, I am having issues uploading files, as well as downloading
FliTTi's attachments to view.  Seems to be an issue with
launchpadlibrarian.net?

I can post as plain text if you like.  Or I will try uploading
sysfscpu.txt later on.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-26 Thread Laz Peterson
I have this issue as well.  This issue has persisted since many Ubuntu
versions ago, and has always made it extremely difficult to deal with
NUMA configuration.  Oddly enough, after testing Ubuntu 15.04 last
weekend, it seems the issue is gone.

All of the servers that we have affected by this have Supermicro
motherboards.  (We only have Supermicro, so I can't tell you otherwise.)

Disabling NUMA in BIOS shows the right socket information, however all
of the cores are listed in node 0, instead of being listed in their
separate node.  (Of course.)

Re-enabling NUMA in BIOS shows wrong socket information, but with proper
cores split between the right nodes.

Might be something directly related to the architecture of the
Supermicro motherboards.  Or possibly we can get confirmation that
this happens on another manufacturer's board?



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-26 Thread Laz Peterson
I might add, I am using Intel processors, not AMD.



[Bug 1446177] Re: Nodeinfo returns wrong NUMA topology / bad virtualization performance

2015-05-26 Thread Laz Peterson
Number of cells seems right, but number of sockets is definitely wrong.

OS: Ubuntu 14.04.2 LTS
Kernel: 3.16.0-38-generic
Most updated versions of all related packages as of May 26, 2015.

root@vm0:/media/scripts/vm# virsh capabilities
<capabilities>

  <host>
    <uuid>----0cc47a4c5e42</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='12' threads='2'/>
      <feature name='invpcid'/>
      <feature name='erms'/>
      <feature name='bmi2'/>
      <feature name='smep'/>
      <feature name='avx2'/>
      <feature name='bmi1'/>
      <feature name='fsgsbase'/>
      <feature name='abm'/>
      <feature name='pdpe1gb'/>
      <feature name='rdrand'/>
      <feature name='f16c'/>
      <feature name='osxsave'/>
      <feature name='movbe'/>
      <feature name='dca'/>
      <feature name='pcid'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='fma'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='smx'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    <power_management>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>131928440</memory>
          <cpus num='24'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0,24'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='1,25'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2,26'/>
            <cpu id='3' socket_id='0' core_id='3' siblings='3,27'/>
            <cpu id='4' socket_id='0' core_id='4' siblings='4,28'/>
            <cpu id='5' socket_id='0' core_id='5' siblings='5,29'/>
            <cpu id='6' socket_id='0' core_id='8' siblings='6,30'/>
            <cpu id='7' socket_id='0' core_id='9' siblings='7,31'/>
            <cpu id='8' socket_id='0' core_id='10' siblings='8,32'/>
            <cpu id='9' socket_id='0' core_id='11' siblings='9,33'/>
            <cpu id='10' socket_id='0' core_id='12' siblings='10,34'/>
            <cpu id='11' socket_id='0' core_id='13' siblings='11,35'/>
            <cpu id='24' socket_id='0' core_id='0' siblings='0,24'/>
            <cpu id='25' socket_id='0' core_id='1' siblings='1,25'/>
            <cpu id='26' socket_id='0' core_id='2' siblings='2,26'/>
            <cpu id='27' socket_id='0' core_id='3' siblings='3,27'/>
            <cpu id='28' socket_id='0' core_id='4' siblings='4,28'/>
            <cpu id='29' socket_id='0' core_id='5' siblings='5,29'/>
            <cpu id='30' socket_id='0' core_id='8' siblings='6,30'/>
            <cpu id='31' socket_id='0' core_id='9' siblings='7,31'/>
            <cpu id='32' socket_id='0' core_id='10' siblings='8,32'/>
            <cpu id='33' socket_id='0' core_id='11' siblings='9,33'/>
            <cpu id='34' socket_id='0' core_id='12' siblings='10,34'/>
            <cpu id='35' socket_id='0' core_id='13' siblings='11,35'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>132117356</memory>
          <cpus num='24'>
            <cpu id='12' socket_id='1' core_id='0' siblings='12,36'/>
            <cpu id='13' socket_id='1' core_id='1' siblings='13,37'/>
            <cpu id='14' socket_id='1' core_id='2' siblings='14,38'/>
            <cpu id='15' socket_id='1' core_id='3' siblings='15,39'/>
            <cpu id='16' socket_id='1' core_id='4' siblings='16,40'/>
            <cpu id='17' socket_id='1' core_id='5' siblings='17,41'/>
            <cpu id='18' socket_id='1' core_id='8' siblings='18,42'/>
            <cpu id='19' socket_id='1' core_id='9' siblings='19,43'/>
            <cpu id='20' socket_id='1' core_id='10' siblings='20,44'/>
            <cpu id='21' socket_id='1' core_id='11' siblings='21,45'/>
            <cpu id='22' socket_id='1' core_id='12' siblings='22,46'/>
            <cpu id='23' socket_id='1' core_id='13' siblings='23,47'/>
            <cpu id='36' socket_id='1' core_id='0' siblings='12,36'/>
            <cpu id='37' socket_id='1' core_id='1' siblings='13,37'/>
            <cpu id='38' socket_id='1' core_id='2' siblings='14,38'/>
            <cpu id='39' socket_id='1' core_id='3' siblings='15,39'/>
            <cpu id='40' socket_id='1' core_id='4' siblings='16,40'/>
            <cpu id='41' socket_id='1' core_id='5' siblings='17,41'/>
            <cpu id='42' socket_id='1' core_id='8' siblings='18,42'/>
            <cpu id='43' socket_id='1' core_id='9' siblings='19,43'/>
            <cpu id='44' socket_id='1' core_id='10' siblings='20,44'/>
            <cpu id='45' socket_id='1' core_id='11' siblings='21,45'/>
            <cpu id='46' socket_id='1' core_id='12' siblings='22,46'/>
            <cpu id='47' 
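The mismatch shown above (two NUMA cells but only one socket in the
topology summary) can be checked mechanically.  A minimal sketch, not
part of libvirt, with a helper name of my own choosing, that parses
`virsh capabilities` output:

```python
# Sketch: detect the sockets-vs-NUMA-cells mismatch from
# `virsh capabilities` output.  Hypothetical helper, not libvirt code.
import xml.etree.ElementTree as ET

def topology_mismatch(capabilities_xml: str):
    """Return (reported_sockets, numa_cells) from capabilities XML."""
    root = ET.fromstring(capabilities_xml)
    topo = root.find("./host/cpu/topology")
    sockets = int(topo.get("sockets"))
    cells = len(root.findall("./host/topology/cells/cell"))
    return sockets, cells
```

On the capabilities dump above this would return (1, 2): one socket
reported despite two NUMA cells, one per physical socket.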

[Bug 1458357] Re: migrate failed invalid cpu feature invtsc

2015-05-25 Thread Laz Peterson
I stand corrected ... this is something to be addressed within
virt-manager and not libvirt.  Using "Copy host CPU configuration"
adds +invtsc (among other things).  I mistakenly thought that option
simply does -cpu host.

My apologies!  Thank you and keep up the good work.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1458357

Title:
  migrate failed invalid cpu feature invtsc

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1458357/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1447916] Re: iscsitarget-dkms 1.4.20.3+svn499-0ubuntu2: iscsitarget kernel module failed to build

2015-05-25 Thread Laz Peterson
Can't argue with results, can you now?  With my fingers crossed, that's
good enough for me!

I like to stick with the Ubuntu LTS packages, instead of going upstream.
But I think I can now confidently move this into production. :-)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iscsitarget in Ubuntu.
https://bugs.launchpad.net/bugs/1447916

Title:
  iscsitarget-dkms 1.4.20.3+svn499-0ubuntu2: iscsitarget kernel module
  failed to build

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iscsitarget/+bug/1447916/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs




[Bug 1458357] [NEW] migrate failed invalid cpu feature invtsc

2015-05-24 Thread Laz Peterson
Public bug reported:

Description:Ubuntu 15.04
Release:15.04

libvirt-bin:
  Installed: 1.2.12-0ubuntu12
  Candidate: 1.2.12-0ubuntu13
  Version table:
 1.2.12-0ubuntu13 0
500 http://us.archive.ubuntu.com/ubuntu/ vivid-updates/main amd64 
Packages
 *** 1.2.12-0ubuntu12 0
500 http://us.archive.ubuntu.com/ubuntu/ vivid/main amd64 Packages
100 /var/lib/dpkg/status

Very similar to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1386503

I have only tried on Ubuntu 15.04 using SandyBridge CPU features.  I
actually have Haswell CPUs in the host servers, but the TSX issue is
preventing me from using that CPU feature set. (Would love to see that
fix come downstream!)

Live migration does not work.  Suspend does work.

Issue exists in both 1.2.12-0ubuntu12 and 1.2.12-0ubuntu13.
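A workaround that is not part of this report, assuming the invtsc
flag is indeed what blocks migration here: explicitly disable invtsc
in the guest's CPU definition, so libvirt no longer treats the guest
as migration-restricted.  A sketch (the helper name is mine) of making
that edit to the domain XML:

```python
# Sketch: disable the invtsc CPU flag in a libvirt domain XML so live
# migration is not blocked.  Hypothetical helper, not a libvirt API.
import xml.etree.ElementTree as ET

def disable_invtsc(domain_xml: str) -> str:
    """Ensure <cpu> carries <feature policy='disable' name='invtsc'/>."""
    root = ET.fromstring(domain_xml)
    cpu = root.find("cpu")
    if cpu is None:
        return domain_xml  # no explicit CPU model; nothing to do
    already = any(
        f.get("name") == "invtsc" and f.get("policy") == "disable"
        for f in cpu.findall("feature")
    )
    if not already:
        ET.SubElement(cpu, "feature", {"policy": "disable", "name": "invtsc"})
    return ET.tostring(root, encoding="unicode")
```

The edited definition would then be applied with virsh edit or virsh
define before retrying the migration.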

ProblemType: Bug
DistroRelease: Ubuntu 15.04
Package: libvirt-bin 1.2.12-0ubuntu12
ProcVersionSignature: Ubuntu 3.19.0-18.18-generic 3.19.6
Uname: Linux 3.19.0-18-generic x86_64
ApportVersion: 2.17.2-0ubuntu1.1
Architecture: amd64
Date: Sun May 24 10:31:55 2015
InstallationDate: Installed on 2015-05-24 (0 days ago)
InstallationMedia: Ubuntu-Server 15.04 Vivid Vervet - Release amd64 (20150422)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: libvirt
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.libvirt.qemu.networks.default.xml: [deleted]

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug vivid





[Bug 1447916] Re: iscsitarget-dkms 1.4.20.3+svn499-0ubuntu2: iscsitarget kernel module failed to build

2015-05-20 Thread Laz Peterson
This seems to affect 3.16 kernels as far as I can tell.

In the meantime, is there an option for installing and using
iscsitarget?  We have a new server going into the datacenter here in a
week or two, and would love to have something in place.

Our current iSCSI targets are running Ubuntu 14.04.1 (3.13.0-44-generic)
or Ubuntu 14.04.2 (3.13.0-49-generic) and there were no issues with
this.

Or is there a way that we can install Ubuntu 14.04.2 with kernel 3.13.0
instead of 3.16.0?





[Bug 1447916] Re: iscsitarget-dkms 1.4.20.3+svn499-0ubuntu2: iscsitarget kernel module failed to build

2015-05-20 Thread Laz Peterson
Daniel, that's a very interesting fix.  I'm not too familiar with any of
the kernel headers (or really any of the source, for that matter).  Is
the object you are referencing essentially the exact same thing,
carried over from kernel 3.13 to 3.16?

If this is so ... regarding the block I/O function, this would (or
SHOULD) have no impact whatsoever on normal functionality, would you
agree?

I only ask because if this is a simple reference to an obsolete object,
then I may consider sending this into production in a couple weeks.  Or
just hope that there is an official update released by then. :-)





[Bug 1447916] Re: iscsitarget-dkms 1.4.20.3+svn499-0ubuntu2: iscsitarget kernel module failed to build

2015-05-20 Thread Laz Peterson
Perfect!  I love the references, thank you so much.

I'm going to call this as good as it gets and roll the dice for
testing, and maybe even for production.

I will report if I come across any unexpected issues.

Thanks Wolff.





[Bug 1309768] [NEW] package slapd 2.4.28-1.1ubuntu4.4 failed to install/upgrade: ErrorMessage: subprocess new pre-installation script returned error exit status 1

2014-04-18 Thread Laz Peterson
Public bug reported:

This happened doing do-release-upgrade -d from 12.04.4 to 14.04.  I
believe it is from the fact that I am running multiple LDAP databases.
Don't have any other information other than that.

ProblemType: Package
DistroRelease: Ubuntu 14.04
Package: slapd 2.4.28-1.1ubuntu4.4
ProcVersionSignature: Ubuntu 3.11.0-19.33~precise1-generic 3.11.10.5
Uname: Linux 3.11.0-19-generic x86_64
ApportVersion: 2.0.1-0ubuntu17.6
Architecture: amd64
Date: Fri Apr 18 14:35:10 2014
ErrorMessage: ErrorMessage: subprocess new pre-installation script returned 
error exit status 1
InstallationMedia: Ubuntu-Server 12.04.4 LTS Precise Pangolin - Release amd64 
(20140204)
MarkForUpload: True
SourcePackage: openldap
Title: package slapd 2.4.28-1.1ubuntu4.4 failed to install/upgrade: 
ErrorMessage: subprocess new pre-installation script returned error exit status 
1
UpgradeStatus: Upgraded to trusty on 2014-04-18 (0 days ago)

** Affects: openldap (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package trusty

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openldap in Ubuntu.
https://bugs.launchpad.net/bugs/1309768

Title:
  package slapd 2.4.28-1.1ubuntu4.4 failed to install/upgrade:
  ErrorMessage: subprocess new pre-installation script returned error
  exit status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openldap/+bug/1309768/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs




[Bug 1235372] [NEW] managesieve skipping of CRLF at end of AUTHENTICATE command

2013-10-04 Thread Laz Peterson
Public bug reported:

Can we update the Ubuntu packages to include Stephan Bosch's update to
client-authenticate.c in managesieve-login?

http://hg.rename-it.nl/dovecot-2.1-pigeonhole/rev/32d178f5e1a2

Certain sieve clients have some issues with the login procedure, and are
unable to complete their tasks.  It would be nice to see this included
in a simple update -- it's getting a little old dealing with custom
packages.  Thanks.

Ubuntu: 13.04 (but affects many versions)
Package: dovecot-managesieved 1:2.1.7-7ubuntu1
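For context, the class of problem being patched (a command parser
tripped up by the CRLF terminator left over after the AUTHENTICATE
command) can be illustrated with a tolerant line parser.  This is a
sketch of the general idea only; the real fix is in pigeonhole's
client-authenticate.c, linked above, and the function below is
entirely hypothetical:

```python
# Illustrative sketch of CRLF-tolerant command-line handling; the
# actual fix lives in pigeonhole's client-authenticate.c, not here.
def parse_command_line(raw: bytes) -> str:
    """Strip a CRLF (or bare LF) terminator before parsing a command.

    If the terminator is left in the input buffer, the next read can
    misinterpret the leftover bytes as the start of a new command --
    the kind of failure mode some sieve clients hit during login.
    """
    if raw.endswith(b"\r\n"):
        raw = raw[:-2]
    elif raw.endswith(b"\n"):
        raw = raw[:-1]
    return raw.decode("utf-8")
```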

** Affects: dovecot (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: dovecot managesieve

** Patch added: "Diff patch by Stephan Bosch"
   
https://bugs.launchpad.net/bugs/1235372/+attachment/3859946/+files/client-authenticate.c

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to dovecot in Ubuntu.
https://bugs.launchpad.net/bugs/1235372

Title:
  managesieve skipping of CRLF at end of AUTHENTICATE command

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dovecot/+bug/1235372/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs

