Note these are freshly bootstrapped clouds, as per an IRC conversation
with alexisb and anastasiamac_.
I took a working juju environment deploying to canonistack, just changed the
default series, did a juju bootstrap and then a juju
deploy local:xenial/ubuntu --to lxc:0, and got an error.
** Tags added: canonical-bootstack
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1557345
Title:
xenial juju 1.25.3 unable to deploy to lxc containers
To manage notifications about this bug go to:
Public bug reported:
There appears to be some issue with deploying to lxc containers using
juju 1.25.3 on Xenial.
When deploying with xenial to canonistack-lcy02:
bradm@serenity:~/src/juju$ juju deploy local:xenial/ubuntu --to lxc:0
Added charm "local:xenial/ubuntu-2" to the environment.
ERROR
Public bug reported:
The current rsyslogd configuration as provided by the rsyslog package
causes double logging to occur.
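The report doesn't include the offending configuration, but overlapping selectors that write to the same file are a typical way rsyslog ends up logging each message twice. A purely hypothetical fragment to illustrate (the selectors and target file here are my own, not taken from the report):

```
# Hypothetical /etc/rsyslog.d fragment: both selectors match messages
# from the daemon facility and write them to the same file, so every
# such message appears in /var/log/syslog twice.
*.*;auth,authpriv.none    -/var/log/syslog
daemon.*                  -/var/log/syslog
```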
Steps to Reproduce:
1) Install haproxy via whatever normal means (apt-get etc)
2) Configure it to listen on at least one port, even just the stats port
3) Visit the URL
This does indeed appear to work correctly; I've deployed a container
using juju:
ubuntu@apollo:~$ dpkg-query -W lxc
lxc 1.0.8-0ubuntu0.3
ubuntu@apollo:~$ sudo lxc-ls --fancy
NAME  STATE  IPV4  IPV6  AUTOSTART
FWIW, and a totally expected result, I just downgraded the LXC packages
on these hosts and redeployed, and things came up ok.
$ dpkg-query -W lxc
lxc 1.0.7-0ubuntu0.10
I don't think this changes anything, but just putting it here for
completeness.
Public bug reported:
I've just tried using juju to deploy to a container with trusty-proposed
repo enabled, and I get an error message about 'failed to retrieve the
template to clone'. The underlying error appears to be:
tar --numeric-owner -xpJf
Public bug reported:
Issue
---
When nagios3 is configured to have livestatus from check-mk-livestatus as a
broker module, and checks have a downtime applied to them, it will crash when
the logs rotate. This shows up in /var/log/nagios3/nagios.log as:
[1445238000] Caught SIGSEGV,
** Tags added: canonical-bootstack
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to curtin in Ubuntu.
https://bugs.launchpad.net/bugs/1356392
Title:
lacks sw raid1 install support
To manage notifications about this bug go to:
Public bug reported:
/etc/dhcp/dhclient-exit-hooks.d/ntp doesn't check whether /etc/ntp.conf has
been updated since the last time dhclient ran. A simple addition of a
check to see whether /etc/ntp.conf is newer than /var/lib/ntp/ntp.conf.dhcp,
and if so letting it add the servers, would be sufficient.
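A minimal sketch of that guard, assuming the hook regenerates /var/lib/ntp/ntp.conf.dhcp when DHCP supplies NTP servers (the helper name ntp_conf_needs_refresh is mine, not taken from the actual exit hook):

```shell
# Sketch only: decide whether the dhclient exit hook should re-merge the
# DHCP-supplied servers. Refresh when the cached merged copy is missing,
# or when the admin has edited /etc/ntp.conf since the cache was written.
ntp_conf_needs_refresh() {
    conf=${1:-/etc/ntp.conf}
    cache=${2:-/var/lib/ntp/ntp.conf.dhcp}
    [ ! -e "$cache" ] || [ "$conf" -nt "$cache" ]
}
```

The hook would call this before rewriting the cache; when it returns false, /etc/ntp.conf hasn't changed since the last merge and nothing needs rewriting.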
Public bug reported:
When using ssh and managing ssh port forwards with ~C to remove a
forward that doesn't exist, the following occurs:
user@host:~$
ssh> -KD12345
Unkown port forwarding.
i.e., the misspelling of the word Unknown as 'Unkown'.
This occurs at least on a server running on
: nrpe-external-master
scope: container
I've got a branch at lp:~brad-marshall/charms/trusty/cinder/add-n-e-m-interface
with the change in it.
** Affects: swift (Ubuntu)
Importance: Undecided
Status: New
** Affects: ceilometer (Juju Charms Collection)
Importance: Undecided
Public bug reported:
We appear to have a performance regression with puppet 2.7.11-1ubuntu2.4
that we recently upgraded to, particularly on our more heavily loaded
puppet master. When we're running the 2.4 revision, many of our puppet
clients get the following:
err: Could not retrieve catalog from remote
Public bug reported:
If you are using a bare word dns domain (.test for example), facter fqdn
returns the incorrect information. Since there's no . in the domain,
the checks fall back to parsing /etc/resolv.conf, which may not be
correct.
$ hostname
eagle
$ dnsdomainname
test
$ facter fqdn
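The fallback described above can be sketched in shell (this mimics the behaviour as reported, not facter's actual Ruby code, and guess_fqdn is a hypothetical name): when the hostname has no dot, the domain is taken from the first domain/search line of resolv.conf, which can disagree with dnsdomainname for a bare-word domain like .test.

```shell
# Sketch of the reported fallback: a dotted hostname is trusted as-is;
# otherwise the domain is parsed out of resolv.conf, which may not match
# what dnsdomainname reports for a bare-word domain.
guess_fqdn() {
    host=$1
    resolv=$2
    case $host in
        *.*) echo "$host" ;;
        *)
            dom=$(awk '/^(domain|search)/ { print $2; exit }' "$resolv")
            echo "$host${dom:+.$dom}"
            ;;
    esac
}
```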
This appears to have been fixed after upgrading to Essex, so we can
close off this bug.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/973953
Title:
euca-describe-instance returns
Public bug reported:
euca-describe-instances has been working fine up until recently. We now
see:
$ euca-describe-instances
VolumeNotFound: Volume vol-0019 could not be found.
The logs on the nova-api server are as follows:
2012-04-05 03:14:05 DEBUG nova.auth.manager [-] Looking up user:
I can confirm this is still happening on lucid (10.04.3) with the
following apache versions:
$ dpkg --list | grep apache
ii  apache2             2.2.14-5ubuntu8.6  Apache HTTP Server metapackage
ii  apache2-mpm-worker