** Changed in: juju
Importance: Undecided => High
** Changed in: juju
Status: New => In Progress
** Changed in: juju
Milestone: None => 2.9.16
** Changed in: juju
Assignee: (unassigned) => Joseph Phillips (manadart)
--
That particular error is coming from here:
https://github.com/juju/juju/blob/19d6a75eea61f5021a9fa0ee8e37fe7f6e3c9d53/charmhub/download.go#L201
Given what it is downloading from, I believe it is coming from:
From everything I can tell this was a temporary outage (as you can again
deploy from postgresql-k8s). However, it is something we should be aware
of in case it comes back and we have the opportunity to reproduce.
** Changed in: juju
Status: New => Incomplete
--
I don't believe Juju expects public addresses for all machines. It
*does* expect a public address for the controller, because you need
external access to be able to connect for things like "juju status" from
your machine.
I don't know how you would have been able to bootstrap and 'ssh' into
the
** Changed in: juju
Status: New => Triaged
** Changed in: juju
Importance: Undecided => Low
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1777512
Title:
key retrieval timeouts cause
We've also been seeing keyserver request failures during our CI/Build
process
** Also affects: juju
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1777512
** Changed in: juju
Assignee: (unassigned) => John A Meinel (jameinel)
** Changed in: juju
Assignee: John A Meinel (jameinel) => Witold Krecicki (wpk)
** Changed in: juju
Status: Triaged => In Progress
--
We should also check into https://bugs.launchpad.net/juju/+bug/1751739
and see if there is a consistent issue here.
--
https://bugs.launchpad.net/bugs/1756040
Title:
bionic: LXD
Likely we should remove juju-mongodb3.2 from bionic once we actually have an
alternative and can trust that it actually works. It is my understanding that
3.2 FTBFS and nobody was willing to maintain it.
Given the option, yes we would prefer a 3.4 without the JS engine. My
understanding is that
** Also affects: juju/2.3
Importance: Undecided
Status: New
** Changed in: juju/2.3
Status: New => Triaged
** Changed in: juju/2.3
Importance: Undecided => High
** Changed in: juju/2.3
Milestone: None => 2.3.6
** Changed in: juju
Status: New => Triaged
**
the containers that are bionic?
John
=:->
On Mar 15, 2018 15:45, "John Meinel" <j...@arbash-meinel.com> wrote:
> I'll note that I tried just launching a bionic container using snap
> 3.0.0.beta5 and after "lxc launch ubuntu:x" and "lxc launch ubuntu-daily:b"
> n
I'll note that I tried just launching a bionic container using snap
3.0.0.beta5 and after "lxc launch ubuntu:x" and "lxc launch ubuntu-daily:b"
neither of them came up with an IP address.
I might have broken my networking on this machine because I was trying to
install stock Juju which tries to
It would be good to have a clearer discussion of what issues you are
running into with routes. There are several ways that we *could* tackle the
issue. Static Routes was the mechanism that we started modeling because
that was the ask from the field (because, as-I-understand, that was the
solution
Note that any time you actively need to use custom userdata is probably a
time where Juju is failing to model something important. So while it is an
outlet to getting something that works, it should still be a bug that we
don't support the underlying use case correctly.
John
=:->
On Sun, Dec
I just ran into this doing a dist upgrade from 14.04 to 16.04.3. Now,
the initial do-release-upgrade failed mid way, because of something
wrong with Postgres. (I can't remember the exact details now, but it
started popping up a dialog about postgres, which had a 'close' button,
but clicking close
If you are installing snaps in an LXD container, you also need to make sure
you have squashfuse installed (and possibly also 'fuse' as there may be a
problem with the dependencies).
They were looking to add those into the default images (since snapd is also
there), but it's possible your image is
** Also affects: snapd (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1705988
Title:
snap install --classic juju fails
*** This bug is a duplicate of bug 1628289 ***
https://bugs.launchpad.net/bugs/1628289
** This bug has been marked a duplicate of bug 1628289
snapd should depend on squashfuse (for use in containers)
--
Juju itself isn't ever installing snapd (it is only ever
there-by-default or driven by the charm), so it doesn't feel like our direct
responsibility.
It may be that we need to work around the limitation of snapd
packaging.
On Tue, May 23, 2017 at 6:08 PM, John Meinel <j...@arbash-meinel.com&
*** This bug is a duplicate of bug 1660273 ***
https://bugs.launchpad.net/bugs/1660273
Agreed
** This bug has been marked a duplicate of bug 1660273
/etc/environment does not include /snap/bin in $PATH
--
I'm not sure that 'snapd' is the right target for this, but not having
'/snap/bin' on your PATH inside systemd launched scripts seems to be a
snapd packaging issue.
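For illustration, an /etc/environment entry that does include /snap/bin would look like the following (the rest of the line is assumed to be the stock Ubuntu default PATH; this is a sketch, not quoted from any fix):

```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
```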
** Changed in: juju
Importance: Undecided => Wishlist
** Also affects: snapd (Ubuntu)
Importance: Undecided
Status:
charm), it doesn't feel like our direct
responsibility.
It may be that we need to work around the fact that snapd itself isn't
doing the right thing.
John
=:->
On Tue, May 23, 2017 at 6:08 PM, John Meinel <j...@arbash-meinel.com> wrote:
> https://www.reddit.com/r/linuxquestions/com
I thought with juju 2.1+ we no longer configure storage for LXD, which
means we would no longer conflict.
I suppose as we introduce support for more directly controlling storage
for lxd resources, we'll want to be using the new APIs and carving out
our own pools.
Can we confirm if 2.1+ still
Should I mark comment #16 as "hidden" so it doesn't give people the
wrong idea?
--
https://bugs.launchpad.net/bugs/1516989
Title:
juju status broken
So Juju itself could accept apt_security_mirror as the configuration key
and map that to whatever is necessary for the series we are deploying.
John
=:->
On Mar 8, 2017 20:19, "Seyeong Kim" wrote:
> I agree with jameinel,
>
> supporting kind of apt_security_mirror is
It feels like overriding security.ubuntu.com should be a separate setting,
since it is security sensitive, vs having the normal 'apt_mirror' override
both settings always. Otherwise why would cloud-init itself have 2
different settings?
Maybe 'apt_security_mirror'? I'm not sure on the specific
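As a sketch of the mapping idea discussed above, a single model-config key could be translated into cloud-init's separate primary/security apt mirror stanzas. The key name 'apt_security_mirror' and this helper are assumptions for illustration, not Juju's actual implementation:

```python
# Hypothetical sketch: mapping one Juju config key per mirror onto the two
# cloud-init apt mirror settings. Not real Juju code.

def cloud_init_apt_config(apt_mirror=None, apt_security_mirror=None):
    """Build cloud-init's 'apt' stanza with separate primary/security mirrors."""
    apt = {}
    if apt_mirror:
        # cloud-init's 'primary' entry covers the archive.ubuntu.com mirrors.
        apt["primary"] = [{"arches": ["default"], "uri": apt_mirror}]
    if apt_security_mirror:
        # Only override security.ubuntu.com when explicitly asked to, since
        # it is security sensitive; otherwise keep cloud-init's default.
        apt["security"] = [{"arches": ["default"], "uri": apt_security_mirror}]
    return {"apt": apt}

example = cloud_init_apt_config(apt_mirror="http://mirror.example.com/ubuntu")
# 'security' stays untouched unless apt_security_mirror is given.
assert "security" not in example["apt"]
```

This keeps the normal 'apt_mirror' override from silently redirecting security updates, which is the concern raised above.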
As mentioned by Stephane, this is intentional, and trying to follow
the "spec" seems like it will likely cause more problems than doing what
we're currently doing.
** Changed in: juju
Status: Triaged => Won't Fix
--
see also bug #1657850 that we probably shouldn't just be using the 'next
available 10.0.x' address. (This was the original algorithm used by LXD
when we added support, but they have changed their algorithm since
then.)
** Package changed: juju (Ubuntu) => juju
** Changed in: juju
Status:
I'll note that 'peer_store' isn't safe to directly call, but looking at the
traceback of the original description it is line ~217 of
hooks/amqp-relation-changed which looks to be:
# If this node is the elected leader then share our secret with other nodes
if
Sorry, I'm on crack, I missed line 70/71 which is exactly the 'check if I'm
leader first':
    if not is_leader():
        return _leader_get(attribute=attribute)
Forgive my earlier rambling. I missed that line and then dug all over to see if
it was trapped outside of that function.
--
I haven't dug particularly deeply. However if I do
charm pull ceilometer
I get: cs:ceilometer-24
And then dig into the contents of:
charmhelpers/contrib/peerstorage/__init__.py
I see that it has a function:
def leader_get():
which looks like it is supposed to be a compatibility function, so
** Project changed: lxd => lxd (Ubuntu)
--
https://bugs.launchpad.net/bugs/1660542
Title:
container mac addresses should use 'locally assigned' section
The actual promise from Juju is that from the time you call 'is-leader'
and get a True value, that you will have 30s before we would possibly
return True to any other unit.
Internally the mechanism is that we obtain a lease (valid for 1 minute)
and attempt to renew that lease every 30s (so the
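The lease arithmetic described above can be sketched as follows. This is a minimal model of the invariant (a 60-second lease renewed every 30 seconds always has at least 30 seconds of validity left), not Juju's real leadership code:

```python
# Minimal sketch of the lease invariant, not Juju's implementation.
LEASE_DURATION = 60   # seconds a granted lease remains valid
RENEW_INTERVAL = 30   # seconds between renewal attempts

class LeaseTracker:
    def __init__(self):
        self.holder = None
        self.expiry = 0.0

    def claim(self, unit, now):
        """Grant or renew the lease; refuse other claimants until expiry."""
        if self.holder in (None, unit) or now >= self.expiry:
            self.holder = unit
            self.expiry = now + LEASE_DURATION
            return True
        return False

    def is_leader(self, unit, now):
        # True only while the unit holds an unexpired lease. Renewing every
        # RENEW_INTERVAL keeps (expiry - now) >= LEASE_DURATION - RENEW_INTERVAL,
        # which is where the 30-second promise comes from.
        return self.holder == unit and now < self.expiry

tracker = LeaseTracker()
assert tracker.claim("app/0", now=0)
assert not tracker.claim("app/1", now=10)   # rival refused mid-lease
assert tracker.claim("app/0", now=30)       # holder renews at 30s
assert tracker.is_leader("app/0", now=59)
assert tracker.claim("app/1", now=100)      # lease lapsed, rival may win
```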
I'm removing the direct remote watch as they said it is an Ubuntu bug.
** Changed in: vim
Status: New => Confirmed
--
https://bugs.launchpad.net/bugs/1611363
Title:
vim.gtk3
Given the Assignee: auto-github-vim-vim #851
I think what happened is that someone noticed this is also the bug in github.
However, if you read the bug in github, they closed *that* bug because they
said it was an Ubuntu bug. So I think there is no Fix Released, as upstream
said it was an
with
fs.inotify.max_user_instances = 2048
it didn't fail until the 187th container.
It did fail with a new message that I haven't seen:
[3831] x-187:error: Error calling 'lxd forkstart x-187 /var/lib/lxd/containers
/var/log/lxd/x-187/lxc.conf': err='exit status 1'
lxc 20161007100331.798
I added a "cat /proc/meminfo | grep Slab" to go-lxc-run.sh and found
this:
$ sysctl fs.inotify
fs.inotify.max_queued_events = 65536
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 524288
$ ulimit -a
...
open files (-n) 1048576
...
$ go-lxc-run.sh
[0]
After rebooting with the new values, I did manage to get to launch a lot
more containers, failing at only the 232nd one.
I wonder if the issue is not the User number of open files, but the Root
number of open files, which requires a reboot to get updated.
I'll try to play around more to really
Interestingly, my baseline Kernel memory with no containers (and not
much other software) was about the same (~400MB). I'm not entirely sure
why it grew faster with the new settings, but didn't affect the
baseline.
--
I did try setting all of the items that are mentioned in
production-setup.md. To start with, a few of them are not reasonable.
max_user_instances defaults to 128, and we were able to see a difference
at 256, but not at 1024. Setting it to 1M seems silly.
I'll also note that my Kernel memory
Note that there should be support for /etc/security/limits.d/10-juju.conf
I'm testing it now, but it may be that we can drop something in there as
well. I'll test it a bit, but if we have some tasteful defaults, maybe
we can make it work.
I think we can change from their default so instead of "*
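For reference, the kinds of drop-ins being discussed might look like this. The values are illustrative, echoing numbers from the experiments in this thread, not tested recommendations:

```
# /etc/security/limits.d/10-juju.conf (illustrative)
*    soft    nofile    1048576
*    hard    nofile    1048576

# /etc/sysctl.d/10-juju.conf (illustrative)
fs.inotify.max_user_instances = 1024
fs.inotify.max_queued_events = 65536
fs.inotify.max_user_watches = 524288
```

The sysctl settings apply with `sysctl --system`; the limits take effect on the next login (or reboot).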
With Juju in the loop, I run into whatever limit a bit faster. I was
successful at doing:
juju bootstrap test-lxd lxd
juju deploy ubuntu
juju add-unit -n 5 ubuntu # wait for status to say everything is running
juju add-unit -n 5 ubuntu # wait for status to be happy
but then after doing one
5) Another data point, with
$ sysctl fs.inotify
fs.inotify.max_queued_events = 131072
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 524288
(so max_queued_events 8x greater, and max_user_instances well above
previously established useful level), I still only get 19 containers.
Michael and I played around with some different settings, and here are
my notes.
1) Package kde-runtime seems to install
/etc/sysctl.d/30-baloo-inotify-limits.conf which sets max_user_watches to
512*1024
'slabtop' says that my baseline kernel memory is 380-420MB with no containers
Is this a case where we are successfully sending data to squid at a reasonable
but slow rate, such that it takes more than 5 minutes to upload the whole
content, and squid doesn't send anything to Apache until all of it has been
transferred?
It sounds like we'll have the same problem with
I tried stripping my 1.6 based build, and it seems to be working. (Juju
does reflection at init() time as part of some of the registries), so it
seems safe to do. On the flip side it isn't amazingly better. It seems
to be a bit less than 2:1. (73MB down to 40MB.) Almost certainly still
worth it,
Note that in a customer site with HA=3 and ~100 machines we are seeing syslog
grow to around 3.6GB before it gets rotated (presumably daily?). Which means
that we end up consuming 7.2GB just before the second rotation, and most of it
is going to be this fairly useless logging output.
I believe
I wonder if this is just saturating the transaction log. The txn log is
a capped collection, so it only allows N entries before it starts
overwriting earlier ones. I can imagine there are other issues going on,
but we might want a knob that lets you say I'm going to be running
1000s of things, make
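The capped-collection behavior described above can be illustrated with a small sketch. Python's deque with a maxlen behaves like a capped collection for this purpose; this is an analogy, not Juju's mongo code:

```python
# Illustration of why a capped transaction log loses history under load:
# once the collection reaches its cap, each new entry evicts the oldest.
from collections import deque

TXN_LOG_CAP = 5  # tiny cap for illustration; real caps are far larger

txn_log = deque(maxlen=TXN_LOG_CAP)  # drops oldest entries, like a capped collection
for txn_id in range(12):
    txn_log.append(txn_id)

print(list(txn_log))  # -> [7, 8, 9, 10, 11]: only the newest 5 survive
```

If watchers haven't caught up before their entries are overwritten, they miss events, which is consistent with the saturation theory above.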
I actually don't think using mongo is the correct fix here. We already
have all the code we need inside of juju-core, we shouldn't depend on a
separate client library in order to write stuff to the database.
We *definitely* don't want mongodb-clients as a dependency for juju-core
(juju-core is
** Changed in: juju-core/1.18
Assignee: John A Meinel (jameinel) => Ian Booth (wallyworld)
--
https://bugs.launchpad.net/bugs/1306537
Title:
LXC local provider fails to provision
That sounds like a regression, we certainly need to be able to fall back to
the old method of connecting if the server doesn't support proxying the
request.
John
=:-
On Tue, May 6, 2014 at 5:12 PM, Curtis Hovey cur...@canonical.com
wrote:
This issue might be that the 1.19.1 client cannot ssh
I believe this is more about the local provider, which doesn't actually
look for tools on streams.canonical.com
I don't quite understand why we don't recognize 'utopic', though, as we
*should* be reading /usr/share/distro-info/ubuntu.csv to find what series
are available.
Certainly you can see
James, this may be a special case for juju-mongodb, but if you uninstall a
package, doesn't it usually stop the service that was running?
(uninstalling postgres should stop the postgres process, right?)
I guess in the case of juju-mongodb we have the problem that the packaging
itself isn't
So this sounds like we are putting all the machine IDs into a single URL
and we end up running out of URL space (quick google says that the default
max length is 2000 characters).
It sounds like we either need to send the request in batches, or POST the
IDs rather than put them in the URL itself.
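The batching half of that suggestion could look roughly like this. It is a sketch: `fetch`, the batch size, and `fetch_machines` are hypothetical names, not Juju's API:

```python
# Sketch of requesting machine IDs in fixed-size chunks so each URL stays
# well under the common ~2000-character limit, instead of one giant URL.

MAX_IDS_PER_REQUEST = 50  # assumed batch size

def batched(ids, size=MAX_IDS_PER_REQUEST):
    """Yield fixed-size slices of ids, preserving order."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def fetch_machines(ids, fetch):
    """Call fetch() once per batch and merge the results."""
    results = []
    for batch in batched(ids):
        results.extend(fetch(batch))
    return results

machine_ids = [str(n) for n in range(120)]
calls = []

def fake_fetch(batch):
    # Stand-in for the real HTTP request; records each batch it was given.
    calls.append(batch)
    return list(batch)

out = fetch_machines(machine_ids, fetch=fake_fetch)
print(len(calls), len(out))  # -> 3 120 (batches of 50, 50, 20)
```

The POST alternative avoids the URL limit entirely by moving the IDs into the request body; batching has the added benefit of bounding each response's size.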
** Changed in: juju-core
Status: New => Triaged
** Changed in: juju-core
Importance: Undecided => High
** Changed in: juju-core
Milestone: None => 1.19.2
** Changed in: juju-core
Importance: High => Critical
--
** Changed in: juju-core
Milestone: 1.19.1 => None
--
https://bugs.launchpad.net/bugs/1305280
Title:
juju command get_cgroup fails when creating new machines, local
provider arm32
** Changed in: juju-core/1.18
Status: Triaged => In Progress
** Changed in: juju-core/1.18
Assignee: (unassigned) => Nate Finch (natefinch)
** Changed in: juju-core
Status: Triaged => In Progress
--
** Also affects: juju-mongodb (Ubuntu)
Importance: Undecided
Status: New
** Changed in: juju-core
Status: Triaged => Invalid
** Changed in: juju-core
Milestone: 1.19.1 => None
--
** Summary changed:
- juju scp no longer allows multiple extra arguments to pass throug
+ juju scp no longer allows multiple extra arguments to pass through
--
** Changed in: juju-core/1.18
Importance: High => Critical
** Changed in: juju-core
Importance: High => Critical
--
https://bugs.launchpad.net/bugs/1304407
Title:
juju bootstrap
I would target this to a 1.18.2 if it existed.
This is a change in behavior from 1.16 to 1.18. If you just do:
juju-1.18 bootstrap -e amazon
I end up getting an i386 target. I have to do:
juju-1.18 bootstrap -e amazon --constraints=arch=amd64
for it to pick an amd64 (which was always the default
** Changed in: juju-core
Milestone: 1.19.0 => None
** Changed in: juju-core
Assignee: Nate Finch (natefinch) => (unassigned)
--
https://bugs.launchpad.net/bugs/1208430
Title:
...
On Mon, Mar 31, 2014 at 01:30:45PM -, Mark Ramm wrote:
I think a key point here is that the juju package does not generally
install or pull down binaries from anywhere to your machine. It does
instruct the cloud installation of a server to use a specific ubuntu
image from
Actually, because juju-local only supports one architecture (your local
machine), it does *not* download the jujud tools from a remote site, but
uses the one on your local machine. (It should be put into the
juju-local package, rather than being in the 'juju-core' package, but that is
just
Public bug reported:
When installing the juju-local package you should also get the cpu-checker
package so that we can probe for KVM support.
We could make it a Recommends instead of a Depends (because if it isn't there,
we just assume no KVM support), but it seems useful to have around.
This
FWIW, I do see this occasionally, but I'm guessing my ISP is doing transparent
proxy caching. However,
sudo apt-get update -o Acquire::http::No-Cache=true
did fix it for me.
--
** Changed in: juju-core
Milestone: 1.18.0 => 1.17.6
--
https://bugs.launchpad.net/bugs/1273769
Title:
ppc64el enablement for juju/lxc
** Summary changed:
- juju deploy fails against juju-core 1.17.3 environment with 1.17.2 client
+ juju 1.17.2 client doesn't like juju 1.17.3 .jenv files
--
The specific bug doesn't have to do with streams or any such thing. As
Andrew mentioned, it is just that we changed the default values for an
item in the environment configuration, and 1.17.2 didn't like the
content being the empty string.
The way to fix that is to not use 1.17.2. So I think
Marked Won't Fix because it is technically a bug in 1.17.2's
interpretation of a newer config, 1.16 didn't have that field so it
isn't a stable release compatibility problem.
** Changed in: juju-core
Status: Triaged => Won't Fix
--
I'm pretty sure the arch changes haven't landed yet, right? (mapping the
values of uname -m to the names we'll be using for the juju
packages/tarballs)
On Mon, Feb 17, 2014 at 10:02 PM, James Page james.p...@ubuntu.com
wrote:
** Changed in: juju-core (Ubuntu)
Status: Confirmed =>
1.16 needs to be merged into trunk, which Martin P is working on today
John
=:-
On Feb 19, 2014 6:41 PM, Curtis Hovey cur...@canonical.com wrote:
Hi John. Is the status of this bug fix committed in trunk? Does the
branch need to merge into trunk?
--
offhand, I would think we'd want to use the term arm64 for this (to
match arm), but I don't have a strong stake in what we call it.
--
https://bugs.launchpad.net/bugs/1276909
Title:
I don't have direct ways to test the patch, but it seems sane to me. Is
there something we can get into some form of testing to make sure that
we don't break this in the future? (CI testing for aarch64?)
** Changed in: juju-core
Importance: Undecided => High
** Changed in: juju-core
Given the statements in bug #1276909 this seems to be a small patch so
that we have a regex to recognize the architecture.
It seems the new patch would be something like:
--- juju-core-1.17.2.orig/src/launchpad.net/juju-core/environs/manual/init.go
+++