This deserves a ton of fanfare. Let's celebrate this win by circulating
this like crazy.
I've already retweeted this evening and plan on following up again tomorrow
during normal business hours. Great work, Stokes, on completing this
herculean task. The ~containers team appreciates the effort.
Greetings Kubernauts and charm aficionados alike.
We've been cycling so hard we actually forgot to post release notes for the
last release of the Kubernetes bundles. I apologize for that; it's been a
busy couple of weeks with KubeCon, a lot of end-user engagements in IRC, and
forward-looking work.
Just pulled in changes to support deploying The Canonical Distribution of
Kubernetes on the localhost cloud type.
I've blogged about it here:
http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
Please give it a shot, deploy some workloads on it, and let us know how it
goes.
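If you want to try it without reading the full post, the short version is
roughly this (a minimal sketch, assuming conjure-up and LXD are already
installed and LXD has been initialised with 'lxd init'):

    # deploy the Canonical Distribution of Kubernetes on the localhost cloud
    conjure-up canonical-kubernetes localhost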
Having a difficult time understanding why my additional units aren't
respecting my spaces constraint...
I have created a subnet in each AZ available in my region, see here ->
http://paste.ubuntu.com/23492849/
When I initially deployed my bundle, it seemed the units deployed to the
correct
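For context, the shape of what I'm doing looks roughly like this ('mysql'
and 'db-space' are placeholder names standing in for my actual application
and space):

    juju deploy mysql --constraints spaces=db-space
    juju add-unit -n 2 mysql     # these extra units land outside db-space
    juju get-constraints mysql   # still reports spaces=db-space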
Is there a specific reason the barbican charm doesn't have the
os-{internal,private}-hostname config params?
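For reference, other OpenStack charms expose os-public-hostname,
os-internal-hostname, and os-admin-hostname, along these lines:

    # sketch; 'juju config' is 'juju set-config' on early Juju 2.0
    # releases, and the hostname value here is a placeholder
    juju config keystone os-public-hostname=keystone.example.com

So the question is why barbican lacks the equivalent options.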
Thanks
Greetings, all. Kevin, Konstantinos, Pete, and I reviewed some charms in
the new review queue this week and last week; I forgot to send out an email
for last week's reviews.
Nov 17, 2016: Cory, Konstantinos, Kevin
- IBM HTTP Server
  - https://bugs.launchpad.net/charms/+bug/1612535
Thanks for this! Alongside setting os-*-hostname to the public IP address,
I had to log in to the AWS console and manually open up port 35357, as Juju
doesn't expose 35357 (by design, I'm assuming).
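The console step can also be scripted; a hedged sketch, where sg-12345678
is a placeholder for the instance's actual security group:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-12345678 \
        --protocol tcp --port 35357 --cidr 0.0.0.0/0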
Yeah, I had the same situation with Juju 2.0 on the AWS provider! (All is
OK with Juju 1.25.)
Resources are also stored in mongo and can be unlimited in size (not much
different than fat charms, except that at least they're only pulled down on
demand).
We should let admins configure their max log size... our defaults may not
be what they like, but I bet that's not really the issue, since
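For reference, a sketch of what such a knob could look like; max-logs-size
and max-logs-age exist as controller-config keys in later Juju releases, so
treat the exact names as an assumption for your version:

    # hedged: check 'juju controller-config' for the keys your release supports
    juju controller-config max-logs-size=4G max-logs-age=72h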
On 17 November 2016 at 12:12, Stuart Bishop wrote:
> On 17 November 2016 at 02:34, roger peppe wrote:
>>
>> +1 to using blocking flock. Polling is a bad idea with a heavily contended
>> lock.
>>
>> FWIW I still think that mutexing all unit
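To make the tradeoff concrete with flock(1) from util-linux ('run-hook' and
the lock path are placeholders, not Juju's actual implementation):

    # blocking acquire: the process sleeps in the kernel until the lock is
    # free, so heavy contention never turns into busy-waiting
    flock /tmp/uniter.lock -c run-hook

    # polling acquire: -n fails immediately while the lock is held, which
    # pushes every caller into a retry loop
    flock -n /tmp/uniter.lock -c run-hook || echo "lock busy, retry later"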
I definitely agree we need to provide some better tools for the admin
of the controller to track and garden things such as charms and resources,
which can be quite large and grow over time.
My main point with Uros was that we have it this way due to model migrations
to stick a controller/model
FWIW this is being tracked in https://bugs.launchpad.net/juju/+bug/1642541
On Thu, 17 Nov 2016 at 04:17 Nate Finch wrote:
> Just for historical reference. The original implementation of the new OS
> mutex used flock until Dave mentioned that it presented problems
So logs in mongo and logs on disk should be capped, and purged when they
get above a certain size. 'audit.log' should never be automatically purged.
Charms in the blobstore are potentially local data that we can't reproduce,
so it's hard to automatically purge them. I think there has been some work
On 17/11/16 08:20, Uros Jovanovic wrote:
> Hi all,
>
> I'd like to start a discussion on how to handle storage issues on
> controller machines, especially what we can do when storage is getting 95%
> or even 98% full. There are many processes that are storing data, we have
> at least:
> - charms
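For anyone who wants to see where the space is actually going on a
controller, a rough sketch assuming default paths on a 2.x controller
machine:

    juju ssh -m controller 0
    sudo du -sh /var/lib/juju/db /var/lib/juju/agents /var/log/juju

/var/lib/juju/db is where mongo lives, and therefore where the logs,
charms, and resources listed above end up.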