Re: conjure-up Canonical Kubernetes in LXD

2016-11-17 Thread Antonio Rosales
On Thu, Nov 17, 2016 at 5:02 PM, Charles Butler
 wrote:
> This deserves a ton of fanfare. Let's celebrate this win by circulating this
> like crazy.
>
> I've already retweeted this evening and plan on following up again tomorrow
> during normal business hours. Great work, Stokes, on completing this
> herculean task. The ~containers team appreciates the effort that went into
> this, and the collaboration across our teams.

Indeed: OpenStack, Kubernetes, or a Hadoop-Spark cluster... on your
laptop... in machine containers... multi-node... same as in
the cloud... Developers, rejoice!

-Antonio


>
> Go team!
>
>
>
> Charles Butler  - Juju Charmer
> Come see the future of modeling your datacenter: http://jujucharms.com
>
> On Thu, Nov 17, 2016 at 5:51 PM, Adam Stokes 
> wrote:
>>
>> Just pulled in changes to support deploying The Canonical Distribution of
>> Kubernetes on the localhost cloud type.
>>
>> I've blogged about it here:
>>
>> http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
>>
>> Please give it a shot, deploy some workloads on it, and let us know how it
>> goes.
>
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>



-- 
Antonio Rosales
Ecosystem Engineering
Canonical

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: conjure-up Canonical Kubernetes in LXD

2016-11-17 Thread Charles Butler
This deserves a ton of fanfare. Let's celebrate this win by circulating
this like crazy.

I've already retweeted this evening and plan on following up again tomorrow
during normal business hours. Great work, Stokes, on completing this
herculean task. The ~containers team appreciates the effort that went into
this, and the collaboration across our teams.

Go team!



Charles Butler  - Juju Charmer
Come see the future of modeling your datacenter: http://jujucharms.com

On Thu, Nov 17, 2016 at 5:51 PM, Adam Stokes 
wrote:

> Just pulled in changes to support deploying The Canonical Distribution of
> Kubernetes on the localhost cloud type.
>
> I've blogged about it here:
>
> http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
>
> Please give it a shot, deploy some workloads on it, and let us know how it
> goes.
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Canonical Kubernetes charm Release Notes - week ending 11/18/2016

2016-11-17 Thread Charles Butler
Greetings Kubernauts and charm aficionados alike.

We've been cycling so hard we actually forgot to post release notes for the
last release of the Kubernetes bundles. I apologize for that; it's been a
busy couple of weeks with KubeCon, a lot of end-user engagements in IRC, and
forward-looking work.


Without further ado, here's the goodness that just landed in the
charm store as of today (and some last week, whoops!):


Etcd

- Added snapshot/restore actions. This enables operators to snapshot and
  clone a running Etcd cluster, which is useful during version migrations
  and when restoring from a broken state. (See the short sketch after this
  list.)

  Thanks @rimusz for requesting and piloting this feature in the edge channel.
  https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/126

- Updated the README with snapshot/restore instructions
- Stripped the (leader) identifier from status messages in favor of the
  asterisk
- Python fixes for flake8 version 3.2.x
- Added Juju storage support so the WAL and data files may now be stored on a
  cloud provider persistent disk for disaster recovery
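
For anyone wanting to try the new actions, here is a rough sketch of how they
are driven (action parameters are deliberately omitted here; check the updated
charm README or `juju actions etcd` for the exact schema):

    # List the actions the etcd application exposes
    juju actions etcd

    # Take a snapshot on a unit; note the returned action id
    juju run-action etcd/0 snapshot
    juju show-action-output <action-id>

    # Restore is the mirror operation; the README documents the parameters
    # it expects
    juju run-action etcd/0 restore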



Kubernetes-master

- Updated the kubernetes resource to version 1.4.5
- Updated addon templates for kube-dns and kubernetes-dashboard
- Updated the README to bring it up to date with recent changes
- Added a `charm build` tactic to copy/rewrite manifests from the upstream
  VCS instead of maintaining forks


Kubernetes-worker

- Updated the kubernetes resource to version 1.4.5
- Fixed the microbot action failing when the replicas param is omitted
- Fixed a temporary hook failure involving premature deployment of the
  ingress controller
- Added a config option to set node labels for a deployed group of workers;
  thanks @rimusz for requesting this feature. (See the short sketch after
  this list.)
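
A couple of hedged examples of the new worker bits (the node-labels config key
is assumed to be called `labels` here; confirm the exact name and format with
`juju get-config kubernetes-worker`):

    # Launch the microbot demo with an explicit replica count (the replicas
    # param is the one the fix above refers to)
    juju run-action kubernetes-worker/0 microbot replicas=3

    # Apply labels to every unit of a worker application; the key name and
    # space-separated format are assumptions, not gospel
    juju set-config kubernetes-worker labels="node-role=worker zone=dmz"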



Kubernetes-e2e (conformance/testing apparatus)

- Added support for running e2e tests in parallel. This is the new default
  behavior.
- Added junit output to results
- Updated the charm icon for consistency
- Fixed a bug when deploying with no resource
- Fixed a bug when upgrading the e2e resource
- Removed excess paths from the result archives


Special thanks to @rimusz for working with us to improve the operations.
Thank you to @mbruzek, @chuckbutler, @wwwtyro, and @cynerva for all the
assistance on this release.




Charles Butler  - Juju Charmer
Come see the future of modeling your datacenter: http://jujucharms.com
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


conjure-up Canonical Kubernetes in LXD

2016-11-17 Thread Adam Stokes
Just pulled in changes to support deploying The Canonical Distribution of
Kubernetes on the localhost cloud type.

I've blogged about it here:

http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/

Please give it a shot, deploy some workloads on it, and let us know how it
goes.
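
For the impatient, a minimal sketch of the flow (install method and exact
arguments may vary by release; the blog post above is the authoritative
walkthrough):

    # Assumes LXD is already installed and initialised on the host
    sudo snap install conjure-up --classic
    conjure-up canonical-kubernetes localhost   # or run plain `conjure-up`
                                                # for the interactive picker

    # Watch the containers and charms settle
    juju status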
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Juju Respect Spaces

2016-11-17 Thread James Beedy
Having a difficult time understanding why my additional units aren't
respecting my spaces constraint...

I have created a subnet in each AZ available in my region, see here ->
http://paste.ubuntu.com/23492849/

When I initially deployed my bundle, it seems the units deployed to the
correct subnet in my space, as there was only one subnet created at that
time. Following my initial bundle deploy I decided to add units of each
service. At this point the units deployed to random subnets in other AZs
and did not respect the space constraint. Next I thought, "why don't I just
add the subnets in the other AZs in my region to my space?", thinking
this would allow Juju to disperse the units across AZs whilst staying
within the subnets defined in my space. To my dismay, even after adding
a subnet in each AZ to my space, adding additional units of applications
that were deployed with the spaces constraint still failed to deploy to a
subnet in my space. Am I doing something wrong here? Suggestions?
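
For reference, this is roughly the workflow in question (space, subnet, and
application names below are made up for illustration):

    # One subnet per AZ, all added to the same space
    juju add-space db-space
    juju add-subnet 172.31.0.0/24 db-space
    juju add-subnet 172.31.16.0/24 db-space

    # The initial deploy respects the constraint...
    juju deploy postgresql --constraints "spaces=db-space"

    # ...but the units added afterwards are the ones landing outside the space
    juju add-unit postgresql -n 2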

Thanks
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju



[charms] Barbican Hostname Config Params

2016-11-17 Thread James Beedy
Is there a specific reason the barbican charm doesn't have the
os-{internal,private}-hostname config params?

Thanks
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


[Review Queue] ntp, ntpmaster, nagios, ibm-http, ghost, ibm-spectrum-symphony-master

2016-11-17 Thread Cory Johns
Greetings, all.  Kevin, Konstantinos, Pete, and I reviewed some charms in
the new review queue this week and last week, which I forgot to send out an
email for.

Nov 17, 2016: Cory, Konstantinos, Kevin

- IBM HTTP Server
  - https://bugs.launchpad.net/charms/+bug/1612535
  - Use of sys.exit() in a reactive charm needs to be addressed

- ~nagios-charmers/nagios
  - https://review.jujucharms.com/reviews/13
  - Test fixes work
  - Promulgated and closed

- ~ntp-team/ntpmaster
  - https://review.jujucharms.com/reviews/10
  - This charm was already reviewed twice, plus it was in the promulgated
    space
  - Tested the charm on LXD and went through the review items
  - Promulgated and closed. Thank you for your work.

- ~ntp-team/ntp
  - https://review.jujucharms.com/reviews/11
  - Closing this review item, since it is already promulgated.


Nov 10, 2016: Cory, Pete, Kevin

- Ghost
  - https://review.jujucharms.com/reviews/21
  - New conjure-up-based test fails

- IBM Spectrum Symphony
  - https://review.jujucharms.com/reviews/15
  - Charm winds up in “maintenance” rather than “active” mode.
  - 10-deploy.py is not executable.
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [charms] Deploy Keystone To Public Cloud

2016-11-17 Thread James Beedy
Thanks for this! Alongside setting os-*-hostname to the public IP address,
I had to log in to the AWS console and manually open up port 35357, as Juju
doesn't expose 35357 (for good measure, I'm assuming).

Yeah, I had the same situation with Juju 2.0! (All is OK with Juju 1.25, with
the AWS provider.)

It happens because the OpenStack charms started to use 'network-get
--primary-address public' instead of 'unit-get'.
'network-get public' then returns a private address, so the charms register
the endpoint with the private address and it does not work from outside.
I could solve this only by setting 'os-public-hostname' to the floating IP of
the specific machine.
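
A rough sketch of that workaround (the hostname, security group id, and CIDR
are placeholders; `set-config` is the Juju 2.0 spelling):

    # Point the public endpoint at an address reachable from outside the VPC
    juju set-config keystone os-public-hostname=keystone.example.com

    # Juju only opens ports the charm declares, so the admin API port was
    # opened by hand -- via the console, or equivalently with the AWS CLI:
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 35357 --cidr 203.0.113.0/24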

On Thu, Nov 17, 2016 at 2:14 AM, James Beedy  wrote:

> I'm having an issue getting a response back (mostly timeouts occur) when
> trying to talk to keystone deployed to AWS using private (on vpn) or
> public
> ip address. I've had luck with setting os-*-hostname configs, and ssh'ing
> in and running the keystone/openstack client locally from the keystone
> instance after adding the private ip <-> fqdn mapping in
> keystone:/etc/hosts, but can't seem to come up with any combination that
> lets me talk to the keystone api remotely. Just to be clear, I'm only
> deploying keystone and percona-cluster charms to AWS, not all of
> Openstack.
>
> If not possible using the ec2 provider, is this a possibility with any
> public providers?
>
> Thanks
>
> _
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Controllers running out of disk space

2016-11-17 Thread Nate Finch
Resources are also stored in mongo and can be unlimited in size (not much
different than fat charms, except that at least they're only pulled down on
demand).

We should let admins configure their max log size... our defaults may not
be what they like, but I bet that's not really the issue, since we cap them
smallish (the logs stored in mongo are capped, right?)

Do we still store all logs from all machines on the controller?  Are they
capped?  That has been a problem in the past with large models.
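
(For anyone needing to answer that on a live controller today, a rough sketch;
paths assume a default Juju 2.x install:)

    # Shell onto machine 0 of the controller model
    juju ssh -m controller 0

    # ...then see what is actually eating the disk
    df -h /
    sudo du -sh /var/log/juju/*     # agent/unit logs on disk
    sudo du -sh /var/lib/juju/db    # MongoDB data: logs, blobstore, etc.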

On Thu, Nov 17, 2016 at 8:53 AM Rick Harding 
wrote:

> I definitely agree we need to provide some better tools for the admin
> of the controller to track and garden things such as charms and resources,
> which can be quite large and grow over time.
>
> My main point with Uros was that, thanks to model migrations, we have a way
> to stick a controller/model into a quiet mode so that a migration can take
> place. It feels like a natural fit for a bit of a maintenance mode as
> operators manage large-scale/long-running Juju controllers over time. I
> wanted to look into the feasibility of allowing the operator to engage the
> quiet mode while they manage disk or other issues.
>
> The other question is: is this a temporary issue? When you migrate models to
> another controller, only the latest charm/resource revision goes with it.
> Maybe there's a place for a migration as part of good hygiene. It feels a
> bit forceful, but it might actually be a safe practice.
>
> On Thu, Nov 17, 2016 at 4:13 AM John Meinel 
> wrote:
>
> So logs in mongo and logs on disk should be capped, and purged when they
> get above a certain size. 'audit.log' should never be automatically purged.
> Charms in the blobstore are potentially local data that we can't reproduce,
> so hard to automatically purge them. I think there has been some work done
> to properly reference count them, so we at least know if anything is
> currently referencing them. And we could purge things that are from a
> charmstore, since we know it is available elsewhere.
>
> I'm fine with a "pause" sort of mode so that you have an opportunity to
> move things out, and some sort of manually triggered garbage collection.
> But if we do know that some garbage collection is safe, we should probably
> just do it by default.
>
> John
> =:->
>
>
> On Thu, Nov 17, 2016 at 12:20 PM, Uros Jovanovic <
> uros.jovano...@canonical.com> wrote:
>
> Hi all,
>
> I'd like to start a discussion on how to handle storage issues on
> controller machines, especially what we can do when storage is getting 95%
> or even 98% full. There are many processes that are storing data, we have
> at least:
> - charms and resources in the blobstore
> - logs in mongo
> - logs on disk
> - audit.log
>
> Even if we put all models in the controller into read-only mode to prevent
> upload of new charms or resources to blobstore, we still have to deal with
> logs which can fill the space quickly.
>
> While discussing this with Rick, and given the work on model migrations,
> maybe the path forward would be to allow the admin to put the whole controller
> in "maintenance" mode and put all agents on "hold".
>
> How to deal with the storage issue after that is open to discussion; maybe we
> need tools to "clear" the blobstore, or export/compress/truncate logs, etc.
>
> Thoughts?
>
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: "unfairness" of juju/mutex

2016-11-17 Thread roger peppe
On 17 November 2016 at 12:12, Stuart Bishop  wrote:
> On 17 November 2016 at 02:34, roger peppe  wrote:
>>
>> +1 to using blocking flock. Polling is a bad idea with a heavily contended
>> lock.
>>
>> FWIW I still think that mutexing all unit hooks is a bad idea
>> that's really only there to paper over the problem that apt-get
>> doesn't work well concurrently.
>
>
> apt is just the one you commonly trip over. If there was no mutex, then
> charms would need to do their own locking for every single resource they
> need to access that might potentially also be accessed by a subordinate (now
> or in the future), and hope subordinates also use the lock. So I think
> mutexing unit hooks on the same machine is a fantastic idea :) Just
> something innocuous like 'adduser' can collide with a subordinate wanting to
> stick a config file in that user's home directory.

Surely a hook mutex primitive (e.g. "mutex adduser ...") would have been
more appropriate than the sledgehammer approach of mutexing everything
all the time? Sometimes I might want a hook to run for a long time
(or it might unfortunately block on the network) and turning off all
subordinate hooks while that happens doesn't seem right to me.
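
For illustration, roughly what such a narrow, per-resource lock looks like
with flock(1) today (the lock path and guarded command are made up):

    # Serialise only the user-management step, not the whole hook
    flock /run/lock/charm-adduser.lock adduser --system myservice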

Anyway, I appreciate that it's too late now. We can't change this assumption
because it'll break all the charms that rely on it.

  cheers,
rog.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Controllers running out of disk space

2016-11-17 Thread Rick Harding
I definitely agree we need to provide some better tools for the admin
of the controller to track and garden things such as charms and resources,
which can be quite large and grow over time.

My main point with Uros was that, thanks to model migrations, we have a way
to stick a controller/model into a quiet mode so that a migration can take
place. It feels like a natural fit for a bit of a maintenance mode as
operators manage large-scale/long-running Juju controllers over time. I
wanted to look into the feasibility of allowing the operator to engage the
quiet mode while they manage disk or other issues.

The other question is: is this a temporary issue? When you migrate models to
another controller, only the latest charm/resource revision goes with it.
Maybe there's a place for a migration as part of good hygiene. It feels a
bit forceful, but it might actually be a safe practice.

On Thu, Nov 17, 2016 at 4:13 AM John Meinel  wrote:

So logs in mongo and logs on disk should be capped, and purged when they
get above a certain size. 'audit.log' should never be automatically purged.
Charms in the blobstore are potentially local data that we can't reproduce,
so hard to automatically purge them. I think there has been some work done
to properly reference count them, so we at least know if anything is
currently referencing them. And we could purge things that are from a
charmstore, since we know it is available elsewhere.

I'm fine with a "pause" sort of mode so that you have an opportunity to
move things out, and some sort of manually triggered garbage collection.
But if we do know that some garbage collection is safe, we should probably
just do it by default.

John
=:->


On Thu, Nov 17, 2016 at 12:20 PM, Uros Jovanovic <
uros.jovano...@canonical.com> wrote:

Hi all,

I'd like to start a discussion on how to handle storage issues on
controller machines, especially what we can do when storage is getting 95%
or even 98% full. There are many processes that are storing data, we have
at least:
- charms and resources in the blobstore
- logs in mongo
- logs on disk
- audit.log

Even if we put all models in the controller into read-only mode to prevent
upload of new charms or resources to blobstore, we still have to deal with
logs which can fill the space quickly.

While discussing this with Rick, and given the work on model migrations,
maybe the path forward would be to allow the admin to put the whole controller
in "maintenance" mode and put all agents on "hold".

How to deal with the storage issue after that is open to discussion; maybe we
need tools to "clear" the blobstore, or export/compress/truncate logs, etc.

Thoughts?



--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/juju


--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/juju
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: "unfairness" of juju/mutex

2016-11-17 Thread Adam Collard
FWIW this is being tracked in https://bugs.launchpad.net/juju/+bug/1642541


On Thu, 17 Nov 2016 at 04:17 Nate Finch  wrote:

> Just for historical reference.  The original implementation of the new OS
> mutex used flock until Dave mentioned that it presented problems with file
> management (files getting renamed, deleted, etc).
>
> In general, I'm definitely on the side of using flock, though I don't
> think that necessarily solves our starvation problem; it depends on how
> flock is implemented and the specific behavior of our units.
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Controllers running out of disk space

2016-11-17 Thread John Meinel
So logs in mongo and logs on disk should be capped, and purged when they
get above a certain size. 'audit.log' should never be automatically purged.
Charms in the blobstore are potentially local data that we can't reproduce,
so hard to automatically purge them. I think there has been some work done
to properly reference count them, so we at least know if anything is
currently referencing them. And we could purge things that are from a
charmstore, since we know it is available elsewhere.

I'm fine with a "pause" sort of mode so that you have an opportunity to
move things out, and some sort of manually triggered garbage collection.
But if we do know that some garbage collection is safe, we should probably
just do it by default.

John
=:->


On Thu, Nov 17, 2016 at 12:20 PM, Uros Jovanovic <
uros.jovano...@canonical.com> wrote:

> Hi all,
>
> I'd like to start a discussion on how to handle storage issues on
> controller machines, especially what we can do when storage is getting 95%
> or even 98% full. There are many processes that are storing data, we have
> at least:
> - charms and resources in the blobstore
> - logs in mongo
> - logs on disk
> - audit.log
>
> Even if we put all models in the controller into read-only mode to prevent
> upload of new charms or resources to blobstore, we still have to deal with
> logs which can fill the space quickly.
>
> While discussing this with Rick, and given the work on model migrations,
> maybe the path forward would be to allow the admin to put the whole controller
> in "maintenance" mode and put all agents on "hold".
>
> How to deal with the storage issue after that is open to discussion; maybe we
> need tools to "clear" the blobstore, or export/compress/truncate logs, etc.
>
> Thoughts?
>
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Controllers running out of disk space

2016-11-17 Thread Jacek Nykis
On 17/11/16 08:20, Uros Jovanovic wrote:
> Hi all,
> 
> I'd like to start a discussion on how to handle storage issues on
> controller machines, especially what we can do when storage is getting 95%
> or even 98% full. There are many processes that are storing data, we have
> at least:
> - charms and resources in the blobstore
> - logs in mongo
> - logs on disk
> - audit.log
> 
> Even if we put all models in the controller into read-only mode to prevent
> upload of new charms or resources to blobstore, we still have to deal with
> logs which can fill the space quickly.
> 
> While discussing this with Rick, and given the work on model migrations,
> maybe the path forward would be to allow the admin to put the whole controller
> in "maintenance" mode and put all agents on "hold".
>
> How to deal with the storage issue after that is open to discussion; maybe we
> need tools to "clear" the blobstore, or export/compress/truncate logs, etc.
> 
> Thoughts?

We have this bug for mongodb disk space:
https://bugs.launchpad.net/juju/+bug/1492237

There is a workaround for Juju 1.25 but I don't know if it will work with
Juju 2:
https://bugs.launchpad.net/juju/+bug/1492237/comments/6

Some time ago I also filed this one to fix log handling in Juju:
https://bugs.launchpad.net/juju/+bug/1494661

Feel free to me too!

Jacek



-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju