juju/retry take 2 - looping

2016-10-19 Thread Tim Penhey

Hi folks,

https://github.com/juju/retry/pull/5/files

As is often the case, the pure solution is not always the best. What 
seemed initially like the best approach didn't end up that way.


Both Katherine and Roger had other retry proposals that got me thinking 
about changes to the juju/retry package. The stalemate at the tech 
board made me want to try another approach, one I thought about while 
walking the dog today.


I wanted the security and fallback of validating the various looping 
attributes, while making the call site much more obvious.

The pull request has the result of this attempt.

It is by no means perfect, but it is an improvement, I think. I was able to 
trivially reimplement retry.Call in terms of the retry.Loop concept with no 
test changes.


The tests are probably the best way to look at the usage.

Tim

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Bootstrap Constraints

2016-10-19 Thread James Beedy
Team,

From what I can gather, Juju allows you to bootstrap into a specific
network/subnet only when the provider supports a network-space bootstrap
constraint. The EC2 provider happens to be one of the providers that
doesn't support controller placement at bootstrap. This is a massive
problem for me, since I have many subnets for things other than controller
nodes. I just can't get the controller to land in a subnet (it seems to be
chosen more or less at random) that doesn't already have other things in it
that I don't want around my controller. To support bootstrap network
constraints on the EC2 provider, I think a 'network' constraint is needed,
along with validation of the provided 'network' constraint value to ensure
the subnet exists and is in the current region and current VPC. That seems
like it might do the trick until we have a uniform model for controller
placement that works across all providers.

Thoughts?




Re: Regarding juju Storage - using MAAS as cloud provider

2016-10-19 Thread Shilpa Kaul
Hi,

I have configured a MAAS cluster and have a block device 'sdb' attached to 
one of the nodes that I am using to deploy my charm.


After this I created a storage pool called maastest with the tag attribute 
set to 'sdb', and deployed my charm. In the MAAS controller console the 
node status is "Failed Deployment", and the UI shows the error below for 
the node on which I am trying to deploy the charm using MAAS storage:

Installing for i386-pc platform.
grub-install: error: unable to identify a filesystem in 
hostdisk//dev/sdb; safety check can't be performed.
failed to install grub!
Command: ['install-grub', 'tmp/tmpqwTtyG/target', '/dev/sdb']

MAAS version is 1.9.4 and the Ubuntu version is 14.04.

I am new to MAAS and am not sure why this error occurs. Can someone 
please help me resolve it?

Thanks and Regards,
Shilpa Kaul



From:   Matt Bruzek 
To: Shilpa Kaul/India/IBM@IBMIN, Juju email list 
, maas-de...@lists.ubuntu.com
Cc: Kevin Monroe , Suchitra 
Venugopal1/India/IBM@IBMIN, Andrew Wilkins , 
Antonio Rosales , Marco Ceppi 
, Randall Ross 
Date:   10/19/2016 09:24 PM
Subject:Re: Regarding juju Storage - using MAAS as cloud provider



Shilpa,

There is documentation about creating storage on MAAS here: 
https://maas.ubuntu.com/docs/storage.html

Using this document you should be able to create block devices in MAAS 
that you can later use in Juju.

The Juju storage documentation can be found here: 
https://jujucharms.com/docs/stable/charms-storage

As an example, once you have your MAAS storage created and tagged, you 
could create a storage pool in Juju like this:

juju create-storage-pool mypool maas tags=<tag>

And then you could add a storage constraint to deploy your charm like 
this:

juju deploy <charm-name> --storage disks=mypool,1G

I have not tried MAAS storage with Juju so you may need some additional 
commands. If anyone else has examples of combining MAAS storage with Juju 
please reply here to let us know. Thanks!

   - Matt Bruzek 

On Tue, Oct 18, 2016 at 11:29 AM, Shilpa Kaul  wrote:
Hi Matt/Kevin,

We have a charm called Spectrum Scale (previously called gpfs) which 
makes use of the Juju Storage feature. I have tested this on AWS, using 
EBS as the storage option. When I deploy the charm, say "juju deploy 
ibm-spectrum-scale-manager --storage disks=ebs,1G", I am able to get block 
storage disks. My charm uses this disk and then creates a file system on 
top of that.
I am able to test this on AWS, but now we have a scenario where we 
have to deploy the charm on physical servers or VMs. We have configured 
MAAS for VMs and are able to deploy a sample charm as well using MAAS as 
the cloud provider, but I am not sure how to use the Juju storage options 
in the case of MAAS. Can you please provide us with a contact who can help 
us use the storage option with MAAS as the cloud provider?

Thanks and Regards,
Shilpa Kaul





-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: What are the best practices for stop hook handling?

2016-10-19 Thread James Troup
Marco Ceppi  writes:

> Colocation is a rare scenario, a more common one is manual provider.

Err, sorry, but colocation isn't rare; the majority of clouds we
deploy with juju have ceph colocated with nova compute.

And, to be clear, this is not a theoretical problem, I've been burnt
super badly by this in real world production deployments, e.g.:

https://bugs.launchpad.net/charms/+source/ceph/+bug/1629679

> Outside of "hulk-smashing" which we really don't support,

??

> Having to further distill how to "clean up" seems like a step
> backwards in what we care about with Juju, which is the setup and
> operations of software.

Reasonable people could argue that proper operation of software
includes clean up.

-- 
James



[Review Queue]: IBM NFS Storage and IBM Platform Master interfaces

2016-10-19 Thread Kevin Monroe
Matt and I took a look at the following interfaces from IBM today:


   - IBM NFSStorage interface
     - https://bugs.launchpad.net/charms/+bug/1578166
     - +1 from us after lint fixes
   - IBM PlatformMaster interface
     - https://bugs.launchpad.net/charms/+bug/1578173
     - Needs similar lint fixes, so we opened the following MP:
       https://code.launchpad.net/~mbruzek/interface-ibm-platformmaster/lint-fixes/+merge/308836


As a reminder, all interfaces need to pass flake8 so they don't cause lint
errors in charms that include them.

Questions or concerns?  Find us in Freenode #juju.  Thanks!
-Kevin


Re: Regarding juju Storage - using MAAS as cloud provider

2016-10-19 Thread Matt Bruzek
Shilpa,

There is documentation about creating storage on MAAS here:
https://maas.ubuntu.com/docs/storage.html

Using this document you should be able to create block devices in MAAS that
you can later use in Juju.

The Juju storage documentation can be found here:
https://jujucharms.com/docs/stable/charms-storage

As an example, once you have your MAAS storage created and tagged, you
could create a storage pool in Juju like this:

juju create-storage-pool mypool maas tags=<tag>

And then you could add a storage constraint to deploy your charm like this:

juju deploy <charm-name> --storage disks=mypool,1G

I have not tried MAAS storage with Juju so you may need some additional
commands. If anyone else has examples of combining MAAS storage with Juju
please reply here to let us know. Thanks!

   - Matt Bruzek 

On Tue, Oct 18, 2016 at 11:29 AM, Shilpa Kaul  wrote:

> Hi Matt/Kevin,
>
> We have a charm called Spectrum Scale (previously called gpfs) which
> makes use of the Juju Storage feature. I have tested this on AWS, using
> EBS as the storage option. When I deploy the charm, say "juju deploy
> ibm-spectrum-scale-manager --storage disks=ebs,1G", I am able to get
> block storage disks. My charm uses this disk and then creates a file
> system on top of that.
> I am able to test this on AWS, but now we have a scenario where we
> have to deploy the charm on physical servers or VMs. We have configured
> MAAS for VMs and are able to deploy a sample charm as well using MAAS as
> the cloud provider, but I am not sure how to use the Juju storage options
> in the case of MAAS. Can you please provide us with a contact who can
> help us use the storage option with MAAS as the cloud provider?
>
> Thanks and Regards,
> Shilpa Kaul
>
>


Re: What are the best practices for stop hook handling?

2016-10-19 Thread Nate Finch
1. The stop hook happens when the unit is being removed entirely.  It does
not run on reboot (and there's no reboot hook).  The docs on the start hook
mention this: "Note that the charm's software should be configured so as to
persist through reboots without further intervention on juju's part."  The
stop hook should clean up everything to the best of its ability, to make
the machine appear as it did before the unit was added to it (so, uninstall
the software, remove all config files, etc).

2. Don't colocate units if at all possible.  In separate containers on the
same machine, sure.  But there's absolutely no guarantee that colocated
units won't conflict with each other. What you're asking about is the very
problem colocation causes. If both units try to take over the same port, or
a common service, or write to the same file on disk, etc... the results
will very likely be bad.  Stop hooks should clean up everything they
started.  Yes, this may break other units that are colocated, but the
alternative is leaving machines in a bad state when they're not colocated.

3. Many charms don't do this (in fact, there's an email about this on our
internal mailing list right now). They absolutely should.   Many charms get
away with not doing cleanup because Juju's main use case is containers and
throw-away VMs that are discarded after the unit is removed... but there
are many cases where this does not happen, such as using the Manual
provider or colocated units.  Please write cleanup code.


On Wed, Oct 19, 2016 at 10:17 AM Rye Terrell 
wrote:

> I have a number of questions regarding how to handle stop hooks properly:
>
> 1. Background services - stop them or stop & disable them?
>
> The docs say "stop runs immediately before the end of the unit's
> destruction sequence. It should be used to ensure that the charm's software
> is not running, and will not start again on reboot."
>
> Can anyone verify that that is correct? If so, it seems clear that
> services should be stopped & disabled, but leaves me with another question
> - is there no hook that handles scenarios like host rebooting?
>
> If it's not correct, what is the proper behavior for the stop hook
> handler? Stop & disable on stop hook and start & enable on start hook?
>
> 2. Background services - how do we handle colocated applications with
> shared background services?
>
> I'm not sure this is something we support, but if so, what do we do when
> one application is stopped and it has a colocated application that shares a
> background service dependency? I don't think this is something we can
> detect at the charm level, so do we _not_ stop services so that we don't
> cause conflicts?
>
> 3. File cleanup - is anyone doing this?
>
> The docs also say "Remove any files/configuration created during the
> service lifecycle" is part of a charm's stop hook handling behavior. My
> experience isn't exactly vast, but I'm unaware of charms doing this. Is
> this something we actually do? Should we keep that statement in the docs?
>


What are the best practices for stop hook handling?

2016-10-19 Thread Rye Terrell
I have a number of questions regarding how to handle stop hooks properly:

1. Background services - stop them or stop & disable them?

The docs say "stop runs immediately before the end of the unit's
destruction sequence. It should be used to ensure that the charm's software
is not running, and will not start again on reboot."

Can anyone verify that that is correct? If so, it seems clear that services
should be stopped & disabled, but leaves me with another question - is
there no hook that handles scenarios like host rebooting?

If it's not correct, what is the proper behavior for the stop hook handler?
Stop & disable on stop hook and start & enable on start hook?

2. Background services - how do we handle colocated applications with
shared background services?

I'm not sure this is something we support, but if so, what do we do when
one application is stopped and it has a colocated application that shares a
background service dependency? I don't think this is something we can
detect at the charm level, so do we _not_ stop services so that we don't
cause conflicts?

3. File cleanup - is anyone doing this?

The docs also say "Remove any files/configuration created during the
service lifecycle" is part of a charm's stop hook handling behavior. My
experience isn't exactly vast, but I'm unaware of charms doing this. Is
this something we actually do? Should we keep that statement in the docs?