Re: Is there a universal interface I can use?

2017-11-30 Thread Stuart Bishop
On 28 November 2017 at 20:22, Tilman Baumann
<tilman.baum...@canonical.com> wrote:

> That would be quite nice actually. Backup and snapshot could be two
> different actions even.
> Snapshot is a little low-level as it is per-node. But it makes for fast
> recovery if a node hiccups.
> Full backup like Haw Loeung implemented in
> https://jujucharms.com/u/hloeung/cassandra-backup/ is probably even more
> useful for many scenarios.

There doesn't seem to be a standard backup mechanism, so backup
probably belongs in a subordinate. I currently have a subordinate that
rsyncs the files to a backup host on my smallest deployment (per
https://insights.ubuntu.com/2015/08/04/introducing-turku-cloud-friendly-backups-for-your-infrastructure/)


> I'm quite stumped right now with the odd combination of needing to be a
> subordinate and needing to connect to the database relation.
> I just can't get it to work. And I'm out of ideas.
> I would love to just finish it "to get it to work" but I don't know what
> else I could try at this point.

You just need to define the juju-info container scoped relation as
well as the standard cassandra relation.

subordinate: true
requires:
  cassandra:
    interface: cassandra
  juju-info:
    interface: juju-info
    scope: container

When you deploy, you need to add both relations, and both will become
available, each with its own scope:

juju add-relation mysub:juju-info cassandra:juju-info
juju add-relation mysub:cassandra cassandra:database


> I'm not very familiar with the coding style used in the cassandra charm.
> But I think I could help you with adding those functionalities even.
> I will have a lot of distractions the next two weeks. But I can see what
> I can do...

I want to make time to rework it as a charms.reactive charm. But
adding an action shouldn't need that, as the actions.yaml and
actions/foo script don't need any dependencies on the rest of the
charm.
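
For illustration, such an action can be entirely self-contained. A minimal
sketch (the action name, the 'keyspace' option and the nodetool invocation
are assumptions, not the cassandra charm's actual code) of an
actions/snapshot script:

#!/usr/bin/env python3
# Hypothetical standalone action script; it needs nothing from the rest of
# the charm beyond charm-helpers.
import subprocess

from charmhelpers.core import hookenv


def main():
    keyspace = hookenv.action_get('keyspace') or ''
    cmd = ['nodetool', 'snapshot']
    if keyspace:
        cmd.append(keyspace)
    try:
        output = subprocess.check_output(cmd, universal_newlines=True)
        hookenv.action_set({'output': output})
    except subprocess.CalledProcessError as exc:
        hookenv.action_fail('nodetool snapshot failed: {}'.format(exc))


if __name__ == '__main__':
    main()

The matching actions.yaml entry only needs to declare the action and its
'keyspace' parameter, so nothing else in the charm has to change.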


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Handling uninstallation upon subordinate removal in reactive

2017-11-27 Thread Stuart Bishop
On 22 November 2017 at 01:58, Junien Fridrick
<junien.fridr...@canonical.com> wrote:

> I was also thinking that maybe the snap layer could save which
> application installed which snap, and if said application is removed,
> then remove said snap. Would that be a good idea ?

Your subordinate charm doesn't know whether it is the only charm using
the snap on that unit, so this sort of cleanup is dangerous. Well,
right now it is pretty safe because snaps in charms are rare, but I
expect this to change over time. It is exactly the same as
automatically removing deb packages, which charms rarely if ever do.
Also note that snaps can contain data, and removing them can destroy
that data. Subordinates need to tread lightly to avoid victimizing
other charms on the unit; primary charms don't bother, because removing
them means the machine should ideally be rebuilt.

It would be possible to have the snap layer automatically remove
snaps, but the behaviour would need to be explicitly enabled using a
layer.yaml option.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Is there a universal interface I can use?

2017-11-27 Thread Stuart Bishop
On 23 November 2017 at 21:37, Tilman Baumann
<tilman.baum...@canonical.com> wrote:
> On 22.11.2017 23:26, Haw Loeung wrote:
>> Hi Tilman,
>>
>> On Wed, Nov 22, 2017 at 04:02:08PM +0100, Tilman Baumann wrote:
>>> However, that doesn't seem to work. Juju complains the relation doesn't
>>> exist.
>>> $ juju add-relation cassandra-backup:database cassandra:database
>>> ERROR no relations found
>>>
>>> So, is there an interface that I can (ab-)use in a similar way?
>>>
>>> I don't want to build a full-blown cassandra interface and add it to the
>>> list.
>>
>> Not sure if you've seen this, but I did some work recently with
>> something similar to backup Cassandra DBs:
>>
>> | https://jujucharms.com/u/hloeung/cassandra-backup/
>
> I didn't want to talk about it before it's usable. I think I might be
> working on something similar.
>
> https://github.com/tbaumann/jujucharm-layer-cassandra-backup
>
> It seems to only use "nodetool snapshot"
> I'm integrating this for a 3rd party so I don't quite know what is going
> on there. But looks like the intent is pretty much the same.

I think this charm needs to remain a subordinate, because 'nodetool
snapshot' requires a JMX connection and that should be blocked
(because it is not secured).

I'd be happy to have actions on the main Cassandra charm to manage
snapshots, and cronned snapshots would also be a feature suitable for
the main charm. But you would still need some way to ship the
snapshots to your backup host which should be a subordinate.

Ideally the Cassandra charm would support multiple DCs, which would
allow you to back up only a subset of your nodes to get a complete copy
of your data, but that is going to need to wait until a
charms.reactive rewrite.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: set_state not setting the state immediately

2017-11-10 Thread Stuart Bishop
On 10 November 2017 at 21:54, Konstantinos Tsakalozos
<kos.tsakalo...@canonical.com> wrote:

> Even if it is not correct, the behaviour of the bash reactive is what
> the naive ( :) ) developer expects. Can you give me a concrete example
> of what can go wrong with the approach the bash reactive
> implementation is taking? Stuart, you mention "changes made to the
> Juju environment get rolled back on hook failure" could some of these
> changes cause a bash reactive charm to misbehave, can you please give
> me an example?

Here is a trivial handler that informs the unit's peers that it is
available. If the hook fails, but the state persists, the peers are
never informed. Some information has persisted (the charms.reactive
state), and other information has been thrown away (the Juju
environment state).

@when('mycharm.configured')
@when_not('mycharm.joined')
def join():
    for relid in relation_ids('cluster'):
        relation_set(relid, active='true')
    set_state('mycharm.joined')

Here is a common one you may have written yourself, where the
configured port might never be opened:

@when('config.changed.port')
def update_port():
    prev_port = unitdata.kv().get('mycharm.port')
    new_port = config()['port']
    if prev_port is not None:
        close_port(prev_port)
    open_port(new_port)
    unitdata.kv().set('mycharm.port', new_port)

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Relation scope clarification

2017-10-18 Thread Stuart Bishop
On 18 October 2017 at 02:21, John Meinel <j...@arbash-meinel.com> wrote:

> Ah, I guess that telegraf is actually gathering extra data from mysql or
> postgres about database specific stats, and thus it should have a container
> scoped relation because it wants to explicitly sit with postgres and collect
> general machine information, as well as collect postgres specific
> information. It isn't that telegraf is using postgresql as its data store,
> its that it knows how to get extra statistical information about a database.
>
> In that case, telegraf *should* use a container scope for its postgresql
> interface. I wonder how that works when you have HA postgres, and each
> telegraf connection to postgres is at least logically different. (telegraf
> should care very much that it never connect to the postgresql on a different
> machine and get its information.)

In the current implementation, the 'subordinate' telegraf has access
to the master so it can connect to it and create the resources in the
database that it needs (if some other unit hasn't already beaten it
to it).

If a container scoped relation is used, the subordinate telegraf would
need to create the database resources if it happens to be connected to
the master, or wait until some other subordinate creates them if not.

Either way, my original query remains.

Is it allowed to relate charms with one end declaring the scope as
global and the other end declaring the scope as container?

If yes, should a PostgreSQL unit still be able to see the relation
data from its peers, even if it is related to a charm declaring the
relation as container scoped? Or should charms avoid inspecting
relation data of peer units because it will fail in this case?


> Is this a case where we actually need postgresql to offer up a
> "pgsql-monitoring" relation, rather that use the existing "store my data in
> postgres, 'pgsql' relation"
> ?

If mismatched relations are not allowed (one end declaring global, but
the other end disagreeing and declaring container), then I can provide
a separate relation, yes. It might also be the simplest solution, as
having the PostgreSQL charm share relation connection details from
master to standbys via the peer relation instead of the client
relation is fairly invasive.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Relation scope clarification

2017-10-17 Thread Stuart Bishop
Hi.

A server declares a relation with standard scope. Let's use PostgreSQL
for example, which declares the following in metadata.yaml:

provides:
  db:
    interface: pgsql

A client happens to be a subordinate, and declares its end of the
relation as container scoped. So in its metadata.yaml:

requires:
  postgresql:
    interface: pgsql
    scope: container

My first question is: Is this supported by Juju? Can charms have a
relation with a different scope at each end? I know it works in most
cases, but is it supported or just an accident of implementation?

If the answer to that is yes, my second question is: If the relation
fails when the two charms declare a different scope, whose fault is
it?

The problem I have is that if one end of the relation declares
container scope, then the relation is container scoped, and
relation-get calls attempting to inspect relation data of peers fail.
Is this a Juju bug, or does the PostgreSQL charm need to understand
this limitation and use some other mechanism if it wants the pgsql
relation to work in either global or container scope?

Should relation-get return an error if a charm attempts to access
relation info from a peer unit, rather than only working if both ends
of the relation agree that the relation scope is global?

There are several bugs open on this issue dealing with large scale
deployments and I'm not sure how to proceed.
https://bugs.launchpad.net/juju/+bug/1721295 is the juju one. I think
I can update the PostgreSQL charm to support requirements, but I'm
worried I would just be digging a deeper hole.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: set_state not setting the state immediately

2017-10-04 Thread Stuart Bishop
On 4 October 2017 at 00:51, Mike Wilson <mike.wil...@canonical.com> wrote:
> So the best practice here is to touch a file and test for the existence of
> that file before running must_be_called_exactly_once()?
>
> I think part of the issue here is that without knowing the extent of the
> hook it is hard to enforce idempotency as a charm writer. It's easy to look
> at the code above and say that is it idempotent since the init function is
> wrapped in a when_not and the initialized state is set at the bottom of
> init.

Individual handlers should be idempotent, so it doesn't matter what the
extent of the hook is, or even whether the chained handlers being
triggered are running in the same hook. Assume your handlers get called
multiple times, because they may be. Yes, it looks idempotent but it
isn't. An assumption is being made that the state changes get committed
immediately, but these changes are actually transactional, following
the same transactional behaviour as the Juju hook environment [1]. I
think this can certainly be explained better in the docs, but I can't
think of a way to stop this being an easy error to make.

[1] spot the DBA
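
One way to make a handler like the quoted init() safe to re-run is to guard
the one-shot work itself rather than relying only on the reactive flag. A
minimal sketch (hypothetical state name and marker path), reusing
must_be_called_exactly_once() from the example under discussion:

import os

from charms.reactive import set_state, when_not

MARKER = '/var/lib/mycharm/.initialized'


@when_not('mycharm.initialized')
def init():
    # Re-running this handler after a later hook failure is now harmless.
    if not os.path.exists(MARKER):
        must_be_called_exactly_once()
        os.makedirs(os.path.dirname(MARKER), exist_ok=True)
        open(MARKER, 'w').close()
    set_state('mycharm.initialized')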

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: set_state not setting the state immediately

2017-10-03 Thread Stuart Bishop
On 3 October 2017 at 19:34, Konstantinos Tsakalozos
<kos.tsakalo...@canonical.com> wrote:
> Hi,
>
> It seems the reactive framework is flushing the states at the end of hook
> execution. This may surprise charm authors. Consider the following code:
>
> @when_not("initialized")
> def init():
> must_be_called_exactly_once()
> set_state("initialized")
>
> @when("initialized")
> @when_not("ready")
> def get_ready():
> this_call_fails()
> set_state("ready")
>
> As a charm author I would expect the "initialized" state to be set right
> after must_be_called_exactly_once() is called. However, the framework is not
> persisting the "initialized" state at that point, and it moves on to trigger
> get_ready(). Since this_call_fails() happens in the get_ready() method,
> I would expect the "initialized" state to be set when the failure is
> resolved.

The reason the charm state needs to be rolled back on hook failure is
that the changes made to the Juju environment get rolled back on hook
failure. If must_be_called_exactly_once() set a property on a relation
for example, then that change is rolled back by Juju when
this_call_fails() puts the unit into an error state. If init() was not
rerun, then that relation change would never happen (because the charm
thinks it has been made). This is one reason why handlers need to be
idempotent.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Rejection of peer join.

2017-09-27 Thread Stuart Bishop
On 27 September 2017 at 14:50, Michael Van Der Beek
<michael@antlabs.com> wrote:
> Hi Stuart,
>
> I think you misinterpreted what I was asking
>
> Assuming a pair of instances is already in a relationship.
> Let's assume we have drbd running between these two instances; let's call it A-B.
> If juju starts a 3rd instance C, I want to make sure the 3rd cannot join the
> pair, as drbd is not supposed to have a 3rd node. Although in theory you can
> create a stack node for 3rd or more backups.
>
> So when ha-relation-joined is triggered on either A or B, how do I tell
> C it is rejected from the join so that it doesn't screw up juju? I suppose
> A/B could do a "relation-set" of something to tell C it is rejected.

I would have C use relation-list to get a list of all the peers. If
there are two or more units with a lower unit number, C knows it
should not join, sets its status to 'blocked' and exits 0. There is no
need to involve A or B in the conversation; Unit C can make the
decision itself. Unit C could happily sit there in blocked state until
either A or B departs, at which point C would see only one unit with a
lower unit number and know it should join. You could even argue that
the blocked state is incorrect with this design, and that unit C
should actually set its status to active or maintenance, with a
message stating it is a hot spare.
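
A minimal sketch of that check (hypothetical peer relation name 'ha' and
status message), run from unit C's peer relation hooks:

from charmhelpers.core import hookenv


def should_stand_aside(max_nodes=2):
    # Count peers with a lower unit number than ours.
    my_number = int(hookenv.local_unit().split('/')[1])
    lower = 0
    for relid in hookenv.relation_ids('ha'):
        for unit in hookenv.related_units(relid):
            if int(unit.split('/')[1]) < my_number:
                lower += 1
    return lower >= max_nodes


if should_stand_aside():
    hookenv.status_set('blocked', 'drbd pair already formed; standing by as a spare')
    raise SystemExit(0)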

The trick is that if someone does 'juju deploy -n3 thecharm', there is
no guarantee about the order in which units join the peer relation. So
Unit C may think it should be active, then a short while later see other
units join and need to become inactive. If you need to avoid this
situation, you are going to need to use Juju leadership to avoid these
sorts of race conditions. There is only one leader at any point in
time, so it can make decisions like which of the several units should
be active or not without worrying about race conditions. But it sounds
overly complex for your use case.

> The issue for me is how to scale if you have specific data in a set of
> nodes. So you can use ceph, or drbd, or some cluster. Ceph will require 3 ceph
> nodes, drbd two nodes, and maybe a galera cluster 3 nodes.
>
> So my idea is that there is already a load balancer to scale. Each time you
> want to scale you would add one or more pairs (assuming drbd) to an already
> existing set of pairs. The load balancer will just redirect data to specific
> pairs based on some logic (like a modulus of the last octet of the customer
> IP, which can give you 256 pairs). This is how we are doing it on physical
> machines. (We haven't had a customer yet that requires more than 10,000 tps
> for radius or 5 million concurrent sessions.) Note I use "pairs" loosely
> here, as the pair is 3 nodes instead of a pair if running a galera cluster.
>
> I'm currently trying to figure out how to do it on openstack, if you have
> some recommendation for me to read/view on how people deal with scaling for
> very high write IO to disk. Currently for radius we are looking at nearly
> 95% writes, 5% reads. Nobody reads the data unless someone wants to know if
> user X is currently logged in. If the (R/W) IO requirements were the other
> way around it would be much easier to scale.

I'm generally deploying charms to bare metal and using local disk if
there are any sort of non-trivial IO requirements. I believe the newer
Juju storage features should allow you mount whatever sort of volumes
you want from OpenStack (or any cloud provider), but I'm not familiar
with the OpenStack specifics.



-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: How to use LXC local image for new machine

2017-09-01 Thread Stuart Bishop
On 1 September 2017 at 02:37, fengxia <fx...@lenovo.com> wrote:
> According to https://bugs.launchpad.net/juju/+bug/1650651, juju 2.1 supports
> using local image if its alias is `juju/series/arch` format.
>
> So following this, I created a local image and gave it an alias of this
> format, but juju deploy will still download ubuntu-trusty before creating
> the container.

I'm attaching the script I'm using, which is slightly modified from
the original version passed around and posted here. It might point you
to where your process is failing. I haven't done it manually myself.

-- 
Stuart Bishop <stuart.bis...@canonical.com>


lxdseed
Description: Binary data
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: charm layers hooks/install, how are they "merged"?

2017-08-11 Thread Stuart Bishop
On 11 August 2017 at 09:02, fengxia <fx...@lenovo.com> wrote:
> Hi Juju,
>
> I'm building a charm by including two layers in layer.yaml:
>
> includes:
>   - 'layer:basic'
>   - 'layer:pylxca'
>
> Pylxca is a library I packaged into a "layer" format for reuse.
>
> Both layers have a folder called "hooks", and each has a hook called
> "install". The curious result is that after `charm build`, the pylxca's
> install hook is the one in dist.

The pylxca layer should not be adding the hook stubs in the hooks/
directory. These are provided by layer:basic, and duplicating them in
the pylxca layer leads to the problem you describe.

> So my question now is, if I want to use layers, and each layer defines its
> own hooks so to help its implementation, how does `charm build` handle them?
> Are they merged? From this test, only the last one will survive. If so, does
> it mean only one layer can define hooks?

Only one layer should define the hook stubs. Layers can do it, as you
have found, and it can be useful in some debugging situations to
override and inject behaviour, but I don't think it should ever make it
into a release.

If multiple layers need hook-specific code, they can all declare
handlers with the @hook decorator and they are all run in an arbitrary
order. It's best to avoid @hook completely though and just use normal
reactive states. So instead of using @hook('install'), use
@when_not('mycharm.installed') and have that handler set the
mycharm.installed state at the end.
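
A minimal sketch of that pattern (hypothetical state name and helper):

from charms.reactive import set_state, when_not


@when_not('mycharm.installed')
def install():
    do_install_work()  # placeholder for whatever this layer needs to set up
    set_state('mycharm.installed')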

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Is base64 really the best way to use complex config values

2017-06-08 Thread Stuart Bishop
On 7 June 2017 at 23:22, Tilman Baumann <tilman.baum...@canonical.com> wrote:
> I see a lot of charms use base64 values in config parameters. Especially
> when the values are stuff like custom templates.
>
> Is this really the way to go? It may avoid shell quoting hell for
> parameters set via command line. (Usually trivial)
> But when set via --file option (which is clearly the better way with
> complex fields) then I have to say the 'here document' features of YAML
> are actually quite good.
>
> The problem I see with base64 is that nobody can read it without decoding
> it every time.
>
> Opinions?

base64 is occasionally useful for binary data or text in arbitrary
encodings. It is only popular because people keep cargo culting it
into their charms when it is unnecessary. I always call it out in
reviews and get people to switch to unencoded text.

> Just as an example of what I mean:
> application: logstash-supportcloud
> charm: logstash-conf-d
> settings:
>   config:
>     value: |
>       filter {
>         if [message] == "" {
>           drop { }
>         }
>       }
>       output {
>         gelf {
>           host => "foobar"
>           port => "5002"
>         }
>       }

Yes, much better. It involves teaching people the difference between
> and | in multiline YAML strings.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Stuart Bishop
On 22 May 2017 at 14:36, Tim Penhey <tim.pen...@canonical.com> wrote:
> On 20/05/17 19:48, Merlijn Sebrechts wrote:
>>
>> On May 20, 2017 09:05, "John Meinel" <j...@arbash-meinel.com
>> <mailto:j...@arbash-meinel.com>> wrote:
>>
>> I would actually prefer if it shows up in 'juju status' but that we
>> suppress it from 'juju status-log' by default.
>>
>>
>> This is still very strange behavior. Why should this be default? Just pipe
>> the output of juju status through grep and exclude update-status if that is
>> really what you want.
>>
>> However, I would even argue that this isn't what you want in most
>> use-cases.  "update-status" isn't seen as a special hook in charms.reactive.
>> Anything can happen in that hook if the conditions are right. Ignoring
>> update-status will have unforeseen consequences...
>
>
> Hmm... there are (at least) two problems here.
>
> Firstly, update-status *should* be a special case hook, and it shouldn't
> take long.
>
> The purpose of the update-status hook was to provide a regular beat for the
> charm to report on the workload status. Really it shouldn't be doing other
> things.
>
> The fact that it is a periodic execution rather than being executed in
> response to model changes is the reason it isn't fitting so well into the
> regular status and status history updates.
>
> The changes to the workload status would still be shown in the history of
> the workload status, and the workload status is shown in the status output.
>
> One way to limit the execution of the update-status hook call would be to
> put a hard timeout on it enforced by the agent.
>
> Thoughts?

Unfortunately update-status got wired into charms.reactive like all
the other standard hooks, and just means 'do whatever still needs to
be done'. I think it's too late to add timeouts or restrictions. But I
do think special-casing it in the status history is needed. Anything
important will still end up in there due to workload status changes.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


charms.reactive bi-weekly catchup

2017-04-11 Thread Stuart Bishop
Hi.

Alex Kavanagh, Merlijn Sebrechts, Tim Van Steenburgh and I met
for the regular charms.reactive development discussions.

As some of us are again finding time for actual coding work, we
discussed some smaller fully backwards compatible tasks to move
forward with.

We identified the work on deprecating interface layers and allowing
them to be declared in standard layers as one task that can be
immediately worked on. I plan to tackle this unless others beat me to
it. I believe this only requires work in charm-tools' 'charm build'
and related charms.reactive documentation.

Another task is migrating the leadership layer into the base layer or
charms.reactive core, per previous discussions. As the existing
implementation is ok and not complex, I'm leaving this for anyone who
wants to dip their toe into the code base.

A simple job is documenting how layers should declare their license
information. It seems a convention of LICENSE.layername or similar
will work fine here and this just needs to be documented.

Renaming states to flags should wait until we have a clearer idea of
any necessary API changes. The rename gives us the opportunity to
change the API while keeping the old API unchanged for backwards
compatibility. Backwards compatibility was a bit of a theme, with Alex
mentioning feedback he has received from charmers already having too
much legacy code needing rewrites and not needing more.

I will discuss my pull request on making the relations base class
pluggable with Cory. This should allow us to start exploring flag
implementations without the pain of using a charms.reactive fork. This
would be an experimental feature rather than one actively supported
and documented.

A lengthy discussion was had on the triggers proposal (
https://github.com/juju-solutions/charms.reactive/issues/97 ). Some of
my assumptions about this were false, as the proposal is actually for
a mechanism to allow state/flag changes to be made after every handler
(rather than 'preflight' code, run before the main reactive loop is
started). Merlijn will work on compelling use cases to convince us on
why this new feature would be a benefit.

We should try to articulate any pain points with the current
charms.reactive framework and bring them to the table. Not everyone is
using charms.reactive the same way, and maybe there are issues people
are having that need to be tackled.

The group will be meeting again in another two weeks.

Discussions welcome here or in github issues at
https://github.com/juju-solutions/charms.reactive/issues :-)


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Postgresql WAL-E Support

2017-04-11 Thread Stuart Bishop
On 11 April 2017 at 08:22, James Beedy <jamesbe...@gmail.com> wrote:
> Hello,
>
> Just wondering what the extent of the WAL-E support for the postgresql charm
> is, and what is the lifecycle here (if any)?
>
> 1) What can I expect to happen if I configure the wal-e options in the
> postgresql charm?

It should start shipping WAL files and making regular filesystem level backups.

> 2) Does WAL-E (if configured) automatically start sending my wal files to
> s3, or is there a place where I must intervene with some manual ops to get
> the process started?

It should. However, S3 is still marked as experimental as I've only
been testing with Swift. It also might need the bucket to be created
(if so, file a bug as the charm should do this automatically). If you
are able to help shake out any kinks here, that would be great and I
can remove the experimental warnings. Watching the bucket contents is
probably the easiest way to confirm it is working. If you run 'SELECT
pg_switch_xlog();' at a psql prompt, you should shortly see a new wal
file appear. This is simpler than watching the main postgresql log or
/var/lib/postgresql/9.x/main/pg_xlog/.

> 3) Can I just set the wal-e configs and expect my database to base backup,
> and have wal files start pushing to s3 automatically?

Yes.

> 4) It seems WAL-E needs to have AWS_REGION set. Can we get support for this
> config through the postgresql charm?

Yes, certainly. It's a minor update, which I've just pushed out to cs:postgresql.

I've got a branch nearly done which adds point-in-time recovery
actions (using WAL-E), and a few helpers like list-backups. This
should allow you to easily make use of the backups and wal archive,
for recovering deployments or cloning them.

I'm currently experiencing failures with the wal-e snap inside lxd
containers (the investigation is ongoing).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Leader Election and Application-Specific Leadership

2017-04-06 Thread Stuart Bishop
On 6 April 2017 at 00:26, Dmitrii Shcherbakov
<dmitrii.shcherba...@canonical.com> wrote:

> https://jujucharms.com/docs/2.1/reference-charm-hooks#leader-elected
> "leader-elected is run at least once to signify that Juju decided this
> unit is the leader. Authors can use this hook to take action if their
> protocols for leadership, consensus, raft, or quorum require one unit
> to assert leadership. If the election process is done internally to
> the service, other code should be used to signal the leader to Juju.
> For more information read the charm leadership document."
>
> This doc says
> "If the election process is done internally to the service, other code
> should be used to signal the leader to Juju.".
>
> However, I don't see any hook tools to assert leadership to Juju from
> a charm based upon application-specific leadership information
> http://paste.ubuntu.com/24319908/
>
> So, as far as I understand, there is no manual way to designate a
> leader and the doc is wrong.
>
> Does anyone know if it is supposed to be that way and if this has not
> been implemented for a reason?

I agree with your reading, and think the documentation is wrong.  If
the election process is done internally to the service, there is no
way (and no need) to signal the internal 'leader' to Juju.

I also put 'leader' in quotes because if your service maintains its
own master, you should not call it 'leader' to avoid confusion with
the Juju leader.

For example, the lead unit in a PostgreSQL service appoints one of the
units as master. The master remains the master until the operator runs
the 'switchover' action on the lead unit, or the master unit is
destroyed causing the lead unit to start the failover process. At no
point does Juju care which unit is 'master'. Its communicated to the
end user using the workload status. Its simple enough to do and works
well.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 23:20, Simon Davy <simon.d...@canonical.com> wrote:

> One thing that seems to have landed in 2.1, which is worth noting IMO, is
> the local juju lxd image aliases.
>
> tl;dr: juju 2.1 now looks for the lxd image alias juju/$series/$arch in the
> local lxd server, and uses that if it finds it.
>
> This is amazing. I can now build a local nightly image[1] that pre-installs
> and pre-downloads a whole set of packages[2], and my local lxd units don't
> have to install them when they spin up. Between layer-basic and Canonical
> IS' basenode, for us that's about 111 packages that I don't need to install
> on every machine in my 10 node bundle. Took my install hook times from 5min+
> each to <1min, and probably halves my initial deploy time, on average.

Ooh, thanks for highlighting this! I've needed this feature for a long
time for exactly the same reasons.


> [2] my current nightly cron:
> https://gist.github.com/bloodearnest/3474741411c4fdd6c2bb64d08dc75040

/me starts stealing

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 15:46, Stuart Bishop <stuart.bis...@canonical.com> wrote:
> On 23 February 2017 at 01:46, James Beedy <jamesbe...@gmail.com> wrote:
>
>>> I think I can fix this, but I'll need to make a corresponding
>>> adjustment in the PostgreSQL charm and the fix will only take effect
>>> with updated services.
>>>
>> +1 for a fix. Thanks.
>
>>> Its related to the above issue. Your charm connects and gets the
>>> db.master.available state set. But you want to specify the database
>>> name, so a handler runs calling set_database(). At this point the
>>> .master and .standbys properties start correctly returning None, but
>>> set_database() neglected to remove the *.available states so handlers
>>> got kicked in that shouldn't have.
>>>
>> Ok, so a fix coming for this too in that case? This one is borking on my
>> devs who are deploying my bundles, in turn causing me grief, but also
>> borking on me too, making me question my own sanity :(
>
> Yes. I'll push a fix out shortly.

I've pushed a fix for your second issue (the 'available' states not
being removed when you change the requested database name).

I won't be able to fix the first issue today. For now, I think you can
work around it using an extra state.

@when('db.connected')
@when_not('dbname.requested')
def request_database_name(psql):
    psql.set_database('foobar')
    reactive.set_state('dbname.requested')

@when_all('db.master.available', 'dbname.requested')
def do_stuff_needing_master_db(psql):
    assert psql.master is not None
    assert psql.master.dbname == 'foobar'


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 14:22, Mark Shuttleworth <m...@ubuntu.com> wrote:
> On 22/02/17 19:46, James Beedy wrote:
>
>> A client can 'accept the defaults' by not setting any properties on
>> the db relation when it joins (dating back to the original protocol
>> with pyjuju). When the PostgreSQL charm runs its relation-joined and
>> relation-changed hooks, it has no way of telling if the client just
>> wants to 'accept the defaults', or if the client has not yet run its
>> relation-joined or relation-changed hooks yet. So if it sees an empty
>> relation, it assumes 'accept the defaults' and provides a database
>> named after the client service.
>
>
> IIRC we agreed that the full state of a unit would be exposed to it from the
> beginning, if we know that.
>
> We have had ample time to introduce changes in behaviour since pyjuju, so I
> suspect this is just something that slipped through the cracks, not
> something we especially want to preserve. Could you file a bug with the
> proposed change in behaviour that would enable charmers to be more
> definitive in their approach?

I've filed https://bugs.launchpad.net/juju/+bug/1667268.

For exposing the full state of a unit,
https://bugs.launchpad.net/juju-core/+bug/1417874 may also be relevant,
as clusters don't have enough information or opportunity to
decommission themselves cleanly.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 01:46, James Beedy <jamesbe...@gmail.com> wrote:

>> I think I can fix this, but I'll need to make a corresponding
>> adjustment in the PostgreSQL charm and the fix will only take effect
>> with updated services.
>>
> +1 for a fix. Thanks.

>> Its related to the above issue. Your charm connects and gets the
>> db.master.available state set. But you want to specify the database
>> name, so a handler runs calling set_database(). At this point the
>> .master and .standbys properties start correctly returning None, but
>> set_database() neglected to remove the *.available states so handlers
>> got kicked in that shouldn't have.
>>
> Ok, so a fix coming for this too in that case? This one is borking on my
> devs who are deploying my bundles, in turn causing me grief, but also
> borking on me too, making me question my own sanity :(

Yes. I'll push a fix out shortly.


>> need more control, you can use the set_roles() method on the interface
>> to have PostgreSQL grant some roles to your user, and then grant
>> permissions explicitly to those roles. But this doesn't really help
>> much from a security POV, so I've been toying with the idea of just
>> having clients all connect as the same user for the common case where
>> people don't want granular permissions (even if it does make security
>> minded people wince).
>
> Will the "common use case" be the only use case?

I think I need to support both approaches. I don't want applications
to outgrow Juju once they become complex enough to warrant per-table
permissions. How this happens, I don't know yet; I haven't yet come up
with a design I like enough to pursue further. I don't think you'll
see any changes here until LTS+1, because it will likely need
backwards-incompatible changes.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju GUI handling of empty config breaks promulgated charms

2017-02-22 Thread Stuart Bishop
On 22 February 2017 at 22:45, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> This - is - awesome !
>
> I guess this hack will still require a rebuild for the affected Charms?

Other affected charms could use the same hack (patching
charmhelpers.core.hookenv.config to convert nulls to ''), and would
need to be rebuilt and published.
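
A minimal sketch of that kind of patch (an assumed wrapper, not the exact
code shipped in the charm):

from charmhelpers.core import hookenv

_original_config = hookenv.config


def _config(scope=None):
    value = _original_config(scope)
    if scope is not None:
        return '' if value is None else value
    # scope=None returns the full config mapping; normalise its values too.
    for key, val in value.items():
        if val is None:
            value[key] = ''
    return value


hookenv.config = _config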

I considered submitting it to charm-helpers, but it's not completely
backwards compatible and could cause problems for charms that really
do expect null config settings.

The ideal solution is of course to fix it in Juju, and have it stop
throwing away perfectly valid configuration data :)


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-22 Thread Stuart Bishop
On 22 February 2017 at 21:46, James Beedy <jamesbe...@gmail.com> wrote:

> Experiencing some varying results with the PostgreSQL charm, hoping to get
> some validation on my use case.

This all seems to be about the pgsql interface. Expected use is shown at
http://interface-pgsql.readthedocs.io/en/stable/requires.html#example-usage
if you haven't seen it already. The protocol had a fairly large
(backwards compatible) change recently, so sorry about the teething
troubles.

> 1. Sometimes (feeling like 1/5ish deploys) a database is created with the
> name of the application, instead of the database name the application
> requested.

Unfortunately this seems possible.

A client can 'accept the defaults' by not setting any properties on
the db relation when it joins (dating back to the original protocol
with pyjuju). When the PostgreSQL charm runs its relation-joined and
relation-changed hooks, it has no way of telling if the client just
wants to 'accept the defaults', or if the client has not yet run its
relation-joined or relation-changed hooks yet. So if it sees an empty
relation, it assumes 'accept the defaults' and provides a database
named after the client service. If your client then runs its
relation-joined hook, the 'db.connected' state will be set (because
there is a relation), and the 'db.*.available' states will also be set
because there is a database available (not the one you want, but you
haven't had a change to say otherwise yet). At which point your
handlers kick in on *.available and get valid connection strings to a
different database to the one you want. And maybe the handler that
calls set_database() runs too, but it is too late.

I think I can fix this, but I'll need to make a corresponding
adjustment in the PostgreSQL charm and the fix will only take effect
with updated services.

Pre-reactive, the burden was on the charm to wait until the database
name provided matched the one requested. That is still what you can do
here if you need an immediate workaround (a sketch follows below),
although the goal of the interface is to remove that sort of annoying
implementation detail from your charm, so I certainly should try and
sort this out.
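
A minimal sketch of that style of workaround (hypothetical database name
and helper), using the dbname attribute the interface exposes:

from charms.reactive import when


@when('db.master.available')
def use_master(psql):
    if psql.master is None or psql.master.dbname != 'foobar':
        return  # not the database we requested yet; try again next hook
    configure_my_app(psql.master)  # placeholder for the charm's own code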


> 2. Every ~ 1/5 deploys (odd how this keeps surfacing), I get a 'NoneType'
> error when trying to access 'master.host', or 'master.uri' via relation to
> the PostgreSQL charm on the firing of 'master.available'. See bug created
> here -> https://bugs.launchpad.net/interface-pgsql/+bug/1666337

Its related to the above issue. Your charm connects and gets the
db.master.available state set. But you want to specify the database
name, so a handler runs calling set_database(). At this point the
.master and .standbys properties start correctly returning None, but
set_database() neglected to remove the *.available states so handlers
got kicked in that shouldn't have.


> 3. Users seem to have different access privs.
>
> On this specific deploy, everything seems to have initialized correctly, but
> my applications don't have consistent access to the database across the
> board. See -> http://paste.ubuntu.com/24046732/

PostgreSQL tries to be secure by default, so users do not implicitly
get access to each others tables. The charm that creates the tables
needs to also grant permissions to them. Commonly people just 'GRANT
ALL ON TABLE foo TO PUBLIC' to give full permissions to all other
users on the system (all related services in the Juju case). If you
need more control, you can use the set_roles() method on the interface
to have PostgreSQL grant some roles to your user, and then grant
permissions explicitly to those roles. But this doesn't really help
much from a security POV, so I've been toying with the idea of just
having clients all connect as the same user for the common case where
people don't want granular permissions (even if it does make
security-minded people wince).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju GUI handling of empty config breaks promulgated charms

2017-02-22 Thread Stuart Bishop
On 21 February 2017 at 16:30, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> Thanks, Stuart!
>
> Now I get a different error. It seems that the charm really can't handle
> null as a string value.

Rather than comb through the charm updating all the call sites, I've
made a hack to ensure config values never get returned as null. I also
didn't want to change the style, as the alternative (using
config.get()) can hide other bugs.

Other charms will have the same problem, so this should certainly be
fixed in Juju.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Nagios charm maintenance

2017-02-21 Thread Stuart Bishop
On 22 February 2017 at 03:49, Tom Barber <t...@spicule.co.uk> wrote:

> yeah it's maintained by (I think) Stuart Bishop

It's launchpad.net/nagios-charm and maintained by ~nagios-charmers. I'm
getting the bug mail subscription fixed so interested parties see
them.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju GUI handling of empty config breaks promulgated charms

2017-02-21 Thread Stuart Bishop
On 21 February 2017 at 16:44, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> PS: This bug was reported a few months ago by one of my colleagues, but he
> didn't get a reply: https://bugs.launchpad.net/cassandra-charm/+bug/1645821
>
> Is that the correct place to file bugs for the charm?

Yes. I'm now correctly subscribed, so will see them in the future.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju GUI handling of empty config breaks promulgated charms

2017-02-21 Thread Stuart Bishop
On 21 February 2017 at 00:40, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> Deploying cassandra with the GUI. Cassandra installation will fail with the
> following error.
>
> KeyError: 'http_proxy'
>
>
> The Cassandra charm expects config['http_proxy'] to return an empty string.
> This is what happens when deploying Cassandra using the CLI. However,
> config_get crashes because the config isn't set. running config-get in the
> hooks context shows that the http_proxy config value just isn't set.
>
> https://github.com/juju/juju-gui/issues/2486

I've made a new release using a newer charm-helpers. According to the
issue comments it should work around the problem.

I have no idea why Juju would be changing the empty string to null,
especially as I'm told that from Juju's point of view null is an
invalid value for a string, and that setting a config value's default
to null is a lint error. But that contradicts the documentation for
'config-get --all', which explicitly calls out config settings with
non-null defaults. So maybe this changed with 2.0, but there is some
old code that missed being updated?

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Peers vs Members vs Terminology vs Usability

2017-01-30 Thread Stuart Bishop
@when('leadership.is_leader')
@when_not('leadership.set.master')
def anoint_master():
    leadership.leader_set(master=hookenv.local_unit())

@when('leadership.is_leader')
@when('leadership.set.master')
def maybe_failover():
    master = leadership.leader_get('master')
    peer_rel = context.Relations().peer
    if master != hookenv.local_unit() and master not in peer_rel:
        leadership.leader_set(master=hookenv.local_unit())

@when('leadership.changed.master')
def follow_master():
    master = leadership.leader_get('master')
    rels = context.Relations()
    rels.peer.local['following'] = master
    reconfigure(master=master)  # maybe master is us

@when('leadership.set.master')
def update_status():
    master = leadership.leader_get('master')
    peer_rel = context.Relations().peer
    for peer, reldata in peer_rel.items():
        if reldata.get('following') != master:
            hookenv.status_set('waiting', 'Waiting for cluster coordination')
            return
    if master == hookenv.local_unit():
        hookenv.status_set('active', 'Master')
    else:
        hookenv.status_set('active', 'Secondary to {}'.format(master))

It seems to work best if the handlers making the leadership decisions are
separate from the handlers that act on them. Structured this way, your
charm does not care which unit is the leader; leadership can change
between any two hooks without it mattering.


>
> (but note that the design has a bug in
> it, as it assumes there is a single peer relation while in reality charms
> can have multiple peer relations. You will consistently get the same peer
> relation, but it might not be the one you expect).
>
> - Uggg, possibly I've already been hitting this.
>

Only if you have more than one peer relation declared in metadata.yaml,
which is pretty unusual.




-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Peers vs Members vs Terminology vs Usability

2017-01-28 Thread Stuart Bishop
On 25 January 2017 at 22:22, James Beedy <jamesbe...@gmail.com> wrote:

> Trying to use the peers interface to coordinate all units of an
> application to know about each other, I find myself feeling like this
> should be a built in functionality. In other words, "tell me who my peers
> are" shouldn't turn into a giant milestone for every charm who needs to
> know what peers it has (skewed usage of peers here per Juju definition of
> peers). I feel like there are a decent number of charms/applications that
> will end up recreating this functionality independently (or already have),
> to the extent that we should probably think about making this functionality
> more built in/generic. My understanding for what a peer is, is different
> than what Juju defines a peer to be, hence we should think about defining
> some terminology around the term 'peer'. Possibly there is just a need for
> a generic 'members' interface built off the peers interface, which would do
> the coordination and caching bits, and make some basic information
> available to each member about its complimentary members?
>
> Thoughts?
>

If you want to build this API, it could be done as a charm layer. If that
sounds too heavyweight just for this, it could be squeezed into
layer:coordinator with just a little bit of squinting. It also might be
doable as an interface, so all charms declaring a peer relation using the
'members' interface name get to share your implementation, which I think
matches what you are suggesting. For it to be built into Juju, I think you
would need a more concrete suggestion and a rationale for why it is better
built in than pulled into the charm using a reactive layer, an interface,
or even just a Python library in the wheelhouse.

I personally would not have found it useful, as when I've needed a peer
relation I've needed a lot more than just knowing who the peers are and
their addresses. Maybe I should write less complex charms :)

You might find that charmhelpers.context gives you enough to keep you
happy: context.Relations().peer gives you a dictionary of peer ->
relation data, and context.Relations().peer.local is a dictionary of
the local peer relation data that you can update (but note that the
design has a bug in it, as it assumes there is a single peer relation
while in reality charms can have multiple peer relations; you will
consistently get the same peer relation, but it might not be the one
you expect).
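
A minimal sketch of that usage (hypothetical relation data keys):

from charmhelpers import context
from charmhelpers.core import hookenv

rels = context.Relations()
peer_rel = rels.peer  # the (first) peer relation, per the caveat above

if peer_rel is not None:
    # Publish something for our peers to read.
    peer_rel.local['hostname'] = hookenv.unit_private_ip()

    # Read what each peer has published so far.
    for unit, data in peer_rel.items():
        hookenv.log('{} is at {}'.format(unit, data.get('hostname')))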


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: normal charm to subordinate charm and now peer relation does not work

2017-01-25 Thread Stuart Bishop
On 25 January 2017 at 18:43, Tilman Baumann <tilman.baum...@canonical.com>
wrote:

> At this point I'm pretty sure that this is a bug or undocumented feature.
>
>
> The peer relation of a subordinate charm only has one conversation.
> Despite scope being 'global' in metadata.yaml and the RelationBase class
> being scope = scope.UNIT.
>

Relation scopes declared in metadata.yaml have no relation to conversation
scopes, which are a charms.reactive concept.

A relation scope of 'container' means that the relation data is not visible
to other units. The subordinate unit can see the host unit's relation data
and vice versa. Neither can see the relation data on the relation from
their peer units. It looks like a relation with just two end points, rather
than the normal case of one end point for each unit in the two related
services. It gets very, very confusing when one end declares container
scoped and the other end global, so don't do that :)

I don't know why your peer relation (with global scope) starts misbehaving
after you add the container scoped juju-info relation to turn your charm
into a subordinate. It might be helpful to inspect the peer relation with
the hook environment tools to try to narrow down if the problem is with
Juju, charms.reactive, or something else. Using debug-hooks, or 'juju run
--unit foo/0 "relation-ids ssh-peers"' and 'juju run --unit foo/0
"relation-list -r ssh-peers:64"' if you haven't done this before.

One thing to remember is that units join the peer relation one at a time.
So in your peer relation-joined hook, you will only see a single unit. Then
a relation-changed, again with a single unit. And maybe a few more times
with a single unit. Eventually the rest of the units will join, each time
triggering relation-changed one or more times. Maybe your problem is just
that you are looking too soon :)


>
> Either I'm wrong to expect this to work and subordinates are only
> supposed to have container scopes. Then it is a dokufix and should be
> caugt by charm proof.
>

It's fine as far as Juju is concerned. The cs:ntp subordinate has both a
global scoped peer relation and a container scoped juju-info relation
(https://jujucharms.com/ntp). I don't know about the charms.reactive
conversation or relation object, but I can't see why its behaviour would
change once you add the container scoped juju-info relation, and I think it
more likely that the problem lies elsewhere.


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: lxd and constraints

2017-01-13 Thread Stuart Bishop
On 13 January 2017 at 02:20, Nate Finch <nate.fi...@canonical.com> wrote:

> I'm implementing constraints for lxd containers and provider... and
> stumbled on an impedance mismatch that I don't know how to handle.
>


> I'm not really sure how to resolve this problem.  Maybe it's not a
> problem.  Maybe constraints just have a different meaning for containers?
> You have to specify the machine number you're deploying to for any
> deployment past the first anyway, so you're already manually choosing the
> machine, at which point, constraints don't really make sense anyway.
>

I don't think Juju can handle this. Either constraints have different
meanings with different cloud providers, or lxd needs to accept minimum
constraints (along with any other cloud providers with this behavior).

If you decide constraints need to consistently mean minimum, then I'd argue
it is best to not pass them to current-gen lxd at all. Enforcing that
containers are restricted to the minimum viable resources declared in a
bundle does not seem helpful, and Juju does not have enough information to
choose suitable maximums (and if it did, would not know if they would
remain suitable tomorrow).

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: peer relation without public interface

2017-01-12 Thread Stuart Bishop
On 11 January 2017 at 20:23, Tilman Baumann <tilman.baum...@canonical.com>
wrote:

> Hi,
>
> I'm writing a layered reactive-python charm which uses a peer relation
> to know all units of the same application.
>
> However I don't seem to find a way to convince charm build to create the
> ./hook/ files for this relation for me.
>

'charm build' will only create the hook files for interfaces included by
your layer.yaml. If you don't want that, then you need to create the hooks
yourself in your charm. You do this by copying
https://github.com/juju-solutions/layer-basic/blob/master/hooks/hook.template
to your hooks/foo-relation-{joined,changed,departed,broken} files and making
them executable. It would be nice if 'charm build' handled this for you, but
nobody has gotten around to implementing the feature.


> What is the general best practice for providing peer relations of your own
> type in a layered charm?
>

Have a look at layer:coordinator for a layer with a 'private' peer
relation. I don't think there is a best practice yet, but it is how I did
it :) https://code.launchpad.net/layer-coordinator or
https://github.com/stub42/layer-coordinator


> I suppose there is no reason why I can't put the code in a class based
> off RelationBase like interface layers usually do? The code entry point
> seems to the @hook decorators around the methods. I don't need to create
> instances or anything like that, right?
>

Yes. There is nothing magical about interface layers - they are just like
any other layer, except with more arbitrary restrictions and their code
copied into hooks/relations rather than the top level. You can most
certainly store this code in your main charm or layer's tree rather than in
a separate branch pulled in at build time.
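
As a rough sketch of what such a class can look like (the 'members' interface
name and the state names here are just examples, not anything standardised):

from charms.reactive import RelationBase, hook, scopes


class MembersPeer(RelationBase):
    scope = scopes.UNIT

    @hook('{peers:members}-relation-{joined,changed}')
    def joined_or_changed(self):
        conv = self.conversation()
        conv.set_state('{relation_name}.connected')

    @hook('{peers:members}-relation-{departed,broken}')
    def departed(self):
        conv = self.conversation()
        conv.remove_state('{relation_name}.connected')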


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Opaque automatic hook retries from API

2017-01-06 Thread Stuart Bishop
On 6 January 2017 at 01:39, Casey Marshall <casey.marsh...@canonical.com>
wrote:

> On Thu, Jan 5, 2017 at 3:33 AM, Adam Collard <adam.coll...@canonical.com>
> wrote:
>
>> Hi,
>>
>> The automatic hook retries[0] that landed as part of 2.0 (are documented
>> as) run indefinitely[1] - this causes problems as an API user:
>>
>> Imagine you are driving Juju using the API, and when you perform an
>> operation (e.g. set the configuration of a service, or reboot the unit, or
>> add a relation..) - you want to show the status of that operation.
>>
>> Prior to the automatic retries, you simply perform your operation, and
>> watch the delta streams for the corresponding change to the unit - the
>> success or otherwise of the operation is reflected in the unit
>> agent-status/workload-status pair.
>>
>> Now, with retries, if you see a unit in the error state, you can't
>> accurately reflect the status of the operation, since the unit will
>> undoubtedly retry the hook again. Maybe it succeeds, maybe it fails again.
>> How can one say after receiving the first delta of a unit error if the
>> operation succeeded or failed?
>>
>> With no visibility up front on the retry strategy that Juju will perform
>> (e.g. something representing the exponential backoff and a fixed number of
>> retries before Juju admits defeat) it is impossible to say at any point in
>> the delta stream what the result of a failed-at-least-once operation is.
>>
>
> I think the retry strategy is great -- it leverages the immutability we
> expect hooks to provide, to deliver a robust result over unreliable
> substrates -- and all substrates are unreliable where there's
> internetworking involved!
>
> However I see your point about the retry strategy muddling status. I've
> noticed this sometimes when watching openstack or k8s bundles "shake out"
> the errors as they come up. I don't think this is always a charm quality
> issue, it's maybe because we're trying to show two different things with
> status?
>

errors being 'shaken out' are almost always unhandled race conditions. I
find destroy-service/remove-application is particularly problematic,
because the doomed units don't know they are being destroyed but rather are
informed about departing one relation at a time (which is inherently racy,
because the units the doomed service is related to will process their
relation-departed hooks almost immediately and stop talking to the doomed
service, while the doomed service still thinks it can access their
resources as it falls apart one piece at a time).

I'm becoming more and more a believer that we can't reasonably avoid these
errors, and instead maybe we should assume that they will happen and it is
perfectly normal. We can stick to writing nice idempotent handlers, which
become simpler because we can let failures bubble up rather than handling
them ourselves. Simpler protocols (eg. removing all the handshaking the
PostgreSQL interface does to try to avoid races with authorization). And
going back to Adam's point, have hooks
retried a few times with some sort of backoff before even being reported as
a failure to the end user. One of the reasons test suites are currently
flaky is that there are race conditions we have no reasonable way of
solving, such as a database restarting itself while a hook on another unit
is attempting to use it. Even though I currently bootstrap test envs with
the retry behaviour off, I'm thinking of changing that.


What if Juju made a clearer distinction between result-state ("what I'm
> doing most recently or last attempted to do") vs. goal-state ("what I'm
> trying to get done") in the status? Would that help?
>

Isn't the goal state just the failed hook? I would certainly like to see
the list of hooks queued to run on each unit though if that is what you
mean (not in the default tabular status, but in the json status dump).



>> Can retries be limited to a small number, with a backoff algorithm
>> explicitly documented and stuck to by Juju, with the retry attempt number
>> included in the delta stream?
>>
>
This sounds like a good idea. The limit could even be dynamic, with a retry
attempted every time a unit it is related to successfully runs a hook,
until the environment is quiescent.



-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: using a bundle with manually added machines (redux)

2017-01-04 Thread Stuart Bishop
On 3 January 2017 at 19:07, Rick Harding <rick.hard...@canonical.com> wrote:

> I'm looking into this. The bundle deploy feature in Juju 2.0 does not
> allow referring to existing machines because it breaks the reusability of
> the bundle.
>

It would be great if Juju started supporting non-reusable bundles too. It's
a waste having to support two similarly named tools that do almost the same
thing. I'm not sure who is using 'juju deploy', but Amulet and Mojo both
depend on 'juju deployer' for this reason. Which slows feature adoption, as
juju-deployer doesn't seem to be owned by anyone and adding support for new
features happens on an ad-hoc basis (my team is just now adding storage and
resource support to it, needed for Mojo, so we can start using these
features with actual deployments).

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: A (Very) Minimal Charm

2016-12-16 Thread Stuart Bishop
On 16 December 2016 at 22:33, Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Tim Penhey <tim.pen...@canonical.com> writes:
>
> > Make sure you also run on LXD with a decent delay to the APT archive.
>
> Open question: is there any reason we shouldn't expect charm authors to
> take a hard-right towards charms with snaps embedded as resources? I know
> one of our long-standing conceptual problems is consistency across units
> which snaps solves nicely.
>

https://github.com/stub42/layer-snap is how I'm expecting things to go.
There is already one charm in the ~charmers review queue using it and I'm
aware of several more in various stages of development.

More work is needed though. In particular, Juju storage is inaccessible to
snaps, because there is no way to reach it from inside the containment.

(But none of this is a reason to not optimize Juju unit provisioning times,
since we will still need an environment setup capable of running the charms
so they can install the snaps for some time yet).

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Leadership Election Tools

2016-12-14 Thread Stuart Bishop
On 14 December 2016 at 00:39, Matthew Williams <
matthew.willi...@canonical.com> wrote:

> Hey Folks,
>
> Let's say I'm a charm author that wants to test leadership election in my
> charm. Are there any tools available that will let me force leadership
> election in juju so that I can test how my charm handles it? I was looking
> at the docs here: https://jujucharms.com/docs/stable/developer-leadership
> but couldn't see anything
>

I don't think there is any supported way of doing this.

If you don't mind an unsupported hack though, use 'juju ssh' to shut down
the unit's jujud, wait 30 seconds for the lease to expire, and you should
have a new leader. 'juju ssh' again to restart the jujud, 'juju wait' for
the hooks to clear, and failover is done. 'juju run' will hang if you use
it to shutdown jujud, so don't do that.

juju ssh ubuntu/0 'sudo systemctl stop jujud-unit-ubuntu-0.service'
sleep 30
juju ssh ubuntu/0 'sudo systemctl start jujud-unit-ubuntu-0.service'
juju wait

Ideally, you may be able to structure things so that it doesn't matter
which unit is leader. If all state relating to leadership decisions is
stored in the leadership settings, and if you avoid using @hook, then it
doesn't matter which unit makes the decisions. Worst case is that *no* unit
is leader when hooks are run, and decisions get deferred until
leader-elected runs.
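
As a rough illustration of that pattern (an untested sketch; the
'admin-password' key is made up):

from charmhelpers.core import hookenv, host

def ensure_admin_password():
    # Only the current leader may write leadership settings; any unit can read.
    if hookenv.is_leader() and not hookenv.leader_get('admin-password'):
        hookenv.leader_set({'admin-password': host.pwgen(24)})

def apply_admin_password():
    password = hookenv.leader_get('admin-password')
    if password is None:
        # No leader has made the decision yet; it will be picked up in a
        # later hook, worst case after leader-elected runs somewhere.
        return
    # ... write the password into the service's configuration here ...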

(Interesting race condition for the day: It is possible for all units in a
service to run their upgrade-charm hook and for none of them to be leader
at the time, so @hook('upgrade-charm') code guarded by is-leader may never
run. And reactive handlers have no concept of priority and might kick in
rather late for upgrade steps, requiring more creative use of reactive
states to guard 'new' code from running too soon. Not specific to
upgrade-charm hooks either, so avoid using @hook and leadership together)


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Immutable configuration best practices for charms

2016-12-13 Thread Stuart Bishop
On 13 December 2016 at 03:01, Sandor Zeestraten <zando...@gmail.com> wrote:


> How are you all dealing with immutable configurations when charming?
>

For now, the best approach I could come up with is to detect the change
(hookenv.config().previous('foo') helps here), and put the unit into a
blocked state until the operator switches the configuration back. Assuming
you are writing a reactive charm, the trick is that this needs to happen
*before* your handlers kick in, and that you need to sys.exit(0) after
putting the unit into the blocked state.
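
A stripped-down version of the idea (a sketch only; the 'data-dir' option
name is just an example):

import sys

from charmhelpers.core import hookenv

def preflight():
    config = hookenv.config()
    if config.previous('data-dir') is not None and config.changed('data-dir'):
        hookenv.status_set('blocked',
                           'data-dir cannot be changed after deployment')
        sys.exit(0)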

https://git.launchpad.net/postgresql-charm/tree/reactive/postgresql/preflight.py
has the code I use to handle immutable configuration and other config
validation, and uses
https://git.launchpad.net/postgresql-charm/tree/lib/preflight.py to inject
the code early in reactive charm startup (I have an open PR in github to
add a similar builtin feature to reactive)

(Long term, I'm sure we will get richer features. There will always be some
cases where charm code needs to run to validate configuration, so maybe we
need some sort of exit status the config-changed hook can return. Immutable
is simpler than generic validation though, and could probably just be
declared in config.yaml)

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Issues with amulet tests

2016-12-07 Thread Stuart Bishop
On 7 December 2016 at 05:46, Tim Van Steenburgh <
tim.van.steenbu...@canonical.com> wrote:

> Not sure where it comes from but you can skip make targets by adding this
> line
> to your tests.yaml:
>
> makefile: []
>


The makefile using tox is being pulled from the basic layer. Maybe the
basic layer needs a matching tests/tests.yaml specifying tox & amulet.

I only just saw this the other day, as I normally have a Makefile in the
source layer that takes precedence.


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: A (Very) Minimal Charm

2016-12-01 Thread Stuart Bishop
On 1 December 2016 at 19:53, Marco Ceppi <marco.ce...@canonical.com> wrote:

> On Thu, Dec 1, 2016 at 5:00 AM Adam Collard <adam.coll...@canonical.com>
> wrote:
>
>> On Thu, 1 Dec 2016 at 04:02 Nate Finch <nate.fi...@canonical.com> wrote:
>>
>> On IRC, someone was lamenting the fact that the Ubuntu charm takes longer
>> to deploy now, because it has been updated to exercise more of Juju's
>> features.  My response was - just make a minimal charm, it's easy.  And
>> then of course, I had to figure out how minimal you can get.  Here it is:
>>
>> It's just a directory with a metadata.yaml in it with these contents:
>>
>> name: min
>> summary: nope
>> description: nope
>> series:
>>   - xenial
>>
>> (obviously you can set the series to whatever you want)
>> No other files or directories are needed.
>>
>>
>> This is neat, but doesn't detract from the bloat in the ubuntu charm.
>>
>
> I'm happy to work though changes to the Ubuntu charm to decrease "bloat".
>
>
>> IMHO the bloat in the ubuntu charm isn't from support for Juju features,
>> but the switch to reactive plus conflicts in layer-base wanting to a)
>> support lots of toolchains to allow layers above it to be slimmer and b) be
>> a suitable base for "just deploy me" ubuntu.
>>
>
> But it is to support the reactive framework, where we utilize newer Juju
> features, like status and application-version to make the charm rich
> despite its minimal goal set. Honestly, a handful of cached wheelhouses
> and some apt packages don't strike me as bloat, but I do want to make sure
> the Ubuntu charm works for those using it. So,
>
> What's the real problem with the Ubuntu charm today?
> How does it not achieve its goal of providing a relatively blank Ubuntu
> machine? What are people using the Ubuntu charm for?
>
> Other than demos, hacks/workarounds, and testing I'm not clear on the
> purpose of an Ubuntu charm in a model serves.
>

The cs:ubuntu charm gets used in production to attach subordinates to. For
example, we install cs:ubuntu onto our controller nodes so we can install
subordinates like cs:ntp, cs:nrpe, cs:~telegraf-charmers/telegraf and
others. It's also used in test suites for these sorts of subordinates.

The 'problem' is, like all reactive charms, the first thing it does is pull
down approximately 160MB of packages and install them (installing pip
pulls in build-essential, or at least a big chunk of it). It's very
noticeable when working locally, and maybe in CI environments.

If I knew how to solve this for all reactive charms, I would have suggested
it already. It could be fixed in cs:ubuntu by making it non-reactive, if
people think it is worth it (it's not like it actually needs any reactive
features. A minimal metadata.yaml and an install or start hook to set the
status is all it needs).

Maybe reactive is entrenched enough as the new world order that we can get
specific cloud images spun for it, where a pile of packages are
preinstalled so we don't need to wait for cloud-init or the charm to
install them. We might be able to lower deployment times from minutes to
seconds, since often this step is the main time sink.

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Jenkins plugin to upload charm to store?

2016-11-02 Thread Stuart Bishop
On 2 November 2016 at 18:24, Konstantinos Tsakalozos <
kos.tsakalo...@canonical.com> wrote:

> Hi Tom,
>
> Yes, I have my own script right now. It is not elegant.
>
> Instead of each one of us maintaining their own scripts, we could have a
> single point of reference. In the Jenkins world I thought that would be a
> plugin, but a script would also work. Is there anyone open sourcing his CI
> <--> juju integration scripts?
>
>
It could be much, much more elegant. I've got open issues on getting 'charm
push' to report the revision better (so you can publish or tag), or even
having 'charm push --channel' do what you want. I personally would rather
see this improved so it helps everyone, to the point you don't need a
Jenkins plugin.

An automated system needs to deal with the auth problem, which is
unfortunate (someone typing 'charm login' and entering their SSO password
and a token on a possibly untrusted system, or manufacturing an auth token
and installing it somehow). Snappy has this sorted better, with Launchpad
able to build snaps from a branch and upload them to the snap store on your
behalf.


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: List plugins installed?

2016-09-30 Thread Stuart Bishop
On 30 September 2016 at 04:47, Nate Finch <nate.fi...@canonical.com> wrote:

> Seems like the easiest thing to do is have a designated plugin directory
> and have juju install  copy the binary/script there.  Then
> we're only running plugins the user has specifically asked to install.
>

This does not work if the plugin has dependencies, such as the Python
standard library or external tools such as git or graphviz. Nothing running
inside the snap containment can access stuff outside of the containment.

I think it will be a more complex solution that needs to be designed with the
snappy team. As far as I can tell its either going to need a small daemon
running outside of containment and a way of passing messages to it (such as
how a snap can open a web page in a browser running outside of
containment), or having plugins distributed as snaps and somehow allowing
the juju snap to call executables in these plugin snaps.

(which is going to take time, so I guess we need to keep the existing
mechanism going a while longer and the snap in devmode)

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Follow-up on unit testing layered charms

2016-09-09 Thread Stuart Bishop
On 9 September 2016 at 01:03, Pete Vander Giessen <
pete.vandergies...@canonical.com> wrote:

> Hi All,
>
> > Stuart Bishop wrote:
> > The tearDown method could reset the mock easily enough.
>
> If only it were that simple :-)
>
> To patch imports, the harness was actually providing a context that you
> could use the wrap the imports at the top of your test module. That solved
> the immediate issue of executing imports without errors, but it created a
> very complex situation when you went to figure out which references to
> cleanup or update when you wanted to reset mocks. You also weren't able to
> clean them up in tearDown, or even tearDownClass, because you had to handle
> the situation where you had multiple test classes in a module.
>
> One workaround is to do your imports inside of the setUp for a test. That
> didn't feel like the correct way to do things in a library meant for
> general use, where I'd prefer to stick to things that don't make Guido sad.
> I wouldn't necessarily object to the technique if it came up in a code
> review for a specific charm, though :-)
>

I'm thinking you insert a MagicMock into sys.modules instead of an import
statement (this is how we do it in the telegraf charm, and I'm sure helpers
could make this nicer):

# Mock layer modules
import sys
from unittest.mock import MagicMock

import charms

promreg = MagicMock()
charms.promreg = promreg
sys.modules['charms.promreg'] = promreg

To reset, you just iterate over sys.modules and reset everything that is a
MagicMock (or anything with a reset_mock() method). There is no need to
figure which to reset, since you want all of them reset every test to
preserve test isolation. I haven't actually tried this bit yet.
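
The reset itself would be something like (again, untested):

import sys
from unittest.mock import MagicMock

def reset_module_mocks():
    # Reset every module-level mock so each test starts from a clean slate.
    for module in sys.modules.values():
        if isinstance(module, MagicMock):
            module.reset_mock()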

If you are using the standard Python unittest feature set, I think you
would need a TestCase subclass to do the reset. If you are using py.test, I
think it has features that can do this magically.

A tearDownModule would be required if you want to remove the mocks from
sys.modules (or py.test magic). But I don't think we need to bother.



-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Follow-up on unit testing layered charms

2016-09-01 Thread Stuart Bishop
On 30 August 2016 at 23:02, Pete Vander Giessen <
pete.vandergies...@canonical.com> wrote:

> The problems with the harness: patching sys.modules leads to a catch-22: if
> we don't leave the mocks in place, we still get import errors when using
> the mock library's mock.patch method, but if we do leave them in place,
> tests that set properties on them can end up interfering with each other.
> There are workarounds, but they're not intuitive, and they don't generate
> friendly error messages when they fail. We felt it best to leave the
> harness behind, and provide some advice on more straightforward things that
> you can do to work around the import errors. Thus the PR referenced above.
>

The tearDown method could reset the mock easily enough. I didn't need to do
that in the simple case I had (a single layer), but it should solve the
test isolation issue you have.



-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Question about scripts and blocking

2016-08-09 Thread Stuart Bishop
On 9 August 2016 at 18:32, Matthew Williams <matthew.willi...@canonical.com>
wrote:

>
> c) Is there some other solution that I've not considered?
>

Plugins can be used to add the behaviour you need, allowing time for the
core team to catch up. I've already got a blocking action runner that just
needs the juju 1 backport done for example, which we need to make the cli
usable before we use actions in anger on production. Running actions on all
units in a service is a small extension to that. See
git+ssh://git.launchpad.net/~stub/+git/juju-act if you are curious.

But yes, I think what you describe is in theory suitable for actions. But
in practice, I think people are not using actions as 'juju run' is more
usable and flexible. https://bugs.launchpad.net/juju-core/+bug/1445066 is
the original bug report on needing blocking actions. I don't know about one
for running on all units in a service, but I have heard it discussed before.

-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju and snappy implementation spike - feedback please

2016-08-09 Thread Stuart Bishop
On 9 August 2016 at 19:08, Ian Booth <ian.bo...@canonical.com> wrote:

> I personally like the idea that the snap could use a juju-home interface to
> allow access to the standard ~/.local/share/juju directory; thus allowing
> a snap
> and regular Juju to be used interchangeably (at least initially). This will
> allow thw use case "hey, try my juju snap and you can use your existing
> settings" But, isn't it verboten for snaps to access dot directories in
> user
> home in any way, regardless of what any interface says? We could provide an
> import tool to copy from ~/.local/share/juju to ~/snap/blah...
>
> But in the other case, using a personal snap and sharing settings with the
> official Juju snap - do we know what the official snappy story is around
> this
> scenario? I can't imagine this is the first time it's come up?
>


The big difference to me is that $SNAP_USER_DATA will roll back if the snap
is rolled back. I'm not sure what happens if the snap is removed and
reinstalled. Given end users should no longer need to be messing around
with the dotfiles, I think the rollback behaviour is what should drive your
decision. Is it nice behaviour? Or will it mess things up because rollback
will cause things to get out of sync with the deployments?


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju as a snap

2016-08-07 Thread Stuart Bishop
On 3 August 2016 at 04:00, Nicholas Skaggs <nicholas.ska...@canonical.com>
wrote:

> The Juju client has been snappified and pushed to the juju store. Note for
> now it's just an amd64 build.
>
> snap install juju --beta --devmode
>
> The beta channel contains the latest beta, 2.0-beta13. This is a sneak
> preview of further builds, including the latest crack available in the edge
> channel as development happens.
>

You might want to get the snap added to the normal release process...
beta14 is the new hotness.

(Launchpad can do automatic builds of multiple architectures from a git or
bzr branch, and push them to the snap store for you)


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju as a snap

2016-08-03 Thread Stuart Bishop
On 3 August 2016 at 04:00, Nicholas Skaggs <nicholas.ska...@canonical.com>
wrote:

> The Juju client has been snappified and pushed to the juju store. Note for
> now it's just an amd64 build.
>
> snap install juju --beta --devmode
>
> The beta channel contains the latest beta, 2.0-beta13. This is a sneak
> preview of further builds, including the latest crack available in the edge
> channel as development happens.
>
> Note, that you will need to use juju add-credential to re-add your
> credentials for the snap -- credentials are not shared with the debian
> package. Also, should you have the debian package (from the archive or ppa)
> installed, it has priority in PATH for 'juju'. Uninstall the package, or
> run the snap directly by calling /snap/bin/juju.
>
> Feedback is welcome and appreciated! If you are running ubuntu 16.04 you
> already have snappy installed. Give it a try!


Any idea how plugins will work once you get this confined? I think plugins
will also need to be snapped and connected by an interface somehow (esp. as
deb packaged plugins would likely drag in a non-snap'd juju).

(Got my lxd environment bootstrapped, so working fine here!)


-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: charm build - transpose configs

2016-08-02 Thread Stuart Bishop
On 1 August 2016 at 01:16, James Beedy <jamesbe...@gmail.com> wrote:

> Team,
>
> I'm having a few issues I could use some insight on.
>
> 1. Config default values defined in my layer's config.yaml
> <https://github.com/jamesbeedy/layer-gitlab/blob/master/config.yaml> don't
> seem to be making it into the built charm.
> * The config values are defined to provide default values to the
> subordinate layer-apt config params.
> * The config definitions seem to make it into my built charm's
> config.yaml <http://paste.ubuntu.com/21659447/>, just not the specified
> default values from layer-gitlab config.yaml
> <https://github.com/jamesbeedy/layer-gitlab/blob/master/config.yaml>. <-
> This is strikingly odd, because when I include another layer, say
> juju-layer-node <https://github.com/battlemidget/juju-layer-node>,
> following which build my top layer into a charm, I get the default configs
> for layer-apt, defined by layer-node in my built charm. This is different
> than what happens when I build my charm-gitlab
> <https://github.com/jamesbeedy/charm-gitlab>, which includes layer-gitlab
> <https://github.com/jamesbeedy/layer-gitlab>, which has the default
> values specified to satisfy the params for the layer-apt.
>
> Where am I going wrong here?
>

You're doing it right. Looks like charm-build isn't merging config.yaml the
way that is needed. I think an issue on https://github.com/juju/charm-tools
is in order.



> 2. How are the merging configs into a built charm handled when multiple
> layers included by the built charm include the same layer, and define
> default config params for that layer?
>

Should be like any other merge, with the higher layers overriding the lower
layers in a reasonably standard multiple-inheritance style way. I'm not
sure how it could work at all otherwise.



-- 
Stuart Bishop <stuart.bis...@canonical.com>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Layer-Supervisor

2016-07-21 Thread Stuart Bishop
On 21 July 2016 at 10:40, James Beedy <jamesbe...@gmail.com> wrote:

> I'll take all the input/feedback/criticism I can get on this - thanks!

Under Multi Application Support, the example references myapp1_ctxt
and myapp2_ctxt variables but they are both undefined and there is no
mention of what they are supposed to contain.

From a charming perspective, I'm interested in whether I should be using
Supervisor or systemd to control my applications. If you want uptake,
a simple Python API for generating simple templates is preferable to
expecting us to read the Supervisor documentation and learn its
configuration syntax ;)

I think you want a wheelhouse.txt listing supervisor in your layer, so
the dependency gets pulled in at 'charm build' time, or the Ubuntu
package listed in layer.yaml under options->basic->packages. I don't
see anywhere in the layer that is installing the dependency.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju status show blank for public name for one machine.

2016-07-06 Thread Stuart Bishop
On 6 July 2016 at 20:16, Narinder Gupta <narinder.gu...@canonical.com> wrote:
> Well I have created the bug
> https://bugs.launchpad.net/juju-core/+bug/1599507 and attached the logs of
> physical node where issue was seen.

I think this is a dupe of
https://bugs.launchpad.net/juju-core/+bug/1534757 (which expired, as I
had not been able to reproduce it).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Missing yaml import for charmhelpers on Xenial?

2016-07-02 Thread Stuart Bishop
On 2 July 2016 at 03:06, Pete Vander Giessen
<pete.vandergies...@canonical.com> wrote:
> I wrote:
>> I think that I figured this one out. charmhelpers does have PyYAML and six
>> as dependencies, but the mariadb charm was deploying its own, probably out
>> of date version of charmhelpers. Telling it to install charmhelpers through
>> the juju tools, rather than stuff them into its own source tree should help
>> a lot ...
>
> ... but that was before I did my homework. I understand better now how
> charmhelpers gets integrated into non layered charms, and I see why it's
> bypassing its own setup.py file, and making it tricky to install the missing
> stuff, outside of a supplemental bash script. Hmmm ...

If you resync charm-helpers with an up to date version you should be
right. charm-helpers
bootstraps the dependencies itself (see the ImportError exception
handlers in charmhelpers/__init__.py).

This makefile stanza is the 'standard' way of resyncing charmhelpers:

sync:
	@bzr cat \
	  lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
	  > .charm_helpers_sync.py
	@python .charm_helpers_sync.py -c charm-helpers.yaml

And you need a charm-helpers.yaml describing the bits that need
embedding in your charm, like the following:

destination: hooks/charmhelpers
branch: lp:charm-helpers
include:
- coordinator
- core
- fetch
- contrib.charmsupport
- contrib.templating.jinja
- contrib.network.ufw
- contrib.benchmark


(this is of course all superseded with charm-tools and reactive
charms, which handle dependencies better).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: public-address and dns of nodes are IPs instead of hostnames

2016-06-30 Thread Stuart Bishop
On 30 June 2016 at 11:11, José Antonio Rey <j...@ubuntu.com> wrote:

> I believe this is not a bug, but a feature introduced a while ago.

It might be tied up with https://bugs.launchpad.net/bugs/1557769 ,
where in 1.25.4 DNS names started appearing with some providers where
IP addresses used to be. Which breaks a number of charms, but only on
the providers with the unexpected behaviour.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Charm push crashes when uploading big charms

2016-06-22 Thread Stuart Bishop
On 21 June 2016 at 02:40, Jay Wren <jay.w...@canonical.com> wrote:
> Thanks for the further testing.
>
> Now I'm questioning how in the world I was able to see the same error.
>
> I will continue my testing to attempt to reproduce the error.
>
> Also, the Apache Timeout 300 should behave exactly as Mark said. I'm still
> trying to find what is causing this failure.

The error from the bug report originates from Squid, which is probably
buffering the upload. The timeout might happen between Squid and
Apache if it takes too long to upload and there is no keep-alive
mechanism in place.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Question for cosmetic and understandability of breadcrumbs in github

2016-06-17 Thread Stuart Bishop
On 17 June 2016 at 04:42, David Britton <david.brit...@canonical.com> wrote:

> We are planning a tag for every push for the landscape-server and client
> charms, and bundles.  +1 on it being mentioned as a best practice (the same
> type of thing as when you release a version of any other software!).
> Though, I would recommend using the full charm store identifier eg:
> 'cs:~user/charm-3'.  Basically, the full standard out of the charm push
> operation.

This is what I'm doing with the PostgreSQL charm. I wrote up
https://github.com/stub42/ReactiveCharmingWorkflow to help polish my
processes as much as I could. I start with a layer, build to a
separate branch for testing, push that to the charmstore, tag it with
the pushed revno and publish it to the dev channel. If tests pass, I
merge --no-ff the tested build into a release branch, publish it to
the release channel and tag it with both charmstore rev and semantic
version number. By doing the merges the right way, the change history
of the primary layer is visible on all three branches (the 'built'
commit has the source 'layer' commit as a parent, and 'git log' does
what you want). I have an issue open on the charmstore-client to make
parsing the pushed charmstore revno easier.

After doing the write up it became obvious that it is all still too
fiddly, so rather than polish it for publication I instead planned to
do some git plugins to smooth over the process (sketched out in the
'Future' section).

I'm mulling over how to handle dependent layers, and if we can get
them into the repo to solve the pinning problem and include their
change history. I haven't got this sketched out very far beyond some
vague thoughts that subtrees might be useful, and don't know if it is
at all practical. The build and publish process seems solid though,
and the 'manual' equivalent in my Makefile rules has been working well
for me. I want to get the basic build, push and publish plugins done
in the very near future.


> I also like the repo-info for traceability the other way around.  They solve
> a similar problem but depending on where you are looking are both useful.

Yes. This seems a good thing to have my proposed 'git charm-build' generate too.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: How to coordinate status messaging between layers?

2016-06-03 Thread Stuart Bishop
On 3 June 2016 at 16:26, Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
> Hi all
>
>
> I'm having a hard time doing status reporting from a reactive Charm. I can't
> find a good way to coordinate status messaging between different handlers
> and layers.
>
> Let me explain one of my use-cases.
>
> 1. We have a service that connects to a gateway to request a port forward.
> When the service receives such a forward, it displays this forward in its
> status message. Our management software uses this message to figure out how
> to connect to the service from the internet.
>
> 2. The service has the ability to deploy a bundle. When the bundle
> deployment fails, it goes into blocked state and shows a message explaining
> what went wrong.
>
>
> At this point, the port-forward message isn't visible anymore meaning our
> management program can't connect to the service anymore.
>
>
> Message 1 and 2 are displayed by two different layers. I can't find a good
> way to coordinate status messaging between the two layers in a way that
> keeps compatibility with all the existing layers at
> interfaces.juju.solutions.
>
> I guess what I'd like to do is have status messages that consist of multiple
> parts. Each handler can choose what part of the message to update. This
> would enable my charm to keep the port-forward information in the message
> even though another layer displays a "bundle deployment failed" message.
>
> Any idea how I can solve this? Have other people experienced similar issues?

If you need your management program to see the port-forward message
from status even after your unit has gone tits up and blocked, you
could retrieve it from the status history rather than the current
status message. I recall a proposal to let a charm return some extra
information to the operator, separate from the workload state, being
argued to death and the feature eventually killed, but maybe we do
need it.

For my work I'm using a helper to set workload status. It allows me to
pass None for the status, meaning keep the same status but change the
message. I find I can use this in most circumstances and it solves a lot
of issues. It particularly helps dependent layers, which usually have no
business setting maintenance or active, and only occasionally waiting
or blocked.
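
The helper boils down to something like this (a sketch of the idea rather
than the exact code I use):

from charmhelpers.core import hookenv

def status_set(state, message):
    if state is None:
        state = hookenv.status_get()[0]  # Keep the current workload state.
    hookenv.status_set(state, message)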

But I don't have a true solution. It's still difficult to avoid
handlers stomping over important messages when we are writing
decoupled code.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Quick win - juju check

2016-05-26 Thread Stuart Bishop
On 24 May 2016 at 11:14, Tim Penhey <tim.pen...@canonical.com> wrote:

> We talked quite a bit in Vancouver about quick wins. Things we could get
> into Juju that are simple to write that add quick value.

For trivial, quick wins consider:

'juju do --wait', from
https://bugs.launchpad.net/juju-core/+bug/1445066 (hey, you filed that
bug).

Adding a common option for the *-set and other hook environment tools
to get their data from stdin, rather than the command line, from
https://bugs.launchpad.net/juju-core/+bug/1274460

My favourite is as always 'juju wait', but that might not turn out to
be trivial.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-28 Thread Stuart Bishop
On 9 March 2016 at 06:51, Mark Shuttleworth <m...@ubuntu.com> wrote:
> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.

Another item I'd like to see is distribution upgrades. We now have a
lot of systems deployed with Trusty that will need to be upgraded to
Xenial not too far in the future. For many services you would just
bring up a new service with a new name and cut over, but this is
impractical for other services such as database shards deployed on
MaaS provisioned hardware. Handling upgrades may be as simple as
allowing operators (or a charm action) to perform the necessary
dist-upgrade one unit at a time and have the controller notice and
cope when the unit's jujud is bounced. Not all units would be running
the same distribution release at the same time, and I'm assuming the
service is running a multi-series charm here that supports both
releases (so we don't need to worry about how to handle upgrade-charm
hooks, at least for now)

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Backwards incompatible change to config.changed states

2016-04-25 Thread Stuart Bishop
On 23 April 2016 at 04:02, Cory Johns <cory.jo...@canonical.com> wrote:

> Is anyone depending on the current behavior?  Are there any objections to
> this change?

I can argue both designs, but think that the most useful one is what
you are proposing (config.changed not being set in the first hook,
which is not necessarily the install hook).
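
For what it is worth, my handlers for this look something like the following
(a sketch; the 'port' option name is made up), and with your proposed
semantics they will only fire on a real change rather than on the first hook:

from charms.reactive import when
from charmhelpers.core import hookenv

@when('config.changed.port')
def port_changed():
    config = hookenv.config()
    hookenv.log('port changed from {} to {}'.format(
        config.previous('port'), config['port']))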

If you are clarifying this, you should also clarify how things work in
the upgrade-charm hook when new config options are added or removed.

Your proposed change does not affect my work, and will allow me to
simplify some things.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Advice on unit testing reactive handlers?

2016-04-10 Thread Stuart Bishop
On 10 April 2016 at 22:31, Pete Vander Giessen <pet...@gmail.com> wrote:
> Hello All,
>
> I'm currently working on writing a layered charm, and I had a question about
> unit (or unit-ish) testing. Is it practical to write unit test for, say,
> helpers for a reactive handler? Does anybody have any favorite examples they
> can point me at?
>
> There's a lot of awesome automagic in a juju charm, but I'm having a little
> trouble setting up paths and environments with that magic mocked out for
> unit-style tests. :-/
>
> (The reason I'm trying to write smaller tests is that I'd like to speed up
> my testing cycle while I'm still in the obvious mistakes and silly typos
> stage of writing the charm)

https://git.launchpad.net/postgresql-charm has both unit and integration tests.
https://git.launchpad.net/postgresql-charm/tree/tests/test_postgresql.py
are the bulk of the unit tests, testing the helpers in
reactive/postgresql/postgresql.py.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: New juju in ubuntu

2016-04-07 Thread Stuart Bishop
On 7 April 2016 at 16:46, roger peppe <roger.pe...@canonical.com> wrote:
> On 7 April 2016 at 10:17, Stuart Bishop <stuart.bis...@canonical.com> wrote:
>> On 7 April 2016 at 16:03, roger peppe <roger.pe...@canonical.com> wrote:
>>> On 7 April 2016 at 09:38, Tim Penhey <tim.pen...@canonical.com> wrote:
>>>> We could probably set an environment variable for the plugin called
>>>> JUJU_BIN that is the juju that invoked it.
>>>>
>>>> Wouldn't be too hard.
>>>
>>> How does that stop old plugins failing because the new juju is trying
>>> to use them?
>>>
>>> An alternative possibility: name all new plugins with the prefix "juju2-" 
>>> rather
>>> than "juju".
>>
>> I've opened https://bugs.launchpad.net/juju-core/+bug/1567296 to track this.
>>
>> Prepending the $PATH is not hard either - just override the
>> environment in the exec() call.
>>
>> The nicest approach may be to not use 'juju1', 'juju2' and 'juju' but
>> instead just 'juju'. It would be a thin wrapper that sets the $PATH
>> and invokes the correct binary based on some configuration such as an
>> environment variable. This would fix plugins, and lots of other stuff
>> that are about to break too such as deployment scripts, test suites
>> etc.
>
> There are actually two problems here. One is the fact that plugins
> use the Juju binary. For that, setting the PATH might well be the right thing.
>
> But there's also a problem with other plugins that use the Juju API
> directly (they might be written in Go, for example) and therefore
> implicitly assume the that they're talking to a juju 1 or juju 2 environment.
> Since local configuration files have changed and the API has changed, it's
> important that a plugin written for Go 1 won't be invoked by a juju 2
> binary.

If juju 2.x changed the plugin prefix from juju- to juju2-, that would
also solve the issue of juju 2.x specific plugins showing up in juju
1.x's command line help and vice versa.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New juju in ubuntu

2016-04-07 Thread Stuart Bishop
On 7 April 2016 at 16:03, roger peppe <roger.pe...@canonical.com> wrote:
> On 7 April 2016 at 09:38, Tim Penhey <tim.pen...@canonical.com> wrote:
>> We could probably set an environment variable for the plugin called
>> JUJU_BIN that is the juju that invoked it.
>>
>> Wouldn't be too hard.
>
> How does that stop old plugins failing because the new juju is trying
> to use them?
>
> An alternative possibility: name all new plugins with the prefix "juju2-" 
> rather
> than "juju".

I've opened https://bugs.launchpad.net/juju-core/+bug/1567296 to track this.

Prepending the $PATH is not hard either - just override the
environment in the exec() call.

The nicest approach may be to not use 'juju1', 'juju2' and 'juju' but
instead just 'juju'. It would be a thin wrapper that sets the $PATH
and invokes the correct binary based on some configuration such as an
environment variable. This would fix plugins, and lots of other stuff
that are about to break too such as deployment scripts, test suites
etc.
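
For illustration, a rough sketch of such a thin wrapper (entirely
hypothetical: the JUJU_VERSION variable and the 2.0 install path are
invented for this example; only /var/lib/juju-1.25/bin is mentioned
elsewhere in this thread):

```python
#!/usr/bin/env python
# Hypothetical 'juju' dispatch wrapper: pick a binary directory based on an
# environment variable, prepend it to $PATH so plugins that shell out to
# 'juju' get the same version, then exec the real binary.
import os
import sys

BIN_DIRS = {
    '1': '/var/lib/juju-1.25/bin',   # path mentioned earlier in the thread
    '2': '/usr/lib/juju-2.0/bin',    # invented path, purely illustrative
}


def main():
    version = os.environ.get('JUJU_VERSION', '2')   # invented variable name
    bin_dir = BIN_DIRS.get(version, BIN_DIRS['2'])
    env = dict(os.environ)
    env['PATH'] = bin_dir + os.pathsep + env.get('PATH', '')
    real_juju = os.path.join(bin_dir, 'juju')
    os.execve(real_juju, [real_juju] + sys.argv[1:], env)


if __name__ == '__main__':
    main()
```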

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New juju in ubuntu

2016-04-06 Thread Stuart Bishop
On 7 April 2016 at 03:55, Marco Ceppi <marco.ce...@canonical.com> wrote:
>
> On Wed, Apr 6, 2016 at 10:07 AM Stuart Bishop <stuart.bis...@canonical.com>
> wrote:
>>
>> On 5 April 2016 at 23:35, Martin Packman <martin.pack...@canonical.com>
>> wrote:
>>
>> > The challenge here is we want Juju 2.0 and all the new functionality
>> > to be the default on release, but not break our existing users who
>> > have working Juju 1.X environments and no deployment upgrade path yet.
>> > So, versions 1 and 2 have to be co-installable, and when upgrading to
>> > xenial users should get the new version without their existing working
>> > juju being removed.
>> >
>> > There are several ways to accomplish that, but based on feedback from
>> > the release team, we switched from using update-alternatives to having
>> > 'juju' on xenial always be 2.0, and exposing the 1.X client via a
>> > 'juju-1' binary wrapper. Existing scripts can either be changed to use
>> > the new name, or add the version-specific binaries directory
>> > '/var/lib/juju-1.25/bin' to the path.
>>
>> How do our plugins know what version of juju is in play? Can they
>> assume that the 'juju' binary found on the path is the juju that
>> invoked the plugin, or is there some other way to tell using
>> environment variables or such? Or will all the juju plugins just fail
>> if they are invoked from the non-default juju version?
>
>
> You can invoke `juju version` from within the plugin and parse the output.
> That's what I've been doing when I need to distinguish functionality.

That seems fine if you are invoking the plugin from the default
unnumbered 'juju'. But running 'juju2 wait' will mean that juju-wait
will be executing juju 1.x commands and fail. And conversely running
'juju1 wait' will invoke juju 2.x and probably fail.

I think the plugin API needs to be extended to support allowing
multiple juju versions to coexist. An environment variable would do
the trick but require every plugin to be fixed. Altering $PATH so
'juju' runs the correct juju would allow existing plugins to run
unmodified (the bulk of them will work with both juju1 and juju2,
since the cli is similar enough).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New juju in ubuntu

2016-04-06 Thread Stuart Bishop
On 5 April 2016 at 23:35, Martin Packman <martin.pack...@canonical.com> wrote:

> The challenge here is we want Juju 2.0 and all the new functionality
> to be the default on release, but not break our existing users who
> have working Juju 1.X environments and no deployment upgrade path yet.
> So, versions 1 and 2 have to be co-installable, and when upgrading to
> xenial users should get the new version without their existing working
> juju being removed.
>
> There are several ways to accomplish that, but based on feedback from
> the release team, we switched from using update-alternatives to having
> 'juju' on xenial always be 2.0, and exposing the 1.X client via a
> 'juju-1' binary wrapper. Existing scripts can either be changed to use
> the new name, or add the version-specific binaries directory
> '/var/lib/juju-1.25/bin' to the path.

How do our plugins know what version of juju is in play? Can they
assume that the 'juju' binary found on the path is the juju that
invoked the plugin, or is there some other way to tell using
environment variables or such? Or will all the juju plugins just fail
if they are invoked from the non-default juju version?

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Charm layers & Windows

2016-04-04 Thread Stuart Bishop
On 4 April 2016 at 14:17, Andrew Wilkins <andrew.wilk...@canonical.com> wrote:
> Hi,
>
> I would like to write a charm that should be mostly identical on Windows and
> Linux, so I think it would make sense to have common code in the form of a
> layer.
>
> Is anyone working on getting "charm build", layers, and friends to work with
> Windows workloads? If not, I may look into it myself.

I think you could do it right now if you added explicit powershell
hook stubs that a) ensure Python is installed on the path and b) kick
off the Python hooks. https://github.com/juju/charm-tools/issues/98 is
about having 'charm build' generate the required hook stubs from
layer.yaml instead of embedding them, which will help here in the
future.

You will need two root layers for the two charms, one for Ubuntu and
one for Windows, since you can't list both Windows and Ubuntu releases
as supported series in the same metadata.yaml.

A source of potential problems is charmhelpers as it is not tested
under Windows and there are likely linuxisms buried in there.
charms.reactive doesn't need much of it though, so it is likely to be
fine. I haven't seen anything obviously platform dependent apart from
the obvious stuff like wrappers around apt. There might be some linux
specific imports at the top level that need to be moved to import on
demand (import apt, import distro_info etc).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Action return value too long for commandline

2016-04-03 Thread Stuart Bishop
On 4 April 2016 at 02:00, Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
> Hi all
>
>
> The apache-kafka charm has an action "read-topic" that can return a lot of
> data. Sometimes, this data is too long to be passed to `action-set` by
> commandline. You get the following error:
>
> Traceback (most recent call last):
>   File "actions/read-topic", line 36, in <module>
> hookenv.action_set({'output': output})
>   File
> "/usr/local/lib/python2.7/dist-packages/charmhelpers/core/hookenv.py", line
> 615, in action_set
> subprocess.check_call(cmd)
>   File "/usr/lib/python2.7/subprocess.py", line 535, in check_call
> retcode = call(*popenargs, **kwargs)
>   File "/usr/lib/python2.7/subprocess.py", line 522, in call
> return Popen(*popenargs, **kwargs).wait()
>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
> errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
> raise child_exception
> OSError: [Errno 7] Argument list too long
>
>
> Is there any way to pass a file to action-set?

This bug affects many of the Juju tools causing charms to fail at
scale. https://bugs.launchpad.net/juju-core/+bug/1437366
(relation-set) is the only one I know of that has been fixed.
https://bugs.launchpad.net/juju-core/+bug/1274460 (juju-log) is still
open. leader-set also fails, now I think of it, but I haven't tripped
over that one (it limits scalability the way I'm using leadership
settings, but I should be able to squeeze out several hundred units).
Maybe we can use the opportunity to fix quoting and encoding issues.

If you can get past the command line length limitations, I suspect
the next glass ceiling is the 16MB document size limit in MongoDB
(which is large enough to not need fixing?)

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-02 Thread Stuart Bishop
On 1 April 2016 at 20:50, Mark Shuttleworth <m...@ubuntu.com> wrote:
> On 19/03/16 01:02, Stuart Bishop wrote:
>> On 9 March 2016 at 10:51, Mark Shuttleworth <m...@ubuntu.com> wrote:
>>
>>> Operational concerns
>> I still want 'juju-wait' as a supported, builtin command rather than
>> as a fragile plugin I maintain and as code embedded in Amulet that the
>> ecosystem team maintain. A thoughtless change to Juju's status
>> reporting would break all our CI systems.
>
> Hmm.. I would have thought that would be a lot more reasonable now we
> have status well in hand. However, the charms need to support status for
> it to be meaningful to the average operator, and we haven't yet made
> good status support a requirement for charm promulgation in the store.
>
> I'll put this on the list to discuss.


It is easier with Juju 1.24+. You check the status. If all units are
idle, you wait about 15 seconds and check again. If all units are
still idle and the timestamps haven't changed, the environment is
probably idle. And for some (all?) versions of Juju, you also need to
ssh into the units and ensure that one of the units in each service
thinks it is the leader as it can take some time for a new leader to
be elected.

Which means 'juju wait' as a plugin takes quite a while to run and
only gives a probable result, whereas if this information about the
environment was exposed it could be instantaneous and correct.
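
A rough sketch of that polling loop, assuming the 'services' and
'agent-status' keys of the 1.24-era YAML status output (the exact key
names vary between Juju releases, so treat them as illustrative):

```python
# Minimal 'probably idle' check: all agents idle, and still unchanged
# after a short pause.
import subprocess
import time

import yaml


def agent_states(status):
    """Map unit name -> (agent state, timestamp) from parsed status YAML."""
    units = {}
    for service in status.get('services', {}).values():
        for name, unit in service.get('units', {}).items():
            agent = unit.get('agent-status', {})
            units[name] = (agent.get('current'), agent.get('since'))
    return units


def probably_idle(pause=15):
    def snapshot():
        raw = subprocess.check_output(['juju', 'status', '--format=yaml'])
        return agent_states(yaml.safe_load(raw))

    first = snapshot()
    if any(state != 'idle' for state, _ in first.values()):
        return False
    time.sleep(pause)  # wait, then confirm nothing new has been queued
    return snapshot() == first
```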

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Charm Store policy updates and refinement for 2.0

2016-03-22 Thread Stuart Bishop
On 23 March 2016 at 06:37, Jorge O. Castro <jo...@ubuntu.com> wrote:
> Thanks for the feedback Stuart, I've pushed up a new revision.
>
>> I think the acceptable software sources needs to be expanded.
>
> I've added your recommendations for this section except for:
>
>> In addition, any software sources not in the main Ubuntu or CentOS
>> archives should be listed in configuration items that can be
>> overridden rather than hard coded in the charm
>
> I've changed this to a MUST as it's not that much work to do this and
> the effort seems trivial compared to forcing users to mangle a charm
> just to get it to deploy on production systems without egress.

Yeah. And fixing it after people are using your charm in production is
a pain, which I learned the hard way :)



-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Charm Store policy updates and refinement for 2.0

2016-03-21 Thread Stuart Bishop
On 19 March 2016 at 02:58, Jorge O. Castro <jo...@ubuntu.com> wrote:

> Recommendations from everyone on what we should include here would be
> most welcome, specifically our recommendations around Windows charms
> is non-existent.

I think the acceptable software sources needs to be expanded.
Launchpad PPAs should be acceptable as signing keys are securely
retrieved when using 'add-apt-repository ppa:foo/bar'. Also, 3rd party
apt repositories should be acceptable if the signing key is embedded
in the charm (PyPi could be checked similarly, but it seems rare to
find signed packages there).

In addition, any software sources not in the main Ubuntu or CentOS
archives should be listed in configuration items that can be
overridden rather than hard coded in the charm, or else the charm is
useless in network restricted environments (and yes, migrating to
resources may be a better user experience in many cases).

As examples, the PostgreSQL charm pulls non-default packages from the
upstream PostgreSQL apt repository (PGDG, which is the source which
flows to Debian and Ubuntu). The Cassandra charm pulls a required
driver from a PPA I control. It also installs packages from either the
Apache apt repository or the DataStax apt repository. Cassandra is not
available in the Debian or Ubuntu main archives, probably as it
required the Oracle JVM. Both charms use the
install_sources/install_keys config items parsed by charm-helpers and
the apt layer to make this configurable.

On a side note, it is somewhat disingenuous to block charms in the
store from pulling dependencies from untrusted sources at run time
when we happily pull dependencies from untrusted sources at build
time. I think the fix here is to do better at build time (moving the
interfaces web site to https: and ensuring clients use that address,
only allowing https:, git+ssh: and other secure protocols for pulling
branches, and checking GPG signatures of embedded wheels are the
issues I'm aware of).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: New feature for charmers - min-juju-version

2016-03-21 Thread Stuart Bishop
On 22 March 2016 at 01:06, Ryan Beisner <ryan.beis...@canonical.com> wrote:

> Rationale and use case:
> A single Keystone charm supports deployment (thereby enabling continued CI &
> testing) of Precise, Trusty, Wily, Xenial and onward.  It is planned to have
> a min-juju-version value of 1.25.x.  That charm will support >= 1.25.x,
> including 2.x, and is slated to release with 16.04.  This is representative
> of all of the OpenStack charms.

Bug #1545686 will cause you issues too, unless you are always testing
charms served from the store rather than local branches. 'series' is
more involved than min-juju-version, as the data type change to the
existing setting causes old versions to fail.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-18 Thread Stuart Bishop
sk space, but means you could
migrate a 10 unit Cassandra cluster to a new 5 unit Cassandra cluster.
(the charm doesn't actually do this yet, this is just speculation on
how it could be done). I imagine other services such as OpenStack
Swift would be in the same boat.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reactive roadmap

2016-03-14 Thread Stuart Bishop
On 9 March 2016 at 01:19, Simon Davy <simon.d...@canonical.com> wrote:

> 2) Layer pinning. Right now, layers are evolving fast, and the lack of
> pinning to layer versions has caused charm builds to break from day to
> day. Is this a planned feature?

You can pin the layers in your build environment - they are only
unpinned if you are pulling them direct from the upstream branch. A
tool like codetree could be used to checkout your dependencies with
pinned versions allowing repeatable builds if you need that. Allowing
you to deal with the accumulated breakage and incompatibilities in one
huge indigestible chunk later, far too late to affect upstream changes
;)


> 3) Downloading from the internet. This issue has been common in
> charmstore charms, and is discouraged, AIUI. But the same issue
> applies for layers, and possibly with more effect, due to a layer's
> composibility.  We simply can not utilise any layer that downloads
> things from github or similar, and I'm sure others are in a similar
> situation.  We're aware of resources, but not convinced this is a
> scalable solution for layers, as it makes using a charm that has
> layers that require resources much more complex. So, some clarity in
> this area would be helpful.

Yes, layers that do not work in network restricted environments are
not useful to many people. I think you will find layers will improve
things here. Layers only need to be written correctly once. And if
they are broken, only fixed once. A big improvement over cargo culted
code, where you could end up fixing essentially the same bug or adding
the same feature several times.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Testing Leader Election reconfiguration

2016-03-09 Thread Stuart Bishop
On 9 March 2016 at 20:31, Tom Barber <t...@analytical-labs.com> wrote:
> Morning all
>
> I'm trying to test for charm reconfiguration if the leader goes AWOL.

I put the role of the unit in its workload status, so it is easy for
operators to see which unit is master. And this also makes it easy for
tests to tell.
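
A minimal sketch of that pattern, assuming
charmhelpers.core.hookenv.status_set() (the messages are just examples):

```python
# Publish the unit's role in its workload status so operators and tests
# can both see which unit is currently the master.
from charmhelpers.core import hookenv


def publish_role(is_master):
    if is_master:
        hookenv.status_set('active', 'Live master')
    else:
        hookenv.status_set('active', 'Hot standby')
```

A test can then poll 'juju status' until exactly one unit reports itself
as the master before checking the service configuration.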


> Adam suggested that I watch the status waiting for the next leader election
> hook the wait on that and then check my service configs.

You are best off waiting for all the hooks to complete and a steady
state, not just leader elected (since things will still be in flux
when that hook fires, such as the leader-settings-changed hooks it
will probably trigger and the relation changes those hooks will likely
trigger). Use the juju-wait plugin, and maybe add support to
https://bugs.launchpad.net/juju-core/+bug/1488777 to get this into
core.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Automatic periodic upgrades as part of the base layer

2016-03-08 Thread Stuart Bishop
On 8 March 2016 at 05:24, Marco Ceppi <marco.ce...@canonical.com> wrote:
> This is definitely more an operator decision than a charm decision. There
> are two existing charms to address this. An unattended-upgrades charm and
> landscape-client. Check those out first to see if the fit your needs.
>
> Marco

I've been toying with the idea of a package upgrade action in the apt
layer. It would unhold held packages if necessary, set states giving
your handlers the opportunity to run pre/post upgrade steps (like
shutting daemons down and restarting them), run apt-get dist-upgrade,
and rehold packages if necessary. This would require implementing
action support in charms.reactive, which has been discussed a bit on
github.
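
Purely as speculation, the mechanical part could look something like the
action script below, with the pre/post handler states left as the missing
piece. The held package list is illustrative, and apt-mark is called
directly rather than through any layer API:

```python
#!/usr/bin/env python3
# Speculative package upgrade action: unhold, dist-upgrade, rehold.
import subprocess

from charmhelpers import fetch
from charmhelpers.core import hookenv

HELD_PACKAGES = ['postgresql-9.5']      # illustrative only


def main():
    subprocess.check_call(['apt-mark', 'unhold'] + HELD_PACKAGES)
    try:
        fetch.apt_update(fatal=True)
        fetch.apt_upgrade(fatal=True, dist=True)
    finally:
        subprocess.check_call(['apt-mark', 'hold'] + HELD_PACKAGES)
    hookenv.action_set({'outcome': 'success'})


if __name__ == '__main__':
    main()
```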

Landscape takes care of most day to day updates, but the most
important packages tend to get held to ensure unattended upgrades
don't take down the service.



> On Mon, Mar 7, 2016, 5:16 PM Mark Shuttleworth <m...@ubuntu.com> wrote:
>>
>> On 07/03/16 13:29, Merlijn Sebrechts wrote:
>> > What is your experience with upgrades. Do they have a tendency to break
>> > things? Should this be enabled by default, added in as a configurable
>> > switch or not added at all?
>>
>> In 16.04, if unattended-upgrades is installed you will by default get
>> security updates automatically and can opt in to additional updates.
>> Common practice is just to turn them on, with some percentage of
>> machines also enabling the "proposed" pocket (where stuff goes before it
>> gets to the updates pocket). Machines with "proposed" act as canaries
>> for incoming updates. Security tends to land hard and fast because,
>> well, security, but then it gets a lot more QA and the changes are
>> generally tiny.
>>
>> Mark

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Semver instead of revisions for charms?

2016-02-28 Thread Stuart Bishop
On 28 February 2016 at 19:49, Marco Ceppi <marco.ce...@canonical.com> wrote:
> There is no plan at this time to move to anything other than revision
> numbers for charm. The idea behind the system is there should really /never/
> be any backwards compatible breaks. If you find yourself running into a
> situation, make use of the upgrade-charm hook to update/rewrite any internal
> data structures you're using to the new version.
>
> This way charms are always forward looking and up-gradable as such. While
> I'm a huge fan of semver, adding that as a primitive for charms means there
> may be chances where a user of a charm won't be able to upgrade. That's a
> precedence we don't want to set.

I'm not sure how semantic versioning would make the situation any
different. I can introduce a backwards incompatible change just as
easily between cs:foo/99 and cs:foo/100 as I can between v3.9.4 and
v4.0.0. For what its worth, I'm switching to semantic version numbers
in my branches via tags in order to properly document changes and
tested upgrade paths. This is decoupled from the charm store
revisions, as several stable releases can be made before it gets
synced into the charm store.

I believe backwards compatibility issues are impossible to avoid
completely. Sometimes they are necessary, sometimes they are
accidental, and reverting landed revisions very dangerous as it
introduces a completely untested upgrade path for deploys made from
the reverted version (it might be weeks before a backwards
incompatible change is found). While we can all do our best,
occasionally it has to happen and all we can do is make things as
clear as possible to users.

I find versioning is a common issue, particularly regarding upgrades.
It seems more common than not that people are deploying from branches
rather than the charm store (either for bug fixes, requested features,
or lack of egress). I hope the version tags, git commit ids and maybe
a version.yaml will make these deploys much easier to deal with. I was
thinking a version.yaml convention would be good for composed charms,
as charm-build could merge them and we get tracking of which versions
of what layers got used in a particular build. And layers don't get
charm store revision ids. Once we have version numbers, we can do
proper bug tracking and milestones etc. on all these components that
can be used to build a charm. Machine readable, so the charm store
could display it and we could tell at a glance which charms need
rebuilding because they embed buggy or insecure layers.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: What to do with precise charms?

2016-02-19 Thread Stuart Bishop
On 20 February 2016 at 03:47, José Antonio Rey <j...@ubuntu.com> wrote:
> Hello,
>
> In approximately two months, Xenial is going to be released. Once that
> happens, we are going to have three supported LTS releases: precise, trusty
> and xenial.
>
> I know that there is some people that have both precise and trusty charms.
> However, if they want to move their charms to xenial, they are going to have
> to maintain not two, but three charms. And if we want to have the latest in
> all charms, then features and software versions would have to be backported
> all the way to precise, which may complicate things a bit more.
>
> I'm wondering, would it be suitable for us to establish a process where a
> charm author decides to no longer maintain a charm in an old but supported
> release and just move that specific series charm to ~unmaintained-charms? I
> think it's better to start thinking on this now, before it gets too close to
> release time.
>
> Happy to hear all your comments/suggestions on this.

I already have charms with deprecated precise branches, used for some
very old legacy installs.

With the 2.0 release and charm store updates, I will also want to
deprecate the trusty branches in favor of a series-independent branch.
I've already started this, moving the PostgreSQL source layer to
launchpad.net/postgresql-charm. The trusty bzr branch will just be a
hindrance when it is no longer needed for ingestion into the charm
store.

It is my understanding that the charm store will accept the series
independent branch and produce cs:trusty/foo series dependent blobs
for older Juju clients. There is still an open bug about allowing Juju
1.25 to deploy series independent branches or local charms without
hacking (https://bugs.launchpad.net/juju-core/+bug/1545686, not a huge
issue since with a local branch you can easily hack metadata.yaml).

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: @when('config.changed')

2016-02-19 Thread Stuart Bishop
On 19 February 2016 at 16:32, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:

> I completely agree with you on point two. The semantics I'm trying to get at
> for point one are a bit different. Events are information that is relevant
> for a single point in time, not until the information is processed. Events
> can be processed by multiple handlers or by none, and in both cases they
> become irrelevant after the single point in time.

> So what I'm getting at is this:
>
>  - Handlers with event decorators may only be queued immediately after the
> event has fired
>  - Events do not get retested in the queue
>  - States work as they do now

Apart from a file changing, what 'events' happen in the middle of a
hook that shouldn't remain set for the remainder of the hook? I can't
come up with any valid use cases, so maybe this all becomes a case of
fixing @when_file_changed rather than adding new stuff.

That said, if there are valid use cases, perhaps an @after decorator.
It would work like @when, except it would remain active until the
handler has been run. ie. @when('foo') might get dequeued if the 'foo'
state is removed, but the @after('foo') would remain on the queue even
if the 'foo' state is removed. If a handler was decorated by both
@after('foo') and @when('bar'), it could get queued when 'bar' is set
even if 'foo' has since been removed. It seems easy enough to
implement (just a new decorator - no need to change the reactor), and
the same approach could be used to implement a fixed
@when_file_changed.

(You could even implement @after in your charm or in a layer)
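
The same behaviour can be approximated today with an explicit latch
state, which is roughly what an @after decorator would automate (a
minimal sketch; the 'foo.seen' state name is invented):

```python
from charmhelpers.core import hookenv
from charms.reactive import when, when_not, set_state, remove_state


@when('foo')
@when_not('foo.seen')
def latch_foo():
    # Remember that 'foo' was set, even if it is removed later in the hook.
    set_state('foo.seen')


@when('foo.seen')
def handle_after_foo():
    hookenv.log("running even though 'foo' may since have been removed")
    remove_state('foo.seen')    # only cleared once the handler has run
```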

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


PostgreSQL charm move

2016-02-18 Thread Stuart Bishop
Hi.

The PostgreSQL charm that I maintain has moved to git, the Reactive
Framework, and a series independent spot in Launchpad.

The new home is https://launchpad.net/postgresql-charm.

Bug reports go to https://bugs.launchpad.net/postgresql-charm. I will
keep monitoring bugs in the charms/trusty/postgresql namespace for
now.

Latest tested, stable and deployable version is in the built branch at
git+ssh://git.launchpad.net/postgresql-charm:

mkdir trusty
git clone -b built \
https://git.launchpad.net/postgresql-charm trusty/postgresql
JUJU_REPOSITORY=. juju deploy local:postgresql

The main source layer is in the master branch at
git+ssh://git.launchpad.net/postgresql-charm. Merge proposals should
be made against this branch. This is the branch used to build the
deployable built branch.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Dynamically registering reactive handlers at runtime

2016-02-17 Thread Stuart Bishop
On 12 February 2016 at 22:55, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> Hi all
>
>
> We have a layer that can install a number of different packages. It decides
> at runtime which packages will be installed. This layer uses the `apt` layer
> which sets a state `apt.installed.<packagename>`.
>
> We want to react to this installed state, but we don't know what the
> packagename will be beforehand. This gets decided at runtime. Therefore, we
> can't put it in a @when() decorator.
>
> Is there a way to dynamically register handlers at runtime?

Not with reactive as it stands, unless by 'at runtime' you mean 'at
import time'.

I previously patched @when_file_changed to accept callables, allowing
the filenames to be generated at run time. I imagine the same thing
would need to be done for the @when and @when_not decorators.

I'd experiment by creating your own decorator, and if it turns out to
be useful, propose it as a charms.reactive update.
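
For instance, a small wrapper can compute the state names at import time
and hand them to the stock @when decorator (a sketch only; when_installed
and its callable convention are invented for this example):

```python
from charms.reactive import when


def when_installed(get_packages):
    # get_packages() is called once, at import time, so the package list
    # can be computed by code rather than hard-coded in the decorator.
    states = ['apt.installed.' + name for name in get_packages()]
    return when(*states)


@when_installed(lambda: ['postgresql-9.5', 'postgresql-contrib-9.5'])
def packages_ready():
    pass  # all of the requested packages have been installed
```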

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: @when('config.changed')

2016-02-17 Thread Stuart Bishop
On 18 February 2016 at 01:44, Cory Johns <cory.jo...@canonical.com> wrote:

> If a charm config option has changed, the state "config.changed" will be set
> for the duration of the hook.  Additionally, specific states will be set for
> each config option that changed; that is, if option "foo" has changed, the
> state "config.changed.foo" will be set.  An example of code using this would

Will these states be set in the first hook?

I can come up with use cases for both yes and no answers to that
question. If I had done this in a layer, there would have been another
state named something like config.initial.

@when('config.changed')
def first_hook_and_config_changed(): pass

@when('config.changed')
@when_not('config.initial')
def config_changed_not_first_hook(): pass



> This provides a much cleaner way of detecting changes to config, and it is
> recommended that this be used in favor of @hook('config-changed') going
> forward, as the latter can actually run in to some situations, albeit rather
> rarely, where the charm sees new config option values before the
> config-changed hook has fired.  Using the reactive states avoids that
> completely as well as working more naturally with existing @when decorators.


Did you want to pull in the Leadership layer too? It's hard to know
when to stop :) I'd thought this would be better handled as a layer,
for no other reason than it could be done as a separate layer and keep
the basic layer thin.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Units & resources: are units homogeneous?

2016-02-16 Thread Stuart Bishop
On 17 February 2016 at 01:20, Katherine Cox-Buday
<katherine.cox-bu...@canonical.com> wrote:

> My understanding is that it's a goal to make the management of units more
> consistent, and making the units more homogeneous would support this, but
> I'm wondering from a workload perspective if this is also true? One example
> I could think of to support the discussion is a unit being elected leader
> and thus taking a different path through it's workflow than the other units.
> When it comes to resources, maybe this means it pulls a different sub-set of
> the declared resources, or maybe doesn't pull resources at all (e.g. it's
> coordinating the rest of the units or something).

While I have charms where units have distinct roles (one master,
multiple standbys, and the juju leader making decisions), they can be
treated as homogeneous since they need to be able to fail over from
one role to another. The only use case I can think of where different
resources might be pulled down on different units is deploying a new
service with data restored from a backup. The master would be the only
unit to pull down this resource (the backup) on deployment, and the
standbys would replicate it from the master.

And now I think of it, can I stream resources? I don't want to
provision a machine with 8TB of storage just so I can restore a 4TB
dump. Maybe this is just a terrible example, since I probably couldn't
be bothered uploading the 4TB dump in the first place, and would
instead setup tunnels and pipes to stream it into a 'juju run'
command. An abuse of Juju resources better suited to Juju blob
storage?

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Leadership layer released

2016-01-29 Thread Stuart Bishop
On 28 January 2016 at 21:42, Cory Johns <cory.jo...@canonical.com> wrote:

> I also very much like the pattern of having states that notify of changes to
> the leadership settings and think this pattern could apply well to config
> settings.  One addition I would suggest to that pattern would be to also

On config, yes, a similar pattern could be applied. It is somewhat
more complex than leadership though. The three different states I'm
thinking of are:

- config.set.{name}. The configuration option 'name' exists and is set
to a value. It is not set to null or the empty string.

- config.default.{name}. The configuration option 'name' exists, and
its value is unchanged from the default defined in config.yaml.

- config.changed.{name}. The configuration option 'name' exists and
its value has been changed since the previous hook was run. This state
will not be set in the next hook, unless the configuration option was
changed yet again.

(The wording specifically mentions the options existing, to be
specific about what will happen in upgrade-charm when config options
are dropped and no longer exist)
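
To make the proposal concrete, handlers against these states might look
something like this (a sketch only, since the layer doesn't exist yet;
the option names are illustrative):

```python
from charmhelpers.core import hookenv
from charms.reactive import when


@when('config.set.admin_email')
def admin_email_provided():
    hookenv.log('admin_email is set to a non-empty value')


@when('config.default.max_connections')
def using_default_pool_size():
    hookenv.log('max_connections is unchanged from the config.yaml default')


@when('config.changed.max_connections')
def pool_size_changed():
    hookenv.log('max_connections changed since the previous hook')
```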

The very first hook invoked (be it the install hook or a storage hook)
will have interesting behaviour. What options get the
config.changed.{name} state set? None of them? All of them? All that
have values changed from the default? I'm unsure of the best answer,
which is why the layer doesn't exist yet :) Should handlers waiting on
the config.changed.foo state always be invoked in the first hook,
never invoked in the first hook, or possibly be invoked in the first
hook?


> have a blanket `leadership.changed` state that would be set when *any* value
> was changed.  This is mostly to handle the case where you need to react to
> any of a set of multiple values changing.  Thoughts?

I've added this.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Apt layer released

2016-01-28 Thread Stuart Bishop
  the package installed (one state for each package).

  If a package has already been installed it will not be reinstalled.

  If a package has already been queued it will not be requeued, and
  the install options will not be changed.

* `installed()`

  Returns the set of deb packages installed by this layer.

* `purge(packages)`

  Purge one or more deb packages from the system

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Apt layer released

2016-01-28 Thread Stuart Bishop
On 28 January 2016 at 23:01, Adam Stokes <adam.sto...@canonical.com> wrote:
> Why would someone want to use this instead of what's provided in
> charmhelpers?

This wraps what is provided in charmhelpers.

To use raw charmhelpers, you need to write the hooks and handlers to
call its features at the right time and in the right order. You need
to call fetch.configure_sources, whenever the relevant config items
change. You need to install the list of extra packages, whenever the
relevant config item changes. You need to repin held packages, if the
option is set, whenever new packages are installed. To use the layer,
you just add an entry to layers.yaml and get it all done for you, and
it is all done consistently across all charms using the layer. You
don't have to worry about inconsistencies between your
reimplementation of the wheel with the next charms reimplementation of
the wheel causing confusion to users.

It also removes all the boilerplate. You don't need to include all
that gumph in your config.yaml.

It sets reactive states your handlers can wait on, so you don't have
to build and maintain that plumbing yourself.

By requesting a package install and having handlers wait on the
install to complete, you get to write highly decoupled code without
losing efficiency. Several different handlers in different parts of
the code base and even in different layers can all schedule the
packages they care about to be installed, and have a single apt-get
update run, and only if necessary.

Improvements to the layer improve all charms using the layer. When
Juju charm configuration gets richer data structures, we can write all
the migration, compatibility and deprecation stuff once and all charms
get it next time they are built. Fixes in the charmhelpers codebase
can't fix its callsites or improve the documentation in your
config.yaml.

So you want to use it to save initial work and future maintenance :)
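
For example, a handler built on the layer might look like this (a sketch
assuming the layer exposes a queue_install() helper alongside the
installed() and purge() helpers described earlier; the import path and
package name are illustrative):

```python
import charms.apt
from charms.reactive import when, when_not


@when_not('apt.installed.postgresql-9.5')
def request_packages():
    # Any number of handlers and layers can queue packages; the layer
    # batches them all into a single apt-get run.
    charms.apt.queue_install(['postgresql-9.5'])


@when('apt.installed.postgresql-9.5')
def configure_postgresql():
    pass  # the package is on disk, carry on with configuration
```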



-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Leadership layer released

2016-01-28 Thread Stuart Bishop
On 28 January 2016 at 22:47, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> Hi Marco
>
>
> Just to be clear, I was talking about wrapping the code to set a state that
> shows if something changed. I wasn't sure how to implement this. States are
> saved across hook invocations but 'x.changed' should be removed at the end
> of a hook invocation. Stuart's code solves this problem by updating the
> 'x.changed' state at the start of each hook invocation using
> hookenv.atstart().

Thats a bit of a hack.
https://github.com/juju-solutions/charms.reactive/pull/20 adds a
@setup decorator to charms.reactive which I think would be better.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Leadership layer released

2016-01-27 Thread Stuart Bishop
Hi.

I've put up my Leadership layer on http://interfaces.juju.solutions/.
This work was broken out from my PostgreSQL charm refactorings and
will also be used by the Cassandra charm when I get onto that. It is
obviously small and focused on leadership; I had been considering
consolidating a number of similar parts into a swiss army knife
'hookenv' layer, but decided that smaller bricks are more in the
spirit of layered charms and avoids most feature creep.

# Leadership Layer for Juju Charms

The Leadership layer is for charm-tools and 'charm build', making it
easier for layered charms to deal with Juju leadership.

This layer will initialize charms.reactive states, allowing you to
write handlers that will be activated by these states. It allows you
to completely avoid writing leader-elected and leader-settings-changed
hooks. As a simple example, these two handlers are all that is required
to make the leader unit generate a password if it is not already set,
and have the shared password stored in a file on all units:

```python
from charms.reactive import when, when_not
from reactive.leadership import leader_get, leader_set
from charmhelpers.core.host import pwgen, write_file


@when('leadership.is_leader')
@when_not('leadership.set.admin_password')
def generate_secret():
leader_set(admin_password=pwgen())


@when('leadership.changed.admin_password')
def store_secret():
write_file('/etc/foopass', leader_get('admin_password'))
```


## States

The following states are set appropriately on startup, before any @hook
decorated methods are invoked:

* `leadership.is_leader`

  This state is set when the unit is the leader. The unit will remain
  the leader for the remainder of the hook, but may not be leader in
  future hooks.

* `leadership.set.{varname}`

  This state is set for each leadership setting (ie. the
  `leadership.set.foo` state will be set if the leader has set
  the foo leadership setting to any value). It will remain
  set for the remainder of the hook, unless the unit is the leader
  and calls `reactive.leadership.leader_set()` and resets the value
  to None.

* `leadership.changed.{varname}`

  This state is set for each leadership setting that has changed
  since the last hook. It will remain set for the remainder of the
  hook. It will not be set in the next hook, unless the leader has
  changed the leadership setting yet again.


## Methods

The `reactive.leadership` module exposes the `leader_set()` and
`leader_get()` methods, which match the methods found in the
`charmhelpers.core.hookenv` module. `reactive.leadership.leader_set()`
should be used instead of the charmhelpers function to ensure that
the reactive state is updated when the leadership settings are. If you
do not do this, then you risk handlers waiting on these states to not
be run on the leader (because when the leader changes settings, it
triggers leader-settings-changed hooks on the follower units but
no hooks on itself).


## Support

This layer is maintained on Launchpad by
Stuart Bishop (stuart.bis...@canonical.com).

Code is available using git at git+ssh://git.launchpad.net/leadership-layer.

Bug reports can be made at https://bugs.launchpad.net/leadership-layer.

Queries and comments can be made on the Juju mailing list, Juju IRC
channels, or at https://answers.launchpad.net/leadership-layer.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Automatic retries of hooks

2016-01-20 Thread Stuart Bishop
On 20 January 2016 at 17:46, William Reade <william.re...@canonical.com> wrote:

> On Wed, Jan 20, 2016 at 8:46 AM, Stuart Bishop <stuart.bis...@canonical.com>
> wrote:

>> It happens naturally if you structure your charm to have a single hook
>> that does everything that needs to be done, rather than trying to
>> craft individual hooks to deal with specific events.
>
> Independent of everything else, *this* should *excellent* advice for
> speeding up your deployments. Have you already been writing charms like
> this? I'd love to hear your experiences; and, in particular, if you've
> noticed any improvement in deployment speed. The theoretically achievable
> speedup is vast, but the hook runner wasn't written with this approach in
> mind; we might need to make a couple of small tweaks [0] to get the best out
> of the approach.

The PostgreSQL charm has now existed in three forms: traditional,
services framework, and now reactive framework. Using the services
framework, deployment speed was slower than traditional. You ended up
with one very long string of steps, many of which were unnecessary. I
felt it was easier to maintain and understand, but the logs were noisier
and it was slower. The reactive framework is much faster deployment-wise
than all the other versions, as you can easily have only the necessary
steps triggered for the current state. The execution thread is harder to
follow, since there isn't really one, but it still seems very
maintainable and understandable. There is less code than in the other
versions. It does drive you to create separate handlers for each hook,
but the advice is to keep hooks to the absolute bare minimum needed to
adjust the charm's state based on the event, and to put all the actual
logic in the state-driven handlers.


-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Automatic retries of hooks

2016-01-19 Thread Stuart Bishop
On 20 January 2016 at 13:17, John Meinel <j...@arbash-meinel.com> wrote:

> There are classes of failures that a charm hook itself cannot handle. The
> specific one Bogdan was working with is the fact that the machine itself is
> getting restarted while the charm is in the middle of processing a hook.
> There isn't any way the hook itself can handle that, unless you could raise
> a very specific error that indicates you should be retried (so as it notices
> its about to die, it raises the try-me-again error).
>
> Hooks are supposed to be idempotent regardless, aren't they? So while we
> paper over transient bugs in them, doesn't it make the system more resilient
> overall?

The new update-status hook could be used to recover, as it is called
automatically at regular intervals. If the reboot really was random,
you would need to clear the error status first. But if it is triggered
by the charm, it is just a case of 'reboot(now+30s);
status_set('waiting', 'Waiting for reboot'); sys.exit(0)' and waiting
for the update-status hook to kick in.
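
A minimal sketch of that pattern, assuming
charmhelpers.core.hookenv.status_set() and a plain shutdown call (the
one minute delay is illustrative):

```python
import subprocess
import sys

from charmhelpers.core import hookenv


def reboot_and_resume():
    # Schedule the reboot slightly in the future so this hook can exit 0,
    # leaving the unit 'waiting' rather than in an error state. The
    # periodic update-status hook later observes the reboot has completed
    # and carries on from there.
    subprocess.check_call(['shutdown', '-r', '+1'])
    hookenv.status_set('waiting', 'Waiting for reboot')
    sys.exit(0)
```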

It happens naturally if you structure your charm to have a single hook
that does everything that needs to be done, rather than trying to
craft individual hooks to deal with specific events.



-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Deprecating charm config options: postgresql case

2016-01-14 Thread Stuart Bishop
On 15 January 2016 at 02:00, Andreas Hasenack <andr...@canonical.com> wrote:

>   max_connections:
> default: 100
> type: int
> description: >
> DEPRECATED. Use extra_pg_conf.
> Maximum number of connections to allow to the PG database
>
> The option still exists and can be set, but does nothing. The service will
> get whatever is set in the new extra_pg_conf option, which happens to be
> 100.

That would be a bug. It is supposed to still work, with warnings in
your logs that you are using deprecated options.


> Other deprecated options have a more explicit warning:
>
>   performance_tuning:
> default: "Mixed"
> type: string
> description: >
> DEPRECATED AND IGNORED. The pgtune project has been abandoned
> and the packages dropped from Debian and Ubuntu. The charm
> still performs some basic tuning, which users can tweak using
> extra_pg_config.
>
> In this specific postgresql case, looks like all (I just tested two, btw)
> deprecated options should have been marked with the extra "... AND IGNORED"
> text. But then again, is it worth it to silently accept them and do nothing,
> thereby introducing subtle run-time failures?

Just dropping options risks breaking lots of mojo specs, all at the
same time. The plan is to log warnings, escalate to irritating
workload status messages later, and eventually drop them. All the
options that matter are supposed to still be functional, and the few
that are ignored were dropped only after careful consideration of the impact.
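
The warning half of that plan is cheap to implement. A rough sketch,
reading the defaults straight out of config.yaml (the option names come
from the earlier examples and the mapping is illustrative):

```python
import os.path

import yaml

from charmhelpers.core import hookenv

# Deprecated option -> replacement (None means deprecated and ignored).
DEPRECATED_OPTIONS = {
    'max_connections': 'extra_pg_conf',
    'performance_tuning': None,
}


def warn_about_deprecated_options():
    config = hookenv.config()
    with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as f:
        defaults = {k: v.get('default')
                    for k, v in yaml.safe_load(f)['options'].items()}
    for old, replacement in DEPRECATED_OPTIONS.items():
        if old not in config or config[old] == defaults.get(old):
            continue  # untouched, nothing to warn about
        if replacement:
            msg = 'config option {} is deprecated; use {} instead'.format(
                old, replacement)
        else:
            msg = 'config option {} is deprecated and ignored'.format(old)
        hookenv.log(msg, hookenv.WARNING)
```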

I think there are also issues to deal with for upgrade-charm, although
Juju might have changed its behaviour since I last looked (are service
settings that no longer exist now silently dropped, or do they block
the upgrade?)

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Call for Feedback: LXD local provider

2016-01-08 Thread Stuart Bishop
On 8 January 2016 at 00:48, Jorge O. Castro <jo...@ubuntu.com> wrote:
> Hi everyone,
>
> Katherine walked me through using the new LXD provider in the Juju
> Alpha: https://linuxcontainers.org/lxd/
>
> The one caveat right now is that you need to be on wily or xenial as your 
> host.
>
> We are collecting feedback here along with the current working
> instructions: 
> https://docs.google.com/document/d/1lbh3ZkkSdBOGRadF_6FWrijbOhH4Vf2f7alrZFr8pz0/edit?usp=sharing

I don't seem to be able to add feedback there.

Initial feedback:
  - lxd hasn't made it into the release notes yet, at least in the
1.26alpha3 copy I have.

  - container creation is as slow or slower than lxc. I think there
are still some 'apt get updates', upgrades and package installs being
run between container creation and kicking off the charm. It is well
over an order of magnitude slower to do 'juju deploy ubuntu' than it
is to 'lxc launch ubuntu first'. We might need richer templates, with
agents and dependencies preinstalled. Yes, it is fast but seems only
as fast as the lxc provider with the btrfs hack has been for some time
(I'm using the btrfs hack with lxd too, per the lxd getting started
guide).

  - bootstrap spits out a well known and understood error. The images
team needs to fix this or juju team work around it, as it breaks
charms too (cassandra, rabbit, others have fallen victim): "sudo:
unable to resolve host
juju-f2339d90-dd3c-4a1f-8cd2-13e7c795df3f-machine-0". The fix is to
add the relevant entry for $hostname to /etc/hosts.

  - The namespace option in environments.yaml doesn't seem to have any
visible effect. I'm still getting container names like
juju-f2339d90-dd3c-4a1f-8cd2-13e7c795df3f-machine-0, whereas I'd like
something friendlier. This is likely just me not understanding what
this option does.

  - alas, I tripped over a show stopper for me elsewhere in 1.26alpha3
so haven't proceeded much further. Anecdotally it seems more reliable
than the old lxc provider, but I'll need to be able to do more runs to
confirm that.

  - I very much look forward to using a remote lxd server. It's always
surprising how many Cassandra nodes this little laptop can support,
but offloading it to a cloud vm while keeping the fast container
spinup times will be nice ;)

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Hook -relation-broken is broken with charms.reactive

2015-12-19 Thread Stuart Bishop
On 20 December 2015 at 02:13, Cory Johns <cory.jo...@canonical.com> wrote:
> On Sat, Dec 19, 2015 at 3:45 AM, Stuart Bishop <stuart.bis...@canonical.com>
> wrote:
>>
>> Which unit name would you use? -relation-departed gets run multiple
>> times, once for each unit. -relation-broken gets run once, after all
>> the -departed hooks have been run.
>
>
> That can't be right.  Surely it must run at least once on each unit still
> remaining in the relation.  Every relation consists of two end-points, and

A relation consists of several end-points. A relation is between two
services, which means it is between several units (several on the
'local' side, and several on the 'remote' side). It is not a
conversation between two units, where two units are passing messages
between each other, but a conference involving several units, where
each unit is yelling at every other unit. I find the
relations-are-conversations metaphor breaks down as soon as you have
two or more units to a service.

From the Juju docs:

""
[name]-relation-broken indicates that the current relation is no
longer valid, and that the charm's software must be configured as
though the relation had never existed. It will only be called after
every necessary -departed hook has been run; if it's being executed,
you can be sure that no remote units are currently known locally. It
is important to note that the -broken hook might run even if no other
units have ever joined the relation. This is not a bug: even if no
remote units have ever joined, the fact of the unit's participation
can be detected in other hooks via the relation-ids tool, and the
-broken hook needs to execute to give the charm an opportunity to
clean up any optimistically-generated configuration.
"""

A unit appears in the relation. For every unit in the remote service,
the relation-joined and relation-changed hooks are triggered. If a new
unit is added to the remote service, the relation-joined and
relation-changed hooks are triggered. If a unit in the remote service
departs, the relation-departed hook is triggered. If the relation is
destroyed, the relation-departed hook is triggered once for each unit
in the remote service and the relation-broken hook triggered a single
time, after all the relation-departed hooks have been triggered.


> even if the implementation of Juju means that it's difficult or impossible
> for the agent to populate that variable, there's still an objectively
> correct value to put there.  And it doesn't seem unreasonable to have
> expected Juju to do so, like it does for every relation hook.

> Also, as I referenced previously, I'm pretty sure that I could reconstruct
> the expected value by saving the list of departing unit(s) in the charm and
> comparing that to the list during the -broken hook.  But my point was that

I don't think there is a correct value to put there. In the -broken
hook, there will be zero units in the remote service. At this
point, there is no longer a remote service at all. The only thing you
would be doing is setting $REMOTE_UNIT to the last unit to depart,
which is arbitrary.


> I'm not sure it's worth doing that because I'm not sure I see a use-case for
> this hook that couldn't just as easily be done with the -departed hook
> alone.

relation-departed is when a remote *unit* has gone.
relation-broken is when the remote *service* has gone (or never
appeared in the first place)

In theory, you can use it to clean up resources required by the remote
service. For example, you could shut down daemons no longer required
or destroy the database used by the remote service. In theory, you
don't do this from the relation-departed hook, because the remote
service still exists (even if it has no units) and new units could be
added.

In practice, I don't think I've ever seen this done and suspect
relation-broken hooks are unnecessary atavisms like 'start' and 'stop'
hooks. In particular, cleaning up for $SERVICE in relation-broken is
tricky as Juju does not tell us what $SERVICE actually was - I think
all you know is the former relation id and relation name.



>> As far as the reactive framework is concerned, I don't think it fits
>> as a handler on a RelationBase subclass. It would work fine as a
>> 'standard' parameterless handler. Maybe you want some magic to destroy
>> conversations and such from the now defunct and useless relation
>> object.
>
>
> I'm not sure I understand what you mean here.  I don't see the need for any
> "magic" here.  During a relation hook, charms.reactive needs to be able to
> determine what two units the relation applied to in order to know what
> conversations they may have been participating in and those conversations'
> associated states.  If it can't determine that for -broken, then -broken
> can't be used with charms.reactive.


Re: Review Queue - midonet-api

2015-12-19 Thread Stuart Bishop
On 17 December 2015 at 20:10, James Page <james.p...@ubuntu.com> wrote:

> datastax and midonet repositories and some DNS hijacking (I'll write that up

I'd be interested in knowing what would be required of the Cassandra
charm for it to be used, rather than having Cassandra embedded.

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: The future of Charm Helpers

2015-11-23 Thread Stuart Bishop
On 23 November 2015 at 02:23, Marco Ceppi <marco.ce...@canonical.com> wrote:

> Under this proposal, `charmhelpers.core.hookenv` would more or less become
> `charms.helper` and items like `charmhelpers.contrib.openstack` would be
> moved to their own upstream project and be available as `charms.openstack`.
> They will instead depend on `charms.helper` for previous `hookenv` methods.
> This is a cleaner namespace while still providing the discoverability (search
> pypi index for `charms` and you'll see the ecosystem basically owns that
> space) desired from the current source tree.

> With the new charm build pattern and reactive framework this would fit in
> nicely as we continue on a "charming 2.0" march for quick, easy, and concise
> ways to create charms. I welcome a continued discussion on the subject with
> the hope we can reach a consensus soon on the best way forward. One thing is
> clear, the current way of maintaining charm-helpers is neither scalable nor
> manageable.

I don't think it matters what you do with the low level hookenv
library, as reactive charms should be using a higher level library
that sets states appropriately (and mixing calls just means state and
hook environment will get out of sync).

I think it is worth doing this in tandem with creating
charms.reactive.hookenv. It is really, really useful having handlers
watching for states like 'leadership.set.foo' or 'config.changed.bar'
or 'workloadstatus.blocked', but if layers start using the lower level
API then state will get out of sync with the hook environment.
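For example (a sketch assuming the stock basic and leadership layers,
with 'foo' and 'bar' as made-up names):

from charms.reactive import when
from charmhelpers.core import hookenv


@when('config.changed.bar')
def bar_changed():
    # The base layer sets config.changed.<option> when that option changes.
    hookenv.log('bar is now {!r}'.format(hookenv.config()['bar']))


@when('leadership.set.foo')
def foo_published():
    # The leadership layer sets leadership.set.<key> once the leader has
    # published that key.
    hookenv.log('the leader has published foo')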

Or should everything under the charms namespace be reactive framework
aware, with charms.reactive just being where the engine is stored?

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Making logging to MongoDB the default

2015-10-22 Thread Stuart Bishop
On 22 October 2015 at 22:17, Nate Finch <nate.fi...@canonical.com> wrote:

> IMO, all-machines.log is a bad idea anyway (it duplicates what's in the log
> files already, and makes it very likely that the state machines will run out
> of disk space, since they're potentially aggregating hundreds or thousands
> of machines' logs, not to mention adding a lot of network overhead). I'd be
> happy to see it go away.  However, I am not convinced that dropping text
> file logs in general is a good idea, so I'd love to hear what we're gaining
> by putting logs in Mongo.

I'm looking forward to having access to them in a structured format so
I can generate logs, reports and displays the way I like, rather than
dealing with the hard-to-parse strings in the text logs. 'juju
debug-logs [--from ts] [--until ts] [-F] [--format=json]' would keep
me quite happy, and I could filter, format, interleave and colorize
the output to my heart's content. I can even generate all-machines.log
if I feel like a headache ;)
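To illustrate what I mean, a toy filter over that sort of hypothetical
output - one JSON object per line, with assumed 'timestamp', 'entity',
'level' and 'message' fields; none of this exists today:

import json
import sys

# Read structured log records from stdin and print only warnings and errors.
for line in sys.stdin:
    try:
        record = json.loads(line)
    except ValueError:
        continue  # skip anything that is not a JSON record
    if record.get('level') in ('WARNING', 'ERROR'):
        print('{timestamp} {entity} {level} {message}'.format(**record))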

-- 
Stuart Bishop <stuart.bis...@canonical.com>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev

