Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-28 Thread Stuart Bishop
On 9 March 2016 at 06:51, Mark Shuttleworth  wrote:
> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.

Another item I'd like to see is distribution upgrades. We now have a
lot of systems deployed with Trusty that will need to be upgraded to
Xenial in the not too distant future. For many services you would just
bring up a new service with a new name and cut over, but this is
impractical for other services such as database shards deployed on
MAAS-provisioned hardware. Handling upgrades may be as simple as
allowing operators (or a charm action) to perform the necessary
dist-upgrade one unit at a time, and having the controller notice and
cope when the unit's jujud is bounced. Not all units would be running
the same distribution release at the same time, and I'm assuming the
service is running a multi-series charm that supports both releases
(so we don't need to worry about how to handle upgrade-charm hooks, at
least for now).
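
To make that concrete, the operator-driven version might be scripted
roughly like this (an untested sketch; unit names are hypothetical,
and it assumes do-release-upgrade from ubuntu-release-upgrader plus
the juju-wait plugin discussed elsewhere in this thread):

    # Hypothetical sketch: dist-upgrade one unit at a time over juju ssh.
    # Assumes a multi-series charm that supports both releases, and that
    # the controller copes when the unit's jujud is bounced.
    import subprocess

    units = ['pgsql-shard/0', 'pgsql-shard/1', 'pgsql-shard/2']

    for unit in units:
        # Run the release upgrade non-interactively on the unit.
        subprocess.check_call([
            'juju', 'ssh', unit,
            'sudo do-release-upgrade -f DistUpgradeViewNonInteractive'])
        # Reboot onto the new release; the ssh connection will drop.
        subprocess.call(['juju', 'ssh', unit, 'sudo reboot'])
        # Wait for the model to settle before touching the next shard.
        subprocess.check_call(['juju-wait'])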

-- 
Stuart Bishop 



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-02 Thread Stuart Bishop
On 1 April 2016 at 20:50, Mark Shuttleworth  wrote:
> On 19/03/16 01:02, Stuart Bishop wrote:
>> On 9 March 2016 at 10:51, Mark Shuttleworth  wrote:
>>
>>> Operational concerns
>> I still want 'juju-wait' as a supported, builtin command rather than
>> as a fragile plugin I maintain and as code embedded in Amulet that the
>> ecosystem team maintain. A thoughtless change to Juju's status
>> reporting would break all our CI systems.
>
> Hmm.. I would have thought that would be a lot more reasonable now we
> have status well in hand. However, the charms need to support status for
> it to be meaningful to the average operator, and we haven't yet made
> good status support a requirement for charm promulgation in the store.
>
> I'll put this on the list to discuss.


It is easier with Juju 1.24+. You check the status; if all units are
idle, you wait about 15 seconds and check again. If all units are
still idle and the timestamps haven't changed, the environment is
probably idle. And for some (all?) versions of Juju, you also need to
ssh into the units and ensure that one of the units in each service
thinks it is the leader, as it can take some time for a new leader to
be elected.
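
The heuristic boils down to something like this (a sketch of what the
plugin does, not a Juju API; the JSON key paths and the 'idle' state
name are assumptions from 1.24-era status output):

    # Probabilistic idleness check, as described above.
    import json
    import subprocess
    import time

    def juju_status():
        raw = subprocess.check_output(['juju', 'status', '--format=json'])
        return json.loads(raw)

    def all_idle(status):
        for service in status.get('services', {}).values():
            for unit in service.get('units', {}).values():
                if unit.get('agent-status', {}).get('current') != 'idle':
                    return False
        return True

    def probably_quiescent():
        if not all_idle(juju_status()):
            return False
        time.sleep(15)  # the ~15 second recheck described above
        # A fuller check also compares status timestamps and runs
        # 'is-leader' on a unit of each service via 'juju run'.
        return all_idle(juju_status())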

Which means 'juju wait' as a plugin takes quite a while to run and
only gives a probable result, whereas if this information about the
environment were exposed it could be instantaneous and correct.

-- 
Stuart Bishop 



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-30 Thread John Meinel
On Mar 29, 2016 3:47 AM, "William (Will) Forsyth" <
william.fors...@liferay.com> wrote:

> A feature that I think would clean up the deployment of multi-charm
> bundles would be the ability to deploy directly to lxc containers without
> specifying or pre-adding a machine.
>


I'm not sure of the details of bundles, but I believe today you can do
"juju deploy --to lxc:" (and soon it will be changing to "--to lxd:").
Leaving off the machine number has Juju allocate a new machine.


>
> For example, in a juju on maas deployment, upon charm deploy, juju would
> query maas and request a new machine, but let maas choose the series. Once
> provisioned, juju would then create a lxc container with the required
> series for the charm being deployed. If maas reports that there are no
> machines available for allocation, then it will pick a current machine
> based on utilization and spawn the container there.
>
> This would allow for greater and more seamless homogeneity of the deployed
> machines, and would help with the push for container-per-charm deployments
> that will be critical in enabling live migration of lxd containers.
>
>
Unfortunately capacity planning gets us much more into AI/application-specific
territory. What is currently idle may just be idle because production
hasn't been exposed to the world yet. So all your Nova Compute nodes look
completely idle, but they'll be heavily loaded with user VMs. Or your
production database machine doesn't have load yet.
This is where Juju is *intentionally* workload agnostic and wants to make
it easy for 3rd-party applications to bring their own intelligence into the
system. Things like the OpenStack Autopilot, which understands what the
actual charms and workloads are, can leverage Juju to orchestrate the
deployment.
We have had some discussions about "stacks" or charm brains, or something
along those lines, to allow the charms themselves to provide understanding
of the workload back into the system. This might be something to focus on
in the 2.2 timeline. At the very least, a first step would be that when
Juju is trying to make a decision (such as placement), we could consult
registered 3rd parties to see if they have an answer for us.

John
=:->




>
> William Forsyth
>
> Infrastructure Administrator
> Liferay, Inc.
> Enterprise. Open Source. For life.
>
> From: Mark Shuttleworth <m...@ubuntu.com>
>
>> Date: Tue, Mar 8, 2016 at 6:52 PM
>> Subject: Planning for Juju 2.2 (16.10 timeframe)
>> To: juju <j...@lists.ubuntu.com>, juju-dev@lists.ubuntu.com <
>> juju-dev@lists.ubuntu.com>
>>
>>
>> Hi folks
>>
>> We're starting to think about the next development cycle, and gathering
>> priorities and requests from users of Juju. I'm writing to outline some
>> current topics and also to invite requests or thoughts on relative
>> priorities - feel free to reply on-list or to me privately.
>>
>> An early cut of topics of interest is below.
>>
>>
>>
>> *Operational concerns*
>> * LDAP integration for Juju controllers now we have multi-user controllers
>> * Support for read-only config
>> * Support for things like passwords being disclosed to a subset of
>> user/operators
>> * LXD container migration
>> * Shared uncommitted state - enable people to collaborate around changes
>> they want to make in a model
>>
>> There has also been quite a lot of interest in log control - debug
>> settings for logging, verbosity control, and log redirection as a systemic
>> property. This might be a good area for someone new to the project to lead
>> design and implementation. Another similar area is the idea of modelling
>> machine properties - things like apt / yum repositories, cache settings
>> etc, and having the machine agent setup the machine / vm / container
>> according to those properties.
>>
>>
>>
>> *Core Model*
>>  * modelling individual services (i.e. each database exported by the db
>> application)
>>  * rich status (properties of those services and the application itself)
>>  * config schemas and validation
>>  * relation config
>>
>> There is also interest in being able to invoke actions across a relation
>> when the relation interface declares them. This would allow, for example, a
>> benchmark operator charm to trigger benchmarks through a relation rather
>> than having the operator do it manually.
>>
>> *Storage*
>>
>>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>>  * object storage abstraction (probably just mapping to S3-compatible
>> APIs)
>>
>> I'm interested in feedback on

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-20 Thread roger peppe
On 16 March 2016 at 12:31, Kapil Thangavelu  wrote:
>
>
> On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth  wrote:
>>
>> Hi folks
>>
>> We're starting to think about the next development cycle, and gathering
>> priorities and requests from users of Juju. I'm writing to outline some
>> current topics and also to invite requests or thoughts on relative
>> priorities - feel free to reply on-list or to me privately.
>>
>> An early cut of topics of interest is below.
>>
>> Operational concerns
>>
>> * LDAP integration for Juju controllers now we have multi-user controllers
>> * Support for read-only config
>> * Support for things like passwords being disclosed to a subset of
>> user/operators
>> * LXD container migration
>> * Shared uncommitted state - enable people to collaborate around changes
>> they want to make in a model
>>
>> There has also been quite a lot of interest in log control - debug
>> settings for logging, verbosity control, and log redirection as a systemic
>> property. This might be a good area for someone new to the project to lead
>> design and implementation. Another similar area is the idea of modelling
>> machine properties - things like apt / yum repositories, cache settings etc,
>> and having the machine agent setup the machine / vm / container according to
>> those properties.
>>
>
> ldap++. As brought up in the user list: better support for AWS
> best-practice credential management, i.e. bootstrapping with transient
> credentials (STS assume-role, needs AWS_SECURITY_TOKEN support), and
> instance roles for state servers.
>
>
>>
>> Core Model
>>
>>  * modelling individual services (i.e. each database exported by the db
>> application)
>>  * rich status (properties of those services and the application itself)
>>  * config schemas and validation
>>  * relation config
>>
>> There is also interest in being able to invoke actions across a relation
>> when the relation interface declares them. This would allow, for example, a
>> benchmark operator charm to trigger benchmarks through a relation rather
>> than having the operator do it manually.
>>
>
> In priority order: relation config

What do you understand by the term "relation config"?



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Kapil Thangavelu
On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth  wrote:

> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.
>
> An early cut of topics of interest is below.
>
>
>
> *Operational concerns*
> * LDAP integration for Juju controllers now we have multi-user controllers
> * Support for read-only config
> * Support for things like passwords being disclosed to a subset of
> user/operators
> * LXD container migration
> * Shared uncommitted state - enable people to collaborate around changes
> they want to make in a model
>
> There has also been quite a lot of interest in log control - debug
> settings for logging, verbosity control, and log redirection as a systemic
> property. This might be a good area for someone new to the project to lead
> design and implementation. Another similar area is the idea of modelling
> machine properties - things like apt / yum repositories, cache settings
> etc, and having the machine agent setup the machine / vm / container
> according to those properties.
>
>
ldap++. As brought up in the user list: better support for AWS
best-practice credential management, i.e. bootstrapping with transient
credentials (STS assume-role, needs AWS_SECURITY_TOKEN support), and
instance roles for state servers.
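
e.g. the flow we'd want, sketched with boto3 (the assume_role call is
real; the gap today is juju honouring AWS_SECURITY_TOKEN, and the role
ARN is made up):

    # Bootstrap with transient STS credentials - a sketch only.
    import os
    import subprocess

    import boto3

    creds = boto3.client('sts').assume_role(
        RoleArn='arn:aws:iam::123456789012:role/juju-bootstrap',
        RoleSessionName='juju-bootstrap')['Credentials']

    os.environ['AWS_ACCESS_KEY_ID'] = creds['AccessKeyId']
    os.environ['AWS_SECRET_ACCESS_KEY'] = creds['SecretAccessKey']
    os.environ['AWS_SECURITY_TOKEN'] = creds['SessionToken']  # not yet honoured

    # Exact bootstrap syntax varies across the 2.0 betas.
    subprocess.check_call(['juju', 'bootstrap', 'aws'])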



>
>
> *Core Model*
>  * modelling individual services (i.e. each database exported by the db
> application)
>  * rich status (properties of those services and the application itself)
>  * config schemas and validation
>  * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.
>
>
In priority order: relation config, config schemas/validation, rich
status. Relation config is a huge boon to services that are
multi-tenant to other services, as the workaround today is to create
either per-tenant copies or intermediaries.


> *Storage*
>
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIs)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database  instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.
>
>
It may be out of band, but with storage come backups/snapshots. Also
of interest is encryption on block and object storage, using
cloud-native mechanisms where available.


>
>
> *Clouds and providers *
>  * System Z and LinuxONE
>  * Oracle Cloud
>
> There is also a general desire to revisit and refactor the provider
> interface. Now we have seen many cloud providers get done, we are in a
> better position to design the best provider interface. This would be a
> welcome area of contribution for someone new to the project who wants to
> make it easier for folks creating new cloud providers. We also see constant
> requests for a Linode provider that would be a good target for a refactored
> interface.
>
>
>
>
> *Usability * * expanding the set of known clouds and regions
>  * improving the handling of credentials across clouds
>


Autoscaling: either tighter integration with cloud-native features or
a Juju-provided abstraction.


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-18 Thread roger peppe
On 16 March 2016 at 15:04, Kapil Thangavelu  wrote:
> Relations have associated config schemas that can be set by the user
> creating the relation. I.e. I could run one autoscaling service and, via
> relation config, attach autoscale options to the relation with a given
> consumer service.

Great, I hoped that's what you meant.
I'm also +1 on this feature - it would enable all kinds of useful flexibility.

One recent example I've come across that could use this feature
is that we've got a service that can hand out credentials to services
that are related to it. At the moment the only way to state that
certain services should be handed certain classes of credential
is to have a config value that holds a map of service name to
credential info, which doesn't seem great - it's awkward, easy
to get wrong, and when a service goes away, its associated info
hangs around.
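
For concreteness, that workaround looks something like this on the
charm side (a sketch: the charmhelpers calls are real, the config key
and names are made up):

    # Map-in-config workaround for per-relation credentials.
    import json

    from charmhelpers.core import hookenv

    def credential_class_for_remote():
        # A config value holding a service-name -> credential-class map.
        mapping = json.loads(hookenv.config('credential-classes') or '{}')
        remote_service = hookenv.remote_unit().split('/')[0]  # e.g. 'webapp'
        # Awkward: easy to typo a service name, and entries for services
        # that have gone away just hang around.
        return mapping.get(remote_service)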

Having the credential info associated with the relation itself would be perfect.

>
> On Wed, Mar 16, 2016 at 9:17 AM roger peppe 
> wrote:
>>
>> On 16 March 2016 at 12:31, Kapil Thangavelu  wrote:
>> >
>> >
>> > On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth 
>> > wrote:
>> >>
>> >> Hi folks
>> >>
>> >> We're starting to think about the next development cycle, and gathering
>> >> priorities and requests from users of Juju. I'm writing to outline some
>> >> current topics and also to invite requests or thoughts on relative
>> >> priorities - feel free to reply on-list or to me privately.
>> >>
>> >> An early cut of topics of interest is below.
>> >>
>> >> Operational concerns
>> >>
>> >> * LDAP integration for Juju controllers now we have multi-user
>> >> controllers
>> >> * Support for read-only config
>> >> * Support for things like passwords being disclosed to a subset of
>> >> user/operators
>> >> * LXD container migration
>> >> * Shared uncommitted state - enable people to collaborate around
>> >> changes
>> >> they want to make in a model
>> >>
>> >> There has also been quite a lot of interest in log control - debug
>> >> settings for logging, verbosity control, and log redirection as a
>> >> systemic
>> >> property. This might be a good area for someone new to the project to
>> >> lead
>> >> design and implementation. Another similar area is the idea of
>> >> modelling
>> >> machine properties - things like apt / yum repositories, cache settings
>> >> etc,
>> >> and having the machine agent setup the machine / vm / container
>> >> according to
>> >> those properties.
>> >>
>> >
>> > ldap++. As brought up in the user list: better support for AWS
>> > best-practice credential management, i.e. bootstrapping with
>> > transient credentials (STS assume-role, needs AWS_SECURITY_TOKEN
>> > support), and instance roles for state servers.
>> >
>> >
>> >>
>> >> Core Model
>> >>
>> >>  * modelling individual services (i.e. each database exported by the db
>> >> application)
>> >>  * rich status (properties of those services and the application
>> >> itself)
>> >>  * config schemas and validation
>> >>  * relation config
>> >>
>> >> There is also interest in being able to invoke actions across a
>> >> relation
>> >> when the relation interface declares them. This would allow, for
>> >> example, a
>> >> benchmark operator charm to trigger benchmarks through a relation
>> >> rather
>> >> than having the operator do it manually.
>> >>
>> >
>> > In priority order: relation config
>>
>> What do you understand by the term "relation config"?



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-18 Thread Stuart Bishop
On 9 March 2016 at 10:51, Mark Shuttleworth  wrote:

> Operational concerns

I still want 'juju-wait' as a supported, builtin command rather than
as a fragile plugin I maintain and as code embedded in Amulet that the
ecosystem team maintain. A thoughtless change to Juju's status
reporting would break all our CI systems.

> Core Model

At the moment logging, monitoring (alerts) and metrics involve
customizing your charm to work with a specific subordinate. And at
deploy time you of course need to deploy and configure the
subordinate, relate it, etc., and things can get quite cluttered.

Could logging, monitoring and metrics be brought into the core model somehow?

eg. I attach a monitoring service such as Nagios to the model, and all
services implicitly join the monitoring relation. Rather than talking
bespoke protocols, units use the 'monitoring-alert' tool to send a
JSON dict to the monitoring service (for push alerts). There is some
mechanism for the monitoring service to trigger checks remotely.
Requests and alerts go via a separate SSL channel rather than the
relation, as relations are too heavyweight to trigger several times a
second and may end up blocked by eg. other hooks running on the unit
or jujud having been killed by the OOM killer.
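
From the charm's side, a push alert would then be a one-liner against
a tool that is entirely hypothetical here:

    # 'monitoring-alert' is the proposed (hypothetical) hook tool.
    import json
    import subprocess

    alert = {
        'check': 'replication-lag',  # illustrative payload
        'status': 'critical',
        'detail': 'standby is 3600s behind the master',
    }
    subprocess.check_call(['monitoring-alert', json.dumps(alert)])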

Similarly, we currently handle logging by installing a subordinate
that knows how to push rotated logs to Swift. It would be much nicer
to set this at the model level, and have tools available for the charm
to push rotated logs or stream live logs to the desired logging
service. syslog would be a common approach, as would streaming stdout
or stderr.

And metrics, where a charm installs a cronjob or daemon to spit out
performance metrics as JSON dicts to a charm tool, which sends them to
the desired data store and graphing systems, maybe once a day or maybe
several times a second - rather than the current approach of assuming
statsd as the protocol and spitting out packets to an IP address
pulled from the service configuration.
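
For contrast, the status quo is bespoke plumbing like this in every
charm (the config key is made up; the wire format is standard statsd):

    # Current approach: raw statsd gauges over UDP to a configured address.
    import socket

    from charmhelpers.core import hookenv

    target = hookenv.config('statsd-endpoint') or '10.0.0.1:8125'
    host, port = target.split(':')
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b'pgsql.tps:4212|g', (host, int(port)))  # 'name:value|g'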

>  * modelling individual services (i.e. each database exported by the db
> application)
>  * rich status (properties of those services and the application itself)
>  * config schemas and validation
>  * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.

This is interesting. You can sort of do this already if you set up ssh
so units can run commands on each other, but network partitions are an
issue. Triggering an action and waiting on the result works around
this problem.

For failover in the PostgreSQL charm, I currently need to leave
requests in the leader settings and wait for units to perform the
requested tasks and report their results using the peer relation. It
might be easier to coordinate if the leader was able to trigger these
tasks directly on the other units.

Similarly, most use cases for charmhelpers.coordinator or the
coordinator layer would become easier. Rather than using several
rounds of leadership and peer relation hooks to perform a rolling
restart or rolling upgrade, the leader could trigger the operations
remotely one at a time via a peer relation.
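
Each round of that dance currently looks roughly like this (a sketch
using real charmhelpers calls; the setting and relation names are
illustrative), and a cross-unit action trigger would replace the whole
loop:

    # One round of a leader-coordinated rolling restart.
    from charmhelpers.core import hookenv

    def request_restart(unit_name):
        # The leader publishes the next unit that should restart.
        hookenv.leader_set({'restart-request': unit_name})

    def leader_settings_changed():
        # Every unit sees the request; only the named unit acts.
        if hookenv.leader_get('restart-request') == hookenv.local_unit():
            restart_service()  # hypothetical helper
            # Report completion over the peer relation for the leader.
            for rid in hookenv.relation_ids('cluster'):
                hookenv.relation_set(rid, {'restarted': hookenv.local_unit()})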


> Storage
>
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIs)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database  instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.

Reusing an old mount on a new unit is a common use case. Single-unit
PostgreSQL is simplest here - it detects that an existing database is
on the mount and, rather than recreate it, fixes permissions (uids and
gids will often not match), mounts it and recreates any resources the
charm needs (such as the 'nagios' user, so the monitoring checks work).
But if you deploy multiple PostgreSQL units reusing old mounts, what
do you do? At the moment, the one lucky enough to be elected master
gets used and the others are destroyed.
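
The single-unit storage hook is essentially this (an illustrative
sketch; paths and commands simplified):

    # Reuse an existing PostgreSQL data directory on a reattached mount.
    import os
    import subprocess

    mount = '/srv/pgdata'  # hypothetical storage mount point

    if os.path.exists(os.path.join(mount, 'PG_VERSION')):
        # Existing cluster: fix ownership (uids/gids rarely match across
        # machines) and recreate charm-level resources such as the
        # 'nagios' user, rather than initdb'ing over the data.
        subprocess.check_call(['chown', '-R', 'postgres:postgres', mount])
    else:
        subprocess.check_call(['pg_createcluster', '9.5', 'main'])  # fresh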

Cassandra is problematic, as the newly provisioned units will have
different positions and ranges in the replication ring, and the
existing data will usually actually belong to other units in the
service. It would be simpler to create a new cluster, then attach the
old data as an 'import' mount and have the storage hook load it into
the cluster. That requires twice the disk space, but means you could
migrate a 10 unit Cassandra cluster to a new 5 unit 

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-18 Thread Andrew Wilkins
On Sat, Mar 19, 2016 at 12:53 AM Jacek Nykis 
wrote:

> On 08/03/16 23:51, Mark Shuttleworth wrote:
> > *Storage*
> >
> >  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
> >  * object storage abstraction (probably just mapping to S3-compatible
> > APIs)
> >
> > I'm interested in feedback on the operations aspects of storage. For
> > example, whether it would be helpful to provide lifecycle management for
> > storage being re-assigned (e.g. launch a new database application but
> > reuse block devices previously bound to an old database  instance).
> > Also, I think the intersection of storage modelling and MAAS hasn't
> > really been explored, and since we see a lot of interest in the use of
> > charms to deploy software-defined storage solutions, this probably will
> > need thinking and work.
>
> Hi Mark,
>
> I took juju storage for a spin a few weeks ago. It is a great idea and
> I'm sure it will simplify our models (no more need for
> block-storage-broker and storage charms). It will also improve security
> because block-storage-broker needs nova credentials to work.
>
> I only played with storage briefly, but I hope my feedback and ideas
> will be useful.
>
> * IMO it would be incredibly useful to have storage lifecycle
> management. Deploying a new database using pre-existing block device you
> mentioned would certainly be nice. Another scenario could be users who
> deploy to local disk and decide to migrate to block storage later
> without redeploying and manual data migration
>
>

> One day we may even be able to connect storage with actions. I'm
> thinking "storage snapshot" action followed by juju deploy to create up
> to date database clone for testing/staging/dev
>
> * I found documentation confusing. It's difficult for me to say exactly
> what is wrong but I had to read it a few times before things became
> clear. I raised some specific points on github:
> https://github.com/juju/docs/issues/889
>
> * the CLI for storage is not as nice as other juju commands. For example
> we have this in the docs:
>
> juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
>
> I suspect most charms will use a single storage device, so it may be
> possible to optimize for that use case. For example we could have:
>
> juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G
>

It seems like the issues you've noted below are all documentation issues,
rather than limitations in the implementation. Please correct me if I'm
wrong.


> If we come up with sensible defaults for different providers we could
> make end users' experience even better by making --storage-type optional
>

Storage type is already optional. If you omit it, you'll get the provider
default. e.g. for AWS, that's EBS magnetic disks.
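
So today you can already write, say:

    juju deploy cs:~axwalk/postgresql --storage data=10G

and get 10G on the provider's default pool (a sketch based on the
syntax in the current docs; treat the size-only form as an assumption).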


> * it would be good to have the ability to use a single storage stanza
> in metadata.yaml that supports all types of storage. The way it is done
> now [0] means I can't test block storage hooks in my local dev
> environment. It also forces end users to look for storage labels that
> are supported
>
> [0] http://paste.ubuntu.com/15414289/


Not quite sure what you mean here. If you have a "filesystem" type, you can
use any storage provider that supports natively creating filesystems (e.g.
"tmpfs") or block devices (e.g. "ebs"). If you specify the latter, Juju
will manage the filesystem on the block device.

> * the way things are now, hooks are responsible for creating filesystems
> on block devices. I feel that as a charmer I shouldn't need to know that
> much about storage internals. I would like to ask juju and get a
> preconfigured path back. Whether it's a formatted and mounted block
> device, GlusterFS or a local filesystem should not matter


That is exactly what it does, so again, I think this is an issue of
documentation clarity. If you're using the "filesystem" type, Juju will
create the filesystem; if you use "block", it won't.
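
i.e. with a single stanza like this in metadata.yaml (real storage
syntax; the store name and path are illustrative):

    storage:
      data:
        type: filesystem
        location: /srv/data

you can deploy with "--storage data=tmpfs,1G" locally or "--storage
data=ebs,10G" on AWS, and in the EBS case Juju creates the filesystem
on the block device for you.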

If you could provide more details on what you're doing (off list, I think
would be best), I can try and help. We can then feed back into the docs to
make it clearer.

Cheers,
Andrew

> * finally I hit 2 small bugs:
>
> https://bugs.launchpad.net/juju-core/+bug/1539684
> https://bugs.launchpad.net/juju-core/+bug/1546492



>
>
> If anybody is interested in more details just ask, I'm happy to discuss
> or try things out just note that I will be off next week so will most
> likely reply on 29th
>
>
> Regards,
> Jacek
>


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-18 Thread Eric Snow
On Fri, Mar 18, 2016 at 8:57 AM, Tom Barber  wrote:
> c) upload files with actions. Currently for some things I need to pass in
> some files then trigger an action on the unit upon that file. It would be
> good to say path=/tmp/myfile.xyz and have the action upload that to a place
> you define.

Have you taken a look at resources in the upcoming 2.0?  You define
resources in your charm metadata and use "juju attach" to upload them
to the controller (e.g. "juju attach my-service/0
my-resource=/tmp/myfile.xyz"). *  Then charms can use the
"resource-get" hook command to download the resource file from the
controller.  "resource-get" returns the path where the downloaded file
was saved.
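
In hook code that looks roughly like this (a sketch; "resource-get" is
the real hook tool, and the resource name matches the attach example
above):

    # Fetch the attached resource and act on the downloaded file.
    import subprocess

    path = subprocess.check_output(['resource-get', 'my-resource']).strip()
    process_file(path)  # hypothetical charm logic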

-eric


* You will also upload the resources to the charm store for charm store charms.
