Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-01 Thread Stuart Bishop
On 1 April 2016 at 20:50, Mark Shuttleworth  wrote:
> On 19/03/16 01:02, Stuart Bishop wrote:
>> On 9 March 2016 at 10:51, Mark Shuttleworth  wrote:
>>
>>> Operational concerns
>> I still want 'juju-wait' as a supported, builtin command rather than
>> as a fragile plugin I maintain and as code embedded in Amulet that the
>> ecosystem team maintain. A thoughtless change to Juju's status
>> reporting would break all our CI systems.
>
> Hmm.. I would have thought that would be a lot more reasonable now we
> have status well in hand. However, the charms need to support status for
> it to be meaningful to the average operator, and we haven't yet made
> good status support a requirement for charm promulgation in the store.
>
> I'll put this on the list to discuss.


It is easier with Juju 1.24+. You check the status. If all units are
idle, you wait about 15 seconds and check again. If all units are
still idle and the timestamps haven't changed, the environment is
probably idle. And for some (all?) versions of Juju, you also need to
ssh into the units and ensure that one unit in each service thinks it
is the leader, as it can take some time for a new leader to be elected.
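
A minimal sketch of that polling approach, as a shell loop (assuming Juju
1.24+, and that two byte-identical consecutive status snapshots mean no unit
state or timestamp changed; the 15 second interval matches the description
above):

  # grab two status snapshots ~15s apart; if they are identical, nothing
  # changed in between, so the environment is probably idle
  while true; do
      first=$(juju status --format=yaml)
      sleep 15
      second=$(juju status --format=yaml)
      if [ "$first" = "$second" ]; then
          echo "environment is probably idle"
          break
      fi
  done
  # the leadership check still needs ssh'ing into the units as described
  # above; nothing cheap exposes it from outside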

Which means 'juju wait' as a plugin takes quite a while to run and
only gives a probable result, whereas if Juju exposed this information
about the environment directly, the answer could be instantaneous and
correct.

-- 
Stuart Bishop 



Big Data charm & bundle updates!

2016-04-01 Thread Cory Johns
Yesterday afternoon, we released a big update of the big data charms and
bundles!

This update converts all of the main big data charms to use layers, and
brings the version of the services, such as Hadoop and Spark, up to much
more recent releases.  We also changed the name of a couple of the charms
to bring them more in line with the naming convention used by the community.

The conversion to layers makes the charm code much easier to follow,
maintain, and contribute to.

Because of the conversion to layers and the topology change, there is
unfortunately no way to upgrade a bundle from the old charms to the
new.  Instead, it will require a side-by-side migration or redeployment.

If you deploy using any of the bundles listed on
https://jujucharms.com/big-data you should not run into any issues.
However, if you have any manual deployment scripts, you will need to update
them to account for the following name changes (see the example after the
list):

  * apache-hadoop-hdfs-master -> apache-hadoop-namenode
  * apache-hadoop-yarn-master -> apache-hadoop-resourcemanager
  * apache-hadoop-compute-slave -> apache-hadoop-slave
  * apache-hadoop-client -> hadoop-client
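
For instance, a deploy line in such a script would change like this (the
application alias and any extra flags are hypothetical; use whatever your
script already passes):

  # before
  juju deploy apache-hadoop-hdfs-master hdfs-master
  # after
  juju deploy apache-hadoop-namenode namenode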

The change to the name of the client charm better reflects that, by using
the plugin model, the client is not tied to any specific set of core Hadoop
charms.  It is also used as the base layer for charms such as apache-spark,
making it even easier than before to connect your charm to the Hadoop
cluster.


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-01 Thread Samuel Cozannet
* Resource Management:
  ** GPU:
From a very operational perspective, as GPUs become more widely available
across clouds, having a way to constrain/schedule them would be
interesting.

  ** Mapping to constraints
When colocating services, one may want Juju to make clever placement
decisions that ensure resources remain available to them. That is to say,
expand the --to option to accept specific machine constraints (such as --to
"cpu-cores>4"), similar to cgroup constraints for containers.

* Monitoring / Logging / Support Tools:
Stuart made a good point on the relation for "support services" like
logging / monitoring.
I think the monitoring tools should be clever enough to recognise where they
run and adapt, and users should be free to use whichever they prefer.
However, the subordinate relation is really cumbersome.

Quick win: a flag such as "juju deploy --to all" would make sense.

Alternatively, IT tools really are attributes / meta-services of a user's
models. When one selects to run Logstash, one makes that decision globally,
for all existing and future nodes and services. Therefore:
* juju enable logstash <--model foo> <--cloud bar> --all : this deploys
logstash agents to all nodes from one or more models / cloud instances.
Furthermore, as the model expands, new units would then automagically get
the support service enabled (deploy + relate).
* juju add-relation logstash <other service or json config>: this other
command, inherited from the classic model, relates to either another charm
(elasticsearch) or provides "fake relation data" to connect to an external
service (proxy charm also possible) that is not Juju-driven (SaaS).

* UX
  ** Tags
One feature I really love on Google Cloud Platform is the ability to
tag pretty much anything and everything with my own vocabulary. I would love
the ability to tag charms with functional layers of my choice (middleware,
front end, back end...), to then be able to filter them efficiently.
  ** Filters
If I run a vast model, with many units of many types, I would love the
ability to filter the status by the tags I have set, and not only by names.

Best,
Sam


--
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Business Development - Cloud and ISV Ecosystem
Changing the Future of Cloud
Ubuntu / Canonical UK LTD / Juju

samuel.cozan...@canonical.com
mob: +33 616 702 389
skype: samnco
Twitter: @SaMnCo_23


On Wed, Mar 9, 2016 at 12:51 AM, Mark Shuttleworth  wrote:

> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.
>
> An early cut of topics of interest is below.
>
>
>
> *Operational concerns*
>  * LDAP integration for Juju controllers now we have multi-user controllers
>  * Support for read-only config
>  * Support for things like passwords being disclosed to a subset of
> user/operators
>  * LXD container migration
>  * Shared uncommitted state - enable people to collaborate around changes
> they want to make in a model
>
> There has also been quite a lot of interest in log control - debug
> settings for logging, verbosity control, and log redirection as a systemic
> property. This might be a good area for someone new to the project to lead
> design and implementation. Another similar area is the idea of modelling
> machine properties - things like apt / yum repositories, cache settings
> etc, and having the machine agent setup the machine / vm / container
> according to those properties.
>
>
>
> *Core Model*
>  * modelling individual services (i.e. each database exported by the db
> application)
>  * rich status (properties of those services and the application itself)
>  * config schemas and validation
>  * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.
>
> *Storage*
>
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIs)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.
>
>
>
> *Clouds and p

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-01 Thread Jacek Nykis
On 01/04/16 14:34, Mark Shuttleworth wrote:
> 
>> * cli for storage is not as nice as other juju commands. For example we
>> have this in the docs:
>>
>> juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
>>
>> I suspect most charms will use single storage device so it may be
>> possible to optimize for that use case.
> 
> That, however, means you still have to know IF there's only one store.
> Or you have to know what the default store is. Better to just be explicit.

I think it's possible to handle all scenarios nicely.

For charms with just one store, only require "--storage-size" and DTRT.

For charms with multiple stores, require a "--store" parameter on top of
that. If it is not given, error with "This charm supports more than one
store, please specify".

For charms without storage support, error with "Storage not supported" when
users provide one of the storage options.

And for charms that do support storage but users don't ask for it, print
something like "This charm supports storage, you can try it with --size
10G option".

>>  For example we could have:
>>
>> juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G
>>
>> If we come up with sensible defaults for different providers we could
>> make end users' experience even better by making --storage-type optional
> 
> What would sensible defaults look like for storage? The default we have
> is quite sensible, you get the root filesystem :)

I was thinking about defaults for block-device-backed storage. We could
allow users to skip "ebs-ssd" and pick the most sensible store type for
every supported cloud. And for clouds which support just one block
storage type, use that automatically without the need to specify anything.
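
Side by side, the current syntax and the proposed ones would look something
like this (--store, --storage-type and --storage-size are proposals from
this thread, not implemented flags):

  # current (from the docs)
  juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
  # proposed: single store, per-cloud default type, just DTRT
  juju deploy cs:~axwalk/postgresql --storage-size=10G
  # proposed: charm with multiple stores, so the store must be named
  juju deploy cs:~axwalk/postgresql --store data --storage-type=ebs-ssd --storage-size=10G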

>> * it would be good to have ability to use single storage stanza in
>> metadata.yaml that supports all types of storage. The way it is done
>> now [0] means I can't test block storage hooks in my local dev
>> environment. It also forces end users to look for storage labels that
>> are supported
>>
>> [0] http://paste.ubuntu.com/15414289/
> 
> I'm not sure what the issue is with this one.
> 
> If we have filesystem storage it's always at the same place.
> 
> If we have a single mounted block store, it's always at the same place.
> 
> If we can attach multiple block devices, THEN you need to handle them as
> they are attached.
> 
> Can you explain the problem more clearly? We do have an issue with the
> LXD provider and block devices, which we think will be resolved thanks
> to some good kernel work on a range of fronts, but that can't surely be
> what's driving your concerns.

It's my bad, I misunderstood how things worked, you can ignore this
point. Andrew Wilkins helpfully explained things to me earlier in this
thread (thanks Andrew)

>> * the way things are now, hooks are responsible for creating filesystems
>> on block devices. I feel that as a charmer I shouldn't need to know that
>> much about storage internals. I would like to ask juju and get a
>> preconfigured path back. Whether it's a formatted and mounted block
>> device, GlusterFS or a local filesystem, it should not matter
> 
> Well, yes, that's the idea, but these things are quite subtle.
> 
> In some cases you very explicitly want the raw block. So we have to
> allow that. In other cases you just want a filesystem there, and IIRC
> that's the default behaviour in the common case. Finally, we have to
> deal with actual network filesystems (as opposed to block devices) and I
> don't think we have implemented that yet.

Sorry this was also me misunderstanding things, Andrew already clarified
them for me (thanks again)

Jacek



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-01 Thread Mark Shuttleworth
On 19/03/16 01:02, Stuart Bishop wrote:
> On 9 March 2016 at 10:51, Mark Shuttleworth  wrote:
>
>> Operational concerns
> I still want 'juju-wait' as a supported, builtin command rather than
> as a fragile plugin I maintain and as code embedded in Amulet that the
> ecosystem team maintain. A thoughtless change to Juju's status
> reporting would break all our CI systems.

Hmm.. I would have thought that would be a lot more reasonable now we
have status well in hand. However, the charms need to support status for
it to be meaningful to the average operator, and we haven't yet made
good status support a requirement for charm promulgation in the store.

I'll put this on the list to discuss.

>> Core Model
> At the moment logging, monitoring (alerts) and metrics involve
> customizing your charm to work with a specific subordinate. And at
> deploy time, you of course need to deploy and configure the
> subordinate, relate it etc. and things can get quite cluttered.
>
> Could logging, monitoring and metrics be brought into the core model somehow?
>
> eg. I attach a monitoring service such as nagios to the model, and all
> services implicitly join the monitoring relation. Rather than talk
> bespoke protocols, units use the 'monitoring-alert' tool to send a JSON
> dict to the monitoring service (for push alerts). There is some
> mechanism for the monitoring service to trigger checks remotely.
> Requests and alerts go via a separate SSL channel rather than the
> relation, as relations are too heavy weight to trigger several times a
> second and may end up blocked by eg. other hooks running on the unit
> or jujud having been killed by OOM.
>
> Similarly, we currently handle logging by installing a subordinate
> that knows how to push rotated logs to Swift. It would be much nicer
> to set this at the model level, and have tools available for the charm
> to push rotated logs or stream live logs to the desired logging
> service. syslog would be a common approach, as would streaming stdout
> or stderr.
>
> And metrics, where a charm installs a cronjob or daemon to spit out
> performance metrics as JSON dicts to a charm tool which sends them to
> the desired data store and graphing systems, maybe once a day or maybe
> several times a second. Rather than the current approach of assuming
> statsd as the protocol and spitting out packets to an IP address
> pulled from the service configuration.

I'm pretty comfortable with logging, in this list. The others make me
feel like we'd require modification of the monitoring stuff anyhow, from
the vanilla tools people have today. Logging is AFAICT relatively
standardised, so I can see us setting logging policy per model or per
application, and having the agents do the right thing.


>> There is also interest in being able to invoke actions across a relation
>> when the relation interface declares them. This would allow, for example, a
>> benchmark operator charm to trigger benchmarks through a relation rather
>> than having the operator do it manually.
> This is interesting. You can sort of do this already if you set up ssh
> so units can run commands on each other, but network partitions are an
> issue. Triggering an action and waiting on the result works around
> this problem.
>
> For failover in the PostgreSQL charm, I currently need to leave
> requests in the leader settings and wait for units to perform the
> requested tasks and report their results using the peer relation. It
> might be easier to coordinate if the leader was able to trigger these
> tasks directly on the other units.

Yes. On peers it should be completely uncontroversial since these are
the same charm and, well, it should always work if the charm developer
tested it :)

The slightly controversial piece comes on invocation of actions across a
relation, because it starts to imply that a different charm can't be
substituted in on the other side of the relation unless it ALSO
implements the actions that this charm expects.


> Similarly, most use cases for charmhelpers.coordinator or the
> coordinator layer would become easier. Rather than using several
> rounds of leadership and peer relation hooks to perform a rolling
> restart or rolling upgrade, the leader could trigger the operations
> remotely one at a time via a peer relation.

Right.

I'll take that as a +1 from you then :)


>> Storage
>>
>>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>>  * object storage abstraction (probably just mapping to S3-compatible APIs)
>>
>> I'm interested in feedback on the operations aspects of storage. For
>> example, whether it would be helpful to provide lifecycle management for
>> storage being re-assigned (e.g. launch a new database application but reuse
>> block devices previously bound to an old database instance). Also, I think
>> the intersection of storage modelling and MAAS hasn't really been explored,
>> and since we see a lot of interest in the use of charms to deploy
>> software-defined storage solu

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-01 Thread Mark Shuttleworth

> * cli for storage is not as nice as other juju commands. For example we
> have this in the docs:
>
> juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
>
> I suspect most charms will use single storage device so it may be
> possible to optimize for that use case.

That, however, means you still have to know IF there's only one store.
Or you have to know what the default store is. Better to just be explicit.

>  For example we could have:
>
> juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G
>
> If we come up with sensible defaults for different providers we could
> make end users' experience even better by making --storage-type optional

What would sensible defaults look like for storage? The default we have
is quite sensible, you get the root filesystem :)


> * it would be good to have ability to use single storage stanza in
> metadata.yaml that supports all types of storage. The way it is done
> now [0] means I can't test block storage hooks in my local dev
> environment. It also forces end users to look for storage labels that
> are supported
>
> [0] http://paste.ubuntu.com/15414289/

I'm not sure what the issue is with this one.

If we have filesystem storage it's always at the same place.

If we have a single mounted block store, it's always at the same place.

If we can attach multiple block devices, THEN you need to handle them as
they are attached.

Can you explain the problem more clearly? We do have an issue with the
LXD provider and block devices, which we think will be resolved thanks
to some good kernel work on a range of fronts, but that can't surely be
what's driving your concerns.


> * the way things are now, hooks are responsible for creating filesystems
> on block devices. I feel that as a charmer I shouldn't need to know that
> much about storage internals. I would like to ask juju and get a
> preconfigured path back. Whether it's a formatted and mounted block
> device, GlusterFS or a local filesystem, it should not matter

Well, yes, that's the idea, but these things are quite subtle.

In some cases you very explicitly want the raw block. So we have to
allow that. In other cases you just want a filesystem there, and IIRC
that's the default behaviour in the common case. Finally, we have to
deal with actual network filesystems (as opposed to block devices) and I
don't think we have implemented that yet.
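
For reference, a minimal sketch of the two kinds of storage being discussed,
as a metadata.yaml stanza (the store names and mount point here are
illustrative only):

  storage:
    data:
      type: filesystem    # charm sees a ready-made filesystem at 'location'
      location: /srv/data
    wal:
      type: block         # charm gets the raw device and manages it itself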

Mark





Re: Query on juju 2.0 installation

2016-04-01 Thread Mark Shuttleworth
Hi Krishna

Those docs are not great. Here's a TL;DR step-by-step guide for a fresh
16.04 machine. I recommend you use the 16.04 beta because it includes
everything you need for a great experience by default.

  sudo apt install zfsutils-linux
  sudo add-apt-repository ppa:juju/devel
  sudo apt-get update
  sudo apt install juju2

Those give you all the latest bits you'll need.

  sudo mkdir /var/lib/zfs
  sudo truncate -s 50G /var/lib/zfs/lxd.img

This makes a sparse 50G file as a storage back-end for LXD containers.
Being a sparse file it doesn't actually take up 50G but will fill up as
you use the ZFS filesystem for various containers over time. You could
alternatively use a spare disk, preferably a nice fast one :)
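
If you want to check the sparseness, standard coreutils will show allocated
versus apparent size (nothing Juju-specific here):

  ls -lsh /var/lib/zfs/lxd.img                # first column: actual usage
  du -h --apparent-size /var/lib/zfs/lxd.img  # reported 50G apparent size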

  sudo zpool create lxd /var/lib/zfs/lxd.img
  sudo zpool status

At this point you should see that you have a ZFS pool set up with nothing
going on.

  sudo lxd init --auto --storage-backend zfs --storage-pool lxd
  newgrp -  # I think this is needed to get into the LXD group

You may need to logout and login to be in the lxd group.

And finally, bootstrap a controller (named lxd-test here) on the local LXD
cloud:

  juju bootstrap --config default-series=xenial lxd-test lxd

So now:

~$ juju list-controllers
CONTROLLER       MODEL    USER         SERVER
local.lxd-test*  default  admin@local  10.0.3.161:17070

~$ juju create-model test
created model "test"

~$ juju switch
local.lxd-test:test

~$ juju deploy wordpress

... and you're off.

Mark

On 01/04/16 10:46, Matthew Williams wrote:
> Hi Krishna,
>
> You can use the lxd provider. There are instructions at the following places
>
> https://jujucharms.com/docs/devel/config-LXD
> https://jujucharms.com/docs/devel/controllers-creating
>
> Thanks
>
> Matty
>
> On Fri, Apr 1, 2016 at 8:25 AM, Krishna Bandi  wrote:
>
>> Hi Team,
>>
>>  Hope you are all doing great. I was trying to use ibm-base-layer and, as
>> confirmed by Matt, it uses juju 2.0, so I was trying to install juju 2.0 on
>> an ubuntu x86 machine by following the link
>> https://jujucharms.com/docs/devel/introducing-2 and found it has steps for
>> aws setup only, and I am not able to install local lxc containers.
>>
>> Can you please advise on how to set up local lxc containers using juju
>> 2.0.
>>
>> Also, we are not able to install juju 2.0 on a power machine; please
>> advise on this as well.
>>
>> Thanks & Regards,
>>
>> Bandi K Chaitanya
>> --
>> Mobile: +91-973145
>> E-mail: kriba...@in.ibm.com



Re: Query on juju 2.0 installation

2016-04-01 Thread Matthew Williams
Hi Krishna,

You can use the lxd provider. There are instructions at the following places

https://jujucharms.com/docs/devel/config-LXD
https://jujucharms.com/docs/devel/controllers-creating

Thanks

Matty

On Fri, Apr 1, 2016 at 8:25 AM, Krishna Bandi  wrote:

> Hi Team,
>
>  Hope you are all doing great. I was trying to use ibm-base-layer and, as
> confirmed by Matt, it uses juju 2.0, so I was trying to install juju 2.0 on
> an ubuntu x86 machine by following the link
> https://jujucharms.com/docs/devel/introducing-2 and found it has steps for
> aws setup only, and I am not able to install local lxc containers.
>
> Can you please advise on how to set up local lxc containers using juju
> 2.0.
>
> Also, we are not able to install juju 2.0 on a power machine; please
> advise on this as well.
>
> Thanks & Regards,
>
> Bandi K Chaitanya
> --
> Mobile: +91-973145
> E-mail: kriba...@in.ibm.com


Query on juju 2.0 installation

2016-04-01 Thread Krishna Bandi
Hi Team,

 Hope you are all doing great. I was trying to use ibm-base-layer and, as
confirmed by Matt, it uses juju 2.0, so I was trying to install juju 2.0 on
an ubuntu x86 machine by following the link
https://jujucharms.com/docs/devel/introducing-2 and found it has steps for
aws setup only, and I am not able to install local lxc containers.

Can you please advise on how to set up local lxc containers using juju 
2.0. 

Also, we are not able to install juju 2.0 on a power machine; please advise
on this as well.

Thanks & Regards,

Bandi K Chaitanya 

Mobile: +91-973145
E-mail: kriba...@in.ibm.com



