Re: lxd hook failed change-config

2016-10-20 Thread Adam Stokes
Odd, it looks like the container has a read-only file system? I ran through
a full openstack-novalxd deployment today, and one of the upstream
maintainers ran through the same deployment; neither of us ran into any issues.

On Thu, Oct 20, 2016, 10:02 PM Heather Lanigan  wrote:

>
> I used conjure-up to deploy openstack-novalxd on a Xenial system. Before
> deploying, the operating system was updated. LXD init was set up with dir,
> not xfs. All but one of the charms has a status of "unit is ready".
>
> The lxd/0 subordinate charm has a status of: hook failed:
> "config-changed". See details below.
>
> I can boot an instance within this OpenStack deployment. However, deleting
> the instance fails. A side effect of the lxd/0 issues?
>
> Juju version 2.0.0-xenial-amd64
> conjure-up version 2.0.2
> lxd charm version 2.0.5
>
> Any ideas?
>
> Thanks in advance,
> Heather
>

lxd hook failed change-config

2016-10-20 Thread Heather Lanigan

I used conjure-up to deploy openstack-novalxd on a Xenial system. Before
deploying, the operating system was updated. LXD init was set up with dir, not
xfs. All but one of the charms has a status of "unit is ready".

The lxd/0 subordinate charm has a status of: hook failed: "config-changed".
See details below.

I can boot an instance within this OpenStack deployment. However, deleting the
instance fails. A side effect of the lxd/0 issues?

Juju version 2.0.0-xenial-amd64
conjure-up version 2.0.2
lxd charm version 2.0.5

Any ideas?

Thanks in advance,
Heather

++

The /var/log/juju/unit-lxd-0.log on the unit reports:
2016-10-21 01:09:33 INFO config-changed Traceback (most recent call last):
2016-10-21 01:09:33 INFO config-changed   File 
"/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 140, in 

2016-10-21 01:09:33 INFO config-changed main()
2016-10-21 01:09:33 INFO config-changed   File 
"/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 134, in main
2016-10-21 01:09:33 INFO config-changed hooks.execute(sys.argv)
2016-10-21 01:09:33 INFO config-changed   File 
"/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/hookenv.py", 
line 715, in execute
2016-10-21 01:09:33 INFO config-changed self._hooks[hook_name]()
2016-10-21 01:09:33 INFO config-changed   File 
"/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 78, in 
config_changed
2016-10-21 01:09:33 INFO config-changed configure_lxd_host()
2016-10-21 01:09:33 INFO config-changed   File 
"/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/decorators.py", 
line 40, in _retry_on_exception_inner_2
2016-10-21 01:09:33 INFO config-changed return f(*args, **kwargs)
2016-10-21 01:09:33 INFO config-changed   File 
"/var/lib/juju/agents/unit-lxd-0/charm/hooks/lxd_utils.py", line 429, in 
configure_lxd_host
2016-10-21 01:09:33 INFO config-changed with open(EXT4_USERNS_MOUNTS, 'w') 
as userns_mounts:
2016-10-21 01:09:33 INFO config-changed IOError: [Errno 30] Read-only file 
system: '/sys/module/ext4/parameters/userns_mounts'
2016-10-21 01:09:33 ERROR juju.worker.uniter.operation runhook.go:107 hook 
"config-changed" failed: exit status 1

root@juju-456efd-13:~# touch /sys/module/ext4/parameters/temp-file
touch: cannot touch '/sys/module/ext4/parameters/temp-file': Read-only file 
system
root@juju-456efd-13:~# df -h /sys/module/ext4/parameters/userns_mounts
Filesystem      Size  Used Avail Use% Mounted on
sys                0     0     0    - /dev/.lxc/sys
root@juju-456efd-13:~# touch /home/ubuntu/temp-file
root@juju-456efd-13:~# ls /home/ubuntu/temp-file
/home/ubuntu/temp-file
root@juju-456efd-13:~# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/mitaka--vg-root  165G   47G  110G  30% /
none                         492K     0  492K   0% /dev
udev                          16G     0   16G   0% /dev/fuse
tmpfs                         16G     0   16G   0% /dev/shm
tmpfs                         16G   49M   16G   1% /run
tmpfs                        5.0M     0  5.0M   0% /run/lock
tmpfs                         16G     0   16G   0% /sys/fs/cgroup
tmpfs                        3.2G     0  3.2G   0% /run/user/112
tmpfs                        3.2G     0  3.2G   0% /run/user/1000
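
For reference, the traceback shows the charm's configure_lxd_host() doing an
unconditional write to that sysfs path. Here is a rough sketch of the kind of
guard that would at least make the read-only mount explicit; the os.access()
check, the message, and the 'Y' value are my assumptions for illustration, not
the charm's actual code:

import os

EXT4_USERNS_MOUNTS = '/sys/module/ext4/parameters/userns_mounts'

def enable_ext4_userns_mounts():
    # Sketch only: the charm opens this path for writing (per the traceback);
    # the guard, the message, and the 'Y' value here are illustrative.
    if not os.access(EXT4_USERNS_MOUNTS, os.W_OK):
        # Inside an unprivileged container /sys is usually mounted read-only,
        # so the write can only fail with EROFS (errno 30).
        print('skipping %s: not writable in this environment' % EXT4_USERNS_MOUNTS)
        return
    with open(EXT4_USERNS_MOUNTS, 'w') as userns_mounts:
        userns_mounts.write('Y')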

+

heather@mitaka:~$ nova boot --image d2eba22a-e1b1-4a2b-aa87-450ee9d9e492 
--flavor d --nic net-name=ubuntu-net --key-name keypair-admin xenial-instance
heather@mitaka:~/goose-work/src/gopkg.in/goose.v1$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks              |
+--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
| 80424b94-f24d-45ff-a330-7b67a911fbc6 | xenial-instance | ACTIVE | -          | Running     | ubuntu-net=10.101.0.8 |
+--------------------------------------+-----------------+--------+------------+-------------+-----------------------+

heather@mitaka:~$ nova delete 80424b94-f24d-45ff-a330-7b67a911fbc6
Request to delete server 80424b94-f24d-45ff-a330-7b67a911fbc6 has been accepted.
heather@mitaka:~$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+----------+
| ID                                   | Name            | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------+--------+------------+-------------+----------+
| 80424b94-f24d-45ff-a330-7b67a911fbc6 | xenial-instance | ERROR  | -          | Running     |          |
+--------------------------------------+-----------------+--------+------------+-------------+----------+
heather@mitaka:~$ nova show 80424b94-f24d-45ff-a330-7b67a911fbc6
…
| fault   

Re: What are the best practices for stop hook handling?

2016-10-20 Thread Rye Terrell
Thanks; I see what you mean - I'll see if I can find any examples.

On Thu, Oct 20, 2016 at 5:33 PM, Free Ekanayaka <
free.ekanay...@canonical.com> wrote:

> On 20 October 2016 at 23:16, Rye Terrell 
> wrote:
>
>> > Do you have a real world example at hand?
>>
>> No, why?
>>
>
> It's easier to reason about a specific, concrete case than to talk about
> theoretical situations. After looking at a few concrete situations we can
> come up with a more realistic understanding of the problem and figure out
> some solutions.
>
> One thing that I would note, though, is that if you have two co-located
> charms sharing a service, chances are that you might want to make that
> shared service a (colocated) charm by itself, which relates to the two
> consuming charms. In that situation there's no problem at all for the stop
> hook. This is just a hand-waving example; as said, concrete cases would help.
>


Re: What are the best practices for stop hook handling?

2016-10-20 Thread Free Ekanayaka
On 20 October 2016 at 23:16, Rye Terrell  wrote:

> > Do you have a real world example at hand?
>
> No, why?
>

It's easier to reason about a specific, concrete case than to talk about
theoretical situations. After looking at a few concrete situations we can
come up with a more realistic understanding of the problem and figure out
some solutions.

One thing that I would note, though, is that if you have two co-located
charms sharing a service, chances are that you might want to make that
shared service a (colocated) charm by itself, which relates to the two
consuming charms. In that situation there's no problem at all for the stop
hook. This is just a hand-waving example; as said, concrete cases would help.


Re: What are the best practices for stop hook handling?

2016-10-20 Thread Rye Terrell
> Do you have a real world example at hand?

No, why?

On Thu, Oct 20, 2016 at 2:26 PM, Free Ekanayaka <
free.ekanay...@canonical.com> wrote:

> On 20 October 2016 at 16:09, Rye Terrell 
> wrote:
>
>> > Subordinate charms only make sense when collocated. And I would argue that
>> subordinates are extremely common, at least in production environments.
>>
>> > In this context clean up is very important because it's not unusual
>> for operators to switch technologies. For example replace telegraf with 
>> node-exporter
>> or collectd.
>>
>> With that in mind, I'd like to reiterate one of my original questions:
>> how do we handle cleanup in the case where two or more colocated charms
>> have the same dependencies? In the case of background services, do we not
>> stop & disable them? Do we stop & disable them and expect the remaining
>> charms to repair the situation?
>>
>
> Do you have a real world example at hand?
>
>


Re: What are the best practices for stop hook handling?

2016-10-20 Thread Free Ekanayaka
On 20 October 2016 at 16:09, Rye Terrell  wrote:

> > Subordinate charms only make sense when collocated. And I would argue that
> subordinates are extremely common, at least in production environments.
>
> > In this context clean up is very important because it's not unusual for 
> > operators
> to switch technologies. For example replace telegraf with node-exporter
> or collectd.
>
> With that in mind, I'd like to reiterate one of my original questions: how
> do we handle cleanup in the case where two or more colocated charms have
> the same dependencies? In the case of background services, do we not stop &
> disable them? Do we stop & disable them and expect the remaining charms to
> repair the situation?
>

Do you have a real world example at hand?


AWS US East (Ohio) Region now supported for Juju 2.x

2016-10-20 Thread Aaron Bentley
The Juju QA & Release team is pleased to announce support for Amazon's
new US East (Ohio) Region, aka us-east-2.

To use it with 2.0.0, just run "juju update-clouds" once.  You will see
the message:
Updated your list of public clouds with 1 cloud region added:

added cloud region:
- aws/us-east-2

After that, the new region will work the same as any other.

Support for the 1.x series is also planned, but will happen as part of a
Juju release.

Enjoy!

Aaron





Re: What are the best practices for stop hook handling?

2016-10-20 Thread Rye Terrell
> Subordinate charms only make sense when collocated. And I would argue that
> subordinates are extremely common, at least in production environments.
>
> In this context clean up is very important because it's not unusual for
> operators to switch technologies. For example replace telegraf with
> node-exporter or collectd.

With that in mind, I'd like to reiterate one of my original questions: how
do we handle cleanup in the case where two or more colocated charms have
the same dependencies? In the case of background services, do we not stop &
disable them? Do we stop & disable them and expect the remaining charms to
repair the situation?
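
For concreteness, the kind of thing I'm picturing is a stop hook that only
stops and disables a shared background service once no other colocated charm
still claims it. This is a purely hypothetical sketch; the telegraf service
name and the marker-file convention are made up for illustration, not an
existing interface:

import glob
import os
import subprocess

CLAIMS_DIR = '/etc/telegraf/claims.d'  # hypothetical: one marker file per consuming charm

def stop():
    # Drop this charm's claim on the shared service.
    my_claim = os.path.join(CLAIMS_DIR, 'my-charm')
    if os.path.exists(my_claim):
        os.remove(my_claim)
    # Only stop and disable the service if no colocated charm still claims it;
    # otherwise leave it running for the others.
    if not glob.glob(os.path.join(CLAIMS_DIR, '*')):
        subprocess.check_call(['systemctl', 'stop', 'telegraf'])
        subprocess.check_call(['systemctl', 'disable', 'telegraf'])

Whether that is better than stopping unconditionally and expecting the
surviving charms to repair things is exactly what I'm trying to pin down.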

On Thu, Oct 20, 2016 at 3:32 AM, Jacek Nykis 
wrote:

> On 19/10/16 16:15, Marco Ceppi wrote:
> >> 2. Don't colocate units if at all possible.  In separate containers on
> the
> >> same machine, sure.  But there's absolutely no guarantee that colocated
> >> units won't conflict with each other. What you're asking about is the
> very
> >> problem colocation causes. If both units try to take over the same
> port, or
> >> a common service, or write to the same file on disk, etc... the results
> >> will very likely be bad.  Stop hooks should clean up everything they
> >> started.  Yes, this may break other units that are colocated, but the
> >> alternative is leaving machines in a bad state when they're not
> colocated.
> >>
> >
> > Colocation is a rare scenario, a more common one is manual provider.
>
> Subordinate charms only make sense when collocated. And I would argue
> that subordinates are extremely common, at least in production
> environments.
>
> In any production deployment I expect some form of monitoring (nrpe,
> telegraf). Many deployments will also use logstash-forwarder,
> landscape-client, ntp, container-log-archive and other subordinate charms.
>
> So you are looking at 3-4 or more services on each juju machine,
> including LXC/LXD guests and manually provisioned systems.
>
> In this context clean up is very important because it's not unusual for
> operators to switch technologies. For example replace telegraf with
> node-exporter or collectd.
>
> Regards,
> Jacek
>
>


Re: What are the best practices for stop hook handling?

2016-10-20 Thread Jacek Nykis
On 19/10/16 16:15, Marco Ceppi wrote:
>> 2. Don't colocate units if at all possible.  In separate containers on the
>> same machine, sure.  But there's absolutely no guarantee that colocated
>> units won't conflict with each other. What you're asking about is the very
>> problem colocation causes. If both units try to take over the same port, or
>> a common service, or write to the same file on disk, etc... the results
>> will very likely be bad.  Stop hooks should clean up everything they
>> started.  Yes, this may break other units that are colocated, but the
>> alternative is leaving machines in a bad state when they're not colocated.
>>
> 
> Colocation is a rare scenario, a more common one is manual provider.

Subordinate charms only make sense when collocated. And I would argue
that subordinates are extremely common, at least in production environments.

In any production deployment I expect some form of monitoring (nrpe,
telegraf). Many deployments will also use logstash-forwarder,
landscape-client, ntp, container-log-archive and other subordinate charms.

So you are looking at 3-4 or more services on each juju machine,
including LXC/LXD guests and manually provisioned systems.

In this context clean up is very important because it's not unusual for
operators to switch technologies. For example replace telegraf with
node-exporter or collectd.

Regards,
Jacek


