Re: nodejs reactive layer

2015-10-20 Thread Charles Butler
Greetings Adam,

Replies inline - and to keep things consistent, I will be referring to the
newer nomenclature of the charm build process/tooling:



On Mon, Oct 19, 2015 at 9:23 AM, Adam Stokes wrote:

> I'm looking to get my nodejs layer[1] included at
> http://interfaces.juju.solutions and wanted to make sure that this charm
> can be properly tested. I noticed that a Makefile is generated during
> `charm-compose`
>

The Makefile can be overridden in upper layers if you have one that you're
using in your charm layer. The default behavior when building charms is the
following:

If files conflict, the topmost layer's file is taken in lieu of the lower
layer's file.

If there is no conflict, the files are simply copied across.

Two files are special cases to this rule: config.yaml and metadata.yaml's
relations. In those cases the files are merged according to their respective
strategies.

There are other strategies available; however, I'm not as intimately
familiar with these and hesitate to recommend changing the defaults until we
have further documentation around the merge strategies shipping with `charm
build`.

That being said, I've noticed some trouble with the shipped Makefile, which
assumes it is for the base layer project, and I feel it is a good candidate
for simply being replaced in your layer.


> and was curious how I can utilize that with amulet or whatever the
> preferred way to test charms is. The amulet documentation doesn't really go
> into how to actually run the tests locally so I can easily verify my charm.
> Is there a how-to on the preferred (blessed) way of testing charms locally?
>
>
This is a really good point; we need to update the docs to reflect that
`bundletester` is our preferred method of testing charms. It serves a few
purposes:

- Running the gamut of checks that we use in CI
- charm proof
- linting
- the make unit_test target (if your charm has unit tests)
- anything in tests/* that is chmod +x (see the sketch below)
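
For example, here is a minimal sketch of an executable test under tests/
that bundletester would pick up. The charm name, config key, and assertion
are placeholders based on the layer described in this thread, not something
it actually ships:

#!/usr/bin/env python3
# Hypothetical tests/10-deploy script; remember to chmod +x it.
import amulet

d = amulet.Deployment(series='trusty')
d.add('nodejs')
d.configure('nodejs', {'version': '4.x'})
d.setup(timeout=900)

# Assert against the deployed unit via the sentry.
node = d.sentry['nodejs'][0]
output, code = node.run('node --version')
assert code == 0, 'node is not installed: %s' % output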



> For the curious my charm layer simply installs Node.js based on whatever
> version you set in the config (0.10, 0.12, 4.x) and will be used when
> deploying whatever node app you require.
>
> Oh one other thing, I didn't see how to pass config options in amulet's
> deploy so that I can test my different default versions from the config.
>
>
You can set the version on the charm like the following example code:

import amulet
d = amulet.Deployment(series='trusty')
d.add('mycharm')
# Config options are passed as a dict of option name to value.
d.configure('mycharm', {'version': '1.0.0'})
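
From there the usual amulet flow, as I understand it, is to call d.setup()
to actually deploy and then assert against the unit through d.sentry - the
same pattern as the tests/ sketch above.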





> 1: https://github.com/battlemidget/juju-layer-node
>
>

On Mon, Oct 19, 2015 at 11:08 AM, Adam Stokes wrote:

> Additionally I was hoping I could just layer my application on top of the
> nodejs layer and have my node version available to the application layer so
> it would look like:
>
> - base layer
> - node layer
> - application api layer
>
This is exactly what we are solving for: abstract out the common bases that
can be re-used across projects. This eases updates in the future for
everyone involved. +1 to this sentiment.


> The application api layer would then have access to node/npm during charm
> deployment and be able to react to node's layer state. However, in testing
> this out it doesn't look like my application layer ever gets executed once
> the node layer has finished.
>
>
I need to look deeper at how you have this set up, but I imagine what's
happened is you've defined two layers that execute under the same hook
context. Take, for example:

layer0:
@hook('install')
layer1:
@hook('install')

This lends itself to troubling race conditions where layer1 may execute
before layer0. The preferred method is to set a reactive state and react to
the new state:

layer0:
@hook('install')
... stuff...
reactive.set_state('thing.available')

layer1:
@when('thing.available')
.. stuff..

This will ensure your methods in layer1 only execute after the dependent
layer0 has completed its run and set the state. Please note that you cannot
decorate methods across the hook context and reactive states. Allow me to
illustrate what not to do:

@hook('config-changed')
@when('thing.available')

This will fail 100% of the time and never execute, and it's not terribly
obvious that this is the case.
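
Applied to your node layer, a rough sketch of the pattern (the state name
'nodejs.available' and the handler/module names are placeholders, not what
your layer actually exports) would look something like:

# In the node layer, e.g. reactive/nodejs.py
from charms.reactive import hook, set_state

@hook('install')
def install_node():
    # ... install node/npm from the configured version ...
    set_state('nodejs.available')

# In the application layer, e.g. reactive/api.py
from charms.reactive import when

@when('nodejs.available')
def deploy_api():
    # Safe to rely on node/npm being present at this point.
    pass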


> My thinking was that in order to test multiple node versions for my
> application all I would need to do is deploy the charm with the node layers
> default version configuration. To give you a better idea this is my api
> service layer for an application I have:
>
> https://github.com/wffm/juju-layer-wffmapi
>
> That layer inherits from my node layer which inherits the base layer. I
> was hoping to keep each layer's responsibility to just one thing, ie node
> layer only installs node.js and exposes its installed state.
>

This sounds completely reasonable. Perhaps ship some extra bits in the node
layer if applicable to handle common/routine node tasks, and include
actions to perform things like 

Re: Neutron networking question with openstack-base

2015-10-20 Thread Daniel Bidwell
In juju-gui I have the ext-port set to eth3, my public nic.

With all networks and routers deleted, "juju get neutron-gateway | grep
ext-port" returns "ext-port:" with nothing after it, yet juju-gui still has
the correct eth3 value.

I add the provider-router and ext_net with:
./neutron-ext-net -g 143.207.94.1 -c 143.207.94.0/24 -f
143.207.94.10:143.207.94.254 ext_net

ext_net_subnet says that it is UP and active.
The port associated with the ext_net_subnet is UP, but the status is
down, and "juju get neutron-gateway | fgrep ext-port" still returns no
nic.

"juju set neutron-gateway ext-port=eth3" now returns
WARNING the configuration setting "ext-port" already has the value
"eth3", even though the get doesn't return it and the port is still down.

But I can now ping the ext_net interface IP address and the rest of my
physical machines that are on the 143.207.94.0/24 network, just not the
floating-ip.

The internal network has 2 ports, the compute:None which is UP and
Active, and the network:router_interface which is UP and Down.  This is
probably where my problem lies, but I am not sure how to go about fixing
it.

The provider-router has 2 interfaces, on the ext_net and internal, both
have an Admin State of UP, but a status of down.  My vm has a
floating-ip of 143.207.94.11, but I can't ping it or ssh to it.

When the vm boots, it appears to have failed to get an IP address. The
ci-info console line says that eth0 is up, but has no address or mask.
I don't think I can get into the vm from the console.

How do I tell the exact version of OpenStack that I am running?  I have
Ubuntu 14.04.3 LTS installed and have installed openstack with the
openstack-base juju charm.

On Mon, 2015-10-19 at 11:10 +0100, Liam Young wrote:
> Hi Daniel,
> 
> Have you set 'ext-port' in the neutron-gateway charm?  ext-port 
> specifies the external port the neutron-gateway charm should use for 
> routing of instance traffic to the external public network. If eth1 on 
> the neutron-gateway is plumbed into 143.207.94.0/24 network then you 
> would set it with:
> 
> juju set neutron-gateway ext-port=eth1
> 
> I'd also take a look at the Network Topology page in the horizon 
> dashboard and check that ext_net and your private net are both plumbed 
> into your router and that your instances are plumbed into your private 
> network. I'm not sure which version of Openstack you are deploying but 
> certainly in Icehouse I have seen horizon claim the state of 
> network:router_gateway to be 'down' when it is actually working.
> 
> Do you see the floating IPs that you have assigned your instance(s) 
> listed when you look at the 'Attached Devices' in the detail tab of 
> ext_net ?
> 
> Also check that when the instance booted DHCP succeeded. You can do this 
> through horizon by going to Compute -> Instances -> Click on Instance 
> Name -> Log tab. The message you are looking for will differ depending 
> on the image you booted the instance from but something like:
> 
> ci-info: eth0  : 1 192.168.0.4 255.255.255.0   fa:16:3e:ef:d8:53
> 
> or
> 
> Lease of 192.168.0.2 obtained, lease time 86400
> 
> Thanks,
> Liam Young
> 
> On 15/10/15 20:49, Daniel Bidwell wrote:
> > I have installed a small openstack cloud with the openstack-base charm.
> >
> > After everything came up I ran the neutron-ext-net and
> > neutron-tenant-net.  I ran neutron-ext-net with -g 143.207.94.1 -c
> > 143.207.94.0/24 -f 143.207.94.10:143.207.94.254 ext_net
> >
> > Each machine in my cloud has an interface on the 143.207.94.0/24
> > network.  When I look at the horizon dashboard, under
> > admin->networks->ext_net it says that the port on 143.207.94.x as
> > network:router_gateway has an admin state of UP, but a Status of Down
> > and I have no access from the external addresses on 143.207.94.0 to the
> > internal addresses on the 143.207.94.0 network.  I have tried to set the
> > access rules for ping and ssh, but nothing goes through.
> >
> > How do I find out why the two different networks don't connect?  How do
> > I make them connect?
> 

-- 
Daniel Bidwell 

