So, I finally got back to revisiting this. I'm still not getting
nova-network installed on my compute nodes.

So you can get an idea of what I have setup so far, here's the "juju
status" output from my current environment: http://pastebin.com/0D1tngrm

I did originally have the nova-cloud-controller network-manager config set
to "FlatManager" (which really is what I want) and have since set it to
"FlatDHCPManager" with "juju set nova-cloud-controller
network-manager=FlatDHCPManager". It's probably worth noting that the
nova.conf on my nova-cloud-controller node contained "network_manager =
nova.network.manager.FlatDHCPManager" even with "FlatManager" set in the
juju config.
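
(In case it helps when comparing against your lab run: I've been
sanity-checking the charm setting against what actually lands on disk with
something along these lines - exact output layout may vary by juju version:

    juju get nova-cloud-controller | grep -A 4 network-manager
    grep network_manager /etc/nova/nova.conf

with the second one run on the nova-cloud-controller unit.)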

So, based on what you guys have said, I should be expecting the
nova-network package to be installed on the compute nodes, and yet:

root@a88tx:~# aptitude search nova* | grep ^i
i A nova-common                     - OpenStack Compute - common files
i   nova-compute                    - OpenStack Compute - compute node base
i   nova-compute-kvm                - OpenStack Compute - compute node (KVM)
i A nova-compute-libvirt            - OpenStack Compute - compute node libvirt s
i A python-nova                     - OpenStack Compute Python libraries
If I install nova-network by hand I can get things working-ish, but it's
still not quite right.
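
(The by-hand part is nothing more involved than roughly:

    apt-get install nova-network
    service nova-network restart

on the compute node, so presumably it's still missing whatever configuration
the charm would normally write out.)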

Also, I looked at the contents of /var/log/juju/unit-nova-compute-0.log on
the compute node (contents here http://pastebin.com/EAymFdG8 ) and at the
end of the log it tries to start nova-network, but the system doesn't know
about that service.
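
(A quick way to see that on the compute node itself, assuming the stock
upstart packaging, is something like:

    dpkg -l nova-network
    ls /etc/init/nova-network.conf

both of which should show nothing until the package is put on by hand.)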

Also, if I issue the command you reference above to create the initial
network on the controller node, I get a syntax error (help text snipped):

root@juju-machine-0-lxc-2:~# nova-manage network create private 172.16.0.0/12 1 256 --bridge=br100 --bridge-interface=eth1 --multi-host=T
nova-manage: error: unrecognized arguments: --bridge-interface=eth1 --multi-host=T

Instead I resorted to:
nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=br100
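
(That at least succeeds, and "nova-manage network list" can be used to
double-check what actually got created.)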

But this probably isn't the place to dive further into that portion of
things.

QH




On Thu, May 1, 2014 at 7:27 AM, James Page <james.p...@ubuntu.com> wrote:

> On 01/05/14 13:13, Quentin Hartman wrote:
> >> nova-network should be installed on nova-compute if one of the nova
> >> network options is used in the nova-cloud-controller charm.  I'll run
> >> this option through our lab again today to make sure that it is.
> >
> > Thanks again for the additional info. I'd love to hear what your
> > results are and appreciate you looking into it. Is there anything
> > that need be done to specify that other than using the flatmanager
> > or flatdhcpmanager options?
>
> I ran a nova-network topology test - nova-network and nova-api
> services get enabled on the nova-compute nodes, as specified for
> multi-host mode.
>
> Looking at our config, we just set FlatDHCPManager for the
> network-manager option in the nova-cloud-controller charm.
>
> >> Note that pushing multiple charms onto the same service units
> >> without LXC containers is not supported, other than for the
> >> following charms - which must not be in LXC containers:
> >>
> >> nova-compute ceph/ceph-osd swift-storage
> >>
> >> This allows you to maximise storage/compute usage in deployments
> >> - but it won't fit everyone's requirements.
> >
> > That is the solution I hit on as well, and so I will be exploring
> > LXC when I return to this project next week. For my purposes, having
> > MAAS, juju, and the control services for OpenStack on separate
> > boxes is wasteful. My deployment isn't large enough to need that
> > much hardware in the control layer.
>
> We run an internal test cloud on 6 nodes; 5 compute/ceph/swift nodes
> and 1 control node with everything else in LXC containers.  Juju
> controls the creation of the lxc containers:
>
>    juju deploy cinder --to lxc:0
>
> Deploys the cinder charm to an LXC container on physical machine 0
> (yes - that is the bootstrap node :-)).
>
> We also push the quantum-gateway charm onto the bootstrap node without
> LXC for north/south network connectivity for instances.
>
> >> Nick, who also wrote the MAAS and Juju documentation, has been
> >> working on an OpenStack + Juju + MAAS guide this cycle - it's in
> >> final review and hopefully should be out in the next couple of
> >> weeks.
> >
> > Is there a way I can help with and/or review the WIP? I'm hoping to
> > have this deployment in service in a couple of weeks, and I'm sure
> > the information in the guide would be quite valuable to me, even if
> > it's not yet perfect.
>
> Not sure - I'll let Nick answer this question.
>
>
> --
> James Page
> Ubuntu and Debian Developer
> james.p...@ubuntu.com
> jamesp...@debian.org