Flavio wrote: The reasoning, as explained in another
email, is that from a use-case perspective, strict ordering won't hurt
you if you don't need it, whereas having to implement it on the client
side because the service doesn't provide it can be a PITA.
The reasoning is flawed though. If
I'm interested in how this relates/conflicts with the TripleO goal of using
OpenStack to deploy OpenStack.
It looks like (maybe just superficially) that Kubernetes is simply a
combination of (nova + docker driver) = container scheduler and (heat) =
orchestration. They both schedule
There are other options. 3 docker containers with ceph in them, perhaps.
Kevin
From: Steven Dake
Sent: Tuesday, September 23, 2014 7:21:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project -
I've had good luck recently enabling heat support and then tweaking the trove
default template to use a standard image and install the guest agent at launch.
So no custom image needed.
Thanks,
Kevin
From: Swapnil Kulkarni
Sent: Wednesday, September 24, 2014
09, 2013 8:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources
On Dec 5, 2013, at 8:11 PM, Fox, Kevin M wrote:
I think the security issue can be handled by not actually giving the
underlying resource to the user
Yeah. It's likely that the metadata server stuff will get more scalable/hardened
over time. If it isn't enough now, let's fix it rather than coming up with a new
system to work around it.
I like the idea of using the network since all the hypervisors have to support
network drivers already. They
to Marconi's team to handle security issues while it is part
of their mission statement to deliver a messaging service between VMs.
On 12 Dec 2013 22:09, Fox, Kevin M
kevin@pnnl.gov wrote:
Yeah, I think the extra nic is unnecessary too. There already
[l...@redhat.com]
Sent: Monday, December 16, 2013 8:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal
On Fri, Dec 13, 2013 at 11:32:01AM -0800, Fox, Kevin M wrote:
I hadn't thought about that use case, but that does
Someone's gotta make/maintain the trove/savanna images though. They usually are
built from packages. If there is a unified agent, then it only has to be
packaged once. If there is one per special type of agent, it's one package per
special type of agent. I don't think there is a free lunch here,
How about a different approach then... OpenStack has thus far been very
successful providing an API and plugins for dealing with things that cloud
providers need to be able to switch out to suit their needs.
There seems to be two different parts to the unified agent issue:
* How to get rpc
Sounds very useful. Would there be a diskimage-builder flag then to say you
prefer packages over source? Would it fall back to source if you specified
packages and there were only source-install.d for a given element?
Thanks,
Kevin
From: James Slagle
I was going to stay silent on this one, but since you asked...
/me puts his customer hat on
We source OpenStack from RDO for the packages and additional integration
testing that comes from the project instead of using OpenStack directly. I was
a little turned off by TripleO when I saw it
One of the major features using a distro over upstream gives is integration.
RHEL 6 behaves differently than Ubuntu 13.10. Sometimes it takes a while to fix
upstream for a given distro, and even then it may not even be accepted ("the
distro's too old, go away"). Packages allow a distro to
Another piece to the conversation I think is update philosophy. If you are
always going to require a new image and no customization after build ever,
ever, the messiness that source installs usually cause in the file system image
really
doesn't matter. The package system allows you to easily update,
Let me give you a more concrete example, since you still think one size fits
all here.
I am using OpenStack on my home server now. In the past, I had one machine with
lots of services on it. At times, I would update one service and during the
update process, a different service would break.
Option 5:
If the implementation works well, but it's just a confusing UI, you could always
change the code so it filters out the floating-ip ports from view. Make them a
pure implementation detail that a user never sees.
Kevin
From: Salvatore Orlando
What about a configuration option on the volume for delete type? I can see some
possible options:
* None - Don't clear on delete. It's junk data for testing and I don't want to
wait.
* Zero - Return zeros from subsequent reads, either by zeroing on delete or by
faking zero reads initially.
*
@lists.openstack.org
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of
cinder.
On 01/15/2014 06:00 PM, Fox, Kevin M wrote:
What about a configuration option on the volume for delete type? I can see
some possible options:
* None - Don't clear on delete. It's junk data for testing
From: Fox, Kevin M kevin@pnnl.gov
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 01/15/2014 06:06 PM
Subject: Re: [openstack-dev] Proposal for dd disk i/o
Yeah, I think the evil firmware issue is separate and should be solved
separately.
Ideally, there should be a mode you can set the bare metal server into where
firmware updates are not allowed. This is useful to more folks than just
baremetal cloud admins. Something to ask the hardware vendors
Another tricky bit left is how to handle service restarts as needed.
Thanks,
Kevin
From: Dan Prince [dpri...@redhat.com]
Sent: Wednesday, January 22, 2014 10:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re:
I think most of the time taken to reboot is spent in bringing down/up the
services though, so I'm not sure what it really buys you if you do it all. It
may let you skip the crazy long bootup time on enterprise hardware, but that
could be worked around with kexec on the full reboot method too.
Maybe I misunderstand, but I thought:
kexec - lets you boot a new kernel/initrd starting at the point a boot loader
would, skipping the BIOS init. No previously running processes survive into the
new boot, just like a normal reboot.
CRIU - lets you snapshot/restore running processes.
While
Would docker work for this?
Assume every service gets its own docker container. A deployed node is then a
docker base image with a set of service containers. Updating an image could be
(a rough sketch follows the steps):
Check if the base part of the image updated (kernel, docker). If so, do a full
redeploy of the node.
Sync each container
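Something like this, say (purely illustrative; the Node fields and helper
functions are made-up names, not a real API):

from dataclasses import dataclass, field

@dataclass
class Node:
    base_image_id: str  # the kernel + docker layer
    service_containers: dict = field(default_factory=dict)  # name -> tag

def update_node(node: Node, new_base_id: str, new_tags: dict) -> None:
    # Base layer (kernel, docker itself) changed: rebuild the whole node.
    if node.base_image_id != new_base_id:
        redeploy_node(node, new_base_id, new_tags)
        return
    # Base unchanged: just re-sync each service container in place.
    for name, tag in new_tags.items():
        sync_container(name, tag)

def redeploy_node(node, base_id, tags):
    print(f"full redeploy: base {base_id}, services {sorted(tags)}")

def sync_container(name, tag):
    print(f"sync container {name} -> {tag}")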
Would it make sense to simply have the neutron metadata service re-export every
endpoint listed in keystone at /openstack/api/endpoint-name?
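From the guest side, that might look something like this (purely a sketch:
169.254.169.254 is the usual metadata address, but the /openstack/api/... route
and its JSON response are the proposal here, not anything that exists today):

import json
import urllib.request

METADATA = "http://169.254.169.254/openstack/api"

def lookup_endpoint(service: str) -> str:
    # Hypothetical route; assumes the re-export returns JSON with a url field.
    with urllib.request.urlopen(f"{METADATA}/{service}") as resp:
        return json.loads(resp.read())["url"]

# e.g. lookup_endpoint("heat") -> the heat endpoint from the keystone catalog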
Thanks,
Kevin
From: Murray, Paul (HP Cloud Services) [pmur...@hp.com]
Sent: Friday, January 24, 2014 11:04 AM
To:
The reuse case can be handled by using a nested stack. The scaled_resource
type property would allow that to happen in the first arrangement. I don't
think you can specify a resource type/nested stack with a LaunchConfig, which
makes it much less preferable. So it's less flexible and
Yeah, we are running it on RHEL6.5 and it seems to just work. Haven't tried
CentOS 6.5 specifically but should work the same.
Thanks,
Kevin
From: Robert Nettleton [rnettle...@hortonworks.com]
Sent: Friday, January 31, 2014 1:34 PM
To:
Another scaling down/update use case:
Say I have a pool of ssh servers for users to use (compute cluster login nodes).
Autoscaling up is easy. Just launch a new node and add it to the load balancer.
Scaling down/updating is harder. It should ideally:
* Set the admin state on the load balancer
I think a lot of projects don't bother to gate, because it's far too much work
to set up a workable system.
I can think of several projects I've worked on that would benefit from it but
haven't because of time/cost of setting it up.
If I could just say solum create project foo and get it, I'm
I submitted a blueprint a while back that I think is relevant:
https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas
Currently heat autoscaling doesn't interact with Neutron lbaas and the
configurable bits aren't configurable enough to allow it without code changes
as far as I
Hi Chris,
That's great to hear. I'm looking forward to installing icehouse and testing
that out. :)
Thanks,
Kevin
From: Chris Armstrong [chris.armstr...@rackspace.com]
Sent: Wednesday, March 12, 2014 1:29 PM
To: Fox, Kevin M; OpenStack Development Mailing List
Funny this topic came up. I was just looking into some of this yesterday.
Here's some links that I came up with:
*
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-qemu-ga-freeze-thaw.html
- Describes how
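For reference, driving the freeze/thaw from the host with libvirt-python looks
roughly like this (a sketch; assumes qemu-guest-agent is running in the guest,
and the domain name is made up):

import libvirt
import libvirt_qemu

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("my-guest")  # hypothetical domain name

# Quiesce guest filesystems, take the snapshot, then thaw.
libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-fsfreeze-freeze"}', 30, 0)
try:
    pass  # take the disk snapshot here
finally:
    libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-fsfreeze-thaw"}', 30, 0)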
Can someone please give more detail into why MongoDB being AGPL is a problem?
The drivers that Marconi uses are Apache2 licensed, MongoDB is separated by the
network stack and MongoDB is not exposed to the Marconi users so I don't think
the 'A' part of the GPL really kicks in at all since the
: [openstack-dev] [Marconi] Why is marconi a queue implementation vs
a provisioning API?
On 03/19/2014 02:24 PM, Fox, Kevin M wrote:
Can someone please give more detail into why MongoDB being AGPL is a
problem? The drivers that Marconi uses are Apache2 licensed, MongoDB is
separated by the network
I added our priorities. I hope it's formatted well enough. I just took a stab in
the dark.
Thanks,
Kevin
From: Eugene Nikanorov [enikano...@mirantis.com]
Sent: Tuesday, April 01, 2014 3:02 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev]
-dev@lists.openstack.org
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question
On 10/09/2013 08:43 PM, Fox, Kevin M wrote:
Thanks for the docs. It looks like I got through all of that already; it's the
authentication module part that is throwing me.
I managed to manually get a token
From: Simo Sorce [s...@redhat.com]
Sent: Monday, October 14, 2013 6:58 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question
On Mon, 2013-10-14 at 14:31 -0700, Fox, Kevin M wrote:
Hi Adam,
I was trying to get both kerberos
Has the case been considered where REMOTE_USER is used with authentication
mechanisms where the username is an email address? It will have to keep the
@domain part because that's the only thing that makes it unique.
Thanks,
Kevin
From: Álvaro López
I'm not sure how you could avoid dependencies in any network configuration
worth dumping and restoring.
One case where I'd like to use the functionality you list is the following:
I have an external network, and I create a private network per tenant and
attach it via a router to the public
There is a high priority approved blueprint for a Neutron PoolMember:
https://blueprints.launchpad.net/heat/+spec/loadballancer-pool-members
Thanks,
Kevin
From: Christopher Armstrong [chris.armstr...@rackspace.com]
Sent: Thursday, November 21, 2013 9:44 AM
I agree that maybe an external file might be better suited to extra metadata.
I've found it rare that you ever use just one template per stack. Usually it is
a set of nested templates. This would allow for advanced ui features like an
icon for the stack.
On the other hand, there is the
This use case is sort of a provenance case: where did the stack come from, so I
can find out more about it.
You could put a git commit field in the template itself but then it would be
hard to keep updated.
How about the following:
Extend heat to support setting a scmcommit metadata item on
Hmm... Yeah. When you tell the heat client the URL to a template file, you could
set a flag telling the heat client it is in a git repo. It could then
automatically look for repo information and set a stack metadata item pointing
back to it.
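A minimal sketch of that client-side lookup (the metadata key names here are
made up):

import os
import subprocess

def scm_metadata(template_path: str) -> dict:
    repo_dir = os.path.dirname(os.path.abspath(template_path))
    def git(*args):
        return subprocess.check_output(("git",) + args,
                                       cwd=repo_dir, text=True).strip()
    return {
        "scm_commit": git("rev-parse", "HEAD"),
        "scm_origin": git("config", "--get", "remote.origin.url"),
    }

# The client could then pass this dict along as stack metadata at create time.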
If you didn't care about taking a performance hit, heat
Hi all,
I just want to run a crazy idea up the flagpole. TripleO has the concept of an
undercloud and an overcloud. In starting to experiment with Docker, I see a
pattern starting to emerge.
* As a User, I may want to allocate a BareMetal node so that it is entirely
mine. I may want to run multiple
I've been anxious to try out Barbican, but haven't had quite enough time to try
it yet. But finding out it won't work with Qpid makes it unworkable for us at
the moment. I think a large swath of the OpenStack community won't be able to
use it in this form either.
Thanks,
Kevin
What is the difference between Heater, Cloudify
(http://appcatalog.cloudifysource.org/#/?tumblr) and Murano
(https://wiki.openstack.org/wiki/Murano)?
If Heater is intended to be a subset of Cloudify/Murano that they both would
use, it might be good to start off like the Solum folks are?
My 2 cents: Glance currently deals with single-file images.
A user is going to want the heat repo to operate at a stack level, i.e., I want
to launch stack foo.
For all but the most trivial cases, a stack is made up of more than one
template. These templates should be versioned as a set (stack),
From: Mark McLoughlin [mar...@redhat.com]
Sent: Thursday, December 05, 2013 1:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources
Hi Kevin,
On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:
Hi all,
I just
Another option is this:
https://github.com/cloudbase/cloudbase-init
It is Python-based on Windows, rather than .NET.
Thanks,
Kevin
From: Sandy Walsh [sandy.wa...@rackspace.com]
Sent: Friday, December 06, 2013 12:12 PM
To: openstack-dev@lists.openstack.org
I'm not seeing anything here about non-HTTP(S) load balancing. We're interested
in load balancing SSH, FTP, and other services too.
Thanks,
Kevin
From: Samuel Bercovici [samu...@radware.com]
Sent: Sunday, April 06, 2014 5:51 AM
To: OpenStack Development
I've seen unusable error messages out of heat as well. I've been telling users
(our ops guys) to look at the heat-engine logs when it happens, and usually it's
fairly apparent what is wrong with their templates.
In the future, should I report each of these I see as a new bug or add each to
the
Maybe an Intel AMT driver too:
http://www.intel.com/content/www/us/en/architecture-and-technology/intel-active-management-technology.html
You could use desktop-class machines with ironic then.
Kevin
From: Devananda van der Veen [devananda@gmail.com]
Sent:
I've independently been looking at something similar. Some things of
interest:
https://blueprints.launchpad.net/nova/+spec/quiesced-image-snapshots-with-qemu-guest-agent
Particularly interesting to trove may be this example hook:
Different distros move the binaries and services too. Ubuntu/Debian does
/usr/sbin/apache2, not httpd. The service is also named apache2, not httpd.
So, I think distro specific sets of packages are somewhat unavoidable.
Now, this use case might be a good case for supporting:
+1
And since most of the monitoring systems have standardized on supporting Nagios
plugins, it would be great if it supported them too.
Thanks,
Kevin
From: Alexandre Viau [alexandre.v...@savoirfairelinux.com]
Sent: Thursday, May 01, 2014 2:17 PM
To:
I have a cloud that is using VXLANs over InfiniBand. It's working great.
Now, I have a second system, a Lustre cluster, that I would like to make
available to a tenant in the cloud. I can't bridge into this network since it's
IB. Routing onto it is also proving to be tricky...
One idea I had was
+1
From: Gordon Sim [g...@redhat.com]
Sent: Wednesday, September 24, 2014 10:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zaqar] The horse is dead. Long live the horse.
On 09/24/2014 06:07 PM, Clint
Doesn't nova with a docker driver and heat autoscaling handle cases 2 and 3 for
control jobs? Has anyone tried yet?
Thanks,
Kevin
From: Angus Lees [g...@inodes.org]
Sent: Wednesday, September 24, 2014 6:33 PM
To: openstack-dev@lists.openstack.org
Subject:
Then you still need all the kubernetes api/daemons for the master and slaves.
If you ignore the complexity this adds, then it seems simpler than just using
openstack for it. But really, it still is an under/overcloud kind of setup;
you're just using kubernetes for the undercloud, and openstack
Why can't you manage baremetal and containers from a single host with
nova/neutron? Is this a currently missing feature, or have the development
teams said they will never implement it?
Thanks,
Kevin
From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday,
Ah. So the goal of project Kolla then is to deploy OpenStack via Docker using
whatever means works, not to deploy OpenStack using Docker+Kubernetes, where
the first stab at an implementation is using Kubernetes. That seems like a much
more reasonable goal to me.
Thanks,
Kevin
-----Original Message-----
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: Thursday, September 25, 2014 9:35 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker
First, Kevin, please try to figure
-----Original Message-----
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: Thursday, September 25, 2014 9:44 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker
Excerpts from Fox, Kevin M's
-----Original Message-----
From: Angus Lees [mailto:gusl...@gmail.com] On Behalf Of Angus Lees
Sent: Thursday, September 25, 2014 9:01 PM
To: openstack-dev@lists.openstack.org
Cc: Fox, Kevin M
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using
Has anyone figured out a way of having a floating-IP-like feature with docker,
so that you can have rabbitmq, mysql, or ceph mons at fixed IPs and be able
to migrate them around from physical host to physical host and still have them
at fixed locations that you can easily put in static config
Absolutely this needs splitting out. I ran into an issue a few years ago with
this antipattern with the MythTV folks. The MythTV client on my laptop got
upgraded and it was overly helpful in that it connected directly to the
database and upgraded the schema for me, breaking the server, and all
Same thing works with cloud-init too...
I've been waiting on systemd working inside a container for a while. It seems
to work now.
The idea being it's hard to write a shell script to get everything up and
running with all the interactions that may need to happen. The init system's
already
OK, why are you so down on running systemd in a container?
Pacemaker works, but it's kind of a pain to set up compared to just yum
installing a few packages and setting init to systemd. There are some benefits
for sure, but if you have to force all the docker components onto the same
physical
I'm not arguing that everything should be managed by one systemd; I'm just
saying, for certain types of containers, a single docker container with systemd
in it might be preferable to trying to slice it unnaturally into several
containers.
Systemd has invested a lot of time/effort to be able
docker exec would be awesome.
So... what's Red Hat's stance on docker upgrades here?
I'm running CentOS 7, and docker's topped out at
docker-0.11.1-22.el7.centos.x86_64.
(though Red Hat package versions don't always reflect the upstream version)
I tried running the docker 1.2 binary from docker.io but
use an OS::Neutron::PoolMember instead. Then each member template can add
itself to the pool.
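Roughly, each member template would carry a resource like this (the template
shown as a Python dict for illustration; the pool_id parameter and port 80 are
assumptions):

member_template = {
    "heat_template_version": "2013-05-23",
    "parameters": {
        "pool_id": {"type": "string"},
        "server_ip": {"type": "string"},
    },
    "resources": {
        # Registers this member's address with the existing LBaaS pool.
        "member": {
            "type": "OS::Neutron::PoolMember",
            "properties": {
                "pool_id": {"get_param": "pool_id"},
                "address": {"get_param": "server_ip"},
                "protocol_port": 80,
            },
        },
    },
}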
From: Magesh GV [magesh...@oneconvergence.com]
Sent: Tuesday, October 21, 2014 12:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Combination of Heat
Images that are premade and ready to go would be a huge step in the right
direction.
You currently are expected to make them all yourself, which involves a lot of
work/knowledge.
It's great to be able to build them, but right out of the gate, they are too
much work for a new user.
Thanks,
guess ResourceGroup is
the only iterative construct in Heat. Is the use case supported today? I think
this is more than a simple usage question, hence posting it here. Thank you.
Regards
Subra
On Tue, Oct 21, 2014 at 8:55 AM, Fox, Kevin M
kevin@pnnl.gov wrote:
use
From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, October 28, 2014 11:34 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack
*SNIP*
I think we should actually just rip the git repos out of the images in
That would be very useful. It would eliminate a few more places where I've
needed the AWS Fn::If function.
It would be good to keep the get_ prefix for consistency.
I'd vote for a separate function. It's cleaner.
Thanks,
Kevin
From: Lee, Alexis
Sent: Wednesday,
Except it penalizes us bad spellers. ;)
Kevin
From: Clint Byrum
Sent: Wednesday, November 05, 2014 11:26:43 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Heat] New function: first_nonnull
Excerpts from Lee, Alexis's message of 2014-11-05 15:46:43 +0100:
Perhaps they are there to support older browsers?
Thanks,
Kevin
From: Matthias Runge [mru...@redhat.com]
Sent: Wednesday, November 19, 2014 12:27 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Horizon] the future of angularjs
Would net booting a minimal discovery image work? You usually can dump IPMI
network information from the host.
Thanks,
Kevin
From: Matthew Mosesohn [mmoses...@mirantis.com]
Sent: Wednesday, November 19, 2014 7:46 AM
To: OpenStack Development Mailing List
...@mirantis.com]
Sent: Wednesday, November 19, 2014 9:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bogdan Dobrelya
Subject: Re: [openstack-dev] [Fuel] Power management in Cobbler
On 19 Nov 2014, at 17:56, Fox, Kevin M kevin@pnnl.gov wrote:
Would net booting
I think it depends totally on whether you want trunk to be a distribution
mechanism or not. If you encourage people to 'just use trunk' for deployment,
then you better not break out-of-tree drivers on people. If you have a stable
release branch that you tell people to use, there is plenty of
How about this?
https://wiki.openstack.org/wiki/Monasca
Kevin
From: Dmitriy Shulyak
Sent: Friday, November 21, 2014 12:57:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel] fuel master monitoring
I have
Simply having a git repository does not imply that it's source.
In fact, if it's considered compiled (minified), I'm thinking the Debian rules
would prevent sourcing from it?
Thanks,
Kevin
From: Donald Stufft [don...@stufft.io]
Sent: Friday, November 21,
One of the selling points of tripleo is to reuse as much as possible from the
cloud, to make it easier to deploy. While monasca may be more complicated, if
it ends up being a component everyone learns, then it's not as bad as needing
to learn two different monitoring technologies. You could say
:36 PM, Fox, Kevin M
kevin@pnnl.gov wrote:
One of the selling points of tripleo is to reuse as much as possible from the
cloud, to make it easier to deploy. While monasca may be more complicated, if
it ends up being a component everyone learns, then it's not as bad
+1. Well said. I second the applauding of the Fuel development team for
changing their communication patterns (that's never easy) and also the desire
for closer integration with the rest of the OpenStack community.
From: Jay Pipes
Choosing the right instrument for the job in an open source community involves
choosing technologies that the community is familiar/comfortable with as well,
as it will allow you access to a greater pool of developers.
With that in mind then, I'd add:
Pro Pecan, blessed by the OpenStack
We've been interested in Ironic as a replacement for Cobbler for some of our
systems and have been kicking the tires a bit recently.
While initially I thought this thread was probably another "Fuel not playing
well with the community" kind of thing, I'm not thinking that any more. It's
deeper
No time frame on any of it.
Thanks,
Kevin
From: xianchaobo
Sent: Thursday, December 11, 2014 1:07:54 AM
To: openstack-dev@lists.openstack.org
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service
Hi, Fox Kevin M
Thanks
I was asking earlier this week about keystone resources on the IRC channel...
We're thinking about having a tenant per user on one of our clouds. We're using
neutron. So setting this up involves the following (a rough sketch in code
follows the list):
* Creating a User
* Creating a Tenant
* Assigning Roles
* Creating the tenant's default Private
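A rough sketch of scripting those steps with the python clients (the domain,
role, and network names are assumptions, and ks/neutron are assumed to be
already-authenticated clients):

# keystoneclient v3 + neutronclient; client construction elided.
from keystoneclient.v3 import client as ks_client
from neutronclient.v2_0 import client as neutron_client

def onboard(ks, neutron, username, password):
    tenant = ks.projects.create(name=username, domain="default")
    user = ks.users.create(name=username, password=password,
                           domain="default", default_project=tenant)
    member = ks.roles.find(name="_member_")
    ks.roles.grant(member, user=user, project=tenant)
    # The tenant's default private network.
    neutron.create_network(
        {"network": {"name": "private", "tenant_id": tenant.id}})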
But selecting from a list is harder than from a grid. A grid would give you
ample room for icons, which also makes finding what you're looking for easier.
Having a bit more space makes selecting the thing you want with a mouse (or
finger on a tablet) easier.
To make it not visually overloaded, you
There was a thread named "Wrong Status at the Dashboard - IceHouse Horizon" mid
last year.
Thread archived here:
http://www.gossamer-threads.com/lists/openstack/dev/38332
I seem to have hit the same issue using Juno.
I had some bad passwords on a hypervisor while I was figuring out the deploy
For reference, most of our kickstart scripts for storage bricks go by size. The
little disks are system disks to be assembled into a software raid. The big
ones are raid arrays to be preserved.
Kickstart's ability to let you run a shell script on the host to build the
partitioning instructions
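The size-based classification itself is easy to sketch (the 500 GB cutoff
between system disks and data arrays is an assumption here):

import os

SECTOR = 512          # /sys/block sizes are in 512-byte sectors
CUTOFF = 500 * 10**9  # assumed threshold, in bytes

def classify_disks():
    small, big = [], []
    for dev in sorted(os.listdir("/sys/block")):
        if not dev.startswith(("sd", "vd")):
            continue  # skip loop/ram/dm devices
        with open(f"/sys/block/{dev}/size") as f:
            size_bytes = int(f.read()) * SECTOR
        (small if size_bytes < CUTOFF else big).append(dev)
    return small, big

# small -> software raid for the OS; big -> preserved data arrays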
Also, can https://review.openstack.org/#/c/163647 be considered? All it does is
move code from the accepted nfs.py to posix.py as described in the approved
spec. This enables all POSIX-compliant file systems to be used, such as Lustre,
instead of just
It sounds like you want to be able to allocate and manage floating IPs out of a
neutron subnet and attach them to VMs in that same subnet? No router needed?
Sounds useful.
Would probably need different quotas, since they aren't public floating IPs.
Maybe floating IP quotas should be separated
I think the main disconnect comes from this:
Is NaaS a critical feature of the cloud, or not? nova-network asserts no. The
neutron team asserts yes, and neutron is being developed with that in mind
currently. This is a critical assertion that should be discussed.
With my app developer hat
The floating-IPs-only-on-external-networks thing has always been a little odd
to me...
Floating IPs are very important to ensure a user can switch out one instance
with another and keep 'state' consistent (the other piece being cinder
volumes). But why can't you do this on a provider network?
No, no. Most OpenStack deployments are neutron-based with OVS because it's the
default these days.
There have been all sorts of warnings to folks for years saying if you start
with nova-network, there will be pain for you later. Hopefully, that has scared
away most new folks from doing it. Most of
I'm not sure how to feel about this... It's clever...
It kind of feels like you're really trying to be able to register 'actions' in
heat so that heat users can poke the VMs to do something... for example,
perform a chef run.
While using stack updates listed below could be made to work, is that
We've run into issues with nova+neutron and keystone v3 with Juno.
Specifically this one:
https://bugs.launchpad.net/nova/+bug/1424462
But there may be others that I don't know about.
Thanks,
Kevin
From: Rich Megginson [rmegg...@redhat.com]
Sent: Friday, April
...@gmail.com]
Sent: Monday, April 20, 2015 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)
On Mon, Apr 20, 2015 at 12:07 PM, Fox, Kevin M
kevin@pnnl.gov wrote:
Another parallel