I think those are two different, complementary things.
One's metrics and the other is monitoring. You probably want both at the same
time.
Thanks,
Kevin
From: Steven Dake (stdake) [std...@cisco.com]
Sent: Friday, July 22, 2016 3:52 PM
To: OpenStack Development Mailing List (not for usage questions)
I think it's an interesting idea. If nothing else, it will show what it would be
like to have a split set of repos before it actually becomes a thing and can't
be undone.
Thanks,
Kevin
From: Dave Walker [em...@daviey.com]
Sent: Friday, July 22, 2016 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Zane, Thanks. You have managed to articulate my concern where I've failed so
far. +1 :)
Kevin
From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, July 21, 2016 3:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][all] Big tent
I like the cat idea. The app cat has a very nice ring to it. :)
From: Christopher Aedo [d...@aedo.net]
Sent: Thursday, July 21, 2016 10:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog] App Catalog
https://en.wikipedia.org/wiki/Platypus#/media/File:Wild_Platypus_4.jpg :)
Thanks,
Kevin
From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, July 20, 2016 2:32 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Mascot/logo for your project
From: James Bottomley
Sent: Wednesday, July 20, 2016 12:42:27 PM
To: OpenStack Development Mailing List (not for usage questions); Clint Byrum
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
On Wed, 2016-07-20 at 18:18 +0000, Fox, Kevin M wrote:
> I wish it were so simple. It's not.
>
> There is a good coding practice:
>
Sent: Wednesday, July 20, 2016 9:57 AM
To: OpenStack Development Mailing List (not for usage questions); Clint Byrum
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
On Wed, 2016-07-20 at 16:08 +0000, Fox, Kevin M wrote:
> +1 to the finding of a middle ground.
Thanks ... I have actually been an
We use ceph with cinder and glance. I don't see a reason not to.
We do not set nova to use it for anything but cinder volumes, though.
The reason being, if you set it up that way, your users have no way of opting
out of the potential performance hit of having no local storage for non-pet
workloads.
If you
I have a preference towards option 2 as well. I usually use templates with all
the logic in them, and an environment file with just the specific parameters
defined for launching an instance of the template, so I can repeatedly
deploy/delete/redeploy it.
I've got a good template set I think that wo
+1 to the finding of a middle ground.
The problem I've seen with your suggested open-source solution is that
OpenStack's current social monetary system makes it extremely difficult.
Each project currently prints its own currency: reviews. It takes quite a few
reviews (time/effort) on a project t
From: Zane Bitter [zbit...@redhat.com]
Sent: Tuesday, July 19, 2016 1:08 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
On 14/07/16 16:30, Fox, Kevin M wrote:
> I'm going to go ahead and ask the difficult question now as the
From: Ed Leafe [e...@leafe.com]
Sent: Monday, July 18, 2016 9:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
On Jul 18, 2016, at 12:39 PM, Fox, Kevin M wrote:
> I'm arguing the opposite. It should be a r
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
On 16 Jul 2016 1:27 PM, "Thomas Herve"
<the...@redhat.com> wrote:
>
> On Fri, Jul 15, 2016 at 8:36 PM, Fox, Kevin M
> <kevin@pnnl.gov> wrote:
> > Some specific things:
> >
> > Magnum
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
On Fri, Jul 15, 2016 at 8:36 PM, Fox, Kevin M wrote:
> Some specific things:
>
> Magnum trying to not use Barbican as it adds an additional dependency. See the
> discussion on the devel mailing list for details.
>
> Horizon discussions at the sum
an optional dependency, and I believe nobody
has proposed to remove Barbican entirely. I have no position on the big tent,
but using Magnum as an example of "projects are not working together" is
inappropriate.
Best regards,
Hongbin
> -Original Message-
> From: Fox, Kevin
Sent: Saturday, July 16, 2016 8:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)
> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: Friday, July 15, 2016 2:37 PM
> To
've said repeatedly and
people keep willfully ignoring, was *not* to "make the community more
inclusive". It was to replace the inconsistently-applied-by-the-TC
*subjective* criteria for project applications to OpenStack with an
*objective* list of application requirements that could be
I'm going to go ahead and ask the difficult question now as the answer is
relevant to the attached proposal...
Should we reconsider whether the big tent is the right approach going forward?
I think there have been some major downsides to the Big Tent approach, such as:
* Projects not working to
I fought for two weeks to figure out why one of my clouds didn't seem to want
to work properly. It was in fact one of those helpful souls you mention below
filtering out PMTU packets. I had to play with some rather gnarly iptables rules
to work around the issue: -j TCPMSS --clamp-mss-to-pmtu
So it
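For context on that clamp: the rule rewrites the MSS field in TCP SYNs so segments fit the path MTU even when the PMTU discovery ICMP messages are filtered. The arithmetic behind it is simple header subtraction; a minimal sketch (IPv4, no TCP options — my illustration, not from the original mail):

```python
def mss_for_mtu(mtu: int) -> int:
    """TCP MSS that fits a given MTU: subtract the 20-byte IPv4 header
    and the 20-byte TCP header (assuming no TCP options)."""
    return mtu - 40

# A standard 1500-byte Ethernet MTU gives the familiar 1460-byte MSS;
# a tunnel path with a smaller MTU needs the SYN's MSS clamped lower.
# mss_for_mtu(1500) == 1460
```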
exposing a keystone token to a JS client
On 07/01 at 19:41, Fox, Kevin M wrote:
> Hi David,
>
> How do you feel about the approach here:
> https://review.openstack.org/#/c/311189/
>
> It lets the existing Angular JS module:
> horizon.app.core.openstack-service-api.keystone
+1. I'd like to see a similar thing for Keystone user-token validation.
Thanks,
Kevin
From: Johannes Grassler
Sent: Monday, July 04, 2016 2:43:47 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum][heat] Global stack-list for Magnum service
I wrote the app-catalog-ui plugin. I was going to bring this up but hadn't
gotten to it yet. Thanks for bringing it up.
We do package it up in an rpm, so if it's installed with the rest of the
packages it should just work. The horizon compress/collect rpm hook does the
right thing already. It do
Hi David,
How do you feel about the approach here:
https://review.openstack.org/#/c/311189/
It lets the existing Angular JS module:
horizon.app.core.openstack-service-api.keystone
access the current token via getCurrentUserSession().token
Thanks,
Kevin
Ah. I was going to bring this up eventually but hadn't gotten to it yet.
I started up a patch for adding similar support for horizon here:
https://review.openstack.org/#/c/311189/
My intention is to use it to make a Horizon plugin that speaks to a
Keystone-authenticated Kubernetes API directly.
Th
+spec/volume-delete-on-terminate .
If neither is entirely correct, please feel free to submit a new one.
Thanks!
Peter
From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: June-28-16 7:10 PM
To: OpenStack Development Mailing List (not for usage questions); Ali Adil
Subject: Re: [openstack-dev] C
To me, one of the benefits of cinder is the ability to have the volume outlast
the vm. So, for example, if you knew a yum upgrade went bad on the vm, but the
db data is safe, it would be nice to be able to just delete the vm and have
trove relaunch using the existing volume, not having to import
Worse, if you use ironic, I think the configdrive is mapped to a partition at
provision time, so hot-plugging can't ever work in that scenario.
Thanks,
Kevin
From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, June 21, 2016 4:15 PM
To: openstack-dev
Subject: R
Some other topics:
* Rolling upgrades - For example, I haven't figured out a way to safely drain
an rpc based service rather than just shooting it and hoping things go well...
Is it safe? This safety code should be built into them all consistently.
* How to get events to users in a usable way.
+1
From: Clint Byrum [cl...@fewbar.com]
Sent: Monday, June 20, 2016 10:27 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] Proposal: Architecture Working Group
Excerpts from Doug Wiegley's message of 2016-06-20 10:40:56 -0600:
> So, it sounds like yo
In the meantime, the package list below is rather small.
The easiest thing might be to just add them all to the docker/base/Dockerfile.
Thanks,
Kevin
From: Ryan Hallisey [rhall...@redhat.com]
Sent: Monday, June 20, 2016 6:47 AM
To: OpenStack D
understand what each project is
about.
Regards,
-steve
From: "Fox, Kevin M" <kevin@pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Friday, June 17, 2016 at 9:11 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Um, why try and reimplement Kolla from scratch rather than use the existing
Kolla system and make it available via Fuel?
There is already a project to deploy OpenStack containers in Kubernetes:
http://docs.openstack.org/developer/kolla-kubernetes/
https://review.openstack.org/#/c/304182/
Let's wo
Some counter arguments for keeping them in:
* It gives the developers of the code being plugged into a better view of how
the plugin API is used and what might break if they change it.
* Vendors don't tend to support drivers forever. Often they drop support for a
driver once the "new" ha
I think there's a weird cycle in plugging into nova too...
Say the cloud has no native container support added except for Zun. The
container api Zun provides could be mapped to a Zun plugin that:
1. nova boot --image centos --user-data 'yum install -y docker; systemctl start
docker; '
2. starts
As an operator that has clouds that are partitioned into different host
aggregates with different flavors targeting them, I totally believe we will
have users that want to have a single k8s cluster span multiple different
flavor types. I'm sure once I deploy magnum, I will want it too. You could
Has anyone talked with the gnocchi folks? It seems like a good time to. :)
Thanks,
Kevin
From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, June 02, 2016 4:55 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Monasca] influxDB cluster
I've been holding off, but I'll chime in now.
I believe Higgins should be about abstracting away differences in the container
systems where there are needless differences the user couldn't care less about.
IE,
"here, launch this container" can be done:
* k8s pod create ...
* docker run ...
Hi Zane,
I've been working on the k8s side of the equation right now...
See these two PR's:
https://github.com/kubernetes/kubernetes/pull/25391
https://github.com/kubernetes/kubernetes/pull/25624
I'm still hopeful these can make k8s 1.3 as experimental plugins. There is
keystone username/passwo
Two issues I can see with that approach.
1. It needs to be incredibly well documented, with tools provided to update
state in etcd manually when an op needs to recover from things partially
working.
2. Consider the case where an op has an existing cloud. He/She installs k8s on
their exist
+1. very good discussion.
From: Sean Dague [s...@dague.net]
Sent: Wednesday, May 25, 2016 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"
I've been watching the threads, trying to diges
Frankly, this is one of the major negatives we've felt from the Big Tent idea...
OpenStack used to be more of a product than it is now. When there were common
problems to be solved, there was pressure applied to solve them in a way
everyone (OpenStack Project, OpenStack Users, and OpenStack Opera
OpenStack is more than the sum of its various pieces. Often features need to
cross more than one project. Cross-project work is already extremely hard
without having to change languages in between. Language changes should be made
very carefully and deliberately.
Thanks,
Kevin
+1 for using k8s to do work where possible.
-1 for trying to shoehorn a feature in so that k8s can deal with stuff it's not
ready to handle. We need to ensure operators have everything they need in order
to successfully operate their cloud.
The current upgrade stuff in k8s is focused around repl
Containers are popular right now for the same reason static linking is, but
they give you something halfway in between. Static linking is really hard to do
right. Even Go's "statically link all the things" is incomplete: if you run ldd
on a Go binary, it isn't actually static. Containers ensure
+1. OpenStack does meet the requirements for being an Operating System at the
data center level, in my opinion. We just keep trying to beat around that bush
for some reason... Yes, we do that by integrating a lot of things together, but
that's not what it's about.
I think I remember something about ResourceGroups having a way to delete a
specific member too. Might be worth double-checking.
Thanks,
Kevin
From: Cammann, Tom
Sent: Monday, May 16, 2016 2:28:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [ope
Sounds ok, but there needs to be a careful upgrade/migration path, where both
are supported until after all pods are migrated out of nodes that are in the
resourcegroup.
Thanks,
Kevin
From: Hongbin Lu
Sent: Sunday, May 15, 2016 3:49:39 PM
To: OpenStack Developme
What's the issue?
From: Dieterly, Deklan [deklan.diete...@hpe.com]
Sent: Friday, May 13, 2016 3:07 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Freezer] Replace Gnu Tar with DAR
Does anybody see any issues if Freezer used DAR instead of
Is there a copy-from-url method that's not deprecated yet?
The app catalog is still pointing users at the command line in v1 mode
Thanks,
Kevin
From: Matt Fischer [m...@mattfischer.com]
Sent: Thursday, May 12, 2016 4:43 PM
To: Flavio Percoco
Cc: openstack-dev@
Heat added a panel in horizon to describe all the resources they have. Not sure
you would want to do it the same way, but there is at least one other project
that ran into this issue and took a stab at solving it. Might be worth a look.
Thanks,
Kevin
From: Hongbi
/me puts on his Operator hat.
Operators do care about being able to debug issues with the stuff they have to
deploy/manage.
One of our complaints about Java (and probably Erlang... not sure about Go): in
addition to the standard C-like issues you hit, file descriptor limits, etc.,
the languag
Thomas, fully agree. :)
Rayson Ho, even with containers, distro packages are preferable. Its really
difficult at the moment to ensure your containers don't have security
vulnerabilities backed into them. None of the docker repo's I've seen really
help you with automating this. The only trick I'
Go static linking/portability is something of a misnomer too. Last I looked, Go
only statically linked Go code, so any system libraries used were still
dynamic. So it gets you somewhat to the use case of container-level portability,
but not all the way unless you can do literally everything in Go.
Subject: Re: [openstack-dev] [tc] supporting Go
On 09/05/2016 19:09, Fox, Kevin M wrote:
> I think you'll find that being able to embed a higher performance language
> inside python will be much easier to do for optimizing a function or two
> rather than deal with having a separate server have to be created
I think you'll find that being able to embed a higher performance language
inside python will be much easier to do for optimizing a function or two rather
than deal with having a separate server have to be created, authentication be
added between the two, and marshalling/unmarshalling the data t
Sent: Monday, May 09, 2016 1:53:06 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] Swift api compat. Was: supporting Go
Fox, Kevin M wrote:
> I think part of the problem with the whole Swift situation is that it does
> something most other OpenStack projects do
I was under the impression Bifrost was two things: one, an installer/configurator
of Ironic in a stand-alone mode, and two, a management tool for getting
machines deployed without needing Nova, using Ironic.
The first use case seems like it should just be handled by enhancing Kolla's
ironic contai
Another option: should the install playbook be enhanced to support simply
skipping the steps that wouldn't apply when building in the container?
Seems to me, all the ironic stuff could just be done with the kolla ironic
container, so no systemd stuff should be needed.
Thanks,
Kevin
Another few related things:
* We tend to build our images in VMs, so if libguestfs has to spawn a VM nested
in that VM, it may have some performance issues. DIB doesn't have that
problem.
* We used to build our centos 6 images without grub a while back, but the
first yum upgrade that upgr
Incorrect. It's been mostly good enough, sort of, with a bunch of issues that
have plagued us.
Thanks,
Kevin
From: Pete Zaitcev [zait...@redhat.com]
Sent: Thursday, May 05, 2016 9:38 AM
To: Fox, Kevin M
Cc: OpenStack Development Mailing List (not for usage questi
From: Monty Taylor [mord...@inaugust.com]
Sent: Wednesday, May 04, 2016 12:47 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] supporting Go
On 05/04/2016 01:31 PM, Fox, Kevin M wrote:
> Explicitly, no. I agree. Implicitly... ? Why bother to even propose join
on doesn't mean exclusion hasn't
happened.
Thanks,
Kevin
From: Pete Zaitcev [zait...@redhat.com]
Sent: Wednesday, May 04, 2016 10:40 AM
To: Fox, Kevin M
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-de
+1. Thanks for starting the discussion.
We may need a spec for it. The support in k8s currently is marked experimental
(thankfully) and is insufficient. It currently only supports username/password
authentication via keystone v2. There's an issue in the works to fix it in k8s:
https://github.com
Shinobu Kinjo wrote:
> Could we kindly stop discussing radosgw at this moment?
>
> On Wed, May 4, 2016 at 6:47 PM, Thierry Carrez wrote:
>> Fox, Kevin M wrote:
>>>
>>> RadosGW has been excluded from joining the OpenStack community in part due
>>> to its use
RadosGW has been excluded from joining the OpenStack community in part due to
its use of C++. Now that we're talking about alternate languages, that may be
on the table now?
Thanks,
Kevin
From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, May 03,
If we let Go in, and there is no pluggable middleware, where do RadosGW and
other Swift-API-compatible implementations then stand? Should we bless C++ too?
As I understand it, there are a lot of clouds deployed with RadosGW, but
RefStack rejects them.
Thanks,
Kevin
+1 to one set of containers for all. If kolla-k8s needs tweaks to the ABI, the
request should go to the Kolla core team (involving everyone) to discuss why
they are needed/reasonable. This should be done regardless of whether there are
1 or 2 repos in the end.
Thanks,
Kevin
One thing we didn't talk about too much at the summit is the part of the spec
that says we will reuse a bunch of ansible stuff to generate configs for the
k8s case...
Do we believe that code would be minimal and not impact separate repos much, or
is the majority of the work in the end going to
I really hadn't thought about distro deps for non-containers leaking into
containers, like nova->libvirt. What about making a kolla-container rpm that
provides a virtual libvirt package and anything else not actually needed in the
container?
Thanks,
Kevin
From: S
Are there any plans in kolla-kubernetes to integrate with the package managers
that are starting to spring up?
https://github.com/kubespray/kpm or https://github.com/helm/helm? It would
probably be better to reuse at least one of them than try and reinvent it, if
possible.
(tangential question. is ther
Barbican/OSSP mid-cycle was that most people were interested in the
push model.
[1]
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
On 4/22/16 4:03 PM, Fox, Kevin M wrote:
> Can you please give a little more detail on what it's about?
>
> Does th
Can you please give a little more detail on what it's about?
Does this have any overlap with the instance user session:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485
Thanks,
Kevin
From: Rob C [hyaku...@gmail.com]
Sent: Friday, April 22,
Trove does that for you through a single set of APIs, because today's
datacenters have a wide diversity of databases. Hope that helps.
> Trove and Sahara operators on the value vs. customer confusion or operator
> overhead they get from those LCDs if they are required parts of the
> se
Kevin
From: Monty Taylor [mord...@inaugust.com]
Sent: Thursday, April 21, 2016 1:43 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs
I believe you just described Murano.
On 04/21/2016 03:31 PM, Fox, Kevin M
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: April-21-16 4:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> > Here
http://lists.openstack.org/pipermail/openstack-dev/2016-April/091982.html
> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Thursday, April 21, 2016 4:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs
I thought this was also what the goal of https://cncf.io/ was starting
to be? Maybe too early to tell if that standardization will be a real
outcome vs just an imagined outcome :-P
-Josh
Fox, Kevin M wrote:
>
ontinue that conversation.
Thanks,
Kevin
From: Monty Taylor [mord...@inaugust.com]
Sent: Thursday, April 21, 2016 1:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs
O
would be best served as a
different effort. Magnum would be best focused on bay interactions and
should not try to pick a COE winner or require an operator to run a
lowest-common-denominator API abstraction.
Thanks for listening to my soap-box.
-Keith
On 4/21/16, 2:36 PM, "Fox, Kevin M" wrote:
Here's where we disagree.
You're speaking for everyone in the world now, and all you need is one
counterexample. I'll be that guy. Me. I want a common abstraction for some
common LCD stuff.
Both Sahara and Trove have LCD abstractions for very common things. Magnum
should too.
You are falsely a
I agree with that, and that's why providing some bare-minimum abstraction will
help users not have to choose a COE themselves. If we can't decide, why can
they? If all they want to do is launch a container, they should be able to
script up "magnum launch-container foo/bar:latest" and get one.
The COEs have a pressure not to standardize their APIs with competing COEs. If
you can lock a user into your API, then they can't go to your competitor.
The standard api really needs to come from those invested in not being locked
in. OpenStack's been largely about that since the beginning
+1.
From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, April 21, 2016 7:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs
> -Origi
It's new enough that people haven't thought to ask until recently. The recent
interest in the topic is due to Magnum getting mature enough that folks are
starting to deploy it and finding out it doesn't solve a bunch of issues they
had thought it would. It's pretty natural. Don't just blow it
I think Magnum is much closer to Sahara or Trove in its workings. Heat's
orchestration; that's what the COE does.
Sahara has plugins to deploy various Hadoop-like clusters, gets them assembled
into something useful, and has a few abstraction APIs like "submit a job to the
deployed h
+1 to plugins. They have suited nova/trove/sahara/etc. well.
Thanks,
Kevin
From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, April 20, 2016 3:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]
I'll go ahead and be the guy to ask for N flavors. :)
AZs are kind of restrictive in what they can do, so we usually use flavors,
which are much more flexible.
I can totally see a project with 3 different types of flavors wanting them all
in the same k8s cluster, managed by labels.
Thanks
you think it is a bad idea, I would love to hear your inputs as well:
>> * Why is it bad?
>> * If there is no common abstraction, how do we address the pain of
>> leveraging native COE APIs as reported below?
>>
>> [1]
>> https://
20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Fox, Kevin M
Subject: Re: [openstack-dev] [Magnum]Cache docker images
Kevin,
I agree this is not an ideal solution, but it's probably the best option to deal
with public cloud "stability" (e.g. we switched to t
Thomas,
I normally side with the distros' take on making sure there is no duplication,
but I think Thierry's point comes from two differences coming up that
traditional distros don't tend to account for.
The traditional distro is usually about a single machine: being able to install
all t
I've got scripts that use nova floating-ip subcommands to attach/detach
floating IPs occasionally, because they were easier to write than the
equivalent neutron commands, even though I'm using neutron. I'd think some
folks will be doing the same.
That being said, we'll have to rewrite all that code fo
As an Op, I've been bitten by both sides of this.
I've seen dib updated and it broke things.
I've seen dib elements updated and things broke (the centos6 removal in
particular hurt).
I've appreciated dib elements getting fixed quickly at times because distros
changed, and the element needed change
I'm kind of uncomfortable, as an op, with the prebundled stuff. How do you
upgrade things when needed if there is no way to pull updated images from a
central place?
Thanks,
Kevin
From: Hongbin Lu [hongbin...@huawei.com]
Sent: Tuesday, April 19, 2016 11:56 AM
To: O
is another approach. I think Magnum can offer both.
[1] https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
Best regards,
Hongbin
From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: April-19-16 1:12 PM
To: OpenStack Development Mailing List (not for usage questions)
What about a plugin-cache-like feature? If there's no cache, or the cache is
older than 24 hours, dump all discovered plugins into a Python file that loads
them more statically. If the cache is valid, just use it. Add a hook in
rpms/debs to remove the cache on plugin install/uninstall.
Thanks,
Kevin
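A minimal sketch of that cache idea, with illustrative names throughout (the cache path, the JSON format, and the scan callable standing in for the real plugin discovery are all my inventions, not an existing OpenStack API):

```python
import json
import os
import time

CACHE_PATH = "/tmp/plugin_cache.json"  # assumption: real code would use /var/cache
MAX_AGE = 24 * 3600                    # regenerate if the cache is older than 24h

def discovered_plugins(scan, cache_path=CACHE_PATH, max_age=MAX_AGE):
    """Return the plugin map, using the cache file when it is fresh.

    scan is a callable returning {plugin_name: "module:attr"}; it stands in
    for whatever slow entry-point discovery the loader normally does.
    """
    try:
        if time.time() - os.path.getmtime(cache_path) < max_age:
            with open(cache_path) as f:
                return json.load(f)
    except OSError:
        pass  # no cache yet (or unreadable): fall through and rescan
    plugins = scan()
    with open(cache_path, "w") as f:
        json.dump(plugins, f)  # the rpm/deb hook would simply delete this file
    return plugins
```

The package-manager hook the message mentions maps to deleting the cache file on plugin install/uninstall, forcing a rescan on the next load.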
Why not just allow a prefix to be added to the container name?
You can then have a container named:
foo/mycontainer
and the prefix could be set to mylocalserver.org:8080:
mylocalserver.org:8080/foo/mycontainer
Then if the site needs local-only containers, they can set up a local repo. Be
it a s
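The prefix scheme described above can be sketched as a pure function (the names and default here are illustrative only, not Kolla's actual implementation):

```python
def qualify_image(name: str, prefix: str = "") -> str:
    """Prepend a registry prefix (e.g. 'mylocalserver.org:8080') to a
    container name; with no prefix configured, the name is unchanged."""
    return f"{prefix.rstrip('/')}/{name}" if prefix else name

# qualify_image("foo/mycontainer", "mylocalserver.org:8080")
#   -> "mylocalserver.org:8080/foo/mycontainer"
```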
It could also be more difficult for ops to debug if it autocreates things on a
cluster when config management is broken and you don't realize it, and you see
keys that were created but are totally wrong for the cluster.
Though maybe there should be a generic ha=True option in keystone to override
tha
I'd love to attend, but this is right on top of the app catalog meeting. I
think the app catalog might be one of the primary users of a cross-COE API.
At minimum we'd like to be able to store URLs for Kubernetes/Swarm/Mesos
templates and have an API to kick off a workflow in Horizon