On 12/06/2016 04:00 PM, melanie witt wrote:
On Tue, 6 Dec 2016 15:42:18 -0500, Jay Pipes wrote:
On 12/06/2016 03:28 PM, Ed Leafe wrote:
On Dec 6, 2016, at 2:16 PM, Jay Pipes wrote:
I would prefer:
GET /resource_providers?resources=DISK_GB:40,VCPU:2,MEMORY_MB:2048
to "group" the
On 12/06/2016 03:28 PM, Ed Leafe wrote:
On Dec 6, 2016, at 2:16 PM, Jay Pipes wrote:
I would prefer:
GET /resource_providers?resources=DISK_GB:40,VCPU:2,MEMORY_MB:2048
to "group" the resources parameter together. When we add in trait lookups,
we're going to want a way to cl
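The `resources` query format quoted above packs several resource-class:amount pairs into one parameter. A minimal sketch of parsing it (a hypothetical helper for illustration, not the actual placement API code):

```python
from urllib.parse import urlparse, parse_qs

def parse_resources(url):
    """Parse a ?resources=CLASS:AMOUNT,... query string into a dict."""
    qs = parse_qs(urlparse(url).query)
    resources = {}
    # parse_qs splits on '&'; the comma-separated pairs stay in one value.
    for pair in qs.get("resources", [""])[0].split(","):
        if not pair:
            continue
        rc, _, amount = pair.partition(":")
        resources[rc] = int(amount)
    return resources

url = "/resource_providers?resources=DISK_GB:40,VCPU:2,MEMORY_MB:2048"
print(parse_resources(url))
# → {'DISK_GB': 40, 'VCPU': 2, 'MEMORY_MB': 2048}
```

Grouping the pairs in one parameter, as proposed, keeps the whole resource request in a single value that later filters (e.g. traits) can sit alongside.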
On 12/06/2016 02:02 PM, Ed Leafe wrote:
On Dec 6, 2016, at 9:56 AM, Chris Dent wrote:
* There is unresolved debate about the structure of the request being
made to the API. Is it POST or a GET, does it have a body or use
query strings? The plan is to resolve this discussion in the review
of
On 12/05/2016 10:59 AM, Bence Romsics wrote:
Hi,
I measured how the new trunk API scales with lots of subports. You can
find the results here:
https://wiki.openstack.org/wiki/Neutron_Trunk_API_Performance_and_Scaling
Hope you find it useful. There are several open ends, let me know if
you're i
On Dec 2, 2016 5:21 PM, "Matt Riedemann" wrote:
On 12/2/2016 12:04 PM, Chris Dent wrote:
>
>
> Latest news on what's going on with resource providers and the
> placement API. I've made some adjustments in the structure of this
> since last time[0]. The new structure tries to put the stuff we nee
Thanks for the update, Chris, appreciated. No comments from me other
than to say thanks :)
On 12/02/2016 01:04 PM, Chris Dent wrote:
Latest news on what's going on with resource providers and the
placement API. I've made some adjustments in the structure of this
since last time[0]. The new st
Ironic colleagues, heads up, please read the below fully! I'd like your
feedback on a couple outstanding questions.
tl;dr
-
Work for custom resource classes has been proceeding well this cycle,
and we're at a point where reviews from the Ironic community and
functional testing of a series
+1
On 12/02/2016 10:22 AM, Matt Riedemann wrote:
I'm proposing that we add Stephen Finucane to the nova-core team.
Stephen has been involved with nova for at least around a year now,
maybe longer, my ability to tell time in nova has gotten fuzzy over the
years. Regardless, he's always been eager
Just following up on this. All of the above are now approved and working
their way through the gate. Big thanks to Sean Dague, Feodor Tersin, and
Melanie Witt for review assistance over the last couple weeks. Also
thanks to Chris Dent for reviewing the bulk of the series today as well
and leavi
On 11/28/2016 01:33 PM, Doug Hellmann wrote:
I'm raising this as an issue because it's not just a hypothetical
problem. The Cisco networking driver team, having been removed from
the Neutron stadium, is asking for status as a separate official
team [1]. I would very much like to find a way to say
On 11/27/2016 10:27 PM, Zhenguo Niu wrote:
hi Jay,
Ironic's existing API is admin-only and should not be exposed to end
users; it is best thought of as a bare metal hypervisor API.
And Ironic lacks scheduling, quota management and multi-tenancy support;
it heavily depends on N
On 11/25/2016 04:41 AM, Zhenguo Niu wrote:
hi all,
We are pleased to introduce Nimble, a new OpenStack project which aims
to provide bare metal computing management.
Compared with Nova, it's more bare metal specific and with more advanced
features that VM users don't need, it's not
bounded by No
On 11/23/2016 10:53 AM, Matt Riedemann wrote:
By the time the nova meeting rolls around I'll be crying because of (1)
my beloved MN Vikings losing to the lowly Lions and (2) eating too much
and feeling ashamed.
At least with (1) I'm not Jay who is a Browns fan. They are the worst. :)
They are
On 11/22/2016 11:39 AM, Andrew Laski wrote:
I should have sent this weeks ago but I'm a bad person who forgets
common courtesy. My employment situation has changed in a way that does
not afford me the necessary time to remain a regular contributor to
Nova, or the broader OpenStack community. So i
On 11/21/2016 07:43 AM, Andreas Jaeger wrote:
On 2016-11-21 13:24, Steven Dake (stdake) wrote:
Pete,
The main problem with that is publishing docs to docs.oo which would
then confuse the reader even more than they are already confused by
reading our docs ;)
Regards
-steve
From: Pete Birley m
On 11/17/2016 10:15 AM, Matt Riedemann wrote:
I just wanted to say thanks to everyone reviewing specs this week. I've
seen a lot of non-core newer people to the specs review process chipping
in and helping to review a lot of the specs we're trying to get approved
for Ocata. It can be hard to grin
On 11/14/2016 10:22 PM, Akira Yoshiyama wrote:
2016-11-14 2:19 GMT+09:00 Jay Pipes <jaypi...@gmail.com>:
On 11/13/2016 01:52 AM, Akira Yoshiyama wrote:
No. "physical storages" means storage products like EMC VNX, NetApp
Data ONTAP, HPE Lefthand and so on.
Say there
Awesome start, Monty :) Comments inline.
On 11/15/2016 09:56 AM, Monty Taylor wrote:
Hey everybody!
At this past OpenStack Summit the results of the Interop Challenge were
shown on stage. It was pretty awesome - 17 different people from 17
different clouds ran the same workload. And it worked!
On 11/15/2016 10:09 AM, Matt Riedemann wrote:
I've got a docs patch that failed both nova-net and neutron vmware third
party CI:
https://review.openstack.org/#/c/397382/
The nova-net one failed some aggregates response validation:
http://208.91.1.172/logs/ext-nova-dsvm/397382/2/2074/tempest_re
On 11/13/2016 01:52 AM, Akira Yoshiyama wrote:
Hi Jay,
2016-11-13 3:12 GMT+09:00 Jay Pipes :
On 11/12/2016 09:31 AM, Akira Yoshiyama wrote:
Hi Stackers,
In TripleO, Ironic provides physical servers for an OpenStack
deployment but we have to configure physical storages manually, or
with any
On 11/12/2016 09:31 AM, Akira Yoshiyama wrote:
Hi Stackers,
In TripleO, Ironic provides physical servers for an OpenStack
deployment but we have to configure physical storages manually, or
with any tool, if required. It's better that an OpenStack service
manages physical storages as same as Iron
On 11/08/2016 04:29 PM, Thierry Carrez wrote:
Hi everyone,
The registration is now open for the first OpenStack Project Teams
Gathering event (which will take place in Atlanta, GA the week of
February 20, 2017). This event is geared toward existing upstream team
members, and will provide a venue
On 11/11/2016 03:19 PM, Matt Riedemann wrote:
On 11/11/2016 12:28 PM, Jay Pipes wrote:
Matt, thanks much for your excellent summary of the resource provider
sessions in Barcelona. A couple minor corrections noted below.
On 11/02/2016 01:54 PM, Matt Riedemann wrote:
- Custom resource classes
Matt, thanks much for your excellent summary of the resource provider
sessions in Barcelona. A couple minor corrections noted below.
On 11/02/2016 01:54 PM, Matt Riedemann wrote:
- Custom resource classes
The code for this is moving along and being reviewed. There will be
namespaces on the sta
On Nov 8, 2016 1:13 PM, "Matt Riedemann" wrote:
>
> On 11/8/2016 11:39 AM, Dan Smith wrote:
>>>
>>> I do imagine, however, that most folks who have been working
>>> on nova for long enough have a list of domain experts in their heads
>>> already. Would actually putting that on paper really hurt?
>
On 11/05/2016 01:15 AM, Steve Martinelli wrote:
The keystone team has a new spec being proposed for the Ocata release,
it essentially boils down to adding properties / metadata for projects
(for now) [1].
Yes, I'd seen that particular spec review and found it interesting in a
couple ways.
W
On 11/01/2016 10:14 AM, Alex Xu wrote:
Currently we only update the resource usage with Placement API in the
instance claim and the available resource update periodic task. But
there is no claim for migration with placement API yet. This works is
tracked by https://bugs.launchpad.net/nova/+bug/16
On 11/01/2016 10:41 AM, Matthew Booth wrote:
For context, this is a speculative: should we, shouldn't we?
The VMware driver currently allows the user to specify what storage
their instance will use, I believe using a flavor extra-spec. We've also
got significant interest in adding the same to th
On 10/20/2016 05:07 AM, Joshua Harlow wrote:
Matt Riedemann wrote:
There are a lot of specs up for review in ocata related to adding new
versioned notifications for operations that we didn't have notifications
on before, like CRUD operations on resources like flavors and server
groups.
We've go
On 10/20/2016 05:08 AM, Joshua Harlow wrote:
Matt Riedemann wrote:
There are a lot of specs up for review in ocata related to adding new
versioned notifications for operations that we didn't have notifications
on before, like CRUD operations on resources like flavors and server
groups.
We've go
On 10/19/2016 05:32 PM, Brian Curtin wrote:
I'm currently facing what looks more and more like an impossible
problem in determining the root of each service on a given cloud. It
is apparently a free-for-all in how endpoints can be structured, and I
think we're out of ways to approach it that catc
On 10/17/2016 11:14 PM, Ed Leafe wrote:
Now that we’re starting to model some more complex resources, it seems that
some of the original design decisions may have been mistaken. One approach to
work around this is to create multiple levels of resource providers. While that
works, it is unneces
Alex, so sorry for the long delayed response! :( This just crept to
the back of my inbox unfortunately. Answer inline...
On 09/14/2016 07:24 PM, Bashmakov, Alexander wrote:
Glance and Keystone do not participate in a rolling upgrade,
because Keystone and Glance do not have a distributed componen
On 10/13/2016 04:02 AM, Zhenyu Zheng wrote:
Hi All,
We would like to propose FusionCompute driver to become an official Nova
driver.
FusionCompute is compute virtualization software developed by
Huawei, which can provide tuned high performance and high reliability
in VM instance provisio
On 10/07/2016 09:43 AM, Bence Romsics wrote:
Hi,
To follow up on the complications of bringing up trunk subports [1] I
have written up a small proposal for a tiny new feature affecting
neutron and nova. That is how to expose trunk details over metadata
API. To avoid big process overhead I have o
On 10/06/2016 11:58 AM, Naveen Joy (najoy) wrote:
It’s primarily because we have seen better stability and scalability
with etcd over rabbitmq.
Well, that's kind of comparing apples to oranges. :)
One is a distributed k/v store. The other is a message queue broker.
The way that we (IMHO) over
On 10/06/2016 11:39 AM, Neil Jerram wrote:
On Thu, Oct 6, 2016 at 3:44 PM Jay Pipes <jaypi...@gmail.com> wrote:
On 10/06/2016 03:46 AM, Jerome Tollet (jtollet) wrote:
> Hey Kevin,
>
> Thanks for your interest in this project.
>
> We found et
On 10/06/2016 11:43 AM, Jeremy Stanley wrote:
On 2016-10-06 10:30:30 -0500 (-0500), Kevin L. Mitchell wrote:
[...]
Problem with that is that ':' is a valid character within an ISO date,
though I do like the 'between' prefix. Now, '/' can be used if it's URL
encoded, but I agree that that is non
Great question! My opinion inline...
On 10/06/2016 08:56 AM, milanisko k wrote:
Dear API WG,
We've got ourselves into a bike-shedding situation[1] w/r/t the WG
filtering suggestion[2]. What we find difficult is expressing
ISO8601 time intervals[3][4] in accordance with the filtering suggestion.
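The collision discussed in this thread is that an ISO8601 interval uses '/' between its start and end and ':' inside each timestamp, both of which clash with common `key:value` filter syntaxes; percent-encoding the whole interval, as suggested, sidesteps that. A small sketch (illustrative only, not any project's actual filter code):

```python
from urllib.parse import quote, unquote

# An ISO8601 interval: start and end separated by '/', with ':' inside
# each timestamp. Both characters conflict with filter delimiters.
interval = "2016-10-01T00:00:00/2016-10-06T12:00:00"

# safe="" forces '/' (normally left alone) to be encoded as well.
encoded = quote(interval, safe="")
print(encoded)
# → 2016-10-01T00%3A00%3A00%2F2016-10-06T12%3A00%3A00

# The server decodes it back losslessly.
assert unquote(encoded) == interval
```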
On 10/06/2016 03:46 AM, Jerome Tollet (jtollet) wrote:
Hey Kevin,
Thanks for your interest in this project.
We found etcd very convenient to store desired states as well as
operational states. It made the design easy to provide production grade
features (e.g. agent restart, mechanical driver re
On 10/04/2016 11:53 AM, Hague, Darren wrote:
On Tue, 4 Oct 2016, Chris Dent wrote:
On Tue, 4 Oct 2016, Julien Danjou wrote:
Considering the split of Ceilometer in subprojects (Aodh and Panko)
during those last cycles, and the increasing usage of Gnocchi, I am
starting to wonder if it makes sen
On 09/23/2016 05:07 PM, Sylvain Bauza wrote:
Le 23/09/2016 18:41, Jay Pipes a écrit :
5. Nested resource providers
Things like SR-IOV PCI devices are actually resource providers that
are embedded within another resource provider (the compute node
itself). In order to tag things like SR-IOV PFs
John, appreciate your candor and candidacy. Some questions inline for you...
On 09/26/2016 06:57 AM, John Davidge wrote:
Last year, the TC moved OpenStack away from the Integrated Release, and
into The Big Tent. This removed the separation between those projects
considered integral to OpenStack,
On 09/23/2016 01:04 PM, Steven Dake (stdake) wrote:
I also fail to see how training the Fuel team with the choices Kolla
has made in implementation puts OpenStack first.
Sorry, could you elaborate on what exactly you mean above? What do you
mean by "training the Fuel team with the choices Koll
Hi Stackers,
In Newton, we had a major goal of having Nova sending inventory and
allocation records from the nova-compute daemon to the new placement API
service over HTTP (i.e. not RPC). I'm happy to say we achieved this
goal. We had a stretch goal from the mid-cycle of implementing the
cust
On 09/21/2016 07:43 PM, Matt Riedemann wrote:
The s3 image configuration options were deprecated for removal in newton
[1].
Clint has a patch up to remove the boto dependency from nova [2] which
is only used in the nova.image.s3 module.
Rather than remove the boto dependency, I think we should
On 09/01/2016 05:29 AM, Henry Nash wrote:
So as the person who drove the rolling upgrade requirements into
keystone in this cycle (because we have real customers that need it),
and having first written the keystone upgrade process to be
“versioned object ready” (because I assumed we would do this
On 09/13/2016 08:23 PM, Terry Wilson wrote:
On Tue, Sep 13, 2016 at 6:31 PM, Jay Pipes wrote:
On 09/13/2016 01:40 PM, Terry Wilson wrote:
On Thu, Jun 11, 2015 at 8:33 AM, Sean Dague wrote:
On 06/11/2015 09:02 AM, Jay Pipes wrote:
On 06/11/2015 01:16 AM, Robert Collins wrote:
But again
On 09/13/2016 01:40 PM, Terry Wilson wrote:
On Thu, Jun 11, 2015 at 8:33 AM, Sean Dague wrote:
On 06/11/2015 09:02 AM, Jay Pipes wrote:
On 06/11/2015 01:16 AM, Robert Collins wrote:
But again - where in OpenStack does this matter the slightest?
Precisely. I can't think of a single
On 09/09/2016 02:10 PM, Doug Hellmann wrote:
Excerpts from Jay Pipes's message of 2016-09-09 13:03:42 -0400:
My vote is definitely for something #2-like, as I've said before and on
the review, I believe OpenStack should be a "cloud toolkit" composed of
well-scoped and limited services in the vei
On 09/09/2016 06:22 AM, John Davidge wrote:
Thierry Carrez wrote:
[...]
In the last years there were a lot of "questions" asked by random
contributors, especially around the "One OpenStack" principle (which
seems to fuel most of the reaction here). Remarks like "we should really
decide once and
On 08/29/2016 12:40 PM, Matt Riedemann wrote:
I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done thi
On 08/31/2016 01:57 AM, Bogdan Dobrelya wrote:
I agree that the RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or maybe it just has to be aband
On 08/29/2016 12:40 PM, Matt Riedemann wrote:
I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done thi
On 08/29/2016 05:53 AM, Andrew Laski wrote:
Personally I believe the cat is out of the bag on bdms overriding
flavors and we should just commit to that path and make it work well.
And for deployers who rely on flavors being the source of truth maybe we
provide them a policy check or some other me
On 08/29/2016 05:11 AM, Sylvain Bauza wrote:
Le 29/08/2016 13:25, Jay Pipes a écrit :
On 08/26/2016 09:20 AM, Ed Leafe wrote:
On Aug 25, 2016, at 3:19 PM, Andrew Laski wrote:
One other thing to note is that while a flavor constrains how much
local
disk is used it does not constrain volume
On 08/26/2016 09:20 AM, Ed Leafe wrote:
On Aug 25, 2016, at 3:19 PM, Andrew Laski wrote:
One other thing to note is that while a flavor constrains how much local
disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides b
On 08/27/2016 11:16 AM, HU, BIN wrote:
The challenge in OpenStack is how to enable the innovation built on top of
OpenStack.
No, that's not the challenge for OpenStack.
That's like saying the challenge for gasoline is how to enable the
innovation of a jet engine.
So telco use cases is not
On 08/28/2016 09:02 PM, joehuang wrote:
Hello, Bin,
Understand your expectation. In Tricircle big-tent application:
https://review.openstack.org/#/c/338796/, a proposal was also given to add
plugin mechanism in Nova/Cinder API layer, just like Neutron supports a plugin
mechanism in API layer, tha
On 08/25/2016 06:38 PM, joehuang wrote:
Hello, Ed,
Just as Peter mentioned, "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we
will have compute highly distributed around the network (from thousands to millions of sites)
". vCPE is only one use case, but not all. And the har
On 08/25/2016 11:08 AM, Thierry Carrez wrote:
Jay Pipes wrote:
[...]
How is vCPE a *cloud* use case?
From what I understand, the v[E]CPE use case is essentially that Telcos
want to have the set-top boxen/routers that are running cable television
apps (i.e. AT&T U-verse or Verizon FiOS-
On 08/24/2016 04:26 AM, Peter Willis wrote:
Colleagues,
I'd like to confirm that scalability and multi-site operations are key
to BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we
will have compute highly distributed around the network (from thousands
to millions of sites). BT w
On 08/21/2016 03:24 PM, Fawaz Mohammed wrote:
I believe utilizing a host aggregate is better than an availability zone in
this case.
Users don't know anything about host aggregates. They are a
cloud-admin-only way of grouping like compute resources together and the
end user doesn't have any way of
Roger that.
On 08/18/2016 11:48 AM, Matt Riedemann wrote:
We have a lot of open changes for the centralize / cleanup config option
work:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options-newton
We said at the midcycle we'd all
On 08/16/2016 04:58 AM, Vladimir Kozhukalov wrote:
Dear colleagues,
We finally have working custom deployment job that deploys Fuel admin
node using online RPM repositories (not ISO) on vanilla Centos 7.0.
Bravo! :)
Currently all Fuel system and deployment tests use ISO and we are
planning t
On 08/15/2016 03:57 AM, Alex Xu wrote:
2016-08-15 12:56 GMT+08:00 Yingxin Cheng <yingxinch...@gmail.com>:
Hi,
I'm concerned with the dependencies between "os-capabilities"
library and all the other OpenStack services such as Nova,
Placement, Ironic, etc.
Rather than
On 08/15/2016 12:56 AM, Yingxin Cheng wrote:
Hi,
I'm concerned with the dependencies between "os-capabilities" library
and all the other OpenStack services such as Nova, Placement, Ironic, etc.
Rather than embedding the universal "os-capabilities" in Nova, Cinder,
Glance, Ironic services that w
On 08/15/2016 03:47 PM, Joshua Harlow wrote:
I've been experimenting/investigating/playing around with the 'new'
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for those
who don't know what this is) and it got me thinking that there are
probably X other people/groups/companies
On 08/15/2016 01:19 PM, Joshua Harlow wrote:
Hi folks,
I've been experimenting/investigating/playing around with the 'new'
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for those
who don't know what this is) and it got me thinking that there are
probably X other people/groups/co
On 08/15/2016 12:01 PM, Doug Hellmann wrote:
Excerpts from Jay Pipes's message of 2016-08-15 10:33:49 -0400:
On 08/15/2016 09:27 AM, Andrew Laski wrote:
Currently in Nova we're discussion adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hos
On 08/15/2016 10:50 AM, Dean Troyer wrote:
On Mon, Aug 15, 2016 at 9:33 AM, Jay Pipes <jaypi...@gmail.com> wrote:
On 08/15/2016 09:27 AM, Andrew Laski wrote:
After some thought, I think I've changed my mind on referring to
the adjectives as "c
On 08/15/2016 09:27 AM, Andrew Laski wrote:
Currently in Nova we're discussion adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hosts
expose "capabilities" for use by the scheduler. As much fun as it would
be to have the same term mean two ve
Word up, Travis :) A few comments inline, but overall I'm looking
forward to collaborating with you, Steve, and all the other Searchlight
contributors on os-capabilities (or os-caps as Sean wants to rename it ;)
On 08/11/2016 08:00 PM, Tripp, Travis S wrote:
[Graffit] was originally co-sponsor
On 08/12/2016 04:05 AM, Daniel P. Berrange wrote:
On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
Hi Novas and anyone interested in how to represent capabilities in a
consistent fashion.
I spent an hour creating a new os-capabilities Python library this evening:
http://github.com
On 08/12/2016 07:49 AM, Jim Rollenhagen wrote:
On Thu, Aug 11, 2016 at 9:03 PM, Jay Pipes wrote:
On 08/11/2016 05:46 PM, Clay Gerrard wrote:
On Thu, Aug 11, 2016 at 2:25 PM, Ed Leafe <e...@leafe.com> wrote:
Overall this looks good, although it seems a bit odd t
On 08/11/2016 05:50 PM, John Dickinson wrote:
On 3 Aug 2016, at 16:47, Jay Pipes wrote:
Hi Novas and anyone interested in how to represent capabilities in
a consistent fashion.
I spent an hour creating a new os-capabilities Python library this
evening:
http://github.com/jaypipes/os
On 08/11/2016 05:25 PM, Ed Leafe wrote:
On Aug 3, 2016, at 6:47 PM, Jay Pipes wrote:
Please see the README for examples of how the library works and how I'm
thinking of structuring these capability strings and symbols. I intend
os-capabilities to be the place where the OpenStack comm
On 08/11/2016 05:46 PM, Clay Gerrard wrote:
On Thu, Aug 11, 2016 at 2:25 PM, Ed Leafe <e...@leafe.com> wrote:
Overall this looks good, although it seems a bit odd to have
ALL_CAPS_STRINGS to represent all:caps:strings throughout. The
example you gave:
>>> print os_caps.H
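The convention Ed is reacting to pairs an ALL_CAPS Python symbol with a colon-delimited capability string. A tiny illustrative sketch of that mapping (assumed constant name and string; not the actual os-capabilities code):

```python
# Illustrative: a module-level ALL_CAPS constant whose value is the
# colon-delimited capability string it represents.
HW_CPU_X86_AVX2 = "hw:cpu:x86:avx2"

def symbolize(cap_string):
    """Map a capability string back to its constant-style symbol name."""
    return cap_string.upper().replace(":", "_")

print(symbolize(HW_CPU_X86_AVX2))
# → HW_CPU_X86_AVX2
```

The round trip is mechanical in both directions, which is what makes the ALL_CAPS form a straightforward stand-in for the colon-delimited string.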
On 08/09/2016 04:28 AM, Vasyl Saienko wrote:
Hello Ironic'ers!
We've recorded a demo that shows how static portgroup works at the moment:
Flat network scenario: https://youtu.be/vBlH0ie6Lm4
Multitenant network scenario: https://youtu.be/Kk5Cc_K1tV8
Just watched both the above demo videos. Gre
Tempest devs,
Let me please draw your attention to a LP bug that may not seem
particularly high priority, but I believe could be resolved easily with
a patch already proposed.
LP bug 1536251 [1] accurately states that Tempest is actively verifying
that an OpenStack API call violates RFC 7230
On 08/08/2016 07:48 AM, Steven Dake (stdake) wrote:
Cool thanks for the response. Appreciate it. I think the big take away
is all the ODMs are free from churn in Newton and have a full cycle to
adapt to the changes which is great news!
Yes, that is absolutely the case.
Best,
-jay
__
On 08/08/2016 06:14 AM, Chris Dent wrote:
On Mon, 8 Aug 2016, Alex Xu wrote:
Chris, thanks for the blog post explaining your idea! It helps me understand
your idea better.
Thanks for reading it. As I think I've mentioned a few times I'm not
really trying to sell the idea, just make sure it is clea
On 08/04/2016 06:40 PM, Clint Byrum wrote:
Excerpts from Jay Pipes's message of 2016-08-04 18:14:46 -0400:
On 08/04/2016 05:30 PM, Clint Byrum wrote:
Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:
I disagree. I see glare as a superset of the needs of the image api and one f
On 08/04/2016 05:30 PM, Clint Byrum wrote:
Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:
I disagree. I see glare as a superset of the needs of the image api and one feature I
need that's image-related was specifically shot down as "the artefact api will solve
that".
You ha
On 08/04/2016 01:17 PM, Chris Friesen wrote:
On 08/04/2016 09:28 AM, Edward Leafe wrote:
The idea that by specifying a distinct microversion would somehow
guarantee
an immutable behavior, though, is simply not the case. We discussed
this at
length at the midcycle regarding the dropping of the n
On 08/04/2016 10:31 AM, Jim Rollenhagen wrote:
On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
Hi Novas and anyone interested in how to represent capabilities in a
consistent fashion.
I spent an hour creating a new os-capabilities Python library this evening:
http://github.com
Hi Novas and anyone interested in how to represent capabilities in a
consistent fashion.
I spent an hour creating a new os-capabilities Python library this evening:
http://github.com/jaypipes/os-capabilities
Please see the README for examples of how the library works and how I'm
thinking of s
On 08/03/2016 10:03 AM, Matt Riedemann wrote:
On 8/2/2016 1:11 AM, han.ro...@zte.com.cn wrote:
patchset url: https://review.openstack.org/#/c/334747/
Allow "revert_resize" to recover error instance after resize/migrate.
When resize/migrate instance, if error occurs on source compute node,
inst
On 08/03/2016 04:35 AM, Tuan Luong wrote:
Hi,
When we try to add an ephemeral disk when booting an instance, as we know
it will create disk.local and the backing file in _base. Both of them
refer to the ephemeral disk. When nova reports the disk usage of the
compute, does it count both of them
On 08/02/2016 08:19 AM, Alex Xu wrote:
Chris had a thought about using ResourceClass to describe Capabilities
with an infinite inventory. In the beginning we brainstormed the idea
of Tags; Tan Lin had the same thought, but we said no very quickly, because
the ResourceClass is really about Quantitat
On 08/02/2016 11:29 AM, Thierry Carrez wrote:
Doug Hellmann wrote:
[...]
Likewise, what if the Manila project team decides they aren't interested
in supporting Python 3.5 or a particular greenlet library du jour that
has been mandated upon them? Is the only filesystem-as-a-service project
going
I have several ideas how to improve vmware dvs and my question is whether I
can fully develop it or not?
Of course you can! Only you need to propose these improvements to the
upstream master branch first.
Best,
-jay
On Thu, Jul 14, 2016 at 3:21 PM, Jay Pipes <jaypi...@gmail.com> wrot
On 08/01/2016 05:20 PM, Jim Rollenhagen wrote:
Yes, I know this is stupid late for these.
I'd like to request two exceptions to the non-priority feature freeze,
for a couple of features in the Ironic driver. These were not requested
at the normal time as I thought they were nowhere near ready.
On 08/01/2016 08:33 AM, Sean Dague wrote:
On 07/29/2016 04:55 PM, Doug Hellmann wrote:
One of the outcomes of the discussion at the leadership training
session earlier this year was the idea that the TC should set some
community-wide goals for accomplishing specific technical tasks to
get the pr
On 07/31/2016 10:03 PM, Alex Xu wrote:
2016-07-28 22:31 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:
On 07/20/2016 11:25 PM, Alex Xu wrote:
One more for end users: Capabilities Discovery API, it should be
'GET
/resource_providers/tags'. Or a pro
On 07/27/2016 10:48 AM, Sam Betts (sambetts) wrote:
While discussing the proposal to add resource_class’ to Ironic nodes for
interacting with the resource provider system in Nova with Jim on IRC, I
voiced my concern about having a resource_class per node. My thoughts
were that we could achieve th
On 07/29/2016 04:45 PM, Chris Dent wrote:
On Fri, 29 Jul 2016, Jay Pipes wrote:
On 07/29/2016 02:31 PM, Chris Dent wrote:
* resource_provider_aggregates as it was plus a new small aggregate
id<->uuid mapping table.
Yes, this.
The integer ID values aren't relevant outside of th
On 07/29/2016 02:31 PM, Chris Dent wrote:
On Thu, 28 Jul 2016, Jay Pipes wrote:
The decision at the mid-cycle was to add a new
placement_sql_connection configuration option to the nova.conf. The
default value would be None which would mean the code in
nova/objects/resource_provider.py would
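The option under discussion would let the placement data live in its own database while defaulting to the existing one. A sketch of that fallback logic (assumed option and key names; the snippet above is truncated, so this is only an illustration of the described behavior, not the code in nova/objects/resource_provider.py):

```python
def effective_placement_db_url(conf):
    """Pick the placement DB URL, falling back when the option is unset.

    conf is a plain dict standing in for parsed configuration. If
    placement_sql_connection is None (the proposed default), fall back
    to the existing API database connection.
    """
    return conf.get("placement_sql_connection") or conf["api_database_connection"]

conf = {"placement_sql_connection": None,
        "api_database_connection": "mysql+pymysql://nova@db/nova_api"}
print(effective_placement_db_url(conf))
# → mysql+pymysql://nova@db/nova_api
```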
On 07/28/2016 09:02 PM, Devananda van der Veen wrote:
On 07/28/2016 05:40 PM, Brad Morgan wrote:
I'd like to solicit some advice about potentially implementing
get_all_bw_counters() in the Ironic virt driver.
https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L438
Example Impleme
On 07/20/2016 05:07 AM, Daniel P. Berrange wrote:
For FPGA, I'd like to see an initial proposal that assumed the FPGA
is pre-programmed & pre-divided into a fixed number of slots and simply
deal with this.
For the record, this is precisely what is described in the first version
of the dynamic-