On Wed, 2017-06-21 at 07:01 -0400, Sean Dague wrote:
> On 06/21/2017 04:43 AM, sfinu...@redhat.com wrote:
> > On Tue, 2017-06-20 at 16:48 -0600, Chris Friesen wrote:
> > > On 06/20/2017 09:51 AM, Eric Fried wrote:
> > > > Nice Stephen!
> > > >
> > > > For those who aren't aware, the rendered version (pretty, so pretty) can
> > > > be accessed via the gate-nova-docs-ubuntu-xenial jenkins job:
> > > >
> > > > http://docs-draft.openstack.org/10/475810/1/check/gate-nova-docs-ubuntu-xenial/25e5173//doc/build/html/scheduling.html?highlight=scheduling
On 06/20/2017 09:51 AM, Alex Xu wrote:
2017-06-19 22:17 GMT+08:00 Jay Pipes:
* Scheduler then creates a list of N of these data structures,
with the first being the data for the selected host, and the
rest being data
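The ordering Jay describes — the selected host's data first, the rest after it — can be sketched roughly as follows. The names and structures here are illustrative only, not the actual Nova scheduler code:

```python
# Illustrative sketch only: function and field names are hypothetical,
# not Nova's actual implementation.

def build_selection_list(selected_host, other_hosts, allocations):
    """Return N entries: the selected host first, then the others.

    Each entry pairs a host with the allocation data that would be
    needed to claim resources against that host.
    """
    ordered = [selected_host] + other_hosts
    return [{"host": host, "allocation": allocations[host]}
            for host in ordered]

allocations = {
    "compute1": {"VCPU": 1, "MEMORY_MB": 2048},
    "compute2": {"VCPU": 1, "MEMORY_MB": 2048},
}
selections = build_selection_list("compute1", ["compute2"], allocations)
```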
On Mon, 2017-06-19 at 09:36 -0500, Matt Riedemann wrote:
> On 6/19/2017 9:17 AM, Jay Pipes wrote:
> > On 06/19/2017 09:04 AM, Edward Leafe wrote:
> > > Current flow:
>
> As noted in the nova-scheduler meeting this morning, this should have
> been called "original plan" rather than "current flow", as Jay pointed
> out inline.
On Jun 20, 2017, at 8:38 AM, Jay Pipes wrote:
>
>>> The example I posted used 3 resource providers. 2 compute nodes with no
>>> local disk and a shared storage pool.
>> Now I’m even more confused. In the straw man example
>> (https://review.openstack.org/#/c/471927/
>>
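The three-resource-provider shape Jay describes — two compute nodes with no local disk, plus one shared storage pool — might look roughly like this. The UUIDs, field names, and layout below are placeholders, not the actual straw-man schema:

```python
import uuid

# Hypothetical sketch of the three-resource-provider case: two diskless
# compute nodes and a shared storage pool. Each candidate spreads the
# request across a compute node (CPU/RAM) and the shared pool (disk).
cn1, cn2, shared = (str(uuid.uuid4()) for _ in range(3))

candidates = [
    {cn1: {"VCPU": 1, "MEMORY_MB": 2048}, shared: {"DISK_GB": 10}},
    {cn2: {"VCPU": 1, "MEMORY_MB": 2048}, shared: {"DISK_GB": 10}},
]
```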
2017-06-19 22:17 GMT+08:00 Jay Pipes:
> On 06/19/2017 09:04 AM, Edward Leafe wrote:
>
>> Current flow:
>> * Scheduler gets a req spec from conductor, containing resource
>> requirements
>> * Scheduler sends those requirements to placement
>> * Placement runs a query to determine the root RPs that can satisfy
>> those requirements
On 06/20/2017 08:43 AM, Edward Leafe wrote:
On Jun 20, 2017, at 6:54 AM, Jay Pipes wrote:
It was the "per compute host" that I objected to.
I guess it would have helped to see an example of the data returned
for multiple compute nodes. The straw man example was for a single
compute node with SR-IOV,
On Jun 20, 2017, at 6:54 AM, Jay Pipes wrote:
>
>>> It was the "per compute host" that I objected to.
>> I guess it would have helped to see an example of the data returned for
>> multiple compute nodes. The straw man example was for a single compute node
>> with SR-IOV,
On 06/19/2017 09:26 PM, Boris Pavlovic wrote:
Hi,
Does this look too complicated and a bit over designed.
Is that a question?
For example, why we can't store all data in memory of single python
application with simple REST API and have
simple mechanism for plugins that are filtering.
On 06/19/2017 08:05 PM, Edward Leafe wrote:
On Jun 19, 2017, at 5:27 PM, Jay Pipes wrote:
It was from the straw man example. Replacing the $FOO_UUID with
UUIDs, and then stripping out all whitespace resulted in about 1500
bytes. Your example,
Hi,
Does this look too complicated and a bit over designed.
For example, why we can't store all data in memory of single python
application with simple REST API and have
simple mechanism for plugins that are filtering. Basically there is no
problem with storing it on single
On Jun 19, 2017, at 5:27 PM, Jay Pipes wrote:
>
>> It was from the straw man example. Replacing the $FOO_UUID with UUIDs, and
>> then stripping out all whitespace resulted in about 1500 bytes. Your
>> example, with whitespace included, is 1600 bytes.
>
> It was the "per compute host" that I objected to.
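The whitespace point Ed measured is easy to reproduce: json.dumps with compact separators drops exactly the formatting overhead that pretty-printing adds. The payload below is a made-up stand-in, not the actual straw-man blob:

```python
import json

# Made-up stand-in payload; the real straw-man blob was ~1500 bytes
# with whitespace stripped.
blob = {"allocations": {f"rp-{i}": {"VCPU": 1, "MEMORY_MB": 2048,
                                    "DISK_GB": 10}
                        for i in range(8)}}

pretty = json.dumps(blob, indent=2)                # whitespace included
compact = json.dumps(blob, separators=(",", ":"))  # whitespace stripped
```

Both forms decode to the same structure; only the serialized size differs.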
On 06/19/2017 05:24 PM, Edward Leafe wrote:
On Jun 19, 2017, at 1:34 PM, Jay Pipes wrote:
OK, thanks for clarifying that. When we discussed returning 1.5K per
compute host instead of a couple of hundred bytes, there was
discussion that paging would be necessary.
On Jun 19, 2017, at 1:34 PM, Jay Pipes wrote:
>
>> OK, thanks for clarifying that. When we discussed returning 1.5K per compute
>> host instead of a couple of hundred bytes, there was discussion that paging
>> would be necessary.
>
> Not sure where you're getting the whole
On 06/19/2017 01:59 PM, Edward Leafe wrote:
While we discussed the fact that there may be a lot of entries, we did
not say we'd immediately support a paging mechanism.
OK, thanks for clarifying that. When we discussed returning 1.5K per
compute host instead of a couple of hundred bytes, there was discussion
that paging would be necessary.
On Jun 19, 2017, at 9:17 AM, Jay Pipes wrote:
As Matt pointed out, I mis-wrote when I said “current flow”. I meant “current
agreed-to design flow”. So no need to rehash that.
>> * Placement returns a number of these data structures as JSON blobs. Due to
>> the size of the
On 6/19/2017 9:17 AM, Jay Pipes wrote:
On 06/19/2017 09:04 AM, Edward Leafe wrote:
Current flow:
As noted in the nova-scheduler meeting this morning, this should have
been called "original plan" rather than "current flow", as Jay pointed
out inline.
* Scheduler gets a req spec from
On 06/19/2017 09:04 AM, Edward Leafe wrote:
Current flow:
* Scheduler gets a req spec from conductor, containing resource requirements
* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy those
requirements
Not root RPs.
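The first two bullets in Ed's flow amount to encoding the request spec's resource requirements into a single query against placement. A minimal sketch of that encoding, assuming a resources query string of the form resource_class:amount pairs (the function name and the exact parameter format here are assumptions, not Nova's actual client code):

```python
# Hedged sketch: how a scheduler might encode a request spec's resource
# requirements for a placement query. Illustrative only.

def build_resources_query(resources):
    """Encode {'VCPU': 1, 'MEMORY_MB': 2048} as 'MEMORY_MB:2048,VCPU:1'."""
    return ",".join(f"{rc}:{amount}"
                    for rc, amount in sorted(resources.items()))

req_spec = {"VCPU": 1, "MEMORY_MB": 2048, "DISK_GB": 10}
# The scheduler would then send this string to placement as the
# resources parameter of its candidates query.
query = build_resources_query(req_spec)
```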
There is a lot going on lately in placement-land, and some of the changes being
proposed are complex enough that it is difficult to understand what the final
result is supposed to look like. I have documented my understanding of the
current way that the placement/scheduler interaction works,