Jay,
The status of the "removed" GPU still shows as "Available" in the
pci_devices table.
2017-07-07 8:34 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:
Hi again, Eddie :) Answer inline...
On 07/06/2017 08:14 PM, Eddie Yen wrote:
On 06/26/2017 12:58 PM, Jose Renato Santos wrote:
Hi
I am accessing the nova API using the gophercloud SDK
https://github.com/rackspace/gophercloud
I am running OpenStack Newton installed with OpenStack-Ansible
I am accessing the “List Servers” call of the nova API with the
Changes-Since
On 06/26/2017 02:27 PM, Jose Renato Santos wrote:
Jay,
Thanks for your response
Let me clarify my point.
I am not expecting to see a change in the updated_at column of a server when
the rules of its security group change.
I agree that would be a change to be handled by the Neutron Api, and
You have installed a really old version of Nova on that server. What are
you using to install OpenStack?
Best,
-jay
On 06/14/2017 12:13 PM, SGopinath s.gopinath wrote:
Hi,
I'm trying to install Openstack Ocata in
Ubuntu 16.04.2 LTS.
During installation of nova at this step
su -s /bin/sh -c
Awesome, thanks Jose!
On 06/26/2017 11:12 PM, Jose Renato Santos wrote:
Jay
I created a bug report as you suggested:
https://bugs.launchpad.net/nova/+bug/1700684
Thanks for your help
Best
Renato
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, June 26
On 10/06/2017 10:18 AM, Ramu, MohanX wrote:
Hi Jay,
I am able to create custom traits without any issue. I want to associate some
value with those traits.
Like I mentioned in the previous email, that's not how traits work :)
A trait *is* the value that is associated with a resource provider.
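So the workflow is: create the trait, then associate it with a resource
provider; there is no separate value. A rough sketch of the association call
(the UUID, generation, and trait name below are placeholders, not from this
thread):

```
PUT /resource_providers/{uuid}/traits
OpenStack-API-Version: placement 1.10

{
    "resource_provider_generation": 1,
    "traits": ["CUSTOM_GOLD_SSD"]
}
```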
Rock on :)
On 10/04/2017 09:33 AM, Ramu, MohanX wrote:
Thank you so much Jay. After adding this header, it is working fine.
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 11:36 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits
Against the Pike placement API endpoint, make sure you send the
following HTTP header:
OpenStack-API-Version: placement 1.10
Best,
-jay
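As an illustration of that header in use (the endpoint URL and token are
placeholders, not from the thread), the request could be built like this:

```python
import urllib.request

# Sketch of a traits request against a Pike placement endpoint.
# The URL and token are placeholders for your cloud's values.
req = urllib.request.Request("http://placement.example.com/traits")

# Opt in to placement microversion 1.10, which introduced the traits API.
req.add_header("OpenStack-API-Version", "placement 1.10")
req.add_header("X-Auth-Token", "<keystone-token>")

# urllib normalizes header names to Capitalized-with-dashes form.
print(req.headers["Openstack-api-version"])
```

Without the header, the placement service answers at its default (1.0)
microversion and the traits routes are not exposed.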
On 10/03/2017 02:01 PM, Ramu, MohanX wrote:
Please refer to the attached original.
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
BTW, a great place to see examples of both good and bad API usage is to
check out the Gabbit functional API tests for the placement API. Here is
the set of tests for the traits functionality:
https://github.com/openstack/nova/blob/master/nova/tests/functional/api/openstack/
than 50 if you want to launch a
16GB instance on a host with 64GB of RAM. Try reserving 32 1GB huge pages.
Best,
-jay
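Concretely (the numbers come from the thread; the flavor name and the exact
reservation mechanism are my assumptions), that means reserving fewer pages at
boot and having the flavor request hugepage-backed memory:

```
# Kernel command line: reserve 32 x 1GB huge pages instead of 50,
# leaving 32GB of regular memory for the host and other guests:
#   default_hugepagesz=1G hugepagesz=1G hugepages=32
# Flavor extra spec so a 16GB instance is backed by 1GB pages:
openstack flavor set m1.xlarge --property hw:mem_page_size=1GB
```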
2017-09-06 1:47 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:
Please remember to add a topic [nova] marker to your subject line.
you think Jay?
On Wed, Sep 6, 2017 at 9:22 PM, Jay Pipes <jaypi...@gmail.com> wrote:
On 09/06/2017 01:21 AM, Weichih Lu wrote:
Thanks for your response.
Does this mean that if I want to create an instance with flavor: 16G
Please remember to add a topic [nova] marker to your subject line.
Answer below.
On 09/05/2017 04:45 AM, Weichih Lu wrote:
Dear all,
I have a compute node with 64GB RAM, and I set 50 hugepages with a 1GB
hugepage size. The "free" command shows free memory of about
12GB. And free
Detach the volume, then resize it, then re-attach.
Best,
-jay
On 09/26/2017 09:22 AM, Volodymyr Litovka wrote:
Colleagues,
can't find ways to resize attached volume. I'm on Pike.
As far as I understand, it needs to be supported in Nova, because
Cinder needs to check with Nova whether it's
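Spelled out with the openstack CLI (the server and volume names and the new
size are placeholders), the detach/resize/re-attach path looks like:

```
openstack server remove volume myserver myvolume   # detach
openstack volume set --size 20 myvolume            # resize to 20 GB
openstack server add volume myserver myvolume      # re-attach
```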
On 09/26/2017 10:20 AM, Volodymyr Litovka wrote:
Hi Jay,
I know about this way :-) but Pike introduced the ability to resize attached
volumes:
"It is now possible to signal and perform an online volume size change
as of the 2.51 microversion using the volume-extended external event.
Nova will
,
-jay
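Assuming Pike-era APIs, the online path is driven from the Cinder side; the
microversion number below is my recollection, so verify it against your cloud:

```
# Extend an attached ("in-use") volume; Cinder then sends Nova the
# volume-extended external event so the instance sees the new size.
cinder --os-volume-api-version 3.42 extend myvolume 20
```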
Thanks & Regards,
Mohan Ramu
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 9:26 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working
On 10/03/2017 11:34 AM, Ramu, MohanX wrote:
Hi,
We have implemented the OpenStack Ocata and Pike releases and are able to
consume the Placement resource provider APIs, but not the resource class APIs.
I tried to run the Traits API in the Pike setup too. I am not able to run any
Traits API.
As per the
On 08/18/2017 08:50 AM, Divneet Singh wrote:
Hello, I have been trying to install Ocata on Ubuntu 16.04; for the time
being I have 2 nodes. I just can't figure this out.
I have set up the Placement API, but I get an error after restarting the
nova service or rebooting:
"2017-08-18 08:27:41.496 1422 WARNING
On 11/16/2017 12:06 AM, Ramu, MohanX wrote:
Hi All,
I have a use case where I need to apply a filter (custom traits)
while the Placement API fetches the resource providers for launching an
instance, so that I can have a list of resource providers which meet my
condition/filter/validation. The
.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:validate_provider_summaries
You will need to wait for the Queens release for the complete
traits-based scheduling functionality to be operational.
Best,
-jay
-----Original Message-
From: Jay Pipes [mailto:jaypi...@gmail
On 12/01/2017 08:57 AM, si...@turka.nl wrote:
Hi,
I have created a flavor with the following metadata:
quota:disk_write_bytes_sec='10240'
This should limit disk writes to 10240 bytes per second (10 KB/s). I also
tried it with a higher number (100 MB/s).
Using the flavor I have launched an instance and
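For reference, that flavor property is set like this (the flavor name is a
placeholder):

```
# Throttle each instance of the flavor to 10240 bytes/sec of disk writes
# (enforced by the libvirt driver's per-device iotune settings).
openstack flavor set m1.small --property quota:disk_write_bytes_sec=10240
```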
On 07/02/2018 09:45 AM, Houssam ElBouanani wrote:
Hi,
I have recently finished installing a minimal OpenStack Queens
environment for a school project, and was asked whether it is possible
to deploy an additional compute node on bare metal, aka without an
underlying operating system, in order
On 05/02/2018 02:43 PM, Torin Woltjer wrote:
I am working on setting up Openstack for HA and one of the last orders of
business is getting HA behavior out of the compute nodes.
There is no HA behaviour for compute nodes.
Is there a project that will automatically evacuate instances from a
On 05/02/2018 04:39 PM, Torin Woltjer wrote:
> There is no HA behaviour for compute nodes.
>
> You are referring to HA of workloads running on compute nodes, not HA of
> compute nodes themselves.
It was a mistake for me to say HA when referring to compute and
instances. Really I want to
On 01/17/2018 12:46 PM, Jorge Luiz Correa wrote:
Hi, I would like some help understanding what each field means in the
output of the command 'openstack hypervisor stats show':
it's an amalgamation of legacy information that IMHO should be
deprecated from the Compute API.
FWIW, the
On 01/15/2018 12:58 PM, Satish Patel wrote:
But Fuel is active project, isn't it?
https://docs.openstack.org/fuel-docs/latest/
No, it is no longer developed or supported.
-jay
On 01/02/2018 06:09 AM, Guo James wrote:
Hi guys
I know that Ironic supports multiple nova-compute services.
But I am not sure whether OpenStack supports the situation where every
nova-compute has its own unshared Ironic,
and these Ironics share one Nova and one Neutron.
I'm not quite following you... what do you
ironic, a nova, a neutron in an OpenStack environment
Does everything go well?
Sure, that should work just fine.
Best,
-jay
Thanks
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, January 02, 2018 8:59 PM
To: openstack@lists.openstack.org
Subject: Re
On 08/08/2018 09:37 AM, Cody wrote:
On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
NoValidHost: No valid host was found.
: NoValidHost_Remote: No valid host was found.
2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils
[req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
default default] [instance: b466a974
On 08/07/2018 10:57 AM, Cody wrote:
Hi everyone,
I intentionally triggered an error by launching more instances than it
is allowed by the 'cpu_allocation_ratio' set on a compute node. When it
comes to logs, the only place that contained a clue to explain the launch
failure was in the
On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
most of the work. The others are just
On 08/27/2018 09:40 AM, Risto Vaaraniemi wrote:
Hi,
I tried to migrate a guest to another host but it failed with a
message saying there's not enough capacity on the target host even
though the server should be nearly empty. The guest I'm trying to
move needs 4 cores, 4 GB of memory and 50 GB
On 07/16/2018 10:30 AM, Toni Mueller wrote:
Hi Jay,
On Fri, Jul 06, 2018 at 12:46:04PM -0400, Jay Pipes wrote:
There is no current way to say "On this dual-Xeon compute node, put all
workloads that don't care about dedicated CPUs on this socket and all
workloads that DO care about dedi
On 08/30/2018 10:54 AM, Eugen Block wrote:
Hi Jay,
You need to set your ram_allocation_ratio nova.CONF option to 1.0 if
you're running into OOM issues. This will prevent overcommit of memory
on your compute nodes.
I understand that, the overcommitment works quite well most of the time.
It
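The setting Jay refers to, sketched in nova.conf on the compute node:

```
[DEFAULT]
# 1.0 disables RAM overcommit: the scheduler will place at most
# one byte of guest RAM per byte of physical host RAM.
ram_allocation_ratio = 1.0
```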
Hi Tony,
The short answer is that you cannot do that today. Today, each Nova
compute node is either "all in" for NUMA and CPU pinning or it's not.
This means that for resource-constrained environments like "The Edge!",
there are not very good ways to finely divide up a compute node and make
On 09/07/2018 03:46 PM, Hang Yang wrote:
Hi there,
I'm new to the DIB tool and ran into an issue when using the 2.16.0 DIB tool
to build a CentOS-based image with the pip-and-virtualenv element. It failed
at
https://bugs.launchpad.net/neutron/+bug/1777640
Best,
-jay
On 11/06/2018 08:21 AM, Terry Lundin wrote:
Hi all,
I've been struggling with instances suddenly not being able to fetch
metadata from Openstack Queens (this has worked fine earlier).
Newly created VMs fail to connect to the magic
On 08/30/2018 10:19 AM, Eugen Block wrote:
When does Nova apply its filters (Ram, CPU, etc.)?
Of course at instance creation and (live-)migration of existing
instances. But what about existing instances that have been shutdown and
in the meantime more instances on the same hypervisor have been
On 08/23/2018 11:01 PM, 余婷婷 wrote:
Hi:
Sorry for bothering everyone. I have now updated my OpenStack to Queens,
and use the nova-placement-api to provide resources.
When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to
update the memory_mb allocation_ratio, it succeeds. But after some
ssibly "shelve and then offload an instance", then that
is a different thing, and in both of *those* cases, resources are
released on the compute host.
Best,
-jay
Zitat von Jay Pipes :
On 08/30/2018 10:54 AM, Eugen Block wrote:
Hi Jay,
You need to set your ram_allocation_ratio nova.CONF option to 1.0
On 11/30/2018 05:52 PM, Mike Carden wrote:
Have you set the placement_randomize_allocation_candidates CONF option
and are still seeing the packing behaviour?
No I haven't. Where would be the place to do that? In a nova.conf
somewhere that the nova-scheduler containers on the
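As a sketch (assuming a Queens-era deployment, where placement still ships
inside the nova tree), the option goes in the nova.conf read by the
placement/scheduler services:

```
[placement]
# Shuffle equally-weighted allocation candidates instead of always
# returning them in the same (packing) order.
randomize_allocation_candidates = true
```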
On 11/30/2018 02:53 AM, Mike Carden wrote:
I'm seeing a similar issue in Queens deployed via tripleo.
Two x86 compute nodes and one ppc64le node and host aggregates for
virtual instances and baremetal (x86) instances. Baremetal on x86 is
working fine.
All VMs get deployed to compute-0. I
On 11/28/2018 02:50 AM, Zufar Dhiyaulhaq wrote:
Hi,
Thank you. I was able to fix this issue by adding this configuration to the
nova configuration file on the controller node.
driver=filter_scheduler
That's the default:
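For context, a sketch of where that option lives in a Queens-era nova.conf
(the section name is per that era's config layout):

```
[scheduler]
# filter_scheduler is already the default driver, so setting it
# explicitly is a no-op.
driver = filter_scheduler
```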
On 09/17/2018 09:39 AM, Peter Penchev wrote:
Hi,
So here's a possibly stupid question - or rather, a series of such :)
Let's say a company has two (or five, or a hundred) datacenters in
geographically different locations and wants to deploy OpenStack in both.
What would be a deployment scenario
On 08/07/2014 01:17 PM, Ronak Shah wrote:
Hi,
Following a very interesting and vocal thread on GBP for last couple of
days and the GBP meeting today, GBP sub-team proposes following name
changes to the resource.
policy-point for endpoint
policy-group for endpointgroup (epg)
Please reply if
On 08/08/2014 08:55 AM, Kevin Benton wrote:
The existing constructs will not change.
A followup question on the above...
If the GBP API is merged into Neutron, the next logical steps (from what I
can tell) will be to add drivers that handle policy-based payloads/requests.
Some of these
On 08/08/2014 12:29 PM, Sumit Naiksatam wrote:
Hi Jay, To extend Ivar's response here, the core resources and core
plugin configuration does not change with the addition of these
extensions. The mechanism to implement the GBP extensions is via a
service plugin. So even in a deployment where a
On 08/08/2014 08:49 AM, Tomoki Sekiyama wrote:
Hi all,
I'm considering how I can apply image download/upload bandwidth limit for
glance for network QoS.
There was a review for the bandwidth limit, however it is abandoned.
* Download rate limiting
https://review.openstack.org/#/c/21380/
Paul, does this friend of a friend have a reproducible test script for
this?
Thanks!
-jay
On 08/08/2014 04:42 PM, Kevin Benton wrote:
If this is true, I think the issue is not on Neutron side but the Nova
side.
Neutron just receives and handles individual port requests. It has no
notion of
On 08/10/2014 10:36 PM, Jay Lau wrote:
I was asking this because I got a -2 for
https://review.openstack.org/109505 , just want to know why this new
term metadetails was invented when we already have details,
metadata, system_metadata, instance_metadata, and properties (on
images and volumes).
Thanks, Paul!
On 08/11/2014 10:10 AM, CARVER, PAUL wrote:
Armando M. [mailto:arma...@gmail.com] wrote:
On 9 August 2014 10:16, Jay Pipes jaypi...@gmail.com wrote:
Paul, does this friend of a friend have a reproducible test
script for this?
We would also need to know the OpenStack
On 08/11/2014 11:06 AM, Dan Smith wrote:
As the person who -2'd the review, I'm thankful you raised this issue on
the ML, Jay. Much appreciated.
The metadetails term isn't being invented in this patch, of course. I
originally complained about the difference when this was being added:
Hi Li, comments inline.
On 08/08/2014 12:03 AM, Li Ma wrote:
Getting a massive amount of information from data storage to be displayed is
where most of the activity happens in OpenStack. The two activities of reading
data and writing (creating, updating and deleting) data are fundamentally
On 08/11/2014 05:58 PM, Jay Lau wrote:
I think the metadata in server group is an important feature and it
might be used by
https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group
Actually, we are now doing an internal development for above bp and want
to contribute this back
On 08/12/2014 10:56 AM, Mark McLoughlin wrote:
Hey
(Terrible name for a policy, I know)
From the version_cap saga here:
https://review.openstack.org/110754
I think we need a better understanding of how to approach situations
like this.
Here's my attempt at documenting what I think we're
On 08/12/2014 04:13 AM, Michael Still wrote:
Hi,
this is just a friendly reminder that we are now 9 days away from
feature proposal freeze for nova. If you think your blueprint isn't
going to make it in time, then now would be a good time to let me know
so that we can defer it until Kilo. That
On 08/13/2014 06:35 PM, Russell Bryant wrote:
On 08/13/2014 06:23 PM, Mark McLoughlin wrote:
On Wed, 2014-08-13 at 12:05 -0700, James E. Blair wrote:
cor...@inaugust.com (James E. Blair) writes:
Sean Dague s...@dague.net writes:
This has all gone far enough that someone actually wrote a
On 08/13/2014 08:31 PM, zhiwei wrote:
Hi all,
We wrote a nova scheduler plugin that needs to fetch image metadata by
image_id, but we encountered one problem: we did not have the glance context.
Our solution is to configure an OpenStack admin user and password in
nova.conf; as you know, this is not good.
your problems.
One of our scheduler components needs to fetch image metadata by image_id
(at this time, there is no instance).
Why? Again, the request_spec contains all the information you need about
the image...
Best,
-jay
On Thu, Aug 14, 2014 at 9:29 AM, Jay Pipes jaypi...@gmail.com
On 08/12/2014 06:57 PM, Michael Still wrote:
Hi.
One of the action items from the nova midcycle was that I was asked to
make nova's expectations of core reviews more clear. This email is an
attempt at that.
Nova expects a minimum level of sustained code reviews from cores. In
the past this has
On 08/13/2014 11:50 PM, Manickam, Kanagaraj wrote:
Hi,
Nova provides a flag ‘allow_resize_to_same_host’ to resize the given
instance on the same hypervisor where it is currently residing. When
this flag is set to True, the nova.compute.api: resize() method does not
set the scheduler hint with
On 08/13/2014 11:06 PM, zhiwei wrote:
Hi Jay.
The case is: when Heat creates a stack, it will first call our
scheduler (passing image_id), and our scheduler will get image metadata by
image_id.
Our scheduler will build a placement policy from the image metadata, then
start booting the VM.
How exactly
On 08/15/2014 04:21 AM, Roman Podoliaka wrote:
Hi Oslo team,
I propose that we add Mike Bayer (zzzeek) to the oslo.db core reviewers team.
Mike is an author of SQLAlchemy, Alembic, Mako Templates and some
other stuff we use in OpenStack. Mike has been working on OpenStack
for a few months
On 08/15/2014 03:14 PM, Matthew Treinish wrote:
Hi Everyone,
So as part of splitting out common functionality from tempest into a library [1]
we need to create a new repository. Which means we have the fun task of coming
up with something to name it. I personally thought we should call it:
On 08/16/2014 12:27 PM, Marc Koderer wrote:
Hi all,
Am 15.08.2014 um 23:31 schrieb Jay Pipes jaypi...@gmail.com:
I suggest that tempest should be the name of the import'able library, and that the
integration tests themselves should be what is pulled out of the current Tempest repository
On 08/17/2014 05:11 AM, Stan Lagun wrote:
On Fri, Aug 15, 2014 at 7:17 PM, Sandy Walsh sandy.wa...@rackspace.com
mailto:sandy.wa...@rackspace.com wrote:
I recently suggested that the Ceilometer API (and integration tests)
be separated from the implementation (two repos) so others might
Caution: words below may cause discomfort. I ask that folks read *all*
of my message before reacting to any piece of it. Thanks!
On 08/19/2014 02:41 AM, Robert Collins wrote:
On 18 August 2014 09:32, Clint Byrum cl...@fewbar.com wrote:
I can see your perspective but I don't think its
On 08/19/2014 11:23 AM, Russell Bryant wrote:
On 08/19/2014 05:31 AM, Robert Collins wrote:
Hey everybody - https://wiki.openstack.org/wiki/TripleO/SpecReviews
seems pretty sane as we discussed at the last TripleO IRC meeting.
I'd like to propose that we adopt it with the following tweak:
On 08/20/2014 07:34 AM, Miguel Angel Ajo Pelayo wrote:
I couldn't resist making a little benchmark test of the new RPC implementation
shihanzhang wrote:
http://www.ajo.es/post/95269040924/neutron-security-group-rules-for-devices-rpc-rewrite
The results are awesome :-)
Indeed, fantastic news.
On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
On 08/20/2014 08:27 AM, Joe Gordon wrote:
On Aug 19, 2014 10:45 AM, Day, Phil <philip@hp.com> wrote:
-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com]
Sent: 19
Hi Thierry, thanks for the reply. Comments inline. :)
On 08/20/2014 06:32 AM, Thierry Carrez wrote:
Jay Pipes wrote:
[...] If either of the above answers is NO, then I believe the
Technical Committee should recommend that the integrated project be
removed from the integrated release.
HOWEVER
On 08/20/2014 11:41 AM, Zane Bitter wrote:
On 19/08/14 10:37, Jay Pipes wrote:
By graduating an incubated project into the integrated release, the
Technical Committee is blessing the project as the OpenStack way to do
some thing. If there are projects that are developed *in the OpenStack
On 08/20/2014 05:06 PM, Chris Friesen wrote:
On 08/20/2014 07:21 AM, Jay Pipes wrote:
Hi Thierry, thanks for the reply. Comments inline. :)
On 08/20/2014 06:32 AM, Thierry Carrez wrote:
If we want to follow your model, we probably would have to dissolve
programs as they stand right now
On 08/21/2014 07:58 AM, Chris Dent wrote:
On Thu, 21 Aug 2014, Sean Dague wrote:
By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team.
This is a big part of this conversation that really confuses me. Who is
that one
On 08/20/2014 11:54 PM, Clint Byrum wrote:
Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:
On 08/20/2014 05:06 PM, Chris Friesen wrote:
On 08/20/2014 07:21 AM, Jay Pipes wrote:
...snip
We already run into issues with something as basic as competing SQL
databases.
If the TC
On 08/22/2014 08:33 AM, Thierry Carrez wrote:
Hi everyone,
We all know being a project PTL is an extremely busy job. That's because
in our structure the PTL is responsible for almost everything in a project:
- Release management contact
- Work prioritization
- Keeping bugs under control
-
On 08/22/2014 01:48 PM, Clint Byrum wrote:
It has been brought to my attention that Ironic uses the biggest hammer
in the IPMI toolbox to control chassis power:
https://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ipminative.py#n142
Which is
ret =
On 08/23/2014 06:35 PM, Clint Byrum wrote:
I agree as well. PTL is a servant of the community, as any good leader
is. If the PTL feels they have to drop the hammer, or if an impasse is
reached where they are asked to, it is because they have failed to get
everyone communicating effectively, not
On 08/25/2014 11:10 AM, Joe Cropper wrote:
Hello,
Is our long-term vision to allow VMs to be dynamically added/removed
from a group? That is, unless I'm overlooking something, it appears
that you can only add a VM to a server group at VM boot time and
effectively remove it by deleting the
, Aug 25, 2014 at 10:16 AM, Jay Pipes jaypi...@gmail.com wrote:
On 08/25/2014 11:10 AM, Joe Cropper wrote:
Hello,
Is our long-term vision to allow VMs to be dynamically added/removed
from a group? That is, unless I'm overlooking something, it appears
that you can only add a VM to a server
On 08/25/2014 12:08 PM, Aryeh Friedman wrote:
http://www.quora.com/Why-would-the-creators-of-OpenStack-the-market-leader-in-cloud-computing-platforms-refuse-to-use-it-and-use-AWS-instead
Would you please not post inflammatory random web pages to the
developers' mailing list?
Thanks,
On 08/25/2014 03:50 PM, Adam Lawson wrote:
I recognize I'm joining the discussion late but I've been following the
dialog fairly closely and want to offer my perspective FWIW. I have a
lot going through my head, not sure how to get it all out there so I'll
do a brain dump, get some feedback and
On 08/26/2014 07:09 PM, James E. Blair wrote:
Hi,
After reading https://wiki.openstack.org/wiki/Network/Incubator I have
some thoughts about the proposed workflow.
We have quite a bit of experience and some good tools around splitting
code out of projects and into new projects. But we don't
On 08/27/2014 06:41 AM, Markus Zoeller wrote:
The review of the spec for blueprint hot-resize has several comments
about the need to refactor the existing code base of resize and
migrate before the blueprint can be considered (see [1]).
I'm interested in the result of the blueprint therefore
On 08/26/2014 05:41 PM, Kurt Griffiths wrote:
* uWSGI + gevent
* config: http://paste.openstack.org/show/100592/
* app.py: http://paste.openstack.org/show/100593/
Hi Kurt!
Thanks for posting the benchmark configuration and results. Good stuff :)
I'm curious
interfaces and data structures are cleaned up and
versioned will just lead to greater technical debt and an increase in
frustration on the part of Nova developers and scheduler developers alike.
-jay
On Wed, Aug 27, 2014 at 11:52 AM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote
On 08/28/2014 12:50 PM, Michael Still wrote:
On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange berra...@redhat.com wrote:
On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:
How to do we handle specs that have slipped through the cracks
and did not make it for Juno?
Rebase the
On 08/27/2014 11:34 AM, Doug Hellmann wrote:
On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org
wrote:
Hi everyone,
I've been thinking about what changes we can bring to the Design
Summit format to make it more productive. I've heard the feedback
from the mid-cycle meetups and
On 08/28/2014 02:21 PM, Sean Dague wrote:
On 08/28/2014 01:58 PM, Jay Pipes wrote:
On 08/27/2014 11:34 AM, Doug Hellmann wrote:
On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org
wrote:
Hi everyone,
I've been thinking about what changes we can bring to the Design
Summit
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
I’ll try and not whine about my pet project but I do think there is a
problem here. For the Gantt project to split out the scheduler there is
a crucial BP that needs to be implemented (
https://review.openstack.org/#/c/89893/ ) and, unfortunately,
On 08/28/2014 03:31 PM, Sean Dague wrote:
On 08/28/2014 03:06 PM, Jay Pipes wrote:
On 08/28/2014 02:21 PM, Sean Dague wrote:
On 08/28/2014 01:58 PM, Jay Pipes wrote:
On 08/27/2014 11:34 AM, Doug Hellmann wrote:
On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org
wrote:
Hi
On 08/28/2014 04:05 PM, Chris Friesen wrote:
On 08/28/2014 01:44 PM, Jay Pipes wrote:
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and no negative reviews is
dropped because
Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
[nova] Is the BP approval process broken?
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
I'll try and not whine about my pet project
On 08/29/2014 10:19 AM, David Kranz wrote:
While reviewing patches for moving response checking to the clients, I
noticed that there are places where client methods do not return any value.
This is usually, but not always, a delete method. IMO, every rest client
method should return at least the
On 08/29/2014 12:25 PM, Zane Bitter wrote:
On 28/08/14 17:02, Jay Pipes wrote:
I understand your frustration about the silence, but the silence from
core team members may actually be a loud statement about where their
priorities are.
I don't know enough about the Nova review situation to say
On 08/29/2014 02:48 AM, Preston L. Bannister wrote:
Looking to put a proper implementation of instance backup into
OpenStack. Started by writing a simple set of baseline tests and running
against the stable/icehouse branch. They failed!
On 08/29/2014 05:15 PM, Zane Bitter wrote:
On 29/08/14 14:27, Jay Pipes wrote:
On 08/26/2014 10:14 AM, Zane Bitter wrote:
Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking for
some guidance on how they should
On 09/02/2014 07:15 AM, Duncan Thomas wrote:
On 11 August 2014 19:26, Jay Pipes jaypi...@gmail.com wrote:
The above does not really make sense for MySQL Galera/PXC clusters *if only
Galera nodes are used in the cluster*. Since Galera is synchronously
replicated, there's no real point
On 09/04/2014 11:32 AM, Vladik Romanovsky wrote:
+1
I very much agree with Dan's proposal.
I am concerned about difficulties we will face with merging
patches that spread across various regions: manager, conductor, scheduler,
etc.
However, I think, this is a small price to pay for
On 09/04/2014 11:32 AM, Steven Hardy wrote:
On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:
On 08/29/2014 05:15 PM, Zane Bitter wrote:
On 29/08/14 14:27, Jay Pipes wrote:
On 08/26/2014 10:14 AM, Zane Bitter wrote:
Steve Baker has started the process of moving Heat tests out