Hi David,
This email is long overdue.
1. I don't recall ever stating that we were going to use OAuth 1.0a over
2.0, or vice versa. I've checked
https://etherpad.openstack.org/havana-saml-oauth-scim and
https://etherpad.openstack.org/havana-external-auth and couldn't find
anything that said
Hi Jeff,
thanks for the contribution. Once there is no more input on the gathered
issues, I'll try to wrap all the problems up in the BP's whiteboard and we
can start designing proposals.
Best
-- Jarda
On 2013/09/07 16:09, Walls, Jeffrey Joel (HP Converged Cloud - Cloud OS)
wrote:
One issue I
Hi,
I have completely missed this discussion, as it did not have quantum/Neutron in
the subject (modifying it now)
I think that the security group is the right place to control this.
I think that this might be only allowed to admins.
Let me explain what we need which is more than just disable
Ceilometer is a great project for taking metrics available in Nova and other
systems and making them available for use by Operations, Billing, Monitoring,
etc - and clearly we should try and avoid having multiple collectors of the
same data.
But making the Nova scheduler dependent on
Hi,
First, I want to mention that currently the health monitor is not a pure DB
object: upon creation the request is also sent to the device/driver.
Another thing is that there is no delete_health_monitor in the driver API
(delete_pool_health_monitor only deletes the association), which is weird
because
+1
Just to add to the context - keep in mind that within HP (and I assume other
large organisations such as IBM, but I can't speak directly for them) there are
now many separate divisions working with OpenStack (Public Cloud, Private Cloud
Products, Labs, Consulting, etc), each bringing
-Original Message-
From: Dan Smith [mailto:d...@danplanet.com]
Sent: 16 July 2013 14:51
To: OpenStack Development Mailing List
Cc: Day, Phil
Subject: Re: [openstack-dev] Moving task flow to conductor - concern about
scale
In the original context of using Conductor as a database
On 07/18/2013 10:12 PM, Lu, Lianhao wrote:
snip
Using ceilometer as the source of those metrics was discussed in the
nova-scheduler subgroup meeting. (see #topic extending data in host
state in the following link).
Hi Josh,
My idea's really pretty simple - make DB proxy and Task workflow separate
services, and allow people to co-locate them if they want to.
Cheers.
Phil
-Original Message-
From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
Sent: 17 July 2013 14:57
To: OpenStack Development
On 07/19/2013 06:18 AM, Day, Phil wrote:
Ceilometer is a great project for taking metrics available in Nova and other
systems and making them available for use by Operations, Billing, Monitoring,
etc - and clearly we should try and avoid having multiple collectors of the
same data.
But
On 19/07/2013 07:10, Steve Martinelli wrote:
Hi David,
This email is long overdue.
1. I don't recall ever stating that we were going to use OAuth 1.0a over
2.0, or vice versa. I've checked
https://etherpad.openstack.org/havana-saml-oauth-scim and
Hi Sean,
Do you think the existing static allocators should be migrated to go through
ceilometer - or do you see that as different? Ignoring backward compatibility.
The reason I ask is I want to extend the static allocators to include a couple
more. These plugins are the way I would have
On 07/19/13 at 12:08pm, Murray, Paul (HP Cloud Services) wrote:
Hi Sean,
Do you think the existing static allocators should be migrated to go through
ceilometer - or do you see that as different? Ignoring backward compatibility.
It makes sense to keep some things in Nova, in order to
Hi guys,
Both 0.0.16 and 0.0.17 seem to have a broken test counter. It shows that
twice as many tests have been run as I actually have.
Thanks,
Roman
On Thu, Jul 18, 2013 at 2:29 AM, David Ripton drip...@redhat.com wrote:
On 07/17/2013 04:54 PM, Robert Collins wrote:
On 18 July 2013
On 07/18/2013 05:56 PM, Eric Windisch wrote:
These callback methods are part of the Kombu driver (and maybe part of
Qpid), but are NOT part of the RPC abstraction. These are private
methods. They can be broken for external consumers of these methods,
because there
On 07/18/2013 11:12 PM, Lu, Lianhao wrote:
Sean Dague wrote on 2013-07-18:
On 07/17/2013 10:54 PM, Lu, Lianhao wrote:
Hi fellows,
Currently we're implementing the BP
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
The main idea is to have
an extensible plugin
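A rough sketch of what such an extensible metrics-plugin framework could look like (all class, method, and metric names here are illustrative, not the actual Nova code):

```python
# Illustrative sketch of an extensible host-metrics plugin framework;
# names are invented for this example, not actual Nova APIs.

class MetricPlugin(object):
    """Each plugin reports one set of utilization metrics for a host."""

    def collect(self, host_state):
        raise NotImplementedError()


class CpuUtilizationPlugin(MetricPlugin):
    def collect(self, host_state):
        # A real plugin would read /proc or libvirt stats here.
        return {"cpu.util": host_state.get("cpu_busy", 0.0)}


def gather_metrics(plugins, host_state):
    """Merge the metrics reported by every registered plugin."""
    metrics = {}
    for plugin in plugins:
        metrics.update(plugin.collect(host_state))
    return metrics
```

The scheduler would then feed the merged dict into its host-weighing step.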
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 19 July 2013 12:04
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal metrics
collector for scheduling (was: New DB column or new DB table?)
On 07/19/2013 06:18
On 07/19/2013 09:43 AM, Sandy Walsh wrote:
On 07/18/2013 11:12 PM, Lu, Lianhao wrote:
Sean Dague wrote on 2013-07-18:
On 07/17/2013 10:54 PM, Lu, Lianhao wrote:
Hi fellows,
Currently we're implementing the BP
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
I wanted to raise another design failure that shows why creating the port on
nova-compute is bad. Previously, we have encountered this bug
(https://bugs.launchpad.net/neutron/+bug/1160442). What was causing the
issue was that when nova-compute calls into quantum to create the port;
quantum creates
On 07/19/2013 08:30 AM, Andrew Laski wrote:
On 07/19/13 at 12:08pm, Murray, Paul (HP Cloud Services) wrote:
Hi Sean,
Do you think the existing static allocators should be migrated to
go through ceilometer - or do you see that as different? Ignoring
backward compatibility.
It makes sense
I'm not sure that I agree with this direction. In our investigation, KMIP is a
problematic protocol for several reasons:
* We haven't found an implementation of KMIP for Python. (Let us know if
there is one!)
* Support for KMIP by HSM vendors is limited.
* We haven't found software
There's nothing I've seen so far that causes me alarm, but then
again we're in the very early stages and haven't moved anything
really complex.
The migrations (live, cold, and resize) are moving there now. These are
some of the more complex stateful operations I would expect conductor
to
If we agree that something like capabilities should go through Nova, what do
you suggest should be done with the change that sparked this debate:
https://review.openstack.org/#/c/35760/
I would be happy to use it or a modified version.
Paul.
-Original Message-
From: Sean Dague
On Fri, Jul 19, 2013 at 11:06 AM, Dan Smith d...@danplanet.com wrote:
FWIW, I don't think anyone is suggesting a single conductor, and
especially not a single database proxy.
This is a critical detail that I missed. Re-reading Phil's original email,
I see you're debating the ratio of
On Jul 18, 2013, at 5:16 PM, Aaron Rosen aro...@nicira.com wrote:
Hi,
I wanted to raise another design failure that shows why creating the port on
nova-compute is bad. Previously, we have encountered this bug
(https://bugs.launchpad.net/neutron/+bug/1160442). What was causing the issue
was
Hi everyone,
The #openstack-infra channel has been increasing in traffic and
attention these past several months (hooray!). It finally became clear
to us that discussions happening there were often of interest to the
wider project and that we should start logging the channel.
Today we added the
On Jul 19, 2013 9:57 AM, Day, Phil philip@hp.com wrote:
-Original Message-
From: Dan Smith [mailto:d...@danplanet.com]
Sent: 19 July 2013 15:15
To: OpenStack Development Mailing List
Cc: Day, Phil
Subject: Re: [openstack-dev] Moving task flow to conductor - concern
about
Adding the original people conversing on this subject to this mail.
Regards,
-Sam.
On Jul 19, 2013, at 11:57 AM, Samuel Bercovici
samu...@radware.commailto:samu...@radware.com wrote:
Hi,
I have completely missed this discussion as it does not have quantum/Neutron in
the subject
On Tue, 2013-07-16 at 22:11 +0100, Mark McLoughlin wrote:
Hi
We're having an IRC meeting on Friday to sync up again on the messaging
work going on:
https://wiki.openstack.org/wiki/Meetings/Oslo
https://etherpad.openstack.org/HavanaOsloMessaging
Feel free to add other topics to the
On Thu, Jul 18, 2013 at 07:05:10AM -0400, Sean Dague wrote:
On 07/17/2013 10:54 PM, Lu, Lianhao wrote:
Hi fellows,
Currently we're implementing the BP
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
The main idea is to have an extensible plugin framework on
On Fri, Jul 19, 2013 at 10:15 AM, Dan Smith d...@danplanet.com wrote:
So rather than asking what doesn't work / might not work in the
future I think the question should be aside from them both being
things that could be described as a conductor - what's the
architectural reason for
Would it be an idea to make host evacuation use the scheduler to pick where
the VMs are supposed to go, to address the note at
http://sebastien-han.fr/blog/2013/07/19/openstack-instance-evacuation-goes-to-host/?
___
OpenStack-dev mailing list
On 07/19/2013 12:30 PM, Sean Dague wrote:
On 07/19/2013 10:37 AM, Murray, Paul (HP Cloud Services) wrote:
If we agree that something like capabilities should go through Nova,
what do you suggest should be done with the change that sparked this
debate: https://review.openstack.org/#/c/35760/
I think Soren suggested this way back in Cactus, using the MQ for compute node
state rather than the database, and it was a good idea then.
On Jul 19, 2013, at 10:52 AM, Boris Pavlovic bo...@pavlovic.me wrote:
Hi all,
In Mirantis Alexey Ovtchinnikov and me are working on nova scheduler
Boris
I think you have in fact covered two topics; one is whether to use the DB or
RPC for communication. This has been discussed a lot, but I didn't find a
conclusion. From the discussion, it seems the key thing is the fan-out
messages. I'd suggest you bring this to the scheduler sub-meeting.
Hi all,
We have too many different branches about the scheduler (so I have to repeat
myself here as well).
I am against adding extra tables that would be joined to the compute_nodes
table on each scheduler request (or adding large text columns),
because it makes our non-scalable scheduler even less scalable.
Nova-conductor is the gateway to the database for nova-compute
processes. So permitting a single nova-conductor process would
effectively serialize all database queries during instance creation,
deletion, periodic instance refreshes, etc.
FWIW, I don't think anyone is suggesting a single
On 07/19/2013 05:01 PM, Boris Pavlovic wrote:
Sandy,
Hmm, I don't know that algorithm, but our approach doesn't have
exponential exchange.
I don't think that in a 10k-node cloud we will have problems with 150
RPC calls/sec. Even at 100k we will have only 1.5k RPC calls/sec.
More than
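For reference, the back-of-the-envelope arithmetic behind those figures (the roughly 66-second per-node update period is my inference from the quoted numbers, not something stated in the thread):

```python
# Back-of-the-envelope check of the quoted RPC rates. The ~66 s update
# period is inferred from the figures in the thread, not stated there.
def rpc_calls_per_sec(num_nodes, update_period_sec):
    return num_nodes / float(update_period_sec)

print(rpc_calls_per_sec(10000, 66))   # roughly 150 calls/sec
print(rpc_calls_per_sec(100000, 66))  # roughly 1.5k calls/sec
```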
On 19 July 2013 22:55, Day, Phil philip@hp.com wrote:
Hi Josh,
My idea's really pretty simple - make DB proxy and Task workflow separate
services, and allow people to co-locate them if they want to.
+1, for all the reasons discussed in this thread. I was weirded out
when I saw
I've pushed out a review [1] to enable support for setting additional ML2
options when running with devstack. Since H2, ML2 has now added support for
both GRE and VXLAN tunnels, and the patch below allows for this configuration
when running with devstack. Feedback from folks on the Neutron ML2
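For reference, the kind of devstack settings such a patch enables looks roughly like the localrc fragment below; the Q_ML2_* variable names are my assumption based on that era's devstack ML2 support, not taken from the patch itself.

```ini
# Illustrative localrc fragment: devstack with the ML2 plugin and VXLAN
# tunnels. Variable names are an assumption, not quoted from the review.
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ENABLE_TENANT_TUNNELS=True
```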
Sandy,
I don't think that we have such problems here,
because the scheduler doesn't poll compute_nodes.
The situation is the opposite: compute_nodes notify the scheduler about their
state (instead of updating their state in the DB).
So, for example, if the scheduler sends a request to a compute_node, the
compute_node is able to
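A minimal sketch of the push model being described, where the scheduler keeps an in-memory view of host state that compute nodes update via RPC notifications instead of the scheduler reading the database (class and method names invented for illustration):

```python
# Sketch of a notification-driven host-state cache; not actual Nova code.

class SchedulerStateCache(object):
    def __init__(self):
        self._hosts = {}

    def host_state_updated(self, host, state):
        # Invoked via an RPC cast/fanout when a compute node reports in.
        self._hosts[host] = dict(state)

    def get_host_state(self, host):
        return self._hosts.get(host)
```

The trade-off discussed in the thread is exactly this: the cache avoids DB round-trips on every scheduling request, at the cost of the scheduler holding state that must be rebuilt on restart.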
Sandy,
Hmm, I don't know that algorithm, but our approach doesn't have exponential
exchange.
I don't think that in a 10k-node cloud we will have problems with 150 RPC
calls/sec. Even at 100k we will have only 1.5k RPC calls/sec.
More than (compute nodes update their state in the DB through the conductor
Hi Sylvain,
Sorry for the slow reply, I'll have to look closer next week, but I did have
some comments.
1. This isn't something a tenant should be able to do, so should be admin-only,
correct?
2. I think it would be useful for an admin to be able to add metering rules for
all tenants with a
Along these lines, OpenStack is now maintaining sqlalchemy-migrate, and we
have our first patch up for better SQLite support, taken from nova:
https://review.openstack.org/#/c/37656/
Do we want to go in the direction of explicitly not supporting SQLite and not
running migrations with it, like
On Fri, Jul 19, 2013 at 8:47 AM, Kyle Mestery (kmestery) kmest...@cisco.com
wrote:
On Jul 18, 2013, at 5:16 PM, Aaron Rosen aro...@nicira.com wrote:
Hi,
I wanted to raise another design failure that shows why creating the port on
nova-compute is bad. Previously, we have encountered this bug (
On 07/19/2013 05:36 PM, Boris Pavlovic wrote:
Sandy,
I don't think that we have such problems here,
because the scheduler doesn't poll compute_nodes.
The situation is the opposite: compute_nodes notify the scheduler about
their state (instead of updating their state in the DB).
So for example if
[arosen] - sure, in this case though then we'll have to add even more
queries between nova-compute and quantum as nova-compute will need to query
quantum for ports matching the device_id to see if the port was already
created and if not try to create them.
The cleanup job doesn't look like a
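The extra round-trips described above amount to a get-or-create pattern on nova-compute's side; a hedged sketch (the client here is a stand-in, not the real quantum client API):

```python
# Hypothetical get-or-create: query for an existing port matching the
# device_id before creating one, so retries don't leave duplicates.
# `client` is an illustrative stand-in, not the actual quantum client.

def get_or_create_port(client, device_id, network_id):
    existing = client.list_ports(device_id=device_id,
                                 network_id=network_id)
    if existing:
        return existing[0]
    return client.create_port(device_id=device_id,
                              network_id=network_id)
```

This is what makes the nova-compute-creates-the-port design costly: every boot (and every retry) pays an extra query just to stay idempotent.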
On Fri, Jul 19, 2013 at 3:37 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
[arosen] - sure, in this case though then we'll have to add even more
queries between nova-compute and quantum as nova-compute will need to
query
quantum for ports matching the device_id to see if the port was already
In the network installation guide(
http://docs.openstack.org/grizzly/openstack-network/admin/content/install_ubuntu.html
) there is a sentence “quantum-lbaas-agent, etc (see below for more
information about individual services agents).” in the plugin installation
section. However, lbaas is
Hi,
I have run into a problem with pep8 for
https://review.openstack.org/#/c/37539/. The issue is that I have run the
script in the subject and pep8 fails.
Any ideas?
Thanks
Gary
Thanks for bringing it to the attention of the list -- I've logged this doc
bug. https://bugs.launchpad.net/openstack-manuals/+bug/1203230 Hopefully a
Neutron team member can pick it up and investigate.
Anne
On Fri, Jul 19, 2013 at 7:35 PM, Qing He qing...@radisys.com wrote:
In the network
On Jul 19, 2013, at 6:01 PM, Aaron Rosen aro...@nicira.com wrote:
On Fri, Jul 19, 2013 at 3:37 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
[arosen] - sure, in this case though then we'll have to add even more
queries between nova-compute and quantum as nova-compute will need to query
Looks like it's complaining because you changed nova.conf.sample. Based
on the readme:
https://github.com/openstack/nova/tree/master/tools/conf
Did you run ./tools/conf/analyze_opts.py? I'm assuming you need to
run the tools and if there are issues you have to resolve them before
By the way, I'm wondering if lbaas has a separate doc somewhere else?
From: Anne Gentle [mailto:a...@openstack.org]
Sent: Friday, July 19, 2013 6:33 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] lbaas installation guide
Thanks for bringing it to the attention of
Thanks Anne!
From: Anne Gentle [mailto:a...@openstack.org]
Sent: Friday, July 19, 2013 6:33 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] lbaas installation guide
Thanks for bringing it to the attention of the list -- I've logged this doc
bug.
On 2013-07-20 10:02:34 +0800 (+0800), Gareth wrote:
[...]
xattr can't be installed correctly.
[...]
BTW, Swift and Glance are affected.
One of xattr's dependencies was cached in a broken state, but should
be resolved as of the past couple hours.
--
Jeremy Stanley