On 19 November 2014 11:58, Jay Pipes jaypi...@gmail.com wrote:
Some code paths that used locking in the past were rewritten to retry
the operation if they detect that an object was modified concurrently.
The problem here is that all DB operations (CRUD) are performed in the
scope of some
Sorry I'm a bit late to this, but that's what you get from being on
holiday... (Which is also why there are no new MTU and VLAN specs yet, but
I swear I'll get to them.)
On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com wrote:
Hi
On Fri, Nov 14, 2014 at 6:26 PM, Armando M.
On 18 November 2014 15:33, Mark McClain m...@mcclain.xyz wrote:
On Nov 18, 2014, at 5:45 PM, Doug Hellmann d...@doughellmann.com
wrote:
There would not be a service or REST API associated with the Advanced
Services code base? Would the REST API to talk to those services be part of
the
On 12 November 2014 11:11, Steve Gordon sgor...@redhat.com wrote:
NUMA
We still need to identify some hardware to run third party CI for the
NUMA-related work, and no doubt other things that will come up. It's
expected that this will be an interim solution until OPNFV resources can be
Maruti's talk is, in fact, so interesting that we should probably get
together and talk about this earlier in the week. I very much want to see
virtual-physical programmatic bridging, and I know Kevin Benton is also
interested. Arguably the MPLS VPN stuff also is similar in scope. Can I
propose
On 31 October 2014 06:29, Erik Moe erik@ericsson.com wrote:
I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk
networks and L2GW were different use cases.
Still I get the feeling that the proposals are put up against each other.
I think we agreed they were
Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
blueprints
Hi,
Please find the reply for the same.
Regards,
keshava
*From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
*Sent:* Tuesday, October 28
To address a point or two that Armando has raised here that weren't covered
in my other mail:
On 28 October 2014 11:00, Armando M. arma...@gmail.com wrote:
- Core Neutron changes: what needs to happen to the core of Neutron, if
anything, so that we can implement these NFV-enabling constructs
Path MTU discovery works on a path - something with an L3 router in the way
- where the outbound interface has a smaller MTU than the inbound one.
You're transmitting across an L2 network - no L3 routers present. You send
a 1500 byte packet, the network fabric (which is not L3, has no address,
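A toy model (added for illustration; not OpenStack code) of the point above: only an L3 router on the path can send the ICMP "fragmentation needed" that PMTUD depends on, while an addressless L2 fabric just drops the oversized frame silently.

```python
# Hops along a path: (mtu, is_router). An L3 router can report an MTU
# problem back to the sender; a pure L2 device cannot, so the frame is
# silently lost and the sender never learns to shrink its packets.
def send(packet_size, hops):
    for mtu, is_router in hops:
        if packet_size > mtu:
            if is_router:
                return ("icmp_frag_needed", mtu)  # sender can adapt
            return ("silent_drop", None)          # sender learns nothing
    return ("delivered", None)

# L3 path: the router with the smaller outbound MTU reports back.
print(send(1500, [(1500, True), (1400, True)]))    # ('icmp_frag_needed', 1400)
# Pure L2 path with a smaller-MTU fabric: the frame just vanishes.
print(send(1500, [(1500, False), (1400, False)]))  # ('silent_drop', None)
```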
Nested tagged packet on
such interfaces ?
For trunking ports, I don't believe anyone was considering it.
Thanks Regards,
Keshava
*From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
*Sent:* Monday, October 27, 2014 9:45 PM
*To:* OpenStack Development Mailing List (not for usage
creation. It's only an initial
configuration thing, therefore. It might involve better cloud-init support
for network configuration, something that gets discussed periodically.
--
Ian.
…
Thanks regards,
Keshava
*From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
*Sent
On 25 October 2014 15:36, Erik Moe erik@ericsson.com wrote:
Then I tried to just use the trunk network as a plain pipe to the
L2-gateway and connect to normal Neutron networks. One issue is that the
L2-gateway will bridge the networks, but the services in the network you
bridge to is
There are two categories of problems:
1. some networks don't pass VLAN tagged traffic, and it's impossible to
detect this from the API
2. it's not possible to pass traffic from multiple networks to one port on
one machine as (e.g.) VLAN tagged traffic
(1) is addressed by the VLAN trunking
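Problem (2) above - multiplexing several networks onto one port as VLAN-tagged traffic - amounts to inserting an 802.1Q tag into each frame. A minimal sketch (values illustrative, not from any Neutron driver) also shows why the 4 extra bytes feed straight back into the MTU discussion:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def add_vlan_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the destination/source MACs."""
    assert 0 < vlan_id < 4095
    tci = (pcp << 13) | vlan_id            # priority + VLAN ID
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

frame = bytes(12) + b"\x08\x00" + b"payload"  # dst+src MACs, EtherType, data
tagged = add_vlan_tag(frame, vlan_id=101)
print(len(tagged) - len(frame))  # 4 -- the tag grows every frame by 4 bytes
```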
Aaron: untrue. It does, but OVS doesn't, and so networks implemented with
the OVS driver will drop packets. Use Linuxbridge instead.
--
Ian.
On 19 September 2014 22:27, Aaron Rosen aaronoro...@gmail.com wrote:
Neutron doesn't allow you to send tagged traffic from the guest today
On 25 August 2014 10:34, Aryeh Friedman aryeh.fried...@gmail.com wrote:
Do you call Marten Mickos having no clue... he is the one that leveled the
second-worst criticism after mine... or is Eucalyptus not one of the founding
members of OpenStack (after all many of the glance commands still use
On 13 August 2014 06:01, Kyle Mestery mest...@mestery.com wrote:
On Wed, Aug 13, 2014 at 5:15 AM, Daniel P. Berrange berra...@redhat.com
wrote:
This idea of fixed slots is not really very appealing to me. It sounds
like we're adding a significant amount of bureaucratic overhead to our
On 4 August 2014 07:03, Elena Ezhova eezh...@mirantis.com wrote:
Hi!
I feel I need a piece of advice regarding this bug [1].
The gist of the problem is that although there is an option
network_device_mtu that can be specified in neutron.conf, VMs are not
getting that MTU on their
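One way the bug above is commonly addressed is to push the MTU to guests via DHCP option 26 ("interface MTU"), which dnsmasq accepts as `option:mtu`. The helper below is an illustrative sketch of building that flag, not Neutron's actual implementation:

```python
# Hedged sketch: render a dnsmasq argument that forces DHCP option 26
# (interface MTU) onto clients. RFC 2132 sets 68 as the minimum legal MTU.
def dnsmasq_mtu_opt(mtu: int) -> str:
    if not 68 <= mtu <= 65535:
        raise ValueError("invalid MTU")
    return f"--dhcp-option-force=option:mtu,{mtu}"

print(dnsmasq_mtu_opt(1454))  # --dhcp-option-force=option:mtu,1454
```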
Speaking as someone who was reviewing both specs, I would personally
recommend you grant both exceptions. The code changes are very limited in
scope - particularly the Nova one - which makes the code review simple, and
they're highly unlikely to affect anyone who isn't actually using DPDK OVS
On 23 July 2014 10:52, Dan Smith d...@danplanet.com wrote:
What is our story for people who are developing new network or
storage drivers for Neutron / Cinder and wish to test Nova ? Removing
vif_driver and volume_drivers config parameters would mean that they
would have to directly
Funnily enough, when I first reported this bug I was actually trying to run
OpenStack in VMs on OpenStack. This works better now (not well; just
better) in that there are L3 networking options, but the basic L2-VLAN
networking option has never worked (fascinating that we can't eat our own
dogfood on
On 14 July 2014 12:57, Collins, Sean sean_colli...@cable.comcast.com
wrote:
The URL structure and schema of the data within may be 100% OpenStack-invented,
but the idea of having a link local address that takes HTTP requests and
returns metadata was (to my knowledge) an Amazon EC2 idea from the
I hadn't realised until today that the BP window for new Nova specs in Juno
had closed, and I'm hoping I might get an exception for a minor, but I
think helpful, change in the way VIF plugging works.
At the moment, Neutron's plugin returns a binding_type to Nova when
plugging takes place,
On 10 July 2014 08:19, Czesnowicz, Przemyslaw
przemyslaw.czesnow...@intel.com wrote:
Hi,
Thanks for Your answers.
Yep, using binding:vif_details makes more sense. We would like to reuse
VIF_TYPE_OVS and modify Nova to use the userspace vhost when the ‘use_dpdk’
flag is present.
I
On 7 July 2014 10:43, Andrew Mann and...@divvycloud.com wrote:
What's the use case for an IPv6 endpoint? This service is just for
instance metadata, so as long as a requirement to support IPv4 is in place,
using solely an IPv4 endpoint avoids a number of complexities:
- Which one to try
On 7 July 2014 11:37, Sean Dague s...@dague.net wrote:
When it's on a router, it's simpler: use the nexthop, get that metadata
server.
Right, but that assumes router control.
It does, but then that's the current status quo - these things go on
Neutron routers (and, by extension, are
On 7 July 2014 12:29, Scott Moser smo...@ubuntu.com wrote:
I'd honestly love to see us just deprecate the metadata server.
If I had to deprecate one or the other, I'd deprecate config drive. I do
realize that its simplicity is favorable, but not if it is insufficient.
The question of
1. [NFV] is a tag.
2. This would appear to be a set of 'review me' mails to the mailing list,
which I believe is frowned upon.
3. garyk's stuff is only questionably [NFV], I would argue, though all
worthwhile patches. (That's a completely subjective judgement, so take it
as you will.)
Might be a
On 21 June 2014 17:17, A, Keshava keshav...@hp.com wrote:
Hi Thomas,
This is interesting.
I have some basic questions about the deployment model of using this
BaGPipe BGP in a virtual cloud network.
1. We want MPLS to start right from the compute node as part of tenant traffic?
2. We want L3
I'm only part way through reviewing this, but I think there's a fundamental
error in it. We were at one point going to use 'enable_dhcp' in the
current set of flags to indicate something meaningful, but eventually we
decided that its current behaviour (despite the naming) really meant 'no
address
I've tested exabgp against a v6 peer, and it's an independent feature, so I
added that as a row separately from whether v6 advertisements work. Might
be worth making the page general and adding in the vpn feature set too.
On 30 May 2014 16:50, Nachi Ueno na...@ntti3.com wrote:
Hi folks
I think the Service VM discussion resolved itself in a way that reduces the
problem to a form of NFV - there are standing issues using VMs for
services, orchestration is probably not a responsibility that lies in
Neutron, and as such the importance is in identifying the problems with the
plumbing
I would go with 'define the use cases and identify and prioritise the
requirements', personally, but that's a nit. We seem to have absolved our
members from actually providing the implementation, which is a bit cheeky...
--
Ian.
On 19 May 2014 10:19, Nicolas Barcet nico...@barcet.com wrote:
There's a sheet down there. There's actually something on at the Neutron
pod at that time, but you might as well meet up then and see who's
interested (I certainly am).
On 15 May 2014 09:31, Vinay Yadhav vinayyad...@gmail.com wrote:
Hi,
Cool, I propose today at 2:20 PM near the neutron pod.
I was just about to respond to that in the session when we ran out of
time. I would vote for simply insisting that VMs run without the privacy
extension enabled, and only permitting the expected ipv6 address based on
MAC. Its primary purpose is to conceal your MAC address so that your IP
address
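The "expected IPv6 address based on MAC" referenced above is the EUI-64-derived SLAAC address: flip the universal/local bit of the MAC and splice in ff:fe. Privacy extensions (RFC 4941) replace this with random interface IDs, which is exactly what breaks MAC-based filtering. A small sketch (added for illustration):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the EUI-64 SLAAC address for a MAC within a /64 prefix."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(eui64, "big")]

# A Neutron-style fa:16:3e MAC, with an example documentation prefix:
print(slaac_address("2001:db8::/64", "fa:16:3e:00:00:01"))
# 2001:db8::f816:3eff:fe00:1
```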
On 8 April 2014 10:35, Zane Bitter zbit...@redhat.com wrote:
To attach a port to a network and give it an IP from a specific subnet
on that network, you would use the --fixed-ip subnet_id option.
Otherwise, the create port request will use the first subnet it finds
attached to that
On 3 April 2014 08:21, Khanh-Toan Tran khanh-toan.t...@cloudwatt.com wrote:
Otherwise we cannot provide redundancy to clients except by using Region,
which is dedicated, network-separated infrastructure, and the anti-affinity
filter, which IMO is not pragmatic as it has a tendency toward abusive usage.
My proposals:
On 29 January 2014 16:43, Robert Li (baoli) ba...@cisco.com wrote:
1. pci-flavor-attrs is configured through configuration files and will be
available on both the controller node and the compute nodes. Can the cloud
admin decide to add a new attribute in a running cloud? If
On 29 January 2014 23:50, Robert Kukura rkuk...@redhat.com wrote:
On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
Hi Bob,
that's a good find. profileid as part of IEEE 802.1br needs to be in
binding:profile, and can be specified by a normal user, and later
possibly
the pci_flavor.
On 27 January 2014 15:58, Robert Li (baoli) ba...@cisco.com wrote:
Hi Folks,
In today's meeting, we discussed a scheduler issue for SRIOV. The basic
requirement is for coexistence of the following compute nodes in a cloud:
-- SRIOV only compute nodes
-- non-SRIOV only compute
Live migration for the first release is intended to be covered by macvtap,
in my mind - direct-mapped devices have limited support in hypervisors,
AIUI. It seemed we had a working theory for that, which we can test out and
see if it's going to work.
--
Ian.
On 27 January 2014 21:38, Robert Li
On 22 January 2014 00:00, Robert Collins robe...@robertcollins.net wrote:
I think dropping frames that can't be forwarded is entirely sane - at
a guess it's what a physical ethernet switch would do if you try to
send a 1600 byte frame (on a non-jumbo-frame switched network) - but
perhaps
On 22 January 2014 12:01, Robert Collins robe...@robertcollins.net wrote:
Getting the MTU *right* on all hosts seems to be key to keeping your hair
attached to your head for a little longer. Hence the DHCP suggestion to
set
it to the right value.
I certainly think having the MTU set to
On 21 January 2014 22:46, Veiga, Anthony anthony_ve...@cable.comcast.com wrote:
Hi, Sean and Xuhan:
I totally agree. This is not the ultimate solution with the assumption
that we had to use “enable_dhcp”.
We haven’t decided the name of another parameter, however, we are open
to any
for both PCI and
non-PCI cases.
--
Ian.
On 20 January 2014 13:38, Ian Wells ijw.ubu...@cack.org.uk wrote:
On 20 January 2014 09:28, Irena Berezovsky ire...@mellanox.com wrote:
Hi,
Having post PCI meeting discussion with Ian based on his proposal
https://docs.google.com/document/d
Paul,
There's an extension for this that is, I think, presently only implemented
by the Nicira plugin. Look for portsecurity. Whatever they do is probably
the way you should do it too.
Cheers,
--
Ian.
On 21 January 2014 13:10, CARVER, PAUL pc2...@att.com wrote:
Feel free to tell me this
/30 is the minimum allowable mask, not /31.
On 21 January 2014 22:04, Edgar Magana emag...@plumgrid.com wrote:
Wouldn't it be easier just to check whether the cidr is /32?
I believe it is a good idea to not allow /32 network but this is just my
opinion
Edgar
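The validation being debated here reduces to simple arithmetic: in classic IPv4 subnetting the network and broadcast addresses consume two slots, so /31 and /32 leave no usable host addresses and /30 is the smallest useful subnet. A sketch (illustrative only, not the actual patch):

```python
import ipaddress

def usable_hosts(cidr: str) -> int:
    """Usable host addresses after network + broadcast are set aside."""
    net = ipaddress.IPv4Network(cidr)
    return max(net.num_addresses - 2, 0)

print(usable_hosts("10.0.0.0/30"))  # 2 -- minimum useful subnet
print(usable_hosts("10.0.0.0/31"))  # 0
print(usable_hosts("10.0.0.0/32"))  # 0
```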
From: Paul Ward wpw...@us.ibm.com
On 20 January 2014 09:28, Irena Berezovsky ire...@mellanox.com wrote:
Hi,
Having post PCI meeting discussion with Ian based on his proposal
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#
,
I am not sure that this case is quite usable for SR-IOV
On 20 January 2014 10:13, Mathieu Rohon mathieu.ro...@gmail.com wrote:
With such an architecture, we wouldn't have to tell Neutron about
vif_security or vif_type when it creates a port. When Neutron gets
called with port_create, it should only return the tap created.
Not entirely true. Not
On 17 January 2014 01:16, Chris Friesen chris.frie...@windriver.com wrote:
On 01/16/2014 05:12 PM, CARVER, PAUL wrote:
Jumping back to an earlier part of the discussion, it occurs to me
that this has broader implications. There's some discussion going on
under the heading of Neutron with
On 17 January 2014 09:12, Robert Collins robe...@robertcollins.net wrote:
The physical function is the one with the real PCI config space, so as
long as the host controls it then there should be minimal risk from the
guests since they have limited access via the virtual
On 16 January 2014 10:51, Robert Collins robe...@robertcollins.net wrote:
1. assigned to the interface attached to default gateway
Which you may not have, or may be on the wrong interface (if I'm setting up
a control node I usually have the default gateway on the interface with the
API
On 16 January 2014 09:07, yongli he yongli...@intel.com wrote:
On 16 January 2014 08:28, Ian Wells wrote:
This is based on Robert's current code for macvtap based live migration.
The issue is that if you wish to migrate a VM and it's tied to a physical
interface, you can't guarantee
Logically, the port is really an alias for the external port of the router,
rather than being just detached. I'm not sure this adds much to the
discussion, but clearly that's where its traffic goes and is terminated.
From past experience (don't ask) weird things happen if you start creating
your
To clarify a couple of Robert's points, since we had a conversation earlier:
On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
--- do we agree that BDF address (or device id, whatever you call it),
and node id shouldn't be used as attributes in defining a PCI flavor?
Note
like a workable approach?
Who is willing to implement any of (1), (2) and (3)?
Cheers,
John
On 9 January 2014 17:47, Ian Wells ijw.ubu...@cack.org.uk wrote:
I think I'm in agreement with all of this. Nice summary, Robert.
It may not be where the work ends, but if we could get
earlier help with
working this out :)). The idea is to put the SR-IOV hardware to work
behind-the-scenes of a normal software switch.
We will definitely check out the Passthrough when it's ready and see
if we should also support that somehow.
On 11 January 2014 01:04, Ian Wells ijw.ubu
I would say that since v4 dhcp_mode is core, the DHCPv6/RA setting should
similarly be core.
To fill others in, we've had discussions on the rest of the patch and
Shixiong is working on it now, the current plan is:
New subnet attribute ipv6_address_auto_config (not catchy, but because of
the way
,
Robert
On 1/10/14 9:34 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
OK - so if this is good then I think the question is how we could change
the 'pci_whitelist' parameter we have - which, as you say, should either
*only* do whitelisting or be renamed - to allow us to add information
*From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
*Sent:* Monday, January 13, 2014 10:57 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through network
support
It's worth noting that this makes the scheduling
On 10 January 2014 07:40, Jiang, Yunhong yunhong.ji...@intel.com wrote:
Robert, sorry that I'm not a fan of the 'your group' term. To me, 'your
group' mixes two things. It's an extra property provided by configuration,
and also it's a very-not-flexible mechanism to select devices (you can only
.
--
Ian.
On 10 January 2014 13:08, Ian Wells ijw.ubu...@cack.org.uk wrote:
On 10 January 2014 07:40, Jiang, Yunhong yunhong.ji...@intel.com wrote:
Robert, sorry that I'm not a fan of the 'your group' term. To me, 'your
group' mixes two things. It's an extra property provided by configuration
On 10 January 2014 15:30, John Garbutt j...@johngarbutt.com wrote:
We seemed happy with the current system (roughly) around GPU passthrough:
nova flavor-key three_GPU_attached_30GB set
pci_passthrough:alias=large_GPU:1,small_GPU:2
nova boot --image some_image --flavor three_GPU_attached_30GB
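The flavor key above amounts to a request list ("alias:count") matched against devices discovered on compute hosts by vendor_id/product_id. The sketch below is an illustrative reduction of that matching, with made-up alias names and device data, not Nova's internal API:

```python
# Hypothetical alias specs, in the spirit of pci_passthrough:alias.
ALIASES = {
    "large_GPU": {"vendor_id": "10de", "product_id": "2204"},
    "small_GPU": {"vendor_id": "10de", "product_id": "1c82"},
}

def matches(spec: dict, device: dict) -> bool:
    return all(device.get(k) == v for k, v in spec.items())

def allocate(request: str, pool: list) -> list:
    """request is 'large_GPU:1,small_GPU:2', as in the flavor key above."""
    chosen = []
    for part in request.split(","):
        name, count = part.split(":")
        found = [d for d in pool
                 if matches(ALIASES[name], d) and d not in chosen]
        if len(found) < int(count):
            raise RuntimeError(f"not enough {name} devices")
        chosen += found[:int(count)]
    return chosen

devices = [
    {"vendor_id": "10de", "product_id": "1c82", "address": "0000:05:00.0"},
    {"vendor_id": "10de", "product_id": "1c82", "address": "0000:06:00.0"},
    {"vendor_id": "10de", "product_id": "2204", "address": "0000:07:00.0"},
]
print(len(allocate("large_GPU:1,small_GPU:2", devices)))  # 3
```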
Hey Yunhong,
The thing about 'group' and 'flavor' and 'whitelist' is that they once
meant distinct things (and I think we've been trying to reduce them back
from three things to two or one):
- group: equivalent devices at a host level - use any one, no-one will
care, because they're either
Hey Luke,
If you look at the passthrough proposals, the overview is that part of the
passthrough work is to ensure there's an PCI function available to allocate
to the VM, and part is to pass that function on to the Neutron plugin via
conventional means. There's nothing that actually mandates
I think I'm in agreement with all of this. Nice summary, Robert.
It may not be where the work ends, but if we could get this done the rest
is just refinement.
On 9 January 2014 17:49, Robert Li (baoli) ba...@cisco.com wrote:
Hi Folks,
With John joining the IRC, so far, we had a
On 9 January 2014 20:19, Brian Schott brian.sch...@nimbisservices.com wrote:
Ian,
The idea of pci flavors is a great and using vendor_id and product_id make
sense, but I could see a case for adding the class name such as 'VGA
compatible controller'. Otherwise, slightly different generations
On 9 January 2014 22:50, Ian Wells ijw.ubu...@cack.org.uk wrote:
On 9 January 2014 20:19, Brian Schott brian.sch...@nimbisservices.com wrote:
On the flip side, vendor_id and product_id might not be sufficient.
Suppose I have two identical NICs, one for nova internal use and the
second
See Sean Collins' review https://review.openstack.org/#/c/56381 which
disables hairpinning when Neutron is in use. tl;dr - please upvote the
review. Long form reasoning follows...
There's a solid logical reason for enabling hairpinning, but it only
applies to nova-network. Hairpinning is used
On autodiscovery and configuration, we agree that each compute node finds
out what it has based on some sort of list of match expressions; we just
disagree on where they should live.
I know we've talked APIs for setting that matching expression, but I would
prefer that compute nodes are
Randy has it spot on. The antispoofing rules prevent you from doing this
in Neutron. Clearly a router transmits traffic that isn't from it, and
receives traffic that isn't addressed to it - and the port filtering
discards them.
You can disable them for the entire cloud by judiciously tweaking
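What the antispoofing rules enforce can be reduced to a membership check: a port may only emit traffic whose source MAC/IP match the port's own bindings (or, later, its allowed-address-pairs) - a constraint a VM acting as a router cannot satisfy. An illustrative sketch, not Neutron's iptables implementation:

```python
# Port data here is made up for illustration.
def allowed(port: dict, src_mac: str, src_ip: str) -> bool:
    """True if the (MAC, IP) pair is one the port is bound to."""
    pairs = {(port["mac"], ip) for ip in port["ips"]}
    pairs |= set(port.get("allowed_address_pairs", []))
    return (src_mac, src_ip) in pairs

vm_port = {"mac": "fa:16:3e:aa:bb:cc", "ips": ["10.0.0.5"]}
print(allowed(vm_port, "fa:16:3e:aa:bb:cc", "10.0.0.5"))    # True
print(allowed(vm_port, "fa:16:3e:aa:bb:cc", "192.168.1.9"))  # False: forwarded traffic is dropped
```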
John:
At a high level:
Neutron:
* user wants to connect to a particular neutron network
* user wants a super-fast SRIOV connection
Administration:
* needs to map PCI device to what neutron network the connect to
The big question is:
* is this a specific SRIOV only (provider) network
*
On 19 December 2013 06:35, Isaku Yamahata isaku.yamah...@gmail.com wrote:
Hi Ian.
I can't see your proposal. Can you please make it public viewable?
Crap, sorry - fixed.
Even before I read the document I could list three use cases. Eric's
covered some of them himself.
I'm not
I'm easy.
On 20 December 2013 00:47, Randy Tuttle randy.m.tut...@gmail.com wrote:
Any of those times suit me.
Sent from my iPhone
On Dec 19, 2013, at 5:12 PM, Collins, Sean
sean_colli...@cable.comcast.com wrote:
Thoughts? I know we have people who are not able to attend at our
tomorrow during sub-team meeting, we can reach
consensus. If you can not make it, then please set up a separate meeting to
invite key placeholders so we have a chance to sort it out.
Shixiong
On Dec 18, 2013, at 8:25 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
On 18 December 2013 14:10, Shixiong
Per the discussions this evening, we did identify a reason why you might
need a dhcp namespace for v6 - because networks don't actually have to have
routers. It's clear you need an agent in the router namespace for RAs and
another one in the DHCP namespace for when the network's not connected to
On 19 December 2013 15:15, John Garbutt j...@johngarbutt.com wrote:
Note, I don't see the person who boots the server ever seeing the
pci-flavor, only understanding the server flavor.
[IrenaB] I am not sure that elaborating PCI device request into server
flavor is the right approach for
separate the service, now the system becomes
clumsy and less efficient. As you can see in the IPv6 case, we are forced to
deal with two namespaces now. It just doesn’t make any sense.
Shixiong
On Dec 19, 2013, at 7:27 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
Per the discussions this evening
Hey Shixiong,
This is intended as a replacement for [1], correct? Do you have code for
this already, or should we work with Sean's patch?
There's a discussion document at [2], which is intended to be more specific
behind the reasoning for the choices we make, and the interface offered to
the
I think we're all happy that if a project *does* have a broad support base
we're good; this is only the case for projects in one of two situations:
- support is spread so thinly that each company involved in the area has
elected to support a different solution
- the project is just not that
On 18 December 2013 14:10, Shixiong Shang sparkofwisdom.cl...@gmail.com wrote:
Hi, Ian:
I won’t say the intent here is to replace dnsmasq-mode-keyword BP.
Instead, I was trying to leverage and enhance those definitions so when
dnsmasq is launched, it knows which mode it should run in.
That
do without.
--
Ian.
On 17 December 2013 21:41, Collins, Sean sean_colli...@cable.comcast.com wrote:
On Tue, Dec 17, 2013 at 07:39:14PM +0100, Ian Wells wrote:
1. The patch ties Neutron's parameters specifically to dnsmasq. It would
be, I think, impossible to reimplement this for isc-dhcpd
On 17 December 2013 18:57, Shixiong Shang sparkofwisdom.cl...@gmail.com wrote:
Yes, the man page is a little bit confusing. The “slaac” mode requires
“--enable-ra” since it needs to manipulate the M/O/A/L bits in the RA. As a
matter of fact, all of the modes available for IPv6 rely on “--enable-ra”.
My
On 13 December 2013 16:13, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
2) The HTTP metadata service accessible from the guest with its magic
number is IMO quite far from an optimal solution. Since every hypervisor
commonly
used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi)
to populate this data from its cached network info. It
might also be nice to stick it in a known location on the metadata server
so the neutron proxy could potentially overwrite it with more current
network data if it wanted to.
Vish
On Dec 4, 2013, at 8:26 AM, Ian Wells
*ijw.ubu
On 12 December 2013 19:48, Clint Byrum cl...@fewbar.com wrote:
Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
On 12/10/2013 03:49 PM, Ian Wells wrote:
On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com
mailto:cl...@fewbar.com wrote:
I've read through this email
Can we go over the use cases for the multiple different address allocation
techniques, per my comment on the blueprint that suggests we expose
different dnsmasq modes?
And perhaps also what we're going to do with routers in terms of equivalent
behaviour for the current floating-ip versus
Are these NSX routers *functionally* different?
What we're talking about here is a router which, whether it's distributed
or not, behaves *exactly the same*. So as I say, maybe it's an SLA thing,
but 'distributed' isn't really user meaningful if the user can't actually
prove he's received a
On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
If it is just a network API, it works the same for everybody. This
makes it simpler, and thus easier to scale out independently of compute
hosts. It is also something we already support and can very easily expand
by just adding a
I would imagine that, from the Neutron perspective, you get a single router
whether or not it's distributed. I think that if a router is distributed -
regardless of whether it's tenant-tenant or tenant-outside - it certainly
*could* have some sort of SLA flag, but I don't think a simple
Next time, could you perhaps do it (a) with a bit more notice and (b) at a
slightly more amenable time for us Europeans?
On 4 December 2013 15:27, Richard Woo richardwoo2...@gmail.com wrote:
Shixiong,
Thank you for the updates, do you mind to share the slide to the openstack
mailing list?
How frequent do you imagine these notifications being? There's a wide
variation here between the 'blue moon' case where disk space is low and
frequent notifications of things like OS performance, which you might want
to display in Horizon or another monitoring tool on an every-few-seconds
basis,
We seem to have bound our config drive file formats to those used by the
operating system we're running, which doesn't seem like the right approach
to take.
Firstly, the above format doesn't actually work even for Debian-based
systems - if you have a network without ipv6, ipv6 ND will be enabled
On 20 November 2013 00:22, Robert Collins robe...@robertcollins.net wrote:
On 20 November 2013 13:00, Sean Dague s...@dague.net wrote:
So we recently moved devstack gate to do config drive instead of
metadata
service, and life was good (no one really noticed). In what ways is
configdrive
On 7 August 2013 11:53, John Garbutt j...@johngarbutt.com wrote:
I have thought about a (slightly messy) alternative:
* use ConfigDrive only for basic networking config on the first nic
(i.e. just enough to get to the metadata service)
* get all other data from the metadata service, once the
* periodic updates can overwhelm things. Solution: remove unneeded updates,
most scheduling data only changes when an instance does some state change.
It's not clear that periodic updates do overwhelm things, though.
Boris ran the tests. Apparently 10k nodes updating once a minute
extend the
A while back (just before the summit, as I recall), there was a patch
submitted to remove the constraints on being able to connect multiple
interfaces of the same VM to the same Neutron network. [1]
It was unclear at the time whether this is a bug being fixed or a
feature being added, which
I wanted to raise another design failure of why creating the port on
nova-compute is bad. Previously, we have encountered this bug
(https://bugs.launchpad.net/neutron/+bug/1160442). What was causing the
issue was that when nova-compute calls into quantum to create the port;
quantum creates
[arosen] - sure, in this case though then we'll have to add even more
queries between nova-compute and quantum as nova-compute will need to query
quantum for ports matching the device_id to see if the port was already
created and if not try to create them.
The cleanup job doesn't look like a
I'd still like the simpler and more general purpose 'disable spoofing'
option as well. That doesn't allow MAC spoofing and it doesn't work
for what I'm up to.
Read the document properly, Ian. I take back the MAC spoofing
comment, but it still won't work for what I'm up to ;)
On 18 July 2013 00:45, Aaron Rosen aro...@nicira.com wrote:
Hi Ian,
For shared networks if the network is set to port_security_enabled=True then
the tenant will not be able to remove port_security_enabled from their port
if they are not the owner of the network. I believe this is the correct