> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
>
> Hi
>
> Please find some additions to Ian and responses below.
>
> /Alan
>
>
>
> *From:* A, Keshava [mailto:keshav...@hp.com ]
> *Sent:* Oc
To address a point or two that Armando has raised here that weren't covered
in my other mail:
On 28 October 2014 11:00, Armando M. wrote:
> - Core Neutron changes: what needs to happen to the core of Neutron, if
> anything, so that we can implement these NFV-enabling constructs
> successfully? Ar
On 12 November 2014 11:11, Steve Gordon wrote:
NUMA
>
>
> We still need to identify some hardware to run third party CI for the
> NUMA-related work, and no doubt other things that will come up. It's
> expected that this will be an interim solution until OPNFV resources can be
> used (note cd
On 21 June 2014 17:17, A, Keshava wrote:
> Hi Thomas,
>
> This is interesting.
> I have some of the basic question about deployment model of using this
> BaGPipe BGP in virtual cloud network.
>
> 1. We want MPLS to start right from the compute node as part of tenant traffic?
> 2. We want L3 VRF separa
1. [NFV] is a tag.
2. This would appear to be a set of 'review me' mails to the mailing list,
which I believe is frowned upon.
3. garyk's stuff is only questionably [NFV], I would argue, though all
worthwhile patches. (That's a completely subjective judgement, so take it
as you will.)
Might be a b
On 7 July 2014 10:43, Andrew Mann wrote:
> What's the use case for an IPv6 endpoint? This service is just for
> instance metadata, so as long as a requirement to support IPv4 is in place,
> using solely an IPv4 endpoint avoids a number of complexities:
>
> - Which one to try first?
>
http://en.wik
On 7 July 2014 11:37, Sean Dague wrote:
> > When it's on a router, it's simpler: use the nexthop, get that metadata
> > server.
>
> Right, but that assumes router control.
>
It does, but then that's the current status quo - these things go on
Neutron routers (and, by extension, are generally not
On 7 July 2014 12:29, Scott Moser wrote:
>
> > I'd honestly love to see us just deprecate the metadata server.
>
> If I had to deprecate one or the other, I'd deprecate config drive. I do
> realize that its simplicity is favorable, but not if it is insufficient.
>
The question of deprecation is
I hadn't realised until today that the BP window for new Nova specs in Juno
had closed, and I'm hoping I might get an exception for a minor, but I
think helpful, change in the way VIF plugging works.
At the moment, Neutron's plugin returns a binding_type to Nova when
plugging takes place, describi
On 10 July 2014 08:19, Czesnowicz, Przemyslaw <
przemyslaw.czesnow...@intel.com> wrote:
> Hi,
>
>
>
> Thanks for Your answers.
>
>
>
> Yep using binding:vif_details makes more sense. We would like to reuse
> VIF_TYPE_OVS and modify the nova to use the userspace vhost when ‘use_dpdk’
> flag is pre
Funnily enough, when I first reported this bug I was actually trying to run
Openstack in VMs on Openstack. This works better now (not well; just
better) in that there's L3 networking options, but the basic L2-VLAN
networking option has never worked (fascinating we can't eat our own
dogfood on this
On 14 July 2014 12:57, Collins, Sean
wrote:
> The URL structure and schema of data within may be 100% openstack invented,
> but the idea of having a link local address that takes HTTP requests and
> returns metadata was (to my knowledge) an Amazon EC2 idea from the
> beginning.
>
'link local' -
Speaking as someone who was reviewing both specs, I would personally
recommend you grant both exceptions. The code changes are very limited in
scope - particularly the Nova one - which makes the code review simple, and
they're highly unlikely to affect anyone who isn't actually using DPDK OVS
(sub
On 23 July 2014 10:52, Dan Smith wrote:
> > What is our story for people who are developing new network or
> > storage drivers for Neutron / Cinder and wish to test Nova ? Removing
> > vif_driver and volume_drivers config parameters would mean that they
> > would have to directly modify the exist
On 4 August 2014 07:03, Elena Ezhova wrote:
> Hi!
>
> I feel I need a piece of advice regarding this bug [1].
>
> The gist of the problem is that although there is an option
> network_device_mtu that can be specified in neutron.conf VMs are not
> getting that mtu on their interfaces.
>
Correct.
On 4 July 2013 23:42, Robert Collins wrote:
> Seems like a tweak would be to identify virtual IPs as separate to the
> primary IP on a port:
> you don't need to permit spoofing of the actual host IP for each host in
> the HA cluster; you just need to permit spoofing of the virtual IP. This
> woul
On 10 July 2013 21:14, Vishvananda Ishaya wrote:
>> It used to be essential back when we had nova-network and all tenants
>> ended up on one network. It became less useful when tenants could
>> create their own networks and could use them as they saw fit.
>>
>> It's still got its uses - for insta
On 17 July 2013 00:11, Jay Pipes wrote:
> Absolutely, that is what our tools team is now having to do. All I'm saying
> is that this wasn't necessary in Folsom and wouldn't be necessary if the API
> didn't force networks to be created with a tenant ID.
What's wrong with a shared network? It's be
It's already possible to port-create with an IP address-and-subnet
specified, which seems like an effective way of allocating an address
and setting it aside for later. Doesn't this satisfy your needs?
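As a sketch of what that looks like with the CLI of the era (network, subnet and address names are illustrative placeholders, not taken from the thread):

```shell
# Reserve a specific address by creating the port up front; the port
# (and its fixed IP) can be attached to an instance later.
neutron port-create my-network \
    --fixed-ip subnet_id=SUBNET_ID,ip_address=10.0.0.42 \
    --name reserved-ip-port

# Later, boot an instance against the pre-created port so it comes up
# with the reserved address:
nova boot --image IMAGE --flavor FLAVOR \
    --nic port-id=PORT_ID reserved-ip-vm
```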
--
Ian.
On 16 July 2013 19:42, Mark McClain wrote:
> Have you considered altering the alloca
> I'd still like the simpler and more general purpose 'disable spoofing'
> option as well. That doesn't allow MAC spoofing and it doesn't work
> for what I'm up to.
Read the document properly, Ian. I take back the MAC spoofing
comment, but it still won't work for what I'm up to ;)
On 18 July 2013 00:45, Aaron Rosen wrote:
> Hi Ian,
>
> For shared networks if the network is set to port_security_enabled=True then
> the tenant will not be able to remove port_security_enabled from their port
> if they are not the owner of the network. I believe this is the correct
> behavior we
On 18 July 2013 19:48, Aaron Rosen wrote:
> Is there something this is missing that could be added to cover your use
> case? I'd be curious to hear where this doesn't work for your case. One
> would need to implement the port_security extension if they want to
> completely allow all ips/macs to p
> I wanted to raise another design failure of why creating the port on
> nova-compute is bad. Previously, we have encountered this bug
> (https://bugs.launchpad.net/neutron/+bug/1160442). What was causing the
> issue was that when nova-compute calls into quantum to create the port;
> quantum create
> [arosen] - sure, in this case though then we'll have to add even more
> queries between nova-compute and quantum as nova-compute will need to query
> quantum for ports matching the device_id to see if the port was already
> created and if not try to create them.
The cleanup job doesn't look like
Per the last summit, there are many interested parties waiting on PCI
support. Boris (who unfortunately wasn't there) jumped in with an
implementation before the rest of us could get a blueprint up, but I
suspect he's been stretched rather thinly and progress has been much
slower than I was hopin
A while back (just before the summit, as I recall), there was a patch
submitted to remove the constraints on being able to connect multiple
interfaces of the same VM to the same Neutron network. [1]
It was unclear at the time whether this is a bug being fixed or a
feature being added, which rather
On 22 July 2013 21:08, Boris Pavlovic wrote:
> Ian,
>
> I don't like to write anything personally.
> But I have to write some facts:
>
> 1) I see tons of hands and only 2 solutions my and one more that is based on
> code.
> 2) My code was published before session (18. Apr 2013)
> 3) Blueprints fro
> * periodic updates can overwhelm things. Solution: remove unneeded updates,
> most scheduling data only changes when an instance does some state change.
It's not clear that periodic updates do overwhelm things, though.
Boris ran the tests. Apparently 10k nodes updating once a minute
extend the
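For scale context, the load being discussed is easy to ballpark; the node count and update period are the ones from the thread, everything else is plain arithmetic:

```python
# Back-of-envelope: how many state updates per second does a periodic
# update scheme generate at the scale Boris tested?
nodes = 10_000
updates_per_node_per_min = 1  # one update per node per minute

updates_per_second = nodes * updates_per_node_per_min / 60
print(round(updates_per_second))  # roughly 167 updates/sec cluster-wide
```

Whether ~167 small messages per second "overwhelms" anything depends entirely on the per-update cost, which is the point being argued.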
On 7 August 2013 11:53, John Garbutt wrote:
> I have thought about a (slightly messy) alternative:
> * use ConfigDrive only for basic networking config on the first nic
> (i.e. just enough to get to the metadata service)
> * get all other data from the metadata service, once the first nic is up
T
Speaking as someone who didn't know what Savanna was before now... I still
don't know what Savanna is. You need to be more specific than that.
--
Ian.
On 10 September 2013 18:34, Sergey Lukjanov wrote:
> +1 for both program name and mission statement
>
>
> Sincerely yours,
> Sergey Lukjanov
>
On 11 July 2016 at 11:12, Chris Friesen wrote:
> On 07/11/2016 10:39 AM, Jay Pipes wrote:
>
> Out of curiosity, in what scenarios is it better to limit the instance's
>> MTU to
>> a value lower than that of the maximum path MTU of the infrastructure? In
>> other
>> words, if the infrastructure su
On 11 July 2016 at 11:49, Sean M. Collins wrote:
> Sam Yaple wrote:
> > In this situation, since you are mapping real-ips and the real world runs
> > on 1500 mtu
>
> Don't be so certain about that assumption. The Internet is a very big
> and diverse place
OK, I'll contradict myself now - th
On 18 April 2016 at 04:33, Ihar Hrachyshka wrote:
> Akihiro Motoki wrote:
>
> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :
>>
>>> Sławek Kapłoński wrote:
>>>
>>> Hello,
>>> What MTU have You got configured on VMs? I had issue with performance on
>>> vxlan network with standard MTU (1500)
I wrote the spec for the MTU work that's in the Neutron API today. It
haunts my nightmares. I learned so many nasty corner cases for MTU, and
you're treading that same dark path.
I'd first like to point out a few things that change the implications of
what you're reporting in strange ways. [1] p
On 23 January 2016 at 11:27, Adam Lawson wrote:
> For the sake of over-simplification, is there ever a reason to NOT enable
> jumbo frames in a cloud/SDN context where most of the traffic is between
> virtual elements that all support it? I understand that some switches do
> not support it and tr
Us in a mixed environment, at least if everything is working as
intended.
--
Ian.
[1]
https://github.com/openstack/neutron/blob/544ff57bcac00720f54a75eb34916218cb248213/releasenotes/notes/advertise_mtu_by_default-d8b0b056a74517b8.yaml#L5
> On Jan 24, 2016 20:48, "Ian Wells" wrote:
On 22 January 2016 at 10:35, Neil Jerram wrote:
> * Why change from ML2 to core plugin?
>
> - It could be seen as resolving a conceptual mismatch.
> networking-calico uses
> IP routing to provide L3 connectivity between VMs, whereas ML2 is
> ostensibly
> all about layer 2 mechanisms.
You've
one using the 1550+hacks and other methods of today will find their
system changes behaviour if we started setting that specific default.
Regardless, we need to take that documentation and update it. It was a
nasty hack back in the day and not remotely a good idea now.
> On Jan 24, 2016 23:
Actually, I note that that document is Juno and there doesn't seem to be
anything at all in the Liberty guide now, so the answer is probably to add
settings for path_mtu and segment_mtu in the recommended Neutron
configuration.
On 24 January 2016 at 22:26, Ian Wells wrote:
> On 24 Janu
;s a
> behavior change considering the current behavior is annoying. :)
> On Jan 24, 2016 23:31, "Ian Wells" wrote:
>
>> On 24 January 2016 at 22:12, Kevin Benton wrote:
>>
>>> >The reason for that was in the other half of the thread - it's not
>&
On 25 January 2016 at 07:06, Matt Kassawara wrote:
> Overthinking and corner cases led to the existing implementation which
> doesn't solve the MTU problem and arguably makes the situation worse
> because options in the configuration files give operators the impression
> they can control it.
We are giv
As I recall, network_device_mtu sets up the MTU on a bunch of structures
independently of whatever the correct value is. It was a bit of a
workaround back in the day and is still a bit of a workaround now. I'd
sooner we actually fix up the new mechanism (which is kind of hard to do
when the close
On 27 January 2016 at 11:06, Flavio Percoco wrote:
> FWIW, the current governance model does not prevent competition. That's
> not to
> be understood as we encourage it but rather than there could be services
> with
> some level of overlap that are still worth being separate.
>
There should alwa
On 19 July 2015 at 03:46, Neil Jerram wrote:
> The change at [1] creates and describes a new 'routed' value for
> provider:network_type. It means that a compute host handles data
> to/from the relevant TAP interfaces by routing it, and specifically
> that those TAP interfaces are not bridged.
On 20 July 2015 at 10:21, Neil Jerram wrote:
> Hi Ian,
>
> On 20/07/15 18:00, Ian Wells wrote:
>
>> On 19 July 2015 at 03:46, Neil Jerram > <mailto:neil.jer...@metaswitch.com>> wrote:
>>
>> The change at [1] creates and describes a new 'rout
There are two routed network models:
- I give my VM an address that bears no relation to its location and ensure
the routed fabric routes packets there - this is very much the routing
protocol method for doing things where I have injected a route into the
network and it needs to propagate. It's a
On 21 July 2015 at 07:52, Carl Baldwin wrote:
> > Now, you seem to generally be thinking in terms of the latter model,
> particularly since the provider network model you're talking about fits
> there. But then you say:
>
> Actually, both. For example, GoDaddy assigns each vm an ip from the
> l
ion of routing for floating IPs is also a scheduling
> problem, though one that would require a lot more changes to how FIP are
> allocated and associated to solve.
>
> John
>
> [1] https://review.openstack.org/#/c/180803/
> [2] https://bugs.launchpad.net/neutron/+bug/1458890/c
It is useful, yes; and posting diffs on the mailing list is not the way to
get them reviewed and approved. If you can get this on gerrit it will get
a proper review, and I would certainly like to see something like this
incorporated.
On 21 July 2015 at 15:41, John Nielsen wrote:
> I may be in a
Neutron already offers a DNS server (within the DHCP namespace, I think).
It does forward on non-local queries to an external DNS server, but it
already serves local names for instances; we'd simply have to set one
aside, or perhaps use one in a 'root' but nonlocal domain
(metadata.openstack e.g.).
Can I ask a different question - could we reject a few simple-to-check
things on the push, like bad commit messages? For things that take 2
seconds to fix and do make people's lives better, it's not that they're
rejected, it's that the whole rejection cycle via gerrit review (push/wait
for tests t
On 7 October 2015 at 16:00, Chris Friesen
wrote:
> 1) Some resources (RAM) only require tracking amounts. Other resources
> (CPUs, PCI devices) require tracking allocation of specific individual host
> resources (for CPU pinning, PCI device allocation, etc.). Presumably for
> the latter we woul
On 7 October 2015 at 22:17, Chris Friesen
wrote:
> On 10/07/2015 07:23 PM, Ian Wells wrote:
>
>>
>> The whole process is inherently racy (and this is inevitable, and
>> correct),
>>
>>
> Why is it inevitable?
>
It's inevitable because everythin
On 8 October 2015 at 09:10, Ed Leafe wrote:
> You've hit upon the problem with the current design: multiple, and
> potentially out-of-sync copies of the data.
Arguably, this is the *intent* of the current design, not a problem with
it. The data can never be perfect (ever) so go with 'good enou
On 8 October 2015 at 13:28, Ed Leafe wrote:
> On Oct 8, 2015, at 1:38 PM, Ian Wells wrote:
> > Truth be told, storing that data in MySQL is secondary to the correct
> functioning of the scheduler.
>
> I have no problem with MySQL (well, I do, but that's not relevant to
On 9 October 2015 at 12:50, Chris Friesen
wrote:
> Has anybody looked at why 1 instance is too slow and what it would take to
>
>> make 1 scheduler instance work fast enough? This does not preclude the
>> use of
>> concurrency for finer grain tasks in the background.
>>
>
> Currently we pull data
On 9 October 2015 at 18:29, Clint Byrum wrote:
> Instead of having the scheduler do all of the compute node inspection
> and querying though, you have the nodes push their stats into something
> like Zookeeper or consul, and then have schedulers watch those stats
> for changes to keep their in-me
On 10 October 2015 at 23:47, Clint Byrum wrote:
> > Per before, my suggestion was that every scheduler tries to maintain a
> copy
> > of the cloud's state in memory (in much the same way, per the previous
> > example, as every router on the internet tries to make a route table out
> of
> > what i
On 11 October 2015 at 00:23, Clint Byrum wrote:
> I'm in, except I think this gets simpler with an intermediary service
> like ZK/Consul to keep track of this 1GB of data and replace the need
> for 6, and changes the implementation of 5 to "updates its record and
> signals its presence".
>
OK, s
On 12 October 2015 at 21:18, Clint Byrum wrote:
> We _would_ keep a local cache of the information in the schedulers. The
> centralized copy of it is to free the schedulers from the complexity of
> having to keep track of it as state, rather than as a cache. We also don't
> have to provide a way
The fix should work fine. It is technically a workaround for the way
checksums work in virtualised systems, and the unfortunate fact that some
DHCP clients check checksums on packets where the hardware has checksum
offload enabled. (This doesn't work due to an optimisation in the way QEMU
treats
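For reference, the usual shape of this workaround is a mangle-table rule that recomputes the checksum on DHCP client-bound packets before they reach the guest. This is a sketch of the commonly deployed rule, not necessarily the exact fix under discussion:

```shell
# Recompute UDP checksums on packets headed for DHCP clients (port 68),
# so guests whose virtual NIC has checksum offload enabled still see a
# valid checksum instead of the placeholder QEMU leaves behind.
iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill
```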
VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to
a hopefully interested audience.
At the summit, we wrote up a spec we were thinking of doing at [1]. It
actually proposes two things, which is a little naughty really, but hey.
Firstly we propose that we turn binding into
dency.
> Thank you for sharing this,
> Irena
> [1] https://review.openstack.org/#/c/162468/
>
> On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells wrote:
>
>> VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this
>> to a hopefully interested audience
I don't see a problem with this, though I think you do want plug/unplug
calls to be passed on to Neutron so that has the opportunity to set up the
binding from its side (usage >0) and tear it down when you're done with it
(usage <1).
There may be a set of races you need to deal with, too - what ha
On 11 June 2015 at 15:34, Michael Still wrote:
> On Fri, Jun 12, 2015 at 7:07 AM, Mark Boo wrote:
> > - What functionality is missing (if any) in config drive / metadata
> service
> > solutions to completely replace file injection?
>
> None that I am aware of. In fact, these two other options pr
On 11 June 2015 at 12:37, Richard Raseley wrote:
> Andrew Laski wrote:
>
>> There are many reasons a deployer may want to live-migrate instances
>> around: capacity planning, security patching, noisy neighbors, host
>> maintenance, etc... and I just don't think the user needs to know or
>> care t
On 11 June 2015 at 02:37, Andreas Scheuring
wrote:
> > Do you happen to know how data gets routed _to_ a VM, in the
> > type='network' case?
>
> Neil, sorry no. Haven't played around with that, yet. But from reading
> the libvirt man, it looks good. It's saying "Guest network traffic will
> be fo
In general, while you've applied this to networking (and it's not the first
time I've seen this proposal), the same technique will work with any device
- PF or VF, networking or other:
- notify the VM via an accepted channel that a device is going to be
temporarily removed
- remove the device
- mi
On 28 January 2015 at 17:32, Robert Collins
wrote:
> E.g. its a call (not cast) out to Neutron, and Neutron returns when
> the VIF(s) are ready to use, at which point Nova brings the VM up. If
> the call times out, we error.
>
I don't think this model really works with distributed systems, and i
On 2 February 2015 at 09:49, Chris Friesen
wrote:
> On 02/02/2015 10:51 AM, Jay Pipes wrote:
>
>> This is a bug that I discovered when fixing some of the NUMA related nova
>> objects. I have a patch that should fix it up shortly.
>>
>
> Any chance you could point me at it or send it to me?
>
> T
With apologies for derailing the question, but would you care to tell us
what evil you're planning on doing? I find it's always best to be informed
about these things.
--
Ian.
(Why yes, it *is* a Saturday morning.)
On 6 March 2015 at 12:23, Michael Krotscheck wrote:
> Heya!
>
> So, a while ag
On 6 March 2015 at 13:16, Sławek Kapłoński wrote:
> Hello,
>
> Today I found bug https://bugs.launchpad.net/neutron/+bug/1314614 because
> I
> have such problem on my infra.
>
(For reference, if you delete a port that a Nova is using - it just goes
ahead and deletes the port from Neutron and lea
On 11 March 2015 at 04:27, Fredy Neeser wrote:
> 7: br-ex.1: mtu 1500 qdisc noqueue state
> UNKNOWN group default
> link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
>valid_lft forever preferred_lft forever
>
> 8: br-
On 11 March 2015 at 10:56, Matt Riedemann
wrote:
> While looking at some other problems yesterday [1][2] I stumbled across
> this feature change in Juno [3] which adds a config option
> "allow_duplicate_networks" to the [neutron] group in nova. The default
> value is False, but according to the s
On 12 March 2015 at 05:33, Fredy Neeser wrote:
> 2. I'm using policy routing on my hosts to steer VXLAN traffic (UDP
> dest. port 4789) to interface br-ex.12 -- all other traffic from
> 192.168.1.14 is source routed from br-ex.1, presumably because br-ex.1 is a
> lower-numbered interface than
On 18 March 2015 at 03:33, Duncan Thomas wrote:
> On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core) <
> amos.steven.da...@hp.com> wrote:
>
>> Ceph/Cinder:
>> LVM or other?
>> SCSI-backed?
>> Any others?
>>
>
> I'm wondering why any of the above matter to an application.
>
The Neutron requiremen
There are precedents for this. For example, the attributes that currently
exist for IPv6 advertisement are very similar:
- added during the run of a stable Neutron API
- properties added on a Neutron object (MTU and VLAN affect network, but
IPv6 affects subnet - same principle though)
- settable,
Per the other discussion on attributes, I believe the change walks in
historical footsteps and it's a matter of project policy choice. That
aside, you raised a couple of other issues on IRC:
- backward compatibility with plugins that haven't adapted their API - this
is addressed in the spec, whic
On 19 March 2015 at 11:44, Gary Kotton wrote:
> Hi,
> Just the fact that we did this does not make it right. But I guess that we
> are starting to bend the rules. I think that we really need to be far more
> diligent about this kind of stuff. Having said that we decided the
> following on IRC:
>
ml
> [3]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
> [4]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
> [5] https://review.openstack.org/#/c/136760/
On 20 March 2015 at 15:49, Salvatore Orlando wrote:
> The MTU issue has been a long-standing problem for neutron users. What
> this extension is doing is simply, in my opinion, enabling API control over
> an aspect users were dealing with previously through custom made scripts.
>
Actually, versi
On 22 March 2015 at 07:48, Jay Pipes wrote:
> On 03/20/2015 05:16 PM, Kevin Benton wrote:
>
>> To clarify a bit, we obviously divide lots of things by tenant (quotas,
>> network listing, etc). The difference is that we have nothing right now
>> that has to be unique within a tenant. Are there obj
That spec ensures that you can tell what the plugin is doing. You can ask
for a VLAN transparent network, but the cloud may tell you it can't make
one.
The OVS driver in Openstack drops VLAN tagged packets, I'm afraid, and the
spec you're referring to doesn't change that. The spec does ensure th
On 24 March 2015 at 11:45, Armando M. wrote:
> This may be besides the point, but I really clash with the idea that we
> provide a reference implementation on something we don't have CI for...
>
Aside from the unit testing, it is going to get a test for the case we can
test - when using the stan
7:48, Guo, Ruijing wrote:
> I am trying to understand how guest os use trunking network.
>
>
>
> If guest os use bridge like Linuxbride and OVS, how we launch it and how
> libvirt to support it?
>
>
>
> Thanks,
>
> -Ruijing
>
>
>
>
>
> *From:* Ian
This puts me in mind of a previous proposal, from the Neutron side of
things. Specifically, I would look at Erik Moe's proposal for VM ports
attached to multiple networks:
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms .
I believe that you want logical ports hiding behind a conventi
On 20 April 2015 at 13:02, Kevin L. Mitchell
wrote:
> On Mon, 2015-04-20 at 13:57 -0600, Chris Friesen wrote:
> > > However, minor changes like that could still possibly break clients
> that are not
> > > expecting them. For example, a client that uses the json response as
> arguments
> > > to a
On 20 April 2015 at 15:23, Matthew Treinish wrote:
> On Mon, Apr 20, 2015 at 03:10:40PM -0700, Ian Wells wrote:
> > It would be nice to have a consistent policy here; it would make future
> > decision making easier and it would make it easier to write specs if we
> > knew
On 20 April 2015 at 07:40, Boris Pavlovic wrote:
> Dan,
>
> IMHO, most of the test coverage we have for nova's neutronapi is more
>> than useless. It's so synthetic that it provides no regression
>> protection, and often requires significantly more work than the change
>> that is actually being a
On 20 April 2015 at 17:52, David Kranz wrote:
> On 04/20/2015 08:07 PM, Ian Wells wrote:
>
> Whatever your preference might be, I think it's best we lose the
> ambiguity. And perhaps advertise that page a little more widely, actually
> - I hadn't come across it in
On 13 May 2015 at 10:30, Vinod Pandarinathan (vpandari)
wrote:
> - Traditional monitoring tools (Nagios, Zabbix, ) are necessary anyway
> for infrastructure monitoring (CPU, RAM, disks, operating system, RabbitMQ,
> databases and more) and diagnostic purposes. Adding OpenStack service
> check
In conjunction with the release of VPP 17.04, I'd like to invite you all to
try out networking-vpp for VPP 17.04. VPP is a fast userspace forwarder
based on the DPDK toolkit, and uses vector packet processing algorithms to
minimise the CPU time spent on each packet and maximise throughput.
network
There are two steps to how this information is used:
Step 1: create a network - the type driver config on the neutron-server
host will determine which physnet and VLAN ID to use when you create it.
It gets stored in the DB. No networking is actually done, we're just
making a reservation here. Th
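For step 1, the reservation is driven by the ML2 type driver configuration on the neutron-server host. A minimal sketch, with an illustrative physnet name and VLAN range (the real values are deployment-specific):

```ini
# ml2_conf.ini fragment on the neutron-server host. When a network is
# created, the vlan type driver picks a physnet and an unused VLAN ID
# from this range and records the reservation in the DB.
[ml2]
type_drivers = vlan
tenant_network_types = vlan

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199
```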
I'm coming to this cold, so apologies when I put my foot in my mouth. But
I'm trying to understand what you're actually getting at, here - other than
helpful simplicity - and I'm not following the detail of you're thinking,
so take this as a form of enquiry.
On 14 May 2017 at 10:02, Monty Taylor
On 5 July 2017 at 14:14, Ihar Hrachyshka wrote:
> Heya,
>
> we have https://bugs.launchpad.net/neutron/+bug/1671634 approved for
> Pike that allows setting MTU for network on creation.
This was actually in the very first MTU spec (in case no one looked),
though it never got implemented. The sp
OK, so I should read before writing...
On 5 July 2017 at 18:11, Ian Wells wrote:
> On 5 July 2017 at 14:14, Ihar Hrachyshka wrote:
>
>> Heya,
>>
>> we have https://bugs.launchpad.net/neutron/+bug/1671634 approved for
>> Pike that allows setting MTU for netwo
On 7 July 2017 at 12:14, Ihar Hrachyshka wrote:
> > That said: what will you do with existing VMs that have been told the
> MTU of
> > their network already?
>
> Same as we do right now when modifying configuration options defining
> underlying MTU: change it on API layer, update data path with t
In conjunction with the release of VPP 17.07, I'd like to invite you all to
try out networking-vpp 17.07.1 for VPP 17.07. VPP is a fast userspace
forwarder based on the DPDK toolkit, and uses vector packet processing
algorithms to minimise the CPU time spent on each packet and maximise
throughput.
Since OVS is doing L2 forwarding, you should be fine setting the MTU to as
high as you choose, which would probably be the segment_mtu in the config,
since that's what it defines - the largest MTU that (from the Neutron API
perspective) is usable and (from the OVS perspective) will be used in the
s
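To make the MTU arithmetic concrete: for an L2-forwarded (VLAN) network the guest can use the full segment MTU, while tunnelled network types lose their encapsulation overhead. The overhead figures below are the standard ones, not numbers from this thread:

```python
# Largest guest-usable MTU per Neutron network type, given the MTU of
# the underlying physical segment. Overheads: VLAN tags live in the L2
# header and cost the guest nothing; VXLAN adds 50 bytes (outer Ethernet
# 14 + outer IP 20 + UDP 8 + VXLAN header 8); GRE adds 42 (14 + 20 + 8).
ENCAP_OVERHEAD = {"vlan": 0, "gre": 42, "vxlan": 50}

def guest_mtu(segment_mtu: int, network_type: str) -> int:
    """Return the MTU a guest on this network type can safely use."""
    return segment_mtu - ENCAP_OVERHEAD[network_type]

print(guest_mtu(1500, "vxlan"))  # 1450
print(guest_mtu(9000, "vlan"))   # 9000
```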
In conjunction with the release of VPP 17.10, I'd like to invite you all to
try out networking-vpp 17.10(*) for VPP 17.10. VPP is a fast userspace
forwarder based on the DPDK toolkit, and uses vector packet processing
algorithms to minimise the CPU time spent on each packet and maximise
throughput