Some comments inline,
Salvatore
On 21 May 2014 15:23, Mandeep Dhami wrote:
> Hi Sean:
>
> While the APIs might not be changing*, I suspect that there are
> significant design decisions being made**. These changes are probably more
> significant than any new feature being discussed. As a communi
In principle there is nothing that should prevent us from implementing an
IP reservation mechanism.
As with anything, the first thing to check is literature or "related work"!
If any other IaaS system is implementing such a mechanism, is it exposed
through the API somehow?
Also this feature is lik
I'll be happy to attend this meeting.
I would also be happy to do the actual work, or part of it - assuming all
the work items have not been already assigned.
Even though I do realise the importance of the issue - to me it's still a
design/implementation issue.
We usually have subteams for sub-project
I'm +1 with this plan as well.
Regards,
Salvatore
On 21 May 2014 22:58, Nachi Ueno wrote:
> +1!
> Carl is doing great contribution for the community.
> He is really active on reviews with very high quality comments.
> His input based on large scale deployment is also very valuable for
> the co
Some comments inline.
Salvatore
On 19 May 2014 20:32, sridhar basam wrote:
>
>
>
> On Mon, May 19, 2014 at 1:30 PM, Jay Pipes wrote:
>
>> Stackers,
>>
>> On Friday in Atlanta, I had the pleasure of moderating the database
>> session at the Ops Meetup track. We had lots of good discussions and
You can try to reach out to the original assignee (sweston) on IRC.
I haven't heard from him in a while; if he's not actively working on this
topic anymore I think it's ok for you to take over.
Feel free to get in touch with me (salv-orlando) on IRC for discussing
implementation alternatives.
Salva
Thanks Maru,
I've added this patch to my list of patches to review.
I've also targeted the bug at J-1 because I love being a pedantic bookkeeper.
Salvatore
On 8 May 2014 11:41, Maru Newby wrote:
> Memory usage due to plugin+mock leakage is addressed by the following
> patch:
>
> https://review.
Technically we don't need anything in neutron to migrate to a single config
file, other than rearranging the files in ./etc.
For devstack, iniset calls to plugin-specific configuration files should
then be adjusted accordingly.
I think we started with plugin specific configuration files because at that
time i
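Merging plugin-specific files into a single configuration, as described above, could be sketched with Python's configparser (the section and option names here are illustrative assumptions, not Neutron's actual configuration layout):

```python
import configparser

# Minimal sketch: fold a plugin-specific config file's sections into a
# single base configuration, mimicking a merge of the files in ./etc.
base = configparser.ConfigParser()
base.read_dict({"DEFAULT": {"core_plugin": "ml2"}})

plugin = configparser.ConfigParser()
plugin.read_dict({"ml2": {"type_drivers": "vlan,gre"}})

# Copy every section/option from the plugin file into the base config
for section in plugin.sections():
    if not base.has_section(section):
        base.add_section(section)
    for key, value in plugin.items(section):
        base.set(section, key, value)

print(base.get("ml2", "type_drivers"))  # vlan,gre
```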
It seems to me that there are two aspects being discussed in this thread:
style and practicality.
From a style perspective, it is important to give a "uniform" experience to
Neutron API users.
Obviously this does not mean the Load Balancing API must adhere to some
strict criteria.
For instance, 2
The patch you've been looking at just changes the way in which SystemExit
is used, it does not replace it with sys.exit.
In my experience sys.exit was causing unit test threads to interrupt
abruptly, whereas SystemExit was being caught by the test runner and
handled.
I find therefore a bit strange
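The behaviour the post relies on - that a raised SystemExit can be caught by a test runner like any other exception rather than killing the test thread - can be shown with a minimal unittest sketch (function and class names here are hypothetical):

```python
import unittest

def fatal_path():
    # Simulates a code path that raises SystemExit directly; a test
    # runner can catch it like any BaseException and keep going.
    raise SystemExit(1)

class TestSystemExitHandling(unittest.TestCase):
    def test_systemexit_is_caught(self):
        with self.assertRaises(SystemExit) as ctx:
            fatal_path()
        self.assertEqual(ctx.exception.code, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSystemExitHandling)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```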
On 30 April 2014 17:28, Jesse Pretorius wrote:
> On 30 April 2014 16:30, Oleg Bondarev wrote:
>
>> I've tried updating interface while running ssh session from guest to
>> host and it was dropped :(
>>
>
Please allow me to tell you "I told you so!" ;)
>
> The drop is not great, but ok if the in
Anna,
It's good to see progress being made on this blueprint.
I have some comments inline.
Also, I would recommend keeping in mind the comments Mark had regarding
migration generation and plugin configuration in his post on the email
thread I started.
Salvatore
On 30 April 2014 14:16, Anna Ka
relevant
stakeholders (operators and developers), define an actionable plan for
Juno, and then distribute tasks.
It would be a shame if it turns out several contributors are working on
this topic independently.
Salvatore
On 22 April 2014 16:27, Jesse Pretorius wrote:
> On 22 April 2014 14:58
From previous requirements discussions, I recall that:
- A control plane outage is unavoidable (I think everybody agrees here)
- Data plane outages should be avoided at all costs; small l3 outages
deriving from the transition to the l3 agent from the network node might be
allowed.
However, a L2 da
When I initially spoke to the infra team regarding this problem, they
suggested that "just fixing migrations" so that the job could run was not a
real option.
I tend to agree with this statement.
However, I'm open to options for getting grenade going until the migration
problem is solved. Ugly work
On 17 April 2014 04:02, Aaron Rosen wrote:
> Hi,
>
> Comments inline:
>
>
> On Tue, Apr 8, 2014 at 3:16 PM, Salvatore Orlando wrote:
>
>> I have been recently investigating reports of slowness for list responses
>> in the Neutron API.
>> This was first
fficult to have some insight into set of policy
> rules and see, if there are any rules that apply per attribute value of
> given resource? If no - use simplified approach, if yes - fallback to
> existing slow one.
> Wouldn't it both speedup most of operations while preserving existi
If the image you're adding is a diagram, I would think about asciiflow.com first!
On 16 April 2014 15:09, Kyle Mestery wrote:
> I think the problem is that your spec should be at the toplevel of the
> juno directory, and that's why the UT is failing. Can you move your
> spec up a level, includin
Thanks Anna.
I've been following the issue so far, but I am happy to hand it over to you.
I think the problem assessment is complete, but if you have more questions
ping me on IRC.
Regarding the solution, I think we already have a fairly wide consensus on
the approach.
There are however a few det
On 14 April 2014 17:27, Sean Dague wrote:
> On 04/14/2014 12:09 PM, Kyle Mestery wrote:
> > On Mon, Apr 14, 2014 at 10:54 AM, Salvatore Orlando
> wrote:
>
> >>> The system could be made smarter by storing also a list of "known"
> >>> migrations
Resending with [Neutron] tag.
Salvatore
On 14 April 2014 16:00, Salvatore Orlando wrote:
> This is a rather long post. However the gist of it is that Neutron
> migrations are failing to correctly perform database upgrades when service
> plugins are involved, and this probably
This is a rather long post. However the gist of it is that Neutron
migrations are failing to correctly perform database upgrades when service
plugins are involved, and this probably means the conditional migration
path we designed for the Grizzly release is proving not robust enough when
dealing wi
On 11 April 2014 19:11, Robert Kukura wrote:
>
> On 4/10/14, 6:35 AM, Salvatore Orlando wrote:
>
> The bug for documenting the 'multi-provider' API extension is still open
> [1].
> The bug report has a good deal of information, but perhaps it might be
> worth al
The bug for documenting the 'multi-provider' API extension is still open
[1].
The bug report has a good deal of information, but perhaps it might be
worth also documenting how ML2 uses the segment information, as this might
be useful to understand when one should use the 'provider' extension and
wh
Auditing has been discussed for the firewall extension.
However, it is reasonable to expect some form of auditing for security
group rules as well.
To the best of my knowledge there has never been an explicit decision to
not support logging.
However, my guess here is that we might be better off wi
I have been recently investigating reports of slowness for list responses
in the Neutron API.
This was first reported in [1], and then recently was observed with both
the ML2 and the NSX plugins.
The root cause of this issue is that a policy engine check is performed
for every attribute of every r
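The per-attribute cost described above can be made concrete with a toy filter - listing N resources with K attributes each triggers N × K policy evaluations (function and attribute names are invented for illustration; this is not Neutron's actual policy engine):

```python
checks = 0

def check_policy(context, attribute, value):
    # Stand-in for a single policy engine rule evaluation
    global checks
    checks += 1
    return True

def filter_resource(context, resource):
    # One check per attribute: the dominant cost in large list calls
    return {k: v for k, v in resource.items()
            if check_policy(context, k, v)}

ports = [{"id": str(i), "name": "port-%d" % i, "admin_state_up": True}
         for i in range(100)]
visible = [filter_resource({}, p) for p in ports]
print(checks)  # 100 resources x 3 attributes = 300 evaluations
```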
Xuhan, Sean and others.
As stated in the commit message and bug, this is a temporary measure to
avoid making ra_mode and ipv6_address_mode consumable in the icehouse
release.
I made this move because https://review.openstack.org/#/c/70649/ did not
land in Icehouse, and that was needed to make thos
Hi Mike,
For all neutron-related fuel developments please feel free to reach out to
the neutron team for any help you might need either by using the ML or
pinging people in #openstack-neutron.
Regarding the fuel blueprints you linked in your first post, I am looking
in particular at
https://bluepri
Hi Simon,
I agree with your concern.
Let me point out however that VMware mine sweeper runs almost all the smoke
suite.
It's been down a few days for an internal software upgrade, so perhaps you
have not seen any recent report from it.
I've seen some CI systems testing as little as tempest.api.ne
Hi Thomas,
it probably won't be a bad idea if you can share the patches you're
applying to the default configuration files.
I think all distros are patching them anyway, so this might allow us to
provide mostly ready to use config files.
Is there a chance you can push something to gerrit?
Salvat
On 26 March 2014 19:19, James E. Blair wrote:
> Salvatore Orlando writes:
>
> > On another note, we noticed that the duplicated jobs currently executed
> for
> > redundancy in neutron actually seem to point all to the same build id.
> > I'm not sure then if w
ihiro were saying.
Salvatore
[1] https://bugs.launchpad.net/neutron/+bug/1297469
On 26 March 2014 09:02, Salvatore Orlando wrote:
> The thread branched, and it's getting long.
> I'm trying to summarize the discussion for other people to quickly catch
> up.
>
>
ssue
Nachi's pointing out regarding the fact that a libvirt network filter name
should not be added in guest config.
Salvatore
On 26 March 2014 05:57, Akihiro Motoki wrote:
> Hi Nachi and the teams,
>
> (2014/03/26 9:57), Salvatore Orlando wrote:
> > I hope we can sort this o
I hope we can sort this out on the mailing list or IRC, without having to
schedule emergency meetings.
Salvatore
On 25 March 2014 22:58, Nachi Ueno wrote:
> Hi Nova, Neutron Team
>
> I would like to discuss issue of Neutron + Nova + OVS security group fix.
> We have a discussion in IRC today, but
Inline
Salvatore
On 24 March 2014 23:01, Matthew Treinish wrote:
> On Mon, Mar 24, 2014 at 09:56:09PM +0100, Salvatore Orlando wrote:
> > Thanks a lot!
> >
> > We now need to get on these bugs, and define with QA an acceptable
> failure
> > rate criterion for sw
Hi Jakub,
thanks for finding this out.
I think there might be a migration which needs to be fixed.
I will look into the logs you linked to see what can be done.
Salvatore
ps: I've also added a [NEUTRON] tag to the subject so it will be easy for
people doing filters on mailing list to retrieve th
Thanks a lot!
We now need to get on these bugs, and define with QA an acceptable failure
rate criterion for switching the full job to voting.
It would be good to have a chance to only run the tests against code which
is already in master.
To this aim we might push a dummy patch, and keep it spinni
Hi Vinay,
I left a few comments on the specification document.
While I understand this is functional for the VPC use case, there might
also be applications outside of the VPC.
My only concern is that, at least in the examples in the document, this
appears to violate a bit the tenet of neutron
Replies inline,
Salvatore
On 17 March 2014 21:38, Eugene Nikanorov wrote:
>
>
>
> On Mon, Mar 17, 2014 at 11:46 PM, Salvatore Orlando
> wrote:
>
>> It is a common practice to have both an operational and an administrative
>> status.
>> I agree ACTIVE as a
It is a common practice to have both an operational and an administrative
status.
I agree that ACTIVE as a term might be confusing. Even in the case of a
port, it is not really clear whether it means "READY" or "LINK UP".
Terminology-wise I would suggest "READY" rather than "DEPLOYED", as it is a
te
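The split between administrative and operational status mentioned above can be sketched as two separate fields on a resource (all names here are illustrative assumptions, not an actual API):

```python
from enum import Enum

class AdminState(Enum):
    # What the user asked for
    UP = "UP"
    DOWN = "DOWN"

class OperStatus(Enum):
    # What the backend actually reports; READY is the term suggested
    # in the post in place of ACTIVE or DEPLOYED
    BUILD = "BUILD"
    READY = "READY"
    ERROR = "ERROR"

class LoadBalancer:
    def __init__(self):
        self.admin_state = AdminState.UP
        self.oper_status = OperStatus.BUILD

    def provisioned(self):
        # The backend signals that deployment completed
        self.oper_status = OperStatus.READY

lb = LoadBalancer()
lb.provisioned()
print(lb.admin_state.value, lb.oper_status.value)  # UP READY
```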
wrote:
> Hello devs,
>
> I wanted to update the analysis performed by Salvatore Orlando a few weeks
> ago [1]
> I used the following query for Logstash [2] to detect the failures of the
> last 48 hours.
>
> There were 77 failures (40% of the total).
> I classi
Hi,
I read this thread and I think this moves us in the right direction of
moving away from provider mapping, and, most importantly, abstracting away
backend-specific details.
I was however wondering if "flavours" (or "service offerings") will act
more like a catalog or a scheduler.
The differenc
Hi Kyle,
I think conceptually your approach is fine.
I would have had concerns if you were trying to manage ODL life cycle
through devstack (like installing/uninstalling it or configuring the ODL
controller).
But looking at your code it seems you're just setting up the host so that
it could work w
I understand that the fact that resources with invalid tenant_ids can be
created (only with admin rights, at least for Neutron) can be annoying.
However, I support Jay's point on cross-project interactions. If tenant_id
validation (and orphaned resource management) can't be efficiently handled,
then I'd
Hi Assaf,
some comments inline.
As a general comment, I'd prefer to move all the discussions to gerrit
since the patches are now in review.
This unless you have design concerns (the ones below look more related to
the implementation to me)
Salvatore
On 24 February 2014 15:58, Assaf Muller wrot
not assigned. Most of the bugs are assigned to
> you, I was wondering if you'd use some help. I guess we can coordinate
> better when you are online.
>
> cheers,
>
> Rossella
>
>
> On 02/23/2014 03:14 AM, Salvatore Orlando wrote:
>
> I have tried to collect more infor
I have tried to collect more information on neutron full job failures.
So far there have been 219 failures and 891 successes, for an overall
failure rate of 19.7%, which is in line with Sean's evaluation.
The count was performed exclusively on jobs executed against the master branch.
The failure rate fo
Hi,
I've provided an update on this bug (which by the way is finally not
anymore #1 gate blocker!).
Please see the bug report [1]
I've tagged heat as well since over 50% of hits are from a non-voting heat
job which is trying to ssh into machines using their private IP; this might
not work with Ne
It seems this work item is made of several blueprints, some of which are
not yet approved. This is true at least for the Neutron blueprint regarding
policy extensions.
Since I first looked at this spec I've been wondering why nova has been
selected as an endpoint for network operations rather than
+1
On 11 Feb 2014 10:47, "Gary Kotton" wrote:
> +1
>
>
> On 2/11/14 1:28 AM, "Mark McClain" wrote:
>
> >All-
> >
> >I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.
> >Oleg has been valuable contributor to Neutron by actively reviewing,
> >working on bugs, and contributi
.
But still, we should probably make gate tests symmetric again.
Salvatore
On 7 February 2014 15:05, Salvatore Orlando wrote:
> Hi,
>
> By merging patch https://review.openstack.org/#/c/53459/, full tenant
> isolation has been turned on for neutron API tests.
> We have
Hi,
By merging patch https://review.openstack.org/#/c/53459/, full tenant
isolation has been turned on for neutron API tests.
We have executed a few experimental checks before merging and we found out
that the isolated jobs were always passing.
However, the gating jobs both in tempest and nova do
Sean,
You surely have my permission. If it's publicly available you should not
even ask for it!
Well, unless it's copyrighted, but at least I could not possibly copyright
material mostly stolen from other people ;)
On the other hand the issue with slideshare and such is that material might
come a
It might be creative, but it's a shame that it did not serve the purpose.
At least it confirmed the kernel bug was related to process termination in
network namespaces, but was not due to SIGKILL exclusively, as it occurred
with SIGTERM as well.
On the bright side, Mark has now pushed another patch whic
Thanks Chris!
some comments inline.
On 25 January 2014 02:08, Chris Wright wrote:
> * Salvatore Orlando (sorla...@nicira.com) wrote:
> > I've found out that several jobs are exhibiting failures like bug 1254890
> > [1] and bug 1253896 [2] because openvswitch seem to be c
I've found out that several jobs are exhibiting failures like bug 1254890
[1] and bug 1253896 [2] because openvswitch seem to be crashing the kernel.
The kernel trace reports as offending process usually either
neutron-ns-metadata-proxy or dnsmasq, but [3] seem to clearly point to
ovs-vsctl.
254 ev
Hi,
My expertise on the subject is pretty much zero, but the performance loss
(which I think is throughput loss) is so blatant that perhaps there's more
than the MTU issue. From what Robert writes, GRO definitely plays a role
too.
My only suggestion would be to route the question to ovs-discuss i
Hi Sukhdev,
Some comments inline.
Salvatore
On 23 January 2014 03:10, Sukhdev Kapur wrote:
> Hi All,
>
> During tempest sprint in Montreal, as we were writing API tests, we
> noticed a behavior which we believed is an issue with the Neutron API (or
> perhaps documentation or both)
>
> Let me s
An openstack deployment with an external DHCP server is definitely a
possible scenario; I don't think it can be implemented out-of-the-box with
the components provided by the core openstack services, but it should be
doable, and possibly even a requirement for deployments which integrate
openstack
It's worth noticing that elastic recheck is signalling bug 1253896 and bug
1224001, but they actually have the same signature.
I found also interesting that neutron is triggering a lot bug 1254890,
which appears to be a hang on /dev/nbdX during key injection; so far I have
no explanation for that.
I have some hints which the people looking at neutron failures might find
useful.
# 1 - in [1] a weird thing happens with DHCP. A DHCPDISCOVER for
fa:16:3e:cc:d9:c7
is pretty much simultaneously received by two dnsmasq instances, which are
listening on ports belonging to two distinct networks
Yair is probably referring to statistically independent tests, or whatever
case for which the following is true (P(x) is the probability that test x
succeeds):
P(4|3|2|1) = P(4|1) * P(3|1) * P(2|1)
This might apply to the tests we are adding to network_basic_ops scenario;
however it is worth noting
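Under that independence assumption the combined conditional probability is just a product, which a few lines make concrete (the per-test probabilities below are invented for illustration):

```python
# P(4,3,2 | 1) = P(4|1) * P(3|1) * P(2|1) when tests 2..4 are
# conditionally independent given that test 1 passed.
p2, p3, p4 = 0.9, 0.8, 0.95  # assumed per-test success probabilities

p_joint = p2 * p3 * p4
print(round(p_joint, 4))  # 0.684
```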
I gave a -2 yesterday to all my Neutron patches. I did that because I
thought something was wrong with them, but then I started to realize it's a
general problem.
It makes sense to give some priority to the patches Eugene linked, even if
it would be better to have some people root causing the issue
I think you're right Darragh.
It was actually Montreal's snow and cold freezing my brain as I
investigated the same issue a while ago and tried to change cirrOS to send
a DHCPDISCOVER every 10 seconds instead of 60 seconds, but then I moved to
something else as I wasn't even sure a new centos base
I have been seeing in the past 2 days timeout failures on gate jobs which I
am struggling to explain. An example is available in [1]
These are the usual failure that we associate with bug 1253896, but this
time I can verify that:
- The floating IP is correctly wired (IP and NAT rules)
- The DHCP po
it's just a confusing UI, you could
> always change the code so it filters out the floating-ip ports from view.
> Make them a pure implementation detail that a user never sees.
>
> Kevin
> ------
> *From:* Salvatore Orlando [sorla...@nicira.com]
> *Sent
TL;DR;
I have been looking back at the API and found out that it's a bit weird how
floating IPs are mapped to ports. This might or might not be an issue, and
several things can be done about it.
The rest of this post is a boring description of the problem and a possibly
even more boring list of pot
I don't think I can use better words than Mark's.
So I have nothing to add.
Salvatore
On 13 January 2014 23:29, Mark McClain wrote:
>
> On Jan 13, 2014, at 12:24 PM, Collins, Sean <
> sean_colli...@cable.comcast.com> wrote:
>
> > Hi,
> >
> > I posted a message to the mailing list[1] when I fir
Hi Jay,
replies inline.
I have probably found one more cause for this issue in the logs, and I
have added a comment to the bug report.
Salvatore
On 9 January 2014 19:10, Jay Pipes wrote:
> On Thu, 2014-01-09 at 09:09 +0100, Salvatore Orlando wrote:
> > I am afraid I need to co
I think I have found another fault triggering bug 1253896 when neutron is
enabled.
I've added a comment to https://bugs.launchpad.net/bugs/1253896
On another note, I'm seeing also occurrences of this bug with nova-network.
Is there anybody from the nova side looking at it (I can give it a try, but
I am afraid I need to correct you Jay!
This actually appears to be bug 1253896 [1]
Technically, what we call 'bug' here is actually a failure manifestation.
So far, we have removed several bugs causing this failure. The last patch
was pushed to devstack around Christmas.
Nevertheless, if you look
/review.openstack.org/#/c/61341/
> [2]https://review.openstack.org/#/c/63917/
>
> On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando
> wrote:
> > This thread is starting to get a bit confusing, at least for people with
> a
> > single-pipeline brain like me!
> >
This thread is starting to get a bit confusing, at least for people with a
single-pipeline brain like me!
I am not entirely sure if I understand correctly Isaku's proposal
concerning deferring the application of flow changes.
I think it's worth discussing in a separate thread, and a supporting pat
for ovs-vsctl, and I would
definitely welcome it; I just don't think it will be the ultimate solution.
Salvatore
On 6 January 2014 11:40, Isaku Yamahata wrote:
> On Fri, Dec 27, 2013 at 11:09:02AM +0100,
> Salvatore Orlando wrote:
>
> > Hi,
> >
> > We
Hi,
On IRC Yair Fried reminded me that we have not yet solved the issue around
security groups not enforced on the gate.
An accurate report of the current status is here [1]
And it seems there is some consensus around using the additional port
binding parameters for security groups (lp: [2] and
2, 2014 7:53:05 PM
> >> Subject: Re: [openstack-dev] [Neutron][qa] Parallel testing update
> >>
> >> Thanks for the updates here Salvatore, and for continuing to push on
> >> this! This is all great work!
> >>
> >> On Jan 2, 2014, at 6:57 AM, Salva
I've
associated all the bug reports with neutron, but some are probably more
tempest or nova issues.
Salvatore
[1] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel
On 27 December 2013 11:09, Salvatore Orlando wrote:
> Hi,
>
> We now have several patches under r
Hi Hemanth,
I think that the only job that needs to be integrated with gate tests and
vote is the one running tempest smoketests, which are plugin agnostic.
For tests specific to a given controller, they can surely be integrated
with upstream gerrit in order to vote on changes specific to the plug
I reckon the decision of keeping neutron's firewall API out of gate tests
was reasonable for the Havana release.
It might be argued that the other 'experimental' service, VPN, is already enabled
on the gate, but that did not happen before proving the feature was
reliable enough to not cause gate breakage
planation!
> >
> > Eugene.
> >
> >
> > On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon
> wrote:
> >
> > On Dec 2, 2013 9:04 PM, "Salvatore Orlando" wrote:
> > >
> > > Hi,
> > >
> > > As you might have noticed,
k so because otherwise we should see failures
even with nova-network, and that does not happen.
Salvatore
On 27 December 2013 08:14, IWAMOTO Toshihiro wrote:
> At Fri, 27 Dec 2013 01:53:59 +0100,
> Salvatore Orlando wrote:
> >
> > [1 ]
> > [1.1 ]
> > I put toge
I put together all the patches which we prepared for making parallel
testing work, and ran a few times 'check experimental' on the gate to see
whether it worked or not.
With parallel testing, the only really troubling issue are the scenario
tests which require to access a VM from a floating IP, an
t;
>
> All apologies,
>
> Roey Chen
>
>
>
> *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
> *Sent:* Sunday, December 22, 2013 1:35 PM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Neutron] Availability of external testing logs
>
&
Hi,
The patch: https://review.openstack.org/#/c/63558/ failed mellanox external
testing.
Subsequent patch sets have not been picked up by the mellanox testing
system.
I would like to see why the patch failed the job; if it breaks mellanox
plugin for any reason, I would be happy to fix it. However
Before starting this post I confess I did not read with the required level
of attention all this thread, so I apologise for any repetition.
I just wanted to point out that floating IPs in neutron are created
asynchronously when using the l3 agent, and I think this is clear to
everybody.
So when th
Hi,
I'm sorry I could not make it to meeting.
However, I can clearly see the progress being made from gerrit!
One thing which might be worth mentioning is that some of the new jobs are
already voting.
However, in some cases the logs are not accessible, and in other
cases the job seem t
>
>
> Horizontal scaling with multiple neutron-server hosts would be one option,
> but having support for multiple neutron rpc server processes in the same
>
> System would be really helpful for the scaling of neutron server
> especially during concurrent instance deployments
rrect the OVS agent loop slowdown issue?
>
> Does this patch address the DHCP agent updating the host file once in a
> minute and finally sending SIGKILL to dnsmasq process?
>
>
>
> I have tested with Marun’s patch
> https://review.openstack.org/#/c/61168/ regarding ‘Send
> DHCP no
Robert,
As you've deliberately picked on me I feel compelled to reply!
Jokes apart, I am going to retire that patch and push the new default in
neutron. Regardless of considerations on real loads vs gate loads, I think
it is correct to assume the default configuration should be one that will
allow
I believe your analysis is correct and inline with the findings reported in
the bug concerning OVS agent loop slowdown.
The issue has become even more prominent with the ML2 plugin due to an
increased number of notifications sent.
Another issue which makes delays on the DHCP agent worse is that i
Sadly this patch is now abandoned.
As stated in the review I did, the bug is one we should definitely fix, but
it is even more important to avoid introducing further race conditions.
I will look back at the latest comments from Zhang and see whether we can
go ahead and restore that patch or whethe
NSX distributed routers behave, from a tenant perspective, exactly like any
other router.
Beyond the service level factor, which I believe Ian is referring to as
well, there is no reason for distinguishing them from standard routers
through the API.
I believe the same applies to distributed router b
I think there's little to add to what Aaron said.
This mechanism might end up generating long-running sql transactions which
have a detrimental effect on the availability of connections in the pool as
well as the threat of the deadlock with eventlet.
We are progressively removing all the controlle
Thanks Miguel!
I will pick up a few tests from the list you put together, and I encourage
every neutron developer to do the same, too.
At the end of the day, it's not really different from scripting what you do
everyday to test the code you develop.
I am also available to help new contributors get
Hi Yoshihiro,
In my opinion the use of filters on changes is allowed by the smoketesting
policy we defined.
Notwithstanding that the approach of testing every patch is definitely the
safest, I understand in some cases the volume of patchsets uploaded to
gerrit might overwhelm the plugin-specific t
I generally tend to agree that once the distributed router is available,
nobody would probably want to use a centralized one.
Nevertheless, I think it is correct that, at least for the moment, some
advanced services would only work with a centralized router.
There might also be unforeseen scalabili
I think this bug was considered fixed because, at the time, once the patch
addressing it was merged, the bug automatically went into fix committed.
It should therefore be re-opened. Even if tweaking sql pool parameters
avoids the issue, this should be considered more of a mitigation rather
than a perman
Hi,
As you might have noticed, there has been some progress on parallel tests
for neutron.
In a nutshell:
* Armando fixed the issue with IP address exhaustion on the public network
[1]
* Salvatore has now a patch which has a 50% success rate (the last failures
are because of me playing with it) [2
>
> Am I missing something?
>
> On 27 November 2013 09:08, Salvatore Orlando wrote:
> > Thanks Maru,
> >
> > This is something my team had on the backlog for a while.
> > I will push some patches to contribute towards this effort in the next
> few
> >
Thanks Maru,
This is something my team had on the backlog for a while.
I will push some patches to contribute towards this effort in the next few
days.
Let me know if you're already thinking of targeting the completion of this
job for a specific deadline.
Salvatore
On 27 November 2013 17:50, M