It might be because of the wording used, but it seems to me that you're
making it sound like the group policy effort could have been completely
orthogonal to neutron as we know it now.
What I understood is that the declarative abstraction offered by group
policy could do without any existing
If we want to keep everything the way it is, we have to change everything
[1]
This is pretty much how I felt after reading this proposal, and I felt that
this quote, which Ivar will probably appreciate, was apt for the situation.
Recent events have spurred a discussion about the need for a change
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][CI] VMware mine sweeper for
Neutron temporarily disabled
On Jul 29, 2014 12:46 PM, Salvatore Orlando sorla...@nicira.com wrote:
Minesweeper for Neutron is now running again.
We updated the image for our compute nodes
cores who'll review these patches!)
Salvatore
[1] https://review.openstack.org/#/c/113554/
[2] https://review.openstack.org/#/c/113562/
On 7 August 2014 17:51, Salvatore Orlando sorla...@nicira.com wrote:
Thanks Armando,
The fix for the bug you pointed out was the reason for the failure we've
Hi,
VMware minesweeper caused havoc today causing exhaustion of the upstream
node pool.
The account has been disabled so things are back to normal now.
The root cause of the issue was super easy to spot once we realized we had missed [1].
I would like to apologise to the whole community on behalf of the
? Ideas? Criticisms? Compliments? :)
Steven
Original message
From: Salvatore Orlando sorla...@nicira.com
Date: 11/14/2013 4:23 AM (GMT-07:00)
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev
I think there will soon be a discussion regarding what the appropriate
location for plugin and drivers should be.
My personal feeling is that Neutron has simply reached the tipping point
where the high number of drivers and plugins is causing unnecessary load
for the core team and frustration for
August 2014 20:14, Salvatore Orlando sorla...@nicira.com wrote:
And just when the patch was only missing a +A, another bug slipped in!
The nova patch to fix it is available at [1]
And while we're there, it wouldn't be a bad idea to also push the neutron
full job, as non-voting, into the integrated
VMware minesweeper has filters which have been designed to cover the
largest possible subset of submissions without adding unnecessary load to
our scarce CI validation resources. This is probably why the analysis
reveals that not all patches are covered.
Therefore our filters exclude neutral changes
Hi Stuart,
As far as I can tell, this is the first time I've heard about this problem.
I can't make any judgement based on the details you've shared here, but I
would initially focus on OVS, the kernel, and their interactions.
For Neutron's l3 agent the only thing I can say is that it uses the
conntrack
As the conversation has drifted away from a discussion pertaining to the nova
core team, I have some comments inline as well.
On 18 August 2014 12:18, Thierry Carrez thie...@openstack.org wrote:
Doug Hellmann wrote:
On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
Let me
https://bugs.launchpad.net/nova/+bug/1305892
On 16 August 2014 01:13, Mark McClain mmccl...@yahoo-inc.com wrote:
On Aug 15, 2014, at 6:20 PM, Salvatore Orlando sorla...@nicira.com
wrote:
The neutron full job is finally voting, and the first patch [1] has
already passed it in gate checks!
I've collected
Hi Trevor,
thanks for sharing these minutes!
I would like to contribute a bit to this project's development, possibly
without ending up being just deadweight.
To this aim I have some comments inline.
Salvatore
On 18 August 2014 22:25, Trevor Vardeman trevor.varde...@rackspace.com
wrote:
Hi Joe,
manual rechecks are possible for mine sweeper. The new syntax is
vmware-recheck-patch. I found out vmware-recheck still triggered upstream
zuul.
I think it should be possible to submit a batch job with all the patches
that need to be rechecked without having to trigger the recheck from
In the current approach QoS support is being hardwired into ML2.
Maybe this is not the best way of doing that, as it may end up requiring
that every mech driver which enforces VIF configuration support it.
I see two routes. One is a mechanism driver similar to l2-pop; the other
goes by the service plugin, which should be exposed at the management
plane, implemented at the control plane, and if necessary also at the
data plane.
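To make the mechanism-driver route concrete, here is a minimal sketch
assuming the ML2 driver API of the time (neutron.plugins.ml2.driver_api);
the 'qos_profile_id' attribute and the _push_limits helper are hypothetical
names, not an existing Neutron interface:

```python
from neutron.plugins.ml2 import driver_api as api


class QosMechanismDriver(api.MechanismDriver):
    """Sketch: apply a QoS profile to ports as they are updated/bound."""

    def initialize(self):
        pass

    def update_port_postcommit(self, context):
        port = context.current
        profile_id = port.get('qos_profile_id')  # hypothetical attribute
        if profile_id:
            self._push_limits(port, profile_id)

    def _push_limits(self, port, profile_id):
        # Translate the profile into VIF-level rate limits and notify
        # the agent or backend controller (hypothetical).
        pass
```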
Some more comments inline.
Salvatore
On 20 August 2014 11:31, Mathieu Rohon mathieu.ro...@gmail.com wrote:
Hi
On Wed, Aug 20, 2014 at 12:12 AM, Salvatore
Some comments inline.
Salvatore
On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote:
Hi all,
I've read the proposal for incubator as described at [1], and I have
several comments/concerns/suggestions about it.
Overall, the
Hi Nader,
Sorry about that failure.
We have temporarily stopped mine sweeper for neutron while we update our
devstack images.
However, unfortunately some jobs did not complete properly, and therefore
you had failures without logs being reported.
The situation should be back to normal soon, and
Hi Kyle,
I have conflicts for 13 UTC - Thursday is already full for me - but I'll
try anyway to join the convo on IRC.
I agree the 3 blueprints you've mentioned are the ones we should really
merge for Juno.
To this aim, I wonder why [1] has not been set to high. Nevertheless it
does not matter a
Hi Karthik,
what do you mean when you say the plugin is incompatible with
https://review.openstack.org/#/c/114393/?
You're mentioning a rebase issue - but the patch in question appears to
apply cleanly to master.
Is your problem perhaps that patch #114393 does not have in its log some
changes you need to
TL;DR
A few folks are proposing to stop running tests for neutron advanced
services [ie: (lb|vpn|fw)aas] in the integrated gate, and run them only on
the neutron gate.
Reason: projects like nova are 100% orthogonal to neutron advanced
services. Also, there have been episodes in the past of
As it has been pointed out previously in this thread, debugging gate
failures is mostly about chasing race conditions, which in some cases
involve the most disparate interactions between OpenStack services [1].
Finding the root cause of these races is a mix of knowledge, pragmatism,
and luck.
I think it's ok to submit specs for Kilo - mostly because it would be a bit
pointless submitting them for Juno!
Salvatore
On 28 August 2014 09:26, Kevin Benton blak...@gmail.com wrote:
You could just make the kilo folder in your commit and then rebase it once
Kilo is open.
On Thu, Aug 28,
If you are running a version from a stable branch, changes in DB migrations
should generally be forbidden as the policy states since those migrations
are not likely to be executed again. Downgrading and then upgrading again
is extremely risky and I don't think anybody would ever do that.
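As an illustration of the policy, the safe alternative to changing a
merged migration is to add a new alembic revision on top; a minimal
sketch, with hypothetical revision ids, table, and column:

```python
from alembic import op
import sqlalchemy as sa

# revision identifiers (hypothetical)
revision = '3fa2b1c4d5e6'
down_revision = '2ab1c2d3e4f5'


def upgrade():
    # extend the schema going forward instead of rewriting history
    op.add_column('routers',
                  sa.Column('flavor_id', sa.String(36), nullable=True))


def downgrade():
    op.drop_column('routers', 'flavor_id')
```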
However,
I agree with Brandon that it will be difficult to find space for Octavia,
and the pod is a valid option.
Nevertheless it is always worth trying.
For the traditional load balancing service, instead, I reckon #1 is a very
good thing to discuss. The problem is that it is also hard to conclude anything
in
Some more comments from me inline.
Salvatore
On 2 September 2014 11:06, Adam Harwell adam.harw...@rackspace.com wrote:
I also agree with most of what Brandon said, though I am slightly
concerned by the talk of merging Octavia and [Neutron-]LBaaS-v2 codebases.
Beyond all the reasons listed
Octavia will
sit.
Nevertheless I think this is a discussion that is useful for the
medium/long term - it does not seem to me that there is any urgency here.
Regards Susanne
On Tue, Sep 2, 2014 at 9:18 AM, Salvatore Orlando sorla...@nicira.com
wrote:
Some more comments from me inline
On 3 September 2014 22:10, Joe Gordon joe.gord...@gmail.com wrote:
On Tue, Aug 26, 2014 at 4:47 PM, Salvatore Orlando sorla...@nicira.com
wrote:
TL;DR
A few folks are proposing to stop running tests for neutron advanced
services [ie: (lb|vpn|fw)aas] in the integrated gate, and run them
While it's good that somebody is addressing this specific issue, such
point solutions - e.g. "hey, I have a patch for that" - do not address
the general issue, which is that Neutron has very granular primitives that
force users to make multiple API requests for operations they regard as
This is a very important discussion - very closely related to the one going
on in this other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045768.html
.
Unfortunately it is also a discussion that tends to easily fragment and
move in a thousand different directions.
A few
Bug 1357055 [1] and 1323658 [2] affect neutron jobs and are among the top
gate offenders.
With this kind of bug, it's hard to tell whether the root cause lies with
neutron, nova, tempest, or even cirros.
However, it is not ok that these bugs are not assigned in neutron. We need
to have some
Nested commits in sqlalchemy should be seen as a single transaction on the
backend, shouldn't they?
I don't know anything about this specific problem, but the fact that unit
tests use sqlite might be a reason, since it's not really a full DBMS...
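To make the nesting point concrete, a minimal sketch using current
SQLAlchemy idioms (the model is hypothetical): begin_nested() only emits a
SAVEPOINT, so the backend sees a single enclosing transaction:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)


engine = create_engine('sqlite://')  # in-memory, as in unit tests
Base.metadata.create_all(engine)

with Session(engine) as session:
    with session.begin():             # outer (real) transaction
        with session.begin_nested():  # SAVEPOINT, not a backend COMMIT
            session.add(Item())
        # the nested "commit" merely released the savepoint; nothing
        # is durable yet
    # the single real COMMIT happens when the outer block exits
```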
I think that wrapping tests in transactions also
The HDN plugin is purely for educational purposes.
I remember it worked with devstack, but as I've not run it for a while it
might be broken now.
If you've found this plugin you should also have found the slides which
introduced it.
First you should assess whether you need to implement a new
Relying again on automatic schema generation could be error-prone. It can
only be enabled globally, and does not work when models are altered if the
table for the model being altered already exists in the DB schema.
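A minimal sketch of that limitation (table and column names are
hypothetical): create_all() skips tables that already exist, so a column
added to the model never reaches the schema:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, inspect)

engine = create_engine('sqlite://')

# first run: the model has a single column
md1 = MetaData()
Table('items', md1, Column('id', Integer, primary_key=True))
md1.create_all(engine)

# later the model grows a column...
md2 = MetaData()
Table('items', md2,
      Column('id', Integer, primary_key=True),
      Column('name', String(64)))  # new column
md2.create_all(engine)  # no-op: 'items' exists, no ALTER is ever issued

print([c['name'] for c in inspect(engine).get_columns('items')])  # ['id']
```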
I don't think it would be a big problem to put these migrations in the main
Please keep me in the loop.
The importance of ensuring consistent style across OpenStack APIs increases
as the number of integrated projects increases.
Unless we decide to merge all API endpoints as proposed in another thread!
[1]
Regards,
Salvatore
[1]
I generally tend to agree that once the distributed router is available,
nobody would probably want to use a centralized one.
Nevertheless, I think it is correct that, at least for the moment, some
advanced services would only work with a centralized router.
There might also be unforeseen
Hi Yoshihiro,
In my opinion the use of filters on changes is allowed by the smoketesting
policy we defined.
While the approach of testing every patch is definitely the safest, I
understand that in some cases the volume of patchsets uploaded to
gerrit might overwhelm the plugin-specific
Thanks Miguel!
I will pick up a few tests from the list you put together, and I encourage
every neutron developer to do the same too.
At the end of the day, it's not really different from scripting what you do
every day to test the code you develop.
I am also available to help new contributors
I believe your analysis is correct and in line with the findings reported in
the bug concerning OVS agent loop slowdown.
The issue has become even more prominent with the ML2 plugin due to an
increased number of notifications sent.
Another issue which makes delays on the DHCP agent worse is that
Robert,
As you've deliberately picked on me I feel compelled to reply!
Jokes aside, I am going to retire that patch and push the new default in
neutron. Regardless of considerations on real loads vs gate loads, I think
it is correct to assume the default configuration should be one that will
notifications regardless of agent status, but this patch
Also observed the same behavior.
Thanks & Regards,
Sreedhar Nathani
*From:* Salvatore Orlando [mailto:sorla...@nicira.com]
*Sent:* Thursday, December 12, 2013 6:21 PM
*To:* OpenStack Development Mailing List
,
but having support for multiple neutron RPC server processes in the same
system would be really helpful for the scaling of the neutron server
especially during concurrent instance deployments.
Thanks & Regards,
Sreedhar Nathani
*From:* Salvatore Orlando [mailto:sorla...@nicira.com]
*Sent
Hi,
I'm sorry I could not make it to meeting.
However, I can clearly see the progress being made from gerrit!
One thing which might be worth mentioning is that some of the new jobs are
already voting.
However, in some cases the logs are not accessible, and in other
cases the job seems
Before starting this post I confess I did not read this whole thread with
the required level of attention, so I apologise for any repetition.
I just wanted to point out that floating IPs in neutron are created
asynchronously when using the l3 agent, and I think this is clear to
everybody.
So when
,
Roey Chen
*From:* Salvatore Orlando [mailto:sorla...@nicira.com]
*Sent:* Sunday, December 22, 2013 1:35 PM
*To:* OpenStack Development Mailing List
*Subject:* [openstack-dev] [Neutron] Availability of external testing logs
Hi,
The patch: https://review.openstack.org/#/c/63558/ failed
I put together all the patches which we prepared for making parallel
testing work, and ran 'check experimental' a few times on the gate to see
whether it worked or not.
With parallel testing, the only really troubling issues are the scenario
tests which require access to a VM via a floating IP,
with nova-network, and that does not happen.
Salvatore
On 27 December 2013 08:14, IWAMOTO Toshihiro iwam...@valinux.co.jp wrote:
At Fri, 27 Dec 2013 01:53:59 +0100,
Salvatore Orlando wrote:
I put together all
, Dec 2, 2013 at 11:48 PM, Joe Gordon joe.gord...@gmail.com
wrote:
On Dec 2, 2013 9:04 PM, Salvatore Orlando sorla...@nicira.com wrote:
Hi,
As you might have noticed, there has been some progress on parallel
tests for neutron.
In a nutshell:
* Armando fixed the issue with IP
I reckon the decision to keep neutron's firewall API out of gate tests
was reasonable for the Havana release.
It might be argued that the other 'experimental' service, VPN, is already enabled
on the gate, but that did not happen before proving the feature was
reliable enough to not cause gate
the bug reports with neutron, but some are probably more
tempest or nova issues.
Salvatore
[1] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel
On 27 December 2013 11:09, Salvatore Orlando sorla...@nicira.com wrote:
Hi,
We now have several patches under review which improve
work!
On Jan 2, 2014, at 6:57 AM, Salvatore Orlando sorla...@nicira.com
wrote:
Hi again,
I've now run the experimental job a good deal of times, and I've
filed bugs
for all the issues which came out.
Most of them occurred no more than once among all test executions (I
think
welcome it; I just don't think it will be the ultimate solution.
Salvatore
On 6 January 2014 11:40, Isaku Yamahata isaku.yamah...@gmail.com wrote:
On Fri, Dec 27, 2013 at 11:09:02AM +0100,
Salvatore Orlando sorla...@nicira.com wrote:
Hi,
We now have several patches under review which
This thread is starting to get a bit confusing, at least for people with a
single-pipeline brain like me!
I am not entirely sure if I understand correctly Isaku's proposal
concerning deferring the application of flow changes.
I think it's worth discussing in a separate thread, and a supporting
[1] https://review.openstack.org/#/c/61341/
[2] https://review.openstack.org/#/c/63917/
On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando sorla...@nicira.com
wrote:
This thread is starting to get a bit confusing, at least for people with
a
single-pipeline brain like me!
I am not entirely sure if I understand correctly
I am afraid I need to correct you Jay!
This actually appears to be bug 1253896 [1]
Technically, what we call 'bug' here is actually a failure manifestation.
So far, we have removed several bugs causing this failure. The last patch
was pushed to devstack around Christmas.
Nevertheless, if you
I think I have found another fault triggering bug 1253896 when neutron is
enabled.
I've added a comment to https://bugs.launchpad.net/bugs/1253896
On another note, I'm also seeing occurrences of this bug with nova-network.
Is there anybody from the nova side looking at it (I can give it a try,
Hi Jay,
replies inline.
I have probably found one more cause for this issue in the logs, and I
have added a comment to the bug report.
Salvatore
On 9 January 2014 19:10, Jay Pipes jaypi...@gmail.com wrote:
On Thu, 2014-01-09 at 09:09 +0100, Salvatore Orlando wrote:
I am afraid I need
I don't think I can use better words than Mark's.
So I have nothing to add.
Salvatore
On 13 January 2014 23:29, Mark McClain mmccl...@yahoo-inc.com wrote:
On Jan 13, 2014, at 12:24 PM, Collins, Sean
sean_colli...@cable.comcast.com wrote:
Hi,
I posted a message to the mailing list[1]
TL;DR
I have been looking back at the API and found out that it's a bit weird how
floating IPs are mapped to ports. This might or might not be an issue, and
several things can be done about it.
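For readers missing the context, a hedged sketch of the two objects
involved (all field values hypothetical): the floatingip resource
references the tenant port via port_id, while the floating address itself
is realized as a separate, system-owned port on the external network:

```python
floatingip = {
    'id': 'fip-uuid',
    'floating_ip_address': '203.0.113.10',
    'floating_network_id': 'ext-net-uuid',
    'port_id': 'tenant-port-uuid',        # the associated internal port
    'fixed_ip_address': '10.0.0.5',
}

# ...and the port that actually carries the floating address:
floating_port = {
    'id': 'float-port-uuid',
    'network_id': 'ext-net-uuid',
    'device_owner': 'network:floatingip',
    'fixed_ips': [{'ip_address': '203.0.113.10'}],
}
```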
The rest of this post is a boring description of the problem and a possibly
even more boring list of
good, but it's just a confusing UI; you could
always change the code so it filters out the floating-ip ports from view.
Make them a pure implementation detail that a user never sees.
Kevin
--
*From:* Salvatore Orlando [sorla...@nicira.com]
*Sent:* Tuesday, January 14
I have been seeing in the past 2 days timeout failures on gate jobs which I
am struggling to explain. An example is available in [1]
These are the usual failures that we associate with bug 1253896, but this
time I can verify that:
- The floating IP is correctly wired (IP and NAT rules)
- The DHCP
I think you're right Darragh.
It was actually Montreal's snow and cold freezing my brain as I
investigated the same issue a while ago and tried to change cirrOS to send
a DHCPDISCOVER every 10 seconds instead of 60 seconds, but then I moved to
something else as I wasn't even sure a new centos
I gave a -2 yesterday to all my Neutron patches. I did that because I
thought something was wrong with them, but then I started to realize it's a
general problem.
It makes sense to give some priority to the patches Eugene linked, even if
it would be better to have some people root causing the
Yair is probably referring to statistically independent tests, or any case
for which the following is true (P(x) is the probability that a test
succeeds):
P(4|3|2|1) = P(4|1) * P(3|1) * P(2|1)
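Concretely, independence lets the joint pass probability factorize into
the product of the per-test pass rates; a tiny sketch with hypothetical
numbers:

```python
# hypothetical per-test pass probabilities for four independent tests
p = [0.99, 0.97, 0.95, 0.98]

joint = 1.0
for p_i in p:
    joint *= p_i  # independence: the joint probability factorizes

print(f"joint pass probability: {joint:.3f}")  # ~0.894
```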
This might apply to the tests we are adding to network_basic_ops scenario;
however it is worth noting
I have some hints which the people looking at neutron failures might find
useful.
# 1 - in [1] a weird thing happens with DHCP. A DHCPDISCOVER for
fa:16:3e:cc:d9:c7
is pretty much simultaneously received by two dnsmasq instances, which are
listening on ports belonging to two distinct
It's worth noting that elastic recheck is signalling bug 1253896 and bug
1224001, but they actually have the same signature.
I also found it interesting that neutron is triggering bug 1254890 a lot,
which appears to be a hang on /dev/nbdX during key injection; so far I have
no explanation for that.
An OpenStack deployment with an external DHCP server is definitely a
possible scenario; I don't think it can be implemented out-of-the-box with
the components provided by the core OpenStack services, but it should be
doable and possibly even a requirement for deployments which integrate
Hi Sukhdev,
Some comments inline.
Salvatore
On 23 January 2014 03:10, Sukhdev Kapur sukh...@aristanetworks.com wrote:
Hi All,
During tempest sprint in Montreal, as we were writing API tests, we
noticed a behavior which we believed is an issue with the Neutron API (or
perhaps documentation
Hi,
My expertise on the subject is pretty much zero, but the performance loss
(which I think is throughput loss) is so blatant that perhaps there's more
to it than the MTU issue. From what Robert writes, GRO definitely plays a role
too.
My only suggestion would be to route the question to ovs-discuss
I've found out that several jobs are exhibiting failures like bug 1254890
[1] and bug 1253896 [2] because openvswitch seems to be crashing the kernel.
The kernel trace usually reports either neutron-ns-metadata-proxy or
dnsmasq as the offending process, but [3] seems to clearly point to
ovs-vsctl.
It might be creative, but it's a shame that it did not serve the purpose.
At least it confirmed the kernel bug was related to process termination in
network namespaces but was not due to SIGKILL exclusively, as it occurred
with SIGTERM as well.
On the bright side, Mark has now pushed another patch
Sean,
You surely have my permission. If it's publicly available you should not
even ask for it!
Well, unless it's copyrighted, but at least I could not possibly copyright
material mostly stolen from other people ;)
On the other hand the issue with slideshare and such is that material might
come
Hi,
By merging patch https://review.openstack.org/#/c/53459/, full tenant
isolation has been turned on for neutron API tests.
We have executed a few experimental checks before merging and we found out
that the isolated jobs were always passing.
However, the gating jobs both in tempest and nova
.
But still, we should probably make gate tests symmetric again.
Salvatore
On 7 February 2014 15:05, Salvatore Orlando sorla...@nicira.com wrote:
Hi,
By merging patch https://review.openstack.org/#/c/53459/, full tenant
isolation has been turned on for neutron API tests.
We have executed
It seems this work item is made of several blueprints, some of which are
not yet approved. This is true at least for the Neutron blueprint regarding
policy extensions.
Since I first looked at this spec I've been wondering why nova has been
selected as an endpoint for network operations rather
Hi,
I've provided an update on this bug (which, by the way, is finally no
longer the #1 gate blocker!).
Please see the bug report [1]
I've tagged heat as well since over 50% of hits are from a non-voting heat
job which is trying to ssh into machines using their private IP; this might
not work with
I have tried to collect more information on neutron full job failures.
So far there have been 219 failures and 891 successes, for an overall
failure rate of about 19.8% (219 / (219 + 891)), which is in line with
Sean's evaluation.
The count was performed exclusively on jobs executed against the master branch.
The failure rate
are assigned to
you, I was wondering if you'd use some help. I guess we can coordinate
better when you are online.
cheers,
Rossella
On 02/23/2014 03:14 AM, Salvatore Orlando wrote:
I have tried to collect more information on neutron full job failures.
So far there have been 219 failures
Hi Assaf,
some comments inline.
As a general comment, I'd prefer to move all the discussions to gerrit
since the patches are now in review.
That is, unless you have design concerns (the ones below look more related to
the implementation to me)
Salvatore
On 24 February 2014 15:58, Assaf Muller
I understand that the fact that resources with invalid tenant_ids can be
created (only with admin rights, at least for Neutron) can be annoying.
However, I support Jay's point on cross-project interactions. If tenant_id
validation (and orphaned resource management) can't be efficiently handled,
then
Hi Kyle,
I think conceptually your approach is fine.
I would have had concerns if you were trying to manage ODL life cycle
through devstack (like installing/uninstalling it or configuring the ODL
controller).
But looking at your code it seems you're just setting up the host so that
it could work
Hi,
I read this thread and I think it moves us in the right direction: away
from provider mapping and, most importantly, toward abstracting away
backend-specific details.
I was however wondering if flavours (or service offerings) will act
more like a catalog or a scheduler.
The difference,
the update to the analysis performed by Salvatore Orlando a few weeks
ago [1]
I used the following query for Logstash [2] to detect the failures of the
last 48 hours.
There were 77 failures (40% of the total).
I classified them and obtained the following:
21% due to infra issues
16% https
It is a common practice to have both an operational and an administrative
status.
I agree ACTIVE as a term might be confusing. Even in the case of a
port, it is not really clear whether it means READY or LINK UP.
Terminology-wise I would suggest READY rather than DEPLOYED, as it is a
term
Replies inline,
Salvatore
On 17 March 2014 21:38, Eugene Nikanorov enikano...@mirantis.com wrote:
On Mon, Mar 17, 2014 at 11:46 PM, Salvatore Orlando
sorla...@nicira.comwrote:
It is a common practice to have both an operational and an administrative
status.
I agree ACTIVE as a term
Hi Vinay,
I left a few comments on the specification document.
While I understand this is functional for the VPC use case, there might
also be applications outside of the VPC.
My only concern is that, at least in the examples in the document, this
appears to violate a bit the tenets of neutron
Thanks a lot!
We now need to get on these bugs, and define with QA an acceptable failure
rate criterion for switching the full job to voting.
It would be good to have a chance to only run the tests against code which
is already in master.
To this aim we might push a dummy patch, and keep it
Hi Jakub,
thanks for finding this out.
I think there might be a migration which needs to be fixed.
I will look into the logs you linked to see what can be done.
Salvatore
ps: I've also added a [NEUTRON] tag to the subject so it will be easy for
people using mailing list filters to retrieve
Inline
Salvatore
On 24 March 2014 23:01, Matthew Treinish mtrein...@kortar.org wrote:
On Mon, Mar 24, 2014 at 09:56:09PM +0100, Salvatore Orlando wrote:
Thanks a lot!
We now need to get on these bugs, and define with QA an acceptable
failure
rate criterion for switching the full job
I hope we can sort this out on the mailing list or IRC, without having to
schedule emergency meetings.
Salvatore
On 25 March 2014 22:58, Nachi Ueno na...@ntti3.com wrote:
Hi Nova, Neutron Team
I would like to discuss issue of Neutron + Nova + OVS security group fix.
We have a discussion in IRC
regarding the fact that a libvirt network filter name
should not be added to the guest config.
Salvatore
On 26 March 2014 05:57, Akihiro Motoki mot...@da.jp.nec.com wrote:
Hi Nachi and the teams,
(2014/03/26 9:57), Salvatore Orlando wrote:
I hope we can sort this out on the mailing list or IRC, without
saying.
Salvatore
[1] https://bugs.launchpad.net/neutron/+bug/1297469
On 26 March 2014 09:02, Salvatore Orlando sorla...@nicira.com wrote:
The thread branched, and it's getting long.
I'm trying to summarize the discussion for other people to quickly catch
up.
- The bug being targeted
On 26 March 2014 19:19, James E. Blair jebl...@openstack.org wrote:
Salvatore Orlando sorla...@nicira.com writes:
On another note, we noticed that the duplicated jobs currently executed
for redundancy in neutron actually all seem to point to the same build id.
I'm not sure then whether we're
Hi Thomas,
it probably wouldn't be a bad idea if you could share the patches you're
applying to the default configuration files.
I think all distros are patching them anyway, so this might allow us to
provide mostly ready-to-use config files.
Is there a chance you can push something to gerrit?
Hi Simon,
I agree with your concern.
Let me point out however that VMware mine sweeper runs almost all the smoke
suite.
It's been down a few days for an internal software upgrade, so perhaps you
have not seen any recent report from it.
I've seen some CI systems testing as little as
Hi Rudra,
Some comments inline.
Regards,
Salvatore
On 9 Oct 2013 19:27, Rudra Rugge rru...@juniper.net wrote:
Updated the subject [neutron]
Hi All,
Is the extra route extension always tied to the router extension, or
can it live in a separate route-table container? If extra-route
Hi Marco,
At least two of your questions clearly hint at the dichotomy between subnet
and network, which appear to be redundant.
Multi-homing on a single network is a potential use case for this, albeit
a very limited one, since one might argue that in a cloud scenario, instead of
wrote:
On 10/17/2013 08:46 AM, Salvatore Orlando wrote:
Hi,
in the discussion for patch
https://review.openstack.org/#/c/50880/, Sean
asked a very reasonable question:
so are all these [Neutron] extensions always loaded on all
environments? If not, how
It might be worth both documenting this limitation in the admin guide and
providing a fix which we should backport to havana too.
It sounds like the fix should not be too extensive, so the backport should
be easily feasible.
Regards,
Salvatore
On 18 October 2013 21:50, Édouard Thuleau
Gary,
In the context of the nvp plugin we use a mechanism for enabling 'advanced'
capabilities of a router leveraging a 'router_service_type' extension.
This allows us to configure two types of routers: one which does just L3
forwarding, NAT, and a few other things, and another one which also has