It might be because of the wording used, but it seems to me that you're
making it sound like the group policy effort could have been completely
orthogonal to neutron as we know it now.
What I understood is that the declarative abstraction offered by group
policy could do without any existing neutr
"If we want to keep everything the way it is, we have to change everything"
[1]
This is pretty much how I felt after reading this proposal, and I thought that
this quote, which Ivar will probably appreciate, was apt for the situation.
Recent events have spurred a discussion about the need for a change
pment Mailing List
> Subject: Re: [openstack-dev] [Neutron][CI] VMware mine sweeper for
> Neutron temporarily disabled
>
>
> On Jul 29, 2014 12:46 PM, "Salvatore Orlando" wrote:
> >
> > Minesweeper for Neutron is now running again.
> > We updated the image f
and infra cores who'll review these patches!)
Salvatore
[1] https://review.openstack.org/#/c/113554/
[2] https://review.openstack.org/#/c/113562/
On 7 August 2014 17:51, Salvatore Orlando wrote:
> Thanks Armando,
>
> The fix for the bug you pointed out was the reason for the fai
Hi,
VMware minesweeper caused havoc today, exhausting the upstream node pool.
The account has been disabled so things are back to normal now.
The root cause of the issue was super easy once we realized we missed [1].
I would like to apologise to the whole community on behalf of the VMwa
> Hi Salvatore!
>
> My responses (to your responses) are in-line. I think we could also use
> some feedback from the rest of the community on this, as well … would it be
> a good idea to discuss the implementation further at the next IRC meeting?
>
> Good Stuff!!
>
> Steven
I think there will soon be a discussion regarding what the appropriate
location for plugin and drivers should be.
My personal feeling is that Neutron has simply reached the tipping point
where the high number of drivers and plugins is causing unnecessary load
for the core team and frustration for t
105694/
On 12 August 2014 20:14, Salvatore Orlando wrote:
> And just when the patch was only missing a +A, another bug slipped in!
> The nova patch to fix it is available at [1]
>
> And while we're there, it won't be a bad idea to also push the neutron
> full job, as
VMware minesweeper has filters which have been designed to cover the
largest possible subset of submissions without adding unnecessary load to
our scarce CI validation resources. This is probably why the analysis
reveals that not all patches are covered.
Therefore our filters exclude neutral changes s
Hi Stuart,
As far as I can tell, this is the first time I've heard of this problem.
I can't make any judgment with the details you've shared here, but I would
initially focus on ovs, the kernel and their interactions.
For Neutron's l3 agent the only thing I can say is that it uses the
conntrack mod
As the conversation has drifted away from a discussion pertaining to the
nova core team, I have some comments inline as well.
On 18 August 2014 12:18, Thierry Carrez wrote:
> Doug Hellmann wrote:
> > On Aug 13, 2014, at 4:42 PM, Russell Bryant wrote:
> >> Let me try to say it another way. You se
view.openstack.org/#/c/114932/
[3] https://bugs.launchpad.net/nova/+bug/1305892
On 16 August 2014 01:13, Mark McClain wrote:
>
> On Aug 15, 2014, at 6:20 PM, Salvatore Orlando
> wrote:
>
> The neutron full job is finally voting, and the first patch [1] has
> already passed it in gat
Hi Trevor,
thanks for sharing these minutes!
I would like to contribute a bit to this project's development, possibly
without ending up being just deadweight.
To this aim I have some comments inline.
Salvatore
On 18 August 2014 22:25, Trevor Vardeman
wrote:
> Agenda items are numbered, and t
Hi Joe,
manual rechecks are possible for mine sweeper. The new syntax is
vmware-recheck-patch. I found out vmware-recheck still triggered upstream
zuul.
I think it should be possible to submit a batch job with all the patches
that need to be rechecked without having to trigger the recheck from
ge
In the current approach QoS support is being "hardwired" into ML2.
Maybe this is not the best way of doing that, as it may end up requiring
every mech driver that enforces VIF configuration to support it.
I see two routes. One is a mechanism driver similar to l2-pop, and then you
mig
he one provided by
the service plugin should be exposed at the management plane, implemented
at the control plane, and if necessary also at the data plane.
Some more comments inline.
Salvatore
On 20 August 2014 11:31, Mathieu Rohon wrote:
> Hi
>
> On Wed, Aug 20, 2014 at 12:12 AM, Sa
ed in a recent merge. Will we have a big pile of
> various migration scripts that users will need to pick from depending on
> which services he/she wants to use from the various neutron incubated
> projects?
>
>
> On Wed, Aug 20, 2014 at 4:03 AM, Salvatore Orlando
> wrote:
>
>&
Some comments inline.
Salvatore
On 20 August 2014 17:38, Ihar Hrachyshka wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> Hi all,
>
> I've read the proposal for incubator as described at [1], and I have
> several comments/concerns/suggestions to this.
>
> Overall, the idea of givi
Hi Nader,
Sorry about that failure.
We have temporarily stopped mine sweeper for neutron while we update our
devstack images.
However, unfortunately some jobs did not complete properly, and therefore
you had failures without logs being reported.
The situation should be back to normal soon, and yo
Hi Kyle,
I have a conflict at 13 UTC - Thursday is already full for me, but I'll
try anyway to join the conversation on IRC.
I agree the 3 blueprints you've mentioned are the ones we should really
merge for Juno.
To this aim, I wonder why [1] has not been set to high. Nevertheless it
does not matter a
Hi Karthik,
what do you mean that the plugin is incompatible with
https://review.openstack.org/#/c/114393/?
You're mentioning a rebase issue - but the patch in question appears to
apply cleanly to master.
Is your problem perhaps that patch #114393 does not have in its log some
changes you need to acc
TL; DR
A few folks are proposing to stop running tests for neutron advanced
services [ie: (lb|vpn|fw)aas] in the integrated gate, and run them only on
the neutron gate.
Reason: projects like nova are 100% orthogonal to neutron advanced
services. Also, there have been episodes in the past of unreli
As it has been pointed out previously in this thread debugging gate
failures is mostly about chasing race conditions, which in some cases
involve the most disparate interactions between Openstack services [1].
Finding the root cause of these races is a mix of knowledge, pragmatism,
and luck. Havin
I think it's ok to submit specs for Kilo - mostly because it would be a bit
pointless submitting them for Juno!
Salvatore
On 28 August 2014 09:26, Kevin Benton wrote:
> You could just make the kilo folder in your commit and then rebase it once
> Kilo is open.
>
>
> On Thu, Aug 28, 2014 at 12:0
If you are running a version from a stable branch, changes in DB migrations
should generally be forbidden, as the policy states, since those migrations
are not likely to be executed again.
is extremely risky and I don't think anybody would ever do that.
However, if
I agree with Brandon that it will be difficult to find spaces for Octavia,
and the pod is a valid option.
Nevertheless it is always worth trying.
For the "traditional" load balancing service, instead, I reckon #1 is a
very good thing to discuss. The problem is that it is also hard to conclude
anything i
Some more comments from me inline.
Salvatore
On 2 September 2014 11:06, Adam Harwell wrote:
> I also agree with most of what Brandon said, though I am slightly
> concerned by the talk of merging Octavia and [Neutron-]LBaaS-v2 codebases.
>
Beyond all the reasons listed in this thread - merging
m
having a Neutron LBaaS driver pointing to libra (ie: it was much easier to
just deploy libra instead of neutron lbaas).
Summarising, I don't yet have an opinion regarding where Octavia will
sit.
Nevertheless I think this is a discussion that is useful for the
medium/long term - it do
On 3 September 2014 22:10, Joe Gordon wrote:
>
>
>
> On Tue, Aug 26, 2014 at 4:47 PM, Salvatore Orlando
> wrote:
>
>> TL; DR
>> A few folks are proposing to stop running tests for neutron advanced
>> services [ie: (lb|vpn|fw)aas] in the integrated gate, and
While it's good that somebody is addressing this specific issue, perhaps
one-off solutions - eg: "hey, I have a patch for that" - are not
addressing the general issue, which is that Neutron has very granular
primitives that force users to make multiple API requests for operations
they regard as at
This is a very important discussion - very closely related to the one going
on in this other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045768.html
.
Unfortunately it is also a discussion that tends to easily fragment and
move in a thousand different directions.
A few
Bug 1357055 [1] and 1323658 [2] affect neutron jobs and are among the top
gate offenders.
With this kind of bugs, it's hard to tell whether the root cause lies with
neutron, nova, tempest, or even cirros.
However, it is not ok that these bugs are not assigned in neutron. We need
to have some neutro
Nested commits in sqlalchemy should be seen as a single transaction on the
backend, shouldn't they?
I don't know anything about this specific problem, but the fact that unit
tests use sqlite might be a reason, since it's not really a full DBMS...
I think that wrapping tests in transaction also wil
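To make the point concrete, here is a minimal sketch (assuming SQLAlchemy
1.4+; the Port model and names are made up for illustration): a "nested"
transaction is a SAVEPOINT inside the enclosing transaction, so the backend
sees a single transaction until the outer commit.

```python
# Minimal sketch: session.begin_nested() emits SAVEPOINT / RELEASE
# SAVEPOINT inside the enclosing transaction, so the backend performs
# one real COMMIT at the end. Model is invented for illustration.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Port(Base):
    __tablename__ = "ports"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")  # in-memory SQLite, as in unit tests
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Port(name="outer"))
    with session.begin_nested():     # SAVEPOINT, not a second transaction
        session.add(Port(name="inner"))
    session.commit()                 # the single COMMIT on the backend

with Session(engine) as session:
    count = session.query(Port).count()
print(count)
```

Note that SQLite's more limited transactional semantics are one reason
behavior observed in sqlite-backed unit tests may not match a full DBMS.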
The HDN plugin is purely for educational purposes.
I remember it worked with devstack, but as I've not run it for a while it
might be broken now.
If you've found this plugin you should also have found the slides which
introduced it.
First you should assess whether you need to implement a new plug
: https://bugs.launchpad.net/neutron/+bug/1248423
>
>
>
> Thanks
>
>
>
> Avishay
>
>
>
> *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
> *Sent:* Thursday, November 14, 2013 1:15 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subj
For what it's worth, we have considered this aspect from the perspective of
the Neutron plugin my team maintains (NVP) during the past release cycle.
The synchronous model that most plugins with a controller on the backend
currently implement is simple and convenient, but has some flaws:
- reliabili
reinvent the wheel - or, in other
words, I surely don't want to write code if that code has already been
written by somebody else for me.
> More details @ http://www.slideshare.net/harlowja/taskflow-27820295
>
> From: Salvatore Orlando
> Date: Tuesday, November 19, 2013 2:22 P
I've noticed that
https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
stores the network client in local.strong_store, which is a reference to
corolocal.local (the class, not the instance).
In Russell's example instead the code accesses local.store which is an
instance
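The class-vs-instance distinction can be illustrated with the stdlib
threading.local, which behaves analogously to eventlet's corolocal.local for
greenthreads (a hedged sketch; the names and values are invented):

```python
# Sketch of the class-vs-instance distinction. An *instance* of
# threading.local gives each thread its own private attributes; setting
# attributes on the class object itself would instead be shared state,
# which is the kind of mistake described above for corolocal.local.
import threading

store = threading.local()   # an instance: per-thread storage
seen = {}

def worker(name):
    store.client = name     # visible only to this thread
    seen[name] = store.client

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(seen)   # each thread saw only the value it stored itself
```

Storing on the class rather than an instance would make every greenthread
see (and overwrite) the same attribute, defeating the purpose of the local
store.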
As it's been over two weeks since the issue was discussed at the summit,
I've tried to summarize the current status in the summit session's
etherpad: https://etherpad.openstack.org/p/icehouse-summit-qa-neutron
Please feel free to update the etherpad if you feel something is incorrect,
or I forgot
Forgive my ignorance, but I would like to make sure that packets generated
from Openstack instances on neutron private networks will actually be able
to reach public addresses.
In its default configuration the traffic from the OS instance is SNATed and
the SRC IP will be rewritten to an address in
Hi,
I've been recently debugging some issues I've had with the OVS agent, and I
found out that in many cases (possibly every case) the code just logs
errors from ovs-vsctl and ovs-ofctl without taking any action in the
control flow.
For instance, the routine which should do the wiring for a port
Thanks Kyle,
More comments inline.
Salvatore
On 25 November 2013 16:03, Kyle Mestery (kmestery) wrote:
> On Nov 25, 2013, at 8:28 AM, Salvatore Orlando
> wrote:
> >
> > Hi,
> >
> > I've been recently debugging some issues I've had with the OVS agen
Thanks Maru,
This is something my team had on the backlog for a while.
I will push some patches to contribute towards this effort in the next few
days.
Let me know if you're already thinking of targeting the completion of this
job for a specific deadline.
Salvatore
On 27 November 2013 17:50, M
>
> Am I missing something?
>
> On 27 November 2013 09:08, Salvatore Orlando wrote:
> > Thanks Maru,
> >
> > This is something my team had on the backlog for a while.
> > I will push some patches to contribute towards this effort in the next
> few
> >
Hi,
As you might have noticed, there has been some progress on parallel tests
for neutron.
In a nutshell:
* Armando fixed the issue with IP address exhaustion on the public network
[1]
* Salvatore now has a patch with a 50% success rate (the last failures
are because of me playing with it) [2
I think this bug was considered fixed because, at the time, once the patch
addressing it was merged, the bug automatically went into fix committed.
It should therefore be re-opened. Even if tweaking sql pool parameters
avoids the issue, this should be considered a mitigation rather
than a perman
I generally tend to agree that once the distributed router is available,
nobody would probably want to use a centralized one.
Nevertheless, I think it is correct that, at least for the moment, some
advanced services would only work with a centralized router.
There might also be unforeseen scalabili
Hi Yoshihiro,
In my opinion the use of filters on changes is allowed by the smoketesting
policy we defined.
Notwithstanding that the approach of testing every patch is definitely the
safest, I understand in some cases the volume of patchsets uploaded to
gerrit might overwhelm the plugin-specific t
Thanks Miguel!
I will pick up a few tests from the list you put together, and I encourage
every neutron developer to do the same.
At the end of the day, it's not really different from scripting what you do
everyday to test the code you develop.
I am also available to help new contributors get
I think there's little to add to what Aaron said.
This mechanism might end up generating long-running sql transactions, which
have a detrimental effect on the availability of connections in the pool,
as well as the threat of deadlocks with eventlet.
We are progressively removing all the controlle
NSX distributed routers behave, from a tenant perspective, exactly like any
other router.
Beyond the service level factor, which I believe Ian is referring to as
well, there is no reason for distinguishing them from standard routers
through the API.
I believe the same applies to distributed router b
Sadly this patch is now abandoned.
As stated in the review I did, the bug is one we should definitely fix, but
it is even more important to avoid introducing further race conditions.
I will look back at the latest comments from Zhang and see whether we can
go ahead and restore that patch or whethe
I believe your analysis is correct and in line with the findings reported in
the bug concerning OVS agent loop slowdown.
The issue has become even more prominent with the ML2 plugin due to an
increased number of notifications sent.
Another issue which makes delays on the DHCP agent worse is that i
Robert,
As you've deliberately picked on me I feel compelled to reply!
Jokes apart, I am going to retire that patch and push the new default in
neutron. Regardless of considerations on real loads vs gate loads, I think
it is correct to assume the default configuration should be one that will
allow
rrect the OVS agent loop slowdown issue?
>
> Does this patch address the DHCP agent updating the host file once in a
> minute and finally sending SIGKILL to dnsmasq process?
>
>
>
> I have tested with Marun’s patch
> https://review.openstack.org/#/c/61168/ regarding ‘Send
> DHCP no
>
>
> Horizontal scaling with multiple neutron-server hosts would be one option,
> but having support for multiple neutron rpc server processes in the same
>
> System would be really helpful for the scaling of neutron server
> especially during concurrent instance deployments
Hi,
I'm sorry I could not make it to meeting.
However, I can clearly see the progress being made from gerrit!
One thing which might be worth mentioning is that some of the new jobs are
already voting.
However, in some cases the logs are not accessible, and in other
cases the job seem t
Before starting this post, I confess I did not read this whole thread with
the required level of attention, so I apologise for any repetition.
I just wanted to point out that floating IPs in neutron are created
asynchronously when using the l3 agent, and I think this is clear to
everybody.
So when th
Hi,
The patch: https://review.openstack.org/#/c/63558/ failed mellanox external
testing.
Subsequent patch sets have not been picked up by the mellanox testing
system.
I would like to see why the patch failed the job; if it breaks mellanox
plugin for any reason, I would be happy to fix it. However
t;
>
> All apologies,
>
> Roey Chen
>
>
>
> *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
> *Sent:* Sunday, December 22, 2013 1:35 PM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Neutron] Availability of external testing logs
>
&
I put together all the patches which we prepared for making parallel
testing work, and ran 'check experimental' a few times on the gate to see
whether it worked or not.
With parallel testing, the only really troubling issue is with the scenario
tests which require accessing a VM from a floating IP, an
k so because otherwise we should see failures
even with nova-network, and that does not happen.
Salvatore
On 27 December 2013 08:14, IWAMOTO Toshihiro wrote:
> At Fri, 27 Dec 2013 01:53:59 +0100,
> Salvatore Orlando wrote:
> >
> > [1 ]
> > [1.1 ]
> > I put toge
planation!
> >
> > Eugene.
> >
> >
> > On Mon, Dec 2, 2013 at 11:48 PM, Joe Gordon
> wrote:
> >
> > On Dec 2, 2013 9:04 PM, "Salvatore Orlando" wrote:
> > >
> > > Hi,
> > >
> > > As you might have noticed,
I reckon the decision of keeping neutron's firewall API out of gate tests
was reasonable for the Havana release.
It might be argued that the other 'experimental' service, VPN, is already
enabled on the gate, but that did not happen before proving the feature was
reliable enough to not cause gate breakage
Hi Hemanth,
I think that the only job that needs to be integrated with gate tests and
vote is the one running tempest smoketests, which are plugin agnostic.
For tests specific to a given controller, they can surely be integrated
with upstream gerrit in order to vote on changes specific to the plug
I've
associated all the bug reports with neutron, but some are probably more
tempest or nova issues.
Salvatore
[1] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel
On 27 December 2013 11:09, Salvatore Orlando wrote:
> Hi,
>
> We now have several patches under r
2, 2014 7:53:05 PM
> >> Subject: Re: [openstack-dev] [Neutron][qa] Parallel testing update
> >>
> >> Thanks for the updates here Salvatore, and for continuing to push on
> >> this! This is all great work!
> >>
> >> On Jan 2, 2014, at 6:57 AM, Salva
Hi,
On IRC Yair Fried reminded me that we have not yet solved the issue of
security groups not being enforced on the gate.
An accurate report of the current status is here [1]
And it seems there is some consensus around using the additional port
binding parameters for security groups (lp: [2] and
for ovs-vsctl, and I would
definitely welcome it; I just don't think it will be the ultimate solution.
Salvatore
On 6 January 2014 11:40, Isaku Yamahata wrote:
> On Fri, Dec 27, 2013 at 11:09:02AM +0100,
> Salvatore Orlando wrote:
>
> > Hi,
> >
> > We
This thread is starting to get a bit confusing, at least for people with a
single-pipeline brain like me!
I am not entirely sure if I understand correctly Isaku's proposal
concerning deferring the application of flow changes.
I think it's worth discussing in a separate thread, and a supporting pat
/review.openstack.org/#/c/61341/
> [2]https://review.openstack.org/#/c/63917/
>
> On Mon, Jan 6, 2014 at 9:24 PM, Salvatore Orlando
> wrote:
> > This thread is starting to get a bit confusing, at least for people with
> a
> > single-pipeline brain like me!
> >
I am afraid I need to correct you Jay!
This actually appears to be bug 1253896 [1]
Technically, what we call 'bug' here is actually a failure manifestation.
So far, we have removed several bugs causing this failure. The last patch
was pushed to devstack around Christmas.
Nevertheless, if you look
I think I have found another fault triggering bug 1253896 when neutron is
enabled.
I've added a comment to https://bugs.launchpad.net/bugs/1253896
On another note, I'm seeing also occurrences of this bug with nova-network.
Is there anybody from the nova side looking at it (I can give it a try, but
Hi Jay,
replies inline.
I have probably found one more cause for this issue in the logs, and I
have added a comment to the bug report.
Salvatore
On 9 January 2014 19:10, Jay Pipes wrote:
> On Thu, 2014-01-09 at 09:09 +0100, Salvatore Orlando wrote:
> > I am afraid I need to co
I don't think I can use better words than Mark's.
So I have nothing to add.
Salvatore
On 13 January 2014 23:29, Mark McClain wrote:
>
> On Jan 13, 2014, at 12:24 PM, Collins, Sean <
> sean_colli...@cable.comcast.com> wrote:
>
> > Hi,
> >
> > I posted a message to the mailing list[1] when I fir
TL;DR;
I have been looking back at the API and found out that it's a bit weird how
floating IPs are mapped to ports. This might or might not be an issue, and
several things can be done about it.
The rest of this post is a boring description of the problem and a possibly
even more boring list of pot
its just a confusing ui, you could
> always change the code so it filters out the floating-ip ports from view.
> Make them a pure implementation detail that a user never sees.
>
> Kevin
> ------
> *From:* Salvatore Orlando [sorla...@nicira.com]
> *Sent
I have been seeing in the past 2 days timeout failures on gate jobs which I
am struggling to explain. An example is available in [1]
These are the usual failure that we associate with bug 1253896, but this
time I can verify that:
- The floating IP is correctly wired (IP and NAT rules)
- The DHCP po
I think you're right Darragh.
It was actually Montreal's snow and cold freezing my brain as I
investigated the same issue a while ago and tried to change cirrOS to send
a DHCPDISCOVER every 10 seconds instead of 60 seconds, but then I moved to
something else as I wasn't even sure a new centos base
I gave a -2 yesterday to all my Neutron patches. I did that because I
thought something was wrong with them, but then I started to realize it's a
general problem.
It makes sense to give some priority to the patches Eugene linked, even if
it would be better to have some people root causing the issue
Yair is probably referring to statistically independent tests, or whatever
case for which the following is true (P(x) is the probability that a test
succeeds):
P(4 ∧ 3 ∧ 2 | 1) = P(4|1) * P(3|1) * P(2|1)
This might apply to the tests we are adding to the network_basic_ops scenario;
however it is worth noting
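As a small numeric illustration of the factorization under conditional
independence (the per-test probabilities below are invented, not measured
from the gate):

```python
# If tests 2, 3 and 4 are conditionally independent given that test 1
# passed, the joint probability factorizes as the product of the
# individual conditional probabilities. Values are made up for the sketch.
p_given_1 = {"test2": 0.98, "test3": 0.95, "test4": 0.99}

joint = 1.0
for p in p_given_1.values():
    joint *= p

print(round(joint, 4))  # product of the three conditional probabilities
```

If the tests are not in fact independent (e.g. they share a router or a
DHCP agent), the real joint probability can be quite different from this
product.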
I have some hints which the people looking at neutron failures might find
useful.
# 1 - in [1] a weird thing happens with DHCP. A DHCPDISCOVER for
fa:16:3e:cc:d9:c7
is pretty much simultaneously received by two dnsmasq instances, which are
listening on ports belonging to two distinct networks
It's worth noting that elastic recheck is signalling bug 1253896 and bug
1224001, but they actually have the same signature.
I also found it interesting that neutron is triggering bug 1254890 a lot,
which appears to be a hang on /dev/nbdX during key injection; so far I have
no explanation for that.
An openstack deployment with an external DHCP server is definitely a
possible scenario; I don't think it can be implemented out-of-the-box with
the components provided by the core openstack services, but it should be
doable, and possibly even a requirement for deployments which integrate
openstack
Hi Sukhdev,
Some comments inline.
Salvatore
On 23 January 2014 03:10, Sukhdev Kapur wrote:
> Hi All,
>
> During tempest sprint in Montreal, as we were writing API tests, we
> noticed a behavior which we believed is an issue with the Neutron API (or
> perhaps documentation or both)
>
> Let me s
Hi,
My expertise on the subject is pretty much zero, but the performance loss
(which I think is throughput loss) is so blatant that perhaps there's more
than the MTU issue. From what Robert writes, GRO definitely plays a role
too.
My only suggestion would be to route the question to ovs-discuss i
I've found out that several jobs are exhibiting failures like bug 1254890
[1] and bug 1253896 [2] because openvswitch seems to be crashing the kernel.
The kernel trace usually reports as the offending process either
neutron-ns-metadata-proxy or dnsmasq, but [3] seems to clearly point to
ovs-vsctl.
254 ev
Thanks Chris!
some comments inline.
On 25 January 2014 02:08, Chris Wright wrote:
> * Salvatore Orlando (sorla...@nicira.com) wrote:
> > I've found out that several jobs are exhibiting failures like bug 1254890
> > [1] and bug 1253896 [2] because openvswitch seem to be c
It might be creative, but it's a shame that it did not serve the purpose.
At least it confirmed the kernel bug was related to process termination in
network namespaces but was not due to SIGKILL exclusively, as it occurred
with SIGTERM as well.
On the bright side, Mark has now pushed another patch whic
Sean,
You surely have my permission. If it's publicly available you should not
even ask for it!
Well, unless it's copyrighted, but at least I could not possibly copyright
material mostly stolen from other people ;)
On the other hand the issue with slideshare and such is that material might
come a
Hi,
By merging patch https://review.openstack.org/#/c/53459/, full tenant
isolation has been turned on for neutron API tests.
We have executed a few experimental checks before merging and we found out
that the isolated jobs were always passing.
However, the gating jobs both in tempest and nova do
.
But still, we should probably make gate tests symmetric again.
Salvatore
On 7 February 2014 15:05, Salvatore Orlando wrote:
> Hi,
>
> By merging patch https://review.openstack.org/#/c/53459/, full tenant
> isolation has been turned on for neutron API tests.
> We have
+1
On 11 Feb 2014 10:47, "Gary Kotton" wrote:
> +1
>
>
> On 2/11/14 1:28 AM, "Mark McClain" wrote:
>
> >All-
> >
> >I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.
> >Oleg has been a valuable contributor to Neutron by actively reviewing,
> >working on bugs, and contributi
It seems this work item is made of several blueprints, some of which are
not yet approved. This is true at least for the Neutron blueprint regarding
policy extensions.
Since I first looked at this spec I've been wondering why nova has been
selected as an endpoint for network operations rather than
Hi,
I've provided an update on this bug (which, by the way, is finally no
longer the #1 gate blocker!).
Please see the bug report [1]
I've tagged heat as well since over 50% of hits are from a non-voting heat
job which is trying to ssh into machines using their private IP; this might
not work with Ne
I have tried to collect more information on neutron full job failures.
So far there have been 219 failures and 891 successes, for an overall
failure rate of 19.7%, which is in line with Sean's evaluation.
The count was performed exclusively on jobs executed against the master
branch.
The failure rate fo
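For reference, the overall rate is straightforward to recompute from the
counts given in this message (a trivial sketch):

```python
# Recompute the overall failure rate from the reported counts:
# 219 failed runs out of 219 + 891 total runs.
failures, successes = 219, 891
failure_rate = failures / (failures + successes)
print(f"{failure_rate:.1%}")
```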
not assigned. Most of the bugs are assigned to
> you, I was wondering if you'd use some help. I guess we can coordinate
> better when you are online.
>
> cheers,
>
> Rossella
>
>
> On 02/23/2014 03:14 AM, Salvatore Orlando wrote:
>
> I have tried to collect more infor
Hi Assaf,
some comments inline.
As a general comment, I'd prefer to move all the discussions to gerrit
since the patches are now in review.
This unless you have design concerns (the ones below look more related to
the implementation to me)
Salvatore
On 24 February 2014 15:58, Assaf Muller wrot
I understand that the fact that resources with invalid tenant_ids can be
created (only with admin rights, at least for Neutron) can be annoying.
However, I support Jay's point on cross-project interactions. If tenant_id
validation (and orphaned resource management) can't be efficiently handled,
then I'd
Hi Kyle,
I think conceptually your approach is fine.
I would have had concerns if you were trying to manage ODL life cycle
through devstack (like installing/uninstalling it or configuring the ODL
controller).
But looking at your code it seems you're just setting up the host so that
it could work w