Thank you for the video, keep up the good work!
- Original Message -
> Hi folks,
>
> The DVR team is working really hard to complete this important task for Juno
> and Neutron.
>
> In order to help you see this feature in action, a video has been made available
> and the link can be found in [
ce the load of compute node.
> > * rpc communication mechanisms.
> -- this can reduce the load of neutron server
> Could you help me review my BP specs?
>
> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" <mangel...@redhat.com>
Yes, once a connection has passed the NAT tables
and is in the kernel connection tracker, it
will keep working even if you remove the NAT rule.
Doing that would require manipulating the kernel
connection tracking to kill that connection;
I'm not familiar with that part of the Linux network
stack.
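For context, killing an already-tracked connection is usually done with conntrack(8). A minimal sketch, assuming the conntrack-tools CLI is installed (an assumption on my part, not something the thread states):

```python
# Sketch: delete a tracked connection so it stops flowing after its NAT
# rule is removed. Running for real requires root and conntrack-tools,
# both assumptions here, not stated in the thread.
import subprocess

def conntrack_delete_cmd(src_ip):
    """Build the conntrack(8) invocation that drops flows from src_ip."""
    return ["conntrack", "-D", "--src", src_ip]

def kill_tracked_connections(src_ip, dry_run=True):
    """Remove conntrack entries for src_ip; dry_run skips execution."""
    cmd = conntrack_delete_cmd(src_ip)
    if not dry_run:  # executing needs CAP_NET_ADMIN
        subprocess.check_call(cmd)
    return cmd
```

With dry_run left on, the helper only shows the command it would run, which keeps the sketch safe to experiment with.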
I believe it's an important feature, because currently
the default security rules are hard-coded in Neutron's code,
and that won't fit all organizations (not to mention that the
default security rules won't scale well in our current
implementation).
Best,
Miguel Ángel
- Mensaje origin
Hi, it's a very interesting topic; I was getting ready to raise
the same concerns about our security groups implementation.
Thank you, shihanzhang, for starting this topic.
Not only at a low level, where (with our default security group
rules -allow all incoming from the 'default' sg- the iptable rules
It actually affects all the numbers except the mean (e.g., the deviation is grossly off).
Carl is right; I thought of it later in the evening. Once the timeout
mechanism is in place, we must consider that number.
I'd say keep it in there.
+1 I agree.
Carl
On Mon, Mar 24, 2014 at 2:04 AM, Miguel
Is it the first call that starts the daemon / loads config files, etc.?
Maybe that first sample should be discarded from the mean for all
processes (it's an outlier value).
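The discard-the-startup-outlier suggestion in a tiny illustrative helper (my own sketch, not code from the thread):

```python
# Drop the first sample (daemon startup, config loading) before averaging,
# since it is an outlier relative to steady-state calls.
def mean_without_first(samples):
    """Average all samples except the first (startup outlier)."""
    if len(samples) < 2:
        raise ValueError("need at least two samples")
    tail = samples[1:]
    return sum(tail) / len(tail)
```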
On 03/21/2014 05:32 PM, Yuriy Taraday wrote:
On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez mailto:thie...@openstack
On 03/21/2014 11:01 AM, Thierry Carrez wrote:
Yuriy Taraday wrote:
Benchmark included showed on my machine these numbers (average over 100
iterations):
Running 'ip a':
ip a : 4.565ms
sudo ip a : 13.744ms
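Measurements like those above can be reproduced with a small harness (illustrative; not the poster's actual benchmark script):

```python
# Average wall-clock time of a command over N runs, as in the 'ip a' vs
# 'sudo ip a' numbers above, to expose the sudo/rootwrap overhead.
import subprocess
import time

def avg_runtime_ms(cmd, iterations=100):
    """Return the mean wall-clock milliseconds taken to run cmd."""
    total = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, check=False)
        total += time.perf_counter() - start
    return total * 1000.0 / iterations

# e.g. compare avg_runtime_ms(["ip", "a"]) with
# avg_runtime_ms(["sudo", "ip", "a"]) on a quiet machine.
```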
On 03/21/2014 10:42 AM, Thierry Carrez wrote:
Yuriy Taraday wrote:
On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo mailto:majop...@redhat.com>> wrote:
If this is coupled to neutron in a way that it can be accepted for
Icehouse (we're killing a performance bug), or th
On 03/20/2014 12:32 PM, Monty Taylor wrote:
On 03/20/2014 05:31 AM, Miguel Angel Ajo wrote:
On 03/19/2014 10:54 PM, Joe Gordon wrote:
On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo mailto:majop...@redhat.com>> wrote:
An advance on the changes required
Wow Yuriy, amazing and fast :-), benchmarks included ;-)
The daemon solution only adds 4.5ms, good work. I'll add some
comments in a while.
Recently I talked with another engineer at Red Hat (working
on ovirt/vdsm); they have something like this daemon, and they
are using BaseMan
On 03/19/2014 10:54 PM, Joe Gordon wrote:
On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo mailto:majop...@redhat.com>> wrote:
An advance on the changes required to have a
py->c++ compiled rootwrap as a mitigation POC for Havana/Icehouse.
http
odules
implemented for shedskin.
On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:
Hi Joe, thank you very much for the positive feedback,
I plan to spend a day during this week on the shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to make
it compile under
eutron
as 'ajo'.
Best regards,
Miguel Ángel.
[1] https://github.com/mangelajo/shedskin.rootwrap
On 03/18/2014 12:48 AM, Joe Gordon wrote:
On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
mailto:mangel...@redhat.com>> wrote:
I have included on the etherpad,
thing to do for Icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does for this exact reason. See:
https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
Best,
Aaron
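The parallel-initial-sync pattern Aaron points at can be sketched generically. The names sync_router and router_ids are illustrative only, and the real dhcp-agent change uses eventlet green threads rather than a thread pool:

```python
# Run the per-router sync concurrently instead of serially, as the
# dhcp-agent review above did for its initial sync. Illustrative sketch,
# not Neutron code.
from concurrent.futures import ThreadPoolExecutor

def sync_all(router_ids, sync_router, max_workers=8):
    """Apply sync_router to every router id concurrently, keeping order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(sync_router, router_ids))
```

ThreadPoolExecutor.map preserves input order, so results line up with router_ids even though the calls overlap in time.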
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo mailto:majop...@redh
simple call to system happening, I think
that'd help my understanding.
Best regards,
Miguel Ángel.
On 03/13/2014 07:42 AM, Yuriy Taraday wrote:
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo mailto:majop...@redhat.com>> wrote:
I'm not familiar with unix domain sockets
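Since UNIX domain sockets came up, here is a minimal round trip showing the mechanism a local privileged daemon would rely on. This is a toy echo server over a filesystem socket, not the proposed daemon's protocol:

```python
# Minimal UNIX domain socket round trip: a client sends a payload over a
# filesystem socket and a server thread echoes it back.
import os
import socket
import tempfile
import threading

def _echo_once(server):
    """Accept one connection and echo whatever it sends."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def unix_socket_roundtrip(payload):
    """Send payload through an AF_UNIX socket pair and return the echo."""
    path = os.path.join(tempfile.mkdtemp(), "demo.sock")
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)
    server.listen(1)
    t = threading.Thread(target=_echo_once, args=(server,))
    t.start()
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(path)
    client.sendall(payload)
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply
```

The socket lives on the filesystem, so ordinary file permissions can restrict which local users may talk to the daemon, which is part of why this transport suits a rootwrap-style helper.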
tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
>
> real 0m0.021s
> user 0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
>
> real 0m0.010s
> user 0m0.000s
> sys 0m0.008s
>
> jogo@dev:~/tmp/pypy-2.2.1-src$ t
I'd
> like to come up with some actions this week to try to address at least
> part of the problem before Icehouse releases.
>
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
>
> Carl
>
> On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo
> wr
ests. This seems like
a reasonably sensible idea to me.
Yes, you're right.
On 07/03/14 12:52, Miguel Angel Ajo wrote:
I thought of this option, but didn't consider it, as it's somewhat
risky to expose an RPC endpoint executing privileged (even filtered)
commands.
+1 (Voting here to work around my previous top-posting).
On 03/09/2014 01:22 PM, Nir Yechiel wrote:
+1
I see it as one of the main current gaps, and I believe it is something
that can promote Neutron as stable and production-ready.
Based on Édouard's comment below, having this enabled in
+1
I agree on this topic: experimental, and disabled by default
if there's low impact when the functionality is disabled.
On 03/07/2014 05:29 PM, Carl Baldwin wrote:
+1
On Fri, Mar 7, 2014 at 2:42 AM, Édouard Thuleau wrote:
+1
I thought it should merge as experimental for Icehouse, to let the
Good work on the documentation, it's something that will really help
new neutron developers.
On 03/07/2014 07:44 PM, Akihiro Motoki wrote:
Hi Carl,
I really appreciate your writing this up!
I have no problem removing my -1 in the review.
I just thought it was an easy item to fix.
It is a
I thought of this option, but didn't consider it, as it's somewhat
risky to expose an RPC endpoint executing privileged (even filtered) commands.
If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones; I somehow see this as a
higher-risk option.
An
ly Ross,
I haven't tried cython, but I will check it in a few minutes.
Iwamoto Toshihiro,
Thanks for pointing us to "ip netns exec" too; I wonder if that's
related to the iproute upstream change Rick Jones was talking about.
Cheers,
Miguel Ángel.
On 03/06/2014 09:31
On 03/06/2014 07:57 AM, IWAMOTO Toshihiro wrote:
At Wed, 05 Mar 2014 15:42:54 +0100,
Miguel Angel Ajo wrote:
3) I also find 10 minutes a long time to set up 192 networks / basic tenant
structures; I wonder if that time could be reduced by converting
system process calls into system library calls.
- Original Message -
> Miguel Angel Ajo wrote:
> > [...]
> >The overhead comes from python startup time + rootwrap loading.
> >
> >I suppose that rootwrap was designed for lower amount of system calls
> > (nova?).
>
> Yes, it was no
Hello,
Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much more than the processes
it's wrapping.
On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:
N
Ok
My previous answer was actually about the Feature proposal freeze
which happened two days ago.
Cheers,
Miguel Ángel.
On 02/20/2014 11:27 AM, Thierry Carrez wrote:
马煜 wrote:
Who knows when the Icehouse version will freeze?
My BP on the ML2 driver has been approved and the code is under review,
but I have s
If I understood correctly, as long as you have an active review for
your change, and some level of interest/approval, you should
be OK to finish it during the rest of the Icehouse cycle; but of course,
your code needs to be approved to become part of Icehouse.
Cheers,
Miguel Ángel.
- O
I rebased the https://review.openstack.org/#/c/72576/ no-op change.
- Original Message -
> From: "Alan Pevec"
> To: "openstack-stable-maint"
> Cc: "OpenStack Development Mailing List"
> Sent: Tuesday, February 18, 2014 7:52:23 PM
> Subject: Re: [openstack-dev] [Openstack-stable-maint
ons)"
>
> Cc: "openstack-stable-maint"
> Sent: Wednesday, February 12, 2014 12:05:29 PM
> Subject: Re: [openstack-dev] [neutron] [stable/havana] cherry backport,
> multiple external networks, passing tests
>
> 2014-02-12 10:48 GMT+01:00 Miguel Angel Ajo Pelayo :
&
Could any core developer check/approve this if it looks good?
https://review.openstack.org/#/c/68601/
I'd like to get it in for the new stable/havana release
if it's possible.
Best regards,
Miguel Ángel
___
OpenStack-dev mailing list
OpenS
During the design of HA deployments for Neutron, I have found
that agents can run into problems and keep running,
yet they have no way to expose their status to the parent process
or to be queried via an init.d script.
So I'm proposing this blueprint,
https://blueprints.launchpad.n
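One simple way to realize that goal is a status file the agent publishes and an init.d script polls. The path and JSON fields below are my own illustrative choices, not the blueprint's actual interface:

```python
# Sketch: an agent periodically writes a small status file that an init.d
# script (or parent process) can read to detect a wedged agent.
import json
import os
import tempfile
import time

def write_status(path, state):
    """Atomically publish agent state plus a timestamp for health checks."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"state": state, "updated_at": time.time()}, f)
    os.replace(tmp, path)  # atomic rename on POSIX, readers never see a partial file

def read_state(path):
    """Return the last published state string."""
    with open(path) as f:
        return json.load(f)["state"]
```

An init.d status action would read the file and also compare updated_at against a staleness threshold, since a crashed agent stops refreshing the timestamp.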
> On Tue, Feb 04, 2014 at 12:36:16PM -0500, Miguel Angel Ajo Pelayo wrote:
> >
> >
> > Hi Ralf, I see we're on the same boat for this.
> >
> >It seems that a database migration introduces complications
> > for future upgrades. It's not an
- Original Message -
> From: "Miguel Angel Ajo Pelayo"
> To: "OpenStack Development Mailing List (not for usage questions)"
>
> Sent: Tuesday, February 4, 2014 6:36:16 PM
> Subject: Re: [openstack-dev] [Neutron] backporting database migrati
Hi Ralf, I see we're on the same boat for this.
It seems that a database migration introduces complications
for future upgrades. It's not an easy path.
My aim when I started this backport was to scale out
neutron-server by starting several instances together. But I'm afraid
we would find
Hi, any feedback on this?
If there is none, and it does seem right, I will go ahead and add
documentation for this parameter to the agent config files.
Best Regards,
Miguel Ángel.
- Original Message -
> From: "Miguel Angel Ajo Pelayo"
> To: "OpenStack Development
Hi!,
I want to ask, specifically, about the intended purpose of the
host=... parameter in the neutron agents (undocumented AFAIK).
I haven't found any documentation about it. As far as I can tell,
it's being used to provide active/passive replication of agents, as
you can manage agent
Hi Dong,
Can you give an example of what you get and what you were expecting,
exactly?
I have a similar problem with one operator, where they assign you sparse
blocks
of IP addresses (floating IPs), directly routed to your machine, and they also
assign the virtual MAC add
Hi Salvatore!,
Good work on this.
About the quota limit tests, I believe they could be unit-tested
instead of functionally tested.
When running those tests in parallel with any other tests that rely
on having ports, networks, or subnets available within quota, they have
high chances of ma
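A unit-level version of such a quota check could stub the resource count instead of creating real ports, networks, or subnets. QuotaChecker and the names here are illustrative, not Neutron's actual quota API:

```python
# Unit-test style quota check: the resource count is injected as a
# callable, so no real ports/networks/subnets are needed and parallel
# test runs cannot interfere with each other.
class QuotaChecker(object):
    def __init__(self, limit, count_fn):
        self.limit = limit        # configured quota ceiling
        self.count_fn = count_fn  # stubbed current-usage counter

    def can_allocate(self, requested=1):
        """True if allocating 'requested' more stays within the limit."""
        return self.count_fn() + requested <= self.limit

def test_quota_exceeded():
    checker = QuotaChecker(limit=10, count_fn=lambda: 10)
    assert not checker.can_allocate()

def test_quota_available():
    checker = QuotaChecker(limit=10, count_fn=lambda: 3)
    assert checker.can_allocate(7)
```

Because count_fn is just a lambda in the tests, each test fully controls its own "usage" and never races with tests running in parallel.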