On Wed, Jun 11, 2014 at 01:27:07PM -0500, Dave Chiluk wrote:
> On 06/11/2014 11:18 AM, Paul E. McKenney wrote:
> > On Wed, Jun 11, 2014 at 10:46:00AM -0500, David Chiluk wrote:
> >> Now think about what happens when a gateway goes down, the namespaces
> >> need to be migrated, or a new machine needs to be brought up to replace
> >> it.  When we're talking about 3000 namespaces, the amount of time it
> >> takes simply to recreate the namespaces becomes very significant.
> >>
> >> The script is a stripped down example of what exactly is being done on
> >> the neutron gateway in order to create namespaces.
> > 
> > Are the namespaces torn down and recreated one at a time, or is there some
> > syscall, ioctl(), or whatever that allows bulk tear down and recreating?
> > 
> >                                                     Thanx, Paul
> 
> In the normal running case, the namespaces are created one at a time, as
> new customers create a new set of VMs on the cloud.
> 
> However, in the case of failover to a new neutron gateway the namespaces
> are created all at once using the ip command (more or less serially).
> 
> As far as I know there is no syscall or ioctl that allows bulk tear down
> and recreation.  If such a beast exists, that might be helpful.

The solution might be to create such a beast.  I might be able to shave
a bit of time off of this benchmark, but at the cost of significant
increases in RCU's CPU consumption.  A bulk teardown/recreation API could
reduce the RCU grace-period overhead by several orders of magnitude by
having a single RCU grace period cover a few thousand changes.

This is why other bulk-change syscalls exist.

Just out of curiosity, what syscalls does the ip command use?

                                                        Thanx, Paul
