Hi Iwase-san,

Thank you for the suggestion.
In the OpenStack case, a procedure to build the image from scratch
may be necessary.
But in the case of Ryu we can use a pre-built image.

> Just an idea...
> How about building the image from Alpine?
> It is extremely small in my environment:
>    https://hub.docker.com/r/iwaseyusuke/quagga/

I was considering either building an image from scratch or using an image
built from the above Dockerfile.
I will consider using the image on my account.
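
For example, an Alpine-based variant could follow the same pattern as
DockerImage.create_quagga() in this patch. This is only a rough, untested
sketch; the method name, the Alpine package name and the bgpd path are my
assumptions:

    def create_quagga_alpine(self, tagname='quagga-alpine', check_exist=False):
        # Hypothetical method for the DockerImage class in docker_base.py.
        if check_exist and self.exist(tagname):
            return tagname
        workdir = TEST_BASE_DIR + '/' + tagname
        c = CmdBuffer()
        c << 'FROM alpine:3.4'
        # Package name and daemon path are assumptions for Alpine.
        c << 'RUN apk add --no-cache quagga tcpdump'
        c << 'CMD /usr/sbin/bgpd -f /etc/quagga/bgpd.conf'
        self.cmd.sudo('rm -rf ' + workdir)
        self.cmd.execute('mkdir -p ' + workdir)
        self.cmd.execute("echo '%s' > %s/Dockerfile" % (str(c), workdir))
        self.build(tagname, workdir)
        return tagname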


> Can we use the official Docker image of Ryu?
>    https://hub.docker.com/r/osrg/ryu/
> e.g.)
>    docker run -it --rm -v WORKDIR:/root/ryu osrg/ryu /bin/bash
> 
> If we can use this image, we can also test this image, I guess.
> 

I was not aware of it. I will try to use it.
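
If the pre-built images work, the tests would not need to build images
locally. Below is a rough sketch of how I imagine wiring the two containers
together with such images, based only on my reading of this patch (untested).
The image names come from this thread, and I am assuming they provide bgpd
and ryu-manager where the generated configs expect them:

import time

from ryu.tests.integrated.common import docker_base as base
from ryu.tests.integrated.common import quagga, ryubgp

RYU_IMAGE = 'osrg/ryu'               # official image suggested above
QUAGGA_IMAGE = 'iwaseyusuke/quagga'  # Alpine-based image suggested above

r1 = ryubgp.RyuBGPContainer(name='r1', asn=64512,
                            router_id='192.168.0.1',
                            ctn_image_name=RYU_IMAGE)
q1 = quagga.QuaggaBGPContainer(name='q1', asn=64513,
                               router_id='192.168.0.2',
                               ctn_image_name=QUAGGA_IMAGE)

# Start the containers first; addif() needs them created.
wait_time = max(ctn.run() for ctn in (r1, q1))
time.sleep(wait_time)

br = base.Bridge(name='br01', subnet='192.168.10.0/24')
br.addif(r1)
br.addif(q1)

r1.add_peer(q1, bridge=br.name)
q1.add_peer(r1, bridge=br.name)
# QuaggaBGPContainer implements get_neighbor_state(), so watch from q1.
q1.wait_for(base.BGP_FSM_ESTABLISHED, r1)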


Thanks,
kakuma

On Mon, 24 Oct 2016 11:34:19 +0900
Iwase Yusuke <[email protected]> wrote:

> Hi Kakuma-San,
> 
> 
> On 2016/10/24 11:12, fumihiko kakuma wrote:
> > This provides an environment for testing BGP peering between ryu and quagga.
> > I also expect these modules to be used from openstack or other
> > projects, so there may be some functions that are not used by the
> > tests for ryu.
> > It has the following functions:
> >
> > - build a docker image and run ryu and quagga in containers.
> > - configure ryu and quagga.
> > - provide operations for ryu, quagga and docker.
> >
> > Signed-off-by: Fumihiko Kakuma <[email protected]>
> > ---
> >  ryu/tests/integrated/common/__init__.py    |   0
> >  ryu/tests/integrated/common/docker_base.py | 774 +++++++++++++++++++++++++++++
> >  ryu/tests/integrated/common/quagga.py      | 331 ++++++++++++
> >  ryu/tests/integrated/common/ryubgp.py      | 206 ++++++++
> >  4 files changed, 1311 insertions(+)
> >  create mode 100644 ryu/tests/integrated/common/__init__.py
> >  create mode 100644 ryu/tests/integrated/common/docker_base.py
> >  create mode 100644 ryu/tests/integrated/common/quagga.py
> >  create mode 100644 ryu/tests/integrated/common/ryubgp.py
> >
> > diff --git a/ryu/tests/integrated/common/__init__.py b/ryu/tests/integrated/common/__init__.py
> > new file mode 100644
> > index 0000000..e69de29
> > diff --git a/ryu/tests/integrated/common/docker_base.py b/ryu/tests/integrated/common/docker_base.py
> > new file mode 100644
> > index 0000000..cf593bf
> > --- /dev/null
> > +++ b/ryu/tests/integrated/common/docker_base.py
> > @@ -0,0 +1,774 @@
> > +# Copyright (C) 2015 Nippon Telegraph and Telephone Corporation.
> > +#
> > +# This is based on the following
> > +#     https://github.com/osrg/gobgp/test/lib/base.py
> > +#
> > +# Licensed under the Apache License, Version 2.0 (the "License");
> > +# you may not use this file except in compliance with the License.
> > +# You may obtain a copy of the License at
> > +#
> > +#    http://www.apache.org/licenses/LICENSE-2.0
> > +#
> > +# Unless required by applicable law or agreed to in writing, software
> > +# distributed under the License is distributed on an "AS IS" BASIS,
> > +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> > +# implied.
> > +# See the License for the specific language governing permissions and
> > +# limitations under the License.
> > +
> > +from __future__ import absolute_import
> > +
> > +import itertools
> > +import logging
> > +import subprocess
> > +import time
> > +
> > +from docker import Client
> > +import netaddr
> > +import six
> > +
> > +LOG = logging.getLogger(__name__)
> > +
> > +DEFAULT_TEST_PREFIX = ''
> > +DEFAULT_TEST_BASE_DIR = '/tmp/ctn_docker/bgp'
> > +TEST_PREFIX = DEFAULT_TEST_PREFIX
> > +TEST_BASE_DIR = DEFAULT_TEST_BASE_DIR
> > +
> > +BGP_FSM_IDLE = 'BGP_FSM_IDLE'
> > +BGP_FSM_ACTIVE = 'BGP_FSM_ACTIVE'
> > +BGP_FSM_ESTABLISHED = 'BGP_FSM_ESTABLISHED'
> > +
> > +BGP_ATTR_TYPE_ORIGIN = 1
> > +BGP_ATTR_TYPE_AS_PATH = 2
> > +BGP_ATTR_TYPE_NEXT_HOP = 3
> > +BGP_ATTR_TYPE_MULTI_EXIT_DISC = 4
> > +BGP_ATTR_TYPE_LOCAL_PREF = 5
> > +BGP_ATTR_TYPE_COMMUNITIES = 8
> > +BGP_ATTR_TYPE_ORIGINATOR_ID = 9
> > +BGP_ATTR_TYPE_CLUSTER_LIST = 10
> > +BGP_ATTR_TYPE_MP_REACH_NLRI = 14
> > +BGP_ATTR_TYPE_EXTENDED_COMMUNITIES = 16
> > +
> > +
> > +class CommandError(Exception):
> > +    def __init__(self, out):
> > +        super(CommandError, self).__init__()
> > +        self.out = out
> > +
> > +
> > +def try_several_times(f, t=3, s=1):
> > +    e = None
> > +    for i in range(t):
> > +        try:
> > +            return f()
> > +        except RuntimeError as exc:
> > +            # Keep a reference: 'exc' is cleared after the except block in PY3.
> > +            e = exc
> > +            time.sleep(s)
> > +    raise e
> > +
> > +
> > +class CmdBuffer(list):
> > +    def __init__(self, delim='\n'):
> > +        super(CmdBuffer, self).__init__()
> > +        self.delim = delim
> > +
> > +    def __lshift__(self, value):
> > +        self.append(value)
> > +
> > +    def __str__(self):
> > +        return self.delim.join(self)
> > +
> > +
> > +class CommandOut(str):
> > +    pass
> > +
> > +
> > +class Command(object):
> > +
> > +    def _execute(self, cmd, capture=False, executable=None):
> > +        """Execute a command using subprocess.Popen()
> > +        :Returns:
> > +            - out: stdout from subprocess.Popen()
> > +              out has some extra attributes:
> > +              out.returncode: returncode of subprocess.Popen()
> > +              out.stderr: stderr from subprocess.Popen()
> > +        """
> > +        if capture:
> > +            p_stdout = subprocess.PIPE
> > +            p_stderr = subprocess.PIPE
> > +        else:
> > +            p_stdout = None
> > +            p_stderr = None
> > +        pop = subprocess.Popen(cmd, shell=True, executable=executable,
> > +                               stdout=p_stdout,
> > +                               stderr=p_stderr)
> > +        __stdout, __stderr = pop.communicate()
> > +        try:
> > +            if six.PY3 and isinstance(__stdout, six.binary_type):
> > +                _stdout = __stdout.decode('ascii')
> > +            else:
> > +                _stdout = __stdout
> > +            if six.PY3 and isinstance(__stderr, six.binary_type):
> > +                _stderr = __stderr.decode('ascii')
> > +            else:
> > +                _stderr = __stderr
> > +        except UnicodeError:
> > +            _stdout = __stdout
> > +            _stderr = __stderr
> > +        out = CommandOut(_stdout if _stdout else "")
> > +        out.stderr = _stderr if _stderr else ""
> > +        out.command = cmd
> > +        out.returncode = pop.returncode
> > +        return out
> > +
> > +    def execute(self, cmd, capture=True, try_times=1, interval=1):
> > +        for i in range(try_times):
> > +            out = self._execute(cmd, capture=capture)
> > +            LOG.error(out.command)
> > +            if out.returncode == 0:
> > +                return out
> > +            LOG.error(out.stderr)
> > +            if i + 1 >= try_times:
> > +                break
> > +            time.sleep(interval)
> > +        raise CommandError(out)
> > +
> > +    def sudo(self, cmd, capture=True, try_times=1, interval=1):
> > +        cmd = 'sudo ' + cmd
> > +        return self.execute(cmd, capture=capture,
> > +                            try_times=try_times, interval=interval)
> > +
> > +
> > +class DockerImage(object):
> > +    def __init__(self, baseimage='ubuntu:14.04.5'):
> > +        self.baseimage = baseimage
> > +        self.cmd = Command()
> > +
> > +    def get_images(self):
> > +        out = self.cmd.sudo('docker images')
> > +        images = []
> > +        for line in out.splitlines()[1:]:
> > +            images.append(line.split()[0])
> > +        return images
> > +
> > +    def exist(self, name):
> > +        if name in self.get_images():
> > +            return True
> > +        else:
> > +            return False
> > +
> > +    def build(self, tagname, dockerfile_dir):
> > +        self.cmd.sudo(
> > +            "docker build -t {0} {1}".format(tagname, dockerfile_dir),
> > +            try_times=3)
> > +
> > +    def remove(self, tagname, check_exist=False):
> > +        if check_exist and not self.exist(tagname):
> > +            return tagname
> > +        self.cmd.sudo("docker rmi -f " + tagname, try_times=3)
> > +
> > +    def create_quagga(self, tagname='quagga', check_exist=False):
> > +        if check_exist and self.exist(tagname):
> > +            return tagname
> > +        workdir = TEST_BASE_DIR + '/' + tagname
> > +        pkges = 'telnet tcpdump quagga'
> > +        c = CmdBuffer()
> > +        c << 'FROM ' + self.baseimage
> > +        c << 'RUN apt-get update'
> > +        c << 'RUN apt-get install -qy --no-install-recommends ' + pkges
> > +        c << 'CMD /usr/lib/quagga/bgpd'
> > +
> > +        self.cmd.sudo('rm -rf ' + workdir)
> > +        self.cmd.execute('mkdir -p ' + workdir)
> > +        self.cmd.execute("echo '%s' > %s/Dockerfile" % (str(c), workdir))
> > +        self.build(tagname, workdir)
> > +        return tagname
> 
> Just an idea...
> How about building the image from Alpine?
> It is extremely small in my environment:
>    https://hub.docker.com/r/iwaseyusuke/quagga/
> 
> 
> > +
> > +    def create_ryu(self, tagname='ryu', check_exist=False):
> > +        if check_exist and self.exist(tagname):
> > +            return tagname
> > +        workdir = TEST_BASE_DIR + '/' + tagname
> > +        pkges = 'telnet tcpdump '
> > +        pkges += 'gcc python-pip python-dev libffi-dev libssl-dev'
> > +        c = CmdBuffer()
> > +        c << 'FROM ' + self.baseimage
> > +        c << 'RUN apt-get update'
> > +        c << 'RUN apt-get install -qy --no-install-recommends ' + pkges
> > +        c << 'RUN pip install -U six paramiko msgpack-rpc-python'
> > +        c << 'ADD ryu /root/ryu'
> > +        c << 'RUN pip install -e /root/ryu'
> > +
> > +        self.cmd.sudo('rm -rf ' + workdir)
> > +        self.cmd.execute('mkdir -p ' + workdir)
> > +        self.cmd.execute("echo '%s' > %s/Dockerfile" % (str(c), workdir))
> > +        self.cmd.execute('cp -r ../ryu %s/' % workdir)
> > +        self.build(tagname, workdir)
> > +        return tagname
> 
> Can we use the official Docker image of Ryu?
>    https://hub.docker.com/r/osrg/ryu/
> e.g.)
>    docker run -it --rm -v WORKDIR:/root/ryu osrg/ryu /bin/bash
> 
> If we can use this image, we can also test this image, I guess.
> 
> 
> > +
> > +
> > +class Bridge(object):
> > +    def __init__(self, name, subnet='', start_ip=None, end_ip=None,
> > +                 with_ip=True, self_ip=False,
> > +                 fixed_ip=None, reuse=False,
> > +                 docker_nw=True):
> > +        """Manage a bridge
> > +        :Parameters:
> > +            - name: bridge name
> > +            - subnet: network CIDR to be used by this bridge
> > +            - start_ip: first IP address to be used in the subnet
> > +            - end_ip: last IP address to be used in the subnet
> > +            - with_ip: whether to assign an IP address automatically
> > +            - self_ip: whether to assign an IP address to the bridge itself
> > +            - fixed_ip: an IP address to be assigned to the bridge
> > +            - reuse: whether to reuse an existing bridge
> > +            - docker_nw: whether to use a docker network
> > +        """
> > +        self.cmd = Command()
> > +        self.name = name
> > +        self.docker_nw = docker_nw
> > +        if TEST_PREFIX != '':
> > +            self.name = '{0}_{1}'.format(TEST_PREFIX, name)
> > +        self.with_ip = with_ip
> > +        if with_ip:
> > +            self.subnet = netaddr.IPNetwork(subnet)
> > +            if start_ip:
> > +                self.start_ip = start_ip
> > +            else:
> > +                self.start_ip = netaddr.IPAddress(self.subnet.first)
> > +            if end_ip:
> > +                self.end_ip = end_ip
> > +            else:
> > +                self.end_ip = netaddr.IPAddress(self.subnet.last)
> > +
> > +            def f():
> > +                for host in netaddr.IPRange(self.start_ip, self.end_ip):
> > +                    yield host
> > +            self._ip_generator = f()
> > +            # throw away first network address
> > +            self.next_ip_address()
> > +
> > +        self.self_ip = self_ip
> > +        if fixed_ip:
> > +            self.ip_addr = fixed_ip
> > +        else:
> > +            self.ip_addr = self.next_ip_address()
> > +        if not reuse:
> > +            def f():
> > +                if self.docker_nw:
> > +                    gw = "--gateway %s" % self.ip_addr.split('/')[0]
> > +                    v6 = ''
> > +                    if self.subnet.version == 6:
> > +                        v6 = '--ipv6'
> > +                    cmd = "docker network create --driver bridge %s " % v6
> > +                    cmd += "%s --subnet %s %s" % (gw, subnet, self.name)
> > +                else:
> > +                    cmd = "ip link add {0} type bridge".format(self.name)
> > +                self.delete()
> > +                self.execute(cmd, sudo=True, retry=True)
> > +            try_several_times(f)
> > +        if not self.docker_nw:
> > +            self.execute("ip link set up dev {0}".format(self.name),
> > +                         sudo=True, retry=True)
> > +
> > +        if not docker_nw and self_ip:
> > +            ips = self.check_br_addr(self.name)
> > +            for key, ip in ips.items():
> > +                if self.subnet.version == key:
> > +                    self.execute(
> > +                        "ip addr del {0} dev {1}".format(ip, self.name),
> > +                        sudo=True, retry=True)
> > +            self.execute(
> > +                "ip addr add {0} dev {1}".format(self.ip_addr, self.name),
> > +                sudo=True, retry=True)
> > +        self.ctns = []
> > +
> > +    def get_bridges_dc(self):
> > +        out = self.execute('docker network ls', sudo=True, retry=True)
> > +        bridges = []
> > +        for line in out.splitlines()[1:]:
> > +            bridges.append(line.split()[1])
> > +        return bridges
> > +
> > +    def get_bridges_brctl(self):
> > +        out = self.execute('brctl show', retry=True)
> > +        bridges = []
> > +        for line in out.splitlines()[1:]:
> > +            bridges.append(line.split()[0])
> > +        return bridges
> > +
> > +    def get_bridges(self):
> > +        if self.docker_nw:
> > +            return self.get_bridges_dc()
> > +        else:
> > +            return self.get_bridges_brctl()
> > +
> > +    def exist(self):
> > +        if self.name in self.get_bridges():
> > +            return True
> > +        else:
> > +            return False
> > +
> > +    def execute(self, cmd, capture=True, sudo=False, retry=False):
> > +        if sudo:
> > +            m = self.cmd.sudo
> > +        else:
> > +            m = self.cmd.execute
> > +        if retry:
> > +            return m(cmd, capture=capture, try_times=3, interval=1)
> > +        else:
> > +            return m(cmd, capture=capture)
> > +
> > +    def check_br_addr(self, br):
> > +        ips = {}
> > +        cmd = "ip a show dev %s" % br
> > +        for line in self.execute(cmd, sudo=True).split('\n'):
> > +            if line.strip().startswith("inet "):
> > +                elems = [e.strip() for e in line.strip().split(' ')]
> > +                ips[4] = elems[1]
> > +            elif line.strip().startswith("inet6 "):
> > +                elems = [e.strip() for e in line.strip().split(' ')]
> > +                ips[6] = elems[1]
> > +        return ips
> > +
> > +    def next_ip_address(self):
> > +        return "{0}/{1}".format(next(self._ip_generator),
> > +                                self.subnet.prefixlen)
> > +
> > +    def addif(self, ctn):
> > +        name = ctn.next_if_name()
> > +        self.ctns.append(ctn)
> > +        ip_address = None
> > +        if self.docker_nw:
> > +            ipv4 = None
> > +            ipv6 = None
> > +            ip_address = self.next_ip_address()
> > +            version = 4
> > +            if netaddr.IPNetwork(ip_address).version == 6:
> > +                version = 6
> > +            opt_ip = "--ip %s" % ip_address
> > +            if version == 4:
> > +                ipv4 = ip_address
> > +            else:
> > +                opt_ip = "--ip6 %s" % ip_address
> > +                ipv6 = ip_address
> > +            cmd = "docker network connect %s " % opt_ip
> > +            cmd += "%s %s" % (self.name, ctn.docker_name())
> > +            self.execute(cmd, sudo=True)
> > +            ctn.set_addr_info(bridge=self.name, ipv4=ipv4, ipv6=ipv6,
> > +                              ifname=name)
> > +        else:
> > +            if self.with_ip:
> > +                ip_address = self.next_ip_address()
> > +                version = 4
> > +                if netaddr.IPNetwork(ip_address).version == 6:
> > +                    version = 6
> > +                ctn.pipework(self, ip_address, name, version=version)
> > +            else:
> > +                ctn.pipework(self, '0/0', name)
> > +        return ip_address
> > +
> > +    def delete(self, check_exist=True):
> > +        if check_exist:
> > +            if not self.exist():
> > +                return
> > +        if self.docker_nw:
> > +            self.execute("docker network rm %s" % self.name,
> > +                         sudo=True, retry=True)
> > +        else:
> > +            self.execute("ip link set down dev {0}".format(self.name),
> > +                         sudo=True, retry=True)
> > +            self.execute(
> > +                "ip link delete {0} type bridge".format(self.name),
> > +                sudo=True, retry=True)
> > +
> > +
> > +class Container(object):
> > +    def __init__(self, name, image=None):
> > +        self.name = name
> > +        self.image = image
> > +        self.shared_volumes = []
> > +        self.ip_addrs = []
> > +        self.ip6_addrs = []
> > +        self.is_running = False
> > +        self.eths = []
> > +        self.id = None
> > +
> > +        self.cmd = Command()
> > +        self.remove()
> > +
> > +    def docker_name(self):
> > +        if TEST_PREFIX == DEFAULT_TEST_PREFIX:
> > +            return self.name
> > +        return '{0}_{1}'.format(TEST_PREFIX, self.name)
> > +
> > +    def get_docker_id(self):
> > +        if self.id:
> > +            return self.id
> > +        else:
> > +            return self.docker_name()
> > +
> > +    def next_if_name(self):
> > +        name = 'eth{0}'.format(len(self.eths) + 1)
> > +        self.eths.append(name)
> > +        return name
> > +
> > +    def set_addr_info(self, bridge, ipv4=None, ipv6=None, ifname='eth0'):
> > +        if ipv4:
> > +            self.ip_addrs.append((ifname, ipv4, bridge))
> > +        if ipv6:
> > +            self.ip6_addrs.append((ifname, ipv6, bridge))
> > +
> > +    def get_addr_info(self, bridge, ipv=4):
> > +        addrinfo = {}
> > +        if ipv == 4:
> > +            ip_addrs = self.ip_addrs
> > +        elif ipv == 6:
> > +            ip_addrs = self.ip6_addrs
> > +        else:
> > +            return None
> > +        for addr in ip_addrs:
> > +            if addr[2] == bridge:
> > +                addrinfo[addr[1]] = addr[0]
> > +        return addrinfo
> > +
> > +    def execute(self, cmd, capture=True, sudo=False, retry=False):
> > +        if sudo:
> > +            m = self.cmd.sudo
> > +        else:
> > +            m = self.cmd.execute
> > +        if retry:
> > +            return m(cmd, capture=capture, try_times=3, interval=1)
> > +        else:
> > +            return m(cmd, capture=capture)
> > +
> > +    def dcexec(self, cmd, capture=True, retry=False):
> > +        if retry:
> > +            return self.cmd.sudo(cmd, capture=capture, try_times=3, interval=1)
> > +        else:
> > +            return self.cmd.sudo(cmd, capture=capture)
> > +
> > +    def exec_on_ctn(self, cmd, capture=True, stream=False, detach=False):
> > +        name = self.docker_name()
> > +        if stream:
> > +            # This needs root permission.
> > +            dcli = Client(timeout=120, version='auto')
> > +            i = dcli.exec_create(container=name, cmd=cmd)
> > +            return dcli.exec_start(i['Id'], tty=True,
> > +                                   stream=stream, detach=detach)
> > +        else:
> > +            flag = '-d' if detach else ''
> > +            return self.dcexec('docker exec {0} {1} {2}'.format(
> > +                flag, name, cmd), capture=capture)
> > +
> > +    def get_containers(self, allctn=False):
> > +        cmd = 'docker ps --no-trunc=true'
> > +        if allctn:
> > +            cmd += ' --all=true'
> > +        out = self.dcexec(cmd, retry=True)
> > +        containers = []
> > +        for line in out.splitlines()[1:]:
> > +            containers.append(line.split()[-1])
> > +        return containers
> > +
> > +    def exist(self, allctn=False):
> > +        if self.docker_name() in self.get_containers(allctn=allctn):
> > +            return True
> > +        else:
> > +            return False
> > +
> > +    def run(self):
> > +        c = CmdBuffer(' ')
> > +        c << "docker run --privileged=true"
> > +        for sv in self.shared_volumes:
> > +            c << "-v {0}:{1}".format(sv[0], sv[1])
> > +        c << "--name {0} --hostname {0} -id {1}".format(self.docker_name(),
> > +                                                        self.image)
> > +        self.id = self.dcexec(str(c), retry=True)
> > +        self.is_running = True
> > +        self.exec_on_ctn("ip li set up dev lo")
> > +        ipv4 = None
> > +        ipv6 = None
> > +        for line in self.exec_on_ctn("ip a show dev eth0").split('\n'):
> > +            if line.strip().startswith("inet "):
> > +                elems = [e.strip() for e in line.strip().split(' ')]
> > +                ipv4 = elems[1]
> > +            elif line.strip().startswith("inet6 "):
> > +                elems = [e.strip() for e in line.strip().split(' ')]
> > +                ipv6 = elems[1]
> > +        self.set_addr_info(bridge='docker0', ipv4=ipv4, ipv6=ipv6,
> > +                           ifname='eth0')
> > +        return 0
> > +
> > +    def stop(self, check_exist=True):
> > +        if check_exist:
> > +            if not self.exist(allctn=False):
> > +                return
> > +        ctn_id = self.get_docker_id()
> > +        out = self.dcexec('docker stop -t 0 ' + ctn_id, retry=True)
> > +        self.is_running = False
> > +        return out
> > +
> > +    def remove(self, check_exist=True):
> > +        if check_exist:
> > +            if not self.exist(allctn=True):
> > +                return
> > +        ctn_id = self.get_docker_id()
> > +        out = self.dcexec('docker rm -f ' + ctn_id, retry=True)
> > +        self.is_running = False
> > +        return out
> > +
> > +    def pipework(self, bridge, ip_addr, intf_name="", version=4):
> > +        if not self.is_running:
> > +            LOG.warning('Call run() before pipeworking')
> > +            return
> > +        c = CmdBuffer(' ')
> > +        c << "pipework {0}".format(bridge.name)
> > +
> > +        if intf_name != "":
> > +            c << "-i {0}".format(intf_name)
> > +        else:
> > +            intf_name = "eth1"
> > +        ipv4 = None
> > +        ipv6 = None
> > +        if version == 4:
> > +            ipv4 = ip_addr
> > +        else:
> > +            c << '-a 6'
> > +            ipv6 = ip_addr
> > +        c << "{0} {1}".format(self.docker_name(), ip_addr)
> > +        self.set_addr_info(bridge=bridge.name, ipv4=ipv4, ipv6=ipv6,
> > +                           ifname=intf_name)
> > +        self.execute(str(c), sudo=True, retry=True)
> > +
> > +    def get_pid(self):
> > +        if self.is_running:
> > +            cmd = "docker inspect -f '{{.State.Pid}}' " + self.docker_name()
> > +            return int(self.dcexec(cmd))
> > +        return -1
> > +
> > +    def start_tcpdump(self, interface=None, filename=None):
> > +        if not interface:
> > +            interface = "eth0"
> > +        if not filename:
> > +            filename = "{0}/{1}.dump".format(
> > +                self.shared_volumes[0][1], interface)
> > +        self.exec_on_ctn(
> > +            "tcpdump -i {0} -w {1}".format(interface, filename),
> > +            detach=True)
> > +
> > +
> > +class BGPContainer(Container):
> > +
> > +    WAIT_FOR_BOOT = 1
> > +    RETRY_INTERVAL = 5
> > +    DEFAULT_PEER_ARG = {'neigh_addr': '',
> > +                        'passwd': None,
> > +                        'vpn': False,
> > +                        'flowspec': False,
> > +                        'is_rs_client': False,
> > +                        'is_rr_client': False,
> > +                        'cluster_id': None,
> > +                        'policies': None,
> > +                        'passive': False,
> > +                        'local_addr': '',
> > +                        'as2': False,
> > +                        'graceful_restart': None,
> > +                        'local_as': None,
> > +                        'prefix_limit': None}
> > +    default_peer_keys = sorted(DEFAULT_PEER_ARG.keys())
> > +    DEFAULT_ROUTE_ARG = {'prefix': None,
> > +                         'rf': 'ipv4',
> > +                         'attr': None,
> > +                         'next-hop': None,
> > +                         'as-path': None,
> > +                         'community': None,
> > +                         'med': None,
> > +                         'local-pref': None,
> > +                         'extended-community': None,
> > +                         'matchs': None,
> > +                         'thens': None}
> > +    default_route_keys = sorted(DEFAULT_ROUTE_ARG.keys())
> > +
> > +    def __init__(self, name, asn, router_id, ctn_image_name=None):
> > +        self.config_dir = TEST_BASE_DIR
> > +        if TEST_PREFIX:
> > +            self.config_dir += '/' + TEST_PREFIX
> > +        self.config_dir += '/' + name
> > +        self.asn = asn
> > +        self.router_id = router_id
> > +        self.peers = {}
> > +        self.routes = {}
> > +        self.policies = {}
> > +        super(BGPContainer, self).__init__(name, ctn_image_name)
> > +        self.execute(
> > +            'rm -rf {0}'.format(self.config_dir), sudo=True)
> > +        self.execute('mkdir -p {0}'.format(self.config_dir))
> > +        self.execute('chmod 777 {0}'.format(self.config_dir))
> > +
> > +    def __repr__(self):
> > +        return str({'name': self.name, 'asn': self.asn,
> > +                    'router_id': self.router_id})
> > +
> > +    def run(self, wait=False, w_time=WAIT_FOR_BOOT):
> > +        self.create_config()
> > +        super(BGPContainer, self).run()
> > +        if wait:
> > +            time.sleep(w_time)
> > +        return w_time
> > +
> > +    def add_peer(self, peer, bridge='', reload_config=True, v6=False,
> > +                 peer_info={}):
> > +        self.peers[peer] = self.DEFAULT_PEER_ARG.copy()
> > +        self.peers[peer].update(peer_info)
> > +        peer_keys = sorted(self.peers[peer].keys())
> > +        if peer_keys != self.default_peer_keys:
> > +            raise Exception('argument error: {0}'.format(peer_info))
> > +
> > +        neigh_addr = ''
> > +        local_addr = ''
> > +        it = itertools.product(self.ip_addrs, peer.ip_addrs)
> > +        if v6:
> > +            it = itertools.product(self.ip6_addrs, peer.ip6_addrs)
> > +
> > +        for me, you in it:
> > +            if bridge != '' and bridge != me[2]:
> > +                continue
> > +            if me[2] == you[2]:
> > +                neigh_addr = you[1]
> > +                local_addr = me[1]
> > +                if v6:
> > +                    addr, mask = local_addr.split('/')
> > +                    local_addr = "{0}%{1}/{2}".format(addr, me[0], mask)
> > +                break
> > +
> > +        if neigh_addr == '':
> > +            raise Exception('peer {0} seems not ip reachable'.format(peer))
> > +
> > +        if not self.peers[peer]['policies']:
> > +            self.peers[peer]['policies'] = {}
> > +
> > +        self.peers[peer]['neigh_addr'] = neigh_addr
> > +        self.peers[peer]['local_addr'] = local_addr
> > +        if self.is_running and reload_config:
> > +            self.create_config()
> > +            self.reload_config()
> > +
> > +    def del_peer(self, peer, reload_config=True):
> > +        del self.peers[peer]
> > +        if self.is_running and reload_config:
> > +            self.create_config()
> > +            self.reload_config()
> > +
> > +    def disable_peer(self, peer):
> > +        raise Exception('implement disable_peer() method')
> > +
> > +    def enable_peer(self, peer):
> > +        raise Exception('implement enable_peer() method')
> > +
> > +    def log(self):
> > +        return self.execute('cat {0}/*.log'.format(self.config_dir))
> > +
> > +    def add_route(self, route, reload_config=True, route_info={}):
> > +        self.routes[route] = self.DEFAULT_ROUTE_ARG.copy()
> > +        self.routes[route].update(route_info)
> > +        route_keys = sorted(self.routes[route].keys())
> > +        if route_keys != self.default_route_keys:
> > +            raise Exception('argument error: {0}'.format(route_info))
> > +        self.routes[route]['prefix'] = route
> > +        if self.is_running and reload_config:
> > +            self.create_config()
> > +            self.reload_config()
> > +
> > +    def add_policy(self, policy, peer, typ, default='accept',
> > +                   reload_config=True):
> > +        self.set_default_policy(peer, typ, default)
> > +        self.define_policy(policy)
> > +        self.assign_policy(peer, policy, typ)
> > +        if self.is_running and reload_config:
> > +            self.create_config()
> > +            self.reload_config()
> > +
> > +    def set_default_policy(self, peer, typ, default):
> > +        if (typ in ['in', 'out', 'import', 'export'] and
> > +                default in ['reject', 'accept']):
> > +            if 'default-policy' not in self.peers[peer]:
> > +                self.peers[peer]['default-policy'] = {}
> > +            self.peers[peer]['default-policy'][typ] = default
> > +        else:
> > +            raise Exception('wrong type or default')
> > +
> > +    def define_policy(self, policy):
> > +        self.policies[policy['name']] = policy
> > +
> > +    def assign_policy(self, peer, policy, typ):
> > +        if peer not in self.peers:
> > +            raise Exception('peer {0} not found'.format(peer.name))
> > +        name = policy['name']
> > +        if name not in self.policies:
> > +            raise Exception('policy {0} not found'.format(name))
> > +        self.peers[peer]['policies'][typ] = policy
> > +
> > +    def get_local_rib(self, peer, rf):
> > +        raise Exception('implement get_local_rib() method')
> > +
> > +    def get_global_rib(self, rf):
> > +        raise Exception('implement get_global_rib() method')
> > +
> > +    def get_neighbor_state(self, peer_id):
> > +        raise Exception('implement get_neighbor() method')
> > +
> > +    def get_reachablily(self, prefix, timeout=20):
> > +        version = netaddr.IPNetwork(prefix).version
> > +        addr = prefix.split('/')[0]
> > +        if version == 4:
> > +            ping_cmd = 'ping'
> > +        elif version == 6:
> > +            ping_cmd = 'ping6'
> > +        else:
> > +            raise Exception(
> > +                'unsupported route family: {0}'.format(version))
> > +        cmd = '/bin/bash -c "/bin/{0} -c 1 -w 1 {1} | xargs echo"'.format(
> > +            ping_cmd, addr)
> > +        interval = 1
> > +        count = 0
> > +        while True:
> > +            res = self.exec_on_ctn(cmd)
> > +            LOG.info(res)
> > +            if '1 packets received' in res and '0% packet loss' in res:
> > +                break
> > +            time.sleep(interval)
> > +            count += interval
> > +            if count >= timeout:
> > +                raise Exception('timeout')
> > +        return True
> > +
> > +    def wait_for(self, expected_state, peer, timeout=120):
> > +        interval = 1
> > +        count = 0
> > +        while True:
> > +            state = self.get_neighbor_state(peer)
> > +            LOG.info("{0}'s peer {1} state: {2}".format(self.router_id,
> > +                                                        peer.router_id,
> > +                                                        state))
> > +            if state == expected_state:
> > +                return
> > +
> > +            time.sleep(interval)
> > +            count += interval
> > +            if count >= timeout:
> > +                raise Exception('timeout')
> > +
> > +    def add_static_route(self, network, next_hop):
> > +        cmd = '/sbin/ip route add {0} via {1}'.format(network, next_hop)
> > +        self.exec_on_ctn(cmd)
> > +
> > +    def set_ipv6_forward(self):
> > +        cmd = 'sysctl -w net.ipv6.conf.all.forwarding=1'
> > +        self.exec_on_ctn(cmd)
> > +
> > +    def create_config(self):
> > +        raise Exception('implement create_config() method')
> > +
> > +    def reload_config(self):
> > +        raise Exception('implement reload_config() method')
> > diff --git a/ryu/tests/integrated/common/quagga.py b/ryu/tests/integrated/common/quagga.py
> > new file mode 100644
> > index 0000000..80c8cf2
> > --- /dev/null
> > +++ b/ryu/tests/integrated/common/quagga.py
> > @@ -0,0 +1,331 @@
> > +# Copyright (C) 2015 Nippon Telegraph and Telephone Corporation.
> > +#
> > +# This is based on the following
> > +#     https://github.com/osrg/gobgp/test/lib/quagga.py
> > +#
> > +# Licensed under the Apache License, Version 2.0 (the "License");
> > +# you may not use this file except in compliance with the License.
> > +# You may obtain a copy of the License at
> > +#
> > +#    http://www.apache.org/licenses/LICENSE-2.0
> > +#
> > +# Unless required by applicable law or agreed to in writing, software
> > +# distributed under the License is distributed on an "AS IS" BASIS,
> > +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> > +# implied.
> > +# See the License for the specific language governing permissions and
> > +# limitations under the License.
> > +
> > +from __future__ import absolute_import
> > +
> > +import logging
> > +
> > +import netaddr
> > +
> > +from . import docker_base as base
> > +
> > +LOG = logging.getLogger(__name__)
> > +
> > +
> > +class QuaggaBGPContainer(base.BGPContainer):
> > +
> > +    WAIT_FOR_BOOT = 1
> > +    SHARED_VOLUME = '/etc/quagga'
> > +
> > +    def __init__(self, name, asn, router_id, ctn_image_name, zebra=False):
> > +        super(QuaggaBGPContainer, self).__init__(name, asn, router_id,
> > +                                                 ctn_image_name)
> > +        self.shared_volumes.append((self.config_dir, self.SHARED_VOLUME))
> > +        self.zebra = zebra
> > +        self._create_config_debian()
> > +
> > +    def run(self, wait=False):
> > +        w_time = super(QuaggaBGPContainer,
> > +                       self).run(wait=wait, w_time=self.WAIT_FOR_BOOT)
> > +        return w_time
> > +
> > +    def get_global_rib(self, prefix='', rf='ipv4'):
> > +        rib = []
> > +        if prefix != '':
> > +            return self.get_global_rib_with_prefix(prefix, rf)
> > +
> > +        out = self.vtysh('show bgp {0} unicast'.format(rf), config=False)
> > +        if out.startswith('No BGP network exists'):
> > +            return rib
> > +
> > +        read_next = False
> > +
> > +        for line in out.split('\n'):
> > +            if line[:2] == '*>':
> > +                line = line[2:]
> > +                ibgp = False
> > +                if line[0] == 'i':
> > +                    line = line[1:]
> > +                    ibgp = True
> > +            elif not read_next:
> > +                continue
> > +
> > +            elems = line.split()
> > +
> > +            if len(elems) == 1:
> > +                read_next = True
> > +                prefix = elems[0]
> > +                continue
> > +            elif read_next:
> > +                nexthop = elems[0]
> > +            else:
> > +                prefix = elems[0]
> > +                nexthop = elems[1]
> > +            read_next = False
> > +
> > +            rib.append({'prefix': prefix, 'nexthop': nexthop,
> > +                        'ibgp': ibgp})
> > +
> > +        return rib
> > +
> > +    def get_global_rib_with_prefix(self, prefix, rf):
> > +        rib = []
> > +
> > +        lines = [line.strip() for line in self.vtysh(
> > +            'show bgp {0} unicast {1}'.format(rf, prefix),
> > +            config=False).split('\n')]
> > +
> > +        if lines[0] == '% Network not in table':
> > +            return rib
> > +
> > +        lines = lines[2:]
> > +
> > +        if lines[0].startswith('Not advertised'):
> > +            lines.pop(0)  # another useless line
> > +        elif lines[0].startswith('Advertised to non peer-group peers:'):
> > +            lines = lines[2:]  # other useless lines
> > +        else:
> > +            raise Exception('unknown output format {0}'.format(lines))
> > +
> > +        if lines[0] == 'Local':
> > +            aspath = []
> > +        else:
> > +            aspath = [int(asn) for asn in lines[0].split()]
> > +
> > +        nexthop = lines[1].split()[0].strip()
> > +        info = [s.strip(',') for s in lines[2].split()]
> > +        attrs = []
> > +        if 'metric' in info:
> > +            med = info[info.index('metric') + 1]
> > +            attrs.append({'type': base.BGP_ATTR_TYPE_MULTI_EXIT_DISC,
> > +                          'metric': int(med)})
> > +        if 'localpref' in info:
> > +            localpref = info[info.index('localpref') + 1]
> > +            attrs.append({'type': base.BGP_ATTR_TYPE_LOCAL_PREF,
> > +                          'value': int(localpref)})
> > +
> > +        rib.append({'prefix': prefix, 'nexthop': nexthop,
> > +                    'aspath': aspath, 'attrs': attrs})
> > +
> > +        return rib
> > +
> > +    def get_neighbor_state(self, peer):
> > +        if peer not in self.peers:
> > +            raise Exception('not found peer {0}'.format(peer.router_id))
> > +
> > +        neigh_addr = self.peers[peer]['neigh_addr'].split('/')[0]
> > +
> > +        info = [l.strip() for l in self.vtysh(
> > +            'show bgp neighbors {0}'.format(neigh_addr),
> > +            config=False).split('\n')]
> > +
> > +        if not info[0].startswith('BGP neighbor is'):
> > +            raise Exception('unknown format')
> > +
> > +        idx1 = info[0].index('BGP neighbor is ')
> > +        idx2 = info[0].index(',')
> > +        n_addr = info[0][idx1 + len('BGP neighbor is '):idx2]
> > +        if n_addr == neigh_addr:
> > +            idx1 = info[2].index('= ')
> > +            state = info[2][idx1 + len('= '):]
> > +            if state.startswith('Idle'):
> > +                return base.BGP_FSM_IDLE
> > +            elif state.startswith('Active'):
> > +                return base.BGP_FSM_ACTIVE
> > +            elif state.startswith('Established'):
> > +                return base.BGP_FSM_ESTABLISHED
> > +            else:
> > +                return state
> > +
> > +        raise Exception('not found peer {0}'.format(peer.router_id))
> > +
> > +    def send_route_refresh(self):
> > +        self.vtysh('clear ip bgp * soft', config=False)
> > +
> > +    def create_config(self):
> > +        zebra = 'no'
> > +        self._create_config_bgp()
> > +        if self.zebra:
> > +            zebra = 'yes'
> > +            self._create_config_zebra()
> > +        self._create_config_daemons(zebra)
> > +
> > +    def _create_config_debian(self):
> > +        c = base.CmdBuffer()
> > +        c << 'vtysh_enable=yes'
> > +        c << 'zebra_options="  --daemon -A 127.0.0.1"'
> > +        c << 'bgpd_options="   --daemon -A 127.0.0.1"'
> > +        c << 'ospfd_options="  --daemon -A 127.0.0.1"'
> > +        c << 'ospf6d_options=" --daemon -A ::1"'
> > +        c << 'ripd_options="   --daemon -A 127.0.0.1"'
> > +        c << 'ripngd_options=" --daemon -A ::1"'
> > +        c << 'isisd_options="  --daemon -A 127.0.0.1"'
> > +        c << 'babeld_options=" --daemon -A 127.0.0.1"'
> > +        c << 'watchquagga_enable=yes'
> > +        c << 'watchquagga_options=(--daemon)'
> > +        with open('{0}/debian.conf'.format(self.config_dir), 'w') as f:
> > +            LOG.info('[{0}\'s new config]'.format(self.name))
> > +            LOG.info(str(c))
> > +            f.writelines(str(c))
> > +
> > +    def _create_config_daemons(self, zebra='no'):
> > +        c = base.CmdBuffer()
> > +        c << 'zebra=%s' % zebra
> > +        c << 'bgpd=yes'
> > +        c << 'ospfd=no'
> > +        c << 'ospf6d=no'
> > +        c << 'ripd=no'
> > +        c << 'ripngd=no'
> > +        c << 'isisd=no'
> > +        c << 'babeld=no'
> > +        with open('{0}/daemons'.format(self.config_dir), 'w') as f:
> > +            LOG.info('[{0}\'s new config]'.format(self.name))
> > +            LOG.info(str(c))
> > +            f.writelines(str(c))
> > +
> > +    def _create_config_bgp(self):
> > +
> > +        c = base.CmdBuffer()
> > +        c << 'hostname bgpd'
> > +        c << 'password zebra'
> > +        c << 'router bgp {0}'.format(self.asn)
> > +        c << 'bgp router-id {0}'.format(self.router_id)
> > +        if any(info['graceful_restart'] for info in self.peers.values()):
> > +            c << 'bgp graceful-restart'
> > +
> > +        version = 4
> > +        for peer, info in self.peers.items():
> > +            version = netaddr.IPNetwork(info['neigh_addr']).version
> > +            n_addr = info['neigh_addr'].split('/')[0]
> > +            if version == 6:
> > +                c << 'no bgp default ipv4-unicast'
> > +
> > +            c << 'neighbor {0} remote-as {1}'.format(n_addr, peer.asn)
> > +            if info['is_rs_client']:
> > +                c << 'neighbor {0} route-server-client'.format(n_addr)
> > +            for typ, p in info['policies'].items():
> > +                c << 'neighbor {0} route-map {1} {2}'.format(n_addr, p['name'],
> > +                                                             typ)
> > +            if info['passwd']:
> > +                c << 'neighbor {0} password {1}'.format(n_addr, info['passwd'])
> > +            if info['passive']:
> > +                c << 'neighbor {0} passive'.format(n_addr)
> > +            if version == 6:
> > +                c << 'address-family ipv6 unicast'
> > +                c << 'neighbor {0} activate'.format(n_addr)
> > +                c << 'exit-address-family'
> > +
> > +        for route in self.routes.values():
> > +            if route['rf'] == 'ipv4':
> > +                c << 'network {0}'.format(route['prefix'])
> > +            elif route['rf'] == 'ipv6':
> > +                c << 'address-family ipv6 unicast'
> > +                c << 'network {0}'.format(route['prefix'])
> > +                c << 'exit-address-family'
> > +            else:
> > +                raise Exception(
> > +                    'unsupported route family: {0}'.format(route['rf']))
> > +
> > +        if self.zebra:
> > +            if version == 6:
> > +                c << 'address-family ipv6 unicast'
> > +                c << 'redistribute connected'
> > +                c << 'exit-address-family'
> > +            else:
> > +                c << 'redistribute connected'
> > +
> > +        for name, policy in self.policies.items():
> > +            c << 'access-list {0} {1} {2}'.format(name, policy['type'],
> > +                                                  policy['match'])
> > +            c << 'route-map {0} permit 10'.format(name)
> > +            c << 'match ip address {0}'.format(name)
> > +            c << 'set metric {0}'.format(policy['med'])
> > +
> > +        c << 'debug bgp as4'
> > +        c << 'debug bgp fsm'
> > +        c << 'debug bgp updates'
> > +        c << 'debug bgp events'
> > +        c << 'log file {0}/bgpd.log'.format(self.SHARED_VOLUME)
> > +
> > +        with open('{0}/bgpd.conf'.format(self.config_dir), 'w') as f:
> > +            LOG.info('[{0}\'s new config]'.format(self.name))
> > +            LOG.info(str(c))
> > +            f.writelines(str(c))
> > +
> > +    def _create_config_zebra(self):
> > +        c = base.CmdBuffer()
> > +        c << 'hostname zebra'
> > +        c << 'password zebra'
> > +        c << 'log file {0}/zebra.log'.format(self.SHARED_VOLUME)
> > +        c << 'debug zebra packet'
> > +        c << 'debug zebra kernel'
> > +        c << 'debug zebra rib'
> > +        c << ''
> > +
> > +        with open('{0}/zebra.conf'.format(self.config_dir), 'w') as f:
> > +            LOG.info('[{0}\'s new config]'.format(self.name))
> > +            LOG.info(str(c))
> > +            f.writelines(str(c))
> > +
> > +    def vtysh(self, cmd, config=True):
> > +        if type(cmd) is not list:
> > +            cmd = [cmd]
> > +        cmd = ' '.join("-c '{0}'".format(c) for c in cmd)
> > +        if config:
> > +            return self.exec_on_ctn(
> > +                "vtysh -d bgpd -c 'en' -c 'conf t' -c "
> > +                "'router bgp {0}' {1}".format(self.asn, cmd),
> > +                capture=True)
> > +        else:
> > +            return self.exec_on_ctn("vtysh -d bgpd {0}".format(cmd),
> > +                                    capture=True)
> > +
> > +    def reload_config(self):
> > +        daemon = []
> > +        daemon.append('bgpd')
> > +        if self.zebra:
> > +            daemon.append('zebra')
> > +        for d in daemon:
> > +            cmd = '/usr/bin/pkill {0} -SIGHUP'.format(d)
> > +            self.exec_on_ctn(cmd, capture=True)
> > +
> > +
> > +class RawQuaggaBGPContainer(QuaggaBGPContainer):
> > +    def __init__(self, name, config, ctn_image_name,
> > +                 zebra=False):
> > +        asn = None
> > +        router_id = None
> > +        for line in config.split('\n'):
> > +            line = line.strip()
> > +            if line.startswith('router bgp'):
> > +                asn = int(line[len('router bgp'):].strip())
> > +            if line.startswith('bgp router-id'):
> > +                router_id = line[len('bgp router-id'):].strip()
> > +        if not asn:
> > +            raise Exception('asn not in quagga config')
> > +        if not router_id:
> > +            raise Exception('router-id not in quagga config')
> > +        self.config = config
> > +        super(RawQuaggaBGPContainer, self).__init__(name, asn, router_id,
> > +                                                    ctn_image_name, zebra)
> > +
> > +    def create_config(self):
> > +        with open('{0}/bgpd.conf'.format(self.config_dir), 'w') as f:
> > +            LOG.info('[{0}\'s new config]'.format(self.name))
> > +            LOG.info(self.config)
> > +            f.writelines(self.config)
> > diff --git a/ryu/tests/integrated/common/ryubgp.py b/ryu/tests/integrated/common/ryubgp.py
> > new file mode 100644
> > index 0000000..376bf74
> > --- /dev/null
> > +++ b/ryu/tests/integrated/common/ryubgp.py
> > @@ -0,0 +1,206 @@
> > +# Copyright (C) 2016 Nippon Telegraph and Telephone Corporation.
> > +#
> > +# Licensed under the Apache License, Version 2.0 (the "License");
> > +# you may not use this file except in compliance with the License.
> > +# You may obtain a copy of the License at
> > +#
> > +#    http://www.apache.org/licenses/LICENSE-2.0
> > +#
> > +# Unless required by applicable law or agreed to in writing, software
> > +# distributed under the License is distributed on an "AS IS" BASIS,
> > +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> > +# implied.
> > +# See the License for the specific language governing permissions and
> > +# limitations under the License.
> > +
> > +from __future__ import absolute_import
> > +
> > +import logging
> > +import time
> > +
> > +from . import docker_base as base
> > +
> > +LOG = logging.getLogger(__name__)
> > +
> > +
> > +class RyuBGPContainer(base.BGPContainer):
> > +
> > +    WAIT_FOR_BOOT = 1
> > +    SHARED_VOLUME = '/etc/ryu'
> > +
> > +    def __init__(self, name, asn, router_id, ctn_image_name):
> > +        super(RyuBGPContainer, self).__init__(name, asn, router_id,
> > +                                              ctn_image_name)
> > +        self.RYU_CONF = self.config_dir + '/ryu.conf'
> > +        self.SHARED_RYU_CONF = self.SHARED_VOLUME + '/ryu.conf'
> > +        self.SHARED_BGP_CONF = self.SHARED_VOLUME + '/bgp_conf.py'
> > +        self.shared_volumes.append((self.config_dir, self.SHARED_VOLUME))
> > +
> > +    def _create_config_ryu(self):
> > +        c = base.CmdBuffer()
> > +        c << '[DEFAULT]'
> > +        c << 'verbose=True'
> > +        c << 'log_file=/etc/ryu/manager.log'
> > +        c << 'bgp_config_file=' + self.SHARED_BGP_CONF
> > +        with open(self.RYU_CONF, 'w') as f:
> > +            LOG.info("[%s's new config]" % self.name)
> > +            LOG.info(str(c))
> > +            f.writelines(str(c))
> > +
> > +    def _create_config_ryu_bgp(self):
> > +        c = base.CmdBuffer()
> > +        c << 'import os'
> > +        c << ''
> > +        c << 'BGP = {'
> > +        c << "    'routing': {"
> > +        c << "        'local_as': %s," % str(self.asn)
> > +        c << "        'router_id': '%s'," % self.router_id
> > +        c << "        'bgp_neighbors': {"
> > +        for peer, info in self.peers.items():
> > +            n_addr = info['neigh_addr'].split('/')[0]
> > +            c << "            '%s': {" % n_addr
> > +            c << "                'remote_as': %s," % str(peer.asn)
> > +            c << '            },'
> > +        c << '        },'
> > +        c << "        'networks': ["
> > +        for route in self.routes.values():
> > +            c << "            '%s'," % route['prefix']
> > +        c << "        ],"
> > +        c << "    },"
> > +        c << "}"
> > +        log_conf = """LOGGING = {
> > +
> > +    # We use python logging package for logging.
> > +    'version': 1,
> > +    'disable_existing_loggers': False,
> > +
> > +    'formatters': {
> > +        'verbose': {
> > +            'format': '%(levelname)s %(asctime)s %(module)s ' +
> > +                      '[%(process)d %(thread)d] %(message)s'
> > +        },
> > +        'simple': {
> > +            'format': '%(levelname)s %(asctime)s %(module)s %(lineno)s ' +
> > +                      '%(message)s'
> > +        },
> > +        'stats': {
> > +            'format': '%(message)s'
> > +        },
> > +    },
> > +
> > +    'handlers': {
> > +        # Outputs log to console.
> > +        'console': {
> > +            'level': 'DEBUG',
> > +            'class': 'logging.StreamHandler',
> > +            'formatter': 'simple'
> > +        },
> > +        'console_stats': {
> > +            'level': 'DEBUG',
> > +            'class': 'logging.StreamHandler',
> > +            'formatter': 'stats'
> > +        },
> > +        # Rotates log file when its size reaches 10MB.
> > +        'log_file': {
> > +            'level': 'DEBUG',
> > +            'class': 'logging.handlers.RotatingFileHandler',
> > +            'filename': os.path.join('.', 'bgpspeaker.log'),
> > +            'maxBytes': '10000000',
> > +            'formatter': 'verbose'
> > +        },
> > +        'stats_file': {
> > +            'level': 'DEBUG',
> > +            'class': 'logging.handlers.RotatingFileHandler',
> > +            'filename': os.path.join('.', 'statistics_bgps.log'),
> > +            'maxBytes': '10000000',
> > +            'formatter': 'stats'
> > +        },
> > +    },
> > +
> > +    # Fine-grained control of logging per instance.
> > +    'loggers': {
> > +        'bgpspeaker': {
> > +            'handlers': ['console', 'log_file'],
> > +            'handlers': ['console'],
> > +            'level': 'DEBUG',
> > +            'propagate': False,
> > +        },
> > +        'stats': {
> > +            'handlers': ['stats_file', 'console_stats'],
> > +            'level': 'INFO',
> > +            'propagate': False,
> > +            'formatter': 'stats',
> > +        },
> > +    },
> > +
> > +    # Root loggers.
> > +    'root': {
> > +        'handlers': ['console', 'log_file'],
> > +        'level': 'DEBUG',
> > +        'propagate': True,
> > +    },
> > +}"""
> > +        c << log_conf
> > +        with open(self.config_dir + '/bgp_conf.py', 'w') as f:
> > +            LOG.info("[%s's new config]" % self.name)
> > +            LOG.info(str(c))
> > +            f.writelines(str(c))
> > +
> > +    def create_config(self):
> > +        self._create_config_ryu()
> > +        self._create_config_ryu_bgp()
> > +
> > +    def is_running_ryu(self):
> > +        results = self.exec_on_ctn('ps ax')
> > +        running = False
> > +        for line in results.split('\n')[1:]:
> > +            if 'ryu-manager' in line:
> > +                running = True
> > +        return running
> > +
> > +    def start_ryubgp(self, check_running=True, retry=False):
> > +        if check_running:
> > +            if self.is_running_ryu():
> > +                return True
> > +        result = False
> > +        if retry:
> > +            try_times = 3
> > +        else:
> > +            try_times = 1
> > +        cmd = "ryu-manager --verbose "
> > +        cmd += "--config-file %s " % self.SHARED_RYU_CONF
> > +        cmd += "ryu.tests.integrated.bgp.ryubgp_app"
> > +        for i in range(try_times):
> > +            self.exec_on_ctn(cmd, detach=True)
> > +            if self.is_running_ryu():
> > +                result = True
> > +                break
> > +            time.sleep(1)
> > +        return result
> > +
> > +    def stop_ryubgp(self, check_running=True, retry=False):
> > +        if check_running:
> > +            if not self.is_running_ryu():
> > +                return True
> > +        result = False
> > +        if retry:
> > +            try_times = 3
> > +        else:
> > +            try_times = 1
> > +        for i in range(try_times):
> > +            cmd = '/usr/bin/pkill ryu-manager -SIGTERM'
> > +            self.exec_on_ctn(cmd)
> > +            if not self.is_running_ryu():
> > +                result = True
> > +                break
> > +            time.sleep(1)
> > +        return result
> > +
> > +    def run(self, wait=False):
> > +        w_time = super(RyuBGPContainer,
> > +                       self).run(wait=wait, w_time=self.WAIT_FOR_BOOT)
> > +        return w_time
> > +
> > +    def reload_config(self):
> > +        self.stop_ryubgp(retry=True)
> > +        self.start_ryubgp(retry=True)
> >

-- 
fumihiko kakuma <[email protected]>


