Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-26 Thread Volodymyr Litovka
eak dependencies. If this is a fully controlled
environment (nobody else can modify the volume in any way, reattach it
to another instance, or do anything else with it), what other kinds of
problems can appear in this case?


Thank you.


You can get some details from the cinder spec:

https://specs.openstack.org/openstack/cinder-specs/specs/pike/extend-attached-volume.html

And the corresponding Nova spec:

http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/nova-support-attached-volume-extend.html

You may also want to read through the mailing list thread to get into some
of the nitty-gritty details behind why certain design choices were made:

http://lists.openstack.org/pipermail/openstack-dev/2017-April/115292.html
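
For reference, a minimal sketch of the supported online-extend flow those
specs describe (assuming Pike or later; the volume UUID and new size are
placeholders):

# Extend an in-use volume without flipping its state. Requires Cinder API
# microversion 3.42; Nova is then notified via the volume-extended external
# event and grows the block device under the running guest.
cinder --os-volume-api-version 3.42 extend <volume-UUID> 20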


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-25 Thread Volodymyr Litovka

Hi Jay,

> We have had similar issues with extending attached volumes that are
> iSCSI based. In that case the VM has to be forced to rescan the SCSI bus.
>
> In this case I am not sure if there needs to be a change to Libvirt or
> to rbd or something else.
>
> I would recommend reaching out to John Bernard for help.


In fact, I'm OK with a delayed resize (upon power cycle), and it's not an
issue for me that the VM doesn't detect the change immediately. What I want
to understand is whether the changes to Cinder (and, thus, the underlying
changes to CEPH) are safe for the VM while it's in the active state.
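
(As an aside, for the iSCSI case mentioned above, the guest-side rescan is
usually something like the following sketch; device and host paths are
placeholders for the actual disk:)

# Re-read the size of an already-attached disk from inside the guest:
echo 1 | sudo tee /sys/class/block/sdb/device/rescan
# or rescan the whole SCSI host to pick up new or changed LUNs:
echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan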


Hopefully, Jon will help with this question.

Thank you!

On 6/23/18 8:41 PM, Jay Bryant wrote:



On Sat, Jun 23, 2018, 9:39 AM Volodymyr Litovka <doka...@gmx.com> wrote:


Dear friends,

I did some tests with making a volume available without stopping the VM.
I'm using CEPH, and these steps produce the following results:

1) openstack volume set --state available [UUID]
- nothing changed inside either the VM (volume is still connected) or CEPH
2) openstack volume set --size [new size] --state in-use [UUID]
- nothing changed inside the VM (volume is still connected and has the
old size)
- the size of the CEPH volume changed to the new value
3) during these operations I was copying a lot of data from an external
source, and all md5 sums are the same on both the VM and the source
4) changes on the VM happen upon any kind of power cycle (e.g. reboot,
either soft or hard: openstack server reboot [--hard] [VM uuid])
- note: NOT after 'reboot' from inside the VM

It seems that all these manipulations with Cinder just update internal
parameters of the Cinder/CEPH subsystems, without immediate effect on
VMs. Is it safe to use this mechanism in this particular environment
(i.e. CEPH as backend)?

From a practical point of view, it's useful when somebody, for example,
updates a project in batch mode and will then manually reboot every VM
affected by the update at an appropriate time, with minimized downtime
(it's just a reboot, not a manual stop/update/start).

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[openstack-dev] [cinder] making volume available without stopping VM

2018-06-23 Thread Volodymyr Litovka

Dear friends,

I did some tests with making a volume available without stopping the VM.
I'm using CEPH, and these steps produce the following results:


1) openstack volume set --state available [UUID]
- nothing changed inside either the VM (volume is still connected) or CEPH
2) openstack volume set --size [new size] --state in-use [UUID]
- nothing changed inside the VM (volume is still connected and has the old size)
- the size of the CEPH volume changed to the new value
3) during these operations I was copying a lot of data from an external
source, and all md5 sums are the same on both the VM and the source
4) changes on the VM happen upon any kind of power cycle (e.g. reboot,
either soft or hard: openstack server reboot [--hard] [VM uuid])

- note: NOT after 'reboot' from inside the VM

It seems that all these manipulations with Cinder just update internal
parameters of the Cinder/CEPH subsystems, without immediate effect on VMs.
Is it safe to use this mechanism in this particular environment (i.e. CEPH
as backend)?


From a practical point of view, it's useful when somebody, for example,
updates a project in batch mode and will then manually reboot every VM
affected by the update at an appropriate time, with minimized downtime
(it's just a reboot, not a manual stop/update/start).
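
As an illustration only, that batch flow would look roughly like this (a
sketch replaying the untested state-flip trick above; the project name,
size and UUIDs are placeholders):

# Resize every volume of a project by flipping its state, then reboot the
# affected VMs later at a convenient time:
for vol in $(openstack volume list --project myproj -f value -c ID); do
    openstack volume set --state available "$vol"
    openstack volume set --size 20 --state in-use "$vol"
done
# afterwards, per affected server:
openstack server reboot --hard [VM uuid]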


Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison




[openstack-dev] [neutron] route metrics inside VR

2018-03-09 Thread Volodymyr Litovka

Dear colleagues,

for some reasons (see the explanation below), I'm trying to deploy the
following network configuration:

                 Network
 +----------------------------------------+
   Subnet-1                      Subnet-2
 +----------+                  +----------+
      |                             |
      |            +----+           |
      +------------+ VR +-----------+
      |            +----+
  +---+--+
  |  VM  |
  +------+

where VR is Neutron's virtual router, connected to two subnets which belong
to the same network:

Subnet-1 is the "LAN" interface (25.0.0.1/8), connected to qr-64c53cf8-d9
Subnet-2 is the external gateway (51.x.x.x), connected to qg-16bdddb1-d5,
with SNAT enabled


The reason why I'm trying to use this configuration is pretty simple - it
allows switching a VM between different address scopes (e.g. "grey" and
"white") while preserving its port/MAC (which is created in the "Network"
and remains there while I'm switching the VM between different subnets).


Such a configuration produces the following list of commands when creating the VR:

14:45:18.043 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 
'ip', '-4', 'addr', 'add', '25.0.0.1/8', 'scope', 'global', 'dev', 
'qr-64c53cf8-d9', 'brd', '25.255.255.255']
14:45:19.815 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 
'ip', '-4', 'addr', 'add', '51.x.x.x/24', 'scope', 'global', 'dev', 
'qg-16bdddb1-d5', 'brd', '51.x.x.255']
14:45:20.283 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 
'ip', '-4', 'route', 'replace', '25.0.0.0/8', 'dev', 'qg-16bdddb1-d5', 
'scope', 'link']
14:45:20.919 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 
'ip', '-4', 'route', 'replace', 'default', 'via', '51.x.x.254', 'dev', 
'qg-16bdddb1-d5']


Since 25/8 is an extra subnet of "Network", Neutron installs this entry
(using 'ip route replace') despite the fact that there should be a
connected route (via qr-64c53cf8-d9).


Due to the current implementation, all traffic from the VR to the directly
connected "Subnet-1" goes over "Subnet-2" (through NAT) and, thus, a VM in
Subnet-1 can't access the VR - it "pings" the local address (25.0.0.1)
while replies return from another (NAT) address.


Could this behaviour be safely changed by using "ip route add [...] metric
<N>" instead of "ip route replace"?
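
(Concretely, the change I have in mind would look roughly like this sketch -
the interface names are taken from the log above, and the metric value is
arbitrary:)

# Add the gateway-side route with a worse metric instead of replacing the
# connected route via the LAN port (which keeps the default metric 0):
ip netns exec qrouter-UUID ip -4 route add 25.0.0.0/8 dev qg-16bdddb1-d5 scope link metric 200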


Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison




[openstack-dev] [Heat] [Octavia] Re: [Bug 1737567] Re: Direct support for Octavia LBaaS API

2017-12-13 Thread Volodymyr Litovka

Hi Rabi,

see below

On 12/13/17 11:03 AM, Rabi Mishra wrote:


if Heat will provide a way to choose the provider (neutron-lbaas or
octavia), then customers will continue to use neutron-lbaas as long as it
is required, with their specific drivers (haproxy, F5, A10, etc),
gradually migrating to Octavia when the time comes.

Heat already provides that, though it uses the neutron-lbaas API extensions
and not the Octavia API (you have to set the service provider in the lbaas
config, e.g. service_provider =
LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default).


As said:

Octavia is properly returning a 409 HTTP
status code telling the caller that the load balancer is in an
immutable state and the user should try again.

The issue is neutron-lbaas has some fundamental issues with its
object locking that would require a full re-write to correct.
neutron-lbaas is not using transactions and locking correctly, so it
is allowing your second request through even though the load balancer
is/should be locked on the first request.


which means that neutron-lbaas is unsuitable for automated operations (high
operation rates) on the same objects if, for some reason, the provider asks
for a delay. I confirm the issue described here -
http://lists.openstack.org/pipermail/openstack-dev/2017-July/120145.html.
It's not Heat's issue, it's a neutron-lbaas issue, and while neutron-lbaas
has this kind of problem, relying on it is undesirable.
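
(For completeness, the usual way automation works around the
immutable-state 409s on the Octavia side is to poll provisioning_status
between mutating calls; a sketch, with the load balancer name as a
placeholder:)

# Wait for the load balancer to leave PENDING_* before the next change,
# instead of hammering the API and collecting 409s:
while [ "$(openstack loadbalancer show lb1 -f value -c provisioning_status)" != "ACTIVE" ]; do
    sleep 2
done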



We would probably not like to have the logic in the resources to call
two different api endpoints based on the 'provider' choice in resource
properties and then provide more functionality for the ones using
'octavia'.


What I'm talking about is not replacing existing resources or expanding
functionality, but providing the same functionality with the same set of
resources using two different providers. It's, IMHO, the easiest and
fastest way to start supporting Octavia and to work around the current
neutron-lbaas issues.


Yes, Octavia provides a superset of the current neutron-lbaas API, and what
I said above doesn't cancel the idea of creating another set of resources.
If Heat provides a basic set of functions within the basic LBaaS framework
and, sometimes, a richer set within an "NG LBaaS" framework - the only
thing I can say is: it will be great.


Thanks.

https://bugs.launchpad.net/heat/+bug/1737567


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[openstack-dev] [octavia] API v2 or v1 or both?

2017-12-04 Thread Volodymyr Litovka

Hi colleagues,

when I use, in the [api_settings] section of octavia.conf,

api_v1_enabled = False
api_v2_enabled = True

member creation fails:

2017-12-04 22:20:37.326 7199 INFO neutron_lbaas.services.loadbalancer.plugin [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] Calling driver operation LoadBalancerManager.create
2017-12-04 22:20:37.326 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] url = http://lagavulin:9876/v1/loadbalancers request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:138
2017-12-04 22:20:37.327 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] args = {"vip": {"subnet_id": "1748a90d-25bc-46a0-b623-d8450db83ed1", "port_id": "f773321c-eb34-41fa-8434-45358e6232fb", "ip_address": "10.1.1.12"}, "name": "nbt-balancer", "project_id": "c1114776e144400da17d8e060856be8c", "enabled": true, "id": "25604d3c-1714-4837-ad98-8ea2d2f03bc7", "description": ""} request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:139
2017-12-04 22:20:37.337 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] Octavia Response Code: 405 request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:144
2017-12-04 22:20:37.338 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] Octavia Response Body:
<html>
 <head>
  <title>405 Method Not Allowed</title>
 </head>
 <body>
  <h1>405 Method Not Allowed</h1>
  The method POST is not allowed for this resource.
 </body>
</html>
 request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:145


When I have both v1 and v2 enabled, I see ALL calls to the API going to
http://...:9876/_v1_/... and corresponding log records like

2017-12-04 18:12:01.676 27192 INFO octavia.api._v1_.controllers.*

does "v1" means that neutron use LBaaS API v1 instead of v2 ? Neutron 
itself configured to use LBaaS v2 :


[DEFAULT]
service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

[service_providers]
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default


Is this enough, or how should I configure both neutron and octavia to use
API v2 instead of v1? I'm on Ubuntu 16 and OpenStack Pike, with just the
"python-neutron-lbaas" package installed (no lbaasv2-agent, lbaas-common,
etc) and octavia itself (via "pip install octavia" in a virtual
environment). Thus, the endpoints are configured unversioned, in this way:


+----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------+
| ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                    |
+----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------+
| 18862b1bd4c643aca207f8b2d9066895 | RegionOne | octavia      | load-balancer  | True    | internal  | http://lagavulin:9876/ |
| 8cb62ab73fb2431dbc3a0def744852ea | RegionOne | octavia      | load-balancer  | True    | public    | http://lagavulin:9876/ |
| 909e9fc434cb4667bb828828bf49f906 | RegionOne | octavia      | load-balancer  | True    | admin     | http://lagavulin:9876/ |
+----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------+

and there is no neutron lbaas agent in the list of agents (since no lbaasv2
agent is installed and configured on the system).


Seems I'm missing something.
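
(One quick check that might help here - the unversioned API root should
report which versions the endpoint actually serves; a sketch, assuming the
endpoint above:)

# Ask the Octavia API root which versions it exposes:
curl -s http://lagavulin:9876/ | python -m json.tool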

And two related questions:

 * which method of providing the API is preferable - WSGI or a standalone
   octavia-api process?
 * how can I be sure that the Octavia code matches the Pike version? I'm
   using "pip install octavia", which is v1.0.1 at the moment, but I'm not
   sure it matches package "python-neutron-lbaas" version
   2:11.0.0-0ubuntu1~cloud0

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[openstack-dev] [octavia] [heat] errors during loadbalancer creation

2017-11-29 Thread Volodymyr Litovka
 heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155
2017-11-29 12:04:36.017 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214
2017-11-29 12:04:38.763 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155
2017-11-29 12:04:39.763 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214
2017-11-29 12:04:39.891 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155
2017-11-29 12:04:40.892 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214
2017-11-29 12:04:41.013 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155
2017-11-29 12:04:42.013 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214
2017-11-29 12:04:42.277 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] complete step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:220


The Heat template for the load balancer is the following:

  balancer:
    type: OS::Neutron::LBaaS::LoadBalancer
    properties:
      name: nbt-balancer
      vip_subnet: { get_resource: lan-subnet }

  listener:
    type: OS::Neutron::LBaaS::Listener
    properties:
      name: nbt-listener
      protocol: TCP
      protocol_port: { get_param: lb_port }
      loadbalancer: { get_resource: balancer }

  pool:
    type: OS::Neutron::LBaaS::Pool
    properties:
      name: nbt-pool
      protocol: TCP
      lb_algorithm: ROUND_ROBIN
      listener: { get_resource: listener }

  pm1:
    type: OS::Neutron::LBaaS::PoolMember
    properties:
      address: { get_attr: [ n1, first_address ] }
      pool: { get_resource: pool }
      protocol_port: { get_param: pool_port }
      subnet: { get_resource: lan-subnet }

  pm2:
    type: OS::Neutron::LBaaS::PoolMember
    properties:
      address: { get_attr: [ n2, first_address ] }
      pool: { get_resource: pool }
      protocol_port: { get_param: pool_port }
      subnet: { get_resource: lan-subnet }

and, of course, servers n1 and n2 exist and are operational.

I would appreciate it if you could take a look at this issue and give some
feedback. I can provide any related information needed to clarify it.


Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[openstack-dev] [octavia] openstack CLI output don't match neutron lbaas-* output

2017-11-29 Thread Volodymyr Litovka
7956-7c54-448a-96a7-709905c2bf4f
neutron CLI is deprecated and will be removed in the future. Use 
openstack CLI instead.

+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| admin_state_up   | True                                           |
| delay            | 5                                              |
| id               | 71cc7956-7c54-448a-96a7-709905c2bf4f           |
| max_retries      | 3                                              |
| max_retries_down | 3                                              |
| name             |                                                |
| pools            | {"id": "e106e039-af27-4cfa-baa2-7238acd3078e"} |
| tenant_id        | c1114776e144400da17d8e060856be8c               |
| timeout          | 1                                              |
| type             | PING                                           |
+------------------+------------------------------------------------+

but the openstack CLI extension thinks differently:

doka@lagavulin(admin@bush):~/heat$ openstack loadbalancer healthmonitor list

doka@lagavulin(admin@bush):~/heat$ openstack loadbalancer healthmonitor show 71cc7956-7c54-448a-96a7-709905c2bf4f

Unable to locate 71cc7956-7c54-448a-96a7-709905c2bf4f in healthmonitors

Just to let you know - and can this impact something else?

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [openstack-dev] [octavia] connection to external network

2017-11-28 Thread Volodymyr Litovka

Hi colleagues,

found the solution - it needs to be done manually; there is no
corresponding Octavia configuration responsible for this.
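
(For the archive, the manual step is roughly the following sketch - the
external network name and IDs are placeholders; the VIP port ID can be
taken from "openstack loadbalancer show":)

# Attach a floating IP from the external network to the LB's VIP port:
openstack floating ip create ext-net
openstack floating ip set --port <vip_port_id> <floating_ip>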


Everything works, thank you :-)


On 11/27/17 11:30 AM, Volodymyr Litovka wrote:


Hello colleagues,

I think I'm missing something architectural in LBaaS / Octavia, thus
asking here - how do I connect the Amphora agent to an external network?
My current lab topology is the following:


    mgmt-net            nbt-subnet           external
        +                    +
        |                    |   +----+
        |   +---------+      +---+ n1 |
        +---+ Amphora +------+   +----+
        |   +---------+      |   +----+
        |                    +---+ n2 |
        |                    |   +----+
  +-----+------+             |   +----+          +
  | Controller |             +---+ vR +----------+
  +-----+------+             |   +----+          |
        |                    |   +----+          |
        +                    +---+ n3 |          +
                                 +----+

where "Amphora" is agent which loadbalances requests between "n1" and 
"n2":


  * openstack loadbalancer create --name lb1 --vip-subnet-id nbt-subnet --project bush
  * openstack loadbalancer listener create --protocol TCP --protocol-port 80 --name lis1 lb1
  * openstack loadbalancer pool create --protocol TCP --listener lis1 --name lpool1 --lb-algorithm ROUND_ROBIN
  * openstack loadbalancer member create --protocol-port 80 --name n1 --address 1.1.1.11 lpool1
  * openstack loadbalancer member create --protocol-port 80 --name n2 --address 1.1.1.14 lpool1

Everything works (n3-sourced connections to the Amphora agent return
answers from n1 and n2 respectively, in a round-robin way), and the
question is how to connect the Amphora agent to an external network in
order to service requests from outside.


In the example above, nbt-subnet (which is the VIP network) has a virtual
router which is connected to the external network and is perfectly able to
provide e.g. a floating IP to the Amphora, but I see nothing in the octavia
config files regarding floating IP functions.


Am I missing something? Are there any ways to connect web servers in closed
(project) networks to the Internet using Octavia / LBaaS?


Thank you!

--
Volodymyr Litovka
   "Vision without Execution is Hallucination." -- Thomas Edison


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[openstack-dev] [octavia] connection to external network

2017-11-27 Thread Volodymyr Litovka

Hello colleagues,

I think I'm missing something architectural in LBaaS / Octavia, thus
asking here - how do I connect the Amphora agent to an external network?
My current lab topology is the following:


    mgmt-net            nbt-subnet           external
        +                    +
        |                    |   +----+
        |   +---------+      +---+ n1 |
        +---+ Amphora +------+   +----+
        |   +---------+      |   +----+
        |                    +---+ n2 |
        |                    |   +----+
  +-----+------+             |   +----+          +
  | Controller |             +---+ vR +----------+
  +-----+------+             |   +----+          |
        |                    |   +----+          |
        +                    +---+ n3 |          +
                                 +----+

where "Amphora" is agent which loadbalances requests between "n1" and "n2":

 * openstack loadbalancer create --name lb1 --vip-subnet-id nbt-subnet --project bush
 * openstack loadbalancer listener create --protocol TCP --protocol-port 80 --name lis1 lb1
 * openstack loadbalancer pool create --protocol TCP --listener lis1 --name lpool1 --lb-algorithm ROUND_ROBIN
 * openstack loadbalancer member create --protocol-port 80 --name n1 --address 1.1.1.11 lpool1
 * openstack loadbalancer member create --protocol-port 80 --name n2 --address 1.1.1.14 lpool1

Everything works (n3-sourced connections to the Amphora agent return
answers from n1 and n2 respectively, in a round-robin way), and the
question is how to connect the Amphora agent to an external network in
order to service requests from outside.


In the example above, nbt-subnet (which is the VIP network) has a virtual
router which is connected to the external network and is perfectly able to
provide e.g. a floating IP to the Amphora, but I see nothing in the octavia
config files regarding floating IP functions.


Am I missing something? Are there any ways to connect web servers in closed
(project) networks to the Internet using Octavia / LBaaS?


Thank you!

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [openstack-dev] [Octavia] networking issues

2017-11-08 Thread Volodymyr Litovka
Please disregard this message - I've found that part of the networking
resides in a network namespace.
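
(Specifically, a quick check inside the amphora shows it - a sketch,
assuming the namespace name used by the amphora agent:)

# The VIP-side interface lives in a separate network namespace:
sudo ip netns list
# expected: amphora-haproxy
sudo ip netns exec amphora-haproxy ip a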


On 11/7/17 5:54 PM, Volodymyr Litovka wrote:

Dear colleagues,

while trying to set up Octavia, I faced a problem with connecting the
amphora agent to the VIP network.


Environment:
Octavia 1.0.1 (installed by using "pip install")
Openstack Pike:
- Nova 16.0.1
- Neutron 11.0.1
- Keystone 12.0.0

Topology of testbed:

    mgmt-net            lbt-subnet           external
        +                    +
        |                    |   +----+
        |   +---------+      +---+ n1 |
        +---+ Amphora +------+   +----+
        |   +---------+      |   +----+
        |                    +---+ n2 |
        |                    |   +----+
  +-----+------+             |   +----+          +
  | Controller |             +---+ vR +----------+
  +-----+------+             |   +----+          |
        |                    |                   |
        +                    +                   +

Summary:

$ openstack loadbalancer create --name nlb2 --vip-subnet-id lbt-subnet
$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| 93facca0-d39a-44e0-96b6-28efc1388c2d | nlb2 | d8051a3ff3ad4c4bb380f828992b8178 | 1.1.1.16    | ACTIVE              | octavia  |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
$ openstack server list --all
+--------------------------------------+----------------------------------------------+--------+---------------------------------------------+---------+--------+
| ID                                   | Name                                         | Status | Networks                                    | Image   | Flavor |
+--------------------------------------+----------------------------------------------+--------+---------------------------------------------+---------+--------+
| 98ae591b-0270-4625-95eb-a557c1452eef | amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab | ACTIVE | lb-mgmt-net=172.16.252.28; lbt-net=1.1.1.11 | amphora |        |
| cc79ca78-b036-4d55-a4bd-5b3803ed2f9b | lb-n1                                        | ACTIVE | lbt-net=1.1.1.18                            |         | B-cup  |
| 6c43ccca-c808-44cf-974d-acdbdb4b26db | lb-n2                                        | ACTIVE | lbt-net=1.1.1.19                            |         | B-cup  |
+--------------------------------------+----------------------------------------------+--------+---------------------------------------------+---------+--------+

This output shows that the amphora agent is active with two interfaces,
connected to the management and project networks (lb-mgmt-net and lbt-net
respectively). BUT in fact there is no interface to lbt-net on the agent's
VM:


ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
[ ... ]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether d0:1c:a0:58:e0:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.28/22 brd 172.16.255.255 scope global eth0
ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$ ls /sys/class/net/
eth0  lo
ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$

The issue is that eth1 exists during the start of the agent's VM and then
magically disappears (snipped from syslog; note the timing):


Nov  7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPREQUEST of 1.1.1.11 on eth1 to 255.255.255.255 port 67 (xid=0x1c65db9b)
Nov  7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPOFFER of 1.1.1.11 from 1.1.1.10
Nov  7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPACK of 1.1.1.11 from 1.1.1.10
Nov  7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: bound to 1.1.1.11 -- renewal in 38793 seconds.

[ ... ]
Nov  7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1116]: receive_packet failed on eth1: Network is down
Nov  7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab systemd[1]: Stopping ifup for eth1...
Nov  7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Killed old client process
Nov  7 12:00:45 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Error getting hardware address for "eth1": No suc

[openstack-dev] [Octavia] networking issues

2017-11-07 Thread Volodymyr Litovka
200}


*Octavia-worker.log* is available at the following link: 
https://pastebin.com/44rwshKZ


Questions are - any ideas on what is happening, and what further
information and debugging should I gather in order to resolve this issue?


Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison
