Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping load balancer IP

2016-11-29 Thread Michael Johnson
Hi Wanjing,

1. Yes, active/standby uses VRRP combined with our stickiness-table
sync between the amphorae.  However, since some clouds have had issues
with multicast, we have opted to use unicast mode between the amphora
instances (see the sketch below).
2. This is a good question, and one we have been working on.  Currently
we do not have a working solution for containers or bare metal, but we
would like to.  We are close with nova-lxd, but we have hit some bugs.
Likewise for bare metal, I would expect we could integrate with ironic
fairly easily; it just hasn't been something the team has worked on
yet.  This is an area where the project needs more work/support.
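
For illustration, a unicast keepalived setup on an amphora looks
roughly like the sketch below.  This is a hand-written sketch, not the
exact template Octavia renders, and all of the addresses are
placeholders:

vrrp_instance octavia_vrrp {
    state MASTER                   # BACKUP on the standby amphora
    interface eth1                 # the VIP-network interface in the netns
    virtual_router_id 1
    priority 100
    unicast_src_ip 100.100.100.4   # this amphora's vrrp_ip (placeholder)
    unicast_peer {
        100.100.100.5              # the peer amphora's vrrp_ip (placeholder)
    }
    virtual_ipaddress {
        100.100.100.10             # the load balancer VIP (placeholder)
    }
}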

Michael

On Mon, Nov 28, 2016 at 3:45 PM, Wanjing Xu (waxu)  wrote:
> Thanks Michael
>
> I still have the following questions:
> 1) For active/standby, do the amphora VMs in the pair really communicate
> with each other using the VRRP protocol, e.g. via a multicast VRRP IP?
> 2) How do we install/configure Octavia so that amphora instances are
> spun up as containers or on bare metal?
>
> Thanks!
>
> Wanjing
>

Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping load balancer IP

2016-11-28 Thread Wanjing Xu (waxu)
Thanks Michael

I still have the following questions:
1) For active/standby, do the amphora VMs in the pair really communicate
with each other using the VRRP protocol, e.g. via a multicast VRRP IP?
2) How do we install/configure Octavia so that amphora instances are
spun up as containers or on bare metal?

Thanks!

Wanjing


Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping load balancer IP

2016-11-10 Thread Michael Johnson
Hi Wanjing,

Yes, active/standby is available in Mitaka.  You must enable it via
the octavia.conf file.
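
For what it's worth, I believe the knob is the load balancer topology
option; something like the following in octavia.conf (double-check the
section and value against your release, this is from memory):

[controller_worker]
loadbalancer_topology = ACTIVE_STANDBY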

As for benchmarking, there has been some work done in this space (see
the octavia meeting logs from last month), but the results vary greatly
depending on how your cloud is configured and the hardware it runs on.

Michael


Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping load balancer IP

2016-11-10 Thread Wanjing Xu (waxu)
Thanks, Michael.  Now I have brought up Octavia, and I have a question:
is HA supported in Octavia, or is it yet to come?  I am using
stable/mitaka and I only see one amphora VM launched per load balancer.
And has anybody benchmarked Octavia against a vendor box?

Regards!

Wanjing


Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping load balancer IP

2016-11-07 Thread Michael Johnson
Hi Wanjing,

You are not seeing the network interfaces for the VIP and member
networks because they are inside a network namespace for security
reasons.  You can see these by issuing "sudo ip netns exec
amphora-haproxy ifconfig -a".
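
Inside the namespace you should see something along these lines (output
abridged; eth1 carries the amphora's own vipnet address and, if memory
serves, the VIP is plugged as a secondary alias, so the VIP address
below is just a placeholder):

$ sudo ip netns exec amphora-haproxy ifconfig -a
eth1      Link encap:Ethernet  HWaddr fa:16:3e:xx:xx:xx
          inet addr:100.100.100.4  Bcast:100.100.100.255  Mask:255.255.255.0
eth1:0    Link encap:Ethernet  HWaddr fa:16:3e:xx:xx:xx
          inet addr:100.100.100.10  Bcast:100.100.100.255  Mask:255.255.255.0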

I'm not sure what version of octavia and neutron you are using, but
the issue you noted about "dns_name" was fixed here:
https://review.openstack.org/#/c/337939/

Michael




Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping load balancer IP

2016-11-03 Thread Wanjing Xu (waxu)
Going through the logs, I saw the following error in o-hm:

2016-11-03 03:31:06.441 19560 ERROR octavia.controller.worker.controller_worker request_ids=request_ids)
2016-11-03 03:31:06.441 19560 ERROR octavia.controller.worker.controller_worker BadRequest: Unrecognized attribute(s) 'dns_name'
2016-11-03 03:31:06.441 19560 ERROR octavia.controller.worker.controller_worker Neutron server returns request_ids: ['req-1daed46e-ce79-471c-a0af-6d86d191eeb2']

And it seems that I need to upgrade my neutron client.  While I am
planning to do that, could somebody please point me to documentation on
how this VIP is supposed to plug into the LBaaS VM and what the
failover is about?
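
For the upgrade itself I am assuming something like the following will
do on a devstack host with pip-installed clients (please correct me if
that is the wrong approach):

sudo pip install --upgrade python-neutronclient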

Thanks!
Wanjing


From: Cisco Employee
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 2, 2016 at 7:04 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Neutron][neutron-lbaas][octavia] Unable to ping
load balancer IP

So I brought up Octavia using devstack (stable/mitaka).  I created a
load balancer and a listener (no member created yet) and started to look
at how things are connected to each other.  I can ssh to the amphora VM,
and I do see that haproxy is up with a frontend pointing to my listener.
I tried to ping the load balancer IP (from the dhcp namespace), and the
ping could not go through.  I am wondering how a packet is supposed to
reach this amphora VM.  I can see that the VM is launched on both
networks (the lb-mgmt network and my vipnet), but I don't see any NIC
associated with my vipnet:

ubuntu@amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699:~$ ifconfig -a
eth0  Link encap:Ethernet  HWaddr fa:16:3e:b4:b2:45
  inet addr:192.168.0.4  Bcast:192.168.0.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:feb4:b245/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:2496 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2626 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:307518 (307.5 KB)  TX bytes:304447 (304.4 KB)

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:212 errors:0 dropped:0 overruns:0 frame:0
  TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:18248 (18.2 KB)  TX bytes:18248 (18.2 KB)

localadmin@dmz-eth2-ucs1:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                      |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+
| 557a3de3-a32e-419d-bdf5-41d92dd2333b | amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.4; vipnet=100.100.100.4 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+

And it seems that the amphora created a port on the vipnet for its
vrrp_ip, but I am not sure how it is used and how it is supposed to
help packets reach the load balancer IP.
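
For reference, I found that port with something like this (a rough
sketch; the port ID placeholder needs to be filled in from the list
output):

$ neutron port-list | grep 100.100.100
$ neutron port-show <vrrp-port-id>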

It would be great if somebody could help with this, especially on the
network side.

Thanks
Wanjing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev