Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-28 Thread Michael Johnson
Hi Yipei,

 

I have been meaning to add this as a config option, but in the interim you can 
disable the automatic cleanup by disabling the revert flow in taskflow:

 

In octavia/common/base_taskflow.py, at line 37, add "never_resolve=True," to the 
engine load parameters.
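
For illustration, here is a minimal, self-contained sketch of what that
option does, using taskflow directly. The demo flow and task are invented
for this example, and the exact octavia code around that line may differ:

    from taskflow import engines as tf_engines
    from taskflow.patterns import linear_flow
    from taskflow import task

    class FailingTask(task.Task):
        def execute(self):
            # Stands in for a task like PostVIPPlug failing with a 500.
            raise RuntimeError("simulated plug-vip failure")

    flow = linear_flow.Flow('demo-flow').add(FailingTask())
    eng = tf_engines.load(
        flow,
        engine='serial',
        never_resolve=True,  # the added option: never run the revert flow
    )
    eng.compile()
    eng.prepare()
    try:
        eng.run()
    except RuntimeError:
        # With never_resolve=True the failure propagates without reverting,
        # so resources created by earlier tasks (e.g. the amphora) are left
        # in place for inspection instead of being cleaned up.
        pass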

 

Michael

 

From: Yipei Niu [mailto:newy...@gmail.com] 
Sent: Monday, June 26, 2017 11:34 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [octavia] fail to plug vip to amphora

 

Hi, Michael,

 

Thanks a lot for your help, but I still have one question. 

 

In Octavia, once the controller worker fails to plug the VIP to the amphora, the 
amphora is deleted immediately, making it impossible to trace the error. How can 
I prevent Octavia from stopping and deleting the amphora?

 

Best regards,

Yipei 

 

On Mon, Jun 26, 2017 at 11:21 AM, Yipei Niu <newy...@gmail.com 
<mailto:newy...@gmail.com> > wrote:

Hi, all,

 

I am trying to create a load balancer in Octavia. The amphora can be booted 
successfully and can be reached via ICMP. However, Octavia fails to plug the VIP 
to the amphora through the amphora client API and returns a 500 status code, 
causing the errors below.

 

   |__Flow 
'octavia-create-loadbalancer-flow': InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
Traceback (most recent call last):

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py",
 line 53, in _execute_task

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
result = task.execute(**arguments)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 240, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 219, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphora, loadbalancer, amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
137, in post_vip_plug

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
net_info)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
378, in plug_vip

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
return exc.check_exception(r)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py", 
line 32, in check_exception

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
raise responses[status_code]()

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker

 

To investigate the problem, I logged into the amphora and found that an HTTP 
server process is listening on port 9443, so I think the amphora API service is 
active. However, I do not know how to further investigate what error happens 
inside the amphora API service and how to solve it. I look forward to your 
valuable comments.

 

Best regards,

Yipei 

 



Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-28 Thread Yipei Niu
Hi, Ganpat,

Thanks a lot for your comments, but I do not fully understand your solution.
Do you mean I need to create a dummy config file and verify it in the amphora?

Best regards,
Yipei


Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-28 Thread Ganpat Agarwal
Hello Yipei,

"*octavia.amphorae.backends.agent.api_server.listener [-] Failed to verify
haproxy file: Command '['haproxy', '-c', '-L', 'NK20KVuD6oi5NrRP7KOVflM*
*3MsQ', '-f',
'/var/lib/octavia/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy.cfg.new']'
returned non-zero exit status 1*"

Verification of the haproxy cfg file is failing.

You can create a dummy config file from the haproxy template files (jinja2
files) and verify it on any system with haproxy installed:

haproxy -c -f "filename"
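
As a rough sketch of that check in Python (the template filename and the
variables passed to render() are hypothetical placeholders; substitute the
real jinja2 templates from the octavia tree and the values your listener
would produce):

    import subprocess

    import jinja2

    # Render a throwaway haproxy config from a jinja2 template.
    with open('haproxy.cfg.j2') as f:
        template = jinja2.Template(f.read())
    rendered = template.render(listener_port=80, member_ips=['10.0.1.5'])

    with open('haproxy.cfg.test', 'w') as f:
        f.write(rendered)

    # Equivalent of: haproxy -c -f haproxy.cfg.test
    rc = subprocess.call(['haproxy', '-c', '-f', 'haproxy.cfg.test'])
    print('config valid' if rc == 0 else 'haproxy exited %d' % rc)

haproxy -c reports the offending section and line on stderr, which usually
points straight at whatever the template rendered wrong.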

Regards,
Ganpat

On Wed, Jun 28, 2017 at 3:19 PM, Yipei Niu  wrote:

> Hi, Michael,
>
> Thanks for your help. I have already created a load balancer successfully,
> but failed to create a listener. The errors from the amphora-agent log and
> the syslog in the amphora are as follows.
>
> In amphora-agent.log:
>
> [2017-06-28 08:54:12 +0000] [1209] [INFO] Starting gunicorn 19.7.0
> [2017-06-28 08:54:13 +0000] [1209] [DEBUG] Arbiter booted
> [2017-06-28 08:54:13 +0000] [1209] [INFO] Listening at: http://[::]:9443
> (1209)
> [2017-06-28 08:54:13 +0000] [1209] [INFO] Using worker: sync
> [2017-06-28 08:54:13 +0000] [1209] [DEBUG] 1 workers
> [2017-06-28 08:54:13 +0000] [1816] [INFO] Booting worker with pid: 1816
> [2017-06-28 08:54:15 +0000] [1816] [DEBUG] POST /0.5/plug/vip/10.0.1.8
> ::ffff:192.168.0.12 - - [28/Jun/2017:08:54:59 +0000] "POST /0.5/plug/vip/
> 10.0.1.8 HTTP/1.1" 202 78 "-" "Octavia HaProxy Rest Client/0.5 (
> https://wiki.openstack.org/wiki/Octavia)"
> [2017-06-28 08:59:18 +0000] [1816] [DEBUG] PUT
> /0.5/listeners/9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4/
> bca2c985-471a-4477-8217-92fa71d04cb7/haproxy
> ::ffff:192.168.0.12 - - [28/Jun/2017:08:59:19 +0000] "PUT
> /0.5/listeners/9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4/
> bca2c985-471a-4477-8217-92fa71d04cb7/haproxy HTTP/1.1" 400 414 "-"
> "Octavia HaProxy Rest Client/0.5 (https://wiki.openstack.org/wiki/Octavia
> )"
>
> In syslog:
>
> Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
> #
> Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
> -----BEGIN SSH HOST KEY FINGERPRINTS-----
> Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 1024
> SHA256:qDQcKq2Je/CzlpPndccMf0aR0u/KPJEEIAl4RraAgVc
> root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (DSA)
> Jun 28 08:57:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 256
> SHA256:n+5tCCdJwASMaD/kJ6fm0kVNvXDh4aO0si2Uls4MXkI
> root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (ECDSA)
> Jun 28 08:57:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 256
> SHA256:7RWMBOW+QKzeolI6BDSpav9dVZuon58weIQJ9/peVxE
> root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (ED25519)
> Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 2048
> SHA256:9z+EcAAUyTENKJRctKCzPslK6Yf4c7s9R8sEflDITIU
> root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (RSA)
> Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
> -----END SSH HOST KEY FINGERPRINTS-----
> Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
> #
> Jun 28 08:57:17 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
> cloud-init[2092]: Cloud-init v. 0.7.9 running 'modules:final' at Wed, 28
> Jun 2017 08:57:03 +0000. Up 713.82 seconds.
> Jun 28 08:57:17 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
> cloud-init[2092]: Cloud-init v. 0.7.9 finished at Wed, 28 Jun 2017 08:57:16
> +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up
> 727.30 seconds
> Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
> Started Execute cloud user/final scripts.
> Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
> Reached target Cloud-init target.
> Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
> Startup finished in 52.054s (kernel) + 11min 17.647s (userspace) = 12min
> 9.702s.
> Jun 28 08:59:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
> amphora-agent[1209]: 2017-06-28 08:59:19.243 1816 ERROR
> octavia.amphorae.backends.agent.api_server.listener [-] Failed to verify
> haproxy file: Command '['haproxy', '-c', '-L', 'NK20KVuD6oi5NrRP7KOVflM
> 3MsQ', '-f', 
> '/var/lib/octavia/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy.cfg.new']'
> returned non-zero exit status 1
> Jun 28 09:00:11 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
> Starting Cleanup of Temporary Directories...
> Jun 28 09:00:12 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
> systemd-tmpfiles[3040]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line
> for path "/var/log", ignoring.
> Jun 28 09:00:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
> Started Cleanup of Temporary Directories.
>
> I look forward to your valuable comments.
>
> Best regards,
> Yipei
>
> On Tue, Jun 27, 2017 at 2:33 PM, Yipei Niu  wrote:
>
>> Hi, Michael,
>>
>> Thanks a lot for your help, but I still have one question.
>>
>> In Octavia, once 

Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-28 Thread Yipei Niu
Hi, Michael,

Thanks for your help. I have already created a load balancer successfully,
but failed to create a listener. The errors from the amphora-agent log and
the syslog in the amphora are as follows.

In amphora-agent.log:

[2017-06-28 08:54:12 +0000] [1209] [INFO] Starting gunicorn 19.7.0
[2017-06-28 08:54:13 +0000] [1209] [DEBUG] Arbiter booted
[2017-06-28 08:54:13 +0000] [1209] [INFO] Listening at: http://[::]:9443
(1209)
[2017-06-28 08:54:13 +0000] [1209] [INFO] Using worker: sync
[2017-06-28 08:54:13 +0000] [1209] [DEBUG] 1 workers
[2017-06-28 08:54:13 +0000] [1816] [INFO] Booting worker with pid: 1816
[2017-06-28 08:54:15 +0000] [1816] [DEBUG] POST /0.5/plug/vip/10.0.1.8
::ffff:192.168.0.12 - - [28/Jun/2017:08:54:59 +0000] "POST /0.5/plug/vip/
10.0.1.8 HTTP/1.1" 202 78 "-" "Octavia HaProxy Rest Client/0.5 (
https://wiki.openstack.org/wiki/Octavia)"
[2017-06-28 08:59:18 +0000] [1816] [DEBUG] PUT
/0.5/listeners/9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy
::ffff:192.168.0.12 - - [28/Jun/2017:08:59:19 +0000] "PUT
/0.5/listeners/9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy
HTTP/1.1" 400 414 "-" "Octavia HaProxy Rest Client/0.5 (
https://wiki.openstack.org/wiki/Octavia)"

In syslog:

Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
#
Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
-----BEGIN SSH HOST KEY FINGERPRINTS-----
Jun 28 08:57:14 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 1024
SHA256:qDQcKq2Je/CzlpPndccMf0aR0u/KPJEEIAl4RraAgVc
root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (DSA)
Jun 28 08:57:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 256
SHA256:n+5tCCdJwASMaD/kJ6fm0kVNvXDh4aO0si2Uls4MXkI
root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (ECDSA)
Jun 28 08:57:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 256
SHA256:7RWMBOW+QKzeolI6BDSpav9dVZuon58weIQJ9/peVxE
root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (ED25519)
Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: 2048
SHA256:9z+EcAAUyTENKJRctKCzPslK6Yf4c7s9R8sEflDITIU
root@amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 (RSA)
Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2: -----END
SSH HOST KEY FINGERPRINTS-----
Jun 28 08:57:16 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 ec2:
#
Jun 28 08:57:17 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
cloud-init[2092]: Cloud-init v. 0.7.9 running 'modules:final' at Wed, 28
Jun 2017 08:57:03 +0000. Up 713.82 seconds.
Jun 28 08:57:17 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
cloud-init[2092]: Cloud-init v. 0.7.9 finished at Wed, 28 Jun 2017 08:57:16
+0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up
727.30 seconds
Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
Started Execute cloud user/final scripts.
Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
Reached target Cloud-init target.
Jun 28 08:57:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
Startup finished in 52.054s (kernel) + 11min 17.647s (userspace) = 12min
9.702s.
Jun 28 08:59:19 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
amphora-agent[1209]: 2017-06-28 08:59:19.243 1816 ERROR
octavia.amphorae.backends.agent.api_server.listener [-] Failed to verify
haproxy file: Command '['haproxy', '-c', '-L', 'NK20KVuD6oi5NrRP7KOVflM
3MsQ', '-f',
'/var/lib/octavia/bca2c985-471a-4477-8217-92fa71d04cb7/haproxy.cfg.new']'
returned non-zero exit status 1
Jun 28 09:00:11 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
Starting Cleanup of Temporary Directories...
Jun 28 09:00:12 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4
systemd-tmpfiles[3040]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line
for path "/var/log", ignoring.
Jun 28 09:00:15 amphora-9ed4f0a5-6b1e-4832-97cc-fb8d1518cbd4 systemd[1]:
Started Cleanup of Temporary Directories.

I look forward to your valuable comments.

Best regards,
Yipei

On Tue, Jun 27, 2017 at 2:33 PM, Yipei Niu  wrote:

> Hi, Michael,
>
> Thanks a lot for your help, but I still have one question.
>
> In Octavia, once the controller worker fails to plug the VIP to the amphora,
> the amphora is deleted immediately, making it impossible to trace the
> error. How can I prevent Octavia from stopping and deleting the amphora?
>
> Best regards,
> Yipei
>
> On Mon, Jun 26, 2017 at 11:21 AM, Yipei Niu  wrote:
>
>> Hi, all,
>>
>> I am trying to create a load balancer in Octavia. The amphora can be
>> booted successfully and can be reached via ICMP. However, Octavia fails to
>> plug the VIP to the amphora through the amphora client API and returns a
>> 500 status code, causing the errors below.
>>
>>|__Flow
>> 'octavia-create-loadbalancer-flow': InternalServerError: Internal Server
>> Error
>> 

Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-27 Thread Yipei Niu
Hi, Michael,

Thanks a lot for your help, but I still have one question.

In Octavia, once the controller worker fails to plug the VIP to the amphora,
the amphora is deleted immediately, making it impossible to trace the
error. How can I prevent Octavia from stopping and deleting the amphora?

Best regards,
Yipei

On Mon, Jun 26, 2017 at 11:21 AM, Yipei Niu  wrote:

> Hi, all,
>
> I am trying to create a load balancer in Octavia. The amphora can be
> booted successfully and can be reached via ICMP. However, Octavia fails to
> plug the VIP to the amphora through the amphora client API and returns a
> 500 status code, causing the errors below.
>
>|__Flow
> 'octavia-create-loadbalancer-flow': InternalServerError: Internal Server
> Error
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> Traceback (most recent call last):
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
>   File "/usr/local/lib/python2.7/dist-packages/taskflow/
> engines/action_engine/executor.py", line 53, in _execute_task
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> result = task.execute(**arguments)
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
>   File 
> "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py",
> line 240, in execute
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> amphorae_network_config)
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
>   File 
> "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py",
> line 219, in execute
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> amphora, loadbalancer, amphorae_network_config)
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
>   File 
> "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py",
> line 137, in post_vip_plug
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> net_info)
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
>   File 
> "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py",
> line 378, in plug_vip
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> return exc.check_exception(r)
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
>   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py",
> line 32, in check_exception
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> raise responses[status_code]()
> 2017-06-21 09:49:35.864 25411 ERROR 
> octavia.controller.worker.controller_worker
> InternalServerError: Internal Server Error
> 2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.
> controller_worker
>
> To investigate the problem, I logged into the amphora and found that an
> HTTP server process is listening on port 9443, so I think the amphora API
> service is active. However, I do not know how to further investigate what
> error happens inside the amphora API service and how to solve it. I look
> forward to your valuable comments.
>
> Best regards,
> Yipei
>


Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-26 Thread Michael Johnson
Hello Yipei,

 

You are on the right track to debug this.

When you are logged into the amphora, please check the following logs to see 
what the amphora-agent error is:

 

/var/log/amphora-agent.log

And

/var/log/syslog

 

One of those two logs will have the error information.
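
If it helps, a small throwaway helper (my own, not part of octavia) to pull
just the ERROR lines out of those two files once you are inside the amphora:

    # Print ERROR lines from the amphora logs; run as root in the amphora.
    for path in ('/var/log/amphora-agent.log', '/var/log/syslog'):
        try:
            with open(path) as f:
                for line in f:
                    if 'ERROR' in line:
                        print(line.rstrip())
        except IOError:
            print('%s not readable' % path)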

 

Michael

 

 

From: Yipei Niu [mailto:newy...@gmail.com] 
Sent: Sunday, June 25, 2017 8:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [octavia] fail to plug vip to amphora

 

Hi, all,

 

I am trying to create a load balancer in Octavia. The amphora can be booted 
successfully and can be reached via ICMP. However, Octavia fails to plug the VIP 
to the amphora through the amphora client API and returns a 500 status code, 
causing the errors below.

 

   |__Flow 
'octavia-create-loadbalancer-flow': InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
Traceback (most recent call last):

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py",
 line 53, in _execute_task

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
result = task.execute(**arguments)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 240, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 219, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphora, loadbalancer, amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
137, in post_vip_plug

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
net_info)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
378, in plug_vip

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
return exc.check_exception(r)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py", 
line 32, in check_exception

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
raise responses[status_code]()

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker

 

To investigate the problem, I logged into the amphora and found that an HTTP 
server process is listening on port 9443, so I think the amphora API service is 
active. However, I do not know how to further investigate what error happens 
inside the amphora API service and how to solve it. I look forward to your 
valuable comments.

 

Best regards,

Yipei 
