Re: [openstack-dev] [designate] Records for floating addresses are not removed when an instance is removed

2015-11-16 Thread Jaime Fernández
Thanks Matt. The link was very useful. However, I couldn't make it work.

The event 'port.delete.end' has the following payload:

> {u'port_id': u'86580146-1772-4e32-8d7b-92bbb9131ae5'}
>
It only provides the port_id. But without the tenant_id, I cannot get the
context associated with the tenant. I've tried with all_tenants:

> context = DesignateContext.get_admin_context(all_tenants=True,
> edit_managed_records=True)
>
but it does not find any record.

Is there any option in get_admin_context that lets me delete records
without knowing the tenant_id (even though the records were created under
that tenant_id)?
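
For reference, this is roughly the kind of lookup I have in mind once the
admin context is obtained (a simplified sketch; the criterion keys and the
central_api.find_records() call are assumptions on my part, not verified
handler code):

# Sketch only: look up managed records for a deleted port without knowing
# the tenant_id. The criterion keys and central_api.find_records() are
# assumptions, not verified designate APIs.
from designate.context import DesignateContext


def find_records_for_port(central_api, port_id):
    # Admin context spanning all tenants, allowed to touch managed records.
    context = DesignateContext.get_admin_context(all_tenants=True,
                                                 edit_managed_records=True)
    criterion = {
        'managed': True,
        # Hypothetical: assumes the port_id was stored as managed metadata
        # when the record was created.
        'managed_resource_id': port_id,
    }
    return central_api.find_records(context, criterion=criterion)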

On Fri, Nov 13, 2015 at 7:51 PM, Matt Fischer <m...@mattfischer.com> wrote:

> You can do it like we did for Juno Designate, as covered in our Vancouver
> talk, starting at about 21 minutes in:
>
> https://www.youtube.com/watch?v=N8y51zqtAPA
>
> We've not ported the code to Kilo or Liberty yet but the approach may
> still work.
>
>
> On Fri, Nov 13, 2015 at 9:49 AM, Jaime Fernández <jjja...@gmail.com>
> wrote:
>
>> When removing an instance (with one floating address assigned) in
>> Horizon, designate-sink only receives an event for the instance removal. As a
>> result, only the instance's records are removed but the floating address
>> records are not.
>> I'm not sure whether it's a bug in OpenStack (I guess it should also
>> notify about the unassignment of floating addresses) or whether it should be
>> handled in the nova notification handler (
>> https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py#L72
>> ).
>> However, it is not possible to add metadata to the floating IP records to
>> save the instance_id and remove them easily when an instance is removed.
>> What's the best approach to remove the floating address records of an
>> instance that is being removed?
>>


[openstack-dev] [designate] Records for floating addresses are not removed when an instance is removed

2015-11-13 Thread Jaime Fernández
When removing an instance (with one floating address assigned) in Horizon,
designate-sink only receives an event for the instance removal. As a result,
only the instance's records are removed but the floating address records are
not.
I'm not sure whether it's a bug in OpenStack (I guess it should also notify
about the unassignment of floating addresses) or whether it should be handled
in the nova notification handler (
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py#L72
).
However, it is not possible to add metadata to the floating IP records to
save the instance_id and remove them easily when an instance is removed.
What's the best approach to remove the floating address records of an
instance that is being removed?
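
For context, the fixed-IP records created by the nova handler can be cleaned
up by filtering on their managed_* metadata; below is a simplified sketch of
that pattern (the criterion keys and the central API call signatures are my
assumptions, not the actual nova.py code). The floating IP records have no
equivalent link back to the instance_id, which is exactly the gap I'm asking
about:

# Simplified sketch of the managed-metadata cleanup pattern (not the actual
# nova.py code): delete every record that was created for a given instance.
# The criterion keys and the central API call signatures are assumptions.
def delete_instance_records(central_api, context, domain_id, instance_id):
    criterion = {
        'domain_id': domain_id,
        'managed': True,
        'managed_resource_type': 'instance',
        'managed_resource_id': instance_id,
    }
    for record in central_api.find_records(context, criterion=criterion):
        # Assumed signature: (context, domain_id, recordset_id, record_id)
        central_api.delete_record(context, domain_id,
                                  record['recordset_id'], record['id'])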


Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-29 Thread Jaime Fernández
Hi Kiall,

We haven't found the cause yet. Even after moving the host to the same
network, the error still happens :(

We decided to migrate to rabbitmq. While migrating OST to rabbitmq, I
connected to a local rabbitmq instance and saw the same problem (all the
designate processes and rabbitmq on the same host). On this occasion, the
designate-api process died after 5 hours.

Here are the logs:

2015-07-28 12:37:52.730 22487 INFO eventlet.wsgi
[req-84e3674a-860e-46bf-b7a7-b9866866152d noauth-user
4e3b6c0108f04b309737522a9deee9d8 - - -] 127.0.0.1 - - [28/Jul/2015
12:37:52] GET /v1/domains/38f8f79d-0f6c-42fb-abd7-98ae8cc87fc5/records
HTTP/1.1 200 1757 0.049479
2015-07-28 14:49:10.378 22487 INFO oslo_service.service [-] Caught SIGHUP,
exiting
2015-07-28 14:49:10.379 22487 INFO designate.service [-] Stopping api
service
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup [-] Error
stopping thread.
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup Traceback
(most recent call last):
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
line 107, in _stop_threads
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup x.stop()
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
line 48, in stop
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup
self.thread.kill()
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
line 240, in kill
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup return
kill(self, *throw_args)
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
line 294, in kill
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup
g.throw(*throw_args)
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
line 214, in main
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup result =
function(*args, **kwargs)
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/designate/service.py, line 230, in
_wsgi_handle
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup
log=loggers.WritableLogger(logger))
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/wsgi.py,
line 842, in server
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup
pool.waitall()
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenpool.py,
line 117, in waitall
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup Calling
waitall() from within one of the  \
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup
AssertionError: Calling waitall() from within one of the GreenPool's
greenthreads will never terminate.
2015-07-28 14:49:10.379 22487 ERROR oslo_service.threadgroup
2015-07-28 14:49:10.382 22487 WARNING oslo_config.cfg [-] Option logdir
from group DEFAULT is deprecated. Use option log-dir from group
DEFAULT.
2015-07-28 14:49:10.383 22487 INFO designate.service [-] Stopping api
service
2015-07-28 14:49:10.385 22487 WARNING oslo_config.cfg [-] Option
rabbit_password from group DEFAULT is deprecated. Use option
rabbit_password from group oslo_messaging_rabbit.
2015-07-28 14:49:10.386 22487 WARNING oslo_config.cfg [-] Option
rabbit_userid from group DEFAULT is deprecated. Use option
rabbit_userid from group oslo_messaging_rabbit.
2015-07-28 14:49:10.396 22487 INFO designate.service [-] Starting api
service (version: 1.0.0)
2015-07-28 14:49:40.445 22487 ERROR oslo_service.threadgroup [-] Could not
bind to 0.0.0.0:9001 after trying for 30 seconds
2015-07-28 14:49:40.445 22487 ERROR oslo_service.threadgroup Traceback
(most recent call last):
2015-07-28 14:49:40.445 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
line 154, in wait
2015-07-28 14:49:40.445 22487 ERROR oslo_service.threadgroup x.wait()
2015-07-28 14:49:40.445 22487 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
line 51, in 

Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-22 Thread Jaime Fernández
I moved the virtual machine where the designate processes are running to a
host on the same LAN as OST. Now the designate-api process does not die
any more (still using qpid). I suspect that some network problem or timeout
could be the cause. We'll keep monitoring the process to confirm it.

In any case, I understand it's a bug in designate-api or in the
oslo.messaging library.

On Tue, Jul 21, 2015 at 1:19 PM, Jaime Fernández jjja...@gmail.com wrote:

 Hi Kiall,

 It's a bit strange because only designate-api dies, while designate-sink is
 also integrated with qpid and survives.

 These issues are a bit difficult to debug because the failure is not
 deterministic. What I've just tested is using a local qpid instance, and it
 looks like designate-api is not killed any more (although it has only been
 running for a short time). We are going to move the host where the designate
 components are installed into the same VLAN as the rest of OST, just to
 check whether it's a rare issue with the network.

 Before testing with Rabbit, as you recommended, we are testing with qpid
 in the same VLAN (just to rule out the network issue).

 I will keep you updated on my progress.



Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-21 Thread Jaime Fernández
I confirm that it happened again. I started all the designate processes and,
after approximately an hour, the designate-api process died with the same
stack trace.

Restarting designate-api does not help because API requests are no longer
answered (timeouts):
2015-07-21 10:12:53.463 4403 ERROR designate.api.middleware
[req-281e9665-c49b-43e0-a5d0-9a48e5f52aa1 noauth-user noauth-project - - -]
Timed out waiting for a reply to message ID 19cda11c089f43e2a43a75a5851926c8

When the designate-api process dies, I need to restart all the designate
processes. Then the API works correctly until the process dies again.

I assume the hex dump is normal for qpid (it is debug level), although I
noticed the output was different with rabbitmq.

We have deployed designate on an Ubuntu host, as the installation
instructions recommend, and I don't think there is any security policy
stopping the service. In fact, the trace is really strange because the API
was already bound to port 9001. Our OpenStack platform is supported by
Red Hat, which is why we need to integrate with qpid.
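
As a sanity check on that bind error, a quick script like the following (just
a sketch) can tell whether anything is actually holding tcp/9001 at the
moment the error shows up:

# Quick check: can we bind tcp/9001 right now? If this fails, some process
# (an old designate-api, or a leaked socket) is still holding the port.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
    s.bind(('0.0.0.0', 9001))
    print('port 9001 is free')
except socket.error as exc:
    print('port 9001 is not available: %s' % exc)
finally:
    s.close()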

I will try a couple of different scenarios:
a) Use a local qpid instance (instead of the OST qpid instance)
b) Use a local rabbitmq instance



On Mon, Jul 20, 2015 at 5:20 PM, Kiall Mac Innes ki...@macinnes.ie wrote:

 Side Question: Is it normal for QPid to log all the
 \x0f\x01\x00\x14\x00\x01\x00\x00\x00\x00\x00's etc?

 I'm guessing that, since you're using qpid, you're also on RedHat. Could
 RH's SELinux policies be preventing the service from binding to tcp/9001?

 If you start as root, do you see similar issues?

 Thanks,
 Kiall

 On 20/07/15 15:48, Jaime Fernández wrote:
  Hi Tim,
 
  I only start one api process. In fact, when I say that the api process
  dies, there is no designate-api process left and nothing listening on
  port 9001.

  When I started all the designate processes, the API worked correctly,
  because I had tested it. But after some period of inactivity (a couple of
  hours, or a day), the designate-api process died. It is not possible
  that the process was restarted during this time.
 
  I've just started the process again and now it works. I will check if it
  dies again and report it.
 
 
  Thanks
 
  On Mon, Jul 20, 2015 at 4:24 PM, Tim Simmons tim.simm...@rackspace.com
  mailto:tim.simm...@rackspace.com wrote:
 
  Jaime,
 
 
  Usually that's the error you see if you're trying to start up
  multiple API processes. They all try and bind to port 9001, so that
  error is saying the API can't bind. So something else (I suspect
  another designate-api process, or some other type of API) is already
  listening on that port.
 
 
  Hope that helps,
 
  Tim Simmons
 
 
 
  
  *From:* Jaime Fernández jjja...@gmail.com mailto:jjja...@gmail.com
 
  *Sent:* Monday, July 20, 2015 8:54 AM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* [openstack-dev] [designate] The designate API service is
  stopped
 
  I've followed instructions to install Designate in Dev environment:
 
 http://docs.openstack.org/developer/designate/install/ubuntu-dev.html
 
  I've made some slight modifications to use qpid (instead of
  rabbitmq) and to integrate with Infoblox.
 
   What I've seen is that the designate-api process dies (the other
   processes keep running correctly). I'm not sure if the problem could
   be a network issue between designate-api and qpid.

   Here is the output with the last traces of the designate-api process:
 
  2015-07-20 14:43:37.728 727 DEBUG qpid.messaging.io.raw [-]
  READ[3f383f8]:
  '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00'
  readable
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:411
  2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-]
  RCVD[3f383f8]: ConnectionHeartbeat() write
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:651
  2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-]
  SENT[3f383f8]: ConnectionHeartbeat() write_op
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:683
  2015-07-20 14:43:37.730 727 DEBUG qpid.messaging.io.raw [-]
  SENT[3f383f8]:
  '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00'
  writeable
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:475
  Traceback (most recent call last):
File
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/hub.py,
  line 457, in fire_timers
  timer()
File
 
  
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages

Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-21 Thread Jaime Fernández
Hi Kiall,

It's a bit strange because only designate-api dies, while designate-sink is
also integrated with qpid and survives.

These issues are a bit difficult to debug because the failure is not
deterministic. What I've just tested is using a local qpid instance, and it
looks like designate-api is not killed any more (although it has only been
running for a short time). We are going to move the host where the designate
components are installed into the same VLAN as the rest of OST, just to check
whether it's a rare issue with the network.

Before testing with Rabbit, as you recommended, we are testing with qpid in
the same VLAN (just to rule out the network issue).

I will keep you updated on my progress.


[openstack-dev] [designate] The designate API service is stopped

2015-07-20 Thread Jaime Fernández
I've followed the instructions to install Designate in a dev environment:
http://docs.openstack.org/developer/designate/install/ubuntu-dev.html

I've made some slight modifications to use qpid (instead of rabbitmq) and
to integrate with Infoblox.

What I've seen is that the designate-api process dies (the other processes
keep running correctly). I'm not sure if the problem could be a network issue
between designate-api and qpid.

Here is the output with the last traces of the designate-api process:

2015-07-20 14:43:37.728 727 DEBUG qpid.messaging.io.raw [-] READ[3f383f8]:
'\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' readable
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:411
2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-] RCVD[3f383f8]:
ConnectionHeartbeat() write
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:651
2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-] SENT[3f383f8]:
ConnectionHeartbeat() write_op
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:683
2015-07-20 14:43:37.730 727 DEBUG qpid.messaging.io.raw [-] SENT[3f383f8]:
'\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' writeable
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:475
Traceback (most recent call last):
  File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/hub.py,
line 457, in fire_timers
timer()
  File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/timer.py,
line 58, in __call__
cb(*args, **kw)
  File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
line 214, in main
result = function(*args, **kwargs)
  File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/service.py,
line 623, in run_service
service.start()
  File /home/sysadmin/openstack/designate/designate/service.py, line 173,
in start
socket = self._wsgi_get_socket()
  File /home/sysadmin/openstack/designate/designate/service.py, line 209,
in _wsgi_get_socket
'port': self._service_config.api_port})
RuntimeError: Could not bind to 0.0.0.0:9001 after trying for 30 seconds
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup [-] Could not
bind to 0.0.0.0:9001 after trying for 30 seconds
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup Traceback (most
recent call last):
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
line 154, in wait
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup x.wait()
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
line 51, in wait
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup return
self.thread.wait()
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
line 175, in wait
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup return
self._exit_event.wait()
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/event.py,
line 121, in wait
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup return
hubs.get_hub().switch()
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/hub.py,
line 294, in switch
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup return
self.greenlet.switch()
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
line 214, in main
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup result =
function(*args, **kwargs)
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/service.py,
line 623, in run_service
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup
service.start()
2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
/home/sysadmin/openstack/designate/designate/service.py, line 173, in
start
2015-07-20 14:43:55.221 727 ERROR 

Re: [openstack-dev] [designate] The designate API service is stopped

2015-07-20 Thread Jaime Fernández
Hi Tim,

I only start one api process. In fact, when I say that the api process
dies, there is no designate-api process left and nothing listening on
port 9001.

When I started all the designate processes, the API worked correctly,
because I had tested it. But after some period of inactivity (a couple of
hours, or a day), the designate-api process died. It is not possible that
the process was restarted during this time.

I've just started the process again and now it works. I will check if it
dies again and report it.


Thanks

On Mon, Jul 20, 2015 at 4:24 PM, Tim Simmons tim.simm...@rackspace.com
wrote:

  Jaime,


  Usually that's the error you see if you're trying to start up multiple
 API processes. They all try and bind to port 9001, so that error is saying
 the API can't bind. So something else (I suspect another designate-api
 process, or some other type of API) is already listening on that port.


  Hope that helps,

 Tim Simmons


  --
 *From:* Jaime Fernández jjja...@gmail.com
 *Sent:* Monday, July 20, 2015 8:54 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [designate] The designate API service is
 stopped

I've followed instructions to install Designate in Dev environment:
 http://docs.openstack.org/developer/designate/install/ubuntu-dev.html

  I've made some slight modifications to use qpid (instead of rabbitmq) and
 to integrate with Infoblox.

  What I've seen is that the designate-api process dies (the other processes
 keep running correctly). I'm not sure if the problem could be a network
 issue between designate-api and qpid.

  Here is the output with the last traces of the designate-api process:

 2015-07-20 14:43:37.728 727 DEBUG qpid.messaging.io.raw [-] READ[3f383f8]:
 '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' readable
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:411
 2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-] RCVD[3f383f8]:
 ConnectionHeartbeat() write
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:651
 2015-07-20 14:43:37.729 727 DEBUG qpid.messaging.io.ops [-] SENT[3f383f8]:
 ConnectionHeartbeat() write_op
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:683
 2015-07-20 14:43:37.730 727 DEBUG qpid.messaging.io.raw [-] SENT[3f383f8]:
 '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' writeable
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/qpid/messaging/driver.py:475
 Traceback (most recent call last):
   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/hub.py,
 line 457, in fire_timers
 timer()
   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/hubs/timer.py,
 line 58, in __call__
 cb(*args, **kw)
   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
 line 214, in main
 result = function(*args, **kwargs)
   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/service.py,
 line 623, in run_service
 service.start()
   File /home/sysadmin/openstack/designate/designate/service.py, line
 173, in start
 socket = self._wsgi_get_socket()
   File /home/sysadmin/openstack/designate/designate/service.py, line
 209, in _wsgi_get_socket
 'port': self._service_config.api_port})
 RuntimeError: Could not bind to 0.0.0.0:9001 after trying for 30 seconds
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup [-] Could not
 bind to 0.0.0.0:9001 after trying for 30 seconds
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup Traceback (most
 recent call last):
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
 line 154, in wait
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup x.wait()
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/oslo.service-0.4.0-py2.7.egg/oslo_service/threadgroup.py,
 line 51, in wait
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup return
 self.thread.wait()
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
 /home/sysadmin/openstack/designate/.venv/local/lib/python2.7/site-packages/eventlet-0.17.4-py2.7.egg/eventlet/greenthread.py,
 line 175, in wait
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup return
 self._exit_event.wait()
 2015-07-20 14:43:55.221 727 ERROR oslo_service.threadgroup   File
 /home/sysadmin

Re: [openstack-dev] Error starting designate (DNSaaS)

2015-07-06 Thread Jaime Fernández
Finally, I managed to install and start designate after moving to a trusty
Ubuntu image:
https://vagrantcloud.com/ubuntu/boxes/trusty64

I did not succeed with precise64 (also Ubuntu):
http://files.vagrantup.com/precise64.box

On Thu, Jul 2, 2015 at 6:36 PM, Jaime Fernández jjja...@gmail.com wrote:

 Thanks Tim for the info.
 I've tried installing designate using the recommended guide (
 http://docs.openstack.org/developer/designate/install/ubuntu-dev.html) in
 a Vagrant VM with Ubuntu (precise64 image).
 I've hit a problem in the same step; however, the error is now different:

 $ designate-manage database sync
 No handlers could be found for logger oslo_config.cfg
 usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug]
  [--log-config-append PATH] [--log-date-format DATE_FORMAT]
  [--log-dir LOG_DIR] [--log-file PATH] [--log-format
 FORMAT]
  [--nouse-syslog] [--nouse-syslog-rfc-format] [--noverbose]
  [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-syslog]
  [--use-syslog-rfc-format] [--verbose] [--version]
 [--nodebug]
  {powerdns} ...
 designate: error: argument category: invalid choice: 'database' (choose
 from 'powerdns')

 I've tried with the master branch and even with the stable/kilo branch,
 with the same result.

 I've also noticed that the master branch requires a manual installation of
 SQLAlchemy to avoid a version conflict:
 pip install SQLAlchemy==0.9.9

 I asked in #openstack-dns today and it looks like a dependency problem.
 However, all the dependencies were installed successfully. It is too hard
 for me to investigate the root cause of the problem. Tomorrow I'll try to
 pursue this issue again on IRC.



Re: [openstack-dev] Error starting designate (DNSaaS)

2015-07-02 Thread Jaime Fernández
Thanks Tim for the info.
I've tried installing designate using the recommended guide (
http://docs.openstack.org/developer/designate/install/ubuntu-dev.html) in a
Vagrant VM with Ubuntu (precise64 image).
I've hit a problem in the same step; however, the error is now different:

$ designate-manage database sync
No handlers could be found for logger oslo_config.cfg
usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug]
 [--log-config-append PATH] [--log-date-format DATE_FORMAT]
 [--log-dir LOG_DIR] [--log-file PATH] [--log-format FORMAT]
 [--nouse-syslog] [--nouse-syslog-rfc-format] [--noverbose]
 [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-syslog]
 [--use-syslog-rfc-format] [--verbose] [--version]
[--nodebug]
 {powerdns} ...
designate: error: argument category: invalid choice: 'database' (choose
from 'powerdns')

I've tried with the master branch and even with the stable/kilo branch, with
the same result.

I've also noticed that the master branch requires a manual installation of
SQLAlchemy to avoid a version conflict:
pip install SQLAlchemy==0.9.9

I asked in #openstack-dns today and it looks like a dependency problem.
However, all the dependencies were installed successfully. It is too hard for
me to investigate the root cause of the problem. Tomorrow I'll try to pursue
this issue again on IRC.
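
In case it helps anyone hitting the same error: designate-manage appears to
build its command categories from entry points, so a quick check like the
sketch below shows which command plugins are actually registered in the
virtualenv (the 'designate.manage' entry point group name is my assumption):

# List the designate-manage command plugins visible to pkg_resources.
# If only 'powerdns' shows up, the 'database' commands never got registered,
# which would explain the "invalid choice: 'database'" error.
# The entry point group name 'designate.manage' is an assumption.
import pkg_resources

for ep in pkg_resources.iter_entry_points('designate.manage'):
    print('%s -> %s' % (ep.name, ep.module_name))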


[openstack-dev] Error starting designate (DNSaaS)

2015-07-01 Thread Jaime Fernández
I've followed the instructions at
http://designate.readthedocs.org/en/latest/getting-started.html#development-environment
to build and launch designate in a RedHat 6.5 VM.

I was able to install it (with some minor changes), but when trying to start
designate, the first command failed:

designate-manage database sync

In the first execution, designate raises this error:

2015-06-30 12:04:08.874 26704 INFO migrate.versioning.api
[designate-manage - - - - -] 46 -> 47...
2015-06-30 12:04:09.456 26704 CRITICAL designate [designate-manage - - - - -]
OperationalError: (OperationalError) no such column: reverse_name
u'UPDATE recordsets SET reverse_name=reverse(recordsets.name)' ()

If I re-execute the command, the error changes slightly, but it fails in the
same step (46 -> 47):

2015-06-30 12:04:23.359 26715 INFO migrate.versioning.api
[designate-manage - - - - -] 46 -> 47...
2015-06-30 12:04:23.365 26715 CRITICAL designate [designate-manage - - - - -]
OperationalError: (OperationalError) duplicate column name: reverse_name
u"\nALTER TABLE domains ADD reverse_name VARCHAR(255) DEFAULT ''" ()

It looks like a problem in the database scripts. Any fix for this?


Today I installed devstack in an Ubuntu VM, with designate support, and it
does work. Apparently both are using the same source code for designate.
Could it be related to the operating system?


Re: [openstack-dev] Error starting designate (DNSaaS)

2015-07-01 Thread Jaime Fernández
Just to test designate, as a first step I installed everything in a single
virtual machine.
The config file is exactly the same as the one provided by
http://designate.readthedocs.org/en/latest/getting-started.html#development-environment
(except for the state_path property, set to /var/lib/designate). It looks
like this file configures a sqlite database. Perhaps tomorrow I will try
with MySQL instead of sqlite, but I also tried another official guide that
uses bind and mysql and it failed at the same step. So I'm not sure whether
it is some kind of conflict with RHEL 6.5 or whether the latest SQL scripts
have a bug.
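
If it helps to narrow things down, a minimal schema check against the sqlite
file (sketch below; the database path is just an example, adjust it to your
state_path) shows whether reverse_name actually exists on the recordsets and
domains tables after the failed 46 -> 47 migration:

# Inspect the sqlite schema that the failed migration left behind.
# The database path below is an example; adjust it to your state_path.
import sqlite3

conn = sqlite3.connect('/var/lib/designate/designate.sqlite')
for table in ('recordsets', 'domains'):
    cols = [row[1] for row in conn.execute('PRAGMA table_info(%s)' % table)]
    print('%s has reverse_name: %s' % (table, 'reverse_name' in cols))
conn.close()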