Re: [openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-01-23 Thread Ilya Pekelny
On Fri, Jan 23, 2015 at 12:46 PM, ozamiatin  wrote:

> IMHO It should be created once per Reactor/Client or even per driver
> instance.
>

Per driver, sounds good.


>
> By the way (I didn't check it yet with the current implementation of the
> driver) such an approach could break IPC, because these kinds of sockets
> must be produced from the same context.
>

Please check it; it looks like a potential bug.
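For what it's worth, the constraint is easy to demonstrate with plain pyzmq: in ZeroMQ, sockets that talk over an inproc:// endpoint must be created from the same zmq.Context, which is one more argument for holding a single context per driver instance. A minimal sketch (assuming pyzmq is available; the Driver class and all names are illustrative, not the actual oslo.messaging code):

```python
import zmq


class Driver:
    """Illustrative driver object holding one shared zmq.Context."""

    def __init__(self):
        # Created once per driver instance, reused by every socket.
        self.context = zmq.Context()

    def pair(self, endpoint="inproc://demo"):
        # inproc:// endpoints only work between sockets created from the
        # same Context, so sharing the context is mandatory here.
        receiver = self.context.socket(zmq.PULL)
        receiver.bind(endpoint)
        sender = self.context.socket(zmq.PUSH)
        sender.connect(endpoint)
        return sender, receiver


driver = Driver()
sender, receiver = driver.pair()
sender.send(b"ping")
msg = receiver.recv()
print(msg)  # b'ping'
```

If each socket created its own Context, the bind and connect above would land in different contexts and the inproc pair would never see each other's messages.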
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ZeroMQ topic object.

2014-12-26 Thread Ilya Pekelny
Hi, all!

I unexpectedly ran into a pretty significant issue while solving a small bug,
https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1367614. The
problem has several parts:

* Topics are used for several purposes: to set subscriptions and to determine
the socket type.
* Topics are plain strings that are modified inline wherever needed, so topic
handling is very scattered and uncoordinated.

My issue with the bug was: "it is impossible to simply hash the topic
somewhere without crashing the whole driver". The second part of the issue
is: "it is a very painful process to trace all the topic modifications, which
are spread throughout the driver code".

After several attempts to fix the bug "with small losses" I concluded that
I need to create a single control/entry point for the topic string. I now
have a proposal:

Blueprint —
https://blueprints.launchpad.net/oslo.messaging/+spec/zmq-topic-object
Spec — https://review.openstack.org/#/c/144149/
Patch — https://review.openstack.org/#/c/144120/

I want to discuss this feature and receive feedback from more experienced
RPC Jedi.

Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Oslo.messaging error

2014-12-10 Thread Ilya Pekelny
Please provide the RabbitMQ logs from the controller and your oslo.messaging
version. Are you using the upstream oslo.messaging? It looks like the
well-known heartbeat bug.

On Wed, Dec 10, 2014 at 1:45 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Raghavendra Lad,
>
> looks like the Murano services can't connect to the RabbitMQ server.
> Could you please share the configuration parameters for RabbitMQ  from
> ./etc/murano/murano.conf ?
>
>
> On Wed, Dec 10, 2014 at 10:55 AM,  wrote:
>
>>
>>
>>
>>
>> Hi Team,
>>
>>
>>
>> I am installing Murano on an Ubuntu 14.04 Juno setup, and when I try to
>> install murano-api as below I encounter the error that follows. Please assist.
>>
>>
>>
>> When I try to install
>>
>>
>>
>> I am using the Murano guide link provided below:
>>
>> https://murano.readthedocs.org/en/latest/install/manual.html
>>
>>
>>
>>
>>
>> I am trying to execute the section 7
>>
>>
>>
>> 1.Open a new console and launch Murano API. A separate terminal is
>> required because the console will be locked by a running process.
>>
>> 2. $ cd ~/murano/murano
>>
>> 3. $ tox -e venv -- murano-api \
>>
>> 4. > --config-file ./etc/murano/murano.conf
>>
>>
>>
>>
>>
>> I am getting the error below. I have a Juno OpenStack ready and am trying
>> to integrate Murano
>>
>>
>>
>>
>>
>> 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-]
>> neutron.endpoint_type  = publicURL log_opt_values
>> /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048
>>
>> 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-]
>> neutron.insecure   = False log_opt_values
>> /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048
>>
>> 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-]
>> log_opt_values
>> /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050
>>
>> 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-]
>> Connecting to AMQP server on controller:5672
>>
>> 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-]
>> Connecting to AMQP server on controller:5672
>>
>> 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting
>> up on http://0.0.0.0:8082/
>>
>> 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating
>> statistic information. update_stats
>> /home/ubuntu/murano/murano/murano/common/statservice.py:57
>>
>> 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats
>> object: <...ction object at 0x7fada950a510> update_stats
>> /home/ubuntu/murano/murano/murano/common/statservice.py:58
>>
>> 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats:
>> Requests:0  Errors: 0 Ave.Res.Time 0.
>>
>> Per tenant: {} update_stats
>> /home/ubuntu/murano/murano/murano/common/statservice.py:64
>>
>> 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL
>> server mode set to
>> STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
>> _check_effective_sql_mode
>> /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509
>>
>> 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit
>> [-] AMQP server controller:5672 closed the connection. Check
>> login credentials: Socket closed
>>
>> 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit
>> [-] AMQP server controller:5672 closed the connection. Check
>> login credentials: Socket closed
>>
>> 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit
>> [-] AMQP server controller:5672 closed the connection. Check
>> login credentials: Socket closed
>>
>> 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit
>> [-] AMQP server controller:5672 closed the connection. Check
>> login credentials: Socket closed
>>
>>
>>
>>
>>
>> Warm Regards,
>>
>> *Raghavendra Lad*
>>
>>
>>

Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-18 Thread Ilya Pekelny
Thank you, Li Ma!

Yes, sure, I'm going to send a more detailed mail with a solid list of
propositions. I need to update my knowledge before I provide it.

On Tue, Nov 18, 2014 at 10:40 AM, Li Ma  wrote:

>  On 2014/11/17 18:44, Ilya Pekelny wrote:
>
> Hi, all!
>
>  We want to discuss the opportunity to implement a peer-to-peer messaging
> model in oslo.messaging for the ZeroMQ driver. The current architecture
> uses a single-broker model that is uncharacteristic of ZeroMQ. In this way
> we are ignoring the key 0MQ ideas. Let's describe our message in quotes
> from the ZeroMQ documentation:
>
>
>- ZeroMQ has the core technical goals of simplicity and scalability,
>the core social goal of gathering together the best and brightest minds in
>distributed computing to build real, lasting solutions, and the political
>goal of breaking the old hegemony of centralization, as represented by most
>existing messaging systems prior to ZeroMQ.
> - The ZeroMQ Message Transport Protocol (ZMTP) is a transport layer
>protocol for exchanging messages between two peers over a connected
>transport layer such as TCP.
>- The two peers agree on the version and security mechanism of the
>connection by sending each other data and either continuing the discussion,
>or closing the connection.
>- The two peers handshake the security mechanism by exchanging zero or
>more commands. If the security handshake is successful, the peers continue
>the discussion, otherwise one or both peers closes the connection.
>- Each peer then sends the other metadata about the connection as a
>final command. The peers may check the metadata and each peer decides
>either to continue, or to close the connection.
>- Each peer is then able to send the other messages. Either peer may
>at any moment close the connection.
>
> From the current code docstring:
>
>  ZmqBaseReactor(ConsumerBase):
> """A consumer class implementing a centralized casting broker
> (PULL-PUSH).
>
> Hi, Ilya, thanks for raising this topic. Inline you discussed the ZeroMQ
> nature, but I still cannot find any direction on how to refactor or
> redesign the ZeroMQ driver for oslo.messaging. :-< Could you provide more
> details about how you think of it?
>
>   This approach is pretty unusual for ZeroMQ. Fortunately, we have a bit
> of raw development work around the problem. These changes could bring a
> performance improvement, but to prove it we need to implement all the new
> features, at least at WIP status. So I need to be sure that the community
> will not reject such improvements.
>
> For community work, AFAIK, we'd first set up CI for ZeroMQ. After
> that, we can work together on how to improve the performance, reliability
> and scalability of the ZeroMQ driver.
>
> cheers,
> Li Ma
>


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-18 Thread Ilya Pekelny
Thank you, Eric, for your descriptions!

I'm very new to RPC and oslo.messaging, so I may be wrong about how it is
all designed and wrong in particular formulations. But I'm highly motivated
to improve my RPC/oslo.messaging knowledge. I'm going to work through all
your descriptions and come back with updated propositions.

Thank you, Russell, for your proposition!

I'll look into AMQP 1.0 as soon as possible. I'm not sure I can put it all
to work, but I will certainly have an understanding of AMQP 1.0 in the near
future.

On Mon, Nov 17, 2014 at 5:03 PM, Eric Windisch  wrote:

>
>
> On Mon, Nov 17, 2014 at 5:44 AM, Ilya Pekelny 
> wrote:
>
>> We want to discuss the opportunity to implement a peer-to-peer messaging
>> model in oslo.messaging for the ZeroMQ driver. The current architecture
>> uses a single-broker model that is uncharacteristic of ZeroMQ. In this way
>> we are ignoring the key 0MQ ideas. Let's describe our message in quotes
>> from the ZeroMQ documentation:
>>
>>
> The oslo.messaging driver is not using a single broker. It is designed for
> a distributed broker model where each host runs a broker. I'm not sure
> where the confusion comes from that implies this is a single-broker model?
>
> All of the points you make around negotiation and security are new
> concepts introduced after the initial design and implementation of the
> ZeroMQ driver. It certainly makes sense to investigate what new features
> are available in ZeroMQ (such as CurveCP) and to see how they might be
> leveraged.
>
> That said, quite a bit of trial-and-error and research went into deciding
> to use an opposing PUSH-PULL mechanism instead of REQ/REP. Most notably,
> it's much easier to make PUSH/PULL reliable than REQ/REP.
>
>
>> From the current code docstring:
>> ZmqBaseReactor(ConsumerBase):
>> """A consumer class implementing a centralized casting broker
>> (PULL-PUSH).
>>
>> This approach is pretty unusual for ZeroMQ. Fortunately, we have a bit of
>> raw development work around the problem. These changes could bring a
>> performance improvement, but to prove it we need to implement all the new
>> features, at least at WIP status. So I need to be sure that the community
>> will not reject such improvements.
>>
>
> Again, the design implemented expects a broker running per machine (the
> zmq-receiver process). Each machine might have multiple workers all pulling
> messages from queues. Initially, the driver was designed such that each
> topic was mapped to its own ip:port, but this was not friendly to having
> arbitrary consumers of the library and required a port mapping file be
> distributed with the application. Plus, it's valid to have multiple
> consumers of a topic on a given host, something that is only possible with
> a distributed broker.
>
> As I left the driver, long review queues prevented me from merging a pile
> of changes to improve performance and increase reliability. I believe the
> architecture is still sound, even if much of the code itself is bad. What
> this driver needs is major cleanup, refactoring, and better testing.
>
> Regards,
> Eric Windisch
>
>


Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit

2014-11-17 Thread Ilya Pekelny
 Flavio Percoco wrote:

> Still, I'd like us to learn from
> previous experiences and have a better plan for this driver (and
> future cases like this one).


Hi, all!

As one of the newly joined ZeroMQ maintainers, I have a growing plan for
ZeroMQ refactoring and development. At the most abstract level, our plan is
to remove the single broker and implement a peer-to-peer model in the
messaging driver. There is a blueprint with this goal:
https://blueprints.launchpad.net/oslo.messaging/+spec/reduce-central-broker.
I maintain a patch and a spec that I inherited from Aleksey Kornienko. For
now this blueprint is the first step in the planning process. I believe we
can split this big piece of work into a set of specs and, if needed, into
several related blueprints. With these specs and BPs our plan should become
obvious. I wrote a mail to the dev mailing list with a short overview of the
coming work.

Please feel free to discuss it all with me and correct me along this big road.

On Mon, Nov 17, 2014 at 4:45 PM, Doug Hellmann 
wrote:

> Thanks, Josh, I’ll subscribe to the issue to keep up to date.
>
> On Nov 16, 2014, at 6:58 PM, Joshua Harlow  wrote:
>
> > I started the following issue on kombu's github page (to see if there is
> any interest on their side in such an effort):
> >
> > https://github.com/celery/kombu/issues/430
> >
> > It's about seeing if the kombu folks would be ok with an 'rpc' subfolder
> in their repository that can start to contain the 'rpc'-like functionality
> that now exists in oslo.messaging (I don't see why they would be against
> this kind of idea, since it seems to make sense IMHO).
> >
> > Let's see what happens,
> >
> > -Josh
> >
> > Doug Hellmann wrote:
> >>
> >> On Nov 13, 2014, at 7:02 PM, Joshua Harlow wrote:
> >>
> >>> Don't forget my executor which isn't dependent on a larger set of
> >>> changes for asyncio/trollious...
> >>>
> >>> https://review.openstack.org/#/c/70914/
> >>>
> >>> The above will/should just 'work', although I'm unsure what thread
> >>> count should be by default (the number of green threads that is set at
> >>> like 200 shouldn't be the same number used in that executor which uses
> >>> real python/system threads). The neat thing about that executor is
> >>> that it can also replace the eventlet one, since when eventlet is
> >>> monkey patching the threading module (which it should be) then it
> >>> should behave just as the existing eventlet one; which IMHO is pretty
> >>> cool (and could be one way to completely remove the eventlet usage in
> >>> oslo.messaging).
> >>
> >> Good point, thanks for reminding me.
> >>
> >>>
> >>> As for the kombu discussions, maybe its time to jump on the #celery
> >>> channel (where the kombu folks hang out) and start talking to them
> >>> about how we can work better together to move some of our features
> >>> into kombu (and also depreciate/remove some of the oslo.messaging
> >>> features that now are in kombu). I believe
> >>> https://launchpad.net/~asksol is the main guy there (and also the main
> >>> maintainer of celery/kombu?). It'd be nice to have these
> >>> cross-community talks and at least come up with some kind of game
> >>> plan; hopefully one that benefits both communities…
> >>
> >> I would like that, but won’t have time to do it myself this cycle. Maybe
> >> we can find another volunteer from the team?
> >>
> >> Doug
> >>
> >>>
> >>> -Josh
> >>>
> >>> 
> >>>
> 
> >>> *From:* Doug Hellmann
> >>> *To:* OpenStack Development Mailing List (not for usage questions)
> >>> *Sent:* Wednesday, November 12, 2014 12:22 PM
> >>> *Subject:* [openstack-dev] [oslo] oslo.messaging outcome from the
> summit
> >>>
> >>> The oslo.messaging session at the summit [1] resulted in some plans to
> >>> evolve how oslo.messaging works, but probably not during this cycle.
> >>>
> >>> First, we talked about what to do about the various drivers like
> >>> ZeroMQ and the new AMQP 1.0 driver. We decided that rather than moving
> >>> those out of the main tree and packaging them separately, we would
> >>> keep them all in the main repository to encourage the driver authors
> >>> to help out with the core library (oslo.messaging is a critical
> >>> component of OpenStack, and we’ve lost several of our core reviewers
> >>> for the library to other priorities recently).
> >>>
> >>> There is a new set of contributors interested in maintaining the
> >>> ZeroMQ driver, and they are going to work together to review each
> >>> other’s patches. We will re-evaluate keeping ZeroMQ at the end of
> >>> Kilo, based on how things go this cycle.
> >>>
> >>> We also talked about the fact that the new version of Kombu includes
> >>> some of the features we have implemented in our own driver, like
> >>> heartbeats and connection management. Kombu does not inc

[openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-17 Thread Ilya Pekelny
Hi, all!

We want to discuss the opportunity to implement a peer-to-peer messaging
model in oslo.messaging for the ZeroMQ driver. The current architecture
uses a single-broker model that is uncharacteristic of ZeroMQ. In this way
we are ignoring the key 0MQ ideas. Let's describe our message in quotes from
the ZeroMQ documentation:


   - ZeroMQ has the core technical goals of simplicity and scalability, the
   core social goal of gathering together the best and brightest minds in
   distributed computing to build real, lasting solutions, and the political
   goal of breaking the old hegemony of centralization, as represented by most
   existing messaging systems prior to ZeroMQ.
   - The ZeroMQ Message Transport Protocol (ZMTP) is a transport layer
   protocol for exchanging messages between two peers over a connected
   transport layer such as TCP.
   - The two peers agree on the version and security mechanism of the
   connection by sending each other data and either continuing the discussion,
   or closing the connection.
   - The two peers handshake the security mechanism by exchanging zero or
   more commands. If the security handshake is successful, the peers continue
   the discussion, otherwise one or both peers closes the connection.
   - Each peer then sends the other metadata about the connection as a
   final command. The peers may check the metadata and each peer decides
   either to continue, or to close the connection.
   - Each peer is then able to send the other messages. Either peer may at
   any moment close the connection.

From the current code docstring:

ZmqBaseReactor(ConsumerBase):
"""A consumer class implementing a centralized casting broker
(PULL-PUSH).

This approach is pretty unusual for ZeroMQ. Fortunately, we have a bit of
raw development work around the problem. These changes could bring a
performance improvement, but to prove it we need to implement all the new
features, at least at WIP status. So I need to be sure that the community
will not reject such improvements.
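For reference, the centralized casting pattern the docstring above describes can be sketched like this (assuming pyzmq; endpoints, names, and the single-message broker are illustrative, not the actual zmq-receiver code): producers PUSH into the broker, and the broker PULLs and re-PUSHes to consumers.

```python
import threading

import zmq

ctx = zmq.Context()

# The "broker" binds both sides; peers connect to it.
frontend = ctx.socket(zmq.PULL)   # producers connect here
frontend.bind("inproc://frontend")
backend = ctx.socket(zmq.PUSH)    # consumers connect here
backend.bind("inproc://backend")


def broker():
    # Forward a single message from producers to consumers.
    backend.send(frontend.recv())


producer = ctx.socket(zmq.PUSH)
producer.connect("inproc://frontend")
consumer = ctx.socket(zmq.PULL)
consumer.connect("inproc://backend")

t = threading.Thread(target=broker)
t.start()
producer.send(b"cast")
msg = consumer.recv()
t.join()
print(msg)  # b'cast'
```

Every message takes an extra hop through the forwarder, which is exactly what a peer-to-peer model would avoid by letting producer and consumer connect directly.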

Regards, Ilya, Oleksii.